DEFF Research Database (Denmark)
Hong, Jinglan; Shaked, Shanna; Rosenbaum, Ralph K.;
2010-01-01
Background, aim, and scope Uncertainty information is essential for the proper use of Life Cycle Assessment (LCA) and environmental assessments in decision making. So far, parameter uncertainty propagation has mainly been studied using Monte Carlo techniques that are relatively computationally...... approach to the comparison of two or more LCA scenarios. Since in LCA it is crucial to account for both common inventory processes and common impact assessment characterization factors among the different scenarios, we further develop the approach to address this dependency. We provide a method to easily...... tested cases, we obtained a good concordance between the Monte Carlo and the Taylor series expansion methods regarding the probability that one scenario is better than the other. Discussion The Taylor series expansion method addresses the crucial need of accounting for dependencies in LCA, both for...
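The first-order (Taylor series) treatment of a two-scenario comparison with shared parameters can be sketched numerically. The toy scenario scores, all parameter values, and the shared-parameter structure below are illustrative assumptions, not the paper's case studies:

```python
import math
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical comparison of two LCA scenario scores sharing a common
# inventory parameter p (the model and all numbers are illustrative):
#   score_A = p * a,  score_B = p * b
n = 100_000
p = rng.normal(2.0, 0.3, n)
a = rng.normal(1.0, 0.1, n)
b = rng.normal(1.2, 0.1, n)

# Monte Carlo estimate of P(A < B), i.e. the probability A is better
p_mc = float(np.mean(p * a < p * b))

# First-order Taylor propagation of the log-ratio r = ln(A/B) = ln a - ln b.
# Treating the shared parameter explicitly makes p cancel exactly, so its
# uncertainty does not inflate the scenario comparison.
r_mu = math.log(1.0 / 1.2)
r_sd = math.hypot(0.1 / 1.0, 0.1 / 1.2)      # first-order sd of ln a, ln b
p_taylor = 0.5 * math.erfc(r_mu / (r_sd * math.sqrt(2.0)))  # P(r < 0)

print(round(p_mc, 3), round(p_taylor, 3))
```

With these assumed inputs, the cheap Taylor estimate of the "scenario A is better" probability agrees closely with the Monte Carlo one, which is the concordance the abstract reports.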
Sciacchitano, Andrea; Wieneke, Bernhard
2016-08-01
This paper discusses the propagation of the instantaneous uncertainty of PIV measurements to statistical and instantaneous quantities of interest derived from the velocity field. The expression of the uncertainty of vorticity, velocity divergence, mean value and Reynolds stresses is derived. It is shown that the uncertainty of vorticity and velocity divergence requires the knowledge of the spatial correlation between the error of the x and y particle image displacement, which depends upon the measurement spatial resolution. The uncertainty of statistical quantities is often dominated by the random uncertainty due to the finite sample size and decreases with the square root of the effective number of independent samples. Monte Carlo simulations are conducted to assess the accuracy of the uncertainty propagation formulae. Furthermore, three experimental assessments are carried out. In the first experiment, a turntable is used to simulate a rigid rotation flow field. The estimated uncertainty of the vorticity is compared with the actual vorticity error root-mean-square, with differences between the two quantities within 5–10% for different interrogation window sizes and overlap factors. A turbulent jet flow is investigated in the second experimental assessment. The reference velocity, which is used to compute the reference value of the instantaneous flow properties of interest, is obtained with an auxiliary PIV system, which features a higher dynamic range than the measurement system. Finally, the uncertainty quantification of statistical quantities is assessed via PIV measurements in a cavity flow. The comparison between estimated uncertainty and actual error demonstrates the accuracy of the proposed uncertainty propagation methodology.
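The stated square-root dependence on the effective number of independent samples can be illustrated with a correlated synthetic signal; the AR(1) surrogate and its correlation coefficient are assumptions chosen for illustration, not PIV data:

```python
import numpy as np

rng = np.random.default_rng(1)

# The random uncertainty of a time-averaged quantity decreases with the
# square root of the number of *independent* samples. For an AR(1) surrogate
# of a correlated measurement series (rho is an illustrative choice), the
# effective sample size is N_eff ~ N * (1 - rho) / (1 + rho).
rho, N, M, sigma = 0.6, 2000, 2000, 1.0

x = np.empty((M, N))
x[:, 0] = rng.normal(0.0, sigma, M)
innov = rng.normal(0.0, sigma * np.sqrt(1.0 - rho**2), (M, N))
for t in range(1, N):
    x[:, t] = rho * x[:, t - 1] + innov[:, t]

n_eff = N * (1.0 - rho) / (1.0 + rho)
u_pred = sigma / np.sqrt(n_eff)          # predicted uncertainty of the mean
u_emp = x.mean(axis=1).std(ddof=1)       # observed spread of the mean

print(round(float(u_pred), 4), round(float(u_emp), 4))
```

The predicted uncertainty of the mean, using the effective rather than the raw sample count, matches the spread of the mean observed over many realizations.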
Uncertainty propagation in nuclear forensics
International Nuclear Information System (INIS)
Uncertainty propagation formulae are presented for age dating in support of nuclear forensics. The age of radioactive material in this context refers to the time elapsed since a particular radionuclide was chemically separated from its decay product(s). The decay of the parent radionuclide and ingrowth of the daughter nuclide are governed by statistical decay laws. Mathematical equations allow calculation of the age of specific nuclear material through the atom ratio between parent and daughter nuclides, or through the activity ratio provided that the daughter nuclide is also unstable. The derivation of the uncertainty formulae of the age may present some difficulty to the user community and so the exact solutions, some approximations, a graphical representation and their interpretation are presented in this work. Typical nuclides of interest are actinides in the context of non-proliferation commitments. The uncertainty analysis is applied to a set of important parent–daughter pairs and the need for more precise half-life data is examined. - Highlights: • Uncertainty propagation formulae for age dating with nuclear chronometers. • Applied to parent–daughter pairs used in nuclear forensics. • Investigated need for better half-life data
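A hedged sketch of such a chronometer calculation, under the simplifying assumption of an effectively stable daughter and with illustrative input uncertainties (not evaluated nuclear data):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical chronometer with a daughter that is effectively stable on the
# timescale of interest (e.g. 239Pu -> 235U); all numbers are illustrative.
#   R = N_daughter / N_parent = exp(lam * t) - 1   =>   t = ln(1 + R) / lam
half_life, u_half = 24110.0, 30.0            # parent half-life, years
lam = np.log(2.0) / half_life
u_lam = lam * u_half / half_life             # propagated to the decay constant

R, u_R = 8.63e-4, 8.63e-6                    # measured atom ratio, 1% rel. unc.

t = np.log1p(R) / lam                        # age in years
# First-order propagation:  dt/dR = 1/(lam*(1+R)),  dt/dlam = -t/lam
u_t = np.hypot(u_R / (lam * (1.0 + R)), t * u_lam / lam)

# Monte Carlo cross-check of the analytical formula
ages = np.log1p(rng.normal(R, u_R, 200_000)) / rng.normal(lam, u_lam, 200_000)
u_mc = float(np.std(ages))

print(round(float(t), 2), round(float(u_t), 3), round(u_mc, 3))
```

The half-life term contributes through dt/dlam = -t/lam, which is why the abstract stresses the need for more precise half-life data as ages grow.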
Uncertainty Propagation in an Ecosystem Nutrient Budget.
New aspects and advancements in classical uncertainty propagation methods were used to develop a nutrient budget with associated error for a northern Gulf of Mexico coastal embayment. Uncertainty was calculated for budget terms by propagating the standard error and degrees of fr...
Stochastic and epistemic uncertainty propagation in LCA
DEFF Research Database (Denmark)
Clavreul, Julie; Guyonnet, Dominique; Tonini, Davide;
2013-01-01
When performing uncertainty propagation, most LCA practitioners choose to represent uncertainties by single probability distributions and to propagate them using stochastic methods. However, the selection of single probability distributions appears often arbitrary when faced with scarce information...... information is rich, then a purely statistical representation mode is adequate, but if the information is scarce, then it may be better conveyed by possibility distributions....
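One way to propagate a possibilistic representation, sketched here with assumed triangular distributions and a toy model f(x, y) = x * y, is interval arithmetic on nested alpha-cuts:

```python
import numpy as np

# Possibility distributions encode scarce information as nested intervals
# (alpha-cuts). Two triangular possibilistic inputs (min, mode, max values
# are illustrative) are propagated through f(x, y) = x * y by interval
# arithmetic at each membership level alpha.
def tri_cut(lo, mode, hi, alpha):
    """Alpha-cut [a, b] of a triangular possibility distribution."""
    return lo + alpha * (mode - lo), hi - alpha * (hi - mode)

alphas = np.linspace(0.0, 1.0, 11)
cuts = []
for alpha in alphas:
    x_lo, x_hi = tri_cut(2.0, 3.0, 5.0, alpha)
    y_lo, y_hi = tri_cut(0.5, 1.0, 1.5, alpha)
    # all inputs positive, so the product interval is [lo*lo, hi*hi]
    cuts.append((x_lo * y_lo, x_hi * y_hi))

print(cuts[0])    # alpha = 0: the widest (most conservative) interval
print(cuts[-1])   # alpha = 1: the "core" of the output possibility
```

Unlike a single sampled probability distribution, the output is a family of intervals that honestly reflects how little is known at each confidence level.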
The Role of Uncertainty, Awareness, and Trust in Visual Analytics.
Sacha, Dominik; Senaratne, Hansi; Kwon, Bum Chul; Ellis, Geoffrey; Keim, Daniel A
2016-01-01
Visual analytics supports humans in generating knowledge from large and often complex datasets. Evidence is collected, collated and cross-linked with our existing knowledge. In the process, a myriad of analytical and visualisation techniques are employed to generate a visual representation of the data. These often introduce their own uncertainties, in addition to those inherent in the data, and these propagated and compounded uncertainties can result in impaired decision making. The user's confidence or trust in the results depends on the extent of the user's awareness of the underlying uncertainties generated on the system side. This paper unpacks the uncertainties that propagate through visual analytics systems, illustrates how human perceptual and cognitive biases influence the user's awareness of such uncertainties, and shows how this affects the user's trust building. The knowledge generation model for visual analytics is used to provide a terminology and framework in which to discuss the consequences of these aspects for knowledge construction, and through examples, machine uncertainty is compared to human trust measures with provenance. Furthermore, guidelines for the design of uncertainty-aware systems are presented that can aid the user in better decision making. PMID:26529704
Towards a complete propagation of uncertainties in depletion calculations
Energy Technology Data Exchange (ETDEWEB)
Martinez, J.S. [Universidad Politecnica de Madrid (Spain). Dept. of Nuclear Engineering; Gesellschaft fuer Anlagen- und Reaktorsicherheit mbH (GRS), Garching (Germany); Zwermann, W.; Gallner, L.; Puente-Espel, Federico; Velkov, K.; Hannstein, V. [Gesellschaft fuer Anlagen- und Reaktorsicherheit mbH (GRS), Garching (Germany); Cabellos, O. [Universidad Politecnica de Madrid (Spain). Dept. of Nuclear Engineering
2013-07-01
Propagation of nuclear data uncertainties to calculated values is of interest for design purposes and library evaluation. XSUSA, developed at GRS, propagates cross section uncertainties to nuclear calculations. In depletion simulations, fission yields and decay data are also involved and are a possible source of uncertainty that should be taken into account. We have developed tools to generate varied fission yield and decay libraries and to propagate uncertainties through depletion in order to complete the XSUSA uncertainty assessment capabilities. A generic test to probe the methodology is defined and discussed. (orig.)
Probabilistic Methods for Uncertainty Propagation Applied to Aircraft Design
Green, Lawrence L.; Lin, Hong-Zong; Khalessi, Mohammad R.
2002-01-01
Three methods of probabilistic uncertainty propagation and quantification (the method of moments, Monte Carlo simulation, and a nongradient simulation search method) are applied to an aircraft analysis and conceptual design program to demonstrate design under uncertainty. The chosen example problems appear to have discontinuous design spaces and thus these examples pose difficulties for many popular methods of uncertainty propagation and quantification. However, specific implementation features of the first and third methods chosen for use in this study enable successful propagation of small uncertainties through the program. Input uncertainties in two configuration design variables are considered. Uncertainties in aircraft weight are computed. The effects of specifying required levels of constraint satisfaction with specified levels of input uncertainty are also demonstrated. The results show, as expected, that the designs under uncertainty are typically heavier and more conservative than those in which no input uncertainties exist.
Quantifying uncertainty in nuclear analytical measurements
International Nuclear Information System (INIS)
The lack of international consensus on the expression of uncertainty in measurements was recognised by the late 1970s and led, after the issuance of a series of rather generic recommendations, to the publication of a general guide, known as GUM, the Guide to the Expression of Uncertainty in Measurement. This publication, issued in 1993, was based on co-operation over several years by the Bureau International des Poids et Mesures, the International Electrotechnical Commission, the International Federation of Clinical Chemistry, the International Organization for Standardization (ISO), the International Union of Pure and Applied Chemistry, the International Union of Pure and Applied Physics and the Organisation internationale de metrologie legale. The purpose was to promote full information on how uncertainty statements are arrived at and to provide a basis for harmonized reporting and the international comparison of measurement results. The need to provide more specific guidance to different measurement disciplines was soon recognized, and the field of analytical chemistry was addressed by EURACHEM in 1995 in the first edition of a guidance report on Quantifying Uncertainty in Analytical Measurements, produced by a group of experts from the field. That publication translated the general concepts of the GUM into specific applications for analytical laboratories and illustrated the principles with a series of selected examples as a didactic tool. Based on feedback from actual practice, the EURACHEM publication was extensively reviewed in 1997-1999 under the auspices of the Co-operation on International Traceability in Analytical Chemistry (CITAC), and a second edition was published in 2000. Still, except for a single example on the measurement of radioactivity in GUM, the field of nuclear and radiochemical measurements was not covered. The explicit requirement of ISO standard 17025:1999, General Requirements for the Competence of Testing and Calibration
Uncertainty propagation in fault trees using a quantile arithmetic methodology
International Nuclear Information System (INIS)
A methodology based on Quantile Arithmetic, the probabilistic analog to Interval Analysis (Dempster 1969), is proposed for the computation of uncertainty propagation in Fault Tree Analysis (Apostolakis 1977). The basic events' continuous probability density functions are represented by equivalent discrete distributions through dividing them into a number of quantiles N. Quantile Arithmetic is then used to perform the binary arithmetical operations corresponding to the logical gates in the Boolean expression for the Top Event of a given Fault Tree. The computational characteristics of the proposed methodology as compared with the exact analytical solutions are discussed for the cases of the summation of M normal variables. It is further compared with the Monte Carlo method through the use of the efficiency ratio defined as the product of the labor and error ratios. (orig./HP)
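The quantile discretisation and gate-by-gate combination can be sketched as follows. The basic-event distributions, quantile count, gate structure, and the re-quantiling step are illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np

rng = np.random.default_rng(3)

def quantize(samples, n=25):
    """Represent a distribution by n equally probable quantile points."""
    return np.quantile(samples, (np.arange(n) + 0.5) / n)

def combine(pa, pb, gate):
    """Combine two quantile-discretised, independent event probabilities."""
    ga, gb = np.meshgrid(pa, pb)
    vals = ga * gb if gate == "AND" else 1.0 - (1.0 - ga) * (1.0 - gb)
    return quantize(vals.ravel(), len(pa))

# Two hypothetical basic events with lognormal probability uncertainty
p1 = quantize(rng.lognormal(np.log(1e-3), 0.5, 50_000))
p2 = quantize(rng.lognormal(np.log(2e-3), 0.5, 50_000))

top = combine(p1, p2, "OR")               # top event = basic event 1 OR 2
median = float(np.quantile(top, 0.5))
print(round(median, 5))
```

Each gate operates on small discrete distributions rather than fresh random samples, which is where the labor advantage over Monte Carlo comes from.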
Remaining Useful Life Estimation in Prognosis: An Uncertainty Propagation Problem
Sankararaman, Shankar; Goebel, Kai
2013-01-01
The estimation of remaining useful life is significant in the context of prognostics and health monitoring, and the prediction of remaining useful life is essential for online operations and decision-making. However, it is challenging to accurately predict the remaining useful life in practical aerospace applications due to the presence of various uncertainties that affect prognostic calculations, and in turn, render the remaining useful life prediction uncertain. It is challenging to identify and characterize the various sources of uncertainty in prognosis, understand how each of these sources of uncertainty affect the uncertainty in the remaining useful life prediction, and thereby compute the overall uncertainty in the remaining useful life prediction. In order to achieve these goals, this paper proposes that the task of estimating the remaining useful life must be approached as an uncertainty propagation problem. In this context, uncertainty propagation methods which are available in the literature are reviewed, and their applicability to prognostics and health monitoring are discussed.
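Framing RUL estimation as uncertainty propagation can be sketched with a deliberately simple degradation model (linear damage growth with uncertain state and rate; all values are hypothetical, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(4)

# Deliberately simple sketch: linear damage growth x(t) = x0 + r * t with
# uncertain current state x0 and rate r; failure occurs at x = threshold,
# so RUL = (threshold - x0) / r. All values are hypothetical.
n = 100_000
threshold = 10.0
x0 = rng.normal(4.0, 0.5, n)                    # uncertain current damage
r = rng.lognormal(np.log(0.2), 0.2, n)          # uncertain degradation rate

rul = (threshold - x0) / r
lo, med, hi = np.quantile(rul, [0.05, 0.5, 0.95])
print(round(float(med), 1), round(float(lo), 1), round(float(hi), 1))
```

The output is a full RUL distribution rather than a point prediction, so downstream decisions can be taken at a chosen risk level.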
Navacerrada Saturio, Maria Angeles; Díaz Sanchidrián, César; Pedrero González, Antonio; Iglesias Martínez, Luis
2008-01-01
The new Spanish Regulation in Building Acoustic establishes values and limits for the different acoustic magnitudes whose fulfillment can be verify by means field measurements. In this sense, an essential aspect of a field measurement is to give the measured magnitude and the uncertainty associated to such a magnitude. In the calculus of the uncertainty it is very usual to follow the uncertainty propagation method as described in the Guide to the expression of Uncertainty in Measurements (GUM...
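A minimal GUM-style propagation for a field-measured acoustic quantity might look as follows; the measurand model and all numbers are illustrative assumptions, not values from the Spanish regulation:

```python
import math

# GUM-style combined standard uncertainty for a standardized level
# difference DnT = L1 - L2 + 10*log10(T / 0.5) (symbols and values are
# illustrative): u_c^2 is the sum of squared sensitivity-weighted inputs.
L1, u_L1 = 75.0, 0.5     # dB, source-room level
L2, u_L2 = 45.0, 0.5     # dB, receiving-room level
T,  u_T  = 0.6, 0.05     # s, reverberation time

DnT = L1 - L2 + 10.0 * math.log10(T / 0.5)
c_T = 10.0 / (T * math.log(10.0))        # sensitivity coefficient dDnT/dT
u_c = math.sqrt(u_L1**2 + u_L2**2 + (c_T * u_T)**2)

print(round(DnT, 2), round(u_c, 2))
```

Reporting the measured magnitude together with u_c (or an expanded uncertainty k*u_c) is exactly the "value plus uncertainty" statement the regulation requires.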
Uncertainties in Atomic Data and Their Propagation Through Spectral Models. I.
Bautista, M. A.; Fivet, V.; Quinet, P.; Dunn, J.; Gull, T. R.; Kallman, T. R.; Mendoza, C.
2013-01-01
We present a method for computing uncertainties in spectral models, i.e., level populations, line emissivities, and emission line ratios, based upon the propagation of uncertainties originating from atomic data. We provide analytic expressions, in the form of linear sets of algebraic equations, for the coupled uncertainties among all levels. These equations can be solved efficiently for any set of physical conditions and uncertainties in the atomic data. We illustrate our method applied to spectral models of O III and Fe II and discuss the impact of the uncertainties on atomic systems under different physical conditions. As to intrinsic uncertainties in theoretical atomic data, we propose that these uncertainties can be estimated from the dispersion in the results from various independent calculations. This technique provides excellent results for the uncertainties in A-values of forbidden transitions in [Fe II]. Key words: atomic data - atomic processes - line: formation - methods: data analysis - molecular data - molecular processes - techniques: spectroscopic
UNCERTAINTIES IN ATOMIC DATA AND THEIR PROPAGATION THROUGH SPECTRAL MODELS. I
Energy Technology Data Exchange (ETDEWEB)
Bautista, M. A.; Fivet, V. [Department of Physics, Western Michigan University, Kalamazoo, MI 49008 (United States); Quinet, P. [Astrophysique et Spectroscopie, Universite de Mons-UMONS, B-7000 Mons (Belgium); Dunn, J. [Physical Science Department, Georgia Perimeter College, Dunwoody, GA 30338 (United States); Gull, T. R. [Code 667, NASA Goddard Space Flight Center, Greenbelt, MD 20771 (United States); Kallman, T. R. [Code 662, NASA Goddard Space Flight Center, Greenbelt, MD 20771 (United States); Mendoza, C., E-mail: manuel.bautista@wmich.edu [Centro de Fisica, Instituto Venezolano de Investigaciones Cientificas (IVIC), P.O. Box 20632, Caracas 1020A (Venezuela, Bolivarian Republic of)
2013-06-10
We present a method for computing uncertainties in spectral models, i.e., level populations, line emissivities, and emission line ratios, based upon the propagation of uncertainties originating from atomic data. We provide analytic expressions, in the form of linear sets of algebraic equations, for the coupled uncertainties among all levels. These equations can be solved efficiently for any set of physical conditions and uncertainties in the atomic data. We illustrate our method applied to spectral models of O III and Fe II and discuss the impact of the uncertainties on atomic systems under different physical conditions. As to intrinsic uncertainties in theoretical atomic data, we propose that these uncertainties can be estimated from the dispersion in the results from various independent calculations. This technique provides excellent results for the uncertainties in A-values of forbidden transitions in [Fe II].
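The dependence of the propagated uncertainty on physical conditions can be illustrated with a toy two-level system (the rates below are invented, not real atomic data): in the radiatively dominated regime, a 10% A-value uncertainty barely moves the line emissivity.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy two-level system (invented rates, not real atomic data): collisional
# excitation C12, collisional de-excitation C21, radiative decay A21.
# Steady state: n2/n1 = C12 / (A21 + C21); line emissivity ~ n2 * A21.
C12, C21 = 1.0e-2, 5.0e-3      # s^-1, assumed exactly known here
A21, rel_u_A = 1.0, 0.10       # s^-1, with a 10% uncertainty

def emissivity(a):
    ratio = C12 / (a + C21)            # n2 / n1
    n2 = ratio / (1.0 + ratio)         # normalised so n1 + n2 = 1
    return n2 * a

# Monte Carlo propagation of the A-value uncertainty to the emissivity.
# In this radiatively dominated regime (A21 >> C21) nearly every excitation
# radiates, so the emissivity is almost insensitive to A21.
j = emissivity(rng.normal(A21, rel_u_A * A21, 100_000))
rel = float(j.std() / j.mean())
print(round(rel, 4))               # far below the 10% input uncertainty
```

In a collision-dominated regime the same 10% input would propagate almost fully, which is the kind of condition dependence the paper's coupled linear equations quantify for many levels at once.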
Uncertainty propagation in locally damped dynamic systems
Cortes Mochales, Lluis; Ferguson, Neil S.; Bhaskar, Atul
2012-01-01
In the field of stochastic structural dynamics, perturbation methods are widely used to estimate the response statistics of uncertain systems. When large built up systems are to be modelled in the mid-frequency range, perturbation methods are often combined with finite element model reduction techniques in order to considerably reduce the computation time of the response. Existing methods based on Component Mode Synthesis(CMS) allow the uncertainties in the system parameters to be treated ...
International Nuclear Information System (INIS)
The uncertainty in the redshift distributions of galaxies has a significant potential impact on the cosmological parameter values inferred from multi-band imaging surveys. The accuracy of the photometric redshifts measured in these surveys depends not only on the quality of the flux data, but also on a number of modeling assumptions that enter into both the training-set and spectral energy distribution (SED) fitting methods of photometric redshift estimation. In this work we focus on the latter, considering two types of modeling uncertainties: uncertainties in the SED template set and uncertainties in the magnitude and type priors used in a Bayesian photometric redshift estimation method. We find that SED template selection effects dominate over magnitude prior errors. We introduce a method for parameterizing the resulting ignorance of the redshift distributions, and for propagating these uncertainties to uncertainties in cosmological parameters.
New challenges on uncertainty propagation assessment of flood risk analysis
Martins, Luciano; Aroca-Jiménez, Estefanía; Bodoque, José M.; Díez-Herrero, Andrés
2016-04-01
Natural hazards such as floods cause considerable damage to human life and to material and functional assets every year around the world. Risk assessment procedures carry a set of uncertainties of two main types: natural, derived from the stochastic character inherent in flood process dynamics; and epistemic, associated with lack of knowledge or with inadequate procedures employed in the study of these processes. There is an abundant scientific and technical literature on uncertainty estimation in each step of flood risk analysis (e.g. rainfall estimates, hydraulic modelling variables), but very little experience with the propagation of uncertainties along the full flood risk assessment. Epistemic uncertainties are therefore the main goal of this work: in particular, understanding the extent to which uncertainties propagate throughout the process, from inundability studies to risk analysis, and how much a proper flood risk analysis may vary as a result. Methodologies such as Polynomial Chaos Theory (PCT), the Method of Moments or Monte Carlo are used to evaluate different sources of error, such as data records (precipitation gauges, flow gauges...), hydrologic and hydraulic modelling (inundation estimation) and socio-demographic data (damage estimation), in order to evaluate the uncertainty propagation (UP) in design flood risk estimation, in both numerical and cartographic expression. In order to account for the total uncertainty and understand which factors contribute most to the final uncertainty, we used Polynomial Chaos Theory (PCT). It represents an interesting way to handle the inclusion of uncertainty in the modelling and simulation process: PCT allows the development of a probabilistic model of the system in a deterministic setting, by using random variables and polynomials to handle the effects of uncertainty. Results of applying the method are more robust than those of traditional analyses.
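A minimal non-intrusive polynomial chaos computation, unrelated to the flood models themselves, shows how output moments are read directly off the expansion coefficients. The lognormal test function is an assumption chosen because its exact moments are known:

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

# Minimal polynomial chaos sketch: expand Y = exp(sigma * Xi), Xi ~ N(0, 1),
# in probabilists' Hermite polynomials He_k and read the output mean and
# variance off the expansion coefficients (sigma and order are illustrative).
sigma, order = 0.4, 8

nodes, weights = He.hermegauss(30)       # quadrature for weight exp(-x^2/2)
weights = weights / weights.sum()        # normalise to a probability measure

coeffs = []
for k in range(order + 1):
    Hk = He.hermeval(nodes, [0.0] * k + [1.0])   # He_k at the nodes
    # orthogonality: E[He_j He_k] = k! * delta_jk, so c_k = E[Y He_k] / k!
    coeffs.append(np.sum(weights * np.exp(sigma * nodes) * Hk)
                  / math.factorial(k))

mean_pce = coeffs[0]
var_pce = sum(math.factorial(k) * coeffs[k] ** 2 for k in range(1, order + 1))

# exact lognormal moments for comparison
mean_exact = math.exp(sigma**2 / 2)
var_exact = (math.exp(sigma**2) - 1) * math.exp(sigma**2)
print(round(mean_pce, 4), round(mean_exact, 4))
print(round(var_pce, 4), round(var_exact, 4))
```

The squared higher-order coefficients decompose the variance by contribution, which is how PCT identifies the factors contributing most to the final uncertainty.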
Quantile arithmetic methodology for uncertainty propagation in fault trees
International Nuclear Information System (INIS)
A methodology based on quantile arithmetic, the probabilistic analog to interval analysis, is proposed for the computation of uncertainty propagation in fault tree analysis. The basic events' continuous probability density functions (pdf's) are represented by equivalent discrete distributions by dividing them into a number of quantiles N. Quantile arithmetic is then used to perform the binary arithmetical operations corresponding to the logical gates in the Boolean expression of the top event of a given fault tree. The computational advantage of the present methodology as compared with the widely used Monte Carlo method was demonstrated for the case of summation of M normal variables through the efficiency ratio, defined as the product of the labor and error ratios. The efficiency ratio values obtained by the suggested methodology for M = 2 were 2279 for N = 5, 445 for N = 25, and 66 for N = 45 when compared with the results for 19,200 Monte Carlo samples at the 40th percentile point. Another advantage of the approach is that the exact analytical value of the median is always obtained for the top event
Estimation and propagation of uncertainties associated with paleomagnetic directions
Heslop, David; Roberts, Andrew P.
2016-04-01
Principal component analysis (PCA) is a well-established technique in paleomagnetism and provides a means to estimate magnetic remanence directions from univectorial segments of stepwise demagnetization data. Derived directions constrain past geomagnetic field behavior and form the foundation of chronological and tectonic reconstructions. PCA of isolated remanence segments relies on estimates of the segment mean and covariance matrix, which can carry large uncertainties given the relatively small number of demagnetization data points used to characterize individual specimens. Traditional PCA does not, however, lend itself to quantification of these uncertainties, and inferences drawn from paleomagnetic reconstructions suffer from an inability to propagate uncertainties from individual specimens to higher levels, such as in calculations of paleomagnetic site mean directions and pole positions. In this study, we employ a probabilistic reformulation of PCA that represents the unknowns involved in the data fitting process as probability density functions. Such probability density functions represent our state of knowledge about the unknowns in the fitting process and provide a tractable framework with which to rigorously quantify uncertainties associated with remanence directions estimated from demagnetization data. These uncertainties can be propagated readily through each step of a paleomagnetic reconstruction to enable quantification of uncertainties for all stages of the data interpretation sequence, removing the need for arbitrary selection/rejection criteria at the specimen level. Rigorous uncertainty determination helps to protect against spurious inferences being drawn from uncertain data.
HPC Analytics Support. Requirements for Uncertainty Quantification Benchmarks
Energy Technology Data Exchange (ETDEWEB)
Paulson, Patrick R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Purohit, Sumit [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Rodriguez, Luke R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)
2015-05-01
This report outlines techniques for extending benchmark generation products so that they support uncertainty quantification by benchmarked systems. We describe how uncertainty quantification requirements can be presented to candidate analytical tools supporting SPARQL. We describe benchmark data sets for evaluating uncertainty quantification, as well as an approach for using our benchmark generator to produce such data sets.
Investigation of Free Particle Propagator with Generalized Uncertainty Problem
Ghobakhloo, F
2016-01-01
We consider the Schrodinger equation with a generalized uncertainty principle for a free particle. We then transform the problem into a second-order ordinary differential equation and thereby obtain the corresponding propagator. The result of ordinary quantum mechanics is recovered in the limit of vanishing minimal length parameter.
Uncertainty propagation from raw data to final results
International Nuclear Information System (INIS)
Reduction of data from raw numbers (counts per channel) to physically meaningful quantities (such as cross sections) is in itself a complicated procedure. Propagation of experimental uncertainties through that reduction process has sometimes been perceived as even more difficult, if not impossible. At the Oak Ridge Electron Linear Accelerator, a computer code ALEX has been developed to assist in the propagation process. The purpose of ALEX is to carefully and correctly propagate all experimental uncertainties through the entire reduction procedure, yielding the complete covariance matrix for the reduced data, while requiring little additional input from the experimentalist beyond that which is needed for the data reduction itself. The theoretical method used in ALEX is described, with emphasis on transmission measurements. Application to the natural iron and natural nickel measurements of D.C. Larson is shown
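The kind of reduction-with-covariance bookkeeping described here can be sketched on a toy two-channel transmission reduction with a shared background; this is an illustration in the spirit of such a code, not ALEX itself:

```python
import numpy as np

# Toy reduction in the spirit of a covariance-propagating code such as ALEX
# (illustrative, not ALEX itself): two channels of a transmission measurement
# share a common background b, T_i = (s_i - b) / (o_i - b), so the reduced
# transmissions become correlated through b.
s = np.array([8.0e4, 6.0e4])          # sample-in counts per channel
o = np.array([1.0e5, 1.1e5])          # open-beam counts per channel
b, u_b = 5.0e3, 1.0e3                 # shared background and its uncertainty

# Input covariance: independent Poisson counting (var = counts) + background
C_in = np.diag([s[0], s[1], o[0], o[1], u_b**2])

def reduce_counts(x):
    s1, s2, o1, o2, bb = x
    return np.array([(s1 - bb) / (o1 - bb), (s2 - bb) / (o2 - bb)])

x0 = np.array([s[0], s[1], o[0], o[1], b])

# Numerical Jacobian of the reduction (2 outputs x 5 inputs)
J = np.empty((2, 5))
for j in range(5):
    dx = np.zeros(5)
    dx[j] = 1e-3 * abs(x0[j])
    J[:, j] = (reduce_counts(x0 + dx) - reduce_counts(x0 - dx)) / (2 * dx[j])

T_red = reduce_counts(x0)
C_out = J @ C_in @ J.T                # full covariance of the reduced data
corr = C_out[0, 1] / np.sqrt(C_out[0, 0] * C_out[1, 1])
print(np.round(T_red, 4), round(float(corr), 3))
```

The nonzero off-diagonal element is the point: shared inputs make reduced data correlated, and only the full covariance matrix, not per-point error bars, captures that.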
Uncertainty propagation in a cascade modelling approach to flood mapping
Directory of Open Access Journals (Sweden)
J. P. Rodríguez-Rincón, A. Pedrozo-Acuña, J. A. Breña Naranjo
2014-07-01
The purpose of this investigation is to study the propagation of meteorological uncertainty within a cascade modelling approach to flood mapping. The methodology comprises a Numerical Weather Prediction (NWP) model, a distributed rainfall-runoff model and a standard 2-D hydrodynamic model. The cascade of models is used to reproduce an extreme flood event that took place in the Southeast of Mexico during November 2009. The event is selected as high-quality field data (e.g. rain gauges, discharge) and satellite imagery are available. Uncertainty in the meteorological model (Weather Research and Forecasting model) is evaluated through the use of a multi-physics ensemble technique, which considers twelve parameterization schemes to determine a given precipitation. The resulting precipitation fields are used as input to a distributed hydrological model, enabling the determination of the different hydrographs associated with this event. Lastly, by means of a standard 2-D hydrodynamic model, the hydrographs are used as forcing conditions to study the propagation of the meteorological uncertainty to an estimated flooded area. Results show the utility of the selected modelling approach for investigating error propagation within a cascade of models. Moreover, the error associated with the determination of the runoff is shown to be lower than that obtained in the precipitation estimation, suggesting that uncertainty does not necessarily increase within a model cascade.
On analytic formulas of Feynman propagators in position space
Institute of Scientific and Technical Information of China (English)
ZHANG Hong-Hao; FENG Kai-Xi; QIU Si-Wei; ZHAO An; LI Xue-Song
2010-01-01
We correct an inaccurate result of previous work on the Feynman propagator in position space of a free Dirac field in (3+1)-dimensional spacetime; we derive the generalized analytic formulas of both the scalar Feynman propagator and the spinor Feynman propagator in position space in arbitrary (D+1)-dimensional spacetime; and we further find a recurrence relation among the spinor Feynman propagator in (D+1)-dimensional spacetime and the scalar Feynman propagators in (D+1)-, (D-1)- and (D+3)-dimensional spacetimes.
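For reference, the kind of position-space formula this abstract generalizes can be sketched; the following is the standard textbook form of the massive scalar Feynman propagator in $d = D+1$ spacetime dimensions (conventions differ with metric signature, so take this as a sketch rather than the paper's exact result):

```latex
\Delta_F(x) \;=\; \frac{1}{(2\pi)^{d/2}}
\left(\frac{m}{\sqrt{-x^2 + i\epsilon}}\right)^{(d-2)/2}
K_{(d-2)/2}\!\left(m\sqrt{-x^2 + i\epsilon}\right),
\qquad d = D+1,
```

where $K_\nu$ is a modified Bessel function. The spinor propagator then follows as $S_F(x) = (i\gamma^\mu\partial_\mu + m)\,\Delta_F(x)$; since derivatives shift the Bessel order, such relations naturally connect propagators in neighbouring dimensions, which is plausibly the origin of the recurrence the abstract mentions.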
Uncertainty propagation within an integrated model of climate change
International Nuclear Information System (INIS)
This paper demonstrates a methodology whereby stochastic dynamical systems are used to investigate a climate model's inherent capacity to propagate uncertainty over time. The usefulness of the methodology stems from its ability to identify the variables that account for most of the model's uncertainty. We accomplish this by reformulating a deterministic dynamical system capturing the structure of an integrated climate model into a stochastic dynamical system. Then, via the use of computational techniques of stochastic differential equations, accurate uncertainty estimates of the model's variables are determined. The uncertainty is measured in terms of properties of probability distributions of the state variables. The starting characteristics of the uncertainty of the initial state and the random fluctuations are derived from estimates given in the literature. Two aspects of uncertainty are investigated: (1) the dependence on environmental scenario, which is determined by technological development and actions towards environmental protection; and (2) the dependence on the magnitude of the initial state measurement error, determined by the progress of climate change and the total magnitude of the system's random fluctuations as well as by our understanding of the climate system. Uncertainty of most of the system's variables is found to be nearly independent of the environmental scenario for the time period under consideration (1990-2100). Even conservative uncertainty estimates result in scenario overlap of several decades during which the consequences of any actions affecting the environment could be very difficult to identify with a sufficient degree of confidence. This fact may have fundamental consequences on the level of social acceptance of any restrictive measures against accelerating global warming. In general, the stochastic fluctuations contribute more to the uncertainty than the initial state measurements. The variables coupling all major climate elements
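The reformulation the abstract describes — deterministic dynamics plus random fluctuations, integrated as a stochastic differential equation — can be sketched with an Euler–Maruyama Monte Carlo run. The scalar mean-reverting drift and noise level below are invented for illustration, not the paper's integrated climate model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy scalar state: dX = -a*(X - eq)*dt + sigma*dW (an Ornstein-Uhlenbeck
# process); parameters are illustrative assumptions only.
a, eq, sigma = 0.1, 1.0, 0.05
dt, steps, paths = 0.1, 1000, 5000

x = rng.normal(0.0, 0.2, size=paths)   # uncertain initial state
for _ in range(steps):
    x += -a * (x - eq) * dt + sigma * np.sqrt(dt) * rng.normal(size=paths)

# The spread of the ensemble measures the propagated uncertainty; for an OU
# process it settles near sigma / sqrt(2a) regardless of the initial spread.
print(round(float(x.mean()), 3), round(float(x.std()), 3))
```

Note how the long-run spread is set by the fluctuation strength rather than the initial measurement error, mirroring the abstract's conclusion that stochastic fluctuations dominate the uncertainty.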
Semi-analytical solution for soliton propagation in colloidal suspension
Directory of Open Access Journals (Sweden)
Senthilkumar Selvaraj
2013-04-01
Full Text Available We consider the propagation of a soliton in a colloidal nano-suspension. We derive the semi-analytical solution for soliton propagation in colloidal nano-suspensions for both one and two spatial dimensions using the variational method, which employs both an averaged Lagrangian and suitable trial functions. Finally, we analyse the Rayleigh scattering loss in soliton propagation through colloidal nano-suspensions.
Propagation of Uncertainty in Rigid Body Attitude Flows
Lee, Taeyoung; Chaturvedi, Nalin A.; Sanyal, Amit K.; Leok, Melvin; McClamroch, N. Harris
2007-01-01
Motivated by attitude control and attitude estimation problems for a rigid body, computational methods are proposed to propagate uncertainties in the angular velocity and the attitude. The nonlinear attitude flow is determined by Euler–Poincaré equations that describe the rotational dynamics of the rigid body acting under the influence of an attitude dependent potential and by a reconstruction equation that describes the kinematics expressed in terms of an orthogonal matrix representing the...
Uncertainty propagation in a cascade modelling approach to flood mapping
Rodríguez-Rincón, J. P.; Pedrozo-Acuña, A.; J. A. Breña Naranjo
2014-01-01
The purpose of this investigation is to study the propagation of meteorological uncertainty within a cascade modelling approach to flood mapping. The methodology is comprised of a Numerical Weather Prediction Model (NWP), a distributed rainfall–runoff model and a standard 2-D hydrodynamic model. The cascade of models is used to reproduce an extreme flood event that took place in the Southeast of Mexico, during November 2009. The event is selected as high quality field data...
Analysis of uncertainty propagation in nuclear fuel cycle scenarios
International Nuclear Information System (INIS)
Nuclear scenario studies model the nuclear fleet over a given period. They enable the comparison of different options for the evolution of the reactor fleet and for the management of future fuel cycle materials, from mining to disposal, based on criteria such as installed capacity per reactor technology and mass inventories and flows, in the fuel cycle and in the waste. Uncertainties associated with nuclear data and scenario parameters (fuel, reactor and facility characteristics) propagate along the isotopic chains in depletion calculations, and throughout the scenario history, which reduces the precision of the results. The aim of this work is to develop, implement and use a stochastic uncertainty propagation methodology adapted to scenario studies. The chosen method is based on the development of surrogate models of the depletion computation, which reduce the computation time of scenario studies and whose parameters include perturbations of the depletion model, and on the fabrication of an equivalence model which takes cross-section perturbations into account for the computation of fresh fuel enrichment. The uncertainty propagation methodology is then applied to different scenarios of interest, considering different evolution options for the French PWR fleet with SFR deployment. (author)
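The surrogate idea in this abstract — replace an expensive depletion calculation with a cheap fitted model, then run the Monte Carlo on the surrogate — can be sketched as follows. The exponential "depletion code" and the 2% nuclear-data uncertainty are invented stand-ins, not values from the work:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical depletion response: an end-of-cycle isotope mass as a function
# of a cross-section perturbation factor f (toy exponential, not a real code).
def depletion_code(f):
    return 100.0 * np.exp(0.5 * (f - 1.0))

# Build a cheap quadratic surrogate from a handful of "expensive" runs
train = np.linspace(0.9, 1.1, 7)
surrogate = np.poly1d(np.polyfit(train, depletion_code(train), deg=2))

# Propagate an assumed 2% (1-sigma) nuclear-data uncertainty via the surrogate
f = rng.normal(1.0, 0.02, size=100_000)
mass = surrogate(f)
print(round(float(mass.mean()), 2), round(float(mass.std()), 2))
```

Seven code runs build the surrogate; the hundred thousand Monte Carlo samples then cost essentially nothing, which is the computational payoff the abstract describes.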
Uncertainty and Sensitivity Analyses of Duct Propagation Models
Nark, Douglas M.; Watson, Willie R.; Jones, Michael G.
2008-01-01
This paper presents results of uncertainty and sensitivity analyses conducted to assess the relative merits of three duct propagation codes. Results from this study are intended to support identification of a "working envelope" within which to use the various approaches underlying these propagation codes. This investigation considers a segmented liner configuration that models the NASA Langley Grazing Incidence Tube, for which a large set of measured data was available. For the uncertainty analysis, the selected input parameters (source sound pressure level, average Mach number, liner impedance, exit impedance, static pressure and static temperature) are randomly varied over a range of values. Uncertainty limits (95% confidence levels) are computed for the predicted values from each code, and are compared with the corresponding 95% confidence intervals in the measured data. Generally, the mean values of the predicted attenuation are observed to track the mean values of the measured attenuation quite well and predicted confidence intervals tend to be larger in the presence of mean flow. A two-level, six factor sensitivity study is also conducted in which the six inputs are varied one at a time to assess their effect on the predicted attenuation. As expected, the results demonstrate the liner resistance and reactance to be the most important input parameters. They also indicate the exit impedance is a significant contributor to uncertainty in the predicted attenuation.
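The two-level, six-factor, one-at-a-time design mentioned in this abstract has a simple mechanical form: vary each input between a low and a high level with the others at baseline, and rank the inputs by the change in the output. The attenuation function and the factor levels below are invented for illustration; only the input names follow the abstract:

```python
import numpy as np

# Illustrative stand-in for a duct propagation code: predicted attenuation (dB)
# as a made-up function of the six inputs named in the abstract.
def attenuation(spl, mach, liner_r, liner_x, exit_z, temp):
    return 30 + 0.01 * spl - 2 * mach + 8 / liner_r - 4 * liner_x \
           + 0.5 * exit_z - 0.01 * temp

base = dict(spl=130, mach=0.3, liner_r=1.0, liner_x=1.0, exit_z=1.0, temp=300)
levels = {k: (0.8 * v, 1.2 * v) for k, v in base.items()}  # two levels: +/-20%

# One-at-a-time: effect = |f(high) - f(low)| with all other inputs at baseline
effects = {}
for k, (lo, hi) in levels.items():
    effects[k] = abs(attenuation(**{**base, k: hi}) - attenuation(**{**base, k: lo}))

for k, e in sorted(effects.items(), key=lambda kv: -kv[1]):
    print(k, round(e, 2))
```

With these made-up coefficients the liner resistance and reactance dominate the ranking, matching the qualitative conclusion of the study.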
Pulse propagation in tapered granular chains: An analytic study
Harbola, Upendra; Rosas, Alexandre; Esposito, Massimiliano; Lindenberg, Katja
2009-01-01
We study pulse propagation in one-dimensional tapered chains of spherical granules. Analytic results for the pulse velocity and other pulse features are obtained using a binary collision approximation. Comparisons with numerical results show that the binary collision approximation provides quantitatively accurate analytic results for these chains.
Uncertainty propagation in probabilistic safety analysis of nuclear power plants
International Nuclear Information System (INIS)
The uncertainty propagation in probabilistic safety analysis of nuclear power plants is studied. The minimal cut set methodology is implemented in the computer code SVALON, and the results for several cases are compared with corresponding results obtained with the SAMPLE code, which employs the Monte Carlo method to propagate the uncertainties. The results show that, for a relatively small number of dominant minimal cut sets (n approximately 25) and error factors (r approximately 5), the SVALON code yields results which are comparable to those obtained with SAMPLE. An analysis of the unavailability of the low pressure recirculation system of Angra 1 for both the short and long term recirculation phases is presented. The results for the short term phase are in good agreement with the corresponding ones given in WASH-1400. (E.G.)
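The Monte Carlo side of the comparison in this abstract can be sketched directly: sample lognormal basic-event probabilities (an error factor r is conventionally the ratio of the 95th percentile to the median), combine them through the minimal cut sets, and read off percentiles of the top-event unavailability. All numerical values below are invented:

```python
import numpy as np

rng = np.random.default_rng(3)

# ~25 dominant minimal cut sets of 3 basic events each; each event probability
# is lognormal with an assumed median and error factor r = 5, so that
# sigma = ln(r) / 1.645 (95th percentile / median ratio).
n_cuts, n_events, r = 25, 3, 5.0
sigma = np.log(r) / 1.645
medians = rng.uniform(1e-4, 1e-3, size=(n_cuts, n_events))

samples = 50_000
p = medians * rng.lognormal(0.0, sigma, size=(samples, n_cuts, n_events))

# Rare-event approximation: top-event probability ~ sum of cut-set products
top = p.prod(axis=2).sum(axis=1)

print(f"median {np.median(top):.2e}  95th percentile {np.percentile(top, 95):.2e}")
```

The skewed gap between the median and the 95th percentile is exactly the kind of uncertainty band that the analytic SVALON approach is validated against.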
Analytic solution for the propagation velocity in superconducting composites
International Nuclear Information System (INIS)
The propagation velocity of normal zones in composite superconductors has been calculated analytically for the case of constant thermophysical properties, including the effects of current sharing. The solution is compared with that of a more elementary theory in which current sharing is neglected, i.e., in which there is a sharp transition from the superconducting to the normal state. The solution is also compared with experiment. This comparison demonstrates the important influence of transient heat transfer on the propagation velocity
Uncertainty Quantification and Propagation in Nuclear Density Functional Theory
Energy Technology Data Exchange (ETDEWEB)
Schunck, N; McDonnell, J D; Higdon, D; Sarich, J; Wild, S M
2015-03-17
Nuclear density functional theory (DFT) is one of the main theoretical tools used to study the properties of heavy and superheavy elements, or to describe the structure of nuclei far from stability. While on-going efforts seek to better root nuclear DFT in the theory of nuclear forces, energy functionals remain semi-phenomenological constructions that depend on a set of parameters adjusted to experimental data in finite nuclei. In this paper, we review recent efforts to quantify the related uncertainties, and propagate them to model predictions. In particular, we cover the topics of parameter estimation for inverse problems, statistical analysis of model uncertainties and Bayesian inference methods. Illustrative examples are taken from the literature.
Uncertainty quantification and propagation in nuclear density functional theory
International Nuclear Information System (INIS)
Nuclear density functional theory (DFT) is one of the main theoretical tools used to study the properties of heavy and superheavy elements, or to describe the structure of nuclei far from stability. While on-going efforts seek to better root nuclear DFT in the theory of nuclear forces (see Duguet et al., this Topical Issue), energy functionals remain semi-phenomenological constructions that depend on a set of parameters adjusted to experimental data in finite nuclei. In this paper, we review recent efforts to quantify the related uncertainties, and propagate them to model predictions. In particular, we cover the topics of parameter estimation for inverse problems, statistical analysis of model uncertainties and Bayesian inference methods. Illustrative examples are taken from the literature. (orig.)
Propagation of radar rainfall uncertainty in urban flood simulations
Liguori, Sara; Rico-Ramirez, Miguel
2013-04-01
This work discusses the results of the implementation of a novel probabilistic system designed to improve ensemble sewer flow predictions for the drainage network of a small urban area in the North of England. The probabilistic system has been developed to model the uncertainty associated with radar rainfall estimates and propagate it through radar-based ensemble sewer flow predictions. The assessment of this system aims at outlining the benefits of addressing the uncertainty associated with radar rainfall estimates in a probabilistic framework, to be potentially implemented in the real-time management of the sewer network in the study area. Radar rainfall estimates are affected by uncertainty due to various factors [1-3], and quality control and correction techniques have been developed in order to improve their accuracy. However, the hydrological use of radar rainfall estimates and forecasts remains challenging. A significant effort has been devoted by the international research community to the assessment of uncertainty propagation through probabilistic hydro-meteorological forecast systems [4-5], and various approaches have been implemented for the purpose of characterizing the uncertainty in radar rainfall estimates and forecasts [6-11]. A radar-based ensemble stochastic approach, similar to the one implemented for use in the Southern Alps by the REAL system [6], has been developed for the purpose of this work. An ensemble generator has been calibrated on the basis of the spatial-temporal characteristics of the residual error in radar estimates, assessed with reference to rainfall records from around 200 rain gauges available for the year 2007, previously post-processed and corrected by the UK Met Office [12-13]. Each ensemble member is determined by summing a perturbation field to the unperturbed radar rainfall field. The perturbations are generated by imposing the radar error spatial and temporal correlation structure to purely stochastic fields. A
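The ensemble generation step described at the end of this abstract — add spatially correlated perturbations to the unperturbed radar field — can be sketched by smoothing white noise with a Gaussian kernel to impose a correlation structure. The grid size, noise amplitude, and correlation scale below are invented, and this is a simplification of the cited REAL-style generator:

```python
import numpy as np

rng = np.random.default_rng(4)

# Impose spatial correlation on white noise by Gaussian-kernel smoothing
# (illustrative stand-in for a calibrated radar-error correlation model).
def gaussian_smooth(field, scale):
    n = field.shape[0]
    i = np.arange(n)
    w = np.exp(-0.5 * ((i[:, None] - i[None, :]) / scale) ** 2)
    w /= w.sum(axis=1, keepdims=True)
    return w @ field @ w.T

radar = 5.0 + rng.random((64, 64))   # unperturbed rainfall estimate (mm/h, toy)

# Each ensemble member = radar field + correlated perturbation field
members = [radar + 0.8 * gaussian_smooth(rng.normal(size=(64, 64)), 4.0)
           for _ in range(20)]

spread = np.std(members, axis=0)     # pixel-wise ensemble spread
print(round(float(spread.mean()), 3))
```

Feeding each member through the sewer flow model would then yield the ensemble flow predictions whose spread reflects the radar rainfall uncertainty.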
Analytic Approximations for Transit Light Curve Observables, Uncertainties, and Covariances
Carter, Joshua A.; Yee, Jennifer C.; Eastman, Jason; Gaudi, B. Scott; Winn, Joshua N.
2008-01-01
The light curve of an exoplanetary transit can be used to estimate the planetary radius and other parameters of interest. Because accurate parameter estimation is a non-analytic and computationally intensive problem, it is often useful to have analytic approximations for the parameters as well as their uncertainties and covariances. Here we give such formulas, for the case of an exoplanet transiting a star with a uniform brightness distribution. We also assess the advantages of some relativel...
A new analytical framework for tidal propagation in estuaries
Cai, H.
2014-01-01
The ultimate aim of this thesis is to enhance our understanding of tidal wave propagation in convergent alluvial estuaries (of infinite length). In the process, a new analytical model has been developed as a function of externally defined dimensionless parameters describing friction, channel converg
Data Assimilation and Propagation of Uncertainty in Multiscale Cardiovascular Simulation
Schiavazzi, Daniele; Marsden, Alison
2015-11-01
Cardiovascular modeling is the application of computational tools to predict hemodynamics. State-of-the-art techniques couple a 3D incompressible Navier-Stokes solver with a boundary circulation model and can predict local and peripheral hemodynamics, analyze the post-operative performance of surgical designs and complement clinical data collection minimizing invasive and risky measurement practices. The ability of these tools to make useful predictions is directly related to their accuracy in representing measured physiologies. Tuning of model parameters is therefore a topic of paramount importance and should include clinical data uncertainty, revealing how this uncertainty will affect the predictions. We propose a fully Bayesian, multi-level approach to data assimilation of uncertain clinical data in multiscale circulation models. To reduce the computational cost, we use a stable, condensed approximation of the 3D model built by linear sparse regression of the pressure/flow rate relationship at the outlets. Finally, we consider the problem of non-invasively propagating the uncertainty in model parameters to the resulting hemodynamics and compare Monte Carlo simulation with Stochastic Collocation approaches based on Polynomial or Multi-resolution Chaos expansions.
Analytic structure of QCD propagators in Minkowski space
Siringo, Fabio
2016-01-01
Analytical functions for the propagators of QCD, including a set of chiral quarks, are derived by a one-loop massive expansion in the Landau gauge, deep in the infrared. By analytic continuation, the spectral functions are studied in Minkowski space, yielding a direct proof of positivity violation and confinement from first principles. The dynamical breaking of chiral symmetry is described on the same footing as gluon mass generation, providing a unified picture. While dealing with the exact Lagrangian, the expansion is based on massive free-particle propagators, is safe in the infrared and is equivalent to the standard perturbation theory in the UV. By dimensional regularization, all diverging mass terms cancel exactly without including mass counterterms that would spoil the gauge and chiral symmetry of the Lagrangian. Universal scaling properties are predicted for the inverse dressing functions and shown to be satisfied by the lattice data. Complex conjugated poles are found for the gluon propagator, in agre...
Ultrashort Optical Pulse Propagation in terms of Analytic Signal
Directory of Open Access Journals (Sweden)
Sh. Amiranashvili
2011-01-01
Full Text Available We demonstrate that ultrashort optical pulses propagating in a nonlinear dispersive medium are naturally described through incorporation of the analytic signal for the electric field. To this end a second-order nonlinear wave equation is first simplified using a unidirectional approximation. Then the analytic signal is introduced, and all nonresonant nonlinear terms are eliminated. The derived propagation equation accounts for arbitrary dispersion, resonant four-wave mixing processes, weak absorption, and arbitrary pulse duration. The model applies to the complex electric field and is independent of the slowly varying envelope approximation. Still the derived propagation equation possesses the universal structure of the generalized nonlinear Schrödinger equation (NSE). In particular, it can be solved numerically with only small changes of the standard split-step solver or more complicated spectral algorithms for NSE. We present exemplary numerical solutions describing supercontinuum generation with an ultrashort optical pulse.
Risk classification and uncertainty propagation for virtual water distribution systems
International Nuclear Information System (INIS)
While the secrecy of real water distribution system data is crucial, it poses difficulty for research as results cannot be publicized. This data includes topological layouts of pipe networks, pump operation schedules, and water demands. Therefore, a library of virtual water distribution systems can be an important research tool for comparative development of analytical methods. A virtual city, 'Micropolis', has been developed, including a comprehensive water distribution system, as a first entry into such a library. This virtual city of 5000 residents is fully described in both geographic information systems (GIS) and EPANet hydraulic model frameworks. A risk classification scheme and Monte Carlo analysis are employed for an attempted water supply contamination attack. Model inputs to be considered include uncertainties in: daily water demand, seasonal demand, initial storage tank levels, the time of day a contamination event is initiated, duration of contamination event, and contaminant quantity. Findings show that reasonable uncertainties in model inputs produce high variability in exposure levels. It is also shown that exposure level distributions experience noticeable sensitivities to population clusters within the contaminant spread area. High uncertainties in exposure patterns lead to greater resources needed for more effective mitigation strategies.
Uncertainty propagation in orbital mechanics via tensor decomposition
Sun, Yifei; Kumar, Mrinal
2016-03-01
Uncertainty forecasting in orbital mechanics is an essential but difficult task, primarily because the underlying Fokker-Planck equation (FPE) is defined on a relatively high dimensional (6-D) state-space and is driven by the nonlinear perturbed Keplerian dynamics. In addition, an enormously large solution domain is required for numerical solution of this FPE (e.g. encompassing the entire orbit in the x-y-z subspace), of which the state probability density function (pdf) occupies a tiny fraction at any given time. This coupling of large size, high dimensionality and nonlinearity makes for a formidable computational task, and has caused the FPE for orbital uncertainty propagation to remain an unsolved problem. To the best of the authors' knowledge, this paper presents the first successful direct solution of the FPE for perturbed Keplerian mechanics. To tackle the dimensionality issue, the time-varying state pdf is approximated in the CANDECOMP/PARAFAC decomposition tensor form where all the six spatial dimensions as well as the time dimension are separated from one another. The pdf approximation for all times is obtained simultaneously via the alternating least squares algorithm. Chebyshev spectral differentiation is employed for discretization on account of its spectral ("super-fast") convergence rate. To facilitate the tensor decomposition and control the solution domain size, system dynamics is expressed using spherical coordinates in a noninertial reference frame. Numerical results obtained on a regular personal computer are compared with Monte Carlo simulations.
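The core ingredients named in this abstract — a CANDECOMP/PARAFAC (CP) separated representation fitted by alternating least squares — can be shown in miniature. The sketch below fits a rank-1 CP model to a small 3-way tensor (the paper works with a 7-way, much larger problem); each factor update is a closed-form least-squares step:

```python
import numpy as np

rng = np.random.default_rng(5)

# Build an exactly rank-1 tensor T[i,j,k] = a0[i] * b0[j] * c0[k]
a0, b0, c0 = rng.random(6), rng.random(7), rng.random(8)
T = np.einsum('i,j,k->ijk', a0, b0, c0)

# Alternating least squares: update each factor with the others held fixed
a, b, c = np.ones(6), np.ones(7), np.ones(8)
for _ in range(50):
    a = np.einsum('ijk,j,k->i', T, b, c) / ((b @ b) * (c @ c))
    b = np.einsum('ijk,i,k->j', T, a, c) / ((a @ a) * (c @ c))
    c = np.einsum('ijk,i,j->k', T, a, b) / ((a @ a) * (b @ b))

err = np.linalg.norm(T - np.einsum('i,j,k->ijk', a, b, c)) / np.linalg.norm(T)
print(f"relative reconstruction error {err:.2e}")
```

Storing three short factor vectors instead of the full 6x7x8 array is the same separation-of-dimensions trick that makes a 6-D (plus time) pdf tractable in the paper.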
Spin-Stabilized Spacecrafts: Analytical Attitude Propagation Using Magnetic Torques
Hélio Koiti Kuga; Maria Cecília F. P. S. Zanardi; Roberta Veloso Garcia
2009-01-01
An analytical approach for spin-stabilized satellite attitude propagation is presented, considering the influence of the residual magnetic torque and the eddy current torque. Two approaches are assumed to examine the influence of external torques acting during the motion of the satellite, with the Earth's magnetic field described by the quadrupole model. In the first approach, only the residual magnetic torque is included in the motion equations, with the satellites in circular or elliptical o...
Uncertainties in workplace external dosimetry - An analytical approach
International Nuclear Information System (INIS)
The uncertainties associated with external dosimetry measurements at workplaces depend on the type of dosemeter used together with its performance characteristics and the information available on the measurement conditions. Performance characteristics were determined in the course of a type test, and information about the measurement conditions can be either general, e.g. 'research' and 'medicine', or specific, e.g. 'X-ray testing equipment for aluminium wheel rims'. This paper explains an analytical approach to determine the measurement uncertainty. It is based on the Draft IEC Technical Report IEC 62461 Radiation Protection Instrumentation - Determination of Uncertainty in Measurement. Neither this paper nor the report can eliminate the fact that the determination of the uncertainty requires a larger effort than performing the measurement itself. As a counterbalance, the process of determining the uncertainty results not only in a numerical value of the uncertainty but also produces the best estimate of the quantity to be measured, which may differ from the indication of the instrument. Thus it also improves the result of the measurement. (authors)
Approximate analytical solutions for excitation and propagation in cardiac tissue
Greene, D'Artagnan; Shiferaw, Yohannes
2015-04-01
It is well known that a variety of cardiac arrhythmias are initiated by a focal excitation in heart tissue. At the single cell level these currents are typically induced by intracellular processes such as spontaneous calcium release (SCR). However, it is not understood how the size and morphology of these focal excitations are related to the electrophysiological properties of cardiac cells. In this paper a detailed physiologically based ionic model is analyzed by projecting the excitation dynamics to a reduced one-dimensional parameter space. Based on this analysis we show that the inward current required for an excitation to occur is largely dictated by the voltage dependence of the inward rectifier potassium current (IK1), and is insensitive to the detailed properties of the sodium current. We derive an analytical expression relating the size of a stimulus and the critical current required to induce a propagating action potential (AP), and argue that this relationship determines the necessary number of cells that must undergo SCR in order to induce ectopic activity in cardiac tissue. Finally, we show that, once a focal excitation begins to propagate, its propagation characteristics, such as the conduction velocity and the critical radius for propagation, are largely determined by the sodium and gap junction currents with a substantially lesser effect due to repolarizing potassium currents. These results reveal the relationship between ion channel properties and important tissue scale processes such as excitation and propagation.
Uncertainty propagation for systems of conservation laws, stochastic spectral methods
International Nuclear Information System (INIS)
Uncertainty quantification through stochastic spectral methods has been recently applied to several kinds of stochastic PDEs. This thesis deals with stochastic systems of conservation laws. These systems are nonlinear and develop discontinuities in finite time: these difficulties can trigger the loss of hyperbolicity of the truncated system resulting from the application of sG-gPC (stochastic Galerkin-generalized Polynomial Chaos). We introduce a formalism based on both kinetic theory and moments theory in order to close the truncated system in such a way that the hyperbolicity of the latter is ensured. The idea is to close the truncated system obtained by Galerkin projection via the introduction of an entropy, a strictly convex function on the definition domain of our unknowns. In the case where this entropy is the mathematical entropy of the non-truncated system, hyperbolicity is ensured. We state several properties of this truncated system for a general non-truncated system of conservation laws. We then apply the method to the case of the stochastic inviscid Burgers' equation with random initial conditions and to the stochastic Euler system in one and two space dimensions. In the vicinity of discontinuities, the new method bounds the oscillations due to the Gibbs phenomenon to a certain range through the entropy of the system, without the use of any adaptive random space discretizations. It is found to be more precise than the stochastic Galerkin method for several test problems. In a last chapter, we present two prospective outlooks: we first suggest an uncertainty propagation method based on the coupling of intrusive and non-intrusive methods. We finally emphasize the modelling possibilities of the intrusive Polynomial Chaos methods in order to take into account three-dimensional perturbations of a mean one-dimensional flow. (author)
Propagation of nuclear data uncertainty: Exact or with covariances
Directory of Open Access Journals (Sweden)
van Veen D.
2010-10-01
Full Text Available Two distinct methods of propagation for basic nuclear data uncertainties to large scale systems will be presented and compared. The "Total Monte Carlo" method uses a statistical ensemble of nuclear data libraries randomly generated by means of a Monte Carlo approach with the TALYS system. These libraries are then directly used in a large number of reactor calculations (for instance with MCNP), after which the exact probability distribution for the reactor parameter is obtained. The second method makes use of available covariance files and can be done in a single reactor calculation (by using the perturbation method). In this exercise, both methods are using consistent sets of data files, which implies that covariance files used in the second method are directly obtained from the randomly generated nuclear data libraries from the first method. This is a unique and straightforward comparison allowing one to directly apprehend the advantages and drawbacks of each method. Comparisons for different reactions and criticality-safety benchmarks from 19F to actinides will be presented. We can thus conclude whether current methods for using covariance data are good enough or not.
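The two methods compared in this abstract have a simple caricature when the response is (locally) linear in the data: the covariance route is the sandwich rule var(k) = s^T C s computed once, while Total Monte Carlo samples many random "libraries" and runs one calculation each. All numbers below are invented; the linear response stands in for an actual reactor calculation:

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy linear reactor response k = s . x around nominal nuclear data x0
x0 = np.array([1.0, 2.0, 0.5])
cov = np.array([[0.01,  0.002, 0.0],
                [0.002, 0.04,  0.0],
                [0.0,   0.0,   0.0025]])   # assumed covariance of the data
s = np.array([0.3, -0.1, 0.8])             # sensitivity vector dk/dx

# Method 2 (covariance + perturbation): sandwich rule, one calculation
var_sandwich = s @ cov @ s

# Method 1 (Total Monte Carlo): many random libraries, one calculation each
x = rng.multivariate_normal(x0, cov, size=200_000)
var_tmc = (x @ s).var()

print(round(float(var_sandwich), 5), round(float(var_tmc), 5))
```

For a linear response the two agree to sampling error; Total Monte Carlo earns its keep when the response is nonlinear or non-Gaussian, where it returns the full distribution rather than a variance.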
Uncertainty-aware video visual analytics of tracked moving objects
Directory of Open Access Journals (Sweden)
Markus Höferlin
2011-01-01
Full Text Available Vast amounts of video data render manual video analysis useless, while recent automatic video analytics techniques suffer from insufficient performance. To alleviate these issues, we present a scalable and reliable approach exploiting the visual analytics methodology. This involves the user in the iterative process of exploration, hypothesis generation, and their verification. Scalability is achieved by interactive filter definitions on trajectory features extracted by the automatic computer vision stage. We establish the interface between user and machine adopting the VideoPerpetuoGram (VPG) for visualization and enable users to provide filter-based relevance feedback. Additionally, users are supported in deriving hypotheses by context-sensitive statistical graphics. To allow for reliable decision making, we gather uncertainties introduced by the computer vision step, communicate this information to users through uncertainty visualization, and grant fuzzy hypothesis formulation to interact with the machine. Finally, we demonstrate the effectiveness of our approach by the video analysis mini challenge which was part of the IEEE Symposium on Visual Analytics Science and Technology 2009.
Propagating Uncertainty in Solar Panel Performance for Life Cycle Modeling in Early Stage Design
Honda, Tomonori; Chen, Heidi Qianyi; Chan, Kennis Y.; Yang, Maria
2011-01-01
One of the challenges in accurately applying metrics for life cycle assessment lies in accounting for both irreducible and inherent uncertainties in how a design will perform under real world conditions. This paper presents a preliminary study that compares two strategies, one simulation-based and one set-based, for propagating uncertainty in a system. These strategies for uncertainty propagation are then aggregated. This work is conducted in the context of an amorphou...
Assessment and Propagation of Input Uncertainty in Tree-based Option Pricing Models
Gzyl, Henryk; Molina, German; ter Horst, Enrique
2007-01-01
This paper aims to provide a practical example on the assessment and propagation of input uncertainty for option pricing when using tree-based methods. Input uncertainty is propagated into output uncertainty, reflecting that option prices are as unknown as the inputs they are based on. Option pricing formulas are tools whose validity is conditional not only on how closely the model represents reality, but also on the quality of the inputs they use, and those inputs are usually not observable. W...
Preliminary Results on Uncertainty Quantification for Pattern Analytics
Energy Technology Data Exchange (ETDEWEB)
Stracuzzi, David John [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Brost, Randolph [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Chen, Maximillian Gene [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Malinas, Rebecca [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Peterson, Matthew Gregor [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Phillips, Cynthia A. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Robinson, David G. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Woodbridge, Diane [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)
2015-09-01
This report summarizes preliminary research into uncertainty quantification for pattern analytics within the context of the Pattern Analytics to Support High-Performance Exploitation and Reasoning (PANTHER) project. The primary focus of PANTHER was to make large quantities of remote sensing data searchable by analysts. The work described in this report adds nuance to both the initial data preparation steps and the search process. Search queries are transformed from "does the specified pattern exist in the data?" to "how certain is the system that the returned results match the query?" We show example results for both data processing and search, and discuss a number of possible improvements for each.
Díez, C. J.; Cabellos, O.; Martínez, J. S.
2015-01-01
Several approaches have been developed in recent decades to tackle nuclear data uncertainty propagation problems in burn-up calculations. One proposed approach is the Hybrid Method, where uncertainties in nuclear data are propagated only through the depletion part of a burn-up problem. Because only depletion is addressed, only one-group cross sections are necessary, and hence only their collapsed one-group uncertainties. This approach has been applied successfully to several advanced reactor systems, such as EFIT (an ADS-like reactor) and ESFR (a sodium fast reactor), to assess uncertainties in the isotopic composition. However, a comparison with multi-group energy structures was not carried out, and has to be performed in order to analyse the limitations of using one-group uncertainties.
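The one-group collapse mentioned above can be illustrated with a minimal Python sketch: group cross sections are flux-weighted into a single value, and the group covariance is propagated through the same weights via the sandwich rule. All numbers here are hypothetical, not taken from the study.

```python
import numpy as np

phi = np.array([0.2, 0.5, 0.3])           # hypothetical group fluxes
sigma = np.array([10.0, 2.0, 0.5])        # hypothetical group cross sections (barns)
cov = np.diag([0.5**2, 0.1**2, 0.05**2])  # hypothetical group covariance (uncorrelated)

# Flux-weighted one-group collapse: sigma_1g = sum(phi_g * sigma_g) / sum(phi_g)
w = phi / phi.sum()
sigma_1g = w @ sigma

# Sandwich rule: var(sigma_1g) = w^T C w, using the same collapse weights
var_1g = w @ cov @ w
```

The limitation the abstract points to is visible here: once collapsed, the single variance `var_1g` cannot resolve which energy groups drive the uncertainty.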
An Analytical Study of the Mode Propagation along the Plasmaline
Szeremley, Daniel; Brinkmann, Ralf Peter; Mussenbrock, Thomas; Eremin, Denis; Theoretical Electrical Engineering Team
2014-10-01
In recent years the market has shown a growing demand for bottles made of polyethylene terephthalate (PET). Therefore, fast and efficient sterilization processes as well as barrier coatings to decrease gas permeation are required. A specialized microwave plasma source - referred to as the plasmaline - has been developed to allow for treatment of the inner surface of such PET bottles. The plasmaline is a coaxial waveguide combined with a gas-inlet which is inserted into the empty bottle and initiates a reactive plasma. To optimize and control the different surface processes, it is essential to fully understand the microwave power coupling to the plasma inside the bottle and thus the electromagnetic wave propagation along the plasmaline. In this contribution, we present a detailed dispersion analysis based on an analytical approach. We study how modes of guided waves propagate under different conditions (if at all). The analytical results are supported by a series of self-consistent numerical simulations of the plasmaline and the plasma. The authors acknowledge funding by the Deutsche Forschungsgemeinschaft within the frame of SFB-TR 87.
International Nuclear Information System (INIS)
The reliability of a system, notwithstanding its intended function, can be significantly affected by the uncertainty in the reliability estimates of the components that define the system. This paper implements the Unscented Transformation (UT) to quantify the effects of component reliability uncertainty through two approaches. The first approach is based on the concept of uncertainty propagation, which is the assessment of the effect that the variability of the component reliabilities produces on the variance of the system reliability. This UT-based assessment has been previously considered in the literature, but only for systems represented through series/parallel configurations. In this paper the assessment is extended to systems whose reliability cannot be represented through analytical expressions and requires, for example, Monte Carlo simulation. The second approach consists of the evaluation of the importance of components, i.e., the evaluation of the components that contribute most to the variance of the system reliability. An extension of the UT is proposed to evaluate the so-called "main effects" of each component, as well as to assess higher-order component interactions. Several examples with excellent results illustrate the proposed approach. - Highlights: • Simulation based approach for computing reliability estimates. • Computation of reliability variance via 2n+1 points. • Immediate computation of component importance. • Application to network systems
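The 2n+1-point propagation named in the highlights can be sketched in a few lines of Python. This is a generic unscented transformation for independent inputs, applied to a hypothetical two-component series system; it is not the paper's implementation, and all numbers are illustrative.

```python
import numpy as np

def unscented_propagation(mean, var, system_fn, kappa=1.0):
    """Propagate input uncertainty with the unscented transformation,
    using 2n+1 sigma points for n independent uncertain inputs."""
    mean = np.asarray(mean, float)
    n = len(mean)
    # Symmetric sigma points: the mean, plus +/- offsets along each axis
    scale = np.sqrt((n + kappa) * np.asarray(var, float))
    points = [mean.copy()]
    for i in range(n):
        for sign in (+1.0, -1.0):
            p = mean.copy()
            p[i] += sign * scale[i]
            points.append(p)
    # Standard UT weights: kappa/(n+kappa) for the mean point, 1/(2(n+kappa)) otherwise
    weights = np.array([kappa / (n + kappa)] + [1.0 / (2 * (n + kappa))] * (2 * n))
    values = np.array([system_fn(p) for p in points])
    m = weights @ values
    v = weights @ (values - m) ** 2
    return m, v

# Hypothetical series system of two components: R_sys = R1 * R2
m, v = unscented_propagation([0.90, 0.95], [0.01**2, 0.02**2],
                             lambda r: r[0] * r[1])
```

For this product of independent reliabilities the UT mean is exact (0.855) and the variance misses only the tiny cross term v1*v2, which is the kind of accuracy-versus-cost trade-off the paper exploits.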
A Multi-Model Approach for Uncertainty Propagation and Model Calibration in CFD Applications
Wang, Jian-xun; Xiao, Heng
2015-01-01
Proper quantification and propagation of uncertainties in computational simulations are of critical importance. This issue is especially challenging for CFD applications. A particular obstacle for uncertainty quantifications in CFD problems is the large model discrepancies associated with the CFD models used for uncertainty propagation. Neglecting or improperly representing the model discrepancies leads to inaccurate and distorted uncertainty distribution for the Quantities of Interest. High-fidelity models, being accurate yet expensive, can accommodate only a small ensemble of simulations and thus lead to large interpolation errors and/or sampling errors; low-fidelity models can propagate a large ensemble, but can introduce large modeling errors. In this work, we propose a multi-model strategy to account for the influences of model discrepancies in uncertainty propagation and to reduce their impact on the predictions. Specifically, we take advantage of CFD models of multiple fidelities to estimate the model ...
Analytical probabilistic proton dose calculation and range uncertainties
International Nuclear Information System (INIS)
We introduce the concept of analytical probabilistic modeling (APM) to calculate the mean and the standard deviation of intensity-modulated proton dose distributions under the influence of range uncertainties in closed form. For APM, range uncertainties are modeled with a multivariate Normal distribution p(z) over the radiological depths z. A pencil beam algorithm that parameterizes the proton depth dose d(z) with a weighted superposition of ten Gaussians is used. Hence, the integrals ∫ dz p(z) d(z) and ∫ dz p(z) d(z)² required for the calculation of the expected value and standard deviation of the dose remain analytically tractable and can be efficiently evaluated. The means μ_k, widths δ_k, and weights ω_k of the Gaussian components parameterizing the depth dose curves are found with least-squares fits for all available proton ranges. We observe less than 0.3% average deviation of the Gaussian parameterizations from the original proton depth dose curves. Consequently, APM yields high-accuracy estimates for the expected value and standard deviation of intensity-modulated proton dose distributions for two-dimensional test cases. APM can accommodate arbitrary correlation models and account for the different nature of random and systematic errors in fractionated radiation therapy. Beneficial applications of APM in robust planning are feasible.
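The analytical tractability claimed above rests on the standard identity ∫ N(z; μ_z, σ_z²) N(z; μ_k, δ_k²) dz = N(μ_z; μ_k, σ_z² + δ_k²): a Gaussian range uncertainty integrated against each Gaussian dose component is again a Gaussian with combined width. A minimal Python sketch with hypothetical parameters (two components instead of ten), checked against brute-force quadrature:

```python
import numpy as np

def gauss(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def expected_dose(mu_z, sigma_z, weights, mus, deltas):
    """Closed form of E[d] = integral of p(z) d(z) dz for Normal p(z)
    and d(z) a weighted sum of Gaussians: each term collapses to a
    single Gaussian evaluation with combined width."""
    s = np.sqrt(sigma_z**2 + np.asarray(deltas) ** 2)
    return float(np.sum(np.asarray(weights) * gauss(mu_z, np.asarray(mus), s)))

# Hypothetical depth-dose parameterization and range uncertainty
mu_z, sigma_z = 10.0, 0.5
w, mus, deltas = [1.0, 0.5], [9.0, 11.0], [1.0, 0.8]

analytic = expected_dose(mu_z, sigma_z, w, mus, deltas)

# Brute-force check on a fine grid
z = np.linspace(0.0, 20.0, 400001)
dz = z[1] - z[0]
d = sum(wk * gauss(z, mk, dk) for wk, mk, dk in zip(w, mus, deltas))
numeric = float(np.sum(gauss(z, mu_z, sigma_z) * d) * dz)
```

The same collapse applies term-by-term to ∫ dz p(z) d(z)², since products of Gaussians are again (unnormalized) Gaussians.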
A comparative study: top event unavailability by point estimates and uncertainty propagation
International Nuclear Information System (INIS)
The results of five cases studied are presented to identify how close the cumulative value represented by the point estimate is to the corresponding statistics of the top event distribution. The computer code FTA-J is used for quantification of the fault trees studied, including top event unavailabilities, moments and cumulative probability distributions. FTA-J demonstrates its usefulness for large trees. In all cases, the point estimate unavailability of the top event based on the median values of the basic events, which has been widely and commonly used in risk assessment for the sake of its simplicity, is lower than the median unavailability obtained by uncertainty propagation. The top event unavailability thus obtained can be much too low: i.e. the system would appear much better than it actually is. The point estimate based on the mean values, however, is shown to be the same as that obtained by uncertainty propagation, both numerically and analytically. The mean of the top event can be well approximated by forming the product of the means of the components in each cut set, then summing these products. The point estimate cannot represent all of the probability distribution characteristics of the top event, so the estimation of probability intervals for the top event unavailability should be made either by Monte Carlo simulation or another analytical method. When the top event unavailability must be calculated only by point estimate, it is the mean value of the component failure data that should be used for its quantification. (author)
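The median-versus-mean discrepancy described above can be demonstrated on a toy fault tree with two minimal cut sets, {A,B} and {C}, and hypothetical lognormal component unavailabilities (a minimal sketch, not the FTA-J calculation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Rare-event approximation for cut sets {A,B} and {C}: Q_top = qA*qB + qC
def top(q):
    return q[0] * q[1] + q[2]

# Hypothetical lognormal unavailabilities: median 1e-3, error factor ~3
mu, sigma = np.log(1e-3), np.log(3.0) / 1.645
q = rng.lognormal(mu, sigma, size=(200000, 3))

mc_mean = top(q.T).mean()              # mean from full uncertainty propagation
median_pt = top(np.full(3, 1e-3))      # point estimate from component medians
mean_pt = top(q.mean(axis=0))          # point estimate from component means
```

The median-based point estimate (about 1.0e-3) falls below the propagated mean (about 1.25e-3), while the mean-based point estimate reproduces it, exactly as the abstract concludes for independent components.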
An analytical approach for the Propagation Saw Test
Benedetti, Lorenzo; Fischer, Jan-Thomas; Gaume, Johan
2016-04-01
The Propagation Saw Test (PST) [1, 2] is an experimental in-situ technique that has been introduced to assess crack propagation propensity in weak snowpack layers buried below cohesive snow slabs. This test has attracted the interest of a large number of practitioners, being relatively easy to perform and providing useful insights for the evaluation of snow instability. The PST procedure requires isolating a snow column 30 centimeters wide and at least 1 meter long in the downslope direction. Then, once the stratigraphy is known (e.g. from a manual snow profile), a saw is used to cut a weak layer which could fail, potentially leading to the release of a slab avalanche. If the length of the saw cut reaches the so-called critical crack length, the onset of crack propagation occurs. Furthermore, depending on snow properties, the crack in the weak layer can initiate the fracture and detachment of the overlying slab. Statistical studies over a large set of field data confirmed the relevance of the PST, highlighting the positive correlation between test results and the likelihood of avalanche release [3]. Recent works provided key information on the conditions for the onset of crack propagation [4] and on the evolution of slab displacement during the test [5]. In addition, experimental studies [6] and simplified models [7] focused on the qualitative description of snowpack properties leading to different failure types, namely full propagation or fracture arrest (with or without slab fracture). However, besides current numerical studies utilizing discrete element methods [8], only little attention has been devoted to a detailed analytical description of the PST able to give a comprehensive mechanical framework of the sequence of processes involved in the test. Consequently, this work aims to give a quantitative tool for an exhaustive interpretation of the PST, focusing attention on the important parameters that influence the test outcomes. First, starting from a pure
Measuring the Gas Constant "R": Propagation of Uncertainty and Statistics
Olsen, Robert J.; Sattar, Simeen
2013-01-01
Determining the gas constant "R" by measuring the properties of hydrogen gas collected in a gas buret is well suited for comparing two approaches to uncertainty analysis using a single data set. The brevity of the experiment permits multiple determinations, allowing for statistical evaluation of the standard uncertainty u[subscript…
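One of the two uncertainty-analysis approaches such an experiment invites is first-order (Taylor) propagation: for R = PV/(nT), the relative variances of independent inputs add in quadrature. A minimal Python sketch with hypothetical buret measurements (the function and all values are illustrative, not from the article):

```python
import math

def gas_constant_uncertainty(P, uP, V, uV, n, un, T, uT):
    """First-order propagation for R = PV/(nT): for a pure
    product/quotient, relative variances of independent inputs add."""
    R = P * V / (n * T)
    rel = math.sqrt((uP / P) ** 2 + (uV / V) ** 2
                    + (un / n) ** 2 + (uT / T) ** 2)
    return R, R * rel

# Hypothetical measurement: P in Pa, V in m^3, n in mol, T in K
R, uR = gas_constant_uncertainty(101325, 140,
                                 49.5e-6, 0.1e-6,
                                 2.02e-3, 0.02e-3,
                                 295.0, 0.5)
```

The other approach mentioned in the abstract, statistical evaluation of repeated determinations, would instead compute the standard deviation of the mean across multiple such R values.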
'spup' - an R package for uncertainty propagation in spatial environmental modelling
Sawicka, Kasia; Heuvelink, Gerard
2016-04-01
Computer models have become a crucial tool in engineering and environmental sciences for simulating the behaviour of complex static and dynamic systems. However, while many models are deterministic, the uncertainty in their predictions needs to be estimated before they are used for decision support. Currently, advances in uncertainty propagation and assessment have been paralleled by a growing number of software tools for uncertainty analysis, but none has gained recognition for universal applicability, including case studies with spatial models and spatial model inputs. Given the growing popularity and applicability of the open-source R programming language, we undertook a project to develop an R package that facilitates uncertainty propagation analysis in spatial environmental modelling. In particular, the 'spup' package provides functions for examining uncertainty propagation from input data and model parameters, via the environmental model, onto model predictions. The functions include uncertainty model specification, stochastic simulation and propagation of uncertainty using Monte Carlo (MC) techniques, as well as several uncertainty visualization functions. Uncertain environmental variables are represented in the package as objects whose attribute values may be uncertain and described by probability distributions. Both numerical and categorical data types are handled. Spatial auto-correlation within an attribute and cross-correlation between attributes are also accommodated. For uncertainty propagation the package implements the MC approach with efficient sampling algorithms, i.e. stratified random sampling and Latin hypercube sampling. The design includes facilitation of parallel computing to speed up MC computation. The MC realizations may be used as an input to environmental models called from R, or externally. Selected static and interactive visualization methods that are understandable by non-experts with limited background in
Propagation of nuclear data uncertainties in fuel cycle calculations using Monte-Carlo technique
International Nuclear Information System (INIS)
Nowadays, the knowledge of uncertainty propagation in depletion calculations is a critical issue because of the safety and economical performance of fuel cycles. Response magnitudes such as decay heat, radiotoxicity and isotopic inventory and their uncertainties should be known to handle spent fuel in present fuel cycles (e.g. high burnup fuel programme) and furthermore in new fuel cycles designs (e.g. fast breeder reactors and ADS). To deal with this task, there are different error propagation techniques, deterministic (adjoint/forward sensitivity analysis) and stochastic (Monte-Carlo technique) to evaluate the error in response magnitudes due to nuclear data uncertainties. In our previous works, cross-section uncertainties were propagated using a Monte-Carlo technique to calculate the uncertainty of response magnitudes such as decay heat and neutron emission. Also, the propagation of decay data, fission yield and cross-section uncertainties was performed, but only isotopic composition was the response magnitude calculated. Following the previous technique, the nuclear data uncertainties are taken into account and propagated to response magnitudes, decay heat and radiotoxicity. These uncertainties are assessed during cooling time. To evaluate this Monte-Carlo technique, two different applications are performed. First, a fission pulse decay heat calculation is carried out to check the Monte-Carlo technique, using decay data and fission yields uncertainties. Then, the results, experimental data and reference calculation (JEFF Report20), are compared. Second, we assess the impact of basic nuclear data (activation cross-section, decay data and fission yields) uncertainties on relevant fuel cycle parameters (decay heat and radiotoxicity) for a conceptual design of a modular European Facility for Industrial Transmutation (EFIT) fuel cycle. After identifying which time steps have higher uncertainties, an assessment of which uncertainties have more relevance is performed
Pragmatic aspects of uncertainty propagation: A conceptual review
Thacker, W.Carlisle
2015-09-11
When quantifying the uncertainty of the response of a computationally costly oceanographic or meteorological model stemming from the uncertainty of its inputs, practicality demands getting the most information using the fewest simulations. It is widely recognized that, by interpolating the results of a small number of simulations, results of additional simulations can be inexpensively approximated to provide a useful estimate of the variability of the response. Even so, as computing the simulations to be interpolated remains the biggest expense, the choice of these simulations deserves attention. When making this choice, two requirements should be considered: (i) the nature of the interpolation and (ii) the available information about input uncertainty. Examples comparing polynomial interpolation and Gaussian process interpolation are presented for three different views of input uncertainty.
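The surrogate idea above, interpolating a handful of costly runs and then propagating input uncertainty through the cheap interpolant, can be sketched with polynomial interpolation. The "costly model" and all numbers here are stand-ins chosen for illustration:

```python
import numpy as np

def costly_model(x):
    # Stand-in for an expensive oceanographic/meteorological simulation
    return np.sin(x) + 0.1 * x**2

# Five simulations chosen to span the plausible input range
x_design = np.linspace(-2.0, 2.0, 5)
y_design = costly_model(x_design)

# Degree-4 polynomial through 5 points: an exact interpolant
coef = np.polyfit(x_design, y_design, 4)

# Propagate input uncertainty through the surrogate, not the model
rng = np.random.default_rng(1)
x = rng.normal(0.0, 0.7, 100000)
response = np.polyval(coef, x)
mean, std = response.mean(), response.std()
```

The paper's point is precisely that where `x_design` is placed (relative to the input distribution) and which interpolant is used (polynomial versus Gaussian process) together control the quality of `mean` and `std`.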
Analytic Matrix Method for the Study of Propagation Characteristics of a Bent Planar Waveguide
Institute of Scientific and Technical Information of China (English)
LIU Qing; CAO Zhuang-Qi; SHEN Qi-Shun; DOU Xiao-Ming; CHEN Ying-Li
2000-01-01
An analytic matrix method is used to analyze and accurately calculate the propagation constant and bending losses of a bent planar waveguide. This method gives not only a dispersion equation with explicit physical insight, but also accurate complex propagation constants.
Monte Carlo uncertainty propagation approaches in ADS burn-up calculations
International Nuclear Information System (INIS)
Highlights: ► Two Monte Carlo uncertainty propagation approaches are compared. ► How to make both approaches equivalent is presented and applied. ► ADS burn-up calculation is selected as the application of approaches. ► The cross-section uncertainties of 239Pu and 241Pu are propagated. ► Cross-correlations appear as a source of differences between approaches. - Abstract: In activation calculations, there are several approaches to quantify uncertainties: deterministic by means of sensitivity analysis, and stochastic by means of Monte Carlo. Here, two different Monte Carlo approaches for nuclear data uncertainty are presented: the first one is the Total Monte Carlo (TMC). The second one is by means of a Monte Carlo sampling of the covariance information included in the nuclear data libraries to propagate these uncertainties throughout the activation calculations. This last approach is what we named Covariance Uncertainty Propagation, CUP. This work presents both approaches and their differences. Also, they are compared by means of an activation calculation, where the cross-section uncertainties of 239Pu and 241Pu are propagated in an ADS activation calculation
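The CUP approach named above, sampling nuclear data from the covariance information in the libraries, amounts to drawing correlated perturbations, e.g. via a Cholesky factor of the covariance matrix. A minimal Python sketch with a hypothetical 3-group cross-section set and a trivial stand-in response (the real calculation would be an activation code run per sample):

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical best-estimate cross sections and covariance matrix
xs_mean = np.array([1.20, 0.80, 0.30])
cov = np.array([[0.0016, 0.0008, 0.0002],
                [0.0008, 0.0025, 0.0005],
                [0.0002, 0.0005, 0.0009]])

# CUP-style sampling: correlated Normal perturbations via Cholesky
L = np.linalg.cholesky(cov)
samples = xs_mean + rng.standard_normal((50000, 3)) @ L.T

# Propagate each sampled "library" through the response; here a
# trivial linear stand-in for the activation calculation
response = samples.sum(axis=1)
```

For this linear response the sampled variance must reproduce the analytic value 1^T C 1 = 0.008, which is a useful sanity check; TMC differs in that each sample is a full resampled evaluation rather than a covariance draw, so cross-correlations between reactions appear as a source of differences, as the highlights note.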
An algorithm to improve sampling efficiency for uncertainty propagation using sampling based method
Energy Technology Data Exchange (ETDEWEB)
Campolina, Daniel; Lima, Paulo Rubens I., E-mail: campolina@cdtn.br, E-mail: pauloinacio@cpejr.com.br [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG), Belo Horizonte, MG (Brazil). Servico de Tecnologia de Reatores; Pereira, Claubia; Veloso, Maria Auxiliadora F., E-mail: claubia@nuclear.ufmg.br, E-mail: dora@nuclear.ufmg.br [Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, MG (Brazil). Dept. de Engenharia Nuclear
2015-07-01
Sample size and computational uncertainty were varied in order to investigate sample efficiency and convergence of the sampling-based method for uncertainty propagation. The transport code MCNPX was used to simulate an LWR model and allow the mapping from uncertain inputs of the benchmark experiment to uncertain outputs. Random sampling efficiency was improved through the use of an algorithm for selecting distributions. Mean range, standard deviation range and skewness were verified in order to obtain a better representation of the uncertainty figures. A standard deviation of 5 pcm in the propagated uncertainties for 10 n-sample replicates was adopted as the convergence criterion for the method. An estimated 75 pcm uncertainty on reactor keff was obtained using a sample of size 93 and a computational uncertainty of 28 pcm to propagate the 1σ uncertainty of the burnable poison radius. For a fixed computational time, in order to reduce the variance of the propagated uncertainty, it was found, for the example under investigation, that it is preferable to double the sample size rather than to double the number of particles followed by the Monte Carlo process in the MCNPX code. (author)
Wansik Yu; Eiichi Nakakita; Sunmin Kim; Kosei Yamaguchi
2016-01-01
The common approach to quantifying the precipitation forecast uncertainty is ensemble simulations where a numerical weather prediction (NWP) model is run for a number of cases with slightly different initial conditions. In practice, the spread of ensemble members in terms of flood discharge is used as a measure of forecast uncertainty due to uncertain precipitation forecasts. This study presents the uncertainty propagation of rainfall forecast into hydrological response with catchment scale t...
Pedroni, Nicola; Zio, Enrico; Ferrario, Elisa; Pasanisi, Alberto; Couplet, Mathieu
2013-01-01
We consider a model for the risk-based design of a flood protection dike, and use probability distributions to represent aleatory uncertainty and possibility distributions to describe the epistemic uncertainty associated to the poorly known parameters of such probability distributions. A hybrid method is introduced to hierarchically propagate the two types of uncertainty, and the results are compared with those of a Monte Carlo-based Dempster-Shafer approach employing independent random sets ...
Propagation of uncertainties in the nuclear DFT models
International Nuclear Information System (INIS)
Parameters of the nuclear density functional theory (DFT) models are usually adjusted to experimental data. As a result they carry certain theoretical error, which, as a consequence, carries through to the predicted quantities. In this work we address the propagation of theoretical error, within the nuclear DFT models, from the model parameters to the predicted observables. In particular, the focus is set on the Skyrme energy density functional models. (paper)
Propagation of uncertainties in the nuclear DFT models
Kortelainen, Markus
2014-01-01
Parameters of the nuclear density functional theory (DFT) models are usually adjusted to experimental data. As a result they carry certain theoretical error, which, as a consequence, carries through to the predicted quantities. In this work we address the propagation of theoretical error, within the nuclear DFT models, from the model parameters to the predicted observables. In particular, the focus is set on the Skyrme energy density functional models.
Propagation of Computational Uncertainty Using the Modern Design of Experiments
DeLoach, Richard
2007-01-01
This paper describes the use of formally designed experiments to aid in the error analysis of a computational experiment. A method is described by which the underlying code is approximated with relatively low-order polynomial graduating functions represented by truncated Taylor series approximations to the true underlying response function. A resource-minimal approach is outlined by which such graduating functions can be estimated from a minimum number of case runs of the underlying computational code. Certain practical considerations are discussed, including ways and means of coping with high-order response functions. The distributional properties of prediction residuals are presented and discussed. A practical method is presented for quantifying that component of the prediction uncertainty of a computational code that can be attributed to imperfect knowledge of independent variable levels. This method is illustrated with a recent assessment of uncertainty in computational estimates of Space Shuttle thermal and structural reentry loads attributable to ice and foam debris impact on ascent.
Error Estimation and Uncertainty Propagation in Computational Fluid Mechanics
Zhu, J. Z.; He, Guowei; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
Numerical simulation has now become an integral part of the engineering design process. Critical design decisions are routinely made based on simulation results and conclusions. Verification and validation of the reliability of the numerical simulation is therefore vitally important in the engineering design process. We propose to develop theories and methodologies that can automatically provide quantitative information about the reliability of the numerical simulation by estimating numerical approximation error, computational-model-induced errors and the uncertainties contained in the mathematical models, so that the reliability of the numerical simulation can be verified and validated. We also propose to develop and implement methodologies and techniques that can control the error and uncertainty during the numerical simulation so that the reliability of the numerical simulation can be improved.
Soft computing approaches to uncertainty propagation in environmental risk mangement
Kumar, Vikas
2008-01-01
Real-world problems, especially those that involve natural systems, are complex and composed of many nondeterministic components having non-linear coupling. It turns out that in dealing with such systems, one has to face a high degree of uncertainty and tolerate imprecision. Classical system models based on numerical analysis, crisp logic or binary logic have characteristics of precision and categoricity and are classified as hard computing approaches. In contrast, soft computing approaches like pro...
Gadomski, P. J.; Deems, J. S.; Glennie, C. L.; Hartzell, P. J.; Butler, H.; Finnegan, D. C.
2015-12-01
The use of high-resolution topographic data in the form of three-dimensional point clouds obtained from laser scanning systems (LiDAR) is becoming common across scientific disciplines. However, little consideration has typically been given to the accuracy and the precision of LiDAR-derived measurements at the individual point scale. Numerous disparate sources contribute to the aggregate precision of each point measurement, including uncertainties in the range measurement, measurement of the attitude and position of the LiDAR collection platform, uncertainties associated with the interaction between the laser pulse and the target surface, and more. We have implemented open-source software tools to calculate per-point stochastic measurement errors for a point cloud using the general LiDAR georeferencing equation. We demonstrate the use of these propagated uncertainties by applying our methods to data collected by the Airborne Snow Observatory ALS, a NASA JPL project using a combination of airborne hyperspectral and LiDAR data to estimate snow-water equivalent distributions over full river basins. We present basin-scale snow depth maps with associated uncertainties, and demonstrate the propagation of those uncertainties to snow volume and snow-water equivalent calculations.
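The per-point propagation described above applies first-order variance propagation to the georeferencing equation. The full equation involves platform position, attitude and boresight terms; the sketch below uses a deliberately simplified relation z = ρ·cos(θ) (range and scan angle only, hypothetical values) just to show the mechanics:

```python
import numpy as np

def vertical_uncertainty(rho, u_rho, theta, u_theta):
    """First-order propagation for the simplified georeferencing
    relation z = rho * cos(theta): independent range and scan-angle
    error variances combine through the partial derivatives."""
    dz_drho = np.cos(theta)            # dz / d(rho)
    dz_dtheta = -rho * np.sin(theta)   # dz / d(theta)
    return np.sqrt((dz_drho * u_rho) ** 2 + (dz_dtheta * u_theta) ** 2)

# Hypothetical airborne geometry: 1 km range, 3 cm range noise,
# 15 degree scan angle known to 0.005 degrees
u_z = vertical_uncertainty(1000.0, 0.03, np.radians(15.0), np.radians(0.005))
```

Note how the angular term, scaled by the 1 km lever arm, contributes as much as the range noise itself, which is why attitude uncertainty dominates at long ranges.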
Epistemic uncertainty propagation in energy flows between structural vibrating systems
Xu, Menghui; Du, Xiaoping; Qiu, Zhiping; Wang, Chong
2016-03-01
A dimension-wise method for predicting fuzzy energy flows between structural vibrating systems coupled by joints with epistemic uncertainties is established. Based on its Legendre polynomial approximation at α=0, both the minimum and maximum point vectors of the energy flow of interest are calculated dimension by dimension within the space spanned by the interval parameters determined by the fuzzy parameters at α=0, and the resulting interval bounds are used to assemble the fuzzy energy flows of interest. Besides the proposed method, the vertex method as well as two current methods are also applied. Results from the different methods are compared on two numerical examples, and the accuracy of all methods is verified against Monte Carlo simulation.
Servin, Christian
2015-01-01
On various examples ranging from geosciences to environmental sciences, this book explains how to generate an adequate description of uncertainty, how to justify semiheuristic algorithms for processing uncertainty, and how to make these algorithms more computationally efficient. It explains in what sense the existing approach to uncertainty as a combination of random and systematic components is only an approximation, presents a more adequate three-component model with an additional periodic error component, and explains how uncertainty propagation techniques can be extended to this model. The book provides a justification for a practically efficient heuristic technique (based on fuzzy decision-making). It explains how the computational complexity of uncertainty processing can be reduced. The book also shows how to take into account that in real life, the information about uncertainty is often only partially known, and, on several practical examples, explains how to extract the missing information about uncer...
International Nuclear Information System (INIS)
For all the physical components that comprise a nuclear system there is an uncertainty. Assessing the impact of uncertainties in the simulation of fissionable material systems is essential for the best-estimate calculations that have been replacing conservative model calculations as computational power increases. Propagating uncertainty in a simulation using a Monte Carlo code by sampling the input parameters is recent because of the huge computational effort required. In this work a sample space of MCNPX calculations was used to propagate the uncertainty. The sample size was optimized using the Wilks formula for a 95th percentile and a two-sided statistical tolerance interval of 95%. Uncertainties in input parameters of the reactor considered included geometry dimensions and densities. This demonstrated the capacity of the sampling-based method for burnup calculations when the sample size is optimized and many parameter uncertainties are investigated together, in the same input. In particular, it was shown that during burnup the variance when considering all parameter uncertainties together is equivalent to the sum of the variances when the parameter uncertainties are sampled separately.
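The 95%/95% two-sided Wilks criterion used above fixes the smallest sample size N such that the sample extremes bound at least 95% of the output distribution with 95% confidence: 1 - γ^N - N(1-γ)γ^(N-1) ≥ β with γ = β = 0.95. A minimal sketch of that first-order formula:

```python
def wilks_two_sided(gamma=0.95, beta=0.95):
    """Smallest N whose sample minimum and maximum form a two-sided
    (gamma, beta) tolerance interval, per the first-order Wilks
    formula: 1 - gamma**N - N*(1-gamma)*gamma**(N-1) >= beta."""
    n = 2
    while 1.0 - gamma**n - n * (1.0 - gamma) * gamma ** (n - 1) < beta:
        n += 1
    return n

n95 = wilks_two_sided()  # 93 runs for the two-sided 95%/95% case
```

This reproduces the sample size of 93 quoted in the sampling-efficiency record above; the one-sided criterion (1 - γ^N ≥ β) would instead require 59 runs.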
Rodríguez-Rincón, J. P.; Pedrozo-Acuña, A.; Breña-Naranjo, J. A.
2015-01-01
This investigation aims to study the propagation of meteorological uncertainty within a cascade modelling approach to flood prediction. The methodology comprised a numerical weather prediction (NWP) model, a distributed rainfall–runoff model and a 2-D hydrodynamic model. The uncertainty evaluation was carried out at the meteorological and hydrological levels of the model chain, which enabled the investigation of how errors that originated in the rainfall prediction ...
Myers, Casey A.; Laz, Peter J.; Shelburne, Kevin B.; Davidson, Bradley S.
2014-01-01
Uncertainty that arises from measurement error and parameter estimation can significantly affect the interpretation of musculoskeletal simulations; however, these effects are rarely addressed. The objective of this study was to develop an open-source probabilistic musculoskeletal modeling framework to assess how measurement error and parameter uncertainty propagate through a gait simulation. A baseline gait simulation was performed for a male subject using OpenSim for three stages: inverse ki...
Understanding uncertainty propagation in life cycle assessments of waste management systems
DEFF Research Database (Denmark)
Bisinella, Valentina; Conradsen, Knut; Christensen, Thomas Højlund; Astrup, Thomas Fruergaard
2015-01-01
Uncertainty analysis in Life Cycle Assessments (LCAs) of waste management systems often turns out obscure and complex, with key parameters rarely determined on a case-by-case basis. The paper shows an application of a simplified approach to uncertainty, coupled with a Global Sensitivity Analysis (GSA) perspective, on three alternative waste management systems for Danish single-family household waste. The approach provides a fast and systematic method to select the most important parameters in the LCAs and to understand their propagation and contribution to uncertainty.
Comparison of nuclear data uncertainty propagation methodologies for PWR burn-up simulations
Diez, Carlos Javier; Hoefer, Axel; Porsch, Dieter; Cabellos, Oscar
2014-01-01
Several methodologies using different levels of approximations have been developed for propagating nuclear data uncertainties in nuclear burn-up simulations. Most methods fall into the two broad classes of Monte Carlo approaches, which are exact apart from statistical uncertainties but require additional computation time, and first order perturbation theory approaches, which are efficient for not too large numbers of considered response functions but only applicable for sufficiently small nuclear data uncertainties. Some methods neglect isotopic composition uncertainties induced by the depletion steps of the simulations, others neglect neutron flux uncertainties, and the accuracy of a given approximation is often very hard to quantify. In order to get a better sense of the impact of different approximations, this work aims to compare results obtained based on different approximate methodologies with an exact method, namely the NUDUNA Monte Carlo based approach developed by AREVA GmbH. In addition, the impact ...
Propagation of Nuclear Data Uncertainties for ELECTRA Burn-up Calculations
Sjöstrand, H.; Alhassan, E.; Duan, J.; Gustavsson, C.; Koning, A. J.; Pomp, S.; Rochman, D.; Österlund, M.
2014-04-01
The European Lead-Cooled Training Reactor (ELECTRA) has been proposed as a training reactor for fast systems within the Swedish nuclear program. It is a low-power fast reactor cooled by pure liquid lead. In this work, we propagate the uncertainties in 239Pu transport data to uncertainties in the fuel inventory of ELECTRA during the reactor lifetime using the Total Monte Carlo (TMC) approach. Within the TENDL project, nuclear model input parameters were randomized within their uncertainties and 740 239Pu nuclear data libraries were generated. These libraries are used as inputs to reactor codes, in our case SERPENT, to perform uncertainty analysis of the nuclear reactor inventory during burn-up. The uncertainty in the inventory determines uncertainties in the long-term radio-toxicity, the decay heat, the evolution of reactivity parameters, the gas pressure and the volatile fission product content. In this work, a methodology called fast TMC is utilized, which reduces the overall calculation time. The uncertainty of some minor actinides was observed to be rather large, and their impact on multiple recycling should therefore be investigated further. It was also found that criticality benchmarks can be used to reduce inventory uncertainties due to nuclear data. Further studies are needed to include fission yield uncertainties, more isotopes, and a larger set of benchmarks.
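The core idea of fast TMC, separating the nuclear-data spread from the Monte Carlo statistical noise in the observed run-to-run variance, can be illustrated with synthetic numbers (all values below are hypothetical, not results from the ELECTRA study):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 740 runs, each with a different random 239Pu library
# (nuclear-data spread sigma_nd) plus Monte Carlo statistical noise sigma_stat.
n_runs, sigma_nd, sigma_stat = 740, 2.0e-3, 0.5e-3
k_true = 1.00000

library_effect = rng.normal(0.0, sigma_nd, n_runs)  # one shift per random library
mc_noise = rng.normal(0.0, sigma_stat, n_runs)      # per-run statistical noise
k_eff = k_true + library_effect + mc_noise

# Fast-TMC-style separation: the nuclear-data variance is the observed spread
# minus the (known) average statistical variance reported by the transport code.
var_nd = np.var(k_eff, ddof=1) - sigma_stat**2
print(f"recovered nuclear-data sigma: {np.sqrt(var_nd):.2e}")  # close to 2.0e-3
```

With 740 samples the recovered spread matches the injected 2.0e-3 to within a few percent, which is why relatively coarse statistics per run suffice.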
Energy Technology Data Exchange (ETDEWEB)
Mullor, R. [Dpto. Estadistica e Investigacion Operativa, Universidad Alicante (Spain); Sanchez, A., E-mail: aisanche@eio.upv.e [Dpto. Estadistica e Investigacion Operativa Aplicadas y Calidad, Universidad Politecnica Valencia, Camino de Vera s/n 46022 (Spain); Martorell, S. [Dpto. Ingenieria Quimica y Nuclear, Universidad Politecnica Valencia (Spain); Martinez-Alzamora, N. [Dpto. Estadistica e Investigacion Operativa Aplicadas y Calidad, Universidad Politecnica Valencia, Camino de Vera s/n 46022 (Spain)
2011-06-15
Safety-related systems performance optimization is classically based on quantifying the effects that testing and maintenance activities have on reliability and cost (R+C). However, R+C quantification is often incomplete in the sense that important uncertainties may not be considered. A substantial number of studies considering uncertainties in R+C-based optimization have been published in the last decade. They have demonstrated that including uncertainties in the optimization gives the decision maker insight into how uncertain the R+C results are and how this uncertainty matters, as it can change the outcome of the decision-making process. Several methods of uncertainty propagation based on the theory of tolerance regions have been proposed in the literature, depending on the particular characteristics of the output variables and their relations. In this context, this paper focuses on the application of non-parametric and parametric methods to analyze uncertainty propagation, implemented on a multi-objective optimization problem where reliability and cost act as decision criteria and maintenance intervals act as decision variables. Finally, a comparison of the results of these applications and the conclusions obtained are presented.
Directory of Open Access Journals (Sweden)
Wansik Yu
2016-01-01
The common approach to quantifying precipitation forecast uncertainty is ensemble simulation, where a numerical weather prediction (NWP) model is run for a number of cases with slightly different initial conditions. In practice, the spread of ensemble members in terms of flood discharge is used as a measure of forecast uncertainty due to uncertain precipitation forecasts. This study presents the propagation of rainfall forecast uncertainty into the hydrological response at catchment scale through distributed rainfall-runoff modeling based on the forecasted ensemble rainfall of an NWP model. First, the forecast rainfall error, based on the BIAS, is compared with the flood forecast error to assess the error propagation. Second, the variability of flood forecast uncertainty with catchment scale is discussed using the ensemble spread. We then also assess the flood forecast uncertainty with catchment scale using a regression equation estimated between ensemble rainfall BIAS and discharge BIAS. Finally, the flood forecast uncertainty in terms of RMSE using specific discharge at catchment scale is discussed. Our study is carried out and verified using the largest flood event, caused by typhoon "Talas" of 2011, over the 33 subcatchments of the Shingu river basin (2,360 km2), which is located in the Kii Peninsula, Japan.
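The BIAS, RMSE and ensemble-spread measures this kind of study relies on can be sketched in a few lines; the numbers below are purely illustrative, not data from the Shingu basin:

```python
import numpy as np

def bias(forecast, observed):
    """Mean error of the forecast relative to observations."""
    return float(np.mean(np.asarray(forecast) - np.asarray(observed)))

def rmse(forecast, observed):
    """Root-mean-square error of the forecast."""
    return float(np.sqrt(np.mean((np.asarray(forecast) - np.asarray(observed)) ** 2)))

def ensemble_spread(members):
    """Standard deviation across ensemble members, averaged over time steps."""
    return float(np.mean(np.std(members, axis=0, ddof=1)))

# Illustrative discharge series: 3 ensemble members over 4 time steps.
members = np.array([[10., 12., 14., 16.],
                    [11., 13., 15., 17.],
                    [ 9., 11., 13., 15.]])
obs = np.array([10., 12., 14., 16.])

print(bias(members.mean(axis=0), obs))   # 0.0  (ensemble mean is unbiased here)
print(rmse(members.mean(axis=0), obs))   # 0.0
print(ensemble_spread(members))          # 1.0  (spread = forecast uncertainty proxy)
```

Comparing rainfall BIAS against discharge BIAS per subcatchment, as the study does, amounts to evaluating these statistics at each catchment scale.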
Parker, Jack C.; Park, Eungyu; Tang, Guoping
2008-11-01
A vertically-integrated analytical model for dissolved phase transport is described that considers a time-dependent DNAPL source based on the upscaled dissolution kinetics model of Parker and Park with extensions to consider time-dependent source zone biodecay, partial source mass reduction, and remediation-enhanced source dissolution kinetics. The model also considers spatial variability in aqueous plume decay, which is treated as the sum of aqueous biodecay and volatilization due to diffusive transport and barometric pumping through the unsaturated zone. The model is implemented in Excel/VBA coupled with (1) an inverse solution that utilizes prior information on model parameters and their uncertainty to condition the solution, and (2) an error analysis module that computes parameter covariances and total prediction uncertainty due to regression error and parameter uncertainty. A hypothetical case study is presented to evaluate the feasibility of calibrating the model from limited noisy field data. The results indicate that prediction uncertainty increases significantly over time following calibration, primarily due to propagation of parameter uncertainty. However, differences between the predicted performance of source zone partial mass reduction and the known true performance were reasonably small. Furthermore, a clear difference is observed between the predicted performance for the remedial action scenario versus that for a no-action scenario, which is consistent with the true system behavior. The results suggest that the model formulation can be effectively utilized to assess monitored natural attenuation and source remediation options if careful attention is given to model calibration and prediction uncertainty issues.
Parker, Jack C; Park, Eungyu; Tang, Guoping
2008-11-14
A vertically-integrated analytical model for dissolved phase transport is described that considers a time-dependent DNAPL source based on the upscaled dissolution kinetics model of Parker and Park with extensions to consider time-dependent source zone biodecay, partial source mass reduction, and remediation-enhanced source dissolution kinetics. The model also considers spatial variability in aqueous plume decay, which is treated as the sum of aqueous biodecay and volatilization due to diffusive transport and barometric pumping through the unsaturated zone. The model is implemented in Excel/VBA coupled with (1) an inverse solution that utilizes prior information on model parameters and their uncertainty to condition the solution, and (2) an error analysis module that computes parameter covariances and total prediction uncertainty due to regression error and parameter uncertainty. A hypothetical case study is presented to evaluate the feasibility of calibrating the model from limited noisy field data. The results indicate that prediction uncertainty increases significantly over time following calibration, primarily due to propagation of parameter uncertainty. However, differences between the predicted performance of source zone partial mass reduction and the known true performance were reasonably small. Furthermore, a clear difference is observed between the predicted performance for the remedial action scenario versus that for a no-action scenario, which is consistent with the true system behavior. The results suggest that the model formulation can be effectively utilized to assess monitored natural attenuation and source remediation options if careful attention is given to model calibration and prediction uncertainty issues. PMID:18502537
Development of analytical orbit propagation technique with drag
1979-01-01
Two orbit computation methods were used: (1) a numerical method, in which the satellite differential equations were solved step by step using a mathematical algorithm taken from numerical analysis; and (2) an analytical method, in which the solution was expressed by explicit functions of the independent variable. Analytical drag modules, a tesseral-terms initialization module, a second-order and long-period terms module, and verification testing of the ASOP program were also considered.
Propagation of statistical and nuclear data uncertainties in Monte Carlo burn-up calculations
Energy Technology Data Exchange (ETDEWEB)
Garcia-Herranz, Nuria [Departamento de Ingenieria Nuclear, Universidad Politecnica de Madrid, UPM (Spain)], E-mail: nuria@din.upm.es; Cabellos, Oscar [Departamento de Ingenieria Nuclear, Universidad Politecnica de Madrid, UPM (Spain); Sanz, Javier [Departamento de Ingenieria Energetica, Universidad Nacional de Educacion a Distancia, UNED (Spain); Juan, Jesus [Laboratorio de Estadistica, Universidad Politecnica de Madrid, UPM (Spain); Kuijper, Jim C. [NRG - Fuels, Actinides and Isotopes Group, Petten (Netherlands)
2008-04-15
Two methodologies to propagate the uncertainties on the nuclide inventory in combined Monte Carlo spectrum and burn-up calculations are presented, based on sensitivity/uncertainty and random sampling techniques (uncertainty Monte Carlo method). Both enable the assessment of the impact of uncertainties in the nuclear data as well as uncertainties due to the statistical nature of the Monte Carlo neutron transport calculation. The methodologies are implemented in our MCNP-ACAB system, which combines the neutron transport code MCNP-4C and the inventory code ACAB. A high burn-up benchmark problem is used to test the MCNP-ACAB performance in inventory predictions, with no uncertainties. A good agreement is found with the results of other participants. This benchmark problem is also used to assess the impact of nuclear data uncertainties and statistical flux errors in high burn-up applications. A detailed calculation is performed to evaluate the effect of cross-section uncertainties on the inventory prediction, taking into account the temporal evolution of the neutron flux level and spectrum. Very large uncertainties are found at the unusually high burn-up of this exercise (800 MWd/kgHM). To compare the impact of the statistical errors in the calculated flux with that of the cross-section uncertainties, a simplified problem is considered, taking a constant neutron flux level and spectrum. It is shown that, provided the flux statistical deviations in the Monte Carlo transport calculation do not exceed a given value, the effect of the flux errors on the calculated isotopic inventory is negligible (even at very high burn-up) compared to the effect of the large cross-section uncertainties present in current data files.
International Nuclear Information System (INIS)
This thesis presents a comprehensive sensitivity/uncertainty analysis of reactor performance parameters (e.g. the k-effective) with respect to the basic nuclear data from which they are computed. The analysis starts at the fundamental step: the Evaluated Nuclear Data File and the uncertainties inherently associated with the data it contains, available in the form of variance/covariance matrices. We show that when a methodical and consistent computation of sensitivity is performed, conventional deterministic formalisms can be sufficient to propagate nuclear data uncertainties with the level of accuracy obtained by the most advanced tools, such as state-of-the-art Monte Carlo codes. By applying our methodology to three exercises proposed by the OECD (Uncertainty Analysis for Criticality Safety Assessment Benchmarks), we provide insight into the underlying physical phenomena associated with the formalisms used. (author)
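The deterministic propagation these formalisms rest on is the first-order "sandwich" rule, var(R) = S C S^T, combining a sensitivity vector with a variance/covariance matrix. A minimal sketch with made-up sensitivities and covariances (not values from the thesis):

```python
import numpy as np

# Relative sensitivities of a response R (e.g. k-effective) to three
# nuclear-data parameters: dR/R per unit dp/p. Illustrative numbers only.
S = np.array([[0.9, -0.3, 0.1]])

# Relative variance/covariance matrix of those parameters (illustrative).
C = np.array([[4.0e-4, 1.0e-4, 0.0],
              [1.0e-4, 9.0e-4, 0.0],
              [0.0,    0.0,    1.0e-4]])

# Sandwich rule: the propagated relative variance of the response.
var_R = (S @ C @ S.T).item()
print(f"relative uncertainty on R: {np.sqrt(var_R):.4f}")
```

Note how the off-diagonal covariance term partially cancels the diagonal contributions here (the two sensitivities have opposite signs), which is exactly why full covariance matrices, not just variances, are needed.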
International Nuclear Information System (INIS)
The analytical uncertainties of the methodology used to simulate the processes that determine the final isotopic inventory of spent fuel are analysed; the ARIANE experiment covers the burnup-simulation part.
Analytical Model for Fictitious Crack Propagation in Concrete Beams
DEFF Research Database (Denmark)
Ulfkjær, J. P.; Krenk, Steen; Brincker, Rune
1995-01-01
An analytical model for load-displacement curves of concrete beams is presented. The load-displacement curve is obtained by combining two simple models. The fracture is modeled by a fictitious crack in an elastic layer around the midsection of the beam. Outside the elastic layer the deformations are modeled by beam theory. The state of stress in the elastic layer is assumed to depend bilinearly on local elongation, corresponding to a linear softening relation for the fictitious crack. Results from the analytical model are compared with results from a more detailed model based on numerical methods for different beam sizes. The analytical model is shown to be in agreement with the numerical results if the thickness of the elastic layer is taken as half the beam depth. It is shown that the point on the load-displacement curve where the fictitious crack starts to develop and the point where the real crack starts to grow always correspond to the same bending moment.
Analytical and numerical methods for wave propagation in fluid media
Murawski, K
2002-01-01
This book surveys analytical and numerical techniques appropriate to the description of fluid motion, with an emphasis on the most widely used techniques exhibiting the best performance. Analytical and numerical solutions to hyperbolic systems of wave equations are the primary focus of the book. In addition, many interesting wave phenomena in fluids are considered using examples such as acoustic waves, the emission of air pollutants, magnetohydrodynamic waves in the solar corona, solar wind interaction with the planet Venus, and ion-acoustic solitons.
Propagation of nuclear data uncertainties for ELECTRA burn-up calculations
Sjöstrand, H.; Duan, J.; Gustavsson, C.; Koning, A.; Pomp, S.; Rochman, D.; Österlund, M.
2013-01-01
The European Lead-Cooled Training Reactor (ELECTRA) has been proposed as a training reactor for fast systems within the Swedish nuclear program. It is a low-power fast reactor cooled by pure liquid lead. In this work, we propagate the uncertainties in Pu-239 transport data to uncertainties in the fuel inventory of ELECTRA during the reactor life using the Total Monte Carlo approach (TMC). Within the TENDL project the nuclear models input parameters were randomized within their uncertainties and 740 Pu-239 nuclear data libraries were generated. These libraries are used as inputs to reactor codes, in our case SERPENT, to perform uncertainty analysis of nuclear reactor inventory during burn-up. The uncertainty in the inventory determines uncertainties in: the long-term radio-toxicity, the decay heat, the evolution of reactivity parameters, gas pressure and volatile fission product content. In this work, a methodology called fast TMC is utilized, which reduces the overall calculation time. The uncertainty in the ...
International Nuclear Information System (INIS)
The report describes how to use the codes: MUP (Monte Carlo Uncertainty Propagation) for uncertainty analysis by Monte Carlo simulation, including correlation analysis, extreme value identification and the study of selected ranges of the variable space; CEC-DES (Central Composite Design) for building experimental matrices according to the requirements of Central Composite and Factorial Experimental Designs; and STRADE (Stratified Random Design) for experimental designs based on Latin Hypercube Sampling techniques. Application fields of the codes are probabilistic risk assessment, experimental design, sensitivity analysis and system identification problems.
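A minimal Latin Hypercube Sampling routine of the kind STRADE implements might look like this; it is a generic sketch of the technique, not the STRADE code itself:

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, rng=None):
    """Basic Latin Hypercube Sample on [0, 1)^d: each variable's range is
    split into n_samples equal strata and each stratum is hit exactly once."""
    rng = np.random.default_rng(rng)
    # one random point inside each stratum, then an independent permutation
    # of the strata for every dimension
    u = (np.arange(n_samples)[:, None] + rng.random((n_samples, n_dims))) / n_samples
    for j in range(n_dims):
        rng.shuffle(u[:, j])
    return u

samples = latin_hypercube(10, 2, rng=42)
# Stratification check: exactly one sample falls in each decile per dimension.
print(np.sort((samples * 10).astype(int), axis=0)[:, 0])  # [0 1 2 3 4 5 6 7 8 9]
```

Compared with plain random sampling, the stratification guarantees the marginal distributions are covered evenly even with few samples, which is the point of using LHS in uncertainty propagation.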
Analytical solution to an investment problem under uncertainties with shocks
Cl\\'audia Nunes; Rita Pimentel
2015-01-01
We derive the optimal investment decision in a project where both demand and investment costs are stochastic processes, possibly subject to shocks. We extend the approach used in Dixit and Pindyck (1994), chapter 6.5, to deal with two sources of uncertainty, but assume that the underlying processes are no longer geometric Brownian diffusions but rather jump-diffusion processes. For the class of isoelastic functions addressed in this paper, it is still possible to derive a closed exp...
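A jump-diffusion process of the kind assumed here (geometric Brownian motion plus Poisson-arriving lognormal shocks) can be simulated as a numerical sanity check; all parameter values below are illustrative, not taken from the paper:

```python
import numpy as np

def simulate_jump_diffusion(x0, mu, sigma, lam, jump_mean, jump_sd,
                            t_max, n_steps, rng=None):
    """Euler simulation of a GBM with compound Poisson jumps:
    between jumps, dX/X = mu dt + sigma dW; jumps multiply X by exp(Z),
    Z ~ Normal(jump_mean, jump_sd), arriving at Poisson rate lam."""
    rng = np.random.default_rng(rng)
    dt = t_max / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    for i in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))
        n_jumps = rng.poisson(lam * dt)
        jump = np.exp(rng.normal(jump_mean, jump_sd, n_jumps).sum())
        x[i + 1] = x[i] * np.exp((mu - 0.5 * sigma**2) * dt + sigma * dw) * jump
    return x

path = simulate_jump_diffusion(100.0, 0.03, 0.2, 0.5, -0.1, 0.05,
                               t_max=10.0, n_steps=500, rng=1)
print(path[-1] > 0)  # True: the multiplicative dynamics keep X strictly positive
```

Monte Carlo paths like this are useful for checking a closed-form valuation against simulated expectations.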
Non-parametric order statistics method applied to uncertainty propagation in fuel rod calculations
International Nuclear Information System (INIS)
Advances in modeling fuel rod behavior and accumulations of adequate experimental data have made possible the introduction of quantitative methods to estimate the uncertainty of predictions made with best-estimate fuel rod codes. The uncertainty range of the input variables is characterized by a truncated distribution which is typically a normal, lognormal, or uniform distribution. While the distribution for fabrication parameters is defined to cover the design or fabrication tolerances, the distribution of modeling parameters is inferred from the experimental database consisting of separate effects tests and global tests. The final step of the methodology uses a Monte Carlo type of random sampling of all relevant input variables and performs best-estimate code calculations to propagate these uncertainties in order to evaluate the uncertainty range of outputs of interest for design analysis, such as internal rod pressure and fuel centerline temperature. The statistical method underlying this Monte Carlo sampling is non-parametric order statistics, which is perfectly suited to evaluate quantiles of populations with unknown distribution. The application of this method is straightforward in the case of one single fuel rod, when a 95/95 statement is applicable: 'with a probability of 95% and confidence level of 95% the values of output of interest are below a certain value'. Therefore, the 0.95-quantile is estimated for the distribution of all possible values of one fuel rod with a statistical confidence of 95%. On the other hand, a more elaborate procedure is required if all the fuel rods in the core are being analyzed. In this case, the aim is to evaluate the following global statement: with 95% confidence level, the expected number of fuel rods which are not exceeding a certain value is all the fuel rods in the core except only a few fuel rods. In both cases, the thresholds determined by the analysis should be below the safety acceptable design limit. An indirect
Analytical Model for Fictitious Crack Propagation in Concrete Beams
DEFF Research Database (Denmark)
Ulfkjær, J. P.; Krenk, S.; Brincker, Rune
An analytical model for load-displacement curves of unreinforced notched and un-notched concrete beams is presented. The load-displacement curve is obtained by combining two simple models. The fracture is modelled by a fictitious crack in an elastic layer around the mid-section of the beam. Outside the elastic layer the deformations are modelled by Timoshenko beam theory. The state of stress in the elastic layer is assumed to depend bi-linearly on local elongation, corresponding to a linear softening relation for the fictitious crack. Results from the analytical model are compared for different beam sizes. It is shown that the point on the load-displacement curve where the fictitious crack starts to develop and the point where the real crack starts to grow always correspond to the same bending moment. Closed-form solutions for the maximum size of the fracture zone and the minimum slope of the load-displacement curve are given.
Development and depletion code surrogate models for uncertainty propagation in scenario studies
International Nuclear Information System (INIS)
Transition scenario studies are necessary to compare different options for the evolution of the reactor fleet. The COSI code, developed by CEA, is used to perform scenario calculations. It can model any fuel type, reactor fleet and fuel facility, and tracks U, Pu, minor actinide and fission product nuclides over large time scales. COSI is coupled with the CESAR code, which performs the depletion calculations based on one-group cross-section libraries and nuclear data. Different types of uncertainties have an impact on scenario studies: nuclear data and scenario assumptions. It is therefore necessary to evaluate their impact on the major scenario results. The methodology adopted to propagate these uncertainties throughout the scenario calculations is a stochastic approach. Considering the number of inputs to be sampled in order to perform a stochastic calculation of the propagated uncertainty, it appears necessary to reduce the calculation time. Given that evolution calculations represent approximately 95% of the total scenario simulation time, an optimization can be made by developing and implementing a library of surrogate models of CESAR in COSI. The input parameters of CESAR are sampled with URANIE, the CEA uncertainty platform, and for every sample the isotopic composition after evolution evaluated with CESAR is stored. Statistical analysis of the input and output tables then makes it possible to model the behaviour of CESAR on each CESAR library, i.e. to build a surrogate model. Several quality tests are performed on each surrogate model to ensure that its predictive power is satisfactory. Afterwards, a new routine implemented in COSI reads these surrogate models and uses them in place of CESAR calculations. A preliminary study of the gain in calculation time shows that the use of surrogate models makes a stochastic calculation of the propagated uncertainty feasible. Once the set of surrogate models is complete, one of the first expected results will be the
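The surrogate workflow described here (sample the inputs, fit a cheap model, verify its predictive quality before substituting it for the expensive code) can be sketched generically. The toy "expensive model" below merely stands in for a depletion solver such as CESAR and is unrelated to it:

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in for an expensive solver: one output from two sampled inputs.
def expensive_model(x):
    return 1.5 * x[:, 0] + 0.8 * x[:, 0] * x[:, 1] + 0.1 * x[:, 1] ** 2

x_train = rng.random((200, 2))      # sampled input table (URANIE's role)
y_train = expensive_model(x_train)  # stored output table

# Quadratic polynomial surrogate fitted by least squares.
def features(x):
    return np.column_stack([np.ones(len(x)), x[:, 0], x[:, 1],
                            x[:, 0] ** 2, x[:, 0] * x[:, 1], x[:, 1] ** 2])

coef, *_ = np.linalg.lstsq(features(x_train), y_train, rcond=None)

# Quality test on unseen samples before trusting the surrogate (Q2 score).
x_test = rng.random((100, 2))
y_test, y_pred = expensive_model(x_test), features(x_test) @ coef
q2 = 1.0 - np.sum((y_test - y_pred) ** 2) / np.sum((y_test - y_test.mean()) ** 2)
print(f"predictive Q2: {q2:.4f}")  # ~1.0 here: surrogate may replace the model
```

Only after such a validation step does substituting the surrogate into the stochastic loop make the thousands of required evaluations affordable.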
DEFF Research Database (Denmark)
Sin, Gürkan; Gernaey, Krist; Eliasson Lantz, Anna
2009-01-01
The uncertainty and sensitivity analyses are evaluated for their usefulness as part of model-building within Process Analytical Technology applications. A mechanistic model describing a batch cultivation of Streptomyces coelicolor for antibiotic production was used as a case study. The input ... compared to the large uncertainty observed in the antibiotic and off-gas CO2 predictions. The output uncertainty was observed to be lower during the exponential growth phase and higher in the stationary and death phases, meaning the model describes some periods better than others. To understand which input parameters are responsible for the output uncertainty, three sensitivity methods (Standardized Regression Coefficients, Morris and differential analysis) were evaluated and compared. The results from these methods were mostly in agreement with each other and revealed that only a few parameters...
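Standardized Regression Coefficients, one of the three sensitivity methods compared, can be computed directly from a Monte Carlo sample; the inputs and model below are synthetic, not the cultivation case study:

```python
import numpy as np

rng = np.random.default_rng(7)

# Monte Carlo sample of 4 hypothetical inputs; the output depends strongly
# on the first two inputs only, plus a little observation noise.
x = rng.normal(size=(1000, 4))
y = 3.0 * x[:, 0] - 1.0 * x[:, 1] + 0.01 * x[:, 2] + rng.normal(0, 0.1, 1000)

# SRC_j: regress y on all inputs, then scale each coefficient by
# std(x_j) / std(y); |SRC_j| ranks input importance on a common scale.
A = np.column_stack([np.ones(len(x)), x])
b, *_ = np.linalg.lstsq(A, y, rcond=None)
src = b[1:] * x.std(axis=0, ddof=1) / y.std(ddof=1)

print(np.argsort(-np.abs(src))[:2])  # the two dominant inputs: indices 0 and 1
```

When the model is close to linear over the sampled range, the squared SRCs approximately partition the output variance, which is what makes the ranking interpretable.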
Nuclear data uncertainty propagation in a lattice physics code using stochastic sampling
International Nuclear Information System (INIS)
A methodology is presented for 'black box' nuclear data uncertainty propagation in a lattice physics code using stochastic sampling. The methodology has 4 components: i) processing nuclear data variance/covariance matrices including converting the native group structure to a group structure 'compatible' with the lattice physics code, ii) generating (relative) random samples of nuclear data, iii) perturbing the lattice physics code nuclear data according to the random samples, and iv) analyzing the distribution of outputs to estimate the uncertainty. The scheme is described as implemented at PSI, in a modified version of the lattice physics code CASMO-5M, including all relevant practical details. Uncertainty results are presented for a BWR pin-cell at hot zero power conditions and a PWR assembly at hot full power conditions with depletion. Results are presented for uncertainties in eigenvalue, 1-group microscopic cross sections, 2-group macroscopic cross sections, and isotopics. Interesting behavior is observed with burnup, including a minimum uncertainty due to the presence of fertile U-238 and a global effect described as 'synergy', observed when comparing the uncertainty resulting from simultaneous and one-at-a-time variations of nuclear data. (authors)
Propagation of systematic uncertainty due to data reduction in transmission measurement of iron
International Nuclear Information System (INIS)
A technique of determinantal inequalities to estimate the bounds of statistical and systematic uncertainties in neutron cross-section measurements has been developed. In the measurement of neutron cross sections, correlation is manifested through the measurement process and through many systematic components such as the geometrical factor, half-life, back scattering, etc. However, the propagation of experimental uncertainties through the reduction of cross-section data is itself a complicated procedure and has been attracting attention in recent times. In this paper, the concept of determinantal inequalities is applied to a transmission measurement of the iron cross section, demonstrating how the systematic uncertainty dominates over the statistical one in such data-reduction procedures, and estimating their individual bounds. (author). 2 refs., 1 tab
Uncertainty propagation in a 3-D thermal code for performance assessment of a nuclear waste disposal
International Nuclear Information System (INIS)
Given the very large time scale involved, the performance assessment of a nuclear waste repository requires numerical modelling. Because we are uncertain of the exact values of the input parameters, we have to analyse the impact of these uncertainties on the outcome of the physical models. The EDF Research and Development Division has developed a reliability method to propagate these uncertainties, or variability, through models; it requires far fewer physical simulations than the usual simulation methods. We apply the reliability method MEFISTO to a base case modelling the heat transfers in a virtual disposal at the future site of the French underground research laboratory in the east of France. This study is carried out in collaboration with ANDRA, the French nuclear waste management agency. With this exercise, we want to evaluate the thermal behaviour of a concept with respect to the variation of physical parameters and their uncertainty. (author)
International Nuclear Information System (INIS)
This PhD study is in the field of nuclear energy, the back end of the nuclear fuel cycle and uncertainty calculations. CEA must design the prototype ASTRID, a sodium-cooled fast reactor (SFR) and one of the concepts selected by the Generation IV forum, for which the calculation of the value and uncertainty of the decay heat has a significant impact. In this study, a code for propagating nuclear data uncertainties on the decay heat in SFRs is developed. The work took place in three stages. The first step limited the number of parameters involved in the calculation of the decay heat. For this, a decay heat experiment on the PHENIX reactor (PUIREX 2008) was studied to validate the DARWIN package experimentally for SFRs and to quantify the source terms of the decay heat. The second step aimed to develop a code for propagating uncertainties: CyRUS (Cycle Reactor Uncertainty and Sensitivity). A deterministic propagation method was chosen because the calculations are fast and reliable. Assumptions of linearity and normality have been validated theoretically. The code was also successfully compared with a stochastic code on the example of the thermal burst fission curve of 235U. The last part was an application of the code to several experiments: the decay heat of a reactor, the isotopic composition of a fuel pin and the burst fission curve of 235U. The code demonstrated the possibility of feedback on the nuclear data driving the uncertainty of this problem. Two main results were highlighted. First, the simplifying assumptions of deterministic codes are compatible with a precise calculation of the uncertainty of the decay heat. Second, the developed method is intrusive and allows feedback on nuclear data from experiments on the back end of the nuclear fuel cycle. In particular, this study showed how important it is to measure independent fission yields precisely, along with their covariance matrices, in order to improve the accuracy of the calculation of the
Cosmin Sandric, Ionut; Chitu, Zenaida; Jurchescu, Marta; Malet, Jean-Philippe; Ciprian Margarint, Mihai; Micu, Mihai
2015-04-01
An increasing number of free and open-access global digital elevation models (DEMs) have become available in the past 15 years, and these DEMs have been widely used for the assessment of landslide susceptibility at medium and small scales. Even though the global vertical and horizontal accuracies of each DEM are known, what is still unknown is the uncertainty that propagates from the first and second derivatives of DEMs, such as slope gradient, into the final landslide susceptibility map. For the present study we focused on assessing the uncertainty propagated from the following digital elevation models: SRTM at 90 m spatial resolution, ASTERDEM at 30 m spatial resolution, EUDEM at 30 m spatial resolution, and the latest release of SRTM at 30 m spatial resolution. From each DEM dataset the slope gradient was generated and used in the landslide susceptibility analysis. A restricted set of spatial predictors is used for landslide susceptibility assessment, represented by lithology, land cover and slope, where slope is the only predictor that changes with each DEM. The study makes use of the first national landslide inventory (Micu et al., 2014), obtained by compiling literature data and personal or institutional landslide inventories. The landslide inventory contains more than 27,900 cases classified into three main categories: slides, flows and falls. The results present landslide susceptibility maps obtained from each DEM and from combinations of DEM datasets. Maps of uncertainty propagation at country level, differentiated by topographic regions of Romania and by landslide typology (slides, flows and falls), are obtained for each DEM dataset and for combinations of these. An objective evaluation of each DEM dataset and a final map of landslide susceptibility with its associated uncertainty are provided.
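A minimal sketch of the slope-propagation idea above, using synthetic stand-ins for the four DEM products (the terrain surface, noise levels and cell size are assumptions, not real SRTM/ASTERDEM/EUDEM data): compute the slope gradient from each DEM and take the per-pixel spread as the DEM-induced uncertainty of the slope predictor.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic terrain plus DEM-specific vertical noise: hypothetical stand-ins
# for the SRTM 90 m, ASTERDEM, EUDEM and SRTM 30 m products.
x, y = np.meshgrid(np.linspace(0, 1000, 50), np.linspace(0, 1000, 50))
terrain = 200 * np.exp(-((x - 500) ** 2 + (y - 500) ** 2) / 2e5)
dems = {name: terrain + rng.normal(0, sigma, terrain.shape)
        for name, sigma in [("srtm90", 6.0), ("asterdem", 9.0),
                            ("eudem", 5.0), ("srtm30", 4.0)]}

def slope_deg(dem, cellsize=20.0):
    """Slope gradient in degrees from central differences."""
    dzdy, dzdx = np.gradient(dem, cellsize)
    return np.degrees(np.arctan(np.hypot(dzdx, dzdy)))

# The slope predictor differs between DEM sources; its per-pixel spread is
# the component of susceptibility uncertainty attributable to DEM choice.
slopes = np.stack([slope_deg(d) for d in dems.values()])
slope_uncertainty = slopes.std(axis=0)
```

In a full workflow the spread would be carried through the susceptibility model rather than reported on the slope raster alone.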
Myers, Casey A; Laz, Peter J; Shelburne, Kevin B; Davidson, Bradley S
2015-05-01
Uncertainty that arises from measurement error and parameter estimation can significantly affect the interpretation of musculoskeletal simulations; however, these effects are rarely addressed. The objective of this study was to develop an open-source probabilistic musculoskeletal modeling framework to assess how measurement error and parameter uncertainty propagate through a gait simulation. A baseline gait simulation was performed for a male subject using OpenSim in three stages: inverse kinematics, inverse dynamics, and muscle force prediction. A series of Monte Carlo simulations was performed that considered intrarater variability in marker placement, movement artifacts in each phase of gait, variability in body segment parameters, and variability in muscle parameters calculated from cadaveric investigations. Propagation of uncertainty was performed by using the output distributions from one stage as input distributions to subsequent stages. Confidence bounds (5-95%) and the sensitivity of outputs to model input parameters were calculated throughout the gait cycle. The combined impact of uncertainty resulted in mean bounds that ranged from 2.7° to 6.4° in joint kinematics, 2.7 to 8.1 N m in joint moments, and 35.8 to 130.8 N in muscle forces. The impact of movement artifact was 1.8 times larger than that of any other propagated source. Sensitivity to specific body segment parameters and muscle parameters was linked to where in the gait cycle they were calculated. We anticipate that through the increased use of probabilistic tools, researchers will better understand the strengths and limitations of their musculoskeletal simulations and more effectively use simulations to evaluate hypotheses and inform clinical decisions. PMID:25404535
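The staged propagation described above, feeding the output distribution of one stage into the next, can be sketched with a toy three-stage chain. All magnitudes (angles, masses, moment arms) are hypothetical placeholders, not OpenSim outputs.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 5000   # Monte Carlo samples

# Stage 1 (inverse kinematics): a joint angle with marker-placement noise
# (all magnitudes below are hypothetical placeholders; angle in degrees).
angle = rng.normal(20.0, 2.0, N)

# Stage 2 (inverse dynamics): the whole stage-1 *distribution* is carried
# forward, combined with body-segment-parameter uncertainty.
seg_mass = rng.normal(8.0, 0.4, N)                           # kg
moment = seg_mass * 9.81 * 0.25 * np.sin(np.radians(angle))  # N m

# Stage 3 (muscle force prediction): stage-2 output plus muscle-parameter
# uncertainty (moment arm in metres).
moment_arm = rng.normal(0.05, 0.004, N)
force = moment / moment_arm

lo, hi = np.percentile(force, [5, 95])   # 5-95 % confidence bounds
```

Because each stage consumes full sample distributions rather than nominal values, correlations induced by shared upstream inputs are preserved automatically.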
Rodríguez-Rincón, J. P.; Pedrozo-Acuña, A.; Breña-Naranjo, J. A.
2015-07-01
This investigation aims to study the propagation of meteorological uncertainty within a cascade modelling approach to flood prediction. The methodology comprised a numerical weather prediction (NWP) model, a distributed rainfall-runoff model and a 2-D hydrodynamic model. The uncertainty evaluation was carried out at the meteorological and hydrological levels of the model chain, which enabled the investigation of how errors originating in the rainfall prediction interact at the catchment level and propagate to the estimated inundation area and depth. For this, a hindcast scenario was utilised, removing non-behavioural ensemble members at each stage based on the fit with observed data. At the hydrodynamic level, an uncertainty assessment was not incorporated; instead, the model was set up following guidelines for the best possible representation of the case study. The selected extreme event corresponds to a flood that took place in the southeast of Mexico during November 2009, for which field data (e.g. rain gauges; discharge) and satellite imagery were available. Uncertainty in the meteorological model was estimated by means of a multi-physics ensemble technique, designed to represent errors stemming from our limited knowledge of the processes generating precipitation. In the hydrological model, a multi-response validation was implemented through the definition of six sets of plausible parameters from past flood events. Precipitation fields from the meteorological model were employed as input to the distributed hydrological model, and the resulting flood hydrographs were used as forcing conditions in the 2-D hydrodynamic model. The evolution of skill within the model cascade shows a complex aggregation of errors between models, suggesting that in valley-filling events hydro-meteorological uncertainty has a larger effect on inundation depths than on estimated flood inundation extents.
Fuzzy probability based fault tree analysis to propagate and quantify epistemic uncertainty
International Nuclear Information System (INIS)
Highlights: • Fuzzy probability based fault tree analysis (FPFTA) is proposed to evaluate epistemic uncertainty in fuzzy fault tree analysis. • Fuzzy probabilities represent the likelihoods of occurrence of all events in a fault tree. • A fuzzy multiplication rule quantifies the epistemic uncertainty of minimal cut sets. • A fuzzy complement rule estimates the epistemic uncertainty of the top event. • The proposed FPFTA successfully evaluated the U.S. Combustion Engineering RPS. - Abstract: A number of fuzzy fault tree analysis approaches, which integrate fuzzy concepts into the quantitative phase of conventional fault tree analysis, have been proposed to study the reliability of engineering systems. These new approaches apply expert judgments to overcome the limitation of conventional fault tree analysis when basic events do not have probability distributions. Since expert judgments may come with epistemic uncertainty, it is important to quantify the overall uncertainty of the fuzzy fault tree analysis. Monte Carlo simulation is commonly used to quantify the overall uncertainty of conventional fault tree analysis. However, since Monte Carlo simulation is based on probability distributions, this technique is not appropriate for fuzzy fault tree analysis, which is based on fuzzy probabilities. The objective of this study is to develop a fuzzy probability based fault tree analysis that overcomes this limitation of fuzzy fault tree analysis. To demonstrate the applicability of the proposed approach, a case study is performed and its results are compared to the results of a conventional fault tree analysis. The results confirm that the proposed fuzzy probability based fault tree analysis is able to propagate and quantify epistemic uncertainties in fault tree analysis.
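A small sketch of the two rules named in the highlights, using alpha-cut interval arithmetic on triangular fuzzy probabilities; the event values and cut sets are hypothetical, not the Combustion Engineering RPS data.

```python
import numpy as np

def alpha_cut(tri, a):
    """Interval of a triangular fuzzy number (low, mode, high) at level a."""
    l, m, u = tri
    return (l + a * (m - l), u - a * (u - m))

# Hypothetical fuzzy probabilities of basic events (triangular numbers)
events = [(0.01, 0.02, 0.04), (0.005, 0.01, 0.02), (0.02, 0.03, 0.05)]
cut_sets = [[0, 1], [2]]     # minimal cut sets, as indices of basic events

tops = {}
for a in (0.0, 0.5, 1.0):
    ivs = [alpha_cut(e, a) for e in events]
    # Fuzzy multiplication rule: AND of the events in each minimal cut set.
    # The product is monotone in each probability, so interval endpoints
    # map directly to endpoints.
    cs = [(np.prod([ivs[i][0] for i in c]), np.prod([ivs[i][1] for i in c]))
          for c in cut_sets]
    # Fuzzy complement rule for the top event: 1 - prod(1 - P(cut set)).
    tops[a] = (1 - np.prod([1 - lo for lo, _ in cs]),
               1 - np.prod([1 - hi for _, hi in cs]))
# tops[1.0] collapses to the modal value; lower membership levels give
# wider intervals, i.e. larger epistemic uncertainty on the top event.
```

The interval at each membership level plays the role that a confidence interval plays in the Monte Carlo treatment of conventional fault trees.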
Energy Technology Data Exchange (ETDEWEB)
Heo, Jaeseok, E-mail: jheo@kaeri.re.kr; Kim, Kyung Doo, E-mail: kdkim@kaeri.re.kr
2015-10-15
Highlights: • We developed an interface between an engineering simulation code and statistical analysis software. • Multiple packages of sensitivity analysis, uncertainty quantification, and parameter estimation algorithms are implemented in the framework. • Parallel computing algorithms are also implemented in the framework to solve multiple computational problems simultaneously. - Abstract: This paper introduces a statistical data analysis toolkit, PAPIRUS, designed to perform model calibration, uncertainty propagation, Chi-square linearity testing, and sensitivity analysis for both linear and nonlinear problems. PAPIRUS was developed by implementing multiple packages of methodologies and building an interface between an engineering simulation code and the statistical analysis algorithms. A parallel computing framework is implemented in PAPIRUS with multiple computing resources and proper communication between the server and the clients on each processor. It is shown that even when a large amount of data is considered in the engineering calculation, the distributions of the model parameters and the calculation results can be quantified accurately with significant reductions in computational effort. A general description of PAPIRUS with a graphical user interface is presented in Section 2. Sections 2.1–2.5 present the methodologies of data assimilation, uncertainty propagation, the Chi-square linearity test, and sensitivity analysis implemented in the toolkit, with some results obtained by each module of the software. Parallel computing algorithms adopted in the framework to solve multiple computational problems simultaneously are also summarized in the paper.
Wave-like warp propagation in circumbinary discs I. Analytic theory and numerical simulations
Facchini, Stefano; Lodato, Giuseppe; Price, Daniel J.
2013-01-01
In this paper we analyse the propagation of warps in protostellar circumbinary discs. We use these systems as a test environment in which to study warp propagation in the bending-wave regime, with the addition of an external torque due to the binary gravitational potential. In particular, we want to test the linear regime, for which an analytic theory has been developed. In order to do so, we first compute analytically the steady state shape of an inviscid disc subject to the binary torques. ...
Watson, Cameron S.; Carrivick, Jonathan; Quincey, Duncan
2015-10-01
Modelling glacial lake outburst floods (GLOFs), or 'jökulhlaups', necessarily involves the propagation of large and often stochastic uncertainties throughout the source-to-impact process chain. Since flood routing is primarily a function of the underlying topography, communication of digital elevation model (DEM) uncertainty should accompany such modelling efforts. Here, a new stochastic first-pass assessment technique was evaluated against an existing GIS-based model and an existing 1D hydrodynamic model, using three DEMs with different spatial resolutions. The analysis revealed the effect of DEM uncertainty and model choice on several flood parameters and on the prediction of socio-economic impacts. Our new model, which we call MC-LCP (Monte Carlo Least Cost Path) and which is distributed in the supplementary information, demonstrated enhanced 'stability' when compared to the two existing methods, and this 'stability' was independent of DEM choice. The MC-LCP model outputs an uncertainty continuum within its extent, from which relative socio-economic risk can be evaluated. In a comparison of all DEM and model combinations, results based on the Shuttle Radar Topography Mission (SRTM) DEM exhibited fewer artefacts than those based on the Advanced Spaceborne Thermal Emission and Reflection Radiometer Global Digital Elevation Model (ASTER GDEM), and were comparable to those based on a finer-resolution DEM derived from the Advanced Land Observing Satellite Panchromatic Remote-sensing Instrument for Stereo Mapping (ALOS PRISM). Overall, we contend that the variability we find between flood routing model results suggests that consideration of DEM uncertainty and pre-processing methods is important when assessing flow routing and when evaluating the potential socio-economic implications of a GLOF event. Incorporation of a stochastic variable provides an illustration of uncertainty that is important when modelling and communicating assessments of an inherently complex process.
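The Monte Carlo least-cost-path idea (perturb the DEM within its vertical error, re-run the routing, and accumulate an occupancy probability) can be sketched as follows. This is a simplified stand-in with a synthetic valley DEM, 4-connected Dijkstra routing and Gaussian vertical error, not the authors' distributed MC-LCP code.

```python
import heapq
import numpy as np

def least_cost_path(cost, start, end):
    """Dijkstra least-cost path on a 4-connected grid of positive costs."""
    n, m = cost.shape
    dist = np.full((n, m), np.inf)
    dist[start] = 0.0
    prev, pq = {}, [(0.0, start)]
    while pq:
        d, (i, j) = heapq.heappop(pq)
        if (i, j) == end:
            break
        if d > dist[i, j]:
            continue                      # stale queue entry
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < n and 0 <= nj < m and d + cost[ni, nj] < dist[ni, nj]:
                dist[ni, nj] = d + cost[ni, nj]
                prev[(ni, nj)] = (i, j)
                heapq.heappush(pq, (dist[ni, nj], (ni, nj)))
    path, cell = [end], end               # walk back from outlet to source
    while cell != start:
        cell = prev[cell]
        path.append(cell)
    return path

# Synthetic valley DEM sloping from row 0 to row 39, lowest along column 20
base = np.add.outer(np.linspace(30, 0, 40), np.abs(np.linspace(-5, 5, 40)))

rng = np.random.default_rng(3)
hits = np.zeros_like(base)
n_real = 100
for _ in range(n_real):                   # Monte Carlo DEM realizations
    dem = base + rng.normal(0.0, 0.5, base.shape)   # vertical DEM error
    for cell in least_cost_path(dem - dem.min() + 0.01, (0, 20), (39, 20)):
        hits[cell] += 1
prob = hits / n_real                      # occupancy: route likelihood map
```

`prob` is the "uncertainty continuum" in miniature: cells visited by the route in every realization score 1, cells never visited score 0.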
Nikolopoulos, Efthymios I.; Polcher, Jan; Anagnostou, Emmanouil N.; Eisner, Stephanie; Fink, Gabriel; Kallos, George
2016-04-01
Precipitation is arguably one of the most important forcing variables driving terrestrial water cycle processes. Precipitation exhibits significant variability in space and time, is associated with different water phases (liquid or solid) and depends on several other factors (aerosols, orography, etc.), which make estimating and modeling this process a particularly challenging task. As such, precipitation information from different sensors/products is associated with uncertainty. Propagation of this uncertainty into hydrologic simulations can have a considerable impact on the accuracy of the simulated hydrologic variables. Therefore, to make hydrologic predictions more useful, it is important to assess the impact of precipitation uncertainty on hydrologic simulations in order to quantify it and identify ways to minimize it. In this work we investigate the impact of precipitation uncertainty in hydrologic simulations using land surface models (e.g. ORCHIDEE) and global hydrologic models (e.g. WaterGAP3) for the simulation of several hydrologic variables (soil moisture, ET, runoff) over the Iberian Peninsula. Uncertainty in precipitation is assessed by utilizing various sources of precipitation input that include one reference precipitation dataset (SAFRAN), three widely used satellite precipitation products (TRMM 3B42v7, CMORPH, PERSIANN) and a state-of-the-art reanalysis product (WFDEI) based on the ECMWF ERA-Interim reanalysis. Comparative analysis uses the SAFRAN-based simulations as the reference and is carried out at different spatial (0.5 deg or regional average) and temporal (daily or seasonal) scales. Furthermore, as an independent verification, simulated discharge is compared against available discharge observations for selected major rivers of the Iberian region. Results allow us to draw conclusions regarding the impact of precipitation uncertainty with respect to i) hydrologic variable of interest, ii
International Nuclear Information System (INIS)
Part of the application for a license for a high-level radioactive waste repository is an assessment of repository performance over thousands of years, which will inevitably have to treat uncertainties. One source of uncertainty is in the conceptualization of the natural repository system. Generally, conceptual models are developed based on interpretation of existing data using expert judgment. Uncertainties in conceptual models, which are propagated through the performance assessment calculations, are introduced when simplifying assumptions are made about the behavior of the real system. These assumptions are made because the data, knowledge that is based on the data, and other information considered in the interpretation are incomplete. Additionally, any relationships that exist among the data are generally inexact or may be undefined. In this work, causal networks have been applied to the conceptual model development process. This representation of the conceptual model expresses existing knowledge about the real system in a graphical form and extracts the qualitative dependency relationships among the underlying data and assumptions. Strict probabilistic reasoning is used to quantitatively explore these relationships. This probabilistic network provides a means by which to quantify, propagate, and reduce the pervading uncertainty in a coherent probabilistic manner. The conceptualization of the Avra Valley regional ground water flow system in Arizona and the ground water flow system of the proposed high-level radioactive waste repository site at Yucca Mountain in Nevada have been investigated to develop a preliminary data base of important assumptions and their relationships. Based on the conceptual models for these sites, a prototype version of the probabilistic network for the development of conceptual models is under development on a microExplorer Lisp workstation. 9 refs., 1 fig., 1 tab
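The "strict probabilistic reasoning" over a causal network can be illustrated with a minimal enumeration example. The nodes and probabilities below are invented for illustration and are not taken from the Avra Valley or Yucca Mountain studies.

```python
# Minimal causal (Bayesian) network over a conceptual-model assumption.
# Nodes (hypothetical): F = "flow system is confined" assumption,
# H = "observed heads are consistent", T = "tracer ages are consistent".
p_f = 0.6                          # prior belief in the assumption
p_h = {True: 0.9, False: 0.3}      # P(H = true | F)
p_t = {True: 0.8, False: 0.4}      # P(T = true | F), H and T independent given F

# Strict probabilistic reasoning: P(F | H = true, T = true) by enumeration
num = p_f * p_h[True] * p_t[True]
den = num + (1 - p_f) * p_h[False] * p_t[False]
posterior = num / den              # belief in the assumption after the data
```

Updating each assumption node as data arrive is how such a network quantifies and reduces conceptual-model uncertainty in a coherent probabilistic manner.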
Wijnant, Ysbrand; Spiering, Ruud; Blijderveen, van Maarten; Boer, de André
2006-01-01
Previous research has shown that viscothermal wave propagation in narrow gaps can efficiently be described by means of the low reduced frequency model. For simple geometries and boundary conditions, analytical solutions are available. For example, Beltman [4] gives the acoustic pressure in the gap b
Uncertainty Propagation in a Fundamental Climate Data Record derived from Meteosat Visible Band Data
Rüthrich, Frank; John, Viju; Roebeling, Rob; Wagner, Sebastien; Viticchie, Bartolomeo; Hewison, Tim; Govaerts, Yves; Quast, Ralf; Giering, Ralf; Schulz, Jörg
2016-04-01
The series of Meteosat First Generation (MFG) satellites provides a unique opportunity for monitoring climate variability and possible changes. Six satellites were operationally employed, all equipped with identical MVIRI radiometers. The time series now covers, for some parts of the globe, more than 34 years with high temporal (30 minutes) and spatial (2.5 x 2.5 km²) resolution for the visible band. However, subtle differences between the radiometers in terms of the silicon photodiodes, sensor spectral ageing and variability due to other sources of uncertainty have so far limited the thorough exploitation of this unique time series. For instance, upper-level wind fields and surface albedo data records could be derived and used for assimilation into Numerical Weather Prediction models for re-analysis and for climate studies, respectively. However, the derivation of aerosol depth with high quality has not been possible so far. In order to enhance the quality of MVIRI reflectances and enable an aerosol data record and an improved surface albedo data record, it is necessary to perform a re-calibration of the visible bands of the MVIRI instruments that corrects for the above-mentioned effects and results in an improved Fundamental Climate Data Record (FCDR) of Meteosat/MVIRI radiance data. This re-calibration has to be consistent over the entire period, to consider the ageing of the sensors' spectral response functions and to add accurate information about the combined uncertainty of the radiances. Therefore, the uncertainties from all sources have to be thoroughly investigated and propagated into the final product. This presentation aims to introduce all sources of uncertainty present in MVIRI visible data and to point out the major mechanisms of uncertainty propagation. An outlook will be given on the enhancements of the calibration procedure as it will be carried out at EUMETSAT in the course of the EU Horizon 2020 FIDUCEO project (FIDelity and Uncertainty in Climate data
International Nuclear Information System (INIS)
The present document constitutes my Habilitation thesis report. It reviews my scientific activity over the last twelve years, from my PhD thesis to the work completed as a research engineer at CEA Cadarache. The two main chapters of this document correspond to two different research fields, both concerning the treatment of uncertainty in engineering problems. The first chapter is a synthesis of my work on high-frequency wave propagation in random media. It relates more specifically to the study of the statistical fluctuations of acoustic wave travel times in random and/or turbulent media. The new results mainly concern the introduction of the statistical anisotropy of the velocity field into the analytical expressions of the travel-time statistical moments as functions of those of the velocity field. This work was primarily driven by requirements in geophysics (oil exploration and seismology). The second chapter concerns the probabilistic techniques used to study the effect of input-variable uncertainties in numerical models. My main applications in this chapter relate to the nuclear engineering domain, which offers a large variety of uncertainty problems to be treated. First of all, a complete synthesis is carried out of the statistical methods of sensitivity analysis and global exploration of numerical models. The construction and use of a meta-model (an inexpensive mathematical function replacing an expensive computer code) are then illustrated by my work on the Gaussian process model (kriging). Two additional topics are finally approached: the estimation of high quantiles of a computer code output and the analysis of stochastic computer codes. We conclude this report with some perspectives on numerical simulation and the use of predictive models in industry. This context is extremely positive for future research and application developments. (author)
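The kriging / Gaussian process meta-model mentioned above can be sketched in a few lines of numpy. The kernel choice, hyperparameters and stand-in test function are assumptions for illustration, not the author's implementation.

```python
import numpy as np

def gp_predict(X, y, Xs, ell=0.4, sf=1.0, noise=1e-6):
    """Kriging / Gaussian process regression, squared-exponential kernel."""
    def k(a, b):
        d = a[:, None] - b[None, :]
        return sf ** 2 * np.exp(-0.5 * (d / ell) ** 2)
    K = k(X, X) + noise * np.eye(len(X))   # small nugget for conditioning
    Ks = k(Xs, X)
    mean = Ks @ np.linalg.solve(K, y)
    # Predictive variance: k(x*, x*) - k(x*, X) K^-1 k(X, x*)
    var = sf ** 2 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mean, np.maximum(var, 0.0)

# Stand-in for an expensive computer code
f = lambda x: np.sin(3 * x) + 0.5 * x

X = np.linspace(0, 2, 8)                   # 8 "expensive" runs
Xs = np.linspace(0, 2, 50)
mean, var = gp_predict(X, f(X), Xs)
# The meta-model interpolates the runs and reports its own uncertainty,
# so it can stand in for the code in Monte Carlo or sensitivity studies.
```

The predictive variance is what makes the kriging surrogate useful for uncertainty studies: it flags regions where the meta-model itself is unreliable.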
Institute of Scientific and Technical Information of China (English)
[No author listed]
2010-01-01
For a structural system with both random basic variables and fuzzy basic variables, the propagation of uncertainty from the two kinds of basic variables to the response of the structure is investigated. A novel algorithm for obtaining the membership function of fuzzy reliability is presented, based on a saddlepoint approximation (SA) based line sampling method. In the presented method, the value domain of the fuzzy basic variables under a given membership level is first obtained from their membership functions. Within this value domain, bounds on the reliability of the structural response satisfying the safety requirement are obtained by employing the SA based line sampling method in the reduced space of the random variables. In this way the uncertainty of the basic variables is propagated to the safety measure of the structure, and the fuzzy membership function of the reliability is obtained. Compared to the direct Monte Carlo method for propagating the uncertainties of the fuzzy and random basic variables, the presented method considerably improves computational efficiency with acceptable precision. The presented method also has wider applicability than the transformation method, because it does not restrict the distribution of the variables or require an explicit expression of the performance function, and no approximation of the performance function is made during the computation. Additionally, the presented method can easily treat performance functions with cross terms of fuzzy and random variables, which are not suitably approximated by the existing transformation methods. Several examples are provided to illustrate the advantages of the presented method.
International Nuclear Information System (INIS)
Statistical approaches to uncertainty quantification and sensitivity analysis are very important in estimating the safety margins for an engineering design application. This paper presents a system analysis and optimization toolkit developed by the Korea Atomic Energy Research Institute (KAERI), which includes multiple packages of sensitivity analysis and uncertainty quantification algorithms. In order to reduce the computing demand, multiple compute resources, including multiprocessor computers and a network of workstations, are used simultaneously. A Graphical User Interface (GUI) was also developed within the parallel computing framework for users to readily employ the toolkit for an engineering design and optimization problem. The goal of this work is to develop a GUI framework for engineering design and scientific analysis problems by implementing multiple packages of system analysis methods in the parallel computing toolkit. This was done by building an interface between an engineering simulation code and the system analysis software packages. The methods and strategies in the framework were designed to exploit parallel computing resources such as those found in a desktop multiprocessor workstation or a network of workstations. Available approaches in the framework include statistical and mathematical algorithms for use in science and engineering design problems. Currently the toolkit has six modules of system analysis methodologies: deterministic and probabilistic approaches to data assimilation, uncertainty propagation, the Chi-square linearity test, sensitivity analysis, and FFTBM.
International Nuclear Information System (INIS)
Highlights: • We performed burnup calculations of PWR and BWR benchmarks using ALEPH and SCALE. • We propagated nuclear data uncertainties and correlations using different procedures and codes. • Decay data uncertainties have a negligible impact on nuclide densities. • Uncorrelated fission yields play a major role in the uncertainties of fission products. • The fission yield impact is strongly reduced by the introduction of correlations. - Abstract: Two fuel assemblies, one belonging to the Takahama-3 PWR and the other to the Fukushima-Daini-2 BWR, were modelled and the fuel irradiation was simulated with the TRITON module of SCALE 6.2 and with the ALEPH-2 code. Our results were compared to the experimental measurements of four samples: SF95-4 and SF96-4 were taken from the Takahama-3 reactor, while samples SF98-6 and SF99-6 belonged to the Fukushima-Daini-2. Then, we propagated the uncertainties coming from the nuclear data to the isotopic inventory of sample SF95-4. We used the ALEPH-2 adjoint procedure to propagate the decay constant uncertainties. The impact was inappreciable. The cross-section covariance information was propagated with the SAMPLER module of the beta3 version of SCALE 6.2. This contribution mostly affected the uncertainties of the actinides. Finally, the uncertainties of the fission yields were propagated both through ALEPH-2 and TRITON with a Monte Carlo sampling approach and appeared to have the largest impact on the uncertainties of the fission products. However, the lack of fission yield correlations results in a serious overestimation of the response uncertainties.
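The closing point, that ignoring fission yield correlations overestimates response uncertainties, can be demonstrated with a toy Monte Carlo sampling experiment. The yield values, uncertainties and anti-correlation level below are hypothetical, not evaluated nuclear data.

```python
import numpy as np

rng = np.random.default_rng(4)
# Hypothetical fission yields of 4 products, each with 5 % uncertainty
mu = np.array([0.061, 0.060, 0.058, 0.055])
sd = 0.05 * mu

# A simple response: the summed fission-product inventory
def response(yields):
    return yields.sum(axis=-1)

# Case 1: uncorrelated yields (independent sampling)
y_unc = rng.normal(mu, sd, size=(50000, 4))

# Case 2: anti-correlated yields, mimicking a sum-normalisation constraint
rho = -0.3
corr = np.full((4, 4), rho) + (1 - rho) * np.eye(4)   # unit diagonal
cov = corr * np.outer(sd, sd)
y_cor = rng.multivariate_normal(mu, cov, size=50000)

# Anti-correlations shrink the response uncertainty, so sampling yields
# without their covariance matrix overestimates it.
sd_unc, sd_cor = response(y_unc).std(), response(y_cor).std()
```

The same mechanism explains why the abstract calls for fission yield covariance matrices: for a sum-like response, negative correlations cancel part of the individual variances.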
eHabitat - A web service for habitat similarity modeling with uncertainty propagation
Olav Skøien, Jon; Schulz, Michael; Dubois, Gregoire; Heuvelink, Gerard
2013-04-01
We are developing eHabitat, a Web Processing Service (WPS) that can model current and future habitat similarity for point observations, polygons defining an existing or hypothetical protected area, or sets of polygons defining the estimated ranges for one or more species. A range of Web Clients makes it easy to use the WPS with predefined data for predictions of the current or future climatic niche. The WPS is also able to document propagating uncertainties of the input data to the estimated similarity maps, if such information is available. The presentation will focus on the architecture of the service and the clients, on how uncertainties are handled by the model and on the presentation of uncertain results. The idea behind eHabitat is that one can classify the similarity between a reference geometry (point locations or polygons) and the surroundings based on one or more species distribution models (SDMs) and a set of ecological indicators. The ecological indicators are typically raster bioclimatic data (DEMs, climate data, vegetation maps …) describing important features for the species or habitats of interest. All these data sets have uncertainties, which can usually be described by treating the value of each pixel as a mean with a standard deviation. As the standard deviation will also be pixel based, it can be given as rasters. If standard deviations of the rasters are not available in the input data, this can also be guesstimated by the service to allow end-users to generate uncertainty scenarios. Rasters of standard deviations are used for simulating a set of spatially correlated maps of the input data, which are then used in the SDM. Additionally, the service can do bootstrapping samples from the input data, which is one of the classic methods for assessing uncertainty of SDMs. The two methods can also be combined, a convenient solution considering that simulation is a computationally much slower process than bootstrapping. Uncertainties in the results
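One simple way to realise the scheme sketched above (per-pixel means and standard deviations, simulation of perturbed indicator maps, and a similarity index against a reference geometry) is shown below. The Mahalanobis-based index and all data are illustrative assumptions, not the eHabitat implementation.

```python
import numpy as np

rng = np.random.default_rng(5)
# Hypothetical ecological indicator "rasters", flattened to pixels:
# column 0 = elevation (m), column 1 = annual rainfall (mm).
pixels = np.column_stack([rng.uniform(0, 2000, 10000),
                          rng.uniform(200, 1800, 10000)])
pixel_sd = 0.05 * pixels                  # per-pixel standard deviations
ref = pixels[:50]                         # pixels inside a protected area

def similarity(data, ref):
    """Mahalanobis-based similarity to the reference geometry (1 = identical)."""
    mu, cov = ref.mean(axis=0), np.cov(ref.T)
    diff = data - mu
    d2 = np.einsum('ij,jk,ik->i', diff, np.linalg.inv(cov), diff)
    return np.exp(-0.5 * d2)

# Propagate input uncertainty: repeat the model on perturbed indicator maps
sims = np.stack([similarity(rng.normal(pixels, pixel_sd), ref)
                 for _ in range(25)])
sim_mean, sim_sd = sims.mean(axis=0), sims.std(axis=0)
```

`sim_mean` is the similarity map and `sim_sd` its propagated uncertainty; a production service would use spatially correlated simulations rather than independent pixel noise.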
On the uncertainty of stream networks derived from elevation data: the error propagation approach
Directory of Open Access Journals (Sweden)
T. Hengl
2010-01-01
DEM error propagation methodology is extended to the derivation of vector-based objects (stream networks) using geostatistical simulations. First, point-sampled elevations are used to fit a variogram model. Next, 100 DEM realizations are generated using conditional sequential Gaussian simulation; the stream network map is extracted for each of these realizations, and the collection of stream networks is analyzed to quantify the error propagation. At each grid cell, the probability of the occurrence of a stream and the propagated error are estimated. The method is illustrated using two small data sets: Baranja hill (30 m grid cell size; 16,512 pixels; 6367 sampled elevations) and Zlatibor (30 m grid cell size; 15,000 pixels; 2051 sampled elevations). All computations are run in R, the open source software for statistical computing: the geoR package is used to fit the variogram; the gstat package is used to run sequential Gaussian simulation; streams are extracted using the open source GIS SAGA via the RSAGA library. The resulting stream error map (information entropy of a Bernoulli trial) clearly depicts areas where the extracted stream network is least precise, usually areas of low local relief that are slightly concave. In both cases, significant parts of the study area (17.3% for Baranja hill; 6.2% for Zlatibor) show high error (H>0.5) in locating streams. By correlating the propagated uncertainty of the derived stream network with various land surface parameters, the sampling of height measurements can be optimized so that delineated streams satisfy a required accuracy level. The remaining issue to be tackled is the computational burden of geostatistical simulations: this framework is at the moment limited to small and moderate data sets with several hundred points. Scripts and data sets used in this article are available on-line via the http://www.geomorphometry.org/ website and can be easily adopted.
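The per-cell stream probability and its Bernoulli-trial entropy (the H referred to above) can be computed from a stack of realizations as follows. The realizations here are random stand-ins, not conditional geostatistical simulations.

```python
import numpy as np

rng = np.random.default_rng(6)
# Stand-in for 100 geostatistical DEM realizations: for each grid cell we
# record whether a stream was extracted (1) or not (0) in that realization.
n_real, shape = 100, (40, 40)
stream_maps = rng.random((n_real, *shape)) < np.linspace(0, 1, 40)[None, None, :]

p = stream_maps.mean(axis=0)              # per-cell probability of a stream

def bernoulli_entropy(p):
    """Information entropy of a Bernoulli trial, H in [0, 1] bits."""
    q = np.clip(p, 1e-12, 1 - 1e-12)      # avoid log(0) at p = 0 or 1
    return -(q * np.log2(q) + (1 - q) * np.log2(1 - q))

H = bernoulli_entropy(p)
high_error = (H > 0.5).mean()             # fraction of area with H > 0.5
```

H peaks where the realizations disagree most (p near 0.5), which is exactly where the extracted stream network is least reliable.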
A Framework for Propagation of Uncertainties in the Kepler Data Analysis Pipeline
Clarke, Bruce D.; Allen, Christopher; Bryson, Stephen T.; Caldwell, Douglas A.; Chandrasekaran, Hema; Cote, Miles T.; Girouard, Forrest; Jenkins, Jon M.; Klaus, Todd C.; Li, Jie; Middour, Chris; McCauliff, Sean; Quintana, Elisa V.; Tenebaum, Peter; Twicken, Joseph D.; Wohler, Bill; Wu, Hayley
2010-01-01
The Kepler space telescope is designed to detect Earth-like planets around Sun-like stars using transit photometry by simultaneously observing 100,000 stellar targets nearly continuously over a three-and-a-half-year period. The 96-megapixel focal plane consists of 42 charge-coupled devices (CCDs), each containing two 1024 x 1100 pixel arrays. Cross-correlations between calibrated pixels are introduced by common calibrations performed on each CCD, so downstream data products require access to the calibrated pixel covariance matrix in order to properly estimate uncertainties. The prohibitively large covariance matrices corresponding to the 75,000 calibrated pixels per CCD preclude calculating and storing the covariance in standard lock-step fashion. We present a novel framework used to implement standard propagation of uncertainties (POU) in the Kepler Science Operations Center (SOC) data processing pipeline. The POU framework captures the variance of the raw pixel data and the kernel of each subsequent calibration transformation, allowing the full covariance matrix of any subset of calibrated pixels to be recalled on-the-fly at any step in the calibration process. Singular value decomposition (SVD) is used to compress and low-pass filter the raw uncertainty data as well as any data-dependent kernels. The combination of the POU framework and SVD compression provides downstream consumers of the calibrated pixel data access to the full covariance matrix of any subset of the calibrated pixels, traceable to pixel-level measurement uncertainties, without having to store, retrieve and operate on prohibitively large covariance matrices. We describe the POU framework and SVD compression scheme and its implementation in the Kepler SOC pipeline.
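The core of standard POU through a linear calibration step y = Kx is the congruence transform Cov(y) = K Cov(x) Kᵀ, and SVD compression lets a subset's covariance be recalled from low-rank factors. A toy sketch (the matrix sizes, kernel form, and rank are assumptions for illustration; the real pipeline works per CCD at far larger scale):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200  # pixels (the real pipeline handles ~75,000 per CCD)

# Raw pixel variances: diagonal covariance before calibration
C = np.diag(rng.uniform(1.0, 2.0, n))

# A hypothetical linear calibration step y = K x
K = np.eye(n) + 0.01 * rng.standard_normal((n, n))

# Standard propagation of uncertainties through the transform
C_cal = K @ C @ K.T

# SVD compression of the calibration kernel: keep the leading modes
U, s, Vt = np.linalg.svd(K)
r = 20
K_r = U[:, :r] * s[:r] @ Vt[:r]  # low-rank kernel factors

# The covariance of any pixel subset can be recalled from the
# compressed factors without forming the full n x n matrix
idx = [3, 17, 42]
C_subset = K_r[idx] @ C @ K_r[idx].T
```

Chaining several calibration steps just composes the kernels, which is why capturing each step's kernel suffices to reconstruct the covariance at any stage.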
Analytical Algorithms to Quantify the Uncertainty in Remaining Useful Life Prediction
Sankararaman, Shankar; Saxena, Abhinav; Daigle, Matthew; Goebel, Kai
2013-01-01
This paper investigates the use of analytical algorithms to quantify the uncertainty in the remaining useful life (RUL) estimate of components used in aerospace applications. The prediction of RUL is affected by several sources of uncertainty, and it is important to systematically quantify their combined effect by computing the uncertainty in the RUL prediction in order to aid risk assessment, risk mitigation, and decision-making. While sampling-based algorithms have been conventionally used for quantifying the uncertainty in RUL, analytical algorithms are computationally cheaper and sometimes better suited for online decision-making. While exact analytical algorithms are available only for certain special cases (e.g., linear models with Gaussian variables), effective approximations can be made using the first-order second-moment method (FOSM), the first-order reliability method (FORM), and the inverse first-order reliability method (Inverse FORM). These methods can be used not only to calculate the entire probability distribution of RUL but also to obtain probability bounds on RUL. This paper explains these three methods in detail and illustrates them using the state-space model of a lithium-ion battery.
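The simplest of the three, FOSM, propagates input means and standard deviations through a model via a first-order Taylor expansion. A minimal sketch, using a hypothetical exponential-decay RUL model rather than the paper's battery state-space model:

```python
import numpy as np

def fosm(g, mu, sigma, h=1e-6):
    """First-order second-moment: propagate independent input
    uncertainties through g via a forward-difference gradient."""
    mu = np.asarray(mu, float)
    g0 = g(mu)
    grad = np.array([
        (g(mu + h * np.eye(len(mu))[i]) - g0) / h for i in range(len(mu))
    ])
    return g0, float(np.sqrt(np.sum((grad * np.asarray(sigma)) ** 2)))

# Hypothetical RUL model: time for capacity C0, decaying at rate k,
# to reach a failure threshold Cf (NOT the paper's battery model)
def rul(x):
    C0, k = x
    Cf = 0.8
    return np.log(C0 / Cf) / k

mean_rul, std_rul = fosm(rul, mu=[1.0, 0.01], sigma=[0.02, 0.001])
# mean_rul is about ln(1/0.8)/0.01, i.e. roughly 22.3 time units
```

FORM and Inverse FORM refine this by searching for the most probable failure point instead of linearizing at the mean, which is what allows them to bound the full RUL distribution.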
Propagation of Isotopic Bias and Uncertainty to Criticality Safety Analyses of PWR Waste Packages
Energy Technology Data Exchange (ETDEWEB)
Radulescu, Georgeta [ORNL]
2010-06-01
predicted spent fuel compositions (i.e., determine the penalty in reactivity due to isotopic composition bias and uncertainty) for use in disposal criticality analysis employing burnup credit. The method used in this calculation to propagate the isotopic bias and bias-uncertainty values to k_eff is the Monte Carlo uncertainty sampling method. The development of this report is consistent with 'Test Plan for: Isotopic Validation for Postclosure Criticality of Commercial Spent Nuclear Fuel'. This calculation report has been developed in support of burnup credit activities for the proposed repository at Yucca Mountain, Nevada, and provides a methodology that can be applied to other criticality safety applications employing burnup credit.
Vecherin, S.; Ketcham, S.; Parker, M.; Picucci, J.
2015-12-01
To predict the propagation of seismic pulses, one needs to specify the physical properties and subsurface structure of the site. This information is frequently unknown or estimated with significant uncertainty. We developed a methodology for the ensemble prediction of the propagation of weak seismic pulses over short ranges. The ranges of interest are tens to hundreds of meters, and the pulse bandwidth is up to 200 Hz. Instead of specifying fixed values for viscoelastic site properties, the methodology operates with probability distributions of the inputs. This yields ensemble realizations of the pulse at specified locations, from which mean, median, and maximum likelihood predictions can be made and confidence intervals estimated. Starting with the site's Vs30, the methodology creates an ensemble of plausible vertically stratified Vs profiles for the site. The number and thickness of the layers are modeled using an inhomogeneous Poisson process, and the Vs values in the layers are modeled by a correlated Gaussian process. The Poisson expectation rate and the Vs correlation between adjacent layers take into account layer depth and thickness, and are specific to a site class as defined by the Federal Emergency Management Agency (FEMA). A high-fidelity three-dimensional thin-layer method (TLM) is used to yield an ensemble of frequency response functions. Comparison with experiments revealed that measured signals are not always within the predicted ensemble. Variance-based global sensitivity analysis has shown that the most significant parameter in the TLM for the prediction of the pulse energy is the shear quality factor, Qs. Strategies to account for the significant uncertainty in this parameter and to improve the accuracy of the ensemble predictions for a specific site are investigated and discussed.
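The layered-profile sampling step described above (Poisson layering plus correlated Gaussian velocities) can be sketched as below. The rate, mean Vs, and correlation coefficient are illustrative assumptions, not the FEMA-site-class-specific values the paper calibrates:

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_vs_profile(depth=60.0, rate=0.1, vs_mean=400.0,
                      vs_sigma=80.0, rho=0.7):
    """One realization of a vertically stratified Vs profile.
    Layer interfaces follow a Poisson process (expected `rate`
    interfaces per meter); layer Vs values follow a correlated
    Gaussian (AR(1)-style) chain. All parameters are illustrative."""
    # Poisson process: exponential inter-arrival spacings
    tops = [0.0]
    while tops[-1] < depth:
        tops.append(tops[-1] + rng.exponential(1.0 / rate))
    tops = np.array(tops[:-1])  # drop the interface beyond `depth`
    # Correlated Gaussian chain for layer velocities
    vs = [vs_mean + vs_sigma * rng.standard_normal()]
    for _ in range(len(tops) - 1):
        vs.append(vs_mean + rho * (vs[-1] - vs_mean)
                  + vs_sigma * np.sqrt(1 - rho**2) * rng.standard_normal())
    return tops, np.array(vs)

# Ensemble of plausible site realizations, one TLM run each in the paper
ensemble = [sample_vs_profile() for _ in range(200)]
```

Each realization would then be fed to the TLM forward solver, and the resulting pulse ensemble summarized by its mean, median, and confidence intervals.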
Tripathy, Rohit; Bilionis, Ilias; Gonzalez, Marcial
2016-09-01
Uncertainty quantification (UQ) tasks, such as model calibration, uncertainty propagation, and optimization under uncertainty, typically require several thousand evaluations of the underlying computer codes. To cope with the cost of simulations, one replaces the real response surface with a cheap surrogate based, e.g., on polynomial chaos expansions, neural networks, support vector machines, or Gaussian processes (GPs). However, the number of simulations required to learn a generic multivariate response grows exponentially as the input dimension increases. This curse of dimensionality can only be addressed if the response exhibits some special structure that can be discovered and exploited. A wide range of physical responses exhibit a special structure known as an active subspace (AS). An AS is a linear manifold of the stochastic space characterized by maximal response variation. The idea is that one should first identify this low-dimensional manifold, project the high-dimensional input onto it, and then link the projection to the output. If the dimensionality of the AS is low enough, then learning the link function is a much easier problem than the original problem of learning a high-dimensional function. The classic approach to discovering the AS requires gradient information, a fact that severely limits its applicability. Furthermore, and partly because of its reliance on gradients, it is not able to handle noisy observations. The latter is an essential trait if one wants to be able to propagate uncertainty through stochastic simulators, e.g., through molecular dynamics codes. In this work, we develop a probabilistic version of AS which is gradient-free and robust to observational noise. Our approach relies on a novel Gaussian process regression with built-in dimensionality reduction. In particular, the AS is represented as an orthogonal projection matrix that serves as yet another covariance function hyper-parameter to be estimated from the data. To train the
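The classic, gradient-based AS discovery the abstract contrasts against works by eigendecomposing the expected outer product of gradients, C = E[∇f ∇fᵀ]. A minimal sketch on a synthetic function with a known one-dimensional active direction (the function and dimensions are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
dim, n_samples = 10, 500

# Test function with a one-dimensional active subspace:
# f(x) = sin(w . x), so every gradient is parallel to w
w = rng.standard_normal(dim)
w /= np.linalg.norm(w)
grad_f = lambda x: np.cos(x @ w) * w

# Classic (gradient-based) AS discovery: eigendecomposition of the
# Monte Carlo estimate of C = E[grad f grad f^T]
X = rng.standard_normal((n_samples, dim))
G = np.array([grad_f(x) for x in X])
C = G.T @ G / n_samples
eigvals, eigvecs = np.linalg.eigh(C)

# The dominant eigenvector recovers the active direction (up to sign)
w_hat = eigvecs[:, -1]
alignment = abs(w_hat @ w)  # close to 1
```

The paper's contribution replaces this gradient-dependent estimator with a GP whose covariance carries the projection matrix as a hyper-parameter, so only (possibly noisy) function values are needed.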
International Nuclear Information System (INIS)
This paper presents a 3D uncertainty propagation methodology and its application to the case of a small heterogeneous reactor system (the 'slab' reactor benchmark). Key neutron parameters (keff, reactivity worth, local power, ...) and their corresponding cross-section sensitivities are derived using the French calculation route APOLLO2 (2D transport lattice code), CRONOS2 (3D diffusion code) and TRIPOLI4 (3D Monte Carlo reference calculations) with consistent JEF2.2 cross-section libraries (pointwise or CEA93 multigroup cross-sections) and adapted perturbation methods (the Heuristically-based Generalized Perturbation Theory implemented in the framework of the CRONOS2 diffusion method, or the correlation techniques used in Monte Carlo simulations). The investigation of the slab system revealed notable differences between the 2D and 3D computed sensitivity coefficients and consequently in the a priori uncertainties (when sensitivity coefficients are combined with covariance matrices, the discrepancies rise up to 20% due to thermal and fast flux variations). In addition, the induced local power effect of nuclear data perturbations (JEF-2.2 vs. the Leal-Derrien-Wright-Larson 235U evaluation) was correctly estimated with the standard 3D CRONOS2 depletion calculations. For industrial applications (PWR neutron parameter optimization problems, R and D studies dealing with the design of future fission reactors, ...), the same calculation route could be advantageously applied to infer the target accuracies (knowing the required safety criteria) of future nuclear data evaluations (the JEFF-3 data library, for instance). (author)
Long-time uncertainty propagation using generalized polynomial chaos and flow map composition
International Nuclear Information System (INIS)
We present an efficient and accurate method for long-time uncertainty propagation in dynamical systems. Uncertain initial conditions and parameters are both addressed. The method approximates the intermediate short-time flow maps by spectral polynomial bases, as in the generalized polynomial chaos (gPC) method, and uses flow map composition to construct the long-time flow map. In contrast to the gPC method, this approach has spectral error convergence for both short and long integration times. The short-time flow map is characterized by small stretching and folding of the associated trajectories and hence can be well represented by a relatively low-degree basis. The composition of these low-degree polynomial bases then accurately describes the uncertainty behavior for long integration times. The key to the method is that the degree of the resulting polynomial approximation increases exponentially in the number of time intervals, while the number of polynomial coefficients either remains constant (for an autonomous system) or increases linearly in the number of time intervals (for a non-autonomous system). The findings are illustrated on several numerical examples including a nonlinear ordinary differential equation (ODE) with an uncertain initial condition, a linear ODE with an uncertain model parameter, and a two-dimensional, non-autonomous double gyre flow.
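The composition idea can be illustrated on a toy autonomous ODE with a known flow map: represent the short-time flow map by one low-degree spectral polynomial, then reach long times by repeated composition instead of one high-degree fit. The ODE, interval, and degree below are assumptions chosen so the exact map is available for comparison:

```python
import numpy as np
from numpy.polynomial import Chebyshev

# Toy autonomous system dx/dt = -x^3 with uncertain x0 in [0, 1];
# its exact flow map over time t is x(t) = x0 / sqrt(1 + 2 t x0^2)
def flow(x0, t):
    return x0 / np.sqrt(1.0 + 2.0 * t * x0**2)

# Represent the SHORT-time flow map (dt = 0.5) by one low-degree
# spectral polynomial, as in the gPC step of the method
dt, deg = 0.5, 10
nodes = np.linspace(0.0, 1.0, 64)
short_map = Chebyshev.fit(nodes, flow(nodes, dt), deg, domain=[0, 1])

# Long-time map by composing the short-time map 20 times; the
# effective polynomial degree grows with each composition while the
# stored coefficient count stays constant (autonomous system)
x = nodes.copy()
for _ in range(20):
    x = short_map(x)

# The composition tracks the exact 10-time-unit flow map closely
err = np.max(np.abs(x - flow(nodes, 20 * dt)))  # remains small
```

A single degree-10 fit of the 10-time-unit map would be far less accurate near the fold at large x0; composition is what keeps the error spectral over long horizons.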
Brault, A; Lucor, D
2016-01-01
This work aims at quantifying the effect of inherent uncertainties from cardiac output on the sensitivity of a human compliant arterial network response, based on stochastic simulations of a reduced-order pulse wave propagation model. A simple pulsatile output form is utilized to reproduce the most relevant cardiac features with a minimum number of parameters associated with left ventricle dynamics. Another source of critical uncertainty is the spatial heterogeneity of the aortic compliance, which plays a key role in the propagation and damping of pulse waves generated at each cardiac cycle. A continuous representation of the aortic stiffness in the form of a generic random field of prescribed spatial correlation is then considered. Resorting to a stochastic sparse pseudospectral method, we investigate the spatial sensitivity of the pulse pressure and wave reflection magnitude with respect to the different model uncertainties. Results indicate that uncertainties related to the shape and magnitude of th...
Sampling of systematic errors to estimate likelihood weights in nuclear data uncertainty propagation
Helgesson, P.; Sjöstrand, H.; Koning, A. J.; Rydén, J.; Rochman, D.; Alhassan, E.; Pomp, S.
2016-01-01
In methodologies for nuclear data (ND) uncertainty assessment and propagation based on random sampling, likelihood weights can be used to infer experimental information into the distributions for the ND. As the included number of correlated experimental points grows large, the computational time for the matrix inversion involved in obtaining the likelihood can become a practical problem. There are also other problems related to the conventional computation of the likelihood, e.g., the assumption that all experimental uncertainties are Gaussian. In this study, a way to estimate the likelihood which avoids matrix inversion is investigated; instead, the experimental correlations are included by sampling of systematic errors. It is shown that the model underlying the sampling methodology (using univariate normal distributions for random and systematic errors) implies a multivariate Gaussian for the experimental points (i.e., the conventional model). It is also shown that the likelihood estimates obtained through sampling of systematic errors approach the likelihood obtained with matrix inversion as the sample size for the systematic errors grows large. In the practical cases studied, the estimates for the likelihood weights converge impractically slowly with the sample size, compared to matrix inversion. The computational time is also estimated to be greater than for matrix inversion in cases with more experimental points. Hence, the sampling of systematic errors has little potential to compete with matrix inversion in cases where the latter is applicable. Nevertheless, the underlying model and the likelihood estimates can be easier to interpret intuitively than the conventional model and the likelihood function involving the inverted covariance matrix. Therefore, this work can both have pedagogical value and be used to help motivate the conventional assumption of a multivariate Gaussian for experimental data. The sampling of systematic errors could also
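The equivalence the abstract states, that averaging diagonal-covariance likelihoods over sampled systematic errors converges to the multivariate Gaussian likelihood with the full covariance inverted, can be demonstrated numerically. The data values, uncertainties, and sample size below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

# n correlated experimental points: a shared systematic error
# (variance sig_s^2) plus independent random errors (variance sig_r^2)
n, sig_r, sig_s = 5, 0.1, 0.2
model = np.zeros(n)                        # candidate ND prediction
y = 0.15 + sig_r * rng.standard_normal(n)  # data with a systematic shift

# Conventional likelihood: multivariate Gaussian, covariance inverted
cov = sig_r**2 * np.eye(n) + sig_s**2 * np.ones((n, n))
r = y - model
L_exact = np.exp(-0.5 * r @ np.linalg.solve(cov, r)) / np.sqrt(
    (2 * np.pi) ** n * np.linalg.det(cov))

# Sampling alternative: draw the systematic error, average the
# conditional (diagonal-covariance) likelihoods over the samples
K = 200_000
b = sig_s * rng.standard_normal(K)         # systematic error samples
z = (r[None, :] - b[:, None]) / sig_r      # standardized residuals
cond = np.exp(-0.5 * (z**2).sum(axis=1)) / (sig_r * np.sqrt(2 * np.pi)) ** n
L_sampled = cond.mean()
# L_sampled approaches L_exact as K grows
```

The slow convergence the paper reports shows up here too: shrinking the Monte Carlo error by 10x costs 100x more systematic-error samples, while the matrix solve is a one-off cost.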
A Semi-Analytical Orbit Propagator Program for Highly Elliptical Orbits
Lara, M.; San-Juan, J. F.; Hautesserres, D.
2016-05-01
A semi-analytical orbit propagator to study the long-term evolution of spacecraft in Highly Elliptical Orbits is presented. The perturbation model taken into account includes the gravitational effects of the first nine zonal harmonics of Earth's gravitational potential and the main tesseral harmonics affecting the 2:1 resonance, which has an impact on Molniya-type orbits; the mass-point approximation for third-body perturbations, which only includes the second-order Legendre polynomial for the Sun and the polynomials from second to sixth order for the Moon; solar radiation pressure; and atmospheric drag. Hamiltonian formalism is used to model the forces of gravitational nature; to avoid time-dependence issues, the problem is formulated in the extended phase space. The solar radiation pressure is modeled as a potential and included in the Hamiltonian, whereas the atmospheric drag is added as a generalized force. The semi-analytical theory is developed using perturbation techniques based on Lie transforms. Deprit's perturbation algorithm is applied up to the second order of the second zonal harmonic, J2, including Kozai-type terms in the mean-elements Hamiltonian to get "centered" elements. The transformation is developed in closed form of the eccentricity, except for tesseral resonances, and the coupling between J2 and the Moon's disturbing effects is neglected. This paper describes the semi-analytical theory, the semi-analytical orbit propagator program and some of the numerical validations.
Directory of Open Access Journals (Sweden)
S.V. Bystrov
2016-05-01
Subject of Research. We present research results for the signal uncertainty problem that naturally arises for the developers of servomechanisms, including the analytical design of serial compensators delivering the required quality indexes for servomechanisms. Method. The problem was solved using Besekerskiy's engineering approach, formulated in 1958. This makes it possible to reduce the requirements on the input signal composition of servomechanisms to only two quantitative characteristics: maximum speed and maximum acceleration. Knowing the input signal's maximum speed and acceleration allows introducing an equivalent harmonic input signal with calculated amplitude and frequency. Combined with the requirement on maximum tracking error, the amplitude and frequency of the equivalent harmonic input make it possible to estimate analytically the amplitude characteristic of the system with respect to error and then convert it into the amplitude characteristic of the open-loop system transfer function. While Besekerskiy's approach was previously used mainly with the apparatus of logarithmic characteristics, we use it for the analytical synthesis of serial compensators. Main Results. The proposed technique is used to create analytical representations of the "input-output" and "error-output" polynomial dynamic models of the designed system. In turn, the desired model of the designed system in the "error-output" form of analytical representation of transfer functions is the basis for the design of the serial compensator that delivers the desired placement of state matrix eigenvalues and, consequently, the necessary set of dynamic indexes for the designed system. The given procedure of analytical serial compensator design based on Besekerskiy's engineering approach under conditions of signal uncertainty is illustrated by an example. Practical Relevance. The obtained theoretical results are
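The equivalent-harmonic construction at the heart of Besekerskiy's approach follows from matching the two signal characteristics: for x(t) = A sin(ωt), the maximum speed is Aω and the maximum acceleration is Aω². A minimal numeric sketch with illustrative values (not taken from the paper's example):

```python
import math

# Equivalent harmonic input from the two signal characteristics:
# x(t) = A sin(w t) with A*w = v_max and A*w^2 = a_max
v_max, a_max = 0.5, 2.0        # illustrative max speed / acceleration
w_eq = a_max / v_max           # equivalent frequency, rad/s
A_eq = v_max**2 / a_max        # equivalent amplitude

# Required open-loop gain magnitude at w_eq for a maximum tracking
# error e_max, using |e| ~ A / |W(jw)| for a unity-feedback loop
# with |W| >> 1 (a standard servo-design bound)
e_max = 0.001
gain_required = A_eq / e_max   # about 125 with these numbers
```

This single frequency-domain constraint is then converted into the open-loop amplitude characteristic from which the serial compensator is synthesized.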
Directory of Open Access Journals (Sweden)
Soheil Salahshour
2015-02-01
In this paper, we apply the concept of Caputo's H-differentiability, constructed based on the generalized Hukuhara difference, to solve the fuzzy fractional differential equation (FFDE) with uncertainty. This is in contrast to conventional solutions that either require a quantity of fractional derivatives of the unknown solution at the initial point (Riemann–Liouville) or yield a solution with increasing length of support (Hukuhara difference). Then, in order to solve the FFDE analytically, we introduce the fuzzy Laplace transform of the Caputo H-derivative. To the best of our knowledge, there is limited research devoted to analytical methods for solving the FFDE under fuzzy Caputo fractional differentiability. An analytical solution is presented to confirm the capability of the proposed method.
Antoshchenkova, Ekaterina; Imbert, David; Richet, Yann; Bardet, Lise; Duluc, Claire-Marie; Rebour, Vincent; Gailler, Audrey; Hébert, Hélène
2016-04-01
The aim of this study is to assess the tsunamigenic potential of the Azores-Gibraltar Fracture Zone (AGFZ). This work is part of the French project TANDEM (Tsunamis in the Atlantic and English ChaNnel: Definition of the Effects through numerical Modeling; www-tandem.cea.fr); special attention is paid to the French Atlantic coasts. Structurally, the AGFZ region is complex and not well understood. However, many of its faults produce earthquakes with significant vertical slip, of a type that can result in tsunamis. We use the major tsunami event of the AGFZ to obtain a regional estimate of the tsunamigenic potential of this zone. The major reported event for this zone is the 1755 Lisbon event. There are large uncertainties concerning the source location and focal mechanism of this earthquake. Hence, a simple deterministic approach is not sufficient to cover, on the one hand, the whole AGFZ with its geological complexity and, on the other, the lack of information concerning the 1755 Lisbon tsunami. The parametric modeling environment Promethée (promethee.irsn.org/doku.php) was coupled to tsunami simulation software based on the shallow water equations in order to propagate uncertainties. Such a statistical point of view allows us to work with multiple hypotheses simultaneously. In our work we introduce the seismic source parameters in the form of distributions, thus building a database of thousands of tsunami scenarios and tsunami wave height distributions. Exploring our tsunami scenario database, we present preliminary results for France. Tsunami wave heights (within one standard deviation of the mean) can be about 0.5-1 m for the Atlantic coast and approach 0.3 m for the English Channel.
Hutton, Christopher; Brazier, Richard
2012-06-01
Advances in remote sensing technology, notably in airborne Light Detection And Ranging (LiDAR), have facilitated the acquisition of high-resolution topographic and vegetation datasets over increasingly large areas. Whilst such datasets may provide quantitative information on surface morphology and vegetation structure in riparian zones, existing approaches for processing raw LiDAR data perform poorly in riparian channel environments. A new algorithm for separating vegetation from topography in raw LiDAR data, and the performance of the Elliptical Inverse Distance Weighting (EIDW) procedure for interpolating the remaining ground points, are evaluated using data derived from a semi-arid ephemeral river. The filtering procedure, which first applies a threshold (either slope or elevation) to classify vegetation high-points, and second a region-growing algorithm from these high-points, avoids the classification of high channel banks as vegetation, preserving existing channel morphology for subsequent interpolation (2.90-9.21% calibration error; 4.53-7.44% error in evaluation for the slope threshold). EIDW, which accounts for surface anisotropy by converting the remaining elevation points to streamwise co-ordinates, can outperform isotropic interpolation (IDW) on channel banks; however, it performs less well in isotropic conditions and when local anisotropy differs from that of the main channel. A key finding of this research is that filtering parameter uncertainty affects the performance of the interpolation procedure; resultant errors may propagate into the Digital Elevation Model (DEM) and subsequently derived products, such as Canopy Height Models (CHMs). Consequently, it is important that this uncertainty is assessed. Understanding and developing methods to deal with such errors is important to inform users of the true quality of laser scanning products, such that they can be used effectively in hydrological applications.
Korun, M
2001-11-01
Explicit expressions are derived describing the variance of the counting efficiency for a homogeneous cylindrical sample, placed coaxially on the detector's symmetry axis, in terms of the variances of the sample properties: thickness, density and composition. In the derivation, the emission of gamma-rays parallel to the sample axis is assumed, and the efficiency for an area source is taken proportional to the solid angle subtended by the source from the effective point of interaction of the gamma-rays within the detector crystal. For the uncertainties of the mass attenuation coefficients, as well as for the uncertainties of the concentrations of admixtures to the sample matrix, constant relative uncertainties are assumed. PMID:11573802
Wave-like warp propagation in circumbinary discs I. Analytic theory and numerical simulations
Facchini, Stefano; Price, Daniel J
2013-01-01
In this paper we analyse the propagation of warps in protostellar circumbinary discs. We use these systems as a test environment in which to study warp propagation in the bending-wave regime, with the addition of an external torque due to the binary gravitational potential. In particular, we want to test the linear regime, for which an analytic theory has been developed. In order to do so, we first compute analytically the steady state shape of an inviscid disc subject to the binary torques. The steady state tilt is a monotonically increasing function of radius. In the absence of viscosity, the disc does not present any twist. Then, we compare the time-dependent evolution of the warped disc calculated via the known linearised equations both with the analytic solutions and with full 3D numerical simulations, which have been performed with the PHANTOM SPH code using 2 million particles. We find a good agreement both in the tilt and in the phase evolution for small inclinations, even at very low viscosities. Mor...
Sega, Michela; Pennecchi, Francesca; Rinaldi, Sarah; Rolle, Francesca
2016-05-12
A proper evaluation of the uncertainty associated with the quantification of micropollutants in the environment, like Polycyclic Aromatic Hydrocarbons (PAHs), is crucial for the reliability of the measurement results. The present work describes a comparison between the uncertainty evaluation carried out according to the GUM uncertainty framework and the Monte Carlo (MC) method. This comparison was carried out starting from real data sets obtained from the quantification of benzo[a]pyrene (BaP) spiked on filters commonly used for airborne particulate matter sampling. BaP was chosen as the target analyte as it is listed in the current European legislation as a marker of the carcinogenic risk for the whole class of PAHs. The MC method, being useful for nonlinear models and when the resulting output distribution for the measurand is asymmetric, particularly fits cases in which the results of intrinsically positive quantities are very small and the lower limit of a desired coverage interval, obtained according to the GUM uncertainty framework, can be dramatically close to zero, if not negative. In the case under study, it was observed that the two approaches provide different results for BaP masses in samples containing different masses of the analyte, with the MC method giving larger coverage intervals. In addition, for analyte masses close to zero, the GUM uncertainty framework would give a negative lower limit of the uncertainty coverage interval for the measurand, an unphysical result that is avoided when using the MC method. MC simulations, indeed, can be configured so that only positive values are generated, thus obtaining a coverage interval for the measurand that is always positive. PMID:27114218
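The near-zero-measurand effect described above is easy to reproduce. In this sketch the mass and uncertainty are illustrative values, not the paper's data, and a lognormal input distribution is assumed as one way of generating only physically possible (positive) values in the MC run:

```python
import numpy as np

rng = np.random.default_rng(5)

# A small analyte mass (ng) with a relatively large combined
# standard uncertainty: illustrative values, not the paper's data
m, u = 0.10, 0.06

# GUM uncertainty framework: symmetric interval m +/- k*u (k = 2)
gum_low, gum_high = m - 2 * u, m + 2 * u   # lower limit goes negative

# Monte Carlo method (GUM Supplement 1 style): model the measurand
# with a distribution that only generates physically possible values,
# e.g. a lognormal matched to the same mean m and standard deviation u
sigma2 = np.log(1 + (u / m) ** 2)
samples = rng.lognormal(np.log(m) - sigma2 / 2, np.sqrt(sigma2), 100_000)
mc_low, mc_high = np.percentile(samples, [2.5, 97.5])  # both positive
```

The MC interval is asymmetric about m, which is exactly the behavior a symmetric GUM expansion cannot represent near zero.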
Cardiff, Michael; Liu, Xiaoyi; Kitanidis, Peter K.; Parker, Jack; Kim, Ungtae
2010-04-01
Dense non-aqueous phase liquid (DNAPL) spills represent a potential long-term source of aquifer contamination, and successful low-cost remediation may require a combination of both plume management and source treatment. In addition, substantial uncertainty exists in many of the parameters that control field-scale behavior of DNAPL sources and plumes. For these reasons, cost optimization of DNAPL cleanup needs to consider multiple treatment options and their associated costs while also gauging the influence of prediction uncertainty on expected costs. In this paper, we present a management methodology for field-scale DNAPL source and plume management under uncertainty. Using probabilistic methods, historical data and prior information are combined to produce a set of equally likely realizations of true field conditions (i.e., parameter sets). These parameter sets are then used in a simulation-optimization framework to produce DNAPL cleanup solutions that have the lowest possible expected net present value (ENPV) cost and that are suitably cautious in the presence of high uncertainty. For simulation, we utilize a fast-running semi-analytic field-scale model of DNAPL source and plume evolution that also approximates the effects of remedial actions. The degree of model prediction uncertainty is gauged using a restricted maximum likelihood method, which helps to produce suitably cautious remediation strategies. We test our methodology on a synthetic field-scale problem with multiple source architectures, for which source zone thermal treatment and electron donor injection are considered as remedial actions. The lowest cost solution found utilizes a combination of source and plume remediation methods, and is able to successfully meet remediation constraints for a majority of possible scenarios. Comparisons with deterministic optimization results show that not taking into account uncertainty can result in optimization strategies that are not aggressive enough and result
International Nuclear Information System (INIS)
This paper presents an evaluation of the uncertainty associated with the analytical measurement of eighteen polycyclic aromatic compounds (PACs) in ambient air by liquid chromatography with fluorescence detection (HPLC/FD). The study was focused on analyses of the PM10, PM2.5 and gas-phase fractions. The main analytical uncertainties were estimated for eleven polycyclic aromatic hydrocarbons (PAHs), four nitro polycyclic aromatic hydrocarbons (nitro-PAHs) and two hydroxy polycyclic aromatic hydrocarbons (OH-PAHs), based on the analytical determination, reference material analysis and the extraction step. The main contributions reached 15-30% and came from the extraction process of real ambient samples, those for nitro-PAHs being the highest (20-30%). The range and mean of PAC mass concentrations measured in the gas phase and the PM10/PM2.5 particle fractions during a full year are also presented. Concentrations of OH-PAHs were about 2-4 orders of magnitude lower than those of their parent PAHs and comparable to those sparsely reported in the literature. (Author)
Ryerson, F. J.; Ezzedine, S. M.; Antoun, T.
2013-12-01
The successful implementation and execution of numerous subsurface energy technologies, such as shale gas extraction, geothermal energy, and underground coal gasification, relies on detailed characterization of the geology and the subsurface properties. For example, the spatial variability of subsurface permeability controls multi-phase flow, and hence impacts the prediction of reservoir performance. Subsurface properties can vary significantly over several length scales, making detailed subsurface characterization infeasible, if not impossible. Therefore, in common practice, only sparse measurements are available to image or characterize the entire reservoir. For example, pressure (P), permeability (k) and production rate (Q) measurements are only available at the monitoring and operational wells. Elsewhere, the spatial distribution of k is determined by various deterministic or stochastic interpolation techniques, and P and Q are calculated from the governing forward mass balance equation assuming k is given at all locations. Uncertainty drivers, such as PSUADE, are then used to propagate and quantify the uncertainty (UQ) of the quantities of interest using forward solvers. Unfortunately, forward-solver techniques and other interpolation schemes are rarely constrained by the inverse problem itself: given P and Q at observation points, determine the spatially variable map of k. The approach presented here, motivated by fluid imaging for subsurface characterization and monitoring, was developed by progressively solving increasingly complex realistic problems. The essence of this novel approach is that the forward and inverse partial differential equations are themselves the interpolators for P, k and Q, rather than extraneous and sometimes ad hoc schemes. Three cases with different data sparsity are investigated. In the simplest case, a sufficient number of passive pressure data (pre-production pressure gradients) are given. Here, only the inverse hyperbolic
Efficiency of analytical methodologies in uncertainty analysis of seismic core damage frequency
International Nuclear Information System (INIS)
Fault Tree and Event Tree analysis is almost exclusively relied upon in assessments of seismic Core Damage Frequency (CDF). In this approach, the Direct Quantification of Fault tree using Monte Carlo simulation (DQFM) method, or simply the Monte Carlo (MC) method, and the Binary Decision Diagram (BDD) method were introduced as alternatives to a traditional approximation method, namely the Minimal Cut Set (MCS) method. However, there is still no agreement as to which method should be used in a risk assessment of seismic CDF, especially for uncertainty analysis. The purpose of this study is to examine the efficiencies of the three methods in uncertainty analysis as well as in point estimation, so that a proper method can be selected effectively. The results show that the most efficient method, in terms of accuracy and computational time, is the BDD method. However, it will be discussed that the BDD method is not always applicable to PSA models, whereas the MC method is in theory. In turn, the MC method was confirmed to agree with the exact solution obtained by the BDD method, but it required a large amount of time, in particular for uncertainty analysis. On the other hand, it was shown that the approximation error of the MCS method may not be as severe in uncertainty analysis as it is in point estimation. Based on these results and previous works, this paper proposes a scheme for selecting an appropriate analytical method for a seismic PSA study. Throughout this study, the SECOM2-DQFM code was expanded to be able to utilize the BDD method and to conduct uncertainty analysis with both the MC and BDD methods. (author)
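The gap between the MCS approximation and an exact (BDD-style) quantification can be seen on a toy fault tree; the tree, basic-event probabilities, and sample count below are illustrative assumptions, not taken from the study:

```python
import random

# Hypothetical two-cut-set fault tree: TOP = (A AND B) OR (A AND C)
p = {"A": 0.3, "B": 0.2, "C": 0.1}  # basic-event probabilities (illustrative)

# Minimal Cut Set (rare-event) approximation: sum of cut-set probabilities
p_mcs = p["A"] * p["B"] + p["A"] * p["C"]

# Exact top-event probability (what a BDD evaluation would return)
p_exact = p["A"] * (p["B"] + p["C"] - p["B"] * p["C"])

# Direct Monte Carlo quantification (DQFM-style sampling of event states)
random.seed(0)
n = 200_000
hits = 0
for _ in range(n):
    a = random.random() < p["A"]
    b = random.random() < p["B"]
    c = random.random() < p["C"]
    if (a and b) or (a and c):
        hits += 1
p_mc = hits / n

print(f"MCS approx: {p_mcs:.4f}  exact/BDD: {p_exact:.4f}  MC: {p_mc:.4f}")
```

The MCS sum over-counts the overlap of the two cut sets (both contain A), which is the approximation error the study finds tolerable in uncertainty analysis but not in point estimation.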
Gosset, Marielle; Casse, Claire; Peugeot, Christophe; Boone, Aaron; Pedinotti, Vanessa
2015-04-01
Global measurement of rainfall offers new opportunities for hydrological monitoring, especially for some of the largest tropical rivers, where the rain gauge network is sparse and radar is not available. As a member of the GPM constellation, the new French-Indian satellite mission Megha-Tropiques (MT), dedicated to the water and energy budget in the tropical atmosphere, contributes to better monitoring of rainfall in the inter-tropical zone. As part of this mission, research is being developed on the use of satellite rainfall products for hydrological research or operational applications such as flood monitoring. A key issue for such applications is how to account for rainfall product biases and uncertainties, and how to propagate them into the end-user models. Another important question is how to choose the best space-time resolution for the rainfall forcing, given that both model performance and rain-product uncertainties are resolution dependent. This paper analyses the potential of satellite rainfall products combined with hydrological modeling to monitor the Niger river floods in the city of Niamey, Niger. A dramatic increase in these floods has been observed in the last decades. The study focuses on the 125000 km2 area in the vicinity of Niamey, where local runoff is responsible for the most extreme floods recorded in recent years. Several rainfall products are tested as forcing for the SURFEX-TRIP hydrological simulations. Differences in terms of rainfall amount, number of rainy days, spatial extension of the rainfall events and frequency distribution of the rain rates are found among the products. Their impact on the simulated outflow is analyzed. The simulations based on the real-time estimates produce an excess in the discharge. For flood prediction, the problem can be overcome by a prior adjustment of the products - as done here with probability matching - or by analysing the simulated discharge in terms of percentile or anomaly. All tested products exhibit some
Groves, Curtis E.; Ilie, Marcel; Shallhorn, Paul A.
2014-01-01
Computational Fluid Dynamics (CFD) is the standard numerical tool used by fluid dynamicists to estimate solutions to many problems in academia, government, and industry. CFD is known to have errors and uncertainties, and there is no universally adopted method to estimate such quantities. This paper describes an approach to estimate CFD uncertainties strictly numerically, using inputs and the Student-t distribution. The approach is compared to an exact analytical solution of fully developed, laminar flow between infinite, stationary plates. It is shown that treating all CFD input parameters as oscillatory uncertainty terms coupled with the Student-t distribution can encompass the exact solution.
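A minimal sketch of the kind of small-sample uncertainty band the Student-t distribution yields; the sample values, the quantity of interest, and the confidence level are hypothetical, and this is not the paper's exact procedure:

```python
import statistics

# Hypothetical repeated CFD estimates of a centerline velocity (m/s),
# e.g. from perturbing input parameters within their uncertainty ranges.
samples = [1.02, 0.98, 1.05, 1.01, 0.97, 1.03, 0.99, 1.04, 1.00, 1.01]

n = len(samples)
mean = statistics.mean(samples)
s = statistics.stdev(samples)   # sample standard deviation
t_crit = 2.262                  # two-sided 95% Student-t value for dof = n-1 = 9

half_width = t_crit * s / n ** 0.5
print(f"estimate = {mean:.3f} +/- {half_width:.3f} m/s (95% CI)")
```

With so few samples the t critical value is noticeably larger than the Gaussian 1.96, which widens the band to account for the poorly known variance.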
Energy Technology Data Exchange (ETDEWEB)
Pal Verma, Mahendra [Instituto de Investigaciones Electricas, Cuernavaca, Morelos (Mexico)
2008-07-01
A procedure was developed to estimate the analytical uncertainty in each parameter of a geochemical analysis of geothermal fluid. The estimation of the uncertainty is based on the results of the geochemical analyses of geothermal fluids (numbered from 0 to 14), obtained within the framework of the comparison program among geochemical laboratories over the last 30 years. The propagation of the analytical uncertainty was also carried out in the calculation of the parameters of the geothermal fluid in the reservoir, through the uncertainty-interval and GUM (Guide to the Expression of Uncertainty in Measurement) methods. The application of the methods is illustrated with the pH calculation of the geothermal fluid in the reservoir, considering samples 10 and 11 as waters separated at atmospheric conditions.
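A first-order GUM-style propagation can be sketched as follows; the function, measured values, and uncertainties are purely illustrative stand-ins, not the paper's geochemical model:

```python
# First-order GUM propagation for a generic function of two measured
# quantities; the function and values are illustrative, not the paper's.
def f(T, C):
    # Hypothetical reservoir-parameter model: depends on temperature T
    # (deg C) and a concentration C (mg/kg).
    return 0.01 * T + 2.0 / C

T, u_T = 250.0, 2.0   # measured value and standard uncertainty
C, u_C = 5.0, 0.3

def partial(g, args, i, h=1e-6):
    """Central finite-difference sensitivity coefficient dg/d(args[i])."""
    lo, hi = list(args), list(args)
    lo[i] -= h
    hi[i] += h
    return (g(*hi) - g(*lo)) / (2 * h)

c_T = partial(f, (T, C), 0)
c_C = partial(f, (T, C), 1)

# GUM law of propagation of uncertainty for uncorrelated inputs:
# u_f^2 = (c_T * u_T)^2 + (c_C * u_C)^2
u_f = ((c_T * u_T) ** 2 + (c_C * u_C) ** 2) ** 0.5
print(f"f = {f(T, C):.3f} +/- {u_f:.3f}")
```

The interval method mentioned in the abstract instead brackets the result by evaluating the function at the extremes of each input interval, which is more conservative when the function is nonlinear.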
Energy Technology Data Exchange (ETDEWEB)
Vinai, Paolo [Paul Scherrer Institute, Villigen (Switzerland); Ecole Polytechnique Federale de Lausanne, Lausanne (Switzerland); Chalmers University of Technology, Goeteborg (Sweden); Macian-Juan, Rafael [Technische Universitaet Muenchen, Garching (Germany); Chawla, Rakesh [Paul Scherrer Institute, Villigen (Switzerland); Ecole Polytechnique Federale de Lausanne, Lausanne (Switzerland)
2008-07-01
The paper describes the propagation of void fraction uncertainty, as quantified by employing a novel methodology developed at PSI, in the RETRAN-3D simulation of the Peach Bottom turbine trip test. Since the transient considered is characterized by a strong coupling between thermal-hydraulics and neutronics, the accuracy of the void fraction model has a very important influence on the prediction of the power history and, in particular, of the maximum power reached. It has been shown that the objective measures used for the void fraction uncertainty, based on the direct comparison between experimental and predicted values extracted from a database of appropriate separate-effect tests, provide power uncertainty bands that are narrower and more realistic than those based, for example, on expert opinion. The applicability of such an approach to NPP transient best-estimate analysis has thus been demonstrated. (authors)
Mishra, S.; Schwab, Ch.; Šukys, J.
2016-05-01
We consider the very challenging problem of efficient uncertainty quantification for acoustic wave propagation in a highly heterogeneous, possibly layered, random medium, characterized by possibly anisotropic, piecewise log-exponentially distributed Gaussian random fields. A multi-level Monte Carlo finite volume method is proposed, along with a novel, bias-free upscaling technique that makes it possible to represent the input random fields, generated using spectral FFT methods, efficiently. Combined with a recently developed dynamic load balancing algorithm that scales to massively parallel computing architectures, the proposed method is able to robustly compute uncertainty for highly realistic random subsurface formations that can contain a very high number (millions) of sources of uncertainty. Numerical experiments, in both two and three space dimensions, illustrating the efficiency of the method are presented.
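The telescoping-sum idea behind a multi-level Monte Carlo estimator can be illustrated on a toy problem; the "solver", its error model, and the sample allocation below are assumptions for illustration, not the paper's finite volume method:

```python
import random

random.seed(1)

def solver(x, level):
    """Toy 'numerical solver': exact value x**2 plus a discretization
    error that halves with each refinement level (a stand-in for a
    finite volume solve on a finer mesh)."""
    return x * x + x / 2 ** level

L = 6  # finest level
# Many cheap coarse samples, few expensive fine samples.
samples_per_level = [40_000 // 2 ** l + 100 for l in range(L + 1)]

estimate = 0.0
for level, n in enumerate(samples_per_level):
    acc = 0.0
    for _ in range(n):
        x = random.random()  # one realization of the random input
        fine = solver(x, level)
        coarse = solver(x, level - 1) if level > 0 else 0.0
        acc += fine - coarse  # telescoping correction E[Q_l - Q_{l-1}]
    estimate += acc / n

print(f"MLMC estimate of E[X^2]: {estimate:.4f} (exact 1/3, plus O(2^-L) bias)")
```

Because the level corrections Q_l - Q_{l-1} shrink with refinement, most samples can be taken on coarse, cheap levels, which is what makes the method tractable for fields with millions of uncertainty sources.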
DEFF Research Database (Denmark)
Abdul Samad, Noor Asma Fazli Bin; Sin, Gürkan; Gernaey, Krist;
2013-01-01
This paper presents the application of uncertainty and sensitivity analysis as part of a systematic model-based process monitoring and control (PAT) system design framework for crystallization processes. For the uncertainty analysis, the Monte Carlo procedure is used to propagate input uncertainty, while for sensitivity analysis, global methods including the standardized regression coefficients (SRC) and Morris screening are used to identify the most significant parameters. The potassium dihydrogen phosphate (KDP) crystallization process is used as a case study, both in open-loop and closed-loop ...
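A minimal sketch of Monte Carlo input-uncertainty propagation followed by an SRC-based sensitivity ranking; the two-parameter model and its input distributions are hypothetical, not the KDP crystallization model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical crystallization-style model: output depends on two
# uncertain inputs (e.g. a growth-rate and a nucleation-rate constant).
def model(kg, kn):
    return 3.0 * kg - 1.0 * kn + 0.1 * kg * kn

# 1) Monte Carlo propagation of input uncertainty
n = 5000
kg = rng.normal(1.0, 0.1, n)
kn = rng.normal(2.0, 0.3, n)
y = model(kg, kn)
print(f"output mean = {y.mean():.3f}, std = {y.std():.3f}")

# 2) Standardized regression coefficients (SRC) for sensitivity ranking:
# regress y on the inputs, then scale each slope by sigma_x / sigma_y.
X = np.column_stack([np.ones(n), kg, kn])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
src_kg = beta[1] * kg.std() / y.std()
src_kn = beta[2] * kn.std() / y.std()
print(f"SRC(kg) = {src_kg:.2f}, SRC(kn) = {src_kn:.2f}")
```

For a near-linear model the squared SRCs sum close to one and give each input's share of the output variance, which is why SRC is a cheap first screening before methods like Morris.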
Fast and accurate analytical model to solve inverse problem in SHM using Lamb wave propagation
Poddar, Banibrata; Giurgiutiu, Victor
2016-04-01
Lamb wave propagation is at the center of attention of researchers for structural health monitoring of thin-walled structures. This is due to the fact that Lamb wave modes are natural modes of wave propagation in these structures, with long travel distances and without much attenuation. This brings the prospect of monitoring large structures with few sensors/actuators. However, the problem of damage detection and identification is an "inverse problem" in which we do not have the luxury of knowing the exact mathematical model of the system. On top of that, the problem is more challenging due to the confounding factors of statistical variation of the material and geometric properties. Typically this problem may also be ill-posed. Due to all these complexities, the direct solution of the problem of damage detection and identification in SHM is impossible. Therefore, an indirect method using the solution of the "forward problem" is popular for solving the "inverse problem". This requires a fast forward-problem solver. Due to the complexities involved with the forward problem of scattering of Lamb waves from damage, researchers rely primarily on numerical techniques such as FEM, BEM, etc. But these methods are slow and practically impossible to use in structural health monitoring. We have developed a fast and accurate analytical forward-problem solver for this purpose. This solver, CMEP (complex modes expansion and vector projection), can simulate scattering of Lamb waves from all types of damage in thin-walled structures quickly and accurately to assist the inverse-problem solver.
Rose, K.; Bauer, J. R.; Baker, D. V.
2015-12-01
As big data computing capabilities are increasingly paired with spatial analytical tools and approaches, there is a need to ensure that uncertainty associated with the datasets used in these analyses is adequately incorporated and portrayed in results. Often the products of spatial analyses, big data and otherwise, are developed using discontinuous, sparse, and often point-driven data to represent continuous phenomena. Results from these analyses are generally presented without clear explanations of the uncertainty associated with the interpolated values. The Variable Grid Method (VGM) offers users a flexible approach designed for application to a variety of analyses where there is a need to study, evaluate, and analyze spatial trends and patterns while maintaining a connection to, and communicating, the uncertainty in the underlying spatial datasets. The VGM outputs a simultaneous visualization of the spatial data analyses and a quantification of the underlying uncertainties, which can be calculated using data related to sample density, sample variance, interpolation error, or uncertainty derived from multiple simulations. In this presentation we will show how we are utilizing Hadoop to store and perform spatial analysis through the development of custom Spark and MapReduce applications that incorporate ESRI Hadoop libraries. The team will present custom 'big data' geospatial applications that run on the Hadoop cluster and integrate ESRI ArcMap with the team's probabilistic VGM approach. The VGM-Hadoop tool has been specially built as a multi-step MapReduce application running on the Hadoop cluster for the purpose of data reduction. This reduction is accomplished by generating multi-resolution, non-overlapping, attributed topology that is then further processed using ESRI's Geostatistical Analyst to convey a probabilistic model of a chosen study region. Finally, we will share our approach for the implementation of data reduction and topology generation
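The underlying idea of pairing each aggregated cell with an uncertainty metric can be sketched without Hadoop; the point data, grid resolution, and metrics below are illustrative assumptions, not the VGM implementation:

```python
import random
import statistics

random.seed(2)

# Hypothetical sparse point data: (x, y, value) over a unit square.
points = [(random.random(), random.random(), random.gauss(10, 2))
          for _ in range(200)]

# Bin into a coarse grid; per-cell sample count and variance serve as
# simple stand-ins for the VGM's underlying-uncertainty metrics.
N = 4  # 4x4 grid
cells = {}
for x, y, v in points:
    key = (min(int(x * N), N - 1), min(int(y * N), N - 1))
    cells.setdefault(key, []).append(v)

for key in sorted(cells):
    vals = cells[key]
    var = statistics.variance(vals) if len(vals) > 1 else float("nan")
    print(f"cell {key}: n={len(vals):3d}, mean={statistics.mean(vals):5.2f}, "
          f"var={var:5.2f}")
```

In the actual method the cell size itself varies with data support (denser data, finer cells), so the grid resolution communicates uncertainty visually as well as numerically.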
NAJI, Noor Ezzulddin
2011-01-01
Presented is a derivation of an analytical expression for the mode-coherence coefficients of a uniformly distributed wave propagating within different homogeneous media (as in the case of hyperbolic Gaussian beams), and a simple method involving the superposition of two such beams is proposed. The results obtained from this work are very applicable to the study and analysis of Hermite-Gaussian beam propagation, especially in problems of radiation-matter interaction and laser beam propagation...
On the propagation of diel signals in river networks using analytic solutions of flow equations
Fonley, M.; Mantilla, R.; Small, S. J.; Curtu, R.
2015-08-01
Two hypotheses have been put forth to explain the magnitude and timing of diel streamflow oscillations during low flow conditions. The first suggests that delays between the peaks and troughs of streamflow and daily evapotranspiration are due to processes occurring in the soil as water moves toward the channels in the river network. The second posits that they are due to the propagation of the signal through the channels as water makes its way to the outlet of the basin. In this paper, we design and implement a theoretical experiment to test these hypotheses. We impose a baseflow signal entering the river network and use a linear transport equation to represent flow along the network. We develop analytic streamflow solutions for two cases: uniform and nonuniform velocities in space over all river links. We then use our analytic solutions to simulate streamflows along a self-similar river network for different flow velocities. Our results show that the amplitude and time delay of the streamflow solution are heavily influenced by transport in the river network. Moreover, our equations show that the geomorphology and topology of the river network play important roles in determining how amplitude and signal delay are reflected in streamflow signals. Finally, our results are consistent with empirical observations that delays are more significant as low flow decreases.
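The damping and delay of a diel signal by linear transport can be illustrated with a single linear store, a much simpler setting than the paper's network solutions; the storage constant and amplitude below are arbitrary:

```python
import math

# Linear-store analogue of in-channel signal damping (illustrative, not
# the paper's network solution): dQ/dt = k * (I(t) - Q)
k = 1.5              # storage constant (1/day)
omega = 2 * math.pi  # diel frequency (1/day)
A = 1.0              # inflow oscillation amplitude

# Analytic steady-periodic response: amplitude gain and time delay
gain = k / math.hypot(k, omega)
delay = math.atan2(omega, k) / omega  # days

# Numerical check: integrate with forward Euler well past spin-up
dt = 1e-4
Q, t = 0.0, 0.0
q_amp = -1.0
while t < 20.0:
    I = 1.0 + A * math.sin(omega * t)
    Q += dt * k * (I - Q)
    t += dt
    if t > 19.0:  # last full diel cycle only
        q_amp = max(q_amp, Q - 1.0)  # oscillation amplitude about the mean

print(f"analytic gain = {gain:.3f}, numeric = {q_amp:.3f}, "
      f"delay = {delay * 24:.1f} h")
```

Even this single store attenuates the diel amplitude to about 23% and delays it by several hours, showing how channel transport alone can reproduce the observed lags, and the network geometry then modulates these values.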
Gurdak, Jason J.; Qi, Sharon L.; Geisler, Michael L.
2009-01-01
The U.S. Geological Survey Raster Error Propagation Tool (REPTool) is a custom tool for use with the Environmental Systems Research Institute (ESRI) ArcGIS Desktop application to estimate error propagation and prediction uncertainty in raster processing operations and geospatial modeling. REPTool is designed to introduce concepts of error and uncertainty in geospatial data and modeling and provide users of ArcGIS Desktop a geoprocessing tool and methodology to consider how error affects geospatial model output. Similar to other geoprocessing tools available in ArcGIS Desktop, REPTool can be run from a dialog window, from the ArcMap command line, or from a Python script. REPTool consists of public-domain, Python-based packages that implement Latin Hypercube Sampling within a probabilistic framework to track error propagation in geospatial models and quantitatively estimate the uncertainty of the model output. Users may specify error for each input raster or model coefficient represented in the geospatial model. The error for the input rasters may be specified as either spatially invariant or spatially variable across the spatial domain. Users may specify model output as a distribution of uncertainty for each raster cell. REPTool uses the Relative Variance Contribution method to quantify the relative error contribution from the two primary components in the geospatial model - errors in the model input data and coefficients of the model variables. REPTool is appropriate for many types of geospatial processing operations, modeling applications, and related research questions, including applications that consider spatially invariant or spatially variable error in geospatial data.
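The stratified sampling at the core of such a tool can be sketched in a few lines; this is a generic Latin Hypercube sampler on the unit hypercube, not REPTool's implementation:

```python
import random

random.seed(3)

def latin_hypercube(n_samples, n_dims):
    """Minimal Latin Hypercube Sample on the unit hypercube: each of the
    n_samples strata in every dimension is hit exactly once."""
    # One random permutation of strata per dimension.
    perms = [random.sample(range(n_samples), n_samples) for _ in range(n_dims)]
    sample = []
    for i in range(n_samples):
        # Jitter the point uniformly within its assigned stratum.
        point = [(perms[d][i] + random.random()) / n_samples
                 for d in range(n_dims)]
        sample.append(point)
    return sample

pts = latin_hypercube(10, 2)
for p in pts:
    print(f"({p[0]:.2f}, {p[1]:.2f})")
```

Compared with plain Monte Carlo, the stratification guarantees coverage of each input's range with far fewer model runs, which matters when each "run" is a full raster-processing operation.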
Wave-like warp propagation in circumbinary discs - I. Analytic theory and numerical simulations
Facchini, Stefano; Lodato, Giuseppe; Price, Daniel J.
2013-08-01
In this paper we analyse the propagation of warps in protostellar circumbinary discs. We use these systems as a test environment in which to study warp propagation in the bending-wave regime, with the addition of an external torque due to the binary gravitational potential. In particular, we want to test the linear regime, for which an analytic theory has been developed. In order to do so, we first compute analytically the steady-state shape of an inviscid disc subject to the binary torques. The steady-state tilt is a monotonically increasing function of radius, but misalignment is found at the disc inner edge. In the absence of viscosity, the disc does not present any twist. Then, we compare the time-dependent evolution of the warped disc calculated via the known linearized equations both with the analytic solutions and with full 3D numerical simulations. The simulations have been performed with the PHANTOM smoothed particle hydrodynamics (SPH) code using two million particles. We find a good agreement both in the tilt and in the phase evolution for small inclinations, even at very low viscosities. Moreover, we have verified that the linearized equations are able to reproduce the diffusive behaviour when α > H/R, where α is the disc viscosity parameter. Finally, we have used the 3D simulations to explore the non-linear regime. We observe a strongly non-linear behaviour, which leads to the breaking of the disc. Then, the inner disc starts precessing with its own precessional frequency. This behaviour has already been observed with numerical simulations in accretion discs around spinning black holes. The evolution of circumstellar accretion discs strongly depends on the warp evolution. Therefore, the issue explored in this paper could be of fundamental importance in order to understand the evolution of accretion discs in crowded environments, when the gravitational interaction with other stars is highly likely, and in multiple systems. Moreover, the evolution of
International Nuclear Information System (INIS)
The control of uncertainties in the field of reactor physics and their propagation in best-estimate modeling are a major issue in safety analysis. In this framework, the CEA is developing a methodology to perform multi-physics simulations including uncertainty analysis. The present paper aims to present and apply this methodology to the analysis of an accidental situation such as a REA (Rod Ejection Accident). This accident is characterized by a strong interaction between the different areas of reactor physics (neutronics, fuel thermal behaviour and thermal hydraulics). The modeling is performed with the CRONOS2 code. The uncertainty analysis has been conducted with the URANIE platform developed by the CEA: for each identified response from the modeling (output), and considering a set of key parameters with their uncertainties (input), a surrogate model in the form of a neural network has been produced. The set of neural networks is then used to carry out a sensitivity analysis, which consists of a global variance analysis with the determination of the Sobol indices for all responses. The sensitivity indices are obtained for the input parameters by an approach based on the use of polynomial chaos. The present exercise helped to develop a methodological flow scheme and to consolidate the use of the URANIE tool in the framework of parallel calculations. Finally, the use of polynomial chaos allowed computing high-order sensitivity indices, thus highlighting and classifying the influence of the identified uncertainties on each response of the analysis (single and interaction effects). (authors)
International Nuclear Information System (INIS)
The known analytic expressions for the evolution of the polarization of electromagnetic waves propagating in a plasma with uniformly sheared magnetic field are extended to the case where the shear is not constant. Exact analytic expressions are found for the case when the space variations of the medium are such that the magnetic field components and the plasma density satisfy a particular condition (eq. 13), possibly in a convenient reference frame of polarization space
International Nuclear Information System (INIS)
Highlights: • Fission yield data and uncertainty comparison between major nuclear data libraries. • Fission yield covariance generation through a Bayesian technique. • Study of the effect of fission yield correlations on decay heat calculations. • Covariance information contributes to reducing fission pulse decay heat uncertainty. - Abstract: Fission product yields are fundamental parameters in burnup/activation calculations, and the impact of their uncertainties was widely studied in the past. Evaluations of these uncertainties were released, still without covariance data. Therefore, the nuclear community expressed the need for full fission yield covariance matrices to be able to produce inventory calculation results that take into account the complete uncertainty data. State-of-the-art fission yield data and methodologies for fission yield covariance generation were researched in this work. Covariance matrices were generated and compared to the original data stored in the library. Then, we focused on the effect of fission yield covariance information on fission pulse decay heat results for thermal fission of 235U. Calculations were carried out using different libraries and codes (ACAB and ALEPH-2) after introducing the new covariance values. Results were compared with those obtained with the uncertainty data currently provided by the libraries. The uncertainty quantification was performed first with Monte Carlo sampling and then compared with linear perturbation. Indeed, correlations between fission yields strongly affect the uncertainty of decay heat. Eventually, a sensitivity analysis of fission product yields to fission pulse decay heat was performed in order to provide a full set of the most sensitive nuclides for such a calculation
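Monte Carlo sampling that preserves a covariance matrix is typically done through a Cholesky factor; the two-nuclide yields, uncertainties, and correlation below are invented for illustration, not evaluated data:

```python
import numpy as np

rng = np.random.default_rng(4)

# Illustrative (not evaluated) yields for two fission products, with a
# negative correlation such as a normalization constraint would induce.
mean = np.array([0.060, 0.030])
std = np.array([0.003, 0.002])
corr = np.array([[1.0, -0.6],
                 [-0.6, 1.0]])
cov = np.outer(std, std) * corr

# Monte Carlo sampling that preserves the covariance matrix
L = np.linalg.cholesky(cov)
n = 20_000
samples = mean + rng.standard_normal((n, 2)) @ L.T

# A derived response (e.g. a summed decay-heat contribution, toy weights)
response = 1.0 * samples[:, 0] + 2.0 * samples[:, 1]
print(f"response std with correlations:    {response.std():.5f}")

# Ignoring the off-diagonal terms misstates the propagated uncertainty.
var_uncorr = (1.0 * std[0]) ** 2 + (2.0 * std[1]) ** 2
print(f"response std without correlations: {var_uncorr ** 0.5:.5f}")
```

Here the negative correlation shrinks the propagated uncertainty by roughly a third, which mirrors the abstract's finding that yield correlations strongly affect decay heat uncertainty.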
Nuclear Data Uncertainty Propagation to Reactivity Coefficients of a Sodium Fast Reactor
Herrero, J. J.; Ochoa, R.; Martínez, J. S.; Díez, C. J.; García-Herranz, N.; Cabellos, O.
2014-04-01
The assessment of the uncertainty levels in the design and safety parameters of the innovative European Sodium Fast Reactor (ESFR) is mandatory. Some of these relevant safety quantities are the Doppler and void reactivity coefficients, whose uncertainties are quantified. Besides, the nuclear reaction data for which an improvement will certainly benefit the design accuracy are identified. This work has been performed with the SCALE 6.1 code suite and its multigroup cross-section library based on the ENDF/B-VII.0 evaluation.
Propagation of uncertainty and sensitivity analysis in an integral oil-gas plume model
Wang, Shitao
2016-05-27
Polynomial Chaos expansions are used to analyze uncertainties in an integral oil-gas plume model simulating the Deepwater Horizon oil spill. The study focuses on six uncertain input parameters—two entrainment parameters, the gas to oil ratio, two parameters associated with the droplet-size distribution, and the flow rate—that impact the model's estimates of the plume's trap and peel heights, and of its various gas fluxes. The ranges of the uncertain inputs were determined by experimental data. Ensemble calculations were performed to construct polynomial chaos-based surrogates that describe the variations in the outputs due to variations in the uncertain inputs. The surrogates were then used to estimate reliably the statistics of the model outputs, and to perform an analysis of variance. Two experiments were performed to study the impacts of high and low flow rate uncertainties. The analysis shows that in the former case the flow rate is the largest contributor to output uncertainties, whereas in the latter case, with the uncertainty range constrained by a posteriori analyses, the flow rate's contribution becomes negligible. The trap and peel heights uncertainties are then mainly due to uncertainties in the 95% percentile of the droplet size and in the entrainment parameters.
Propagation of uncertainty and sensitivity analysis in an integral oil-gas plume model
Wang, Shitao; Iskandarani, Mohamed; Srinivasan, Ashwanth; Thacker, W. Carlisle; Winokur, Justin; Knio, Omar M.
2016-05-01
Polynomial Chaos expansions are used to analyze uncertainties in an integral oil-gas plume model simulating the Deepwater Horizon oil spill. The study focuses on six uncertain input parameters—two entrainment parameters, the gas to oil ratio, two parameters associated with the droplet-size distribution, and the flow rate—that impact the model's estimates of the plume's trap and peel heights, and of its various gas fluxes. The ranges of the uncertain inputs were determined by experimental data. Ensemble calculations were performed to construct polynomial chaos-based surrogates that describe the variations in the outputs due to variations in the uncertain inputs. The surrogates were then used to estimate reliably the statistics of the model outputs, and to perform an analysis of variance. Two experiments were performed to study the impacts of high and low flow rate uncertainties. The analysis shows that in the former case the flow rate is the largest contributor to output uncertainties, whereas in the latter case, with the uncertainty range constrained by a posteriori analyses, the flow rate's contribution becomes negligible. The trap and peel heights uncertainties are then mainly due to uncertainties in the 95% percentile of the droplet size and in the entrainment parameters.
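A minimal non-intrusive polynomial chaos surrogate for a single Gaussian input can be sketched as follows; the toy "model", sample size, and truncation order are assumptions, not the plume model's setup:

```python
import math

import numpy as np

rng = np.random.default_rng(5)

# Toy "plume model" with one standard-normal uncertain input xi
def model(xi):
    return np.exp(0.3 * xi)  # stand-in for e.g. a trap-height response

# Probabilists' Hermite polynomials, orthogonal under the Gaussian weight
def hermite_basis(xi, order):
    H = [np.ones_like(xi), xi]
    for k in range(2, order + 1):
        H.append(xi * H[k - 1] - (k - 1) * H[k - 2])
    return np.column_stack(H[: order + 1])

# Non-intrusive PCE: least-squares fit of coefficients on an ensemble
order = 4
xi = rng.standard_normal(500)
A = hermite_basis(xi, order)
coef, *_ = np.linalg.lstsq(A, model(xi), rcond=None)

# Moments follow directly from the coefficients: E[y] = c0 and
# Var[y] = sum_k k! * c_k^2, by orthogonality of the basis.
factorials = np.array([math.factorial(k) for k in range(order + 1)])
pce_mean = coef[0]
pce_var = np.sum(factorials[1:] * coef[1:] ** 2)
print(f"PCE mean = {pce_mean:.4f}, std = {pce_var ** 0.5:.4f}")
```

Once the coefficients are in hand, the variance decomposes term by term, which is exactly what makes the analysis-of-variance step in the abstract cheap compared with re-running the plume model.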
International Nuclear Information System (INIS)
The prompt fission neutron spectrum (PFNS) uncertainties in the n+239Pu fission reaction are used to study the impact on several fast critical assemblies modeled in the MCNP6.1 code. The newly developed sensitivity capability in MCNP6.1 is used to compute the keff sensitivity coefficients with respect to the PFNS. In comparison, the covariance matrix given in the ENDF/B-VII.1 library is decomposed and randomly sampled realizations of the PFNS are propagated through the criticality calculation, preserving the PFNS covariance matrix. The information gathered from both approaches, including the overall keff uncertainty, is statistically analyzed. Overall, the forward and backward approaches agree as expected. The results from the new method appear to be limited by the process used to evaluate the PFNS, which is not necessarily a flaw of the method itself. Final thoughts and directions for future work are suggested
Analytical propagation of errors in dynamic SPECT: estimators, degrading factors, bias and noise
International Nuclear Information System (INIS)
Dynamic SPECT is a relatively new technique that may potentially benefit many imaging applications. Though similar to dynamic PET, the accuracy and precision of dynamic SPECT parameter estimates are degraded by factors that differ from those encountered in PET. In this work we formulate a methodology for analytically studying the propagation of errors from dynamic projection data to kinetic parameter estimates. This methodology is used to study the relationships between reconstruction estimators, image degrading factors, bias and statistical noise for the application of dynamic cardiac imaging with 99mTc-teboroxime. Dynamic data were simulated for a torso phantom, and the effects of attenuation, detector response and scatter were successively included to produce several data sets. The data were reconstructed to obtain both weighted and unweighted least squares solutions, and the kinetic rate parameters for a two-compartment model were estimated. The expected values and standard deviations describing the statistical distribution of parameters that would be estimated from noisy data were calculated analytically. The results of this analysis present several interesting implications for dynamic SPECT. Statistically weighted estimators performed only marginally better than unweighted ones, implying that more computationally efficient unweighted estimators may be appropriate. This also suggests that it may be beneficial to focus future research efforts upon regularization methods with beneficial bias-variance trade-offs. Other aspects of the study describe the fundamental limits of the bias-variance trade-off regarding physical degrading factors and their compensation. The results characterize the effects of attenuation, detector response and scatter, and they are intended to guide future research into dynamic SPECT reconstruction and compensation methods. (author)
Zhang, Yaoju
2007-10-10
A simple and rigorous analytical expression of the propagating field behind an axicon illuminated by an azimuthally polarized beam has been deduced by use of the vector interference theory. This analytical expression can easily be used to calculate accurately the propagation field distribution of azimuthally polarized beams throughout the whole space behind an axicon with any size base angle, not just restricted inside the geometric focal region as does the Fresnel diffraction integral. The numerical results show that the pattern of the beam produced by the azimuthally polarized Gaussian beam that passes through an axicon is a multiring, almost-equal-intensity, and propagation-invariant interference beam in the geometric focal region. The number of bright rings increases with the propagation distance, reaching its maximum at half of the geometric focal length and then decreasing. The intensity of bright rings gradually decreases with the propagation distance in the geometric focal region. However, in the far-field (noninterference) region, only one single-ring pattern is produced and the dark spot size expands rapidly with propagation distance. PMID:17932537
Xu, Yanlong
2015-08-01
The coupled mode theory with coupling of diffraction modes and waveguide modes is usually used in calculations of transmission and reflection coefficients for electromagnetic waves traveling through periodic sub-wavelength structures. In this paper, I extend this method to derive analytical solutions of high-order dispersion relations for shear horizontal (SH) wave propagation in elastic plates with periodic stubs. In the long-wavelength regime, the explicit expression is obtained by this theory and derived specially by employing an effective medium. This indicates that the periodic stubs are equivalent to an effective homogeneous layer in the long-wavelength limit. Notably, in the short-wavelength regime, high-order diffraction modes in the plate and high-order waveguide modes in the stubs are considered, with mode coupling, to compute the band structures. Numerical results of the coupled mode theory fit well with the results of the finite element method (FEM). In addition, the band structures' evolution with the height of the stubs and the thickness of the plate shows clearly that the method can predict the Bragg band gaps, locally resonant band gaps and high-order symmetric and anti-symmetric thickness-twist modes for the periodically structured plates. © 2015 Elsevier B.V.
Calibration and Forward Uncertainty Propagation for Large-eddy Simulations of Engineering Flows
Energy Technology Data Exchange (ETDEWEB)
Templeton, Jeremy Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Blaylock, Myra L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Domino, Stefan P. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hewson, John C. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Kumar, Pritvi Raj [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Ling, Julia [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Najm, Habib N. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Ruiz, Anthony [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Safta, Cosmin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Sargsyan, Khachik [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Stewart, Alessia [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Wagner, Gregory [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2015-09-01
The objective of this work is to investigate the efficacy of using calibration strategies from Uncertainty Quantification (UQ) to determine model coefficients for LES. As the target methods are for engineering LES, uncertainty from numerical aspects of the model must also be quantified. The ultimate goal of this research thread is to generate a cost-versus-accuracy curve for LES such that the cost could be minimized given an accuracy prescribed by an engineering need. Realization of this goal would enable LES to serve as a predictive simulation tool within the engineering design process.
DEFF Research Database (Denmark)
Diky, Vladimir; Chirico, Robert D.; Muzny, Chris;
However, the accuracy of such calculations is generally unknown, which often leads to overdesign of the operational units and results in significant additional cost. TDE provides a tool for the analysis of uncertainty of property calculations for multi-component streams. A process stream in TDE can be......ThermoData Engine (TDE, NIST Standard Reference Databases 103a and 103b) is the first product that implements the concept of Dynamic Data Evaluation in the fields of thermophysics and thermochemistry, which includes maintaining a comprehensive and up-to-date database of experimentally measured...... variations). Predictions can be compared to the available experimental data, and uncertainties are estimated for all efficiency criteria. Calculations of the properties of multi-component streams, including composition at phase equilibria (flash calculations), are at the heart of process simulation engines...
Institute of Scientific and Technical Information of China (English)
ZHAO Yan-Zhong; SUN Hua-Yan; ZHENG Yong-Hui
2011-01-01
Based on the generalized diffraction integral formula and the idea that the angle misalignment of the cat-eye optical lens can be transformed into a displacement misalignment, an approximate analytical propagation formula for Gaussian beams through a cat-eye optical lens under large incidence angle conditions is derived. Numerical results show that the diffraction effect of the apertures of the cat-eye optical lens becomes stronger with increasing incidence angle. The results are also compared with those from an angular spectrum diffraction integral and from experiment to illustrate the applicability and validity of our theoretical formula. It is shown that the approximation is good enough for the application of a cat-eye optical lens with a radius of 20 mm and a propagation distance of 100 m, and the approximation becomes better as the radius of the cat-eye optical lens and the propagation distance increase.
Study of propagation along the body at 60 GHz with analytical models and skin-equivalent phantoms
Valerio, Guido; Chahat, Nacer; Zhadobov, Maxim; Sauleau, Ronan
2013-01-01
Propagation on the surface of the human body is investigated for the first time in the mm-wave frequency range. The study, motivated by the increasing number of applications of body area networks, is performed through an accurate analytical model for the fields excited by a small source in the proximity of the human body. New asymptotic expressions are derived for the fields, uniformly valid for the range of values of the physical and geometrical parameters of interest. The theoretical analys...
Gustafsson, Johan; Brolin, Gustav; Cox, Maurice; Ljungberg, Michael; Johansson, Lena; Sjögreen Gleisner, Katarina
2015-11-01
A computer model of a patient-specific clinical 177Lu-DOTATATE therapy dosimetry system is constructed and used for investigating the variability of renal absorbed dose and biologically effective dose (BED) estimates. As patient models, three anthropomorphic computer phantoms coupled to a pharmacokinetic model of 177Lu-DOTATATE are used. Aspects included in the dosimetry-process model are the gamma-camera calibration via measurement of the system sensitivity, selection of imaging time points, generation of mass-density maps from CT, SPECT imaging, volume-of-interest delineation, calculation of absorbed-dose rate via a combination of local energy deposition for electrons and Monte Carlo simulations of photons, curve fitting and integration to absorbed dose and BED. By introducing variabilities in these steps the combined uncertainty in the output quantity is determined. The importance of different sources of uncertainty is assessed by observing the decrease in standard deviation when removing a particular source. The obtained absorbed dose and BED standard deviations are approximately 6% and slightly higher if considering the root mean square error. The most important sources of variability are the compensation for partial volume effects via a recovery coefficient and the gamma-camera calibration via the system sensitivity.
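The source-removal analysis described above can be mimicked with a generic Monte Carlo sketch. Everything below is illustrative: the four variability sources and their assumed relative standard deviations are stand-ins, not the dosimetry-chain values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Illustrative relative-error sources in a measurement chain (assumed magnitudes).
sources = {
    "calibration": 0.04,   # e.g. system-sensitivity calibration
    "recovery": 0.03,      # e.g. partial-volume recovery coefficient
    "delineation": 0.02,   # e.g. volume-of-interest outline
    "curve_fit": 0.01,     # e.g. time-activity curve fitting
}

def combined_sd(exclude=None):
    """Propagate all (optionally all-but-one) sources multiplicatively."""
    result = np.ones(N)
    for name, sd in sources.items():
        if name != exclude:
            result *= rng.normal(1.0, sd, N)
    return result.std()

total = combined_sd()
# Importance of each source = drop in SD when that source is removed.
for name in sources:
    print(f"{name}: SD falls from {total:.3f} to {combined_sd(exclude=name):.3f}")
```

The ranking that emerges (largest drop when the calibration term is removed) mirrors the paper's qualitative finding that the dominant sources are identified by observing the decrease in standard deviation upon removal.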
Alhassan, Erwin; Duan, Junfeng; Gustavsson, Cecilia; Koning, Arjan; Pomp, Stephan; Rochman, Dimitri; Österlund, Michael
2013-01-01
Analyses are carried out to assess the impact of nuclear data uncertainties on keff for the European Lead Cooled Training Reactor (ELECTRA) using the Total Monte Carlo method. A large number of random Pu-239 ENDF-formatted libraries generated using the TALYS-based system were processed into ACE format with the NJOY99.336 code and used as input to the Serpent Monte Carlo neutron transport code to obtain a distribution in keff. The keff distribution obtained was compared with the latest major nuclear data libraries - JEFF-3.1.2, ENDF/B-VII.1 and JENDL-4.0. A method is proposed for the selection of benchmarks for specific applications using the Total Monte Carlo approach. Finally, an accept/reject criterion was investigated based on chi-square values obtained using the Pu-239 Jezebel criticality benchmark. It was observed that nuclear data uncertainties in keff were reduced considerably, from 748 to 443 pcm, by applying a more rigid acceptance criterion for accepting random files.
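The accept/reject step can be illustrated with a minimal sketch. Only the ~748 pcm spread is taken from the abstract; the benchmark value, its uncertainty and the chi-square cutoff are assumptions for illustration, and the keff samples are drawn synthetically rather than from TALYS/NJOY/Serpent runs.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical keff values, one per random nuclear-data file
# (spread of ~748 pcm as in the abstract; everything else synthetic).
keff = rng.normal(1.000, 748e-5, size=500)

# Accept/reject against a criticality benchmark: keep only files whose
# chi-square with respect to the benchmark keff is below a cutoff.
k_bench, sigma = 1.000, 0.005        # assumed benchmark value / uncertainty
chi2 = ((keff - k_bench) / sigma) ** 2
accepted = keff[chi2 < 1.0]          # rigid acceptance criterion

def pcm(k):
    return k.std() * 1e5             # 1 pcm = 1e-5 in keff

print(f"spread: {pcm(keff):.0f} pcm -> {pcm(accepted):.0f} pcm")
```

Tightening the cutoff shrinks the accepted keff spread, which is the mechanism behind the 748 to 443 pcm reduction reported (the exact numbers depend on the real benchmark data, not this toy).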
Applied Analytical Methods for Solving Some Problems of Wave Propagation in the Coastal Areas
Gagoshidze, Shalva; Kodua, Manoni
2016-04-01
Analytical methods, easy to apply, are proposed for the solution of the following four classical problems of coastline hydromechanics: 1. Refraction of waves on coast slopes of arbitrary steepness; 2. Wave propagation in tapering water areas; 3. Longitudinal waves in open channels; 4. Long waves on uniform and non-uniform flows of water. The first three of these problems are solved by the direct Galerkin-Kantorovich method with a choice of basis functions which completely satisfy all boundary conditions. This approach leads to new evolutionary equations which can be solved asymptotically by the WKB method. The WKB solution of the first problem enables us to easily determine the three-dimensional velocity field and to construct the refraction picture of the wave surface near a coast having an arbitrary angle of slope to the horizon, varying from 0° to 180°. This solution, in particular for a vertical cliff, fully agrees with Stoker's particular but difficult solution. Moreover, it is shown for the first time that our Schrödinger-type evolutionary equation leads to the formation of so-called "potential wells" if the angle of the coast slope to the horizon exceeds 45°, while the angle given at infinity (i.e. at a large distance from the shore) between the wave crests and the coastline exceeds 75°. This theoretical result, expressed in terms of elementary functions, is well consistent with experimental observations and with many aerial photographs of waves in the coastal zones of the oceans [1,2]. For the second problem we introduce the notions of "wide" and "narrow" water areas. It is shown that Green's law on wave height growth holds only for the narrow part of the water area, whereas in the wide part the tapering of the water area leads to an insignificant decrease of the wave height. For the third problem, the bank slopes of trapezoidal channels are assumed to have an arbitrary angle of steepness. So far we have known the
Gates, Robert L
2015-01-01
This work proposes a scheme for significantly reducing the computational complexity of discretized problems involving the non-smooth forward propagation of uncertainty by combining the adaptive hierarchical sparse grid stochastic collocation method (ALSGC) with a hierarchy of successively finer spatial discretizations (e.g. finite elements) of the underlying deterministic problem. To achieve this, we build strongly upon ideas from the Multilevel Monte Carlo method (MLMC), which represents a well-established technique for the reduction of computational complexity in problems affected by both deterministic and stochastic error contributions. The resulting approach is termed the Multilevel Adaptive Sparse Grid Collocation (MLASGC) method. Preliminary results for a low-dimensional, non-smooth parametric ODE problem are promising: the proposed MLASGC method exhibits an error/cost-relation of $\\varepsilon \\sim t^{-0.95}$ and therefore significantly outperforms the single-level ALSGC ($\\varepsilon \\sim t^{-0.65}$) a...
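The multilevel idea that MLASGC builds upon can be sketched with a toy Multilevel Monte Carlo estimator. The "solver" and its O(h) bias below are invented for illustration; the point is the telescoping sum, in which the level corrections have small variance and therefore need few (expensive) fine-level samples.

```python
import numpy as np

rng = np.random.default_rng(2)

def solve(theta, h):
    """Toy 'discretized solver': exact answer theta**2 plus an O(h) bias."""
    return theta**2 + h * theta

def mlmc_estimate(levels, samples):
    """Multilevel estimator: E[P_L] = E[P_0] + sum_l E[P_l - P_(l-1)]."""
    total = 0.0
    for l, n in zip(levels, samples):
        theta = rng.uniform(0.0, 1.0, n)        # stochastic input
        fine = solve(theta, 2.0 ** -(l + 1))
        if l == 0:
            total += fine.mean()
        else:
            # Same samples on the next-coarser grid: the difference has
            # small variance, so few samples suffice at fine levels.
            coarse = solve(theta, 2.0 ** -l)
            total += (fine - coarse).mean()
    return total

# Many cheap coarse-level samples, few expensive fine-level ones.
est = mlmc_estimate(levels=[0, 1, 2], samples=[4000, 1000, 250])
print(est)   # close to E[theta**2] = 1/3, with O(2**-3) residual bias
```

MLASGC replaces the plain Monte Carlo sampling at each level with adaptive sparse-grid collocation, but the cost-reduction mechanism across spatial discretization levels is the same.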
International Nuclear Information System (INIS)
To fulfill the needs of the probabilistic safety assessment for the Angra 1 nuclear power plant, a computer code for performing event tree analyses, ETAP2, has been developed in the PASCAL language for the Burroughs B-6700 computer. The code employs the impact vector method. A dependency matrix is defined which allows for proper consideration of all relevant intersystem dependencies. The analyses are carried out to the subsystem (train or channel) level. The uncertainty analysis is performed on the dominant accident sequences lumped in two different groups: one for assessing the core-degradation-class frequencies and another for obtaining the core-degradation frequency concerning the initiator under analysis. For this purpose we use a discrete Monte Carlo algorithm which is faster than other available algorithms and furnishes reliable results. The loss-of-offsite-power initiator analysis is presented for illustration purposes. The current version of ETAP2 is implemented on the VAX 11/780 computer. (author)
International Nuclear Information System (INIS)
This work uses a new methodology to evaluate DNBR, named mini-RTDP. The standard thermal design procedure (STDP) currently in use establishes a limit design value which cannot be surpassed. This limit value is determined taking into account the uncertainties of the empirical correlation used in the COBRA IIIC/MIT code, modified to Angra-1 conditions. The correlation used is Westinghouse's W-3, and the minimum DNBR (MDNBR) value cannot be less than 1.3. The new methodology reduces the excessive level of conservatism associated with the parameters used in the DNBR calculation, which take their most unfavorable values in the STDP methodology, by using their best-estimate values. The final goal is to obtain a new DNBR design limit which will provide a margin gain due to the more realistic parameter values used in the methodology. (author). 11 refs., 2 tabs
Bilionis, Ilias; Gonzalez, Marcial
2016-01-01
The prohibitive cost of performing Uncertainty Quantification (UQ) tasks with a very large number of input parameters can be addressed, if the response exhibits some special structure that can be discovered and exploited. Several physical responses exhibit a special structure known as an active subspace (AS), a linear manifold of the stochastic space characterized by maximal response variation. The idea is that one should first identify this low dimensional manifold, project the high-dimensional input onto it, and then link the projection to the output. In this work, we develop a probabilistic version of AS which is gradient-free and robust to observational noise. Our approach relies on a novel Gaussian process regression with built-in dimensionality reduction with the AS represented as an orthogonal projection matrix that serves as yet another covariance function hyper-parameter to be estimated from the data. To train the model, we design a two-step maximum likelihood optimization procedure that ensures the ...
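For intuition, the notion of an active subspace can be demonstrated with the classic gradient-based recipe; note that the paper's contribution is precisely a gradient-free, probabilistic alternative, so the sketch below (with an entirely synthetic response) is only the textbook variant.

```python
import numpy as np

rng = np.random.default_rng(5)
d, n = 10, 2000

# Synthetic response f(x) = sin(w.x): it varies only along the direction w,
# i.e. w spans a one-dimensional active subspace (all values are toy choices).
w = np.array([0.8, -0.5, 0.3] + [0.0] * 7)
w /= np.linalg.norm(w)
X = rng.normal(size=(n, d))

# Classic (gradient-based) active-subspace discovery: eigendecompose the
# average outer product of the gradients of f.
grads = np.cos(X @ w)[:, None] * w      # analytic gradient of sin(w.x)
C = grads.T @ grads / n
eigvals, eigvecs = np.linalg.eigh(C)    # ascending eigenvalues
w_hat = eigvecs[:, -1]                  # dominant eigenvector

print(abs(w_hat @ w))                   # alignment with the true direction
```

Once the low-dimensional direction is identified, the high-dimensional input is projected onto it and the projection is linked to the output, which is the step the paper performs inside a Gaussian process with the projection matrix treated as a covariance hyper-parameter.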
Camici, Stefania; Tito Aronica, Giuseppe; Tarpanelli, Angelica; Moramarco, Tommaso
2013-04-01
Hydraulic models are an essential tool in many fields, e.g. civil engineering, flood hazard and risk assessments, evaluation of flood control measures, etc. Nowadays many models of different complexity are available, regarding both the mathematical foundation and the spatial dimensions, and most of them are comparatively easy to operate due to sophisticated tools for model setup and control. However, the calibration of these models is still underdeveloped in contrast to other models such as hydrological models or models used in ecosystem analysis. This has basically two reasons. The first is the lack of data necessary for model calibration: flood events are only rarely monitored, due to the disturbances they inflict and the lack of appropriate measuring equipment in place. The second reason is related to the choice of suitable performance measures for calibrating and evaluating model predictions in a credible and consistent way (and for reducing the uncertainty). This study examines a well-documented flood event of November 2012 in the Paglia river basin (Central Italy). For this area, a detailed description of the main channel morphology, obtained from accurate topographical surveys and from a DEM with a spatial resolution of 2 m, was available for the post-event analysis, along with several points within the floodplain areas at which the maximum water level had been measured. On the basis of this information, a two-dimensional inertial finite-element hydraulic model was set up and calibrated using different performance measures. The Manning roughness coefficients obtained from the different calibrations were then used for the delineation of inundation maps, including their uncertainty. The water levels of three hydrometric stations and the flooded area extent, derived from video recordings taken the day after the flood event, were used for the validation of the model.
International Nuclear Information System (INIS)
This paper summarizes the results of the dynamic response analysis of the Zion reactor containment building using three different soil-structure interaction (SSI) analytical procedures: the substructure method, CLASSI; the equivalent linear finite element approach, ALUSH; and the nonlinear finite element procedure, DYNA3D. Uncertainties in analyzing a soil-structure system due to the SSI analysis procedures were investigated. Responses at selected locations in the structure were compared through peak accelerations and response spectra.
Directory of Open Access Journals (Sweden)
Ramin Shamshiri
2014-01-01
Wave propagation and heat distribution are both governed by second-order linear constant-coefficient partial differential equations, yet their solutions have very different properties. This study presents a comprehensive comparison between the hyperbolic wave equation and the parabolic heat equation. Issues such as conservation of the wave profile versus averaging, transport of information, finite versus infinite speed of propagation, time reversibility versus irreversibility, and propagation of singularities versus instantaneous smoothing are addressed and followed by examples and graphical evidence from computer simulations to support the arguments.
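The contrast between instantaneous smoothing and profile-conserving transport can be reproduced with a few lines of finite differences. The grid, time steps and initial bump below are arbitrary illustrative choices.

```python
import numpy as np

# 1-D periodic grid with a smooth initial bump (all parameters arbitrary).
n = 200
dx = 1.0 / n
x = np.arange(n) * dx
u0 = np.exp(-((x - 0.5) / 0.05) ** 2)

def heat_step(u, dt, k=1.0):
    """Explicit step of u_t = k*u_xx (stable for k*dt/dx**2 <= 1/2)."""
    return u + k * dt / dx**2 * (np.roll(u, 1) - 2 * u + np.roll(u, -1))

def wave_step(u, u_prev, dt, c=1.0):
    """Leapfrog step of u_tt = c**2 * u_xx (stable for c*dt/dx <= 1)."""
    lap = np.roll(u, 1) - 2 * u + np.roll(u, -1)
    return 2 * u - u_prev + (c * dt / dx) ** 2 * lap

u_heat = u0.copy()
for _ in range(500):                     # parabolic: immediate smoothing
    u_heat = heat_step(u_heat, dt=0.4 * dx**2)

u_wave, u_prev = u0.copy(), u0.copy()    # zero initial velocity
for _ in range(100):                     # hyperbolic: profile is transported
    u_wave, u_prev = wave_step(u_wave, u_prev, dt=0.5 * dx), u_wave

# The heat peak decays strongly; the wave splits into two half-amplitude
# pulses whose shape (and extremes) are essentially preserved.
print(u_heat.max(), u_wave.max())
```

Running longer makes the contrast starker: the heat solution keeps flattening toward its average, while the wave pulses keep circulating around the periodic domain.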
International Nuclear Information System (INIS)
One of the most important thermalhydraulics safety parameters is the DNBR (Departure from Nucleate Boiling Ratio). The current methodology in use at Eletronuclear to determine DNBR is extremely conservative and may result in penalties to the reactor power due to an increased plugging level of steam generator tubes. This work uses a new methodology to evaluate DNBR, named mini-RTDP. The standard methodology (STDP) currently in use establishes a limit design value which cannot be surpassed. This limit value is determined taking into account the uncertainties of the empirical correlation used in the COBRA IIIC/MIT code, modified to Angra 1 conditions. The correlation used is Westinghouse's W-3, and the minimum DNBR (MDNBR) value cannot be less than 1.3. The new methodology reduces the excessive level of conservatism associated with the parameters used in the DNBR calculation, which take their most unfavorable values in the STDP methodology, by using their best-estimate values. The final goal is to obtain a new DNBR design limit which will provide a margin gain due to the more realistic parameter values used in the methodology. (author)
Stolarski, R. S.; Douglass, A. R.
1986-01-01
Models of stratospheric photochemistry are generally tested by comparing their predictions for the composition of the present atmosphere with measurements of species concentrations. These models are then used to make predictions of the atmospheric sensitivity to perturbations. Here the problem of the sensitivity of such a model to chlorine perturbations ranging from the present influx of chlorine-containing compounds to several times that influx is addressed. The effects of uncertainties in input parameters, including reaction rate coefficients, cross sections, solar fluxes, and boundary conditions, are evaluated using a Monte Carlo method in which the values of the input parameters are randomly selected. The results are probability distributions for present atmospheric concentrations and for calculated perturbations due to chlorine from fluorocarbons. For more than 300 Monte Carlo runs the calculated ozone perturbation for continued emission of fluorocarbons at today's rates had a mean value of -6.2 percent, with a 1-sigma width of 5.5 percent. Using the same runs but only allowing the cases in which the calculated present atmosphere values of NO, NO2, and ClO at 25 km altitude fell within the range of measurements yielded a mean ozone depletion of -3 percent, with a 1-sigma deviation of 2.2 percent. The model showed a nonlinear behavior as a function of added fluorocarbons. The mean of the Monte Carlo runs was less nonlinear than the model run using the mean values of the input parameters.
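The filtering step — retaining only Monte Carlo runs whose simulated present-day observables fall within measured ranges — can be sketched with a toy model. The model relations, parameter distribution and "measurement bounds" below are synthetic stand-ins, not the photochemical model's.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 5000

# Toy model: one uncertain rate parameter drives both a "present-day
# observable" and a "perturbation response" (both relations invented).
log_rate = rng.normal(0.0, 0.5, N)         # uncertain input, log units
present = np.exp(log_rate)                 # simulated observable
response = -6.0 * np.exp(0.8 * log_rate)   # simulated % perturbation

sd_all = response.std()                    # unconstrained spread

# Keep only runs whose simulated observable matches the "measurements":
ok = (present > 0.7) & (present < 1.4)     # assumed measurement bounds
sd_ok = response[ok].std()

print(f"all runs   : {response.mean():.1f} +/- {sd_all:.1f} %")
print(f"constrained: {response[ok].mean():.1f} +/- {sd_ok:.1f} %")
```

Conditioning on the observable narrows the predicted-perturbation distribution, which is the same mechanism by which the NO/NO2/ClO screening reduced the 1-sigma width from 5.5 to 2.2 percent.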
Kuzmin, Evgeny Anatol'evich
2012-01-01
Uncertainty and certainty are integral properties of organizational-economic systems. The existence and development of any object under stochastic conditions is evidently impossible without the presence of both uncertain conditions and the certain factors that determine the subsequent states of the organizational-economic system. The methodological apparatus for estimating uncertainty and certainty, presented by the author in an earlier publication...
International Nuclear Information System (INIS)
horizontal branch (HB) stars, but not by appealing to inadequacies in either theoretical stellar atmospheres or canonical evolutionary phases (e.g., the main-sequence turnoff). The different model predictions in the near-IR for intermediate age systems are due to different treatments of the thermally pulsating asymptotic giant branch stellar evolutionary phase. We emphasize that due to a lack of calibrating star cluster data in regions of the metallicity-age plane relevant for galaxies, all of these models continue to suffer from serious uncertainties that are difficult to quantify.
Uncertainty in soil-structure interaction analysis arising from differences in analytical techniques
International Nuclear Information System (INIS)
This study addresses uncertainties arising from variations in different modeling approaches to soil-structure interaction of massive structures at a nuclear power plant. To perform a comprehensive systems analysis, it is necessary to quantify, for each phase of the traditional analysis procedure, both the realistic seismic response and the uncertainties associated with them. In this study two linear soil-structure interaction techniques were used to analyze the Zion, Illinois nuclear power plant: a direct method using the FLUSH computer program and a substructure approach using the CLASSI family of computer programs. In-structure response from two earthquakes, one real and one synthetic, was compared. Structure configurations from relatively simple to complicated multi-structure cases were analyzed. The resulting variations help quantify uncertainty in structure response due to analysis procedures
An Analytic Solution to the Propagation of Cylindrical Blast Waves in a Radiative Gas
Directory of Open Access Journals (Sweden)
B.G Verma
1977-01-01
In this paper, we have obtained a set of non-similarity solutions in closed form for the propagation of a cylindrical blast wave in a radiative gas. An explosion in a gas of constant density and pressure has been considered by assuming the existence of an initial uniform magnetic field in the axial direction. The disturbance is supposed to be headed by a shock surface of variable strength, and the total energy of the wave varies with time.
Accounting for the analytical properties of the quark propagator from Dyson-Schwinger equation
Dorkin, S M; Kampfer, B
2014-01-01
An approach based on combined solutions of the Bethe-Salpeter (BS) and Dyson-Schwinger (DS) equations within the ladder-rainbow approximation in the presence of singularities is proposed to describe the meson spectrum as quark-antiquark bound states. We consistently implement into the BS equation the quark propagator functions from the DS equation, with and without pole-like singularities, and show that, by knowing the precise positions of the poles and their residues, one is able to develop reliable methods of obtaining finite interaction BS kernels and to solve the BS equation numerically. We show that, for bound states with masses $M > 1$ GeV, the propagator functions reveal pole-like structures. Consequently, for each type of meson (unflavored, strange and charmed) we analyze the relevant intervals of $M$ where the pole-like singularities of the corresponding quark propagator influence the solution of the BS equation, and develop a framework within which they can be consistently accounted for. The...
Energy Technology Data Exchange (ETDEWEB)
Raupach, Rainer; Flohr, Thomas G, E-mail: rainer.raupach@siemens.com [Siemens AG Healthcare Sector, H IM CT R and D PA, Siemensstrasse 1, D-91301 Forchheim (Germany)
2011-04-07
We analyze the signal and noise propagation of differential phase-contrast computed tomography (PCT) compared with conventional attenuation-based computed tomography (CT) from a theoretical point of view. This work focuses on grating-based differential phase-contrast imaging. A mathematical framework is derived that is able to analytically predict the relative performance of both imaging techniques in the sense of the relative contrast-to-noise ratio for the contrast of any two materials. Two fundamentally different properties of PCT compared with CT are identified. First, the noise power spectra show qualitatively different characteristics, implying a resolution-dependent performance ratio. The break-even point is derived analytically as a function of system parameters such as geometry and visibility. A superior performance of PCT compared with CT can only be achieved at a sufficiently high spatial resolution. Second, due to the periodicity of phase information, which is non-ambiguous only in a bounded interval, statistical phase wrapping can occur. This effect causes a collapse of information propagation for low signals, which limits the applicability of phase-contrast imaging at low dose.
International Nuclear Information System (INIS)
A characteristic that sets radioactivity measurements apart from most spectrometries is that the precision of a single determination can be estimated from Poisson statistics. This easily calculated counting uncertainty permits the detection of other sources of uncertainty by comparing observed with a priori precision. A good way to test the many underlying assumptions in radiochemical measurements is to strive for high accuracy. For example, a measurement by instrumental neutron activation analysis (INAA) of gold film thickness in our laboratory revealed the need for pulse pileup correction even at modest dead times. Recently, the International Organization for Standardization (ISO) and other international bodies have formalized the quantitative determination and statement of uncertainty so that the weaknesses of each measurement are exposed for improvement. In the INAA certification measurement of ion-implanted arsenic in silicon (Standard Reference Material 2134), we recently achieved an expanded (95 % confidence) relative uncertainty of 0.38 % for 90 ng of arsenic per sample. A complete quantitative error analysis was performed. This measurement meets the CCQM definition of a primary ratio method. (author)
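The comparison of observed with a priori Poisson precision can be illustrated numerically. The count level and the 1 % instability below are assumed purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

# A priori (Poisson) relative uncertainty of a single counting measurement:
counts = 40_000
rel_u_poisson = 1.0 / np.sqrt(counts)      # sqrt(N)/N = 0.5 % here

# Simulated replicates: pure Poisson counting vs. Poisson counting with an
# extra 1 % multiplicative instability (an assumed, illustrative defect).
pure = rng.poisson(counts, size=2000)
noisy = rng.poisson(counts * rng.normal(1.0, 0.01, size=2000))

# Comparing observed with a priori precision exposes the hidden source:
# the pure replicates scatter by ~0.5 %, the unstable ones by more.
print(f"a priori   : {100 * rel_u_poisson:.2f} %")
print(f"pure       : {100 * pure.std() / pure.mean():.2f} %")
print(f"with drift : {100 * noisy.std() / noisy.mean():.2f} %")
```

An observed scatter significantly exceeding 1/sqrt(N) is exactly the signal that a non-Poisson uncertainty source (pileup, instability, geometry) is present and needs correction.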
International Nuclear Information System (INIS)
The relativistic dispersion relation of a nearly perpendicularly injected electron cyclotron wave is solved in different regions. The coupling of the O-mode and the X-mode is described by a correct expression qualitatively different from that obtained from the non-relativistic approximation. The damping factor shows that wave absorption is due to two mechanisms: the relativistic O-mode damping and the coupled X-mode damping. Analytic expressions for both damping mechanisms are obtained.
Addressing analytical uncertainties in the determination of trichloroacetic acid in soil
Dickey, Catherine A; Heal, Kate V.; Cape, Neil; Stidson, Ruth; Reeves, Nicholas; Heal, Mathew R.
2005-01-01
Soil is an important compartment in the environmental cycling of trichloroacetic acid (TCA), but soil TCA concentration is a methodologically defined quantity; analytical methods either quantify TCA in an aqueous extract of the soil, or thermally decarboxylate TCA to chloroform in the whole soil sample. The former may underestimate the total soil TCA, whereas the latter may overestimate TCA if other soil components (e.g. humic material) liberate chloroform under the decarboxylation conditions...
Directory of Open Access Journals (Sweden)
M. W. Rotach
2012-08-01
D-PHASE was a Forecast Demonstration Project of the World Weather Research Programme (WWRP) related to the Mesoscale Alpine Programme (MAP). Its goal was to demonstrate the reliability and quality of operational forecasting of orographically influenced (determined) precipitation in the Alps and its consequences on the distribution of run-off characteristics. A special focus was, of course, on heavy-precipitation events.
The D-PHASE Operations Period (DOP) ran from June to November 2007, during which an end-to-end forecasting system was operated covering many individual catchments in the Alps, with their water authorities, civil protection organizations or other end users. The forecasting system's core piece was a Visualization Platform where precipitation and flood warnings from some 30 atmospheric and 7 hydrological models (both deterministic and probabilistic) and corresponding model fields were displayed in uniform and comparable formats. Also, meteograms, nowcasting information and end user communication were made available to all the forecasters, users and end users. D-PHASE information was assessed and used by some 50 different groups, ranging from atmospheric forecasters to civil protection authorities or water management bodies.
In the present contribution, D-PHASE is briefly presented along with its outstanding scientific results and, in particular, the lessons learnt with respect to uncertainty propagation. A focus is thereby on the transfer of ensemble prediction information into the hydrological community and its use with respect to other aspects of societal impact. Objective verification of forecast quality is contrasted with subjective quality assessments made during the project (end user workshops, questionnaires), and some general conclusions concerning forecast demonstration projects are drawn.
International Nuclear Information System (INIS)
Starting from the path-integral representation for the electron propagator without fermion loops in QED, we analytically investigate the strong-coupling behavior in an arbitrary background electromagnetic field through a series expansion in powers of 1/e. Contrary to the perturbation theory expansion in e the new series only contains positive powers of the derivative operator p. Due to infrared singularities in the path integral the series does not exist beyond the lowest orders, although one can build a systematic expansion in powers of p (not 1/e) which can be calculated up to any order. To handle infinities we regularize using a Pauli-Villars approach. The introduction of fermion loops would not correspond to higher orders in 1/e, so a priori our results are only pertinent to the sector of QED we have chosen. 17 refs., 1 fig
Carlotti, A.; Pueyo, L.
2011-10-01
Since the radius of curvature of a mirror cannot be zero, the apodization that is created by a phase-induced amplitude apodizer (PIAA) formed by a pair of mirrors cannot be zero at the edge of the pupil. If contrasts lower than 10^-10 must be obtained, then an additional apodizer must be used with the PIAA mirrors. This has a consequence on the throughput of the system, as well as on its inner working angle (IWA). The intensity distribution in the final pupil plane computed in the ray-optics approximation is misleading, and diffraction must be taken into account to evaluate the true performance of the system. We compute the propagated electric field using two different tools: the semi-analytical model developed by Pueyo and a purely numerical model based on the Huygens integral. It is shown that for higher Fresnel numbers, the agreement between the beams computed using both propagators is stronger, and that for too low Fresnel numbers, the contrast computed using the semi-analytical model can be 2 orders of magnitude higher than the one computed by a numerical evaluation of the Huygens integral. We then study the impact of surface aberrations introduced on the mirrors of the PIAA. The surface quality of the mirrors limits the performance of the system, and the IWA increases linearly with the root-mean-square (RMS) of the aberrations. For a typical set of mirrors, errors of 10 nm RMS can increase the IWA by 0.5 to 1 λ/D for a contrast of 10^-10, and, in the case of a contrast of 10^-8, the IWA is maintained at 2 λ/D as long as the errors are smaller than 20 nm RMS.
International Nuclear Information System (INIS)
Highlights: ► Response of RC structures to macrocell corrosion of a rebar is studied analytically. ► The problem is solved prior to the onset of microcrack propagation. ► Suitable Love's potential functions are used to study the steel-rust-concrete media. ► The role of crucial factors on the time of onset of concrete cracking is examined. ► The effect of vital factors on the maximum radial stress of concrete is explored. - Abstract: Assessment of the macrocell corrosion which deteriorates reinforced concrete (RC) structures has attracted the attention of many researchers during recent years. In this type of rebar corrosion, the reduction in cross-section of the rebar is significantly accelerated due to the large ratio of the cathode's area to the anode's area. In order to examine the problem, an analytical solution is proposed for prediction of the response of the RC structure from the time of steel depassivation to the stage just prior to the onset of microcrack propagation. To this end, a circular cylindrical RC member under axisymmetric macrocell corrosion of the reinforcement is considered. Both cases of symmetric and asymmetric rebar corrosion along the length of the anode zone are studied. According to experimentally observed data, corrosion products are modeled as a thin layer with a nonlinear stress–strain relation. The exact expressions of the elastic fields associated with the steel and concrete media are obtained using Love's potential function. By imposing the boundary conditions, the resulting set of nonlinear equations is solved in each time step by Newton's method. The effects of the key parameters which have a dominating role in the time of the onset of concrete cracking and the maximum radial stress field of the concrete are examined.
Valier-Brasier, Tony; Conoir, Jean-Marc; Coulouvrat, François; Thomas, Jean-Louis
2015-10-01
Sound propagation in dilute suspensions of small spheres is studied using two models: a hydrodynamic model based on the coupled phase equations and an acoustic model based on the ECAH (ECAH: Epstein-Carhart-Allegra-Hawley) multiple scattering theory. The aim is to compare both models through the study of three fundamental kinds of particles: rigid particles, elastic spheres, and viscous droplets. The hydrodynamic model is based on a Rayleigh-Plesset-like equation generalized to elastic spheres and viscous droplets. The hydrodynamic forces for elastic spheres are introduced by analogy with those of droplets. The ECAH theory is also modified in order to take into account the velocity of rigid particles. Analytical calculations performed for long wavelength, low dilution, and weak absorption in the ambient fluid show that both models are strictly equivalent for the three kinds of particles studied. The analytical calculations show that dilatational and translational mechanisms are modeled in the same way by both models. The effective parameters of dilute suspensions are also calculated. PMID:26520342
Directory of Open Access Journals (Sweden)
Sergey F Pravdin
Full Text Available We develop a numerical approach based on our recent analytical model of fiber structure in the left ventricle of the human heart. A special curvilinear coordinate system is proposed to analytically include realistic ventricular shape and myofiber directions. With this anatomical model, electrophysiological simulations can be performed on a rectangular coordinate grid. We apply our method to study the effect of fiber rotation and electrical anisotropy of cardiac tissue (i.e., the ratio of the conductivity coefficients along and across the myocardial fibers) on wave propagation using the ten Tusscher-Panfilov (2006) ionic model for human ventricular cells. We show that fiber rotation increases the speed of cardiac activation and attenuates the effects of anisotropy. Our results show that the fiber rotation in the heart is an important factor underlying cardiac excitation. We also study scroll wave dynamics in our model and show the drift of a scroll wave filament whose velocity depends non-monotonically on the fiber rotation angle; the period of scroll wave rotation decreases with an increase of the fiber rotation angle; an increase in anisotropy may cause the breakup of a scroll wave, similar to the mother rotor mechanism of ventricular fibrillation.
Analytic result for the one-loop scalar pentagon integral with massless propagators
International Nuclear Information System (INIS)
The method of dimensional recurrences proposed by one of the authors (O. V. Tarasov, 1996) is applied to the evaluation of the pentagon-type scalar integral with on-shell external legs and massless internal lines. For the first time, an analytic result valid for arbitrary space-time dimension d and five arbitrary kinematic variables is presented. An explicit expression in terms of the Appell hypergeometric function F3 and the Gauss hypergeometric function 2F1, both admitting one-fold integral representations, is given. In the case when one kinematic variable vanishes, the integral reduces to a combination of Gauss hypergeometric functions 2F1. For the case when one scalar invariant is large compared to the others, the asymptotic values of the integral in terms of Gauss hypergeometric functions 2F1 are presented in d=2-2ε, 4-2ε, and 6-2ε dimensions. For multi-Regge kinematics, the asymptotic value of the integral in d=4-2ε dimensions is given in terms of the Appell function F3 and the Gauss hypergeometric function 2F1. (orig.)
Post van der Burg, Max; Cullinane Thomas, Catherine; Holcombe, Tracy R.; Nelson, Richard D.
2016-01-01
The Landscape Conservation Cooperatives (LCCs) are a network of partnerships throughout North America that are tasked with integrating science and management to support more effective delivery of conservation at a landscape scale. In order to achieve this integration, some LCCs have adopted the approach of providing their partners with better scientific information in an effort to facilitate more effective and coordinated conservation decisions. Taking this approach has led many LCCs to begin funding research to provide the information for improved decision making. To ensure that funding goes to research projects with the highest likelihood of leading to more integrated broad scale conservation, some LCCs have also developed approaches for prioritizing which information needs will be of most benefit to their partnerships. We describe two case studies in which decision analytic tools were used to quantitatively assess the relative importance of information for decisions made by partners in the Plains and Prairie Potholes LCC. The results of the case studies point toward a few valuable lessons in terms of using these tools with LCCs. Decision analytic tools tend to help shift focus away from research oriented discussions and toward discussions about how information is used in making better decisions. However, many technical experts do not have enough knowledge about decision making contexts to fully inform the latter type of discussion. When assessed in the right decision context, however, decision analyses can point out where uncertainties actually affect optimal decisions and where they do not. This helps technical experts understand that not all research is valuable in improving decision making. But perhaps most importantly, our results suggest that decision analytic tools may be more useful for LCCs as a way of developing integrated objectives for coordinating partner decisions across the landscape, rather than simply ranking research priorities.
Karakoylu, E.; Franz, B.
2016-01-01
A first attempt at quantifying uncertainties in ocean remote sensing reflectance satellite measurements, based on 1000 Monte Carlo iterations. The data source is a SeaWiFS 4-day composite from 2003. The uncertainty is for remote sensing reflectance (Rrs) at 443 nm.
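The brute-force Monte Carlo propagation described in this record can be sketched in a few lines: draw the uncertain inputs many times, push each draw through the retrieval, and take the sample spread as the uncertainty. The radiance and irradiance values below are invented placeholders for illustration, not SeaWiFS quantities.

```python
import numpy as np

rng = np.random.default_rng(42)

def rrs(lw, f0):
    """Remote sensing reflectance: water-leaving radiance over irradiance."""
    return lw / f0

n = 1000                              # iterations, as in the record above
lw = rng.normal(1.0, 0.05, n)         # water-leaving radiance (placeholder units)
f0 = rng.normal(180.0, 2.0, n)        # solar irradiance (placeholder units)

samples = rrs(lw, f0)
mean, std = samples.mean(), samples.std(ddof=1)
print(f"Rrs = {mean:.5f} +/- {std:.5f}")
```

The sample standard deviation converges to the propagated uncertainty as the number of iterations grows; 1000 draws already pins it to a few percent.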
Directory of Open Access Journals (Sweden)
B. Scherllin-Pirscher
2011-05-01
Full Text Available Due to the measurement principle of the radio occultation (RO) technique, RO data are highly suitable for climate studies. Single RO profiles can be used to build climatological fields of different atmospheric parameters like bending angle, refractivity, density, pressure, geopotential height, and temperature. RO climatologies are affected by random (statistical) errors, sampling errors, and systematic errors, yielding a total climatological error. Based on empirical error estimates, we provide a simple analytical error model for these error components, which accounts for vertical, latitudinal, and seasonal variations. The vertical structure of each error component is modeled as constant around the tropopause region. Above this region the error increases exponentially; below, the increase follows an inverse height power-law. The statistical error strongly depends on the number of measurements. It is found to be the smallest error component for monthly mean 10° zonal mean climatologies with more than 600 measurements per bin. Due to the small atmospheric variability, the sampling error is found to be smallest at low latitudes equatorwards of 40°. Beyond 40°, this error increases roughly linearly, with a stronger increase in hemispheric winter than in hemispheric summer. The sampling error model accounts for this hemispheric asymmetry. We recommend subtracting the sampling error when using RO climatologies for climate research, since the residual sampling error remaining after such subtraction is estimated to be 50 % of the sampling error for bending angle and 30 % or less for the other atmospheric parameters. The systematic error accounts for potential residual biases in the measurements as well as in the retrieval process and generally dominates the total climatological error. Overall the total error in monthly means is estimated to be smaller than 0.07 % in refractivity and 0.15 K in temperature at low to mid latitudes, increasing towards
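The piecewise vertical structure described above (constant in a band around the tropopause, exponential growth above, inverse height power-law below) can be sketched as a small function. All parameter values here are invented placeholders chosen only to show the shape, not the values fitted in the paper.

```python
import numpy as np

def error_profile(z, s0=0.1, z_bot=8.0, z_top=15.0, h_scale=10.0, p=1.5):
    """Toy vertical error model (z in km): constant core, exponential above,
    inverse power-law below. All parameters are illustrative assumptions."""
    z = np.asarray(z, dtype=float)
    err = np.full_like(z, s0)                    # constant around the tropopause
    above = z > z_top
    below = z < z_bot
    err[above] = s0 * np.exp((z[above] - z_top) / h_scale)   # exponential growth
    err[below] = s0 * (z_bot / z[below]) ** p                # inverse power-law
    return err

z = np.array([2.0, 10.0, 25.0])
print(error_profile(z))   # larger at 2 km and 25 km than in the core region
```

Fitting `s0`, the band edges, the scale height and the power-law exponent per latitude band and season is what turns this shape into the full analytical error model of the paper.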
The uncertainty of the half-life
Pommé, S.
2015-06-01
Half-life measurements of radionuclides are undeservedly perceived as ‘easy’ and the experimental uncertainties are commonly underestimated. Data evaluators, scanning the literature, are faced with bad documentation, lack of traceability, incomplete uncertainty budgets and discrepant results. Poor control of uncertainties has its implications for the end-user community, varying from limitations to the accuracy and reliability of nuclear-based analytical techniques to the fundamental question whether half-lives are invariable or not. This paper addresses some issues from the viewpoints of the user community and of the decay data provider. It addresses the propagation of the uncertainty of the half-life in activity measurements and discusses different types of half-life measurements, typical parameters influencing their uncertainty, a tool to propagate the uncertainties and suggestions for a more complete reporting style. Problems and solutions are illustrated with striking examples from literature.
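The propagation of half-life uncertainty into a decay-corrected activity that the record discusses follows from first-order differentiation of the exponential decay law: the relative uncertainty of the decay factor grows linearly with the elapsed time measured in half-lives. A minimal sketch, with illustrative numbers (the Am-241 half-life value is a commonly cited one, used here only as an example):

```python
import math

def decay_factor(t, half_life):
    """exp(-ln2 * t / T) for elapsed time t and half-life T (same units)."""
    return math.exp(-math.log(2.0) * t / half_life)

def rel_unc_decay_factor(t, half_life, u_half_life):
    # d(ln f)/dT = ln2 * t / T^2  ->  u_f / f = ln2 * (t/T) * (u_T/T)
    return math.log(2.0) * (t / half_life) * (u_half_life / half_life)

T, uT = 432.2, 0.7     # e.g. Am-241 half-life in years and its uncertainty
t = 50.0               # decay time in years
f = decay_factor(t, T)
rel_u = rel_unc_decay_factor(t, T, uT)
print(f"decay factor {f:.4f}, relative uncertainty {100 * rel_u:.3f} %")
```

The linear growth with `t/T` is why poorly controlled half-life uncertainties matter most for long decay-correction intervals.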
Pedroni, Nicola; Zio, Enrico
2012-01-01
Risk analysis models describing aleatory (i.e., random) events contain parameters (e.g., probabilities, failure rates, ...) that are epistemically-uncertain, i.e., known with poor precision. Whereas aleatory uncertainty is always described by probability distributions, epistemic uncertainty may be represented in different ways (e.g., probabilistic or possibilistic), depending on the information and data available. The work presented in this paper addresses the issue of accounting for (in)depe...
Jumper, Kevin; Fisher, Robert
2012-03-01
Type Ia supernovae are astronomical events in which a white dwarf, the cold remnant of a star that has exhausted its hydrogen fuel, detonates and briefly produces an explosion brighter than most galaxies. Many researchers think that they could occur as the white dwarf approaches a critical mass of 1.4 solar masses by accreting matter from a companion main sequence star, a scenario that is referred to as the single-degenerate channel. Assuming such a progenitor, we construct a semi-analytic model of the propagation of a flame bubble ignited at a single off-center point within the white dwarf. The bubble then rises under the influences of buoyancy and drag, burning the surrounding fuel material in a process called deflagration. We contrast the behavior of the deflagration phase in a physically realistic high Reynolds number regime with the low Reynolds number regimes inherent to three-dimensional simulations, which are a consequence of numerical viscosity. Our work may help validate three-dimensional deflagration results over a range of initial conditions.
Goulden, T.; Hopkinson, C.
2013-12-01
The quantification of LiDAR sensor measurement uncertainty is important for evaluating the quality of derived DEM products, compiling risk assessments of management decisions based on LiDAR information, and enhancing LiDAR mission planning capabilities. Current quality assurance estimates of LiDAR measurement uncertainty are limited to post-survey empirical assessments or vendor estimates from commercial literature. Empirical evidence can provide valuable information for the performance of the sensor in validated areas; however, it cannot characterize the spatial distribution of measurement uncertainty throughout the extensive coverage of typical LiDAR surveys. Vendor advertised error estimates are often restricted to strict and optimal survey conditions, resulting in idealized values. Numerical modeling of individual pulse uncertainty provides an alternative method for estimating LiDAR measurement uncertainty. LiDAR measurement uncertainty is theoretically assumed to fall into three distinct categories: 1) sensor sub-system errors, 2) terrain influences, and 3) vegetative influences. This research details the procedures for numerical modeling of measurement uncertainty from the sensor sub-system (GPS, IMU, laser scanner, laser ranger) and terrain influences. Results show that errors tend to increase as the laser scan angle, altitude or laser beam incidence angle increase. An experimental survey over a flat and paved runway site, performed with an Optech ALTM 3100 sensor, showed an increase in modeled vertical errors of 5 cm, at a nadir scan orientation, to 8 cm at scan edges; for an aircraft altitude of 1200 m and half scan angle of 15°. In a survey with the same sensor, at a highly sloped glacial basin site absent of vegetation, modeled vertical errors reached over 2 m. Validation of error models within the glacial environment, over three separate flight lines, respectively showed 100%, 85%, and 75% of elevation residuals fell below error predictions. Future
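The trend reported above (vertical errors growing from nadir toward the scan edges) can be illustrated with a toy first-order propagation of range and pointing uncertainty over flat terrain. The uncertainty values below are assumptions for illustration only, not Optech ALTM 3100 specifications, and real pulse error models include many more sub-system terms.

```python
import math

def vertical_error(h, theta_deg, u_range=0.02, u_angle_rad=5e-5):
    """Toy 1-sigma vertical error (m) over flat ground at altitude h (m)
    and scan angle theta, combining range and pointing contributions."""
    theta = math.radians(theta_deg)
    r = h / math.cos(theta)                        # slant range to the ground
    dz_range = math.cos(theta) * u_range           # range error projected to z
    dz_angle = r * math.sin(theta) * u_angle_rad   # pointing error lever arm
    return math.hypot(dz_range, dz_angle)          # quadrature sum

for angle in (0.0, 15.0):
    print(f"{angle:4.1f} deg -> {100 * vertical_error(1200.0, angle):.2f} cm")
```

The pointing term grows with the slant-range lever arm `r * sin(theta)`, which is why the modeled error increases with both scan angle and altitude.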
International Nuclear Information System (INIS)
A new methodology, referred to as manufacturing and technological parameters uncertainty quantification (MTUQ), is under development at Paul Scherrer Institut (PSI). Based on uncertainty and global sensitivity analysis methods, MTUQ aims at advancing the state-of-the-art for the treatment of geometrical/material uncertainties in light water reactor computations, using the MCNPX Monte Carlo neutron transport code. The development is currently focused primarily on criticality safety evaluations (CSE). In that context, the key components are a dedicated modular interface with the MCNPX code and a user-friendly interface to model functional relationships between system variables. A unique feature is an automatic capability to parameterize variables belonging to so-called “repeated structures” such as to allow for perturbations of each individual element of a given system modelled with MCNPX. Concerning the statistical analysis capabilities, these are currently implemented through an interface with the ROOT platform to handle the random sampling design. This paper presents the current status of the MTUQ methodology development and a first assessment of an ongoing Organisation for Economic Co-operation and Development/Nuclear Energy Agency (OECD/NEA) benchmark dedicated to uncertainty analyses for CSE. The presented results illustrate the overall capabilities of MTUQ and underline its relevance in predicting more realistic results compared to a methodology previously applied at PSI for this particular benchmark. (author)
Thelen, Brian J.; Rickerd, Chris J.; Burns, Joseph W.
2014-06-01
With the many new remote sensing modalities available, offering ever-increasing capabilities, there is a constant desire to extend the current state of the art in physics-based feature extraction and to introduce new and innovative techniques that enable exploitation within and across modalities, i.e., fusion. A key component of this process is finding the associated features from the various imaging modalities that provide key information in terms of exploitative fusion. Further, it is desired to have an automatic methodology for assessing the information in the features from the various imaging modalities, in the presence of uncertainty. In this paper we propose a novel approach for assessing, quantifying, and isolating the information in the features via a joint statistical modeling of the features with the Gaussian Copula framework. This framework allows for very general modeling of the distributions of the individual features while still modeling the conditional dependence between the features, and the final output is a relatively accurate estimate of the information-theoretic J-divergence metric, which is directly related to discriminability. A very useful aspect of this approach is that it can be used to assess which features are most informative, and what the information content is as a function of key uncertainties (e.g., geometry) and collection parameters (e.g., SNR and resolution). We show some results of applying the Gaussian Copula framework and estimating the J-divergence on HRR data as generated from the AFRL public release data set known as the Backhoe Data Dome.
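For intuition on the J-divergence metric mentioned above, the symmetrized Kullback-Leibler divergence has a closed form for univariate Gaussians. This is only a sanity-check sketch of the metric itself, not the Gaussian Copula estimator of the paper, and the means and standard deviations are invented.

```python
import math

def kl_gauss(m1, s1, m2, s2):
    """KL(p||q) for univariate Gaussians p=N(m1,s1^2), q=N(m2,s2^2)."""
    return math.log(s2 / s1) + (s1**2 + (m1 - m2)**2) / (2 * s2**2) - 0.5

def j_divergence(m1, s1, m2, s2):
    """Symmetrized KL: J = KL(p||q) + KL(q||p)."""
    return kl_gauss(m1, s1, m2, s2) + kl_gauss(m2, s2, m1, s1)

print(j_divergence(0.0, 1.0, 0.0, 1.0))   # identical classes -> 0.0
print(j_divergence(0.0, 1.0, 2.0, 1.0))   # well-separated classes -> 4.0
```

Larger J means the two class-conditional feature distributions are easier to tell apart, which is the sense in which the metric measures discriminability.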
International Nuclear Information System (INIS)
Leaks in pressurized tubes generate acoustic waves that propagate through the walls of these tubes, which can be captured by accelerometers or by acoustic emission sensors. Knowledge of how these walls vibrate, or in other words, how these acoustic waves propagate in this material, is fundamental to the detection and localization of the leak source. In this work an analytical model based on the equations of motion of a cylindrical shell was implemented, with the objective of understanding the behavior of the tube surface excited by a point source. Since the cylindrical surface closes on itself in the circumferential direction, waves starting their trajectory meet others that have already travelled around the shell, in the clockwise as well as the counterclockwise direction, generating constructive and destructive interference. After sufficient propagation time, peaks and valleys form on the shell surface, which can be visualized through a graphic representation of the analytical solution developed. The theoretical results were confirmed by measurements performed in an experimental setup composed of a steel tube terminated in a sand box, simulating the condition of an infinite tube. To determine the location of the point source on the surface, an inverse solution process was adopted: given the signals from the sensors placed on the tube surface, the theoretical model is used to determine where the source that generated these signals can be. (author)
Energy Technology Data Exchange (ETDEWEB)
Holland, Michael K. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); O'Rourke, Patrick E. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)
2016-05-04
An SRNL H-Canyon Test Bed performance evaluation project was completed jointly by SRNL and LANL on a prototype monochromatic energy dispersive x-ray fluorescence instrument, the hiRX. A series of uncertainty propagations were generated based upon plutonium and uranium measurements performed using the alpha-prototype hiRX instrument. Data reduction and uncertainty modeling provided in this report were performed by the SRNL authors. Observations and lessons learned from this evaluation were also used to predict the uncertainties that should be achievable at multiple plutonium and uranium concentration levels, provided that the instrument hardware and software upgrades recommended by LANL and SRNL are performed.
International Nuclear Information System (INIS)
Based on the Collins formula in a cylindrical coordinate system and the method of introducing a hard aperture function into a finite sum of complex Gaussian functions, an approximate three-dimensional analytical formula for oblique and off-axis Gaussian beams propagating through a cat-eye optical lens is derived. Numerical results show that a reasonable choice of the obliquity factor would result in a better-focused beam with a higher central intensity at the return position than that without obliquity, whereas the previous conclusion based on geometrical optics was that the highest central intensity could be obtained when there is no obliquity. (fundamental areas of phenomenology (including applications))
Energy Technology Data Exchange (ETDEWEB)
Morales Prieto, M.; Ortega Saiz, P.
2011-07-01
Analysis of the analytical uncertainties of the methodology for simulating the processes that yield the final isotopic inventory of spent fuel; the ARIANE experiment explores the burnup simulation part.
Shakas, Alexis; Linde, Niklas
2015-05-01
We propose a new approach to model ground penetrating radar signals that propagate through a homogeneous and isotropic medium, and are scattered at thin planar fractures of arbitrary dip, azimuth, thickness and material filling. We use analytical expressions for the Maxwell equations in a homogeneous space to describe the propagation of the signal in the rock matrix, and account for frequency-dependent dispersion and attenuation through the empirical Jonscher formulation. We discretize fractures into elements that are linearly polarized by the incoming electric field that arrives from the source to each element, locally, as a plane wave. To model the effective source wavelet we use a generalized Gamma distribution to define the antenna dipole moment. We combine microscopic and macroscopic Maxwell's equations to derive an analytic expression for the response of each element, which describes the full electric dipole radiation patterns along with effective reflection coefficients of thin layers. Our results compare favorably with finite-difference time-domain modeling in the case of constant electrical parameters of the rock-matrix and fracture filling. Compared with traditional finite-difference time-domain modeling, the proposed approach is faster and more flexible in terms of fracture orientations. A comparison with published laboratory results suggests that the modeling approach can reproduce the main characteristics of the reflected wavelet.
Mazoyer, Johan; Norman, Colin; N'Diaye, Mamadou; van der Marel, Roeland P; Soummer, Rémi
2015-01-01
The new frontier in the quest for the highest contrast levels in the focal plane of a coronagraph is now the correction of the large diffractive artifacts introduced at the science camera by apertures of increasing complexity. The coronagraph for the WFIRST/AFTA mission will be the first such instrument in space with a two-deformable-mirror wavefront control system. Regardless of the control algorithm, these multiple-deformable-mirror systems will have to rely on quick and accurate simulation of the propagation effects introduced by the out-of-pupil surface. In the first part of this paper, we present the analytical description of the different approximations used to simulate these propagation effects. In Annex A, we prove analytically that, in the special case of surfaces inducing a converging beam, the Fresnel method yields high fidelity for simulations of these effects. We provide numerical simulations showing this effect. In the second part, we use these tools in the framework of the Active Compens...
Directory of Open Access Journals (Sweden)
Cinzia Caliendo
2015-01-01
Full Text Available The propagation of the fundamental symmetric Lamb mode S0 along wz-BN/AlN thin composite plates suitable for telecommunication and sensing applications is studied. The investigation of the acoustic field profile across the plate thickness revealed the presence of modes having longitudinal polarization, the Anisimkin Jr. plate modes (AMs), travelling at a phase velocity close to that of the wz-BN longitudinal bulk acoustic wave propagating in the same direction. The study of the S0 mode phase velocity and coupling coefficient (K2) dispersion curves, for different electrical boundary conditions, has shown that eight different coupling configurations are allowable that exhibit a K2 as high as about 4% and very high phase velocity (up to about 16,700 m/s). The effect of the thickness and material type of the metal floating electrode on the K2 dispersion curves has also been investigated, specifically addressing the design of an enhanced coupling device. The gravimetric sensitivity of the BN/AlN-based acoustic waveguides was then calculated for both the AMs and elliptically polarized S0 modes; the AM-based sensor velocity and attenuation shifts due to the viscosity of a surrounding liquid were theoretically predicted. The performed investigation suggests that wz-BN/AlN is a very promising substrate material suitable for developing GHz band devices with enhanced electroacoustic coupling efficiency and suitable for application in telecommunications and sensing fields.
Directory of Open Access Journals (Sweden)
Arika Ligmann-Zielinska
Full Text Available Agent-based models (ABMs) have been widely used to study socioecological systems. They are useful for studying such systems because of their ability to incorporate micro-level behaviors among interacting agents, and to understand emergent phenomena due to these interactions. However, ABMs are inherently stochastic and require proper handling of uncertainty. We propose a simulation framework based on quantitative uncertainty and sensitivity analyses to build parsimonious ABMs that serve two purposes: exploration of the outcome space to simulate low-probability but high-consequence events that may have significant policy implications, and explanation of model behavior to describe the system with higher accuracy. The proposed framework is applied to the problem of modeling farmland conservation resulting in land use change. We employ output variance decomposition based on quasi-random sampling of the input space and perform three computational experiments. First, we perform uncertainty analysis to improve model legitimacy, where the distribution of results informs us about the expected value that can be validated against independent data, and provides information on the variance around this mean as well as the extreme results. In our last two computational experiments, we employ sensitivity analysis to produce two simpler versions of the ABM. First, the input space is reduced to only those inputs that produced the variance of the initial ABM, resulting in a model with an output distribution similar to the initial model. Second, we refine the value of the most influential input, producing a model that maintains the mean of the output of the initial ABM but with less spread. These simplifications can be used to (1) efficiently explore model outcomes, including outliers that may be important considerations in the design of robust policies, and (2) conduct explanatory analysis that exposes the smallest number of inputs influencing the steady state of the modeled system.
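The output variance decomposition used above attributes shares of output variance to individual inputs (first-order sensitivity indices). A crude but self-contained way to estimate them is to bin one input at a time and compare the variance of the conditional means to the total variance. The toy response function below stands in for an ABM and is invented for illustration; production analyses use quasi-random (e.g. Sobol') sampling designs and dedicated estimators.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x1 = rng.uniform(0, 1, n)
x2 = rng.uniform(0, 1, n)
y = 3.0 * x1 + 1.0 * x2 + 0.1 * rng.normal(size=n)   # toy "ABM" response

def first_order_index(x, y, bins=50):
    """Var(E[Y|X]) / Var(Y), estimated by binning x into quantile bins."""
    edges = np.quantile(x, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(bins)])
    return cond_means.var() / y.var()

s1 = first_order_index(x1, y)
s2 = first_order_index(x2, y)
print(f"S1 ~ {s1:.2f}, S2 ~ {s2:.2f}")   # x1 dominates the output variance
```

Here the indices recover the analytic shares (roughly 0.89 and 0.10), which is the kind of evidence used above to drop non-influential inputs from the model.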
DEFF Research Database (Denmark)
Heydorn, Kaj; Anglov, Thomas
2002-01-01
Methods recommended by the International Standardization Organisation and Eurachem are not satisfactory for the correct estimation of calibration uncertainty. A novel approach is introduced and tested on actual calibration data for the determination of Pb by ICP-AES. The improved calibration...... uncertainty was verified from independent measurements of the same sample by demonstrating statistical control of analytical results and the absence of bias. The proposed method takes into account uncertainties of the measurement, as well as of the amount of calibrant. It is applicable to all types of...
Hu, Huayu
2015-01-01
Nonperturbative calculation of QED processes in a strong electromagnetic field, especially one provided by strong laser facilities at present and in the near future, generally resorts to the Furry picture with the usage of analytical solutions of the particle dynamical equation, such as the Klein-Gordon equation and Dirac equation. However, only for limited field configurations, such as a plane-wave field, can the equations be solved analytically. Studies have shown significant interest in QED processes in a strong field composed of two counter-propagating laser waves, but the exact solutions in such a field are out of reach. In this paper, inspired by the observation of the structure of the solutions in a plane-wave field, we develop a new method and obtain the analytical solution for the Klein-Gordon equation and equivalently the action function of the solution for the Dirac equation in this field, under a largest dynamical parameter condition that there exists an inertial frame in which the particl...
Plósz, Benedek Gy; De Clercq, Jeriffa; Nopens, Ingmar; Benedetti, Lorenzo; Vanrolleghem, Peter A
2011-01-01
In WWTP models, the accurate assessment of solids inventory in bioreactors equipped with solid-liquid separators, mostly described using one-dimensional (1-D) secondary settling tank (SST) models, is the most fundamental requirement of any calibration procedure. Scientific knowledge on characterising particulate organics in wastewater and on bacteria growth is well-established, whereas 1-D SST models and their impact on biomass concentration predictions are still poorly understood. A rigorous assessment of two 1-D SST models is thus presented: one based on hyperbolic (the widely used Takács-model) and one based on parabolic (the more recently presented Plósz-model) partial differential equations. The former model, using numerical approximation to yield realistic behaviour, is currently the most widely used by wastewater treatment process modellers. The latter is a convection-dispersion model that is solved in a numerically sound way. First, the explicit dispersion in the convection-dispersion model and the numerical dispersion for both SST models are calculated. Second, simulation results of effluent suspended solids concentration (XTSS,Eff), sludge recirculation stream (XTSS,RAS) and sludge blanket height (SBH) are used to demonstrate the distinct behaviour of the models. A thorough scenario analysis is carried out using SST feed flow rate, solids concentration, and overflow rate as degrees of freedom, spanning a broad loading spectrum. A comparison between the measurements and the simulation results demonstrates a considerably improved 1-D model realism using the convection-dispersion model in terms of SBH, XTSS,RAS and XTSS,Eff. Third, to assess the propagation of uncertainty derived from settler model structure to the biokinetic model, the impact of the SST model as sub-model in a plant-wide model on the general model performance is evaluated. A long-term simulation of a bulking event is conducted that spans temperature evolution throughout a summer
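The hyperbolic Takács model referenced above is built around a double-exponential settling-velocity function of the local solids concentration. A sketch of that function follows, with typical literature-style parameter values assumed here for illustration rather than taken from the paper:

```python
import numpy as np

def takacs_velocity(x, v0=474.0, v0_max=250.0, rh=5.76e-4, rp=2.86e-3, x_min=20.0):
    """Takács-style settling velocity (m/d) vs solids concentration x (g/m3):
    double-exponential in the excess concentration, clipped to [0, v0_max].
    Parameter values are illustrative assumptions."""
    xs = np.maximum(np.asarray(x, dtype=float) - x_min, 0.0)
    v = v0 * (np.exp(-rh * xs) - np.exp(-rp * xs))   # hindered minus flocculant term
    return np.clip(v, 0.0, v0_max)

# velocity rises at low concentration, then falls as hindered settling dominates
print(takacs_velocity([100.0, 3000.0, 12000.0]))
```

In the 1-D SST model this velocity multiplies the local concentration to give the gravity settling flux in each layer, which is where the numerical-approximation issues discussed above enter.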
Energy Technology Data Exchange (ETDEWEB)
Romero-Garcia, V [Instituto de Ciencia de Materiales de Madrid, Consejo Superior de Investigaciones Cientificas (Spain); Sanchez-Perez, J V [Centro de Tecnologias Fisicas: Acustica, Materiales y Astrofisica, Universidad Politecnica de Valencia (Spain); Garcia-Raffi, L M, E-mail: virogar1@gmail.com [Instituto Universitario de Matematica Pura y Aplicada, Universidad Politecnica de Valencia (Spain)
2011-07-06
The use of sonic crystals (SCs) as environmental noise barriers has certain advantages from both the acoustical and the constructive points of view with regard to conventional ones. However, the interaction between the SCs and the ground has not been studied yet. In this work we are reporting a semi-analytical model, based on the multiple scattering theory and on the method of images, to study this interaction considering the ground as a finite impedance surface. The results obtained here show that this model could be used to design more effective noise barriers based on SCs because the excess attenuation of the ground could be modelled in order to improve the attenuation properties of the array of scatterers. The results are compared with experimental data and numerical predictions thus finding good agreement between them.
Energy Technology Data Exchange (ETDEWEB)
Vršnak, B.; Žic, T.; Dumbović, M. [Hvar Observatory, Faculty of Geodesy, University of Zagreb, Kačićeva 26, HR-10000 Zagreb (Croatia); Temmer, M.; Möstl, C.; Veronig, A. M. [Kanzelhöhe Observatory—IGAM, Institute of Physics, University of Graz, Universitätsplatz 5, A-8010 Graz (Austria); Taktakishvili, A.; Mays, M. L. [NASA Goddard Space Flight Center, Greenbelt, MD 20771 (United States); Odstrčil, D., E-mail: bvrsnak@geof.hr, E-mail: tzic@geof.hr, E-mail: mdumbovic@geof.hr, E-mail: manuela.temmer@uni-graz.at, E-mail: christian.moestl@uni-graz.at, E-mail: astrid.veronig@uni-graz.at, E-mail: aleksandre.taktakishvili-1@nasa.gov, E-mail: m.leila.mays@nasa.gov, E-mail: dusan.odstrcil@nasa.gov [George Mason University, Fairfax, VA 22030 (United States)
2014-08-01
Real-time forecasting of the arrival of coronal mass ejections (CMEs) at Earth, based on remote solar observations, is one of the central issues of space-weather research. In this paper, we compare arrival-time predictions calculated applying the numerical "WSA-ENLIL+Cone model" and the analytical "drag-based model" (DBM). Both models use coronagraphic observations of CMEs as input data, thus providing an early space-weather forecast two to four days before the arrival of the disturbance at the Earth, depending on the CME speed. It is shown that both methods give very similar results if the drag parameter Γ = 0.1 is used in DBM in combination with a background solar-wind speed of w = 400 km s⁻¹. For this combination, the mean of the arrival-time difference between ENLIL and DBM is 0.09 ± 9.0 hr, with a mean absolute difference of 7.1 hr. Comparing the observed arrivals (O) with the calculated ones (C) for ENLIL gives O – C = –0.3 ± 16.9 hr and, analogously, O – C = +1.1 ± 19.1 hr for DBM. Applying Γ = 0.2 with w = 450 km s⁻¹ in DBM, one finds O – C = –1.7 ± 18.3 hr, with a mean absolute difference of 14.8 hr, which is similar to that for ENLIL, 14.1 hr. Finally, we demonstrate that the prediction accuracy significantly degrades with increasing solar activity.
Evaluation of Measurement Uncertainty in Neutron Activation Analysis using Research Reactor
Energy Technology Data Exchange (ETDEWEB)
Chung, Y. S.; Moon, J. H.; Sun, G. M.; Kim, S. H.; Baek, S. Y.; Lim, J. M.; Lee, Y. N.; Kim, H. R
2007-02-15
This report summarizes the general and technical requirements, methods, and results of the measurement uncertainty assessment performed to maintain quality assurance and traceability in NAA using the HANARO research reactor. It will serve as basic information to support accredited analytical services in the future. For the assessment of measurement uncertainty, environmental certified reference materials are used, and the analytical results obtained from real experiments are evaluated using the ISO-GUM and Monte Carlo simulation (MCS) methods. First, the standard uncertainties of the predominant parameters in NAA are evaluated for the measured element concentrations; the combined uncertainty is then calculated by applying the rule of uncertainty propagation. In addition, the contribution of each standard uncertainty to the combined uncertainty is estimated, and ways to minimize the largest contributions are reviewed.
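The propagation rule mentioned above can be sketched in a few lines. The sensitivity coefficients and input uncertainties below are illustrative assumptions, not values from the HANARO work:

```python
import math

def combined_standard_uncertainty(sensitivities, std_uncertainties):
    # GUM propagation rule for independent inputs:
    # u_c = sqrt( sum_i (c_i * u_i)^2 )
    return math.sqrt(sum((c * u) ** 2
                         for c, u in zip(sensitivities, std_uncertainties)))

# Hypothetical NAA budget: relative uncertainties from counting statistics,
# sample mass, and detector efficiency (all assumed values).
u_c = combined_standard_uncertainty([1.0, 1.0, 1.0], [0.02, 0.005, 0.01])
expanded = 2 * u_c  # expanded uncertainty with coverage factor k = 2
```

A Monte Carlo check, as in the report, would sample each input from its distribution and compare the spread of results with `u_c`.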
International Nuclear Information System (INIS)
A comprehensive study is performed in order to evaluate the impact of activation cross section uncertainties on the actinide composition of the irradiated fuel in representative ADS (Accelerator Driven System) irradiation scenarios. Some of the most recent sources/compilations of uncertainty data are used, and the results obtained from them compared. The ANL covariance matrices are taken as reference data for the calculations. The complete set of cross section uncertainties provided in the EAF2005 data library are also used for comparison purposes. In this study, the inventory code ACAB is used to analyze the following questions: the impact of different correlation structures using fixed uncertainties/variances; the effect of the irradiation time/burn-up on the concentration uncertainties; and the applicability of Monte Carlo (MC) and sensitivity-uncertainty (SU) approaches over the full range of burn-up/irradiation times of interest in ADS designs. When comparing results of calculations using ANL versus EAF2005/UN uncertainty data, we found very significant differences in the concentration uncertainties. Both MC and SU approaches are found to be applicable over the full range of irradiation times.
Mazoyer, Johan; Pueyo, Laurent; Norman, Colin; N'Diaye, Mamadou; van der Marel, Roeland P.; Soummer, Rémi
2016-03-01
The new frontier in the quest for the highest contrast levels in the focal plane of a coronagraph is now the correction of the large diffraction artifacts introduced at the science camera by apertures of increasing complexity. Indeed, the future generation of space- and ground-based coronagraphic instruments will be mounted on on-axis and/or segmented telescopes; the design of coronagraphic instruments for such observatories is currently a domain undergoing rapid progress. One approach consists of using two sequential deformable mirrors (DMs) to correct for aberrations introduced by secondary mirror structures and segmentation of the primary mirror. The coronagraph for the WFIRST-AFTA mission will be the first of such instruments in space with a two-DM wavefront control system. Regardless of the control algorithm for these multiple DMs, they will have to rely on quick and accurate simulation of the propagation effects introduced by the out-of-pupil surface. In the first part of this paper, we present the analytical description of the different approximations to simulate these propagation effects. In Appendix A, we prove analytically that in the special case of surfaces inducing a converging beam, the Fresnel method yields high fidelity for simulations of these effects. We provide numerical simulations showing this effect. In the second part, we use these tools in the framework of the active compensation of aperture discontinuities (ACAD) technique applied to pupil geometries similar to WFIRST-AFTA. We present these simulations in the context of the optical layout of the high-contrast imager for complex aperture telescopes, which will test ACAD on an optical bench. The results of this analysis show that using the ACAD method, an apodized pupil Lyot coronagraph, and the performance of our current DMs, we are able to obtain, in numerical simulations, a dark hole with a WFIRST-AFTA-like. Our numerical simulation shows that we can obtain contrast better than 2×10⁻⁹ in
Transionospheric Propagation Code (TIPC)
Energy Technology Data Exchange (ETDEWEB)
Roussel-Dupre, R.; Kelley, T.A.
1990-10-01
The Transionospheric Propagation Code is a computer program developed at Los Alamos National Lab to perform certain tasks related to the detection of VHF signals following propagation through the ionosphere. The code is written in Fortran 77, runs interactively, and was designed to be as machine independent as possible. A menu format, in which the user is prompted to supply appropriate parameters for a given task, has been adopted for the input, while the output is primarily in the form of graphics. The user has the option of selecting from five basic tasks, namely transionospheric propagation, signal filtering, signal processing, DTOA study, and DTOA uncertainty study. For the first task a specified signal is convolved against the impulse response function of the ionosphere to obtain the transionospheric signal. The user is given a choice of four analytic forms for the input pulse or of supplying a tabular form. The option of adding Gaussian-distributed white noise or spectral noise to the input signal is also provided. The deterministic ionosphere is characterized to first order in terms of a total electron content (TEC) along the propagation path. In addition, a scattering model parameterized in terms of a frequency coherence bandwidth is also available. In the second task, detection is simulated by convolving a given filter response against the transionospheric signal. The user is given a choice of a wideband filter or a narrowband Gaussian filter. It is also possible to input a filter response. The third task provides for quadrature detection, envelope detection, and three different techniques for time-tagging the arrival of the transionospheric signal at specified receivers. The latter algorithms can be used to determine a TEC and thus take out the effects of the ionosphere to first order. Task four allows the user to construct a table of delta-times-of-arrival (DTOAs) vs TECs for a specified pair of receivers.
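The first two tasks are essentially convolutions. A minimal sketch (the pulse shape and channel response below are toy assumptions, not the code's actual ionospheric impulse response or filter bank):

```python
import numpy as np

def propagate(signal, impulse_response):
    # Convolve an input pulse against a channel impulse response,
    # as in the transionospheric-propagation and filtering tasks.
    return np.convolve(signal, impulse_response, mode="full")

fs = 1000.0                                # sample rate, Hz (assumed)
t = np.arange(0, 0.05, 1 / fs)
pulse = np.exp(-((t - 0.01) ** 2) / (2 * 0.002 ** 2))  # Gaussian input pulse
h = np.exp(-t / 0.005)                     # toy dispersive channel response
received = propagate(pulse, h)             # "transionospheric" signal
detected = propagate(received, h[::-1])    # crude detection-filter stage
```

The time-tagging task would then locate, e.g., the peak of `detected` at each receiver to form DTOAs.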
Barbara Mickowska; Anna Sadowska-Rociek; Ewa Cieślik
2013-01-01
The aim of this study was to assess the importance of validation and uncertainty estimation related to the results of amino acid analysis using the ion-exchange chromatography with post-column derivatization technique. The method was validated and the components of standard uncertainty were identified and quantified to recognize the major contributions to uncertainty of analysis. Estimated relative extended uncertainty (k=2, P=95%) varied in range from 9.03% to 12.68%. Quantification of the u...
Leśniewska, Barbara; Kisielewska, Katarzyna; Wiater, Józefa; Godlewska-Żyłkiewicz, Beata
2016-01-01
A new fast method for determination of mobile zinc fractions in soil is proposed in this work. The three-stage modified BCR procedure used for fractionation of zinc in soil was accelerated by using ultrasounds. The working parameters of an ultrasound probe, a power and a time of sonication, were optimized in order to acquire the content of analyte in soil extracts obtained by ultrasound-assisted sequential extraction (USE) consistent with that obtained by conventional modified Community Bureau of Reference (BCR) procedure. The content of zinc in extracts was determined by flame atomic absorption spectrometry. The developed USE procedure allowed for shortening the total extraction time from 48 h to 27 min in comparison to conventional modified BCR procedure. The method was fully validated, and the uncertainty budget was evaluated. The trueness and reproducibility of the developed method was confirmed by analysis of certified reference material of lake sediment BCR-701. The applicability of the procedure for fast, low costs and reliable determination of mobile zinc fraction in soil, which may be useful for assessing of anthropogenic impacts on natural resources and environmental monitoring purposes, was proved by analysis of different types of soil collected from Podlaskie Province (Poland). PMID:26666658
Taming systematic uncertainties at the LHC with the central limit theorem
Fichet, Sylvain
2016-01-01
We study the simplifications occurring in any likelihood function in the presence of a large number of small systematic uncertainties. We find that the marginalisation of these uncertainties can be done analytically by means of second-order error propagation, error combination, the Lyapunov central limit theorem, and under mild approximations which are typically satisfied for LHC likelihoods. The outcomes of this analysis are (i) a very light treatment of systematic uncertainties and (ii) a convenient way of reporting the main effects of systematic uncertainties, such as the detector effects occurring in LHC measurements.
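The central-limit argument can be illustrated numerically: with many small independent nuisances, their summed effect approaches a single Gaussian whose variance is the quadrature sum of the individual variances. The per-source magnitudes below are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

n_sys = 100
shifts = rng.uniform(0.005, 0.02, size=n_sys)  # 1-sigma effect of each nuisance
combined = np.sqrt(np.sum(shifts ** 2))        # quadrature (error combination)

# Brute-force check: sample every nuisance and sum its effect on the observable.
samples = rng.normal(0.0, shifts, size=(200_000, n_sys)).sum(axis=1)
```

By the Lyapunov CLT the distribution of `samples` is approximately Gaussian, so the whole nuisance set can be marginalised as one combined uncertainty.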
International Nuclear Information System (INIS)
More than 150 researchers and engineers from universities and the industrial world met to discuss on the new methodologies developed around assessing uncertainty. About 20 papers were presented and the main topics were: methods to study the propagation of uncertainties, sensitivity analysis, nuclear data covariances or multi-parameter optimisation. This report gathers the contributions of CEA researchers and engineers
International Nuclear Information System (INIS)
One of the most important aspects of quality assurance in any analytical activity is the estimation of measurement uncertainty. There is general agreement that 'the expression of the result of a measurement is not complete without specifying its associated uncertainty'. An analytical process is the mechanism for obtaining methodological information (measurand) from a material system (population). This implies the need to define the problem, to choose the methods for sampling and measurement, and to execute these activities properly in order to obtain the information. The result of a measurement is only an approximation or estimate of the value of the measurand, which is complete only when accompanied by an estimate of the uncertainty of the analytical process. According to the 'Vocabulary of Basic and General Terms in Metrology', 'measurement uncertainty' is the parameter associated with the result of a measurement that characterizes the dispersion of the values that could reasonably be attributed to the measurand (or magnitude). This parameter could be a standard deviation or a confidence interval. The uncertainty evaluation requires a detailed look at all possible sources, but not disproportionately: a good estimate can be made by concentrating effort on the largest contributions. The key steps in determining the uncertainty of measurements are: specification of the measurand; identification of the sources of uncertainty; quantification of the individual components of uncertainty; calculation of the combined standard uncertainty; and reporting of the uncertainty.
Development of the Calculation Module for Uncertainty of Internal Dose Coefficients
International Nuclear Information System (INIS)
The ICRP (International Commission on Radiological Protection) provides the coefficients as point values without uncertainties, so it is important to understand the sources of uncertainty in their derivation. When internal dose coefficients are calculated, numerous factors are involved, such as transfer rates in biokinetic models, absorption rates and deposition in the respiratory tract model, fractional absorption in the alimentary tract model, absorbed fractions (AF), nuclide information and organ mass. Each of these factors carries its own uncertainty, which propagates into the uncertainty of the internal dose coefficients. Since the procedure for calculating internal dose coefficients is somewhat complicated, it is difficult to propagate each uncertainty analytically. The module was developed, and the calculations performed, in MATLAB. In this study, we developed a calculation module for the uncertainty of the internal dose coefficient. In this module, the uncertainty of the various factors used to calculate the internal dose coefficient can be considered using the Monte Carlo sampling method. After developing the module, we calculated the internal dose coefficient for inhalation of 90Sr with its uncertainty and obtained the distribution and percentile values. It is expected that this study will contribute greatly to uncertainty research on internal dosimetry. In the future, we will update the module to consider more uncertainties
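The Monte Carlo sampling idea can be sketched compactly. The factors and distributions below are toy assumptions standing in for the biokinetic transfer rate, absorbed fraction, and organ mass, not the ICRP models or the module's actual parameters:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000  # Monte Carlo trials

# Assumed toy distributions; the real module samples many more factors.
transfer = rng.lognormal(mean=np.log(0.3), sigma=0.2, size=n)
absorbed_fraction = rng.normal(0.05, 0.005, size=n)
organ_mass = rng.normal(1.0, 0.05, size=n)  # kg

coeff = transfer * absorbed_fraction / organ_mass  # toy dose-coefficient model
p5, p50, p95 = np.percentile(coeff, [5, 50, 95])   # percentile summary
```

Each trial draws one value per factor and evaluates the coefficient, so the resulting distribution and percentiles carry the propagated uncertainty.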
Efficient Quantification of Uncertainties in Complex Computer Code Results Project
National Aeronautics and Space Administration — Propagation of parameter uncertainties through large computer models can be very resource intensive. Frameworks and tools for uncertainty quantification are...
Conroy, Charlie; White, Martin
2008-01-01
The stellar masses, mean ages, metallicities, and star formation histories of galaxies are now commonly estimated via stellar population synthesis (SPS) techniques. SPS relies on stellar evolution calculations from the main sequence to stellar death, stellar spectral libraries, phenomenological dust models, and stellar initial mass functions (IMFs). The present work is the first in a series that explores the impact of uncertainties in key phases of stellar evolution and the IMF on the derived physical properties of galaxies and the expected luminosity evolution for a passively evolving set of stars. A Monte-Carlo Markov-Chain approach is taken to fit near-UV through near-IR photometry of a representative sample of low- and high-redshift galaxies with this new SPS model. Significant results include the following: 1) including uncertainties in stellar evolution, stellar masses at z~0 carry errors of ~0.3 dex at 95% CL with little dependence on luminosity or color, while at z~2, the masses of bright red galaxies...
Rundel, R. D.; Butler, D. M.; Stolarski, R. S.
1978-01-01
The paper discusses the development of a concise stratospheric model which uses iteration to obtain coupling between interacting species. The one-dimensional, steady-state, diurnally-averaged model generates diffusion equations with appropriate sources and sinks for species odd oxygen, H2O, H2, CO, N2O, odd nitrogen, CH4, CH3Cl, CCl4, CF2Cl2, CFCl3, and odd chlorine. The model evaluates steady-state perturbations caused by injections of chlorine and NO(x) and may be used to predict ozone depletion. The model is used in a Monte Carlo study of the propagation of reaction-rate imprecisions by calculating an ozone perturbation caused by the addition of chlorine. Since the model is sensitive to only 10 of the more than 50 reaction rates considered, only about 1000 Monte Carlo cases are required to span the space of possible results.
Directory of Open Access Journals (Sweden)
S. Bönisch
2004-02-01
The objectives of this work were to generate a representation of soil properties expressed by categorical attributes using indicator kriging; to assess the uncertainty in the estimates; and to model the uncertainty propagation of map algebra by means of boolean procedures. The studied attributes were exchangeable potassium (K) and aluminum (Al) contents, sum of bases (SB), cation exchange capacity (CEC), base saturation (V), texture (Tx) and classes of relief (RC), effective soil depth, internal drainage, and stoniness and/or rockiness, extracted from 222 pedologic profiles and 219 extra samples from soils of the State of Santa Catarina, Brazil. The spatialized uncertainties evidenced the spatial variability of the data, which was related to the samples' origin and attribute behavior. Attributes SB, V, K, and RC presented higher uncertainties than Tx and CEC, and the uncertainty increased when categorical representations were integrated.
Energy Technology Data Exchange (ETDEWEB)
Andres, T.H
2002-05-01
This guide applies to the estimation of uncertainty in quantities calculated by scientific, analysis and design computer programs that fall within the scope of AECL's software quality assurance (SQA) manual. The guide weaves together rational approaches from the SQA manual and three other diverse sources: (a) the CSAU (Code Scaling, Applicability, and Uncertainty) evaluation methodology; (b) the ISO Guide for the Expression of Uncertainty in Measurement; and (c) the SVA (Systems Variability Analysis) method of risk analysis. This report describes the manner by which random and systematic uncertainties in calculated quantities can be estimated and expressed. Random uncertainty in model output can be attributed to uncertainties of inputs. The propagation of these uncertainties through a computer model can be represented in a variety of ways, including exact calculations, series approximations and Monte Carlo methods. Systematic uncertainties emerge from the development of the computer model itself, through simplifications and conservatisms, for example. These must be estimated and combined with random uncertainties to determine the combined uncertainty in a model output. This report also addresses the method by which uncertainties should be employed in code validation, in order to determine whether experiments and simulations agree, and whether or not a code satisfies the required tolerance for its application. (author)
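Two of the propagation routes named above, series approximation and Monte Carlo, can be compared on a toy product model y = a·b with assumed input values:

```python
import numpy as np

rng = np.random.default_rng(1)

a_mean, a_sd = 2.0, 0.1   # assumed input means and standard uncertainties
b_mean, b_sd = 3.0, 0.2

# First-order series approximation for y = a * b:
# u_y^2 ≈ (∂y/∂a · u_a)^2 + (∂y/∂b · u_b)^2
u_series = np.hypot(b_mean * a_sd, a_mean * b_sd)

# Monte Carlo propagation of the same model
a = rng.normal(a_mean, a_sd, 500_000)
b = rng.normal(b_mean, b_sd, 500_000)
u_mc = (a * b).std()
```

For these small relative uncertainties the two routes agree closely; the exact variance differs from the linear estimate only by the cross-term (u_a·u_b)², which is negligible here.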
Energy Technology Data Exchange (ETDEWEB)
Barrado, A. I.; Garcia, S.; Perez, R. M.
2013-06-01
This paper presents an evaluation of the uncertainty associated with the analytical measurement of eighteen polycyclic aromatic compounds (PACs) in ambient air by liquid chromatography with fluorescence detection (HPLC/FD). The study focused on analyses of PM10, PM2.5 and gas-phase fractions. The main analytical uncertainty was estimated for eleven polycyclic aromatic hydrocarbons (PAHs), four nitro polycyclic aromatic hydrocarbons (nitro-PAHs) and two hydroxy polycyclic aromatic hydrocarbons (OH-PAHs) based on the analytical determination, reference material analysis and the extraction step. The main contributions reached 15-30% and came from the extraction process of real ambient samples, being highest for nitro-PAHs (20-30%). The range and mean of PAC mass concentrations measured in the gas phase and PM10/PM2.5 particle fractions during a full year are also presented. Concentrations of OH-PAHs were about 2-4 orders of magnitude lower than their parent PAHs and comparable to those sparsely reported in the literature. (Author) 7 refs.
Bursik, Marcus; Jones, Matthew; Carn, Simon; Dean, Ken; Patra, Abani; Pavolonis, Michael; Pitman, E. Bruce; Singh, Tarunraj; Singla, Puneet; Webley, Peter; Bjornsson, Halldor; Ripepe, Maurizio
2012-12-01
Data on source conditions for the 14 April 2010 paroxysmal phase of the Eyjafjallajökull eruption, Iceland, have been used as inputs to a trajectory-based eruption column model, bent. This model has in turn been adapted to generate output suitable as input to the volcanic ash transport and dispersal model, puff, which was used to propagate the paroxysmal ash cloud toward and over Europe over the following days. Some of the source parameters, specifically vent radius, vent source velocity, mean grain size of ejecta, and standard deviation of ejecta grain size have been assigned probability distributions based on our lack of knowledge of exact conditions at the source. These probability distributions for the input variables have been sampled in a Monte Carlo fashion using a technique that yields what we herein call the polynomial chaos quadrature weighted estimate (PCQWE) of output parameters from the ash transport and dispersal model. The advantage of PCQWE over Monte Carlo is that since it intelligently samples the input parameter space, fewer model runs are needed to yield estimates of moments and probabilities for the output variables. At each of these sample points for the input variables, a model run is performed. Output moments and probabilities are then computed by properly summing the weighted values of the output parameters of interest. Use of a computational eruption column model coupled with known weather conditions as given by radiosonde data gathered near the vent allows us to estimate that initial mass eruption rate on 14 April 2010 may have been as high as 10⁸ kg/s and was almost certainly above 10⁷ kg/s. This estimate is consistent with the probabilistic envelope computed by PCQWE for the downwind plume. The results furthermore show that statistical moments and probabilities can be computed in a reasonable time by using 9⁴ = 6,561 PCQWE model runs as opposed to millions of model runs that might be required by standard Monte Carlo techniques. The
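The efficiency claim, a handful of weighted model runs replacing many random ones, can be sketched in one dimension with Gauss-Hermite quadrature. The toy model below is an assumption; bent/puff are far more complex:

```python
import numpy as np

def model(x):
    # Toy stand-in for one ash-model run with a standard-normal input x
    return np.exp(0.3 * x)

# Six quadrature nodes = six "model runs"; hermegauss weights sum to sqrt(2*pi)
nodes, weights = np.polynomial.hermite_e.hermegauss(6)
mean_quad = np.sum(weights * model(nodes)) / np.sqrt(2 * np.pi)

# Monte Carlo reference: orders of magnitude more model evaluations
rng = np.random.default_rng(7)
mean_mc = model(rng.standard_normal(200_000)).mean()
```

With six deterministic runs the quadrature estimate of the output mean matches the 200,000-sample Monte Carlo value; higher moments are obtained the same way by summing weighted powers of the outputs.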
A R Banai-Kashani
1990-01-01
A planning simulation approach sensitive to the behavioral and contextual environment of industrial location decisionmaking is developed. Its conceptual and methodological basis is in sharp contrast to the neoclassical premise of location theory, in dealing with contingency, collectivity, multiplicity, and uncertainty. Industrial location decisionmaking is approximated with a multidimensional simulation of (intraurban) locational choice. The likelihood of the location of high-technology firms...
Energy Technology Data Exchange (ETDEWEB)
Thomas, R.E.
1982-03-01
An evaluation is made of the suitability of analytical and statistical sampling methods for making uncertainty analyses. The adjoint method is found to be well-suited for obtaining sensitivity coefficients for computer programs involving large numbers of equations and input parameters. For this purpose the Latin Hypercube Sampling method is found to be inferior to conventional experimental designs. The Latin hypercube method can be used to estimate output probability density functions, but requires supplementary rank transformations followed by stepwise regression to obtain uncertainty information on individual input parameters. A simple Cork and Bottle problem is used to illustrate the efficiency of the adjoint method relative to certain statistical sampling methods. For linear models of the form Ax=b it is shown that a complete adjoint sensitivity analysis can be made without formulating and solving the adjoint problem. This can be done either by using a special type of statistical sampling or by reformulating the primal problem and using suitable linear programming software.
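For the linear case Ax = b discussed above, the adjoint shortcut can be shown directly: for a scalar response g = cᵀx, one adjoint solve Aᵀλ = c yields every sensitivity ∂g/∂bᵢ = λᵢ at once (toy 3×3 system with assumed values):

```python
import numpy as np

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 0.5])
c = np.array([1.0, 0.0, 1.0])   # response functional g = c @ x

lam = np.linalg.solve(A.T, c)   # single adjoint solve: all sensitivities
g = c @ np.linalg.solve(A, b)

# Check one sensitivity against a finite difference on b[0]
eps = 1e-6
b_pert = b.copy()
b_pert[0] += eps
g_pert = c @ np.linalg.solve(A, b_pert)
fd = (g_pert - g) / eps
```

One adjoint solve replaces one primal solve per input parameter, which is the efficiency the report attributes to the adjoint method for large systems.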
Facets of Uncertainty in Digital Elevation and Slope Modeling
Institute of Scientific and Technical Information of China (English)
ZHANG Jingxiong; LI Deren
2005-01-01
This paper investigates the differences that result from applying different approaches to uncertainty modeling and reports an experiment examining error estimation and propagation in elevation and slope, with the latter derived from the former. It is confirmed that significant differences exist between uncertainty descriptors, and that the propagation of uncertainty to end products is immensely affected by the specification of source uncertainty.
CANDU lattice uncertainties during burnup
International Nuclear Information System (INIS)
Uncertainties associated with fundamental nuclear data accompany evaluated nuclear data libraries in the form of covariance matrices. As nuclear data are important parameters in reactor physics calculations, any associated uncertainty causes a loss of confidence in the calculation results. The quantification of output uncertainties is necessary to adequately establish safety margins of nuclear facilities. In this work, microscopic cross-section uncertainties have been propagated through lattice burnup calculations applied to a generic CANDU® model. It was found that substantial uncertainty emerges during burnup even when fission yield fraction and decay rate uncertainties are neglected. (author)
Uncertainty in hydrological signatures
Westerberg, I. K.; McMillan, H. K.
2015-09-01
Information about rainfall-runoff processes is essential for hydrological analyses, modelling and water-management applications. A hydrological, or diagnostic, signature quantifies such information from observed data as an index value. Signatures are widely used, e.g. for catchment classification, model calibration and change detection. Uncertainties in the observed data - including measurement inaccuracy and representativeness as well as errors relating to data management - propagate to the signature values and reduce their information content. Subjective choices in the calculation method are a further source of uncertainty. We review the uncertainties relevant to different signatures based on rainfall and flow data. We propose a generally applicable method to calculate these uncertainties based on Monte Carlo sampling and demonstrate it in two catchments for common signatures including rainfall-runoff thresholds, recession analysis and basic descriptive signatures of flow distribution and dynamics. Our intention is to contribute to awareness and knowledge of signature uncertainty, including typical sources, magnitude and methods for its assessment. We found that the uncertainties were often large (i.e. typical intervals of ±10-40 % relative uncertainty) and highly variable between signatures. There was greater uncertainty in signatures that use high-frequency responses, small data subsets, or subsets prone to measurement errors. There was lower uncertainty in signatures that use spatial or temporal averages. Some signatures were sensitive to particular uncertainty types such as rating-curve form. We found that signatures can be designed to be robust to some uncertainty sources. Signature uncertainties of the magnitudes we found have the potential to change the conclusions of hydrological and ecohydrological analyses, such as cross-catchment comparisons or inferences about dominant processes.
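The paper's Monte Carlo idea can be sketched for one simple signature: perturb the observed flow data with an assumed measurement-error model, recompute the signature each time, and report an interval. The flow series and 10% multiplicative error below are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
flow = rng.gamma(shape=2.0, scale=1.5, size=365)  # synthetic daily flow series

n_mc = 5000
signatures = np.empty(n_mc)
for i in range(n_mc):
    # Assumed 10% multiplicative measurement error on each flow value
    perturbed = flow * rng.normal(1.0, 0.1, size=flow.size)
    signatures[i] = perturbed.mean()  # signature: mean daily flow

lo, hi = np.percentile(signatures, [2.5, 97.5])  # 95% uncertainty interval
```

The same loop applies to any signature function of the data; signatures built on small or high-frequency subsets of `flow` would show wider intervals, as the paper reports.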
Leśniewska, Barbara; Kisielewska, Katarzyna; Wiater, Józefa; Godlewska-Żyłkiewicz, Beata
2015-01-01
A new fast method for the determination of mobile zinc fractions in soil is proposed in this work. The three-stage modified BCR procedure used for fractionation of zinc in soil was accelerated by using ultrasound. The working parameters of the ultrasound probe, the power and the duration of sonication, were optimized so that the analyte content of soil extracts obtained by ultrasound-assisted sequential extraction (USE) was consistent with that obtained by the conventional modified Community Burea...
Calculating uncertainties of safeguards indices: error propagation
International Nuclear Information System (INIS)
Statistical methods play an important role in making inferences about MUF (material unaccounted for), shipper-receiver differences, operator-inspector differences, and other safeguards indices. This session considers the sources and types of measurement errors and treats a specific example to illustrate how the variance of MUF is calculated for the model plant.
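A minimal sketch of such a variance calculation, with entirely hypothetical strata, amounts, and error magnitudes: random errors average down over the number of measurements in a stratum, while a systematic (calibration) error shifts all of them together and does not.

```python
import math

# Hypothetical material-balance strata (kg), with per-measurement random
# relative standard deviations and a shared systematic (calibration) one.
strata = [
    # (name, net amount, n measurements, random rel. sd, systematic rel. sd)
    ("beginning inventory", 100.0, 10, 0.010, 0.005),
    ("receipts",            250.0, 25, 0.008, 0.004),
    ("shipments",          -240.0, 24, 0.008, 0.004),
    ("ending inventory",   -105.0, 10, 0.010, 0.005),
]

var_muf = 0.0
for name, amount, n, rel_random, rel_sys in strata:
    # Random errors average down over the n items in the stratum...
    var_random = (amount * rel_random) ** 2 / n
    # ...while a systematic error biases every item the same way, so it does not.
    var_sys = (amount * rel_sys) ** 2
    var_muf += var_random + var_sys

sigma_muf = math.sqrt(var_muf)
muf = sum(s[1] for s in strata)  # material balance: 5.0 kg by construction
```

Comparing `muf` against, say, `2 * sigma_muf` is the usual significance test for a material balance.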
On the IR behaviour of the Landau-gauge ghost propagator
Boucaud, Ph; Le Yaouanc, A; Micheli, J; Pène, O; Rodríguez-Quintero, J
2008-01-01
We examine analytically the ghost propagator Dyson-Schwinger Equation (DSE) in the deep IR regime and prove that a finite ghost dressing function at vanishing momentum is an alternative solution (solution II) to the usually assumed divergent one (solution I). We furthermore find that the Slavnov-Taylor identities discriminate between these two classes of solutions and strongly support the solution II. The latter turns out to be also preferred by lattice simulations within numerical uncertainties.
On the IR behaviour of the Landau-gauge ghost propagator
International Nuclear Information System (INIS)
We examine analytically the ghost propagator Dyson-Schwinger Equation (DSE) in the deep IR regime and prove that a finite ghost dressing function at vanishing momentum is an alternative solution (solution II) to the usually assumed divergent one (solution I). We furthermore find that the Slavnov-Taylor identities discriminate between these two classes of solutions and strongly support the solution II. The latter turns out to be also preferred by lattice simulations within numerical uncertainties.
Raymond, Jack; Manoel, Andre; Opper, Manfred
2014-01-01
Variational inference is a powerful concept that underlies many iterative approximation algorithms; expectation propagation, mean-field methods and belief propagation were all central themes at the school that can be viewed within this unifying framework. The lectures of Manfred Opper introduce the archetypal example of expectation propagation, before establishing the connection with the other approximation methods. Corrections by expansion about the expectation propagation are then explain...
Fuzzy Uncertainty Evaluation for Fault Tree Analysis
Energy Technology Data Exchange (ETDEWEB)
Kim, Ki Beom; Shim, Hyung Jin [Seoul National University, Seoul (Korea, Republic of); Jae, Moo Sung [Hanyang University, Seoul (Korea, Republic of)
2015-05-15
The traditional probabilistic approach can produce relatively accurate results. However, it is time-consuming because of the repeated computations required by the Monte Carlo (MC) method. In addition, when the data available for statistical analysis are insufficient, or some events are mainly caused by human error, the probabilistic approach may not be applicable, because the uncertainties of such events are difficult to express by probability distributions. To reduce the computation time, and to quantify the uncertainties of top events when some basic events cannot be described probabilistically, fuzzy uncertainty propagation based on fuzzy set theory can be applied. In this paper, we develop a fuzzy uncertainty propagation code and apply it to the fault tree of the core damage accident following the large loss-of-coolant accident (LLOCA). The code is first implemented and tested on the fault tree of a radiation release accident. We then apply it to the fault tree of the core damage accident after the LLOCA in three cases and compare the results with those computed by probabilistic uncertainty propagation using the MC method. The fuzzy uncertainty propagation results can be calculated in a relatively short time and cover the results obtained by the probabilistic propagation.
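Fuzzy propagation through a fault tree is commonly implemented with alpha-cut interval arithmetic, which works here because AND/OR gates are monotone in the event probabilities. The sketch below uses triangular fuzzy numbers and a hypothetical three-event tree; the event names, probabilities, and tree structure are all made up for illustration:

```python
import numpy as np

# Triangular fuzzy probabilities (low, mode, high) for three basic events.
events = {
    "pump fails":  (1e-3, 2e-3, 4e-3),
    "valve stuck": (5e-4, 1e-3, 2e-3),
    "power lost":  (1e-4, 3e-4, 6e-4),
}

def alpha_cut(tri, alpha):
    """Interval of membership >= alpha for a triangular fuzzy number."""
    lo, m, hi = tri
    return lo + alpha * (m - lo), hi - alpha * (hi - m)

def or_gate(intervals):
    lo = 1.0 - np.prod([1.0 - a for a, _ in intervals])
    hi = 1.0 - np.prod([1.0 - b for _, b in intervals])
    return lo, hi

def and_gate(intervals):
    return (np.prod([a for a, _ in intervals]),
            np.prod([b for _, b in intervals]))

# Top event: (pump OR valve) AND power -- purely illustrative structure.
alphas = np.linspace(0.0, 1.0, 11)
membership = []
for alpha in alphas:
    cuts = {k: alpha_cut(v, alpha) for k, v in events.items()}
    sub = or_gate([cuts["pump fails"], cuts["valve stuck"]])
    top_lo, top_hi = and_gate([sub, cuts["power lost"]])
    membership.append((alpha, top_lo, top_hi))

# At alpha = 1 the interval collapses to the point estimate from the modes.
_, point_lo, point_hi = membership[-1]
```

Unlike the MC approach, each alpha level costs one pass through the tree, which is the source of the speed-up claimed in the abstract.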
Fuzzy Uncertainty Evaluation for Fault Tree Analysis
International Nuclear Information System (INIS)
The traditional probabilistic approach can produce relatively accurate results. However, it is time-consuming because of the repeated computations required by the Monte Carlo (MC) method. In addition, when the data available for statistical analysis are insufficient, or some events are mainly caused by human error, the probabilistic approach may not be applicable, because the uncertainties of such events are difficult to express by probability distributions. To reduce the computation time, and to quantify the uncertainties of top events when some basic events cannot be described probabilistically, fuzzy uncertainty propagation based on fuzzy set theory can be applied. In this paper, we develop a fuzzy uncertainty propagation code and apply it to the fault tree of the core damage accident following the large loss-of-coolant accident (LLOCA). The code is first implemented and tested on the fault tree of a radiation release accident. We then apply it to the fault tree of the core damage accident after the LLOCA in three cases and compare the results with those computed by probabilistic uncertainty propagation using the MC method. The fuzzy uncertainty propagation results can be calculated in a relatively short time and cover the results obtained by the probabilistic propagation.
Directory of Open Access Journals (Sweden)
N. Eckert
2008-10-01
For snow avalanches, passive defense structures are generally designed by considering high-return-period events. In this paper, taking inspiration from other natural hazards, an alternative method based on maximizing the economic benefit of the defense structure is proposed. A general Bayesian framework is described first. Special attention is given to the problem of taking poor local information into account in the decision-making process; therefore, simplifying assumptions are made. The avalanche hazard is represented by a Peak Over Threshold (POT) model. The influence of the dam is quantified in terms of runout-distance reduction with a simple relation derived from small-scale experiments using granular media. The costs corresponding to dam construction and to the damage to the element at risk are roughly evaluated for each dam height-hazard value pair, with damage evaluation corresponding to the maximal expected loss. Both the classical and the Bayesian risk functions can then be computed analytically. The results are illustrated with a case study from the French avalanche database. A sensitivity analysis is performed, and modelling assumptions are discussed along with possible further developments.
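The cost-benefit logic can be sketched as follows. Every number here is a made-up stand-in (event rate, exponential runout excesses, a linear construction cost, a fixed runout reduction per metre of dam), and expected damage is evaluated by Monte Carlo rather than analytically, so this is only a toy version of the framework:

```python
import numpy as np

rng = np.random.default_rng(1)

# POT-style avalanche model (assumed numbers): exponential runout excesses
# beyond a reference abscissa, lam events per year.
lam, scale = 0.5, 80.0        # events/yr, mean excess runout (m)
x_building = 150.0            # element at risk, metres past the reference
value_at_risk = 2.0e6         # maximal expected loss per hit (EUR)
horizon = 50                  # years, no discounting for simplicity

def dam_runout_reduction(h):
    # Simple relation in the spirit of granular small-scale experiments:
    # each metre of dam removes a fixed runout length (assumed 15 m/m).
    return 15.0 * h

def expected_total_cost(h, n_mc=5000):
    construction = 1.0e5 * h  # assumed linear cost per metre of dam
    n_events = rng.poisson(lam * horizon, size=n_mc)
    damage = np.zeros(n_mc)
    for i, n in enumerate(n_events):
        runouts = rng.exponential(scale, size=n) - dam_runout_reduction(h)
        damage[i] = value_at_risk * np.sum(runouts >= x_building)
    return construction + damage.mean()

heights = np.arange(0.0, 21.0, 1.0)
costs = [expected_total_cost(h) for h in heights]
best_h = heights[int(np.argmin(costs))]
```

The optimal height balances the marginal construction cost against the marginal reduction in expected damage; with these assumed numbers it lands in the mid-teens of metres.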
Doppler reactivity worth uncertainties due to errors of resolved resonance parameters
International Nuclear Information System (INIS)
Errors of the resolved resonance parameters in the evaluated nuclear data file JENDL-3.2 were evaluated on the basis of the Breit-Wigner multi-level formula. For the Reich-Moore resonance parameters, errors equivalent to those of the Breit-Wigner resonance parameters were obtained. Uncertainties of the Doppler reactivity worth are estimated from the sensitivity coefficients of the infinitely dilute cross section and the resonance self-shielding factor to changes in the resonance parameter of interest. The resonance self-shielding factor based on the NR approximation was described analytically. The total uncertainty of the Doppler reactivity worth ρ over all resonances was estimated by means of the error propagation law. (author)
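The error propagation law used here combines sensitivity coefficients and parameter errors in quadrature (with an optional correlation matrix). A generic sketch with invented sensitivities and relative errors, not the JENDL-3.2 values:

```python
import numpy as np

# Hypothetical relative sensitivity coefficients of the Doppler worth rho
# to three resonance parameters: d(rho)/rho per d(p)/p.
sensitivities = np.array([0.8, -0.3, 0.15])
# Relative standard errors of the parameters (assumed values).
rel_errors = np.array([0.05, 0.10, 0.20])
# Correlation matrix between parameter errors (identity = uncorrelated).
corr = np.eye(3)

# Error propagation law: var = s^T C s, with C the relative covariance matrix.
cov = corr * np.outer(rel_errors, rel_errors)
rel_var_rho = sensitivities @ cov @ sensitivities
rel_unc_rho = np.sqrt(rel_var_rho)
```

Off-diagonal entries in `corr` would let correlated resonance-parameter errors raise or lower the total, which is exactly what the summation "for whole resonance" has to account for.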
Validity of Parametrized Quark Propagator
Institute of Scientific and Technical Information of China (English)
ZHU Ji-Zhen; ZHOU Li-Juan; MA Wei-Xing
2005-01-01
Based on an extensive study of the Dyson-Schwinger equations for a fully dressed quark propagator in the "rainbow" approximation, a parametrized fully dressed quark propagator is proposed in this paper. The parametrized propagator describes a confining quark propagator in a hadron since it is analytic everywhere in the complex p2-plane and has no Lehmann representation. The validity of the new propagator is discussed by comparing its predictions for the self-energy functions Af(p2), Bf(p2) and the effective mass Mf(p2) of a quark with flavor f to the corresponding theoretical results produced by the Dyson-Schwinger equations. Our comparison shows that the parametrized quark propagator is a good approximation to the fully dressed quark propagator given by the solutions of the Dyson-Schwinger equations in the rainbow approximation and is convenient to use in theoretical calculations.
Validity of Parametrized Quark Propagator
Institute of Scientific and Technical Information of China (English)
ZHU Ji-Zhen; ZHOU Li-Juan; MA Wei-Xing
2005-01-01
Based on an extensive study of the Dyson-Schwinger equations for a fully dressed quark propagator in the "rainbow" approximation, a parametrized fully dressed quark propagator is proposed in this paper. The parametrized propagator describes a confining quark propagator in a hadron since it is analytic everywhere in the complex p2-plane and has no Lehmann representation. The validity of the new propagator is discussed by comparing its predictions for the self-energy functions Af(p2), Bf(p2) and the effective mass Mf(p2) of a quark with flavor f to the corresponding theoretical results produced by the Dyson-Schwinger equations. Our comparison shows that the parametrized quark propagator is a good approximation to the fully dressed quark propagator given by the solutions of the Dyson-Schwinger equations in the rainbow approximation and is convenient to use in theoretical calculations.
Lindley, Dennis V
2013-01-01
Praise for the First Edition: "...a reference for everyone who is interested in knowing and handling uncertainty." - Journal of Applied Statistics. The critically acclaimed First Edition of Understanding Uncertainty provided a study of uncertainty addressed to scholars in all fields, showing that uncertainty could be measured by probability, and that probability obeyed three basic rules that enabled uncertainty to be handled sensibly in everyday life. These ideas were extended to embrace the scientific method and to show how decisions, containing an uncertain element, could be rationally made.
Estimation of sedimentary proxy records together with associated uncertainty
Directory of Open Access Journals (Sweden)
B. Goswami
2014-06-01
Sedimentary proxy records constitute a significant portion of the recorded evidence that allows us to investigate paleoclimatic conditions and variability. However, uncertainties in the dating of proxy archives limit our ability to fix the timing of past events and interpret proxy record inter-comparisons. While there are various age-modeling approaches to improve the estimation of the age-depth relations of archives, less focus has been given to the propagation of the age (and radiocarbon calibration) uncertainties into the final proxy record. We present a generic Bayesian framework to estimate proxy records along with their associated uncertainty, starting with the radiometric age-depth and proxy-depth measurements, and a radiometric calibration curve if required. We provide analytical expressions for the posterior proxy probability distributions at any given calendar age, from which the expected proxy values and their uncertainty can be estimated. We illustrate our method using two synthetic datasets and then use it to construct the proxy records for groundwater inflow and surface erosion from Lonar lake in central India. Our analysis reveals interrelations between the uncertainty of the proxy record over time and the variance of the proxy along the depth of the archive. For the Lonar lake proxies, we show that, rather than the age uncertainties, it is the proxy variance combined with the calibration uncertainty that accounts for most of the final uncertainty. We represent the proxy records as probability distributions on a precise, error-free time scale, which makes further time series analyses and inter-comparison of proxies relatively simpler and clearer. Our approach provides a coherent understanding of age uncertainties within sedimentary proxy records that involve radiometric dating. It can potentially be used within existing age modeling structures to bring forth a reliable and consistent framework for proxy record estimation.
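The core idea, propagating age-depth uncertainty into a proxy value at a fixed calendar age, can be illustrated with a Monte Carlo stand-in for the analytical posterior. The archive, dates, and error magnitudes below are synthetic, and the simple sort-to-keep-monotone age model is a crude substitute for a proper Bayesian age-depth model:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic archive: proxy measured along depth, dated at a few depths.
depth = np.linspace(0.0, 10.0, 101)              # m
proxy = np.sin(depth) + rng.normal(0, 0.05, depth.size)
date_depths = np.array([0.0, 5.0, 10.0])
date_ages = np.array([0.0, 5000.0, 11000.0])     # calendar years
date_sigmas = np.array([50.0, 150.0, 300.0])     # dating 1-sigma errors

query_age = 6000.0
n_mc = 3000
vals = np.empty(n_mc)
for i in range(n_mc):
    # One monotone age-depth realization: perturb the dates, keep order.
    ages = np.sort(date_ages + rng.normal(0, date_sigmas))
    age_at_depth = np.interp(depth, date_depths, ages)
    # Proxy value at the query age under this realization (+ proxy noise).
    vals[i] = np.interp(query_age, age_at_depth, proxy) + rng.normal(0, 0.05)

# Distribution of the proxy at a fixed, error-free calendar age.
p025, p50, p975 = np.percentile(vals, [2.5, 50, 97.5])
```

The resulting spread mixes the dating uncertainty (via the local age-depth slope) with the proxy variance along depth, the interplay the abstract highlights.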
The Uncertainty of Measurement Results
International Nuclear Information System (INIS)
Factors affecting the uncertainty of measurement are explained, basic statistical formulae given, and the theoretical concept explained in the context of pesticide formulation analysis. Practical guidance is provided on how to determine individual uncertainty components within an analytical procedure. An extended and comprehensive table containing the relevant mathematical/statistical expressions elucidates the relevant underlying principles. Appendix I provides a practical elaborated example on measurement uncertainty estimation, above all utilizing experimental repeatability and reproducibility laboratory data. (author)
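A worked example in the spirit of the appendix, combining repeatability and reproducibility data with a calibration component into a combined and expanded uncertainty. The replicate values and the calibration uncertainty are invented for illustration:

```python
import math

# Hypothetical replicate results (g/kg) of a pesticide formulation assay.
within_day = [49.8, 50.1, 50.3, 49.9, 50.2]   # repeatability set
day_means = [50.06, 49.80, 50.35, 50.10]      # reproducibility (between-day) set

def std(xs):
    m = sum(xs) / len(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))

u_repeat = std(within_day) / math.sqrt(len(within_day))   # std. error of mean
u_between = std(day_means) / math.sqrt(len(day_means))
u_cal = 0.10                                              # assumed calibration unc.

# Combined standard uncertainty (uncorrelated components, root sum of squares),
# expanded with coverage factor k = 2 (~95% confidence).
u_c = math.sqrt(u_repeat ** 2 + u_between ** 2 + u_cal ** 2)
U = 2 * u_c
```

The result would be reported as "mean ± U (k = 2)", with each component traceable to a row of the uncertainty budget.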
Gear Crack Propagation Investigation
1995-01-01
Reduced weight is a major design goal in aircraft power transmissions. Some gear designs incorporate thin rims to help meet this goal. Thin rims, however, may lead to bending fatigue cracks. These cracks may propagate through a gear tooth or into the gear rim. A crack that propagates through a tooth would probably not be catastrophic, and ample warning of a failure could be possible. On the other hand, a crack that propagates through the rim would be catastrophic. Such cracks could lead to disengagement of a rotor or propeller from an engine, loss of an aircraft, and fatalities. To help create and validate tools for the gear designer, the NASA Lewis Research Center performed in-house analytical and experimental studies to investigate the effect of rim thickness on gear-tooth crack propagation. Our goal was to determine whether cracks grew through gear teeth (benign failure mode) or through gear rims (catastrophic failure mode) for various rim thicknesses. In addition, we investigated the effect of rim thickness on crack propagation life. A finite-element-based computer program simulated gear-tooth crack propagation. The analysis used principles of linear elastic fracture mechanics, and quarter-point, triangular elements were used at the crack tip to represent the stress singularity. The program had an automated crack propagation option in which cracks were grown numerically via an automated remeshing scheme. Crack-tip stress-intensity factors were estimated to determine crack-propagation direction. Also, various fatigue crack growth models were used to estimate crack-propagation life. Experiments were performed in Lewis' Spur Gear Fatigue Rig to validate predicted crack propagation results. Gears with various backup ratios were tested to validate crack-path predictions. Also, test gears were installed with special crack-propagation gages in the tooth fillet region to measure bending-fatigue crack growth. From both predictions and tests, gears with backup ratios
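One of the standard fatigue crack growth models the abstract alludes to is the Paris law, whose life estimate is a simple integral over crack length. All material constants, the geometry factor, and the crack lengths below are illustrative placeholders, not values from the NASA Lewis study:

```python
import math

# Paris-law life estimate for a growing crack (illustrative numbers only):
#   da/dN = C * (dK)^m,   dK = Y * dS * sqrt(pi * a)
C, m = 1.0e-12, 3.0     # material constants (assumed; SI units, dK in MPa*sqrt(m))
Y = 1.12                # geometry factor (assumed constant edge-crack value)
dS = 200.0              # applied stress range, MPa
a0, af = 0.5e-3, 5.0e-3 # initial and final crack lengths, m

# Numerically integrate N = int_{a0}^{af} da / (C * (Y * dS * sqrt(pi*a))^m)
# with the midpoint rule.
n_steps = 10000
da = (af - a0) / n_steps
N, a = 0.0, a0
for _ in range(n_steps):
    a_mid = a + 0.5 * da
    dK = Y * dS * math.sqrt(math.pi * a_mid)
    N += da / (C * dK ** m)
    a += da
```

In a rim-versus-tooth analysis, the same integration would be driven by stress-intensity factors from the finite-element model rather than the closed-form `dK` used here.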
Quantum dynamics via a time propagator in Wigner's phase space
DEFF Research Database (Denmark)
Grønager, Michael; Henriksen, Niels Engholm
1995-01-01
We derive an expression for a short-time phase space propagator. We use it in a new propagation scheme and demonstrate that it works for a Morse potential. The propagation scheme is used to propagate classical distributions which do not obey the Heisenberg uncertainty principle. It is shown...
Uncertainty analysis and probabilistic modelling
International Nuclear Information System (INIS)
Many factors affect the accuracy and precision of probabilistic assessment. This report discusses sources of uncertainty and ways of addressing them. Techniques for propagating uncertainties in model input parameters through to model prediction are discussed as are various techniques for examining how sensitive and uncertain model predictions are to one or more input parameters. Various statements of confidence which can be made concerning the prediction of a probabilistic assessment are discussed as are several matters of potential regulatory interest. 55 refs
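A minimal sketch of the two activities described, propagating input-parameter uncertainty to a prediction and ranking input sensitivity, using a toy two-parameter model and assumed input distributions (none of this comes from the report itself):

```python
import numpy as np

rng = np.random.default_rng(3)

def model(k, s, t=10.0):
    """Toy assessment model: a decaying release scaled by a sorption factor."""
    return 100.0 * np.exp(-k * t) * s

# Assumed input uncertainty: lognormal rate constant, uniform sorption factor.
n = 5000
k = rng.lognormal(mean=np.log(0.1), sigma=0.3, size=n)
s = rng.uniform(0.5, 1.5, size=n)
y = model(k, s)

# Prediction uncertainty...
y_mean, y_p95 = y.mean(), np.percentile(y, 95)

# ...and a simple sensitivity measure: rank correlation of inputs with output.
def rank_corr(a, b):
    ra = np.argsort(np.argsort(a))
    rb = np.argsort(np.argsort(b))
    return np.corrcoef(ra, rb)[0, 1]

sens_k = rank_corr(k, y)  # negative: faster decay, lower prediction
sens_s = rank_corr(s, y)  # positive
```

Statements of confidence such as "the 95th percentile of the prediction is y_p95" follow directly from the sampled distribution.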
Stochastic Expectation Propagation
Li, Yingzhen; Hernandez-Lobato, Jose Miguel; Turner, Richard E.
2015-01-01
Expectation propagation (EP) is a deterministic approximation algorithm that is often used to perform approximate Bayesian parameter learning. EP approximates the full intractable posterior distribution through a set of local approximations that are iteratively refined for each datapoint. EP can offer analytic and computational advantages over other approximations, such as Variational Inference (VI), and is the method of choice for a number of models. The local nature of EP appears to make it...
On Uncertainty of Compton Backscattering Process
Mo, X H
2013-01-01
The uncertainty of the Compton backscattering process is studied by means of analytical formulas, and the specific effects of varying energy spread and energy drift on the systematic uncertainty estimate are also studied with a Monte Carlo sampling technique. These quantitative conclusions are especially important for understanding the uncertainty of the beam energy measurement system.
Solomatine, Dimitri
2016-04-01
When speaking about model uncertainty, many authors implicitly assume data uncertainty (mainly in parameters or inputs), which is probabilistically described by distributions. Often, however, it is also useful to look into the residual uncertainty. It is hence reasonable to classify the main approaches to uncertainty analysis with respect to the two main types of model uncertainty that can be distinguished: A. The residual uncertainty of models. In this case the model parameters and/or model inputs are considered to be fixed (deterministic), i.e. the model is considered to be optimal (calibrated) and deterministic. Model error is considered the manifestation of uncertainty. If there are enough past data about the model errors (i.e., its uncertainty), it is possible to build a statistical or machine-learning model of uncertainty trained on these data. The following methods can be mentioned: (a) the quantile regression (QR) method of Koenker and Bassett, in which linear regression is used to build predictive models for distribution quantiles [1]; (b) a more recent approach that takes into account the input variables influencing such uncertainty and uses more advanced (non-linear) machine-learning methods (neural networks, model trees, etc.) - the UNEEC method [2,3,7]; (c) the even more recent DUBRAUE method (Dynamic Uncertainty Model By Regression on Absolute Error), an autoregressive model of model residuals (it corrects the model residual first and then carries out the uncertainty prediction by an autoregressive statistical model) [5]. B. The data uncertainty (parametric and/or input) - in this case we study the propagation of uncertainty (typically represented probabilistically) from parameters or inputs to the model outputs. In the case of simple functions representing models, analytical approaches can be used, or approximation methods (e.g., the first-order second-moment method). However, for real complex non-linear models implemented in software there is no other choice except using
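Method (a), quantile regression, can be sketched with the pinball (check) loss of Koenker and Bassett fitted by subgradient descent. The data, the linear quantile model, and the plain-numpy optimizer are all illustrative choices; production code would use a dedicated QR solver:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic "model residual vs. input" data with input-dependent spread.
x = rng.uniform(0.0, 10.0, 500)
err = rng.normal(0.0, 0.2 + 0.1 * x)  # residuals grow with x

def fit_quantile(x, y, tau, lr=0.01, epochs=5000):
    """Linear quantile regression y ~ a*x + b via subgradient descent
    on the pinball loss rho_tau(r) = r * (tau - 1[r < 0])."""
    a, b = 0.0, 0.0
    for _ in range(epochs):
        r = y - (a * x + b)
        # Subgradient of the mean pinball loss w.r.t. the prediction.
        g = np.where(r > 0, -tau, 1.0 - tau)
        a -= lr * np.mean(g * x)
        b -= lr * np.mean(g)
    return a, b

a_hi, b_hi = fit_quantile(x, err, 0.95)
a_lo, b_lo = fit_quantile(x, err, 0.05)

# The two fitted quantile lines form a 90% predictive uncertainty band.
x0 = 8.0
band = (a_lo * x0 + b_lo, a_hi * x0 + b_hi)
```

The UNEEC and DUBRAUE methods replace the linear map with non-linear learners and autoregressive structure, but the quantile-band output is of the same kind.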
Uncertainty covariances in robotics applications
International Nuclear Information System (INIS)
The application of uncertainty covariance matrices in the analysis of robot trajectory errors is explored. First, relevant statistical concepts are reviewed briefly. Then, a simple, hypothetical robot model is considered to illustrate methods for error propagation and performance test data evaluation. The importance of including error correlations is emphasized
Liu, Baoding
2015-01-01
When no samples are available to estimate a probability distribution, we have to invite some domain experts to evaluate the belief degree that each event will happen. Perhaps some people think that the belief degree should be modeled by subjective probability or fuzzy set theory. However, it is usually inappropriate because both of them may lead to counterintuitive results in this case. In order to rationally deal with belief degrees, uncertainty theory was founded in 2007 and subsequently studied by many researchers. Nowadays, uncertainty theory has become a branch of axiomatic mathematics for modeling belief degrees. This is an introductory textbook on uncertainty theory, uncertain programming, uncertain statistics, uncertain risk analysis, uncertain reliability analysis, uncertain set, uncertain logic, uncertain inference, uncertain process, uncertain calculus, and uncertain differential equation. This textbook also shows applications of uncertainty theory to scheduling, logistics, networks, data mining, c...
Error Propagation Analysis for Quantitative Intracellular Metabolomics
Directory of Open Access Journals (Sweden)
Jana Tillack
2012-11-01
Model-based analyses have become an integral part of modern metabolic engineering and systems biology in order to gain knowledge about complex and not directly observable cellular processes. For quantitative analyses, not only experimental data, but also measurement errors, play a crucial role. The total measurement error of any analytical protocol is the result of an accumulation of single errors introduced by several processing steps. Here, we present a framework for the quantification of intracellular metabolites, including error propagation during metabolome sample processing. Focusing on one specific protocol, we comprehensively investigate all currently known and accessible factors that ultimately impact the accuracy of intracellular metabolite concentration data. All intermediate steps are modeled, and their uncertainty with respect to the final concentration data is rigorously quantified. Finally, on the basis of a comprehensive metabolome dataset of Corynebacterium glutamicum, an integrated error propagation analysis for all parts of the model is conducted, and the most critical steps for intracellular metabolite quantification are detected.
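The accumulation of single errors across processing steps can be sketched as a chain of multiplicative error factors, compared against first-order analytical propagation. The step names and error magnitudes are invented, not the protocol's actual budget:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical workflow: each processing step contributes a relative error
# (standard deviation) to the final intracellular concentration estimate.
steps = {
    "quenching loss":          0.04,
    "extraction yield":        0.06,
    "volume measurement":      0.02,
    "LC-MS quantification":    0.05,
}

true_conc = 1.0  # umol/gCDW, arbitrary reference value

# Monte Carlo accumulation: multiplicative errors across all steps.
n = 20000
c = np.full(n, true_conc)
for sd in steps.values():
    c *= rng.normal(1.0, sd, size=n)

mc_rel_sd = c.std() / c.mean()

# First-order analytical propagation for comparison (small relative errors):
analytic = np.sqrt(sum(sd ** 2 for sd in steps.values()))
```

Ranking the `steps` entries by magnitude identifies the most critical step, here the assumed extraction yield, just as the integrated analysis does for the real protocol.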
A vector model for error propagation
International Nuclear Information System (INIS)
A simple vector model for error propagation, which is entirely equivalent to the conventional statistical approach, is discussed. It offers considerable insight into the nature of error propagation while, at the same time, readily demonstrating the significance of uncertainty correlations. This model is well suited to the analysis of error for sets of neutron-induced reaction cross sections. 7 refs., 1 fig
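The equivalence the abstract describes can be demonstrated in a few lines: each error component is a vector whose length is its standard deviation, the correlation is the cosine of the angle between the vectors, and the total uncertainty is the length of the vector sum. With made-up numbers for a simple sum y = x1 + x2:

```python
import math

# Two error components with standard deviations s1, s2 and correlation rho.
s1, s2, rho = 0.03, 0.04, 0.5

# Conventional statistical propagation for y = x1 + x2:
var_conventional = s1 ** 2 + s2 ** 2 + 2 * rho * s1 * s2

# Vector model: component i is a vector of length s_i; the angle between
# the vectors encodes the correlation, cos(theta) = rho. The total
# variance is the squared length of the vector sum.
theta = math.acos(rho)
v1 = (s1, 0.0)
v2 = (s2 * math.cos(theta), s2 * math.sin(theta))
vx, vy = v1[0] + v2[0], v1[1] + v2[1]
var_vector = vx ** 2 + vy ** 2
```

Expanding the squared length reproduces the conventional formula term by term, which makes the geometric picture useful for reasoning about correlated cross-section uncertainties.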
Solomatine, Dimitri
2016-04-01
When speaking about model uncertainty, many authors implicitly assume data uncertainty (mainly in parameters or inputs), which is probabilistically described by distributions. Often, however, it is also useful to look into the residual uncertainty. It is hence reasonable to classify the main approaches to uncertainty analysis with respect to the two main types of model uncertainty that can be distinguished: A. The residual uncertainty of models. In this case the model parameters and/or model inputs are considered to be fixed (deterministic), i.e. the model is considered to be optimal (calibrated) and deterministic. Model error is considered the manifestation of uncertainty. If there are enough past data about the model errors (i.e., its uncertainty), it is possible to build a statistical or machine-learning model of uncertainty trained on these data. The following methods can be mentioned: (a) the quantile regression (QR) method of Koenker and Bassett, in which linear regression is used to build predictive models for distribution quantiles [1]; (b) a more recent approach that takes into account the input variables influencing such uncertainty and uses more advanced (non-linear) machine-learning methods (neural networks, model trees, etc.) - the UNEEC method [2,3,7]; (c) the even more recent DUBRAUE method (Dynamic Uncertainty Model By Regression on Absolute Error), an autoregressive model of model residuals (it corrects the model residual first and then carries out the uncertainty prediction by an autoregressive statistical model) [5]. B. The data uncertainty (parametric and/or input) - in this case we study the propagation of uncertainty (typically represented probabilistically) from parameters or inputs to the model outputs. In the case of simple functions representing models, analytical approaches can be used, or approximation methods (e.g., the first-order second-moment method). However, for real complex non-linear models implemented in software there is no other choice except using
Climate model uncertainty vs. conceptual geological uncertainty in hydrological modeling
Directory of Open Access Journals (Sweden)
T. O. Sonnenborg
2015-04-01
Projections of climate change impact are associated with a cascade of uncertainties including CO2 emission scenario, climate model, downscaling and impact model. The relative importance of the individual uncertainty sources is expected to depend on several factors including the quantity that is projected. In the present study the impacts of climate model uncertainty and geological model uncertainty on hydraulic head, stream flow, travel time and capture zones are evaluated. Six versions of a physically based and distributed hydrological model, each containing a unique interpretation of the geological structure of the model area, are forced by 11 climate model projections. Each projection of future climate is a result of a GCM-RCM model combination (from the ENSEMBLES project) forced by the same CO2 scenario (A1B). The changes from the reference period (1991–2010) to the future period (2081–2100) in projected hydrological variables are evaluated and the effects of geological model and climate model uncertainties are quantified. The results show that uncertainty propagation is context dependent. While the geological conceptualization is the dominating uncertainty source for projection of travel time and capture zones, the uncertainty on the climate models is more important for groundwater hydraulic heads and stream flow.
Climate model uncertainty versus conceptual geological uncertainty in hydrological modeling
Sonnenborg, T. O.; Seifert, D.; Refsgaard, J. C.
2015-09-01
Projections of climate change impact are associated with a cascade of uncertainties including in CO2 emission scenarios, climate models, downscaling and impact models. The relative importance of the individual uncertainty sources is expected to depend on several factors including the quantity that is projected. In the present study the impacts of climate model uncertainty and geological model uncertainty on hydraulic head, stream flow, travel time and capture zones are evaluated. Six versions of a physically based and distributed hydrological model, each containing a unique interpretation of the geological structure of the model area, are forced by 11 climate model projections. Each projection of future climate is a result of a GCM-RCM model combination (from the ENSEMBLES project) forced by the same CO2 scenario (A1B). The changes from the reference period (1991-2010) to the future period (2081-2100) in projected hydrological variables are evaluated and the effects of geological model and climate model uncertainties are quantified. The results show that uncertainty propagation is context-dependent. While the geological conceptualization is the dominating uncertainty source for projection of travel time and capture zones, the uncertainty due to the climate models is more important for groundwater hydraulic heads and stream flow.
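The attribution of ensemble spread to the two sources can be illustrated with a two-way, ANOVA-style decomposition over the 6 x 11 matrix of projected changes. The numbers below are synthetic, with the climate effect deliberately made larger than the geology effect to mimic the hydraulic-head result:

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical projected changes (e.g., in mean hydraulic head, m) for every
# combination of 6 geological models and 11 climate-model projections.
n_geo, n_clim = 6, 11
geo_effect = rng.normal(0.0, 0.1, size=(n_geo, 1))    # smaller geology signal
clim_effect = rng.normal(0.0, 0.5, size=(1, n_clim))  # larger climate signal
change = 0.3 + geo_effect + clim_effect + rng.normal(0, 0.05, (n_geo, n_clim))

# Main-effect variances: spread of the row means (geology) and column
# means (climate) around the grand mean.
var_geo = np.var(change.mean(axis=1))
var_clim = np.var(change.mean(axis=0))
frac_clim = var_clim / (var_geo + var_clim)
```

For travel time or capture zones the effect sizes would be reversed, and `frac_clim` would drop below one half, the context dependence the study reports.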
Uncertainty of empirical correlation equations
Feistel, R.; Lovell-Smith, J. W.; Saunders, P.; Seitz, S.
2016-08-01
The International Association for the Properties of Water and Steam (IAPWS) has published a set of empirical reference equations of state, forming the basis of the 2010 Thermodynamic Equation of Seawater (TEOS-10), from which all thermodynamic properties of seawater, ice, and humid air can be derived in a thermodynamically consistent manner. For each of the equations of state, the parameters have been found by simultaneously fitting equations for a range of different derived quantities using large sets of measurements of these quantities. In some cases, uncertainties in these fitted equations have been assigned based on the uncertainties of the measurement results. However, because uncertainties in the parameter values have not been determined, it is not possible to estimate the uncertainty in many of the useful quantities that can be calculated using the parameters. In this paper we demonstrate how the method of generalised least squares (GLS), in which the covariance of the input data is propagated into the values calculated by the fitted equation, and in particular into the covariance matrix of the fitted parameters, can be applied to one of the TEOS-10 equations of state, namely IAPWS-95 for fluid pure water. Using the calculated parameter covariance matrix, we provide some preliminary estimates of the uncertainties in derived quantities, namely the second and third virial coefficients for water. We recommend further investigation of the GLS method for use as a standard method for calculating and propagating the uncertainties of values computed from empirical equations.
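The GLS machinery described, propagating the data covariance into a parameter covariance matrix and then into derived quantities, can be sketched on a toy quadratic fit. The model, the heteroscedastic noise, and the evaluation point are all invented; the real application involves far larger, correlated data sets:

```python
import numpy as np

rng = np.random.default_rng(7)

# Fit a quadratic "equation of state" y = p0 + p1*x + p2*x^2 to noisy data
# with known (here diagonal) covariance.
x = np.linspace(0.0, 1.0, 25)
sigma = 0.02 + 0.03 * x                 # heteroscedastic measurement sd
y_true = 1.0 + 0.5 * x - 0.2 * x ** 2
y = y_true + rng.normal(0.0, sigma)

X = np.column_stack([np.ones_like(x), x, x ** 2])  # design matrix
W = np.diag(1.0 / sigma ** 2)                      # inverse data covariance

# Generalised least squares: p = (X^T W X)^{-1} X^T W y, with parameter
# covariance Cp = (X^T W X)^{-1}.
Cp = np.linalg.inv(X.T @ W @ X)
p = Cp @ X.T @ W @ y

# Uncertainty of a derived quantity, e.g. the fitted y at x0, via u^2 = g^T Cp g.
x0 = 0.5
g = np.array([1.0, x0, x0 ** 2])
u_y0 = np.sqrt(g @ Cp @ g)
```

The same `g^T Cp g` pattern, with `g` the gradient of the derived quantity with respect to the parameters, gives the uncertainties of virial coefficients or any other quantity computed from the fitted equation.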
Extended Forward Sensitivity Analysis for Uncertainty Quantification
Energy Technology Data Exchange (ETDEWEB)
Haihua Zhao; Vincent A. Mousseau
2008-09-01
This report presents the forward sensitivity analysis method as a means for quantification of uncertainty in system analysis. The traditional approach to uncertainty quantification is based on a “black box” approach. The simulation tool is treated as an unknown signal generator, a distribution of inputs according to assumed probability density functions is sent in, and the distribution of the outputs is measured and correlated back to the original input distribution. This approach requires a large number of simulation runs and therefore has a high computational cost. Contrary to the “black box” method, a more efficient sensitivity approach can take advantage of intimate knowledge of the simulation code. In this approach, equations for the propagation of uncertainty are constructed and the sensitivity is solved for as variables in the same simulation. This “glass box” method can generate sensitivity information similar to the above “black box” approach with only a handful of runs covering a large uncertainty region. Because only a small number of runs is required, those runs can be done with a high accuracy in space and time, ensuring that the uncertainty of the physical model is being measured and not simply the numerical error caused by the coarse discretization. In the forward sensitivity method, the model is differentiated with respect to each parameter to yield an additional system of the same size as the original one, the result of which is the solution sensitivity. The sensitivity of any output variable can then be directly obtained from these sensitivities by applying the chain rule of differentiation. We extend the forward sensitivity method to include time and spatial steps as special parameters so that the numerical errors can be quantified against other physical parameters. This extension makes the forward sensitivity method a much more powerful tool to help uncertainty analysis. By knowing the relative sensitivity of time and space steps with other
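The core "glass box" step, differentiating the model equations with respect to a parameter and integrating the resulting sensitivity system alongside the original one, can be shown on the simplest possible ODE. The equation, parameter values, and forward-Euler integrator are illustrative, not the report's code:

```python
import math

# Forward sensitivity for dy/dt = -k*y, y(0) = y0.
# Differentiating the ODE with respect to k gives the sensitivity system
#   ds/dt = -y - k*s,  s(0) = 0,  where s = dy/dk.
k, y0, t_end, dt = 0.5, 1.0, 2.0, 1e-4

y, s, t = y0, 0.0, 0.0
while t < t_end - 1e-12:
    dy = -k * y        # original model right-hand side
    ds = -y - k * s    # sensitivity right-hand side (same size as the model)
    y += dt * dy
    s += dt * ds
    t += dt

# Analytic check: y = y0*exp(-k*t), so dy/dk = -t * y0 * exp(-k*t).
y_exact = y0 * math.exp(-k * t_end)
s_exact = -t_end * y_exact
```

Treating `dt` itself as an extra parameter, as the report proposes, would add one more sensitivity equation and let the discretization error be compared directly against the physical sensitivities.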
Orbital State Uncertainty Realism
Horwood, J.; Poore, A. B.
2012-09-01
Fundamental to the success of the space situational awareness (SSA) mission is the rigorous inclusion of uncertainty in the space surveillance network. The proper characterization of uncertainty in the orbital state of a space object is a common requirement of many SSA functions, including tracking and data association, resolution of uncorrelated tracks (UCTs), conjunction analysis and probability of collision, sensor resource management, and anomaly detection. While tracking environments such as air and missile defense make extensive use of Gaussian and local linearity assumptions within algorithms for uncertainty management, space surveillance is inherently different due to long time gaps between updates, high misdetection rates, nonlinear and non-conservative dynamics, and non-Gaussian phenomena. The latter implies that "covariance realism" is not always sufficient: SSA also requires "uncertainty realism", the proper characterization of the state, the covariance, and all non-zero higher-order cumulants. In other words, a proper characterization of a space object's full state probability density function (PDF) is required. In order to provide a more statistically rigorous treatment of uncertainty in the space surveillance tracking environment and to better support the aforementioned SSA functions, a new class of multivariate PDFs is formulated which more accurately characterizes the uncertainty of a space object's state or orbit. The new distribution contains a parameter set controlling the higher-order cumulants, which gives the level sets a distinctive "banana" or "boomerang" shape and degenerates to a Gaussian in a suitable limit. Using the new class of PDFs within the general Bayesian nonlinear filter, the resulting filter prediction step (i.e., uncertainty propagation) is shown to have the same computational cost as the traditional unscented Kalman filter, with the former able to maintain a proper characterization of the uncertainty for up to ten
Energy Technology Data Exchange (ETDEWEB)
Wendelberger, James G. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-08-10
These are slides from a presentation made by a researcher from Los Alamos National Laboratory. The following topics are covered: sources of error for NDA gamma measurements, precision and accuracy are two important characteristics of measurements, four items processed in a material balance area during the inventory time period, inventory difference and propagation of variance, sum in quadrature, and overview of the ID/POV process.
Directory of Open Access Journals (Sweden)
D. D. Lucas
2004-10-01
A study of the current significant uncertainties in dimethylsulfide (DMS) gas-phase chemistry provides insight into the additional research needed to decrease these uncertainties. The DMS oxidation cycle in the remote marine boundary layer is simulated using a diurnally-varying box model with 56 uncertain chemical and physical parameters. Two analytical methods (direct integration and probabilistic collocation) are used to determine the most influential parameters (sensitivity analysis) and sources of uncertainty (uncertainty analysis) affecting the concentrations of DMS, SO_{2}, methanesulfonic acid (MSA), and H_{2}SO_{4}. The key parameters identified by the sensitivity analysis are associated with DMS emissions, mixing into and out of the boundary layer, heterogeneous removal of soluble sulfur-containing compounds, and the DMS+OH addition and abstraction reactions. MSA and H_{2}SO_{4} are also sensitive to the rate constants of numerous other reactions, which limits the effectiveness of mechanism reduction techniques. Propagating the parameter uncertainties through the model leads to concentrations that are uncertain by factors of 1.8 to 3.0. The main sources of uncertainty are DMS emissions and heterogeneous scavenging. Uncertain chemical rate constants, however, collectively account for up to 50–60% of the net uncertainties in MSA and H_{2}SO_{4}. The concentration uncertainties are also calculated at different temperatures, where they vary mainly due to temperature-dependent chemistry. With changing temperature, the uncertainties of DMS and SO_{2} remain steady, while the uncertainties of MSA and H_{2}SO_{4} vary by factors of 2 to 4.
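The propagation of lognormal parameter uncertainties into a multiplicative concentration uncertainty factor can be illustrated with a deliberately tiny stand-in for such a box model. The steady-state balance and the spread factors below are invented for illustration and are not the paper's 56-parameter mechanism.

```python
import math, random

random.seed(1)

def steady_state_dms(emission, k_ox, mixing):
    # toy steady-state balance: source / (chemical loss + ventilation)
    return emission / (k_ox + mixing)

samples = []
for _ in range(20000):
    # lognormal parameter draws; medians and spread factors are hypothetical
    e = 1.0 * math.exp(random.gauss(0.0, math.log(1.5)))  # DMS emission
    k = 0.2 * math.exp(random.gauss(0.0, math.log(1.3)))  # oxidation loss
    m = 0.1 * math.exp(random.gauss(0.0, math.log(1.4)))  # ventilation
    samples.append(steady_state_dms(e, k, m))

logs = [math.log(c) for c in samples]
mu = sum(logs) / len(logs)
sigma = (sum((x - mu) ** 2 for x in logs) / len(logs)) ** 0.5
factor = math.exp(sigma)  # multiplicative one-sigma uncertainty factor
print("uncertainty factor:", factor)
```

Reporting the spread as a multiplicative factor, as in the abstract's "uncertain by factors of 1.8 to 3.0", is natural when the output distribution is close to lognormal.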
Statistically qualified neuro-analytic failure detection method and system
Vilim, Richard B.; Garcia, Humberto E.; Chen, Frederick W.
2002-03-02
An apparatus and method for monitoring a process involve development and application of a statistically qualified neuro-analytic (SQNA) model to accurately and reliably identify process change. The development of the SQNA model is accomplished in two stages: deterministic model adaptation and stochastic modification of the deterministic model. Deterministic model adaptation involves formulating an analytic model of the process representing known process characteristics, augmenting the analytic model with a neural network that captures unknown process characteristics, and training the resulting neuro-analytic model by adjusting the neural network weights according to a unique scaled equation-error minimization technique. Stochastic model modification involves qualifying any remaining uncertainty in the trained neuro-analytic model by formulating a likelihood function, given an error propagation equation, for computing the probability that the neuro-analytic model generates the measured process output. Preferably, the developed SQNA model is validated using known sequential probability ratio tests and applied to the process as an on-line monitoring system. Illustrative of the method and apparatus, the method is applied to a peristaltic pump system.
DEFF Research Database (Denmark)
Nguyen, Daniel Xuyen
This paper presents a model of trade that explains why firms wait to export and why many exporters fail. Firms face uncertain demands that are only realized after the firm enters the destination. The model retools the timing of uncertainty resolution found in productivity heterogeneity models. … untested destinations. The option to forecast demands causes firms to delay exporting in order to gather more information about foreign demand. Third, since uncertainty is resolved after entry, many firms enter a destination and then exit after learning that they cannot profit. This prediction reconciles the high rate of exit seen in the first years of exporting. Finally, when faced with multiple countries in which to export, some firms will choose to export sequentially in order to slowly learn more about their chances for success in untested markets.
Deterministic uncertainty analysis
International Nuclear Information System (INIS)
This paper presents a deterministic uncertainty analysis (DUA) method for calculating uncertainties that has the potential to significantly reduce the number of computer runs compared to conventional statistical analysis. The method is based upon the availability of derivative and sensitivity data such as that calculated using the well known direct or adjoint sensitivity analysis techniques. Formation of response surfaces using derivative data and the propagation of input probability distributions are discussed relative to their role in the DUA method. A sample problem that models the flow of water through a borehole is used as a basis to compare the cumulative distribution function of the flow rate as calculated by the standard statistical methods and the DUA method. Propagation of uncertainties by the DUA method is compared for ten cases in which the number of reference model runs was varied from one to ten. The DUA method gives a more accurate representation of the true cumulative distribution of the flow rate based upon as few as two model executions compared to fifty model executions using a statistical approach. 16 refs., 4 figs., 5 tabs
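A minimal sketch of the DUA idea, assuming a hypothetical two-parameter model: one reference run plus derivative (sensitivity) data define a first-order response surface, which is then sampled cheaply in place of the expensive model to build the output distribution.

```python
import math, random

random.seed(0)

def model(x1, x2):
    # stand-in for an expensive simulation (e.g. borehole flow)
    return math.exp(0.3 * x1) * (1.0 + 0.5 * x2)

# One reference run plus derivative data, as would come from
# direct or adjoint sensitivity analysis
x0 = (0.0, 0.0)
f0 = model(*x0)
g = (0.3 * f0, 0.5 * f0)  # analytic partial derivatives at x0

def surrogate(x1, x2):
    # first-order response surface: cheap to evaluate many times
    return f0 + g[0] * (x1 - x0[0]) + g[1] * (x2 - x0[1])

# Propagate the input distributions through the surrogate, not the model
draws = sorted(surrogate(random.gauss(0, 0.1), random.gauss(0, 0.1))
               for _ in range(10000))
median = draws[len(draws) // 2]
print("median of output distribution:", median)
```

The single expensive step is the reference run with derivatives; the cumulative distribution is then recovered from the surrogate at negligible cost.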
Clyde, Merlise; George, Edward I.
2004-01-01
The evolution of Bayesian approaches for model uncertainty over the past decade has been remarkable. Catalyzed by advances in methods and technology for posterior computation, the scope of these methods has widened substantially. Major thrusts of these developments have included new methods for semiautomatic prior specification and posterior exploration. To illustrate key aspects of this evolution, the highlights of some of these developments are described.
Background and Qualification of Uncertainty Methods
International Nuclear Information System (INIS)
The evaluation of uncertainty constitutes the necessary supplement to best-estimate calculations performed to understand accident scenarios in water-cooled nuclear reactors. The need arises from the imperfection of computational tools on the one hand and from the interest in using such tools to obtain more precise evaluations of safety margins on the other. The paper reviews the salient features of two independent approaches for estimating the uncertainties associated with the predictions of complex system codes: the propagation of code input error and the propagation of the calculation output error constitute the keywords identifying the methods of current interest for industrial applications. With the developed methods, uncertainty bands (both upper and lower) can be derived for any desired quantity of the transient of interest. In the second case, the uncertainty method is coupled with the thermal-hydraulic code to obtain the Code with capability of Internal Assessment of Uncertainty, whose features are discussed in more detail.
Optimizing production under uncertainty
DEFF Research Database (Denmark)
Rasmussen, Svend
This Working Paper derives criteria for optimal production under uncertainty based on the state-contingent approach (Chambers and Quiggin, 2000), and discusses potential problems involved in applying the state-contingent approach in a normative context. The analytical approach uses the concept of state-contingent production functions and a definition of inputs including both sort of input, activity and allocation technology. It also analyses production decisions where production is combined with trading in state-contingent claims such as insurance contracts. The final part discusses …
Improved beam propagation method equations.
Nichelatti, E; Pozzi, G
1998-01-01
Improved beam propagation method (BPM) equations are derived for the general case of arbitrary refractive-index spatial distributions. It is shown that in the paraxial approximation the discrete equations admit an analytical solution for the propagation of a paraxial spherical wave, which converges to the analytical solution of the paraxial Helmholtz equation. The generalized Kirchhoff-Fresnel diffraction integral between the object and the image planes can be derived, with its coefficients expressed in terms of the standard ABCD matrix. This result allows the substitution, in the case of an unaberrated system, of the many numerical steps with a single analytical step. We compared the predictions of the standard and improved BPM equations by considering the cases of a Maxwell fish-eye and of a Luneburg lens. PMID:18268554
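The role of the ABCD matrix in paraxial propagation can be illustrated with the standard Gaussian-beam q-parameter transformation q' = (Aq + B)/(Cq + D), the same ray matrices that enter the generalized Kirchhoff-Fresnel integral mentioned above. The wavelength, waist, and focal length below are arbitrary illustrative values.

```python
# Propagate a Gaussian beam q-parameter through an optical system
# described by an ABCD ray matrix: q' = (A*q + B) / (C*q + D).

def abcd_apply(M, q):
    (A, B), (C, D) = M
    return (A * q + B) / (C * q + D)

def mat_mul(M2, M1):  # system matrix: apply M1 first, then M2
    (a, b), (c, d) = M2
    (e, f), (g, h) = M1
    return ((a * e + b * g, a * f + b * h), (c * e + d * g, c * f + d * h))

wavelength = 633e-9
w0 = 1e-3                                  # input waist (m)
zR = 3.141592653589793 * w0 ** 2 / wavelength
q = complex(0.0, zR)                       # beam waist at the input plane

lens = ((1.0, 0.0), (-1.0 / 0.1, 1.0))     # thin lens, f = 0.1 m
space = ((1.0, 0.1), (0.0, 1.0))           # free space, d = 0.1 m
q_out = abcd_apply(mat_mul(space, lens), q)
print(q_out)  # Re(q) ~ 0 at the focal plane: the new waist sits there
```

Composing the two elementary matrices into one system matrix mirrors the paper's point that many numerical BPM steps can be replaced by a single analytical step for an unaberrated system.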
A new algorithm for five-hole probe calibration, data reduction, and uncertainty analysis
Reichert, Bruce A.; Wendt, Bruce J.
1994-01-01
A new algorithm for five-hole probe calibration and data reduction using a non-nulling method is developed. The significant features of the algorithm are: (1) two components of the unit vector in the flow direction replace pitch and yaw angles as flow direction variables; and (2) symmetry rules are developed that greatly simplify Taylor's series representations of the calibration data. In data reduction, four pressure coefficients allow total pressure, static pressure, and flow direction to be calculated directly. The new algorithm's simplicity permits an analytical treatment of the propagation of uncertainty in five-hole probe measurement. The objectives of the uncertainty analysis are to quantify uncertainty of five-hole results (e.g., total pressure, static pressure, and flow direction) and determine the dependence of the result uncertainty on the uncertainty of all underlying experimental and calibration measurands. This study outlines a general procedure that other researchers may use to determine five-hole probe result uncertainty and provides guidance to improve measurement technique. The new algorithm is applied to calibrate and reduce data from a rake of five-hole probes. Here, ten individual probes are mounted on a single probe shaft and used simultaneously. Use of this probe is made practical by the simplicity afforded by this algorithm.
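A generic sketch of the kind of first-order (Taylor series) uncertainty propagation the abstract refers to, applied here to a hypothetical pressure coefficient rather than the paper's actual calibration coefficients; the pressures and their uncertainties are invented.

```python
def propagate_uncertainty(f, x, u):
    """First-order (Taylor series) uncertainty propagation:
    u_R^2 = sum_i (df/dx_i)^2 * u_i^2, with central-difference partials."""
    base = f(*x)
    var = 0.0
    for i, ui in enumerate(u):
        h = 1e-6 * (abs(x[i]) or 1.0)
        xp = list(x); xm = list(x)
        xp[i] += h; xm[i] -= h
        dfdx = (f(*xp) - f(*xm)) / (2 * h)   # sensitivity to measurand i
        var += (dfdx * ui) ** 2
    return base, var ** 0.5

# hypothetical pressure coefficient from probe pressures (Pa)
cp = lambda p, ps, pt: (p - ps) / (pt - ps)
value, u_cp = propagate_uncertainty(cp, [101500.0, 101000.0, 102000.0],
                                    [10.0, 10.0, 10.0])
print(value, u_cp)
```

The per-measurand terms inside the sum are exactly the "dependence of the result uncertainty on the uncertainty of all underlying measurands" that the analysis quantifies.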
International Nuclear Information System (INIS)
The main part of this thesis consists of 15 published papers, in which the numerical Beam Propagating Method (BPM) is investigated, verified and used in a number of applications. In the introduction a derivation of the nonlinear Schroedinger equation is presented to connect the beginning of the soliton papers with Maxwell's equations including a nonlinear polarization. This thesis focuses on the wide use of the BPM for numerical simulations of propagating light and particle beams through different types of structures such as waveguides, fibers, tapers, Y-junctions, laser arrays and crystalline solids. We verify the BPM in the above listed problems against other numerical methods, for example the Finite-element Method, perturbation methods and Runge-Kutta integration. Further, the BPM is shown to be a simple and effective way to numerically set up the Green's function in matrix form for periodic structures. The Green's function matrix can then be diagonalized with matrix methods yielding the eigensolutions of the structure. The BPM's inherent transverse periodicity can be untied, if desired, by, for example, including an absorptive refractive index at the computational window edges. The interaction of two first-order soliton pulses is strongly dependent on the phase relationship between the individual solitons. When optical phase shift keying is used in coherent one-carrier wavelength communication, the fiber attenuation will suppress or delay the nonlinear instability. (orig.)
Parmentier, E. M.; Schubert, G.
1989-01-01
A model for rift propagation is presented that treats the rift as a crack in an elastic plate which is filled from beneath by upwelling viscous asthenosphere as it lengthens and opens. Growth of the crack is driven by either remotely applied forces or the pressure of buoyant asthenosphere in the crack, and is resisted by viscous stresses associated with filling the crack. The model predicts a time for a rift to form which depends primarily on the driving stress and asthenosphere viscosity. For a driving stress on the order of 10 MPa, as expected from the topography of rifted swells, the development of rifts over times of a few Myr requires an asthenosphere viscosity of 10^16 Pa s (10^17 poise). This viscosity, which is several orders of magnitude less than values determined by postglacial rebound and at least one order of magnitude less than that inferred for spreading center propagation, may reflect a high temperature or large amount of partial melting in the mantle beneath a rifted swell.
Quantifying uncertainty in LCA-modelling of waste management systems
International Nuclear Information System (INIS)
Highlights: ► Uncertainty in LCA-modelling of waste management is significant. ► Model, scenario and parameter uncertainties contribute. ► Sequential procedure for quantifying uncertainty is proposed. ► Application of procedure is illustrated by a case-study. - Abstract: Uncertainty analysis in LCA studies has been subject to major progress over the last years. In the context of waste management, various methods have been implemented but a systematic method for uncertainty analysis of waste-LCA studies is lacking. The objective of this paper is (1) to present the sources of uncertainty specifically inherent to waste-LCA studies, (2) to select and apply several methods for uncertainty analysis and (3) to develop a general framework for quantitative uncertainty assessment of LCA of waste management systems. The suggested method is a sequence of four steps combining the selected methods: (Step 1) a sensitivity analysis evaluating the sensitivities of the results with respect to the input uncertainties, (Step 2) an uncertainty propagation providing appropriate tools for representing uncertainties and calculating the overall uncertainty of the model results, (Step 3) an uncertainty contribution analysis quantifying the contribution of each parameter uncertainty to the final uncertainty and (Step 4) as a new approach, a combined sensitivity analysis providing a visualisation of the shift in the ranking of different options due to variations of selected key parameters. This tiered approach optimises the resources available to LCA practitioners by only propagating the most influential uncertainties.
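Steps 1-3 of the proposed sequence can be sketched on a deliberately tiny stand-in LCA model; every activity level, emission factor, and characterization factor below is invented for illustration.

```python
import random

random.seed(2)

# Toy waste-LCA model (hypothetical numbers): impact = sum over processes of
# (activity * emission factor), times a characterization factor.
means = [0.12, 0.90, 1.00]   # ef_transport, ef_incineration, cf_co2
stds = [0.02, 0.15, 0.05]

def impact(p):
    return (p[0] * 120.0 + p[1] * 40.0) * p[2]  # per tonne of waste

# Step 1: local sensitivity -- perturb each parameter by 10%
base = impact(means)
sens = []
for i in range(3):
    p = list(means); p[i] *= 1.1
    sens.append((impact(p) - base) / (0.1 * base))

# Step 2: uncertainty propagation by Monte Carlo sampling
runs = [impact([random.gauss(m, s) for m, s in zip(means, stds)])
        for _ in range(20000)]
mean = sum(runs) / len(runs)
var = sum((r - mean) ** 2 for r in runs) / len(runs)

# Step 3: contribution analysis -- first-order share of variance per parameter
dfdp = [120.0 * means[2], 40.0 * means[2], means[0] * 120.0 + means[1] * 40.0]
contrib = [(d * s) ** 2 / var for d, s in zip(dfdp, stds)]
print(mean, var ** 0.5, contrib)
```

The contribution shares identify which parameter uncertainties are worth propagating further (here, the incineration emission factor dominates), which is exactly the resource-saving logic of the tiered approach.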
Pole solutions for flame front propagation
Kupervasser, Oleg
2015-01-01
This book deals with solving mathematically the unsteady flame propagation equations. New original mathematical methods for solving complex non-linear equations and investigating their properties are presented. Pole solutions for flame front propagation are developed. Premixed flames and filtration combustion have remarkable properties: the complex nonlinear integro-differential equations for these problems have exact analytical solutions described by the motion of poles in a complex plane. Instead of complex equations, a finite set of ordinary differential equations is applied. These solutions help to investigate analytically and numerically properties of the flame front propagation equations.
Uncertainty Characterization of Reactor Vessel Fracture Toughness
International Nuclear Information System (INIS)
To perform fracture mechanics analysis of a reactor vessel, the fracture toughness (KIc) at various temperatures is necessary. In a best-estimate approach, KIc uncertainties resulting from both lack of sufficient knowledge and randomness in some of the variables of KIc must be characterized. Although it may be argued that there is only one type of uncertainty, namely lack of perfect knowledge about the subject under study, in practice KIc uncertainties can be divided into two types: aleatory and epistemic. Aleatory uncertainty is related to uncertainty that is very difficult, if not impossible, to reduce; epistemic uncertainty, on the other hand, can be practically reduced. Distinction between aleatory and epistemic uncertainties facilitates decision-making under uncertainty and allows for proper propagation of uncertainties in the computation process. Typically, epistemic uncertainties representing, for example, parameters of a model are sampled (to generate a 'snapshot', a single value of the parameters), but the totality of aleatory uncertainties is carried through the calculation as available. In this paper a description of an approach to account for these two types of uncertainties associated with KIc has been provided. (authors)
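The 'snapshot' sampling described above is often implemented as a double loop: the outer loop samples epistemic uncertainty, the inner loop carries the aleatory scatter. The sketch below uses invented toughness and load numbers, not actual vessel data.

```python
import random

random.seed(3)

# Double-loop sketch: outer loop samples epistemic uncertainty (imperfect
# knowledge of the KIc distribution mean), inner loop carries the aleatory
# scatter of KIc itself. All values are illustrative.
applied_k = 80.0  # MPa*sqrt(m), deterministic applied stress intensity

failure_probs = []
for _ in range(200):                       # epistemic "snapshots"
    mean_kic = random.gauss(100.0, 8.0)    # uncertain distribution mean
    n_fail = sum(random.gauss(mean_kic, 12.0) < applied_k  # aleatory scatter
                 for _ in range(2000))
    failure_probs.append(n_fail / 2000)

failure_probs.sort()
print("median Pf:", failure_probs[100], "95th percentile:", failure_probs[190])
```

Keeping the two loops separate yields a distribution of failure probabilities rather than a single number, which is what makes the aleatory/epistemic distinction useful for decision-making.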
Managing Uncertainty in Capacity Investment, Revenue Management, and Supply Chain Coordination
Liu, Juqi
2009-01-01
"Uncertainty" is used broadly to refer to things that are unknown or incompletely understood. In operations management, basic sources of uncertainty may include decision uncertainty, model uncertainty, analytical uncertainty, data uncertainty, and so on. Although uncertainty is unavoidable in decision making, different mechanisms can be designed to mitigate the impact of uncertainty. One commonly used strategy is "decision postponement," wherein the decision maker purposefully delays some of ...
International Nuclear Information System (INIS)
Uncertainties of reactivities due to those of resolved resonance parameters are evaluated by the so-called "direct k-difference method". The effective cross section of an individual isotope and reaction type is described in terms of the infinitely dilute cross section σ∞_x,i,k and the resonance self-shielding factor f_x,i,k (x: reaction, i: isotope, k: resonance sequence number) as a function of the resonance parameters, and the reactivity is evaluated from the neutron balance using the effective cross section and neutron flux. Consequently, reactivity uncertainties, such as that of the effective multiplication factor, can be estimated from the sensitivity coefficients of the infinitely dilute cross section and resonance self-shielding factor with respect to changes in the resonance parameters of interest. In the present work, the uncertainties of the resolved resonance parameters in the evaluated nuclear data file JENDL-3.2 were estimated on the basis of the multi-level Breit-Wigner formula. For the Reich-Moore resonance parameters compiled in the library, uncertainties equivalent to Breit-Wigner resonance parameters are estimated. The resonance self-shielding factor based on the NR approximation is described analytically. A reactivity uncertainty evaluation method for the effective multiplication factor keff, temperature coefficient α, and Doppler reactivity worth ρ is developed by means of sensitivity coefficients with respect to the resonance parameters. Final uncertainties of the reactivities are estimated by means of the error propagation law using the level-wise uncertainties. A preliminary evaluation gives an uncertainty of about 4% in the Doppler reactivity worth at a temperature of 728 K for a large sodium-cooled fast breeder reactor, due to the uncertainties of the resolved resonance parameters. (author)
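The error propagation law via sensitivity coefficients used above is commonly written as the "sandwich rule" var = SᵀVS. The sensitivities and covariance entries below are illustrative placeholders, not JENDL-3.2 data.

```python
# "Sandwich rule" sketch: the relative variance of a reactivity response is
# obtained from the sensitivity vector S and the parameter covariance
# matrix V as var = S^T V S.

def sandwich(S, V):
    n = len(S)
    return sum(S[i] * V[i][j] * S[j] for i in range(n) for j in range(n))

# sensitivities of k_eff to three resonance parameters (relative units)
S = [0.020, -0.015, 0.008]
# relative covariance matrix of those parameters (symmetric)
V = [[0.04, 0.01, 0.00],
     [0.01, 0.09, 0.00],
     [0.00, 0.00, 0.16]]

rel_var = sandwich(S, V)
print("relative uncertainty of k_eff:", rel_var ** 0.5)
```

The off-diagonal covariance term shows why correlated parameter uncertainties cannot simply be summed in quadrature: here it partially cancels the diagonal contributions.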
Uncertainty Relations for General Unitary Operators
Bagchi, Shrobona; Pati, Arun Kumar
2015-01-01
We derive several uncertainty relations for two arbitrary unitary operators acting on physical states of any Hilbert space (finite or infinite dimensional). We show that our bounds are tighter in various cases than the ones existing in the current literature. With regard to the minimum uncertainty state for the cases of both the finite as well as the infinite dimensional unitary operators, we derive the minimum uncertainty state equation by the analytic method. As an application of this, we f...
Bayesian Mars for uncertainty quantification in stochastic transport problems
International Nuclear Information System (INIS)
We present a method for estimating solutions to partial differential equations with uncertain parameters using a modification of the Bayesian Multivariate Adaptive Regression Splines (BMARS) emulator. The BMARS algorithm uses Markov chain Monte Carlo (MCMC) to construct a basis function composed of polynomial spline functions, for which derivatives and integrals are straightforward to compute. We use these calculations and a modification of the curve-fitting BMARS algorithm to search for a basis function (response surface) which, in combination with its derivatives/integrals, satisfies a governing differential equation and specified boundary condition. We further show that this fit can be improved by enforcing a conservation or other physics-based constraint. Our results indicate that estimates to solutions of simple first order partial differential equations (without uncertainty) can be efficiently computed with very little regression error. We then extend the method to estimate uncertainties in the solution to a pure absorber transport problem in a medium with uncertain cross-section. We describe and compare two strategies for propagating the uncertain cross-section through the BMARS algorithm; the results from each method are in close comparison with analytic results. We discuss the scalability of the algorithm to parallel architectures and the applicability of the two strategies to larger problems with more degrees of uncertainty. (author)
Constrained quantities in uncertainty quantification. Ambiguity and tips to follow
International Nuclear Information System (INIS)
The nuclear community relies heavily on computer codes and numerical tools. The results of such computations can only be trusted if they are augmented by proper sensitivity and uncertainty (S and U) studies. This paper presents some aspects of S and U analysis when constrained quantities are involved, such as the fission spectrum or the isotopic distribution of elements. A consistent theory is given for the derivation and interpretation of constrained sensitivities as well as the corresponding covariance matrix normalization procedures. It is shown that if the covariance matrix violates the “generic zero column and row sum” condition, normalizing it is equivalent to constraining the sensitivities, but since both can be done in many ways different sensitivity coefficients and uncertainties can be derived. This makes results ambiguous, underlining the need for proper covariance data. It is also highlighted that the use of constrained sensitivity coefficients derived with a constraining procedure that is not idempotent can lead to biased results in uncertainty propagation. The presented theory is demonstrated on an analytical case and a numerical example involving the fission spectrum, both confirming the main conclusions of this research. (author)
Uncertainties in land use data
Directory of Open Access Journals (Sweden)
G. Castilla
2007-11-01
This paper deals with the description and assessment of uncertainties in land use data derived from Remote Sensing observations, in the context of hydrological studies. Land use is a categorical regionalised variable reporting the main socio-economic role each location has, where the role is inferred from the pattern of occupation of land. The properties of this pattern that are relevant to hydrological processes have to be known with some accuracy in order to obtain reliable results; hence, uncertainty in land use data may lead to uncertainty in model predictions. There are two main uncertainties surrounding land use data, positional and categorical. The first one is briefly addressed and the second one is explored in more depth, including the factors that influence it. We (1) argue that the conventional method used to assess categorical uncertainty, the confusion matrix, is insufficient to propagate uncertainty through distributed hydrologic models; (2) report some alternative methods to tackle this and other insufficiencies; (3) stress the role of metadata as a more reliable means to assess the degree of distrust with which these data should be used; and (4) suggest some practical recommendations.
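For reference, the quantities conventionally read off a confusion matrix (the very summary the authors argue is insufficient for propagation) are the overall, user's, and producer's accuracies. The matrix below is hypothetical.

```python
# Accuracy measures from a (hypothetical) land-use confusion matrix:
# rows = mapped class, columns = reference class.
matrix = [[50, 3, 2],    # urban
          [4, 60, 6],    # agriculture
          [2, 7, 67]]    # forest

total = sum(sum(row) for row in matrix)
overall = sum(matrix[i][i] for i in range(3)) / total
users = [matrix[i][i] / sum(matrix[i]) for i in range(3)]              # by row
producers = [matrix[i][i] / sum(row[i] for row in matrix) for i in range(3)]
print(overall, users, producers)
```

These are global, per-class summaries; they say nothing about where in the map the misclassifications sit, which is why they cannot drive a spatially distributed uncertainty propagation.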
International Nuclear Information System (INIS)
The uncertainties of calculations of loss-of-feedwater transients at Davis-Besse Unit 1 were determined to address concerns of the US Nuclear Regulatory Commission relative to the effectiveness of feed and bleed cooling. Davis-Besse Unit 1 is a pressurized water reactor of the raised-loop Babcock and Wilcox design. A detailed, quality-assured RELAP5/MOD2 model of Davis-Besse was developed at the Idaho National Engineering Laboratory. The model was used to perform an analysis of the loss-of-feedwater transient that occurred at Davis-Besse on June 9, 1985. A loss-of-feedwater transient followed by feed and bleed cooling was also calculated. The evaluation of uncertainty was based on the comparisons of calculations and data, comparisons of different calculations of the same transient, sensitivity calculations, and the propagation of the estimated uncertainty in initial and boundary conditions to the final calculated results
Assonov, Sergey; Groening, Manfred; Fajgelj, Ales
2016-04-01
The worldwide metrological comparability of stable isotope measurement results is presently achieved by linking them to the conventional delta scales. Delta scales are realized by scale-defining reference materials, most of them supplied by the IAEA (examples are VSMOW2 & SLAP2). In fact, these reference materials are artefacts, characterized by a network of laboratories using the current best measurement practice. In reality any measurement result is linked to the scale via the reference materials (RMs) in use. Any RM is traceable to the highest-level RMs which define the scale; this is valid not only for international RMs (mostly secondary RMs like IAEA-CH-7, NBS22) but for any lab standard calibrated by users. This is the basis of measurement traceability. The traceability scheme allows addressing both the comparability and the long-term compatibility of measurement results. Correspondingly, the uncertainty of any measurement result has to be propagated up to the scale level. The uncertainty evaluation should include (i) the uncertainty of the RMs in use; (ii) the analytical uncertainty of the materials used in calibration runs performed at the user laboratory; (iii) the reproducibility of results obtained on sample material; (iv) the uncertainty of corrections applied (memory, drift, etc). Besides these, there may be other uncertainty components to be considered. The presentation will illustrate the metrological concepts involved (comparability, traceability, etc) and give a generic scheme for the uncertainty evaluation.
Uncertainty quantification theory, implementation, and applications
Smith, Ralph C
2014-01-01
The field of uncertainty quantification is evolving rapidly because of increasing emphasis on models that require quantified uncertainties for large-scale applications, novel algorithm development, and new computational architectures that facilitate implementation of these algorithms. Uncertainty Quantification: Theory, Implementation, and Applications provides readers with the basic concepts, theory, and algorithms necessary to quantify input and response uncertainties for simulation models arising in a broad range of disciplines. The book begins with a detailed discussion of applications where uncertainty quantification is critical for both scientific understanding and policy. It then covers concepts from probability and statistics, parameter selection techniques, frequentist and Bayesian model calibration, propagation of uncertainties, quantification of model discrepancy, surrogate model construction, and local and global sensitivity analysis. The author maintains a complementary web page where readers ca...
Uncertainty budget for Ion Beam Analysis
International Nuclear Information System (INIS)
In the 'Guide to the expression of uncertainty in measurement' (GUM), guidelines for the evaluation of the total uncertainty of an experiment are outlined in a manner to be adopted in all types of measurements. So far, this has not been implemented very strictly in the Ion Beam Analysis (IBA) community. In this paper, the situation for a fairly typical IBA measurement is reviewed, and factors contributing to the total uncertainty are analysed and discussed. The propagation of uncertainties is also discussed. How the result of a measurement should be presented, with an appropriate coverage factor yielding a suitable uncertainty interval, is also discussed. An example of an uncertainty budget for an analysis of quaternary bronzes is given as illustration.
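A GUM-style budget combines standard uncertainty components in quadrature into a combined standard uncertainty and applies a coverage factor for the expanded uncertainty. The component names and values below are invented for illustration, not taken from the bronze analysis.

```python
import math

# GUM-style uncertainty budget (illustrative components for an IBA yield):
# each entry is (name, relative standard uncertainty).
budget = [("counting statistics", 0.020),
          ("beam current integration", 0.015),
          ("stopping power data", 0.035),
          ("detector solid angle", 0.010)]

u_c = math.sqrt(sum(u ** 2 for _, u in budget))   # combined, in quadrature
k = 2.0                                           # coverage factor (~95 %)
U = k * u_c                                       # expanded uncertainty
print(f"u_c = {u_c:.4f}, U(k=2) = {U:.4f}")
```

Listing the components individually, as the budget table does, makes it immediately visible which term (here the stopping power data) dominates the combined uncertainty.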
Quantifying Mixed Uncertainties in Cyber Attacker Payoffs
Energy Technology Data Exchange (ETDEWEB)
Chatterjee, Samrat; Halappanavar, Mahantesh; Tipireddy, Ramakrishna; Oster, Matthew R.; Saha, Sudip
2015-04-15
Representation and propagation of uncertainty in cyber attacker payoffs is a key aspect of security games. Past research has primarily focused on representing the defender’s beliefs about attacker payoffs as point utility estimates. More recently, within the physical security domain, attacker payoff uncertainties have been represented as Uniform and Gaussian probability distributions, and intervals. Within cyber-settings, continuous probability distributions may still be appropriate for addressing statistical (aleatory) uncertainties where the defender may assume that the attacker’s payoffs differ over time. However, systematic (epistemic) uncertainties may exist, where the defender may not have sufficient knowledge or there is insufficient information about the attacker’s payoff generation mechanism. Such epistemic uncertainties are more suitably represented as probability boxes with intervals. In this study, we explore the mathematical treatment of such mixed payoff uncertainties.
International Nuclear Information System (INIS)
This paper presents continuous and discrete equations for the propagation of uncertainty applied to inverse kinetics and shows that the uncertainty of a measurement can be minimized by the proper choice of frequency from the perturbing reactivity waveform. (authors)
Finite Frames and Graph Theoretic Uncertainty Principles
Koprowski, Paul J.
The subject of analytical uncertainty principles is an important field within harmonic analysis, quantum physics, and electrical engineering. We explore uncertainty principles in the context of the graph Fourier transform, and we prove additive results analogous to the multiplicative version of the classical uncertainty principle. We establish additive uncertainty principles for finite Parseval frames. Lastly, we examine the feasibility region of simultaneous values of the norms of a graph differential operator acting on a function f ∈ l2(G) and its graph Fourier transform.
Uncertainties in physics calculations for gas cooled reactor cores
International Nuclear Information System (INIS)
The meeting was attended by 29 participants from Austria, China, France, Germany, Japan, Switzerland, the Union of Soviet Socialist Republics and the United States of America and was subdivided into four technical sessions: Analytical methods, comparison of predictions with results from existing HTGRs, uncertainty evaluations (3 papers); Analytical methods, predictions of performance of future HTGRs, uncertainty evaluations - part 1 (5 papers); Analytical methods, predictions of performance of future HTGRs, uncertainty evaluations - part 2 (6 papers); Critical experiments - planning and results, uncertainty evaluations (5 papers). The participants presented 19 papers on behalf of their countries or organizations. A separate abstract was prepared for each of these papers. Refs, figs and tabs
Using Nuclear Theory, Data and Uncertainties in Monte Carlo Transport Applications
Energy Technology Data Exchange (ETDEWEB)
Rising, Michael Evan [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2015-11-03
These are slides for a presentation on using nuclear theory, data and uncertainties in Monte Carlo transport applications. The following topics are covered: nuclear data (experimental data versus theoretical models, data evaluation and uncertainty quantification), fission multiplicity models (fixed source applications, criticality calculations), uncertainties and their impact (integral quantities, sensitivity analysis, uncertainty propagation).
Propagation Mechanism Analysis Before the Break Point Inside Tunnels
Guan, Ke; Zhong, Zhangdui; Bo, Ai; Briso Rodriguez, Cesar
2011-01-01
There is no unanimous consensus yet on the propagation mechanism before the break point inside tunnels. Some deem that the propagation mechanism follows the free-space model, while others argue that it should be described by the multimode waveguide model. First, this paper analyzes the propagation loss under the two mechanisms. Then, by jointly applying propagation theory and three-dimensional solid geometry, a generic analytical model for the boundary between the free-space mechanism and the...
Measurement uncertainty in pharmaceutical analysis and its application
Marcus Augusto Lyrio Traple; Alessandro Morais Saviano; Fabiane Lacerda Francisco; Felipe Rebello Lourenço
2014-01-01
The measurement uncertainty provides complete information about an analytical result. This is very important because several decisions of compliance or non-compliance are based on analytical results in pharmaceutical industries. The aim of this work was to evaluate and discuss the estimation of uncertainty in pharmaceutical analysis. The uncertainty is a useful tool in the assessment of compliance or non-compliance of in-process and final pharmaceutical products as well as in the assessment o...
Quantifying uncertainty in LCA-modelling of waste management systems
DEFF Research Database (Denmark)
Clavreul, Julie; Guyonnet, D.; Christensen, Thomas Højlund
2012-01-01
Uncertainty analysis in LCA studies has been subject to major progress over the last years. In the context of waste management, various methods have been implemented but a systematic method for uncertainty analysis of waste-LCA studies is lacking. The objective of this paper is (1) to present...... the sources of uncertainty specifically inherent to waste-LCA studies, (2) to select and apply several methods for uncertainty analysis and (3) to develop a general framework for quantitative uncertainty assessment of LCA of waste management systems. The suggested method is a sequence of four steps combining...... the selected methods: (Step 1) a sensitivity analysis evaluating the sensitivities of the results with respect to the input uncertainties, (Step 2) an uncertainty propagation providing appropriate tools for representing uncertainties and calculating the overall uncertainty of the model results, (Step 3...
Experimental uncertainty estimation and statistics for data having interval uncertainty.
Energy Technology Data Exchange (ETDEWEB)
Kreinovich, Vladik (Applied Biomathematics, Setauket, New York); Oberkampf, William Louis (Applied Biomathematics, Setauket, New York); Ginzburg, Lev (Applied Biomathematics, Setauket, New York); Ferson, Scott (Applied Biomathematics, Setauket, New York); Hajagos, Janos (Applied Biomathematics, Setauket, New York)
2007-05-01
This report addresses the characterization of measurements that include epistemic uncertainties in the form of intervals. It reviews the application of basic descriptive statistics to data sets which contain intervals rather than exclusively point estimates. It describes algorithms to compute various means, the median and other percentiles, variance, interquartile range, moments, confidence limits, and other important statistics and summarizes the computability of these statistics as a function of sample size and characteristics of the intervals in the data (degree of overlap, size and regularity of widths, etc.). It also reviews the prospects for analyzing such data sets with the methods of inferential statistics such as outlier detection and regressions. The report explores the tradeoff between measurement precision and sample size in statistical results that are sensitive to both. It also argues that an approach based on interval statistics could be a reasonable alternative to current standard methods for evaluating, expressing and propagating measurement uncertainties.
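The interval versions of the basic statistics discussed in this report can be sketched briefly. The interval mean propagates endpoint-wise and is exact; the maximum sample variance over the box of intervals is attained at endpoint combinations (variance is convex in each coordinate), though enumerating them is exponential in the sample size, consistent with the report's point that computability depends on sample size and interval structure. This is an illustrative sketch, not the report's algorithms:

```python
from itertools import product

def interval_mean(intervals):
    """Mean of interval data: averages propagate endpoint-wise exactly."""
    n = len(intervals)
    return (sum(lo for lo, _ in intervals) / n,
            sum(hi for _, hi in intervals) / n)

def max_sample_variance(intervals):
    """Largest sample variance consistent with the intervals. Because the
    variance is convex in each x_i, its maximum over the box is attained at
    a vertex, so endpoint enumeration (2^n cases) is exact. The minimum can
    fall in the interior of the box and needs a different algorithm."""
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    return max(var(v) for v in product(*intervals))

data = [(1.0, 1.2), (2.1, 2.3), (2.9, 3.4)]
m_lo, m_hi = interval_mean(data)
v_max = max_sample_variance(data)
```

The asymmetry between the two statistics (mean trivially computable, maximum variance exponential by brute force and NP-hard in general) is the kind of tradeoff the report catalogues.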
Uncertainty and global climate change research
Energy Technology Data Exchange (ETDEWEB)
Tonn, B.E. [Oak Ridge National Lab., TN (United States); Weiher, R. [National Oceanic and Atmospheric Administration, Boulder, CO (United States)
1994-06-01
The Workshop on Uncertainty and Global Climate Change Research was held March 22-23, 1994, in Knoxville, Tennessee. This report summarizes the results and recommendations of the workshop. The purpose of the workshop was to examine in depth the concept of uncertainty. From an analytical point of view, uncertainty is a central feature of global climate science, economics and decision making. The magnitude and complexity of uncertainty surrounding global climate change has made it quite difficult to answer even the most simple and important of questions: whether potentially costly action is required now to ameliorate adverse consequences of global climate change, or whether delay is warranted to gain better information to reduce uncertainties. A major conclusion of the workshop is that multidisciplinary integrated assessments using decision analytic techniques as a foundation are key to addressing global change policy concerns. First, uncertainty must be dealt with explicitly and rigorously, since it is and will continue to be a key feature of analysis and recommendations on policy questions for years to come. Second, key policy questions and variables need to be explicitly identified and prioritized, and their uncertainty characterized, to guide the entire scientific, modeling, and policy analysis process. Multidisciplinary integrated assessment techniques and value-of-information methodologies are best suited for this task. In terms of timeliness and relevance of developing and applying decision analytic techniques, the global change research and policy communities are moving rapidly toward integrated approaches to research design and policy analysis.
Bruce, William J; Maxwell, E A; Sneddon, I N
1963-01-01
Analytic Trigonometry details the fundamental concepts and underlying principle of analytic geometry. The title aims to address the shortcomings in the instruction of trigonometry by considering basic theories of learning and pedagogy. The text first covers the essential elements from elementary algebra, plane geometry, and analytic geometry. Next, the selection tackles the trigonometric functions of angles in general, basic identities, and solutions of equations. The text also deals with the trigonometric functions of real numbers. The fifth chapter details the inverse trigonometric functions
International Nuclear Information System (INIS)
Highlights: ► Application of an analytical unavailability model integrating T and M, ageing, and test strategy. ► Ageing data uncertainty propagation on system level assessed via Monte Carlo simulation. ► Uncertainty impact is growing with the extension of the surveillance test interval. ► Calculated system unavailability dependence on two different sensitivity study ageing databases. ► System unavailability sensitivity insights regarding specific groups of BEs as test intervals extend. - Abstract: The interest in operational lifetime extension of the existing nuclear power plants is growing. Consequently, plant life management programs, considering safety components ageing, are being developed and employed. Ageing represents a gradual degradation of the physical properties and functional performance of different components, consequently implying their reduced availability. Analyses made in the direction of nuclear power plant lifetime extension are based upon component ageing management programs. On the other hand, the large uncertainties of the ageing parameters as well as the uncertainties associated with most of the reliability data collections are widely acknowledged. This paper addresses the uncertainty and sensitivity analyses conducted utilizing a previously developed age-dependent unavailability model, integrating effects of test and maintenance activities, for a selected stand-by safety system in a nuclear power plant. The most important problem is the lack of data concerning the effects of ageing, as well as the relatively high uncertainty associated with these data, which would correspond to more detailed modelling of ageing. A standard Monte Carlo simulation was coded for the purpose of this paper and utilized in the process of assessment of the component ageing parameters' uncertainty propagation on system level. The obtained results from the uncertainty analysis indicate the extent to which the uncertainty of the selected
Estimating uncertainties in complex joint inverse problems
Afonso, Juan Carlos
2016-04-01
to the forward and statistical models, I will also address other uncertainties associated with data and uncertainty propagation.
Radio Channel Modelling Using Stochastic Propagation Graphs
DEFF Research Database (Denmark)
Pedersen, Troels; Fleury, Bernard Henri
2007-01-01
In this contribution the radio channel model proposed in [1] is extended to include multiple transmitters and receivers. The propagation environment is modelled using random graphs where vertices of a graph represent scatterers and edges model the wave propagation between scatterers. Furthermore......, we develop a closed form analytical expression for the transfer matrix of the propagation graph. It is shown by simulation that impulse response and the delay-power spectrum of the graph exhibit exponentially decaying power as a result of the recursive scattering structure of the graph. The impulse...
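The closed-form transfer matrix mentioned here comes from summing the recursive scattering series: if the scatterer interaction matrix has spectral radius below one (each re-scattering loses power), the infinite sum of bounce orders converges to a matrix inverse. A minimal pure-Python sketch with a hypothetical two-scatterer gain matrix, not the paper's channel model:

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matadd(A, B):
    return [[A[i][j] + B[i][j] for j in range(len(A))] for i in range(len(A))]

def eye(n):
    return [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]

def neumann(A, terms):
    """Partial sum I + A + A^2 + ... approximating (I - A)^{-1}; valid when
    the spectral radius of A is below 1 (lossy recursive scattering)."""
    total, power = eye(len(A)), eye(len(A))
    for _ in range(terms):
        power = matmul(power, A)
        total = matadd(total, power)
    return total

# Toy 2-scatterer interaction matrix with sub-unity gains (hypothetical)
A = [[0.0, 0.5], [0.3, 0.0]]
H = neumann(A, 60)   # converges to (I - A)^{-1} = (1/0.85) * [[1, 0.5], [0.3, 1]]
```

The exponentially decaying power of higher bounce orders in the series is what produces the exponentially decaying impulse response noted in the abstract.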
Intense electron beam propagation into vacuum
International Nuclear Information System (INIS)
The authors have performed experimental and theoretical studies of the propagation of an intense electron beam (1 MeV, 27 kA, 30 ns) into a long evacuated drift tube. In one case the beam propagates because an applied axial magnetic field immerses the entire system. In the second case a localized source of ions for charge neutralization enables the beam to propagate. In the case of a magnetically confined beam, experimental results for current propagation as a function of uniform applied magnetic field (0-1.2 Tesla) are presented for various drift tube diameters, cathode geometries, and anode aperture sizes. An analytic model of laminar beam flow is presented which predicts the space charge limited current of a solid intense relativistic electron beam (IREB) propagating in a grounded drift tube as a function of tube and diode sizes and applied magnetic field. Comparisons between the experimental and theoretical results are discussed.
Premixed flame propagation in vertical tubes
Kazakov, Kirill A
2015-01-01
Analytical treatment of premixed flame propagation in vertical tubes with smooth walls is given. Using the on-shell flame description, equations describing quasi-steady flame with a small but finite front thickness are obtained and solved numerically. It is found that near the limits of inflammability, solutions describing upward flame propagation come in pairs having close propagation speeds, and that the effect of gravity is to reverse the burnt gas velocity profile generated by the flame. On the basis of these results, a theory of partial flame propagation driven by the gravitational field is developed. A complete explanation is given of the intricate observed behavior of limit flames, including dependence of the inflammability range on the size of the combustion domain, the large distances of partial flame propagation, and the progression of flame extinction. The role of the finite front-thickness effects is discussed in detail. Also, various mechanisms governing flame acceleration in smooth tubes are ide...
Procedures for uncertainty and sensitivity analysis in repository performance assessment
International Nuclear Information System (INIS)
The objective of the project was mainly a literature study of available methods for the treatment of parameter uncertainty propagation and sensitivity aspects in complete models such as those concerning geologic disposal of radioactive waste. The study, which has run in parallel with the development of a code package (PROPER) for computer-assisted analysis of function, also aims at the choice of accurate, cost-effective methods for uncertainty and sensitivity analysis. Such a choice depends on several factors, such as the number of input parameters, the capacity of the model and the computer resources required to use the model. Two basic approaches are addressed in the report. In one of these the model of interest is directly simulated by an efficient sampling technique to generate an output distribution. Applying the other basic method, the model is replaced by an approximating analytical response surface, which is then used in the sampling phase or in moment matching to generate the output distribution. Both approaches are illustrated by simple examples in the report. (author)
Methodology for characterizing modeling and discretization uncertainties in computational simulation
Energy Technology Data Exchange (ETDEWEB)
ALVIN,KENNETH F.; OBERKAMPF,WILLIAM L.; RUTHERFORD,BRIAN M.; DIEGERT,KATHLEEN V.
2000-03-01
This research effort focuses on methodology for quantifying the effects of model uncertainty and discretization error on computational modeling and simulation. The work is directed towards developing methodologies which treat model form assumptions within an overall framework for uncertainty quantification, for the purpose of developing estimates of total prediction uncertainty. The present effort consists of work in three areas: framework development for sources of uncertainty and error in the modeling and simulation process which impact model structure; model uncertainty assessment and propagation through Bayesian inference methods; and discretization error estimation within the context of non-deterministic analysis.
NLO error propagation exercise: statistical results
International Nuclear Information System (INIS)
Error propagation is the extrapolation and cumulation of uncertainty (variance) above total amounts of special nuclear material, for example, uranium or 235U, that are present in a defined location at a given time. The uncertainty results from the inevitable inexactness of individual measurements of weight, uranium concentration, 235U enrichment, etc. The extrapolated and cumulated uncertainty leads directly to quantified limits of error on inventory differences (LEIDs) for such material. The NLO error propagation exercise was planned as a field demonstration of the utilization of statistical error propagation methodology at the Feed Materials Production Center in Fernald, Ohio from April 1 to July 1, 1983 in a single material balance area formed specially for the exercise. Major elements of the error propagation methodology were: variance approximation by Taylor series expansion; variance cumulation by uncorrelated primary error sources as suggested by Jaech; random effects ANOVA model estimation of variance effects (systematic error); provision for inclusion of process variance in addition to measurement variance; and exclusion of static material. The methodology was applied to material balance area transactions from the indicated time period through a FORTRAN computer code developed specifically for this purpose on the NLO HP-3000 computer. This paper contains a complete description of the error propagation methodology and a full summary of the numerical results of applying the methodology in the field demonstration. The error propagation LEIDs did encompass the actual uranium and 235U inventory differences. Further, one can see that error propagation actually provides guidance for reducing inventory differences and LEIDs in future time periods.
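The Taylor-series variance approximation named in this abstract can be sketched for a multiplicative model of the kind arising in material balances (net weight times uranium concentration times 235U enrichment). All numbers below are hypothetical and illustrative, not from the exercise:

```python
import math

def taylor_variance(f, x, sigmas, h=1e-6):
    """First-order Taylor-series propagation for uncorrelated inputs:
    var(f) ~ sum_i (df/dx_i)^2 * var_i, with partial derivatives
    estimated by central finite differences."""
    variance = 0.0
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        grad = (f(xp) - f(xm)) / (2.0 * h)
        variance += (grad * sigmas[i]) ** 2
    return variance

# Hypothetical item: 235U mass = net weight * U fraction * enrichment
f = lambda v: v[0] * v[1] * v[2]
x = [100.0, 0.90, 0.0071]       # kg, fraction U, fraction 235U
s = [0.05, 0.002, 0.00002]      # one-sigma measurement uncertainties
var = taylor_variance(f, x, s)
sigma = math.sqrt(var)          # standard uncertainty of the 235U mass
```

For this multilinear model the central differences are exact, so the result matches the closed-form sum (g_i sigma_i)^2 term by term; the dominant contributor (here enrichment) is the one an LEID-reduction effort would target first.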
Angular Operators Violating the Heisenberg Uncertainty Principle
Pereira, Tiago
2008-01-01
The description of a quantum system in terms of angle variables may violate the Heisenberg uncertainty principle. The familiar case is the azimuthal angle $\phi$ and its canonical momentum $L_z$. Although this problem was foreseen almost a century ago, to the present day there are no criteria to precisely characterize the violation. In this paper, we present a theorem which provides necessary and sufficient conditions for the violation of the Heisenberg uncertainty principle. We illustrate our results with analytical examples.
Pulse Wave Propagation in the Arterial Tree
van de Vosse, Frans N.; Stergiopulos, Nikos
2011-01-01
The beating heart creates blood pressure and flow pulsations that propagate as waves through the arterial tree and are reflected at transitions in arterial geometry and elasticity. Waves carry information about the matter in which they propagate. Therefore, modeling of arterial wave propagation extends our knowledge about the functioning of the cardiovascular system and provides a means to diagnose disorders and predict the outcome of medical interventions. In this review we focus on the physical and mathematical modeling of pulse wave propagation, based on general fluid dynamical principles. In addition we present potential applications in cardiovascular research and clinical practice. Models of short- and long-term adaptation of the arterial system and methods that deal with uncertainties in personalized model parameters and boundary conditions are briefly discussed, as they are believed to be major topics for further study and will boost the significance of arterial pulse wave modeling even more.
A surrogate-based uncertainty quantification with quantifiable errors
Energy Technology Data Exchange (ETDEWEB)
Bang, Y.; Abdel-Khalik, H. S. [North Carolina State Univ., Raleigh, NC 27695 (United States)
2012-07-01
Surrogate models are often employed to reduce the computational cost required to complete uncertainty quantification, where one is interested in propagating input parameters uncertainties throughout a complex engineering model to estimate responses uncertainties. An improved surrogate construction approach is introduced here which places a premium on reducing the associated computational cost. Unlike existing methods where the surrogate is constructed first, then employed to propagate uncertainties, the new approach combines both sensitivity and uncertainty information to render further reduction in the computational cost. Mathematically, the reduction is described by a range finding algorithm that identifies a subspace in the parameters space, whereby parameters uncertainties orthogonal to the subspace contribute negligible amount to the propagated uncertainties. Moreover, the error resulting from the reduction can be upper-bounded. The new approach is demonstrated using a realistic nuclear assembly model and compared to existing methods in terms of computational cost and accuracy of uncertainties. Although we believe the algorithm is general, it will be applied here for linear-based surrogates and Gaussian parameters uncertainties. The generalization to nonlinear models will be detailed in a separate article. (authors)
A surrogate-based uncertainty quantification with quantifiable errors
International Nuclear Information System (INIS)
Surrogate models are often employed to reduce the computational cost required to complete uncertainty quantification, where one is interested in propagating input parameters uncertainties throughout a complex engineering model to estimate responses uncertainties. An improved surrogate construction approach is introduced here which places a premium on reducing the associated computational cost. Unlike existing methods where the surrogate is constructed first, then employed to propagate uncertainties, the new approach combines both sensitivity and uncertainty information to render further reduction in the computational cost. Mathematically, the reduction is described by a range finding algorithm that identifies a subspace in the parameters space, whereby parameters uncertainties orthogonal to the subspace contribute negligible amount to the propagated uncertainties. Moreover, the error resulting from the reduction can be upper-bounded. The new approach is demonstrated using a realistic nuclear assembly model and compared to existing methods in terms of computational cost and accuracy of uncertainties. Although we believe the algorithm is general, it will be applied here for linear-based surrogates and Gaussian parameters uncertainties. The generalization to nonlinear models will be detailed in a separate article. (authors)
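The subspace idea in this abstract can be illustrated in miniature: for a linear surrogate, parameter perturbations orthogonal to the sensitivity vector contribute nothing to the propagated uncertainty, so the effective parameter space collapses to the span of the sensitivities. The sketch below shows that property directly; it is not the authors' range-finding algorithm:

```python
import math

# Linear surrogate f(x) = g . x; g is an illustrative sensitivity vector.
g = [3.0, -1.0, 2.0]

def f(x):
    return sum(gi * xi for gi, xi in zip(g, x))

def propagated_sd(sigmas):
    """For independent inputs with std devs sigma_i, the response std dev of
    a linear model is sqrt(sum (g_i sigma_i)^2): only components along g matter."""
    return math.sqrt(sum((gi * si) ** 2 for gi, si in zip(g, sigmas)))

# A perturbation orthogonal to g leaves the response unchanged:
d = [1.0, 1.0, -1.0]                 # g . d = 3 - 1 - 2 = 0
x0 = [0.2, 0.5, -0.1]
delta = f([x0[i] + d[i] for i in range(3)]) - f(x0)
```

In higher dimensions the same reasoning justifies discarding all parameter directions with negligible sensitivity, which is the source of the computational savings (and of the upper-boundable truncation error) described above.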
Federal Laboratory Consortium — NETL’s analytical laboratories in Pittsburgh, PA, and Albany, OR, give researchers access to the equipment they need to thoroughly study the properties of materials...
Flannelly, W. G.; Fabunmi, J. A.; Nagy, E. J.
1981-01-01
Analytical methods for combining flight acceleration and strain data with shake test mobility data to predict the effects of structural changes on flight vibrations and strains are presented. This integration of structural dynamic analysis with flight performance is referred to as analytical testing. The objective of this methodology is to analytically estimate the results of flight testing contemplated structural changes with minimum flying and change trials. The category of changes to the aircraft includes mass, stiffness, absorbers, isolators, and active suppressors. Examples of applying the analytical testing methodology using flight test and shake test data measured on an AH-1G helicopter are included. The techniques and procedures for vibration testing and modal analysis are also described.
French, N.; L. Gabrielli
2005-01-01
Valuation is often said to be “an art not a science” but this relates to the techniques employed to calculate value not to the underlying concept itself. Valuation is the process of estimating price in the market place. Yet, such an estimation will be affected by uncertainties. Uncertainty in the comparable information available; uncertainty in the current and future market conditions and uncertainty in the specific inputs for the subject property. These input uncertainties will translate int...
Onatski, Alexei; Williams, Noah
2003-01-01
Recently there has been much interest in studying monetary policy under model uncertainty. We develop methods to analyze different sources of uncertainty in one coherent structure useful for policy decisions. We show how to estimate the size of the uncertainty based on time series data, and incorporate this uncertainty in policy optimization. We propose two different approaches to modeling model uncertainty. The first is model error modeling, which imposes additional structure on the errors o...
High-level waste qualification: Managing uncertainty
International Nuclear Information System (INIS)
A vitrification facility is being developed by the U.S. Department of Energy (DOE) at the West Valley Demonstration Plant (WVDP) near Buffalo, New York, where approximately 300 canisters of high-level nuclear waste glass will be produced. To assure that the produced waste form is acceptable, uncertainty must be managed. Statistical issues arise due to sampling, waste variations, processing uncertainties, and analytical variations. This paper presents elements of a strategy to characterize and manage the uncertainties associated with demonstrating that an acceptable waste form product is achieved. Specific examples are provided within the context of statistical work performed by Pacific Northwest Laboratory (PNL)
Uncertainty Quantification and Propagation in Nuclear Density Functional Theory
Schunck, N.; McDonnell, J. D.; Higdon, D.; Sarich, J.; Wild, S. M.
2015-01-01
Nuclear density functional theory (DFT) is one of the main theoretical tools used to study the properties of heavy and superheavy elements, or to describe the structure of nuclei far from stability. While on-going efforts seek to better root nuclear DFT in the theory of nuclear forces [see Duguet et al., this issue], energy functionals remain semi-phenomenological constructions that depend on a set of parameters adjusted to experimental data in finite nuclei. In this paper, we review recent e...
Modelling delay propagation within an airport network
Pyrgiotis, N.; Malone, K.M.; Odoni, A.
2013-01-01
We describe an analytical queuing and network decomposition model developed to study the complex phenomenon of the propagation of delays within a large network of major airports. The Approximate Network Delays (AND) model computes the delays due to local congestion at individual airports and capture
Uncertainties in radiation flow experiments
Fryer, C. L.; Dodd, E.; Even, W.; Fontes, C. J.; Greeff, C.; Hungerford, A.; Kline, J.; Mussack, K.; Tregillis, I.; Workman, J. B.; Benstead, J.; Guymer, T. M.; Moore, A. S.; Morton, J.
2016-03-01
Although the fundamental physics behind radiation and matter flow is understood, many uncertainties remain in the exact behavior of macroscopic fluids in systems ranging from pure turbulence to coupled radiation hydrodynamics. Laboratory experiments play an important role in studying this physics to allow scientists to test their macroscopic models of these phenomena. However, because the fundamental physics is well understood, precision experiments are required to validate existing codes already tested by a suite of analytic, manufactured and convergence solutions. To conduct such high-precision experiments requires a detailed understanding of the experimental errors and the nature of their uncertainties on the observed diagnostics. In this paper, we study the uncertainties plaguing many radiation-flow experiments, focusing on those using a hohlraum (dynamic or laser-driven) source and a foam-density target. This study focuses on the effect these uncertainties have on the breakout time of the radiation front. We find that, even if the errors in the initial conditions and numerical methods are Gaussian, the errors in the breakout time are asymmetric, leading to a systematic bias in the observed data. We must understand these systematics to produce the high-precision experimental results needed to study this physics.
Uncertainty quantification in aerosol dynamics
International Nuclear Information System (INIS)
The influence of uncertainty in coagulation and deposition mechanisms, as well as in the initial conditions, on the solution of the aerosol dynamic equation has been assessed using polynomial chaos theory. In this way, large uncertainties can be incorporated into the equations and their propagation as a function of space and time studied. We base our calculations on the simplified point model dynamic equation which includes coagulation and deposition removal mechanisms. Results are given for the stochastic mean aerosol density as a function of time as well as its variance. The stochastic mean and deterministic mean are shown to differ and the associated uncertainty, in the form of a sensitivity coefficient, is obtained as a function of time. In addition, we obtain the probability density function of the aerosol density and show how this varies with time. In view of the generally uncertain nature of an accidental aerosol release in a nuclear reactor accident, the polynomial chaos method is a particularly useful technique as it allows one to deal with a very large spread of input data and examine the effect this has on the quantities of interest. Convergence matters are studied and numerical values given.
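Non-intrusive polynomial chaos of the kind described can be sketched on a scalar toy problem: exponential removal n(t) = n0 exp(-lambda t) with a Gaussian removal rate lambda. This stands in for the point-model dynamic equation and is not the paper's model; the quadrature nodes are standard tabulated Gauss-Hermite values.

```python
import math

# 5-point Gauss-Hermite rule (physicists' weight e^{-x^2}), tabulated values.
NODES = [-2.02018287, -0.95857246, 0.0, 0.95857246, 2.02018287]
WEIGHTS = [0.01995324, 0.39361932, 0.94530872, 0.39361932, 0.01995324]

def gauss_expect(f):
    """E[f(Z)] for Z ~ N(0,1) via Gauss-Hermite quadrature."""
    s = math.sqrt(2.0)
    return sum(w * f(s * x) for x, w in zip(NODES, WEIGHTS)) / math.sqrt(math.pi)

def hermite_he(k, z):
    """Probabilists' Hermite polynomials He_k, up to degree 3."""
    return [1.0, z, z * z - 1.0, z ** 3 - 3.0 * z][k]

def pce_decay(n0, mu, sigma, t, degree=3):
    """Non-intrusive Hermite PCE of n(t) = n0 exp(-lambda t), lambda = mu + sigma Z.
    Coefficients c_k = E[f He_k]/k!; mean = c_0, var = sum_{k>=1} k! c_k^2."""
    f = lambda z: n0 * math.exp(-(mu + sigma * z) * t)
    coeffs = [gauss_expect(lambda z, k=k: f(z) * hermite_he(k, z)) / math.factorial(k)
              for k in range(degree + 1)]
    mean = coeffs[0]
    var = sum(math.factorial(k) * coeffs[k] ** 2 for k in range(1, degree + 1))
    return mean, var

mean, var = pce_decay(1.0, 0.5, 0.1, 1.0)
```

For this lognormal-response toy case the closed forms E[n] = exp(-mu t + sigma^2 t^2 / 2) and the corresponding variance are available, so the spectral convergence the paper studies can be checked directly.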
On the diffusive propagation of warps in thin accretion discs
LODATO G; Price, D.
2010-01-01
In this paper we revisit the issue of the propagation of warps in thin and viscous accretion discs. In this regime warps are known to propagate diffusively, with a diffusion coefficient approximately inversely proportional to the disc viscosity. Previous numerical investigations of this problem (Lodato & Pringle 2007) did not find a good agreement between the numerical results and the predictions of the analytic theories of warp propagation, both in the linear and in the non-linear case. Here,...
DEFF Research Database (Denmark)
Seif El-Nasr, Magy; Drachen, Anders; Canossa, Alessandro
2013-01-01
Game Analytics has gained a tremendous amount of attention in game development and game research in recent years. The widespread adoption of data-driven business intelligence practices at operational, tactical and strategic levels in the game industry, combined with the integration of quantitative...... measures in user-oriented game research, has caused a paradigm shift. Historically, game development has not been data-driven, but this is changing as the benefits of adopting and adapting analytics to inform decision making across all levels of the industry are becoming generally known and accepted....
Spain, Barry; Ulam, S; Stark, M
1960-01-01
Analytical Quadrics focuses on the analytical geometry of three dimensions. The book first discusses the theory of the plane, sphere, cone, cylinder, straight line, and central quadrics in their standard forms. The idea of the plane at infinity is introduced through the homogeneous Cartesian coordinates and applied to the nature of the intersection of three planes and to the circular sections of quadrics. The text also focuses on paraboloid, including polar properties, center of a section, axes of plane section, and generators of hyperbolic paraboloid. The book also touches on homogeneous coordi
On the worst case uncertainty and its evaluation
International Nuclear Information System (INIS)
Fabbiano, L.; Giaquinto, N.; Savino, M.; Vacca, G.
2016-02-01
The paper is a review of the worst-case uncertainty (WCU) concept, neglected in the Guide to the Expression of Uncertainty in Measurement (GUM) but necessary for a correct uncertainty assessment in a number of practical cases involving distributions with compact support. First, it is highlighted that knowledge of the WCU is necessary to choose a sensible coverage factor, associated with a sensible coverage probability: the Maximum Acceptable Coverage Factor (MACF) is introduced as a convenient index to guide this choice. Second, propagation rules for the worst-case uncertainty are provided in matrix and scalar form. It is highlighted that when WCU propagation cannot be computed, the Monte Carlo approach is the only way to obtain a correct expanded uncertainty assessment, in contrast to what can be inferred from the GUM. Third, examples of applications of the formulae to ordinary instruments and measurements are given. An example taken from the GUM is also discussed, underlining some inconsistencies in it.
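A minimal sketch of the scalar worst-case propagation rule next to the GUM-style quadrature combination, for a linearized two-input model (the example values are illustrative, not from the paper):

```python
import math

def propagate(sensitivities, half_widths):
    """Worst-case and GUM-style standard uncertainty of a linearized model.

    sensitivities: partial derivatives df/dx_i at the working point
    half_widths:   half-widths a_i of the compactly supported input errors
    """
    # Worst case: every error at its bound, with the worst possible sign
    wcu = sum(abs(c) * a for c, a in zip(sensitivities, half_widths))
    # GUM route: a uniform error on [-a, a] has standard uncertainty a/sqrt(3);
    # combine the contributions in quadrature
    u_c = math.sqrt(sum((c * a) ** 2 / 3.0
                        for c, a in zip(sensitivities, half_widths)))
    return wcu, u_c

# Example: V = I * R with I = 1.00 A (+/- 0.01 A), R = 100 ohm (+/- 1 ohm)
wcu, u_c = propagate([100.0, 1.0], [0.01, 1.0])
u_exp = 2.0 * u_c   # conventional expanded uncertainty with k = 2
macf = wcu / u_c    # coverage factors above this would exceed the worst case
print(wcu)          # 2.0 -- no conceivable error can exceed this bound
```

A coverage factor larger than `macf` (about 2.45 here) would claim an expanded uncertainty beyond the worst-case bound, which is the kind of inconsistency the MACF index is meant to flag.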
Uncertainty estimation of ultrasonic thickness measurement
International Nuclear Information System (INIS)
The most important factor to consider when selecting an ultrasonic thickness measurement technique is its reliability. Only when the uncertainty of a measurement result is known can one judge whether the result is adequate for the intended purpose. The objective of this study is to model the ultrasonic thickness measurement function, to identify the most significant input uncertainty components, and to estimate the uncertainty of the ultrasonic thickness measurement results. We assumed that five error sources contribute significantly to the final error: calibration velocity, transit time, zero offset, measurement repeatability and resolution. By applying the law of propagation of uncertainty to the model function, a combined uncertainty of the ultrasonic thickness measurement was obtained. In this study the model function of ultrasonic thickness measurement was derived. Using this model, the estimate of the uncertainty of the final output result was found to be reliable. It was also found that the most significant input uncertainty components are calibration velocity, transit time linearity and zero offset. (author)
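The propagation law described above can be sketched for a pulse-echo model d = v(t - t0)/2; the numbers below are illustrative, not the study's data, and the partial derivatives are the sensitivity coefficients:

```python
import math

def thickness_uncertainty(v, t, t0, u_v, u_t, u_t0, u_rep, u_res):
    """Combined standard uncertainty of pulse-echo thickness d = v*(t - t0)/2."""
    d = v * (t - t0) / 2.0
    dd_dv = (t - t0) / 2.0   # sensitivity to calibration velocity
    dd_dt = v / 2.0          # sensitivity to transit time
    dd_dt0 = -v / 2.0        # sensitivity to zero offset
    u_d = math.sqrt((dd_dv * u_v) ** 2 + (dd_dt * u_t) ** 2
                    + (dd_dt0 * u_t0) ** 2 + u_rep ** 2 + u_res ** 2)
    return d, u_d

# Illustrative steel measurement: v = 5920 m/s, roughly a 10 mm wall
d, u_d = thickness_uncertainty(v=5920.0, t=3.40e-6, t0=2.0e-8,
                               u_v=30.0, u_t=2e-9, u_t0=2e-9,
                               u_rep=5e-6, u_res=5e-6)
```

With these inputs the velocity term dominates the budget, mirroring the study's conclusion that calibration velocity is among the largest contributors.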
Dealing with uncertainties - communication between disciplines
Overbeek, Bernadet; Bessembinder, Janette
2013-04-01
Climate adaptation research inevitably involves uncertainty issues - whether people are building a model, using climate scenarios, or evaluating policy processes. However, do they know which uncertainties are relevant in their field of work? And which uncertainties exist in the data from other disciplines that they use (e.g. climate data, land use, hydrological data) and how they propagate? From experiences in Dutch research programmes on climate change we know that disciplines often deal differently with uncertainties. This complicates communication between disciplines and also with the various users of data and information on climate change and its impacts. In October 2012 an autumn school was organized within the Knowledge for Climate Research Programme in the Netherlands with the central theme of dealing with and communicating about uncertainties in climate and socio-economic scenarios, in impact models and in the decision-making process. The lectures and discussions contributed to the development of a common frame of reference (CFR) for dealing with uncertainties. The common frame contains the following: 1. Common definitions (typology of uncertainties, robustness); 2. Common understanding (why do we consider it important to take uncertainties into account) and aspects on which we disagree (how far should scientists go in communication?); 3. Documents that are considered important by all participants; 4. Do's and don'ts in dealing with uncertainties and communicating about uncertainties (e.g. know your audience, check how your figures are interpreted); 5. Recommendations for further actions (e.g. need for a platform to exchange experiences). The CFR is meant to help researchers in climate adaptation to work together and communicate on climate change (better interaction between disciplines). It is also meant to help researchers to explain to others (e.g. decision makers) why and when researchers agree and when and why they disagree.
Ferroukhi, H.; Leray, O.; Hursin, M.; Vasiliev, A.; Perret, G.; Pautz, A.
2014-04-01
At the Paul Scherrer Institut (PSI), a methodology for nuclear data uncertainty propagation in CASMO-5M (C5M) assembly calculations is under development. This paper presents a preliminary application of this methodology to C5M decay heat calculations. Applying a stochastic sampling method, nuclear decay data uncertainties are first propagated for the cooling phase only. Thereafter, the uncertainty propagation is enlarged to gradually account for cross-section as well as fission yield uncertainties during the depletion phase. On that basis, assembly heat load uncertainties as well as the total uncertainty for the entire pool are quantified for cooling times up to one year. The relative contributions from the various types of nuclear data uncertainties are also estimated in this context.
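The stochastic sampling step can be sketched for a hypothetical two-nuclide heat load: draw correlated perturbations of the decay constants from their covariance via a Cholesky factor, re-evaluate the heat load, and read the spread. Everything below (model, inventories, covariance) is illustrative, not the C5M chain or JEFF/ENDF data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical heat load P(t) = sum_i lam_i * N_i * Q_i * exp(-lam_i * t)
lam = np.array([1.0e-3, 5.0e-5])   # decay constants, 1/s
N0 = np.array([1.0e20, 5.0e20])    # end-of-irradiation inventories
Q = np.array([1.2e-13, 0.8e-13])   # recoverable energy per decay, J

# Relative covariance of the decay constants (2% and 5% std, correlation 0.3)
rel_cov = np.array([[0.02 ** 2, 0.3 * 0.02 * 0.05],
                    [0.3 * 0.02 * 0.05, 0.05 ** 2]])
L = np.linalg.cholesky(rel_cov)

def decay_heat(lam_s, t):
    return float(np.sum(lam_s * N0 * Q * np.exp(-lam_s * t)))

t = 3600.0   # one hour of cooling
samples = np.array([decay_heat(lam * (1.0 + L @ rng.standard_normal(2)), t)
                    for _ in range(2000)])
rel_unc = samples.std() / samples.mean()   # relative heat-load uncertainty
```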
Pappas, Marjorie L.
1995-01-01
Discusses analytical searching, a process that enables searchers of electronic resources to develop a planned strategy by combining words or phrases with Boolean operators. Defines simple and complex searching, and describes search strategies developed with Boolean logic and truncation. Provides guidelines for teaching students analytical…
Cascading rainfall uncertainties into 2D inundation impact models
Souvignet, Maxime; de Almeida, Gustavo; Champion, Adrian; Garcia Pintado, Javier; Neal, Jeff; Freer, Jim; Cloke, Hannah; Odoni, Nick; Coxon, Gemma; Bates, Paul; Mason, David
2013-04-01
Existing precipitation products show differences in their spatial and temporal distribution, and several studies have presented how these differences influence the ability to predict hydrological responses. However, the atmospheric-hydrologic-hydraulic uncertainty cascade is seldom explored and, importantly, how input uncertainties propagate through this cascade is still poorly understood. Such a project requires a combination of modelling capabilities: rainfall forecasts, runoff generation predictions based on those rainfall forecasts, and hydraulic flood wave propagation based on the runoff predictions. Accounting for uncertainty in each component is important in decision making for issuing flood warnings, monitoring or planning. We suggest that a better understanding of uncertainties in inundation impact modelling must consider these differences in rainfall products. This will improve our understanding of the effect of input uncertainties on our predictive capability. In this paper, we propose to address this issue by exploring the effects of errors in rainfall on inundation predictive capacity within an uncertainty framework, i.e. testing inundation uncertainty against different comparable meteorological conditions (i.e. using different rainfall products). Our method cascades rainfall uncertainties into a lumped hydrologic model (FUSE) within the GLUE uncertainty framework. The resultant prediction uncertainties in discharge provide uncertain boundary conditions, which are cascaded into a simplified shallow water 2D hydraulic model (LISFLOOD-FP). Rainfall data captured by three different measurement techniques - rain gauges, gridded data and numerical weather prediction (NWP) models - are used to assess the combined input data and model parameter uncertainty. The study is performed in the Severn catchment over the period between June and July 2007, when a series of rainfall events caused record floods in the study area. Changes in flood area extent are compared and the uncertainty envelope is
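The GLUE step in such a cascade can be sketched with a toy single-store runoff model standing in for FUSE (model, threshold and data below are all illustrative, not the paper's setup): sample the parameter, keep the behavioural runs, and form uncertainty bounds on the simulated discharge:

```python
import random

random.seed(7)

rain = [2.0, 4.0, 5.0, 1.0, 0.5]   # rainfall series (toy units)
obs = [1.2, 2.9, 4.1, 2.2, 1.4]    # observed discharge for the same steps

def toy_runoff(rain, k):
    """Single linear-store runoff model: an illustrative stand-in for FUSE."""
    q, store = [], 0.0
    for r in rain:
        store += r
        out = k * store          # fraction k of storage drains each step
        store -= out
        q.append(out)
    return q

# GLUE: sample the parameter, keep 'behavioural' runs below an error threshold
behavioural = []
for _ in range(5000):
    k = random.uniform(0.1, 0.9)
    sim = toy_runoff(rain, k)
    sse = sum((s - o) ** 2 for s, o in zip(sim, obs))
    if sse < 2.0:
        behavioural.append((1.0 / sse, sim))   # (likelihood weight, series)

# Envelope of behavioural simulations; a full GLUE analysis would use the
# likelihood weights to form weighted quantiles instead of a min/max band
env_lo = [min(sim[t] for _, sim in behavioural) for t in range(len(rain))]
env_hi = [max(sim[t] for _, sim in behavioural) for t in range(len(rain))]
```

The envelope at each time step is the uncertain boundary condition that would be passed downstream to the hydraulic model.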
International Nuclear Information System (INIS)
We used the error propagation theory to calculate uncertainties in static formation temperature estimates in geothermal and petroleum wells from three widely used methods (line-source or Horner method; spherical and radial heat flow method; and cylindrical heat source method). Although these methods commonly use an ordinary least-squares linear regression model considered in this study, we also evaluated two variants of a weighted least-squares linear regression model for the actual relationship between the bottom-hole temperature and the corresponding time functions. Equations based on the error propagation theory were derived for estimating uncertainties in the time function of each analytical method. These uncertainties in conjunction with those on bottom-hole temperatures were used to estimate individual weighting factors required for applying the two variants of the weighted least-squares regression model. Standard deviations and 95% confidence limits of the intercept were calculated for both types of linear regressions. Applications showed that static formation temperatures computed with the spherical and radial heat flow method were generally greater (at the 95% confidence level) than those from the other two methods under study. When typical measurement errors of 0.25 h in time and 5 °C in bottom-hole temperature were assumed for the weighted least-squares model, the uncertainties in the estimated static formation temperatures were greater than those for the ordinary least-squares model. However, if these errors were smaller (about 1% in time and 0.5% in temperature measurements), the weighted least-squares linear regression model would generally provide smaller uncertainties for the estimated temperatures than the ordinary least-squares linear regression model. Therefore, the weighted model would be statistically correct and more appropriate for such applications. We also suggest that at least 30 precise and accurate BHT and time measurements along with
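The weighted least-squares variant can be sketched as follows: the intercept of a Horner-type plot estimates the static formation temperature, and error propagation gives its standard deviation. All numerical values below are illustrative, not from the study:

```python
import numpy as np

def weighted_line_fit(x, y, w):
    """Weighted least squares y = a + b*x with weights w_i = 1/u_i^2.

    Returns the intercept a (the static formation temperature), the slope b,
    and the standard deviation of a from error propagation."""
    W = np.sum(w)
    xbar = np.sum(w * x) / W
    ybar = np.sum(w * y) / W
    Sxx = np.sum(w * (x - xbar) ** 2)
    b = np.sum(w * (x - xbar) * (y - ybar)) / Sxx
    a = ybar - b * xbar
    u_a = np.sqrt(1.0 / W + xbar ** 2 / Sxx)   # Var(a) = 1/W + xbar^2/Sxx
    return a, b, u_a

# Horner-type plot: x -> 0 as shut-in time grows, so the intercept is the
# undisturbed temperature
tc = 4.0                                       # circulation time, h
dt = np.array([6.0, 12.0, 18.0, 24.0, 36.0])   # shut-in times, h
x = np.log((tc + dt) / dt)                     # Horner time function
y = np.array([92.0, 98.0, 101.0, 103.0, 106.0])  # bottom-hole temps, deg C
a, b, u_a = weighted_line_fit(x, y, np.ones(5) / 5.0 ** 2)
```

Here every bottom-hole temperature carries the same 5 °C error, so the weighted fit reduces to the ordinary one; unequal errors would change the weights and hence the intercept uncertainty.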
Groves, Curtis Edward
2014-01-01
Spacecraft thermal protection systems are at risk of being damaged due to airflow produced by Environmental Control Systems. There are inherent uncertainties and errors associated with using Computational Fluid Dynamics to predict the airflow field around a spacecraft from the Environmental Control System. This paper describes an approach to quantify the uncertainty in using Computational Fluid Dynamics to predict airflow speeds around an encapsulated spacecraft without the use of test data. Quantifying the uncertainty in analytical predictions is imperative to the success of any simulation-based product. The method could provide an alternative to the traditional "validation by test only" mentality. This method could be extended to other disciplines and has the potential to provide uncertainty for any numerical simulation, thus lowering the cost of performing these verifications while increasing the confidence in those predictions. Spacecraft requirements can include a maximum airflow speed to protect delicate instruments during ground processing. Computational Fluid Dynamics can be used to verify these requirements; however, the model must be validated by test data. This research includes the following three objectives and methods. Objective one is to develop, model, and perform a Computational Fluid Dynamics analysis of three (3) generic, non-proprietary, environmental control systems and spacecraft configurations. Several commercially available and open source solvers have the capability to model the turbulent, highly three-dimensional, incompressible flow regime. The proposed method uses FLUENT, STARCCM+, and OPENFOAM. Objective two is to perform an uncertainty analysis of the Computational Fluid Dynamics model using the methodology found in "Comprehensive Approach to Verification and Validation of Computational Fluid Dynamics Simulations". This method requires three separate grids and solutions, which quantify the error bars around Computational Fluid Dynamics
Pore Velocity Estimation Uncertainties
Devary, J. L.; Doctor, P. G.
1982-08-01
Geostatistical data analysis techniques were used to stochastically model the spatial variability of groundwater pore velocity in a potential waste repository site. Kriging algorithms were applied to Hanford Reservation data to estimate hydraulic conductivities, hydraulic head gradients, and pore velocities. A first-order Taylor series expansion for pore velocity was used to statistically combine hydraulic conductivity, hydraulic head gradient, and effective porosity surfaces and uncertainties to characterize the pore velocity uncertainty. Use of these techniques permits the estimation of pore velocity uncertainties when pore velocity measurements do not exist. Large pore velocity estimation uncertainties were found to be located in the region where the hydraulic head gradient relative uncertainty was maximal.
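For a pore velocity of the form v = K i / n (conductivity times head gradient over effective porosity), the first-order Taylor expansion used above reduces, for independent inputs, to adding relative variances; the numbers below are illustrative, not Hanford data:

```python
import math

def pore_velocity_uncertainty(K, i, n, sK, si, sn):
    """First-order Taylor propagation for v = K*i/n with independent inputs."""
    v = K * i / n
    # for a pure product/quotient the relative variances simply add
    rel_var = (sK / K) ** 2 + (si / i) ** 2 + (sn / n) ** 2
    return v, v * math.sqrt(rel_var)

# Illustrative values: conductivity K, head gradient i, effective porosity n
v, sv = pore_velocity_uncertainty(K=1.0e-4, i=0.002, n=0.2,
                                  sK=5.0e-5, si=0.001, sn=0.02)
```

With these numbers the head-gradient and conductivity terms dominate, echoing the finding that the largest velocity uncertainties sit where the gradient's relative uncertainty is maximal.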
International Nuclear Information System (INIS)
There is an increasing interest in computational reactor safety analysis to replace the conservative evaluation model calculations by best-estimate calculations supplemented by a quantitative uncertainty analysis. Sources of uncertainties include code models, initial and boundary conditions, plant state, fuel parameters, scaling, and the numerical solution algorithm. Measurements, which are the basis of the computer code models, show a scatter around a mean value. For example, data for two-phase pressure drop show a scatter range of about ±20-30%. A range of values should be taken into account for the respective model parameter instead of one discrete value only. The state of knowledge about all uncertain parameters is described by ranges and subjective probability distributions. Stochastic variability due to possible component failures of the reactor plant is not considered in an uncertainty analysis; the single-failure criterion is taken into account in a deterministic way. The aim of the uncertainty analysis is first to identify and quantify all potentially important uncertain parameters. Their propagation through computer code calculations provides subjective probability distributions (and ranges) for the code results. The evaluation of the margin to acceptance criteria (i.e. technical limit values), e.g. the maximum fuel rod clad temperature, should be based on the upper limit of this distribution for the calculated temperatures. Investigations are underway to transform data measured in experiments and post-test calculations into thermal-hydraulic model parameters with uncertainties. It is effective to concentrate on those uncertainties showing the highest sensitivity measures. The state of knowledge about these uncertain input parameters has to be improved, and suitable experimental as well as analytical information has to be selected. This is a general experience from applying different uncertainty methods
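The idea of ranking input uncertainties by sensitivity measures can be sketched with a toy surrogate for a code result and Spearman rank correlation as the measure (the linear surrogate and its coefficients are invented for illustration, not a thermal-hydraulic model):

```python
import random

random.seed(1)

def ranks(xs):
    order = sorted(range(len(xs)), key=lambda k: xs[k])
    r = [0.0] * len(xs)
    for rank, idx in enumerate(order):
        r[idx] = float(rank)
    return r

def spearman(xs, ys):
    """Rank correlation: a simple global sensitivity measure (no ties assumed)."""
    rx, ry = ranks(xs), ranks(ys)
    m = (len(xs) - 1) / 2.0
    cov = sum((a - m) * (b - m) for a, b in zip(rx, ry))
    var = sum((a - m) ** 2 for a in rx)
    return cov / var

# Toy surrogate: clad temperature driven by two uncertain model parameters,
# the first dominating by construction
x1 = [random.uniform(-0.3, 0.3) for _ in range(500)]
x2 = [random.uniform(-0.2, 0.2) for _ in range(500)]
T = [1000.0 + 400.0 * a + 50.0 * b + random.gauss(0.0, 5.0)
     for a, b in zip(x1, x2)]
s1, s2 = spearman(x1, T), spearman(x2, T)   # flags x1 as the key uncertainty
```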
Uncertainty of Doppler reactivity worth due to uncertainties of JENDL-3.2 resonance parameters
Energy Technology Data Exchange (ETDEWEB)
Zukeran, Atsushi [Hitachi Ltd., Hitachi, Ibaraki (Japan). Power and Industrial System R and D Div.; Hanaki, Hiroshi; Nakagawa, Tuneo; Shibata, Keiichi; Ishikawa, Makoto
1998-03-01
An analytical formula for the resonance self-shielding factor (f-factor) is derived from the resonance integral (J-function) based on the NR approximation, and an analytical expression for the Doppler reactivity worth (ρ) is also obtained using this result. Uncertainties of the f-factor and Doppler reactivity worth are evaluated on the basis of sensitivity coefficients to the resonance parameters. The uncertainty of the Doppler reactivity worth at 487 K is about 4% for the PNC Large Fast Breeder Reactor. (author)
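Propagating parameter uncertainties through sensitivity coefficients follows the sandwich rule u²(ρ) = S C Sᵀ, with S the sensitivities and C the parameter covariance; the numbers below are illustrative, not JENDL-3.2 values:

```python
import numpy as np

# Sandwich rule u^2(rho) = S C S^T: S holds relative sensitivities of the
# Doppler worth to three resonance parameters, C their relative covariance
S = np.array([0.8, -0.3, 0.1])
rel_std = np.array([0.03, 0.05, 0.10])
corr = np.array([[1.0, 0.2, 0.0],
                 [0.2, 1.0, -0.1],
                 [0.0, -0.1, 1.0]])
C = np.outer(rel_std, rel_std) * corr
u_rho = float(np.sqrt(S @ C @ S))   # relative uncertainty of the Doppler worth
```

With these placeholder inputs the result is a few percent, the same order as the 4% quoted in the abstract.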
Uncertainty and Cognitive Control
Directory of Open Access Journals (Sweden)
Faisal Mushtaq
2011-10-01
A growing trend of neuroimaging, behavioural and computational research has investigated the topic of outcome uncertainty in decision-making. Although evidence to date indicates that humans are very effective in learning to adapt to uncertain situations, the nature of the specific cognitive processes involved in the adaptation to uncertainty is still a matter of debate. In this article, we reviewed evidence suggesting that cognitive control processes are at the heart of uncertainty in decision-making contexts. Available evidence suggests that: (1) there is a strong conceptual overlap between the constructs of uncertainty and cognitive control; (2) there is a remarkable overlap between the neural networks associated with uncertainty and the brain networks subserving cognitive control; (3) the perception and estimation of uncertainty might play a key role in monitoring processes and the evaluation of the need for control; (4) potential interactions between uncertainty and cognitive control might play a significant role in several affective disorders.
International Nuclear Information System (INIS)
This book comprises nineteen chapters, which cover an introduction to analytical chemistry, experimental error and statistics, chemical equilibrium and solubility, gravimetric analysis with the mechanism of precipitation, range and calculation of the result, general principles of volumetric analysis, sedimentation methods (types and titration curves), acid-base balance, acid-base titration curves, complex and firing reactions, an introduction to electroanalytical chemistry, acid-base titration curves, electrodes and potentiometry, electrolysis and conductometry, voltammetry and polarographic spectrophotometry, atomic spectrometry, solvent extraction, chromatography, and experiments.
International Nuclear Information System (INIS)
The division for Analytical Chemistry continued its efforts to develop an accurate method for the separation of trace amounts from mixtures which contain various other elements. Ion exchange chromatography is of special importance in this regard. New separation techniques were tried on certain trace amounts in South African standard rock materials and special ceramics. Methods were also tested for the separation of carrier-free radioisotopes from irradiated cyclotron discs.
Constraint Propagation as Information Maximization
Abdallah, A Nait
2012-01-01
Dana Scott used the partial order among partial functions for his mathematical model of recursively defined functions. He interpreted the partial order as one of information content. In this paper we elaborate on Scott's suggestion of regarding computation as a process of information maximization by applying it to the solution of constraint satisfaction problems. Here the method of constraint propagation can be interpreted as decreasing uncertainty about the solution -- that is, as gain in information about the solution. As an illustrative example we choose numerical constraint satisfaction problems to be solved by interval constraints. To facilitate this approach to constraint solving we formulate constraint satisfaction problems as formulas in predicate logic. This necessitates extending the usual semantics for predicate logic so that meaning is assigned not only to sentences but also to formulas with free variables.
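The "propagation as information gain" view can be made concrete with a hull-consistency step for a single constraint on intervals (a minimal sketch, not the paper's formalism): each narrowing discards only values that cannot occur in any solution, so the intervals monotonically gain information.

```python
def narrow_sum(x, y, z):
    """One hull-consistency step for the constraint x + y = z on intervals.

    Intervals are (lo, hi) pairs; narrowing removes only values that cannot
    take part in any solution, so each step is a pure gain in information."""
    zlo, zhi = max(z[0], x[0] + y[0]), min(z[1], x[1] + y[1])
    xlo, xhi = max(x[0], zlo - y[1]), min(x[1], zhi - y[0])
    ylo, yhi = max(y[0], zlo - xhi), min(y[1], zhi - xlo)
    return (xlo, xhi), (ylo, yhi), (zlo, zhi)

# x in [0, 10], y in [0, 10], constrained by x + y = z with z in [12, 14]
x, y, z = (0.0, 10.0), (0.0, 10.0), (12.0, 14.0)
for _ in range(3):                 # iterate to a fixed point
    x, y, z = narrow_sum(x, y, z)
print(x, y, z)   # x and y narrow to (2.0, 10.0); z is unchanged
```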
Propagator of spinless tachyons
International Nuclear Information System (INIS)
The possibility of formulating a tachyon propagator is considered. Both nonrelativistic- and relativistic-tachyon propagators are derived. The presented theory is based on the reciprocity principle according to which the roles of space and time are interchanged. The roles of tachyon energy and momentum are also interchanged. The relativistic-tachyon propagator reflects the fact that positive- and negative-momentum states are separated by a gap which remains unaltered in all Lorentz frames. The relativistic-tachyon propagator includes the momentum sign function instead of the energy sign function as compared with the bradyon propagator. For these reasons, the relativistic-tachyon propagator leads to a solution of the tachyon Klein-Gordon equation which is Lorentz invariant. (author)
Uncertainty Quantification in Climate Modeling
Sargsyan, K.; Safta, C.; Berry, R.; Debusschere, B.; Najm, H.
2011-12-01
We address challenges that sensitivity analysis and uncertainty quantification methods face when dealing with complex computational models. In particular, climate models are computationally expensive and typically depend on a large number of input parameters. We consider the Community Land Model (CLM), which consists of a nested computational grid hierarchy designed to represent the spatial heterogeneity of the land surface. Each computational cell can be composed of multiple land types, and each land type can incorporate one or more sub-models describing the spatial and depth variability. Even for simulations at a regional scale, the computational cost of a single run is quite high and the number of parameters that control the model behavior is very large. Therefore, the parameter sensitivity analysis and uncertainty propagation face significant difficulties for climate models. This work employs several algorithmic avenues to address some of the challenges encountered by classical uncertainty quantification methodologies when dealing with expensive computational models, specifically focusing on the CLM as a primary application. First of all, since the available climate model predictions are extremely sparse due to the high computational cost of model runs, we adopt a Bayesian framework that effectively incorporates this lack-of-knowledge as a source of uncertainty, and produces robust predictions with quantified uncertainty even if the model runs are extremely sparse. In particular, we infer Polynomial Chaos spectral expansions that effectively encode the uncertain input-output relationship and allow efficient propagation of all sources of input uncertainties to outputs of interest. Secondly, the predictability analysis of climate models strongly suffers from the curse of dimensionality, i.e. the large number of input parameters. While single-parameter perturbation studies can be efficiently performed in a parallel fashion, the multivariate uncertainty analysis
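A one-dimensional, non-intrusive Polynomial Chaos sketch of the kind of spectral expansion mentioned above: project a toy response of a standard normal input onto probabilists' Hermite polynomials by quadrature, then read mean and variance from the coefficients (the response function is invented for illustration, not a CLM output):

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

def pce_coeffs(f, order, nquad=20):
    """Project f(X), X ~ N(0,1), onto probabilists' Hermite polynomials He_k."""
    x, w = He.hermegauss(nquad)           # Gauss nodes for weight e^{-x^2/2}
    w = w / math.sqrt(2.0 * math.pi)      # renormalize to the N(0,1) density
    fx = f(x)
    return [float(np.sum(w * fx * He.hermeval(x, [0.0] * k + [1.0])))
            / math.factorial(k)           # divide by E[He_k^2] = k!
            for k in range(order + 1)]

f = lambda x: np.exp(0.3 * x)             # toy model response
c = pce_coeffs(f, order=4)
mean = c[0]                               # PCE mean is the zeroth coefficient
var = sum(ck ** 2 * math.factorial(k)     # variance from the higher modes
          for k, ck in enumerate(c[1:], start=1))
```

Once the coefficients are known, moments and samples of the output come from cheap polynomial evaluations instead of further model runs, which is the point of the surrogate.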
Towards "Propagation = Logic + Control"
Brand, Sebastian; Yap, Roland H. C.
2006-01-01
Constraint propagation algorithms implement logical inference. For efficiency, it is essential to control whether and in what order basic inference steps are taken. We provide a high-level framework that clearly differentiates between information needed for controlling propagation versus that needed for the logical semantics of complex constraints composed from primitive ones. We argue for the appropriateness of our controlled propagation framework by showing that it c...
Directory of Open Access Journals (Sweden)
M. Hajek
2006-04-01
The propagation of ultra wide band (UWB) signals through walls is analyzed. For these propagation studies, it is necessary to consider not only propagation at a single frequency but in the whole band. The UWB radar output signal is formed by both the transmitter and the antenna. The effects of antenna receiving and transmitting responses for various antenna types (such as small and aperture antennas) are studied in the frequency as well as the time domain. Moreover, UWB radar output signals can be substantially affected by electromagnetic wave propagation through walls and multipath effects.
Uncertainty in peak cooling load calculations
Energy Technology Data Exchange (ETDEWEB)
Dominguez-Munoz, Fernando; Cejudo-Lopez, Jose M.; Carrillo-Andres, Antonio [Grupo de Energetica, ETS Ingenieros Industriales, Universidad de Malaga, Calle Dr. Ortiz Ramos, 29071 Malaga (Spain)
2010-07-15
Peak cooling loads are usually calculated at early stages of the building project, when large uncertainties affect the input data. Uncertainties arise from a variety of sources like the lack of information, random components and the approximate nature of the building mathematical model. Unfortunately, these uncertainties are normally large enough to make the result of the calculation very dependent on starting assumptions about the value of input data. HVAC engineers deal with uncertainties through worst-case scenarios and/or safety factors. In this paper, a new approach is proposed based on stochastic simulation methods. Uncertainty bands are applied to the input data and propagated through the building model in order to determine their impact on the peak cooling load. The result of this calculation is a probability distribution that quantifies the whole range of possible peak loads and the probability of each interval. The stochastic solution is compared with the conventional one, and a global sensitivity analysis is undertaken to identify the most important uncertainties. (author)
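The stochastic approach described above can be sketched with a deliberately simple load model: apply uniform uncertainty bands to the inputs, sample, and read design values off the resulting distribution. The load formula and all bands below are illustrative placeholders, not the paper's building model:

```python
import random

random.seed(0)

def peak_load(area, u_wall, dT, q_int, solar):
    """Toy steady-state peak cooling load in kW: conduction plus gains."""
    return (u_wall * area * dT + q_int + solar) / 1000.0

# Uniform uncertainty bands on the inputs, as in early design stages
draws = []
for _ in range(5000):
    draws.append(peak_load(area=random.uniform(450.0, 550.0),      # m^2
                           u_wall=random.uniform(0.4, 0.8),        # W/(m^2 K)
                           dT=random.uniform(8.0, 12.0),           # K
                           q_int=random.uniform(4000.0, 9000.0),   # W
                           solar=random.uniform(2000.0, 6000.0)))  # W
draws.sort()
p50 = draws[len(draws) // 2]          # median load
p95 = draws[int(0.95 * len(draws))]   # a risk-based sizing value
```

Choosing a percentile of the output distribution replaces the fixed safety factor with an explicit statement of accepted risk.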
Uncertainty Analysis of Thermal Comfort Parameters
Ribeiro, A. Silva; Alves e Sousa, J.; Cox, Maurice G.; Forbes, Alistair B.; Matias, L. Cordeiro; Martins, L. Lages
2015-08-01
International Standard ISO 7730:2005 defines thermal comfort as that condition of mind that expresses the degree of satisfaction with the thermal environment. Although this definition is inevitably subjective, the Standard gives formulae for two thermal comfort indices, predicted mean vote (PMV) and predicted percentage dissatisfied (PPD). The PMV formula is based on principles of heat balance and experimental data collected in a controlled climate chamber under steady-state conditions. The PPD formula depends only on PMV. Although these formulae are widely recognized and adopted, little has been done to establish the measurement uncertainties associated with their use, bearing in mind that the formulae depend on measured values and tabulated values given to limited numerical accuracy. Knowledge of these uncertainties is invaluable when values provided by the formulae are used in making decisions in various health and civil engineering situations. This paper examines these formulae, giving a general mechanism for evaluating the uncertainties associated with values of the quantities on which the formulae depend. Further, consideration is given to the propagation of these uncertainties through the formulae to provide uncertainties associated with the values obtained for the indices. Current international guidance on uncertainty evaluation is utilized.
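Since PPD depends only on PMV, the last propagation step can be sketched directly from the ISO 7730 relation PPD = 100 - 95·exp(-0.03353·PMV⁴ - 0.2179·PMV²), with a first-order (derivative-based) propagation of an assumed PMV uncertainty:

```python
import math

def ppd(pmv):
    """ISO 7730 predicted percentage dissatisfied as a function of PMV."""
    return 100.0 - 95.0 * math.exp(-0.03353 * pmv ** 4 - 0.2179 * pmv ** 2)

def ppd_uncertainty(pmv, u_pmv):
    """First-order propagation of the PMV uncertainty through the PPD formula."""
    dppd = 95.0 * math.exp(-0.03353 * pmv ** 4 - 0.2179 * pmv ** 2) \
           * (4 * 0.03353 * pmv ** 3 + 2 * 0.2179 * pmv)
    return abs(dppd) * u_pmv

print(round(ppd(0.0), 1))            # 5.0 -- 5% dissatisfied even at neutrality
u = ppd_uncertainty(0.5, 0.1)        # sensitivity grows away from PMV = 0
```

The derivative vanishes at PMV = 0, so near neutrality the PPD uncertainty is dominated by higher-order terms; the first-order formula is only a sketch of the mechanism the paper develops in full.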
Subjective judgment on measure of data uncertainty
International Nuclear Information System (INIS)
Integral parameters are considered which can be derived from the covariance matrix of the uncertainties and can serve as a general measure of uncertainties in comparisons of different fits. Using realistic examples and simple data-model fits with a variable number of parameters, it was shown that the sum of all elements of the covariance matrix is the best general measure for characterizing and comparing uncertainties obtained in different model and non-model fits. Discussions also included the problem of non-positive definiteness of the covariance matrix of the uncertainty of the cross sections obtained from the covariance matrix of the uncertainty of the parameters in cases where the number of parameters is less than the number of cross-section points. Since the numerical inaccuracy of the calculations is always many orders of magnitude larger than the machine representation of zero, it was concluded that the calculated eigenvalues of semi-positive definite matrices have no machine zeros. These covariance matrices can therefore be inverted when they are used in the error propagation equations, so the procedure for transforming semi-positive definite matrices into positive definite ones by introducing minimal changes into the matrix (changes equivalent to introducing additional non-informative parameters in the model) is generally not needed. Caution should still be observed, because there can be cases where uncertainties are unphysical, e.g. integral parameters estimated with formally non-positive-definite covariance matrices
Uncertainty in hydrological signatures for gauged and ungauged catchments
Westerberg, Ida K.; Wagener, Thorsten; Coxon, Gemma; McMillan, Hilary K.; Castellarin, Attilio; Montanari, Alberto; Freer, Jim
2016-03-01
Reliable information about hydrological behavior is needed for water-resource management and scientific investigations. Hydrological signatures quantify catchment behavior as index values, and can be predicted for ungauged catchments using a regionalization procedure. The prediction reliability is affected by data uncertainties for the gauged catchments used in prediction and by uncertainties in the regionalization procedure. We quantified signature uncertainty stemming from discharge data uncertainty for 43 UK catchments and propagated these uncertainties in signature regionalization, while accounting for regionalization uncertainty with a weighted-pooling-group approach. Discharge uncertainty was estimated using Monte Carlo sampling of multiple feasible rating curves. For each sampled rating curve, a discharge time series was calculated and used in deriving the gauged signature uncertainty distribution. We found that the gauged uncertainty varied with signature type, local measurement conditions and catchment behavior, with the highest uncertainties (median relative uncertainty ±30-40% across all catchments) for signatures measuring high- and low-flow magnitude and dynamics. Our regionalization method allowed assessing the role and relative magnitudes of the gauged and regionalized uncertainty sources in shaping the signature uncertainty distributions predicted for catchments treated as ungauged. We found that (1) if the gauged uncertainties were neglected there was a clear risk of overconditioning the regionalization inference, e.g., by attributing catchment differences resulting from gauged uncertainty to differences in catchment behavior, and (2) uncertainty in the regionalization results was lower for signatures measuring flow distribution (e.g., mean flow) than flow dynamics (e.g., autocorrelation), and for average flows (and then high flows) compared to low flows.
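The Monte Carlo sampling of feasible rating curves can be sketched for one signature (mean flow): each sampled curve converts the same stage record into a different discharge series, and the spread of the signature across curves is the gauged uncertainty. The power-law form and all parameter ranges below are illustrative, not the paper's values:

```python
import random

random.seed(42)

# Stage record (m); each Monte Carlo draw is one feasible power-law rating
# curve Q = a * h**b
stages = [0.4, 0.7, 1.1, 0.9, 0.5, 1.6, 0.8, 0.6, 1.3, 0.45]

def mean_flow_signature(a, b):
    q = [a * h ** b for h in stages]   # discharge series from this curve
    return sum(q) / len(q)             # signature: mean flow

sig = sorted(mean_flow_signature(random.gauss(10.0, 1.0),
                                 random.gauss(1.8, 0.1))
             for _ in range(2000))
median = sig[len(sig) // 2]
# relative half-width of the 95% interval: the gauged signature uncertainty
rel_halfwidth = (sig[int(0.975 * len(sig))]
                 - sig[int(0.025 * len(sig))]) / (2.0 * median)
```

Signatures of flow dynamics (e.g. autocorrelation) would be computed from each sampled series in the same loop, typically yielding wider relative distributions than this distribution-type signature.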
When to carry out analytic continuation?
Zuo, J M
1998-01-01
This paper discusses analytic continuation in thermal field theory using the theory of $\eta$-$\xi$ spacetime. Taking a simple model as an example, the $2\times 2$ matrix real-time propagator is solved from the equation obtained through continuation of the equation for the imaginary-time propagator. The geometry of the $\eta$-$\xi$ spacetime plays an important role in the discussion.
Uncertainty in the environmental modelling process – A framework and guidance
Refsgaard, J. C.; Van Der Sluijs, J P; Hojberg, A.L.; Vanrolleghem, P.
2007-01-01
A terminology and typology of uncertainty is presented together with a framework for the modelling process, its interaction with the broader water management process and the role of uncertainty at different stages in the modelling processes. Brief reviews have been made of 14 different (partly complementary) methods commonly used in uncertainty assessment and characterisation: data uncertainty engine (DUE), error propagation equations, expert elicitation, extended peer review, inverse modelli...
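One of the reviewed methods, the error propagation equations, rests on a first-order Taylor approximation: for y = f(x1, ..., xn) with independent inputs, var(y) ≈ Σ_i (∂f/∂x_i)² σ_i². A minimal numerical sketch (the function and uncertainty values are hypothetical):

```python
import numpy as np

def propagate(f, x, sigma, eps=1e-6):
    """First-order error propagation for independent inputs:
    var(y) ≈ sum_i (df/dx_i)^2 * sigma_i^2, gradients by central differences."""
    x = np.asarray(x, float)
    grad = np.empty_like(x)
    for i in range(x.size):
        h = np.zeros_like(x)
        h[i] = eps
        grad[i] = (f(x + h) - f(x - h)) / (2 * eps)
    return float(np.sqrt(np.sum((grad * np.asarray(sigma)) ** 2)))

# Example: a hydraulic-style power law y = a * h**b (illustrative values).
f = lambda p: p[0] * p[2] ** p[1]
sigma_y = propagate(f, x=[5.0, 1.6, 0.8], sigma=[0.5, 0.1, 0.05])
print(round(sigma_y, 4))
```

The same formula underlies analytical uncertainty budgets; correlated inputs would require the full covariance matrix instead of the diagonal terms used here.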
Premixed flame propagation in vertical tubes
Kazakov, Kirill A.
2016-04-01
Analytical treatment of the premixed flame propagation in vertical tubes with smooth walls is given. Using the on-shell flame description, equations for a quasi-steady flame with a small but finite front thickness are obtained and solved numerically. It is found that near the limits of inflammability, solutions describing upward flame propagation come in pairs having close propagation speeds and that the effect of gravity is to reverse the burnt gas velocity profile generated by the flame. On the basis of these results, a theory of partial flame propagation driven by a strong gravitational field is developed. A complete explanation is given of the intricate observed behavior of limit flames, including dependence of the inflammability range on the size of the combustion domain, the large distances of partial flame propagation, and the progression of flame extinction. The role of the finite front-thickness effects is discussed in detail. Also, various mechanisms governing flame acceleration in smooth tubes are identified. Acceleration of methane-air flames in open tubes is shown to be a combined effect of the hydrostatic pressure difference produced by the ambient cold air and the difference of dynamic gas pressure at the tube ends. On the other hand, a strong spontaneous acceleration of the fast methane-oxygen flames at the initial stage of their evolution in open-closed tubes is conditioned by metastability of the quasi-steady propagation regimes. An extensive comparison of the obtained results with the experimental data is made.
Sustainable Process Design under uncertainty analysis: targeting environmental indicators
DEFF Research Database (Denmark)
L. Gargalo, Carina; Gani, Rafiqul
2015-01-01
This study focuses on uncertainty analysis of environmental indicators used to support sustainable process design efforts. To this end, the Life Cycle Assessment methodology is extended with a comprehensive uncertainty analysis to propagate the uncertainties in input LCA data to the environmental...... extended LCA procedure is flexible and generic and can handle various sources of uncertainties in environmental impact analysis. This is expected to contribute to more reliable calculation of impact categories and robust sustainable process design....... from algae biomass is used as a case study. The results indicate there are considerable uncertainties in the calculated environmental indicators as revealed by CDFs. The underlying sources of these uncertainties are indeed the significant variation in the databases used for the LCA analysis. The...
Measurement uncertainty in pharmaceutical analysis and its application
Institute of Scientific and Technical Information of China (English)
Marcus Augusto Lyrio Traple; Alessandro Morais Saviano; Fabiane Lacerda Francisco; Felipe Rebello Lourençon
2014-01-01
The measurement uncertainty provides complete information about an analytical result. This is very important because several decisions of compliance or non-compliance are based on analytical results in pharmaceutical industries. The aim of this work was to evaluate and discuss the estimation of uncertainty in pharmaceutical analysis. The uncertainty is a useful tool in the assessment of compliance or non-compliance of in-process and final pharmaceutical products as well as in the assessment of pharmaceutical equivalence and stability study of drug products.
Study of social-network-based information propagation
Fan, Xiaoguang; 樊晓光.
2013-01-01
Information propagation has attracted increasing attention from sociologists, marketing researchers and Information Technology entrepreneurs. With the rapid developments in online and mobile social applications like Facebook, Twitter, and LinkedIn, large-scale, high-speed and instantaneous information dissemination becomes possible, spawning tremendous opportunities for electronic commerce. It is non-trivial to make an accurate analysis on how information is propagated due to the uncertainty...
A generalized photon propagator
Itin, Yakov
2007-01-01
A covariant, gauge-independent derivation of the generalized dispersion relation of electromagnetic waves in a medium with a local and linear constitutive law is presented. A generalized photon propagator is derived. For the Maxwell constitutive tensor, the standard light cone structure and the standard Feynman propagator are reinstated.
Uncertainty and Cognitive Control
Mushtaq, Faisal; Bland, Amy R; Schaefer, Alexandre
2011-01-01
A growing body of neuroimaging, behavioral, and computational research has investigated the topic of outcome uncertainty in decision-making. Although evidence to date indicates that humans are very effective in learning to adapt to uncertain situations, the nature of the specific cognitive processes involved in the adaptation to uncertainty is still a matter of debate. In this article, we reviewed evidence suggesting that cognitive control processes are at the heart of uncertainty in decisi...
Inventories and sales uncertainty
Caglayan, M.; Maioli, S. (Silvia); Mateut, S.
2011-01-01
We investigate the empirical linkages between sales uncertainty and firms' inventory investment behavior while controlling for firms' financial strength. Using large panels of manufacturing firms from several European countries we find that higher sales uncertainty leads to larger stocks of inventories. We also identify an indirect effect of sales uncertainty on inventory accumulation through the financial strength of firms. Our results provide evidence that financial strength mitigates the a...
Uncertainty in artificial intelligence
Kanal, LN
1986-01-01
How to deal with uncertainty is a subject of much controversy in Artificial Intelligence. This volume brings together a wide range of perspectives on uncertainty, many of the contributors being the principal proponents in the controversy. Some of the notable issues which emerge from these papers revolve around an interval-based calculus of uncertainty, the Dempster-Shafer Theory, and probability as the best numeric model for uncertainty. There remain strong dissenting opinions not only about probability but even about the utility of any numeric method in this context.
Measurement Uncertainty and Probability
Willink, Robin
2013-02-01
Part I. Principles: 1. Introduction; 2. Foundational ideas in measurement; 3. Components of error or uncertainty; 4. Foundational ideas in probability and statistics; 5. The randomization of systematic errors; 6. Beyond the standard confidence interval; Part II. Evaluation of Uncertainty: 7. Final preparation; 8. Evaluation using the linear approximation; 9. Evaluation without the linear approximations; 10. Uncertainty information fit for purpose; Part III. Related Topics: 11. Measurement of vectors and functions; 12. Why take part in a measurement comparison?; 13. Other philosophies; 14. An assessment of objective Bayesian methods; 15. A guide to the expression of uncertainty in measurement; 16. Measurement near a limit - an insoluble problem?; References; Index.
Radon surveys and uncertainties
International Nuclear Information System (INIS)
Radon surveys are made primarily for estimating the radon risk for the population of an area but also for predicting the risk for inhabitants of future buildings. Therefore it is of essential importance to know the uncertainties of such predictions. The outcome of radon surveys is strongly influenced by many factors, some with large uncertainties. In most cases passive radon detectors are exposed for some weeks or months (up to one-year measurements). In these cases, the contribution of uncertainties in the calibration of the detectors to the total uncertainty is most often of minor importance. The main contribution to the uncertainties comes from the unknown treatment of the detectors by the inhabitants during the exposure and from the natural fluctuation of the indoor radon concentration in time. The latter is also true for one-year measurements. Additional uncertainties are introduced when the measured data are normalized to some time period (e.g., one-year mean) or to some standardized measurement situation. Generally, it is of crucial importance to know the probability of a possible underestimation of the radon risk for an area. The main contributions to the final uncertainties, their sizes, and the mathematical procedures used during the Austrian Radon Project (ARP) to estimate the uncertainties in the final categorization of areas into radon potential classes will be discussed. In addition, procedures which can be used to reduce some uncertainties will be presented. (author)
Positrons from dark matter annihilation in the galactic halo: uncertainties
Fornengo, N; Lineros, R; Donato, F; Salati, P
2007-01-01
Indirect detection signals from dark matter annihilation are studied in the positron channel. We discuss in detail the positron propagation inside the galactic medium: we present novel solutions of the diffusion and propagation equations and we focus on the determination of the astrophysical uncertainties which affect the positron dark matter signal. We show that, especially in the low energy tail of the positron spectra at Earth, the uncertainty is sizeable and we quantify the effect. Comparison of our predictions with current available and foreseen experimental data are derived.
Vortex microscope: analytical model and experiment
Masajada, Jan; Popiołek-Masajada, Agnieszka; Szatkowski, Mateusz; Plociniczak, Łukasz
2015-11-01
We present an analytical model describing Gaussian beam propagation through an off-axis vortex lens and a set of axially positioned ideal lenses. The model is derived on the basis of the Fresnel diffraction integral and is extended to the case of a vortex lens with arbitrary topological charge m. We show that the Gaussian beam propagation can be represented by a function G which depends on four coefficients. When propagating from one lens to another the function holds its form but the coefficients change.
Hu, Xingzhi; Parks, Geoffrey T.; Chen, Xiaoqian; Seshadri, Pranay
2016-03-01
Uncertainty quantification has recently been receiving much attention from the aerospace engineering community. With ever-increasing requirements for robustness and reliability, it is crucial to quantify multidisciplinary uncertainty in satellite system design, which dominates the overall design direction and cost. However, coupled multi-disciplines and cross propagation hamper the efficiency and accuracy of high-dimensional uncertainty analysis. In this study, an uncertainty quantification methodology based on active subspaces is established for satellite conceptual design. The active subspace effectively reduces the dimension and measures the contributions of input uncertainties. A comprehensive characterization of the associated uncertain factors is made and all subsystem models are built for uncertainty propagation. By integrating a system decoupling strategy, the multidisciplinary uncertainty effect is efficiently represented by a one-dimensional active subspace for each design. The identified active subspace is checked by bootstrap resampling for confidence intervals and verified by Monte Carlo propagation for accuracy. To show the performance of active subspaces, 18 uncertainty parameters of an Earth observation small satellite are exemplified and then another 5 design uncertainties are incorporated. The uncertainties that contribute the most to satellite mass and total cost are ranked, and the quantification of high-dimensional uncertainty is achieved with a relatively small number of support samples. The methodology, at considerably less cost, exhibits high accuracy and strong adaptability, and provides a potential template for tackling multidisciplinary uncertainty in practical satellite systems.
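The core of the active-subspace construction, estimating the gradient covariance C = E[∇f ∇fᵀ] by Monte Carlo and reading the dominant eigenvector off its eigendecomposition, can be sketched on a toy quantity of interest; the 18-parameter function below is a stand-in, not the satellite model:

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 18  # e.g., number of uncertain satellite parameters (illustrative)

# Toy quantity of interest whose variation is dominated by one direction w:
# f(x) = (x.w)^2 + 0.005*|x|^2, so grad f = 2(x.w) w + 0.01 x.
w = rng.normal(size=dim)
w /= np.linalg.norm(w)
f_grad = lambda x: (2.0 * (x @ w)) * w + 0.01 * x

# Monte Carlo estimate of the gradient covariance C = E[grad grad^T].
X = rng.normal(size=(5000, dim))
G = np.array([f_grad(x) for x in X])
C = G.T @ G / len(X)

eigvals, eigvecs = np.linalg.eigh(C)   # ascending eigenvalue order
active_dir = eigvecs[:, -1]            # dominant eigenvector = active direction
gap = eigvals[-1] / eigvals[-2]        # large spectral gap => 1-D active subspace
print(f"spectral gap: {gap:.0f}, alignment with true direction: "
      f"{abs(active_dir @ w):.3f}")
```

A large spectral gap is what justifies representing the multidisciplinary uncertainty by a one-dimensional subspace, as the abstract describes; bootstrap resampling of `X` would give confidence intervals on the eigenpairs.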
The uncertainty of modeled soil carbon stock change for Finland
Lehtonen, Aleksi; Heikkinen, Juha
2013-04-01
Countries should report soil carbon stock changes of forests under the Kyoto Protocol. Under the Protocol one can omit reporting of a carbon pool by verifying that the pool is not a source of carbon, which is especially tempting for the soil pool. However, verifying that the soils of a nation are not a source of carbon in a given year seems nearly impossible. The Yasso07 model was parametrized against various decomposition data using an MCMC method. Soil carbon changes in Finland between 1972 and 2011 were simulated with the Yasso07 model using litter input data derived from the National Forest Inventory (NFI) and fellings time series. The uncertainties of biomass models, litter turnover rates, NFI sampling and the Yasso07 model were propagated with Monte Carlo simulations. Due to the biomass estimation methods, the uncertainties of the various litter input sources (e.g., living trees, natural mortality and fellings) correlate strongly with each other. We show how the original covariance matrices can be combined analytically, greatly reducing the number of simulated components. While doing the simulations we found that properly handling correlations may be even more essential than accurate estimates of standard errors. As a preliminary result, we found that both Southern and Northern Finland were soil carbon sinks, with coefficients of variation (CV) varying from 10% to 25% when the model was driven with long-term constant weather data. When we applied annual weather data, soils were both sinks and sources of carbon and CVs varied from 10% to 90%. This implies that the success of soil carbon sink verification depends on the weather data applied with the models. The IPCC should therefore provide clear guidance on the weather data to be applied with soil carbon models and on soil carbon sink verification. In UNFCCC reporting, carbon sinks of forest biomass have typically been averaged over five years; a similar period for soil model weather data would be logical.
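The point about correlated litter inputs can be illustrated with a minimal Monte Carlo comparison; the means, standard deviations, and correlation below are invented for illustration, not taken from the Finnish data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative litter-input means (living trees, mortality, fellings), t C/ha.
mu = np.array([2.0, 0.5, 0.8])
sd = np.array([0.2, 0.1, 0.15])

# Strong positive correlation, e.g., because all sources share biomass models.
rho = 0.9
R = np.full((3, 3), rho)
np.fill_diagonal(R, 1.0)
cov = np.outer(sd, sd) * R

# Monte Carlo propagation of total litter input through a trivial "model"
# (the sum), with and without the correlations.
total_corr = rng.multivariate_normal(mu, cov, 100_000).sum(axis=1)
total_ind = rng.normal(mu, sd, (100_000, 3)).sum(axis=1)

cv = lambda x: x.std() / x.mean()
print(f"CV with correlations: {cv(total_corr):.3f}, "
      f"assuming independence: {cv(total_ind):.3f}")
```

Ignoring the correlations visibly understates the total uncertainty, which is the abstract's point that handling correlations can matter more than the standard errors themselves.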
Uncertainty in flood risk mapping
Gonçalves, Luisa M. S.; Fonte, Cidália C.; Gomes, Ricardo
2014-05-01
A flood refers to a sharp increase of water level or volume in rivers and seas caused by sudden rainstorms or melting ice due to natural factors. In this paper, the flooding of riverside urban areas caused by sudden rainstorms is studied. In this context, flooding occurs when the water rises above the level of the minor river bed and enters the major river bed. The level of the major bed determines the magnitude and risk of the flooding. The prediction of the flooding extent is usually deterministic, and corresponds to the expected limit of the flooded area. However, there are many sources of uncertainty in the process of obtaining these limits, which influence the resulting flood maps used for watershed management or as instruments for territorial and emergency planning. In addition, small variations in the delineation of the flooded area can translate into erroneous risk predictions. Therefore, maps that reflect the uncertainty associated with the flood modeling process have started to be developed, associating a degree of likelihood with the boundaries of the flooded areas. In this paper an approach is presented that enables the influence of parameter uncertainty, which depends on the type of Land Cover Map (LCM) and Digital Elevation Model (DEM), on the estimated peak flow and the delineation of flooded areas to be evaluated (different peak flows correspond to different flood areas). The approach requires modeling the DEM uncertainty and its propagation to the catchment delineation. The results obtained in this step enable a catchment with fuzzy geographical extent to be generated, where a degree of possibility of belonging to the basin is assigned to each elementary spatial unit. Since the fuzzy basin may be considered a fuzzy set, the fuzzy area of the basin may be computed, generating a fuzzy number. The catchment peak flow is then evaluated using fuzzy arithmetic. With this methodology a fuzzy number is obtained for the peak flow
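The alpha-cut mechanics of evaluating a peak flow by fuzzy arithmetic can be sketched as follows; the triangular fuzzy area and the rational-method formula are illustrative assumptions, not the paper's actual model:

```python
def tri_alpha_cut(lo, mode, hi, alpha):
    """Alpha-cut interval of a triangular fuzzy number (lo, mode, hi)."""
    return (lo + alpha * (mode - lo), hi - alpha * (hi - mode))

# Fuzzy catchment area (km^2), e.g., from DEM-uncertainty propagation
# (illustrative values).
area = (45.0, 50.0, 56.0)
C, i = 0.4, 30.0  # runoff coefficient [-], rainfall intensity [mm/h]

# Rational-method peak flow Q = C * i * A / 3.6 (m^3/s); interval arithmetic
# applied at each alpha level yields a fuzzy number for the peak flow.
for alpha in (0.0, 0.5, 1.0):
    a_lo, a_hi = tri_alpha_cut(*area, alpha)
    q_lo, q_hi = C * i * a_lo / 3.6, C * i * a_hi / 3.6
    print(f"alpha={alpha:.1f}: Q in [{q_lo:.1f}, {q_hi:.1f}] m^3/s")
```

At alpha = 1 the interval collapses to the crisp peak flow for the modal area; at alpha = 0 it spans the full support, giving the fuzzy peak flow the abstract describes.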
Spatial Uncertainty Model for Visual Features Using a Kinect™ Sensor
Directory of Open Access Journals (Sweden)
Jae-Han Park
2012-06-01
This study proposes a mathematical uncertainty model for the spatial measurement of visual features using Kinect™ sensors. This model can provide qualitative and quantitative analysis for the utilization of Kinect™ sensors as 3D perception sensors. In order to achieve this objective, we derived the propagation relationship of the uncertainties between the disparity image space and the real Cartesian space with the mapping function between the two spaces. Using this propagation relationship, we obtained the mathematical model for the covariance matrix of the measurement error, which represents the uncertainty for spatial position of visual features from Kinect™ sensors. In order to derive the quantitative model of spatial uncertainty for visual features, we estimated the covariance matrix in the disparity image space using collected visual feature data. Further, we computed the spatial uncertainty information by applying the covariance matrix in the disparity image space and the calibrated sensor parameters to the proposed mathematical model. This spatial uncertainty model was verified by comparing the uncertainty ellipsoids for spatial covariance matrices and the distribution of scattered matching visual features. We expect that this spatial uncertainty model and its analyses will be useful in various Kinect™ sensor applications.
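The first-order covariance propagation Σ_xyz = J Σ_uvd Jᵀ between disparity-image space and Cartesian space can be sketched with a pinhole/disparity mapping; the parameters and noise levels below are assumed, not calibrated Kinect values:

```python
import numpy as np

# Nominal pinhole/disparity parameters (illustrative, not calibrated values).
f, b = 580.0, 0.075    # focal length [px], baseline [m]
cu, cv = 320.0, 240.0  # principal point [px]

def to_xyz(m):
    """Map (u, v, d) in disparity-image space to Cartesian (x, y, z)."""
    u, v, d = m
    z = f * b / d
    return np.array([(u - cu) * z / f, (v - cv) * z / f, z])

def propagate_cov(m, cov_uvd, eps=1e-4):
    """First-order propagation cov_xyz = J cov_uvd J^T, Jacobian J by
    central finite differences of the mapping function."""
    J = np.empty((3, 3))
    for i in range(3):
        h = np.zeros(3)
        h[i] = eps
        J[:, i] = (to_xyz(m + h) - to_xyz(m - h)) / (2 * eps)
    return J @ cov_uvd @ J.T

cov_uvd = np.diag([0.5**2, 0.5**2, 1.0**2])  # pixel/disparity noise (assumed)
near = propagate_cov(np.array([400.0, 250.0, 40.0]), cov_uvd)
far = propagate_cov(np.array([400.0, 250.0, 10.0]), cov_uvd)
print("depth std near/far [m]:", np.sqrt(near[2, 2]), np.sqrt(far[2, 2]))
```

Because depth scales as 1/d, the propagated depth uncertainty grows rapidly with range, which is what the paper's uncertainty ellipsoids visualize.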
Discussion of OECD LWR Uncertainty Analysis in Modelling Benchmark
International Nuclear Information System (INIS)
The demand for best estimate calculations in nuclear reactor design and safety evaluations has increased in recent years. Uncertainty quantification has been highlighted as part of the best estimate calculations. The modelling aspects of uncertainty and sensitivity analysis are to be further developed and validated on scientific grounds in support of their performance and application to multi-physics reactor simulations. The Organization for Economic Co-operation and Development (OECD) / Nuclear Energy Agency (NEA) Nuclear Science Committee (NSC) has endorsed the creation of an Expert Group on Uncertainty Analysis in Modelling (EGUAM). Within the framework of the activities of EGUAM/NSC, the OECD/NEA initiated the Benchmark for Uncertainty Analysis in Modelling for Design, Operation, and Safety Analysis of Light Water Reactors (OECD LWR UAM benchmark). The general objective of the benchmark is to propagate the predictive uncertainties of code results through complex coupled multi-physics and multi-scale simulations. The benchmark is divided into three phases, with Phase I highlighting uncertainty propagation in stand-alone neutronics calculations, while Phases II and III focus on uncertainty analysis of the reactor core and system, respectively. This paper discusses the progress made in the Phase I calculations, the specifications for Phase II, and the incoming challenges in defining the Phase III exercises. The challenges of applying uncertainty quantification to complex code systems, in particular to time-dependent coupled physics models, are the large computational burden and the use of non-linear models (expected due to the physics coupling). (authors)
Finite-difference time-domain modelling of through-the-Earth radio signal propagation
Ralchenko, M.; Svilans, M.; Samson, C.; Roper, M.
2015-12-01
This research seeks to extend the knowledge of how a very low frequency (VLF) through-the-Earth (TTE) radio signal behaves as it propagates underground, by calculating and visualizing the strength of the electric and magnetic fields for an arbitrary geology through numeric modelling. To achieve this objective, a new software tool has been developed using the finite-difference time-domain method. This technique is particularly well suited to visualizing the distribution of electromagnetic fields in an arbitrary geology. The frequency range of TTE radio (400-9000 Hz) and geometrical scales involved (1 m resolution for domains a few hundred metres in size) involves processing a grid composed of millions of cells for thousands of time steps, which is computationally expensive. Graphics processing unit acceleration was used to reduce execution time from days and weeks, to minutes and hours. Results from the new modelling tool were compared to three cases for which an analytic solution is known. Two more case studies were done featuring complex geologic environments relevant to TTE communications that cannot be solved analytically. There was good agreement between numeric and analytic results. Deviations were likely caused by numeric artifacts from the model boundaries; however, in a TTE application in field conditions, the uncertainty in the conductivity of the various geologic formations will greatly outweigh these small numeric errors.
Techniques Applied in the COSYMA Accident Consequence Uncertainty Analysis (invited paper)
International Nuclear Information System (INIS)
Uncertainty and sensitivity analysis studies for the program package COSYMA for assessing the radiological consequences of nuclear accidents have been performed to obtain a deeper insight into the propagation of parameter uncertainties through different submodules and to quantify their contribution to uncertainty and sensitivity in a final overall uncertainty analysis of the complete program system COSYMA. Some strategies are given for performing uncertainty analysis runs for submodules and/or the complete complex program system COSYMA and guidelines explain how to post-process and to reduce the bulk of uncertainty and sensitivity analysis results. (author)
International Nuclear Information System (INIS)
The paper concerns the physical principles behind the analytical techniques employing high-energy ion microbeams, with special attention to features that affect their use with microbeams. Particle-induced X-ray emission (PIXE) is discussed with respect to X-ray production, thick-target PIXE, a microbeam PIXE system, sensitivity, and microbeam PIXE applications. An explanation of nuclear reaction analysis (NRA) is given for NRA with charged particle detection, NRA with neutron detection and NRA with gamma detection. The essentials of Rutherford backscattering (RBS) are given, along with elastic recoil detection analysis, which has very close connections with RBS but was introduced much more recently. Finally a comparison of the microbeam's capability with those of its main competitors is presented. (UK)
Heat pulse propagation studies in TFTR
Energy Technology Data Exchange (ETDEWEB)
Fredrickson, E.D.; Callen, J.D.; Colchin, R.J.; Efthimion, P.C.; Hill, K.W.; Izzo, R.; Mikkelsen, D.R.; Monticello, D.A.; McGuire, K.; Bell, J.D.
1986-02-01
The time scales for sawtooth repetition and heat pulse propagation are much longer (tens of ms) in the large tokamak TFTR than in previous, smaller tokamaks. This extended time scale, coupled with more detailed diagnostics, has led us to revisit the analysis of heat pulse propagation as a method to determine the electron heat diffusivity, χ_e, in the plasma. A combination of analytic and computer solutions of the electron heat diffusion equation is used to clarify previous work and develop new methods for determining χ_e. Direct comparison of the predicted heat pulses with soft x-ray and ECE data indicates that the space-time evolution is diffusive. However, the χ_e determined from heat pulse propagation usually exceeds that determined from background plasma power balance considerations by a factor ranging from 2 to 10. Some hypotheses for resolving this discrepancy are discussed. 11 refs., 19 figs., 1 tab.
Applicability of Parametrized Form of Fully Dressed Quark Propagator
Institute of Scientific and Technical Information of China (English)
(No author listed)
2006-01-01
Based on an extensive study of the Dyson-Schwinger equations for a fully dressed quark propagator in the "rainbow" approximation with an effective gluon propagator, a parametrized fully dressed confining quark propagator is suggested in this paper. The parametrized quark propagator describes a confined quark propagating in a hadron; it is analytic everywhere in the complex p²-plane and has no Lehmann representation. The vector and scalar self-energy functions [1 - A_f(p²)] and [B_f(p²) - m_f], the dynamically running effective quark mass M_f(p²), and the structure of non-local as well as local quark vacuum condensates are predicted using the parametrized quark propagator. The results are compatible with other theoretical calculations.
Photon propagation in slowly varying electromagnetic fields
Karbstein, Felix
2016-01-01
We study the effective theory of soft photons in slowly varying electromagnetic background fields at one-loop order in QED. This is of relevance for the study of all-optical signatures of quantum vacuum nonlinearity in realistic electromagnetic background fields as provided by high-intensity lasers. The central result derived in this article is a new analytical expression for the photon polarization tensor in two linearly polarized counter-propagating pulsed Gaussian laser beams. As we treat ...
Space Propagation of Instabilities in Zakharov Equations
Metivier, Guy
2007-01-01
In this paper we study an initial boundary value problem for Zakharov's equations, describing the space propagation of a laser beam entering a plasma. We prove a strong instability result and prove that the mathematical problem is ill-posed in Sobolev spaces. We also show that it is well posed in spaces of analytic functions. Several consequences for the physical consistency of the model are discussed.
Gaussian Process Quantile Regression using Expectation Propagation
Boukouvalas, Alexis; Barillec, Remi; Cornford, Dan
2012-01-01
Direct quantile regression involves estimating a given quantile of a response variable as a function of input variables. We present a new framework for direct quantile regression where a Gaussian process model is learned, minimising the expected tilted loss function. The integration required in learning is not analytically tractable so to speed up the learning we employ the Expectation Propagation algorithm. We describe how this work relates to other quantile regression methods and apply the ...
Physical Uncertainty Bounds (PUB)
Energy Technology Data Exchange (ETDEWEB)
Vaughan, Diane Elizabeth [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Preston, Dean L. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2015-03-19
This paper introduces and motivates the need for a new methodology for determining upper bounds on the uncertainties in simulations of engineered systems due to limited fidelity in the composite continuum-level physics models needed to simulate the systems. We show that traditional uncertainty quantification methods provide, at best, a lower bound on this uncertainty. We propose to obtain bounds on the simulation uncertainties by first determining bounds on the physical quantities or processes relevant to system performance. By bounding these physics processes, as opposed to carrying out statistical analyses of the parameter sets of specific physics models or simply switching out the available physics models, one can obtain upper bounds on the uncertainties in simulated quantities of interest.
Nanoparticles: Uncertainty Risk Analysis
DEFF Research Database (Denmark)
Grieger, Khara Deanne; Hansen, Steffen Foss; Baun, Anders
2012-01-01
Scientific uncertainty plays a major role in assessing the potential environmental risks of nanoparticles. Moreover, there is uncertainty within fundamental data and information regarding the potential environmental and health risks of nanoparticles, hampering risk assessments based on standard...... approaches. To date, there have been a number of different approaches to assess uncertainty of environmental risks in general, and some have also been proposed in the case of nanoparticles and nanomaterials. In recent years, others have also proposed that broader assessments of uncertainty are also needed in...... order to handle the complex potential risks of nanoparticles, including more descriptive characterizations of uncertainty. Some of these approaches are presented and discussed herein, in which the potential strengths and limitations of these approaches are identified along with further challenges for...
Measurement uncertainty and probability
Willink, Robin
2013-01-01
A measurement result is incomplete without a statement of its 'uncertainty' or 'margin of error'. But what does this statement actually tell us? By examining the practical meaning of probability, this book discusses what is meant by a '95 percent interval of measurement uncertainty', and how such an interval can be calculated. The book argues that the concept of an unknown 'target value' is essential if probability is to be used as a tool for evaluating measurement uncertainty. It uses statistical concepts, such as a conditional confidence interval, to present 'extended' classical methods for evaluating measurement uncertainty. The use of the Monte Carlo principle for the simulation of experiments is described. Useful for researchers and graduate students, the book also discusses other philosophies relating to the evaluation of measurement uncertainty. It employs clear notation and language to avoid the confusion that exists in this controversial field of science.
Economic uncertainty and econophysics
Schinckus, Christophe
2009-10-01
The objective of this paper is to provide a methodological link between econophysics and economics. I will study a key notion of both fields: uncertainty and the ways of thinking about it developed by the two disciplines. After having presented the main economic theories of uncertainty (provided by Knight, Keynes and Hayek), I show how this notion is paradoxically excluded from the economic field. In economics, uncertainty is totally reduced by an a priori Gaussian framework, in contrast to econophysics, which does not use a priori models because it works directly on data. Uncertainty is then not shaped by a specific model, and is partially and temporally reduced as models improve. This way of thinking about uncertainty has echoes in the economic literature. By presenting econophysics as a Knightian method, and a complementary approach to a Hayekian framework, this paper shows that econophysics can be methodologically justified from an economic point of view.
Photovoltaic System Modeling. Uncertainty and Sensitivity Analyses
Energy Technology Data Exchange (ETDEWEB)
Hansen, Clifford W. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Martin, Curtis E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2015-08-01
We report an uncertainty and sensitivity analysis for modeling AC energy from photovoltaic systems. Output from a PV system is predicted by a sequence of models. We quantify uncertainty in the output of each model using empirical distributions of each model's residuals. We propagate uncertainty through the sequence of models by sampling these distributions to obtain an empirical distribution of a PV system's output. We consider models that: (1) translate measured global horizontal, direct and global diffuse irradiance to plane-of-array irradiance; (2) estimate effective irradiance; (3) predict cell temperature; (4) estimate DC voltage, current and power; (5) reduce DC power for losses due to inefficient maximum power point tracking or mismatch among modules; and (6) convert DC to AC power. Our analysis considers a notional PV system comprising an array of FirstSolar FS-387 modules and a 250 kW AC inverter; we use measured irradiance and weather at Albuquerque, NM. We found the uncertainty in PV system output to be relatively small, on the order of 1% for daily energy. We found uncertainty in the models for POA irradiance and effective irradiance to be the dominant contributors to uncertainty in predicted daily energy. Our analysis indicates that efforts to reduce the uncertainty in PV system output predictions may yield the greatest improvements by focusing on the POA and effective irradiance models.
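The residual-sampling scheme described in this abstract can be sketched in a few lines. The two-stage model chain, the residual pools and all numerical values below are hypothetical stand-ins for the paper's measured residuals and its full six-model sequence:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical empirical residual pools for two models in the chain
# (in the paper these come from comparing each model to measurements).
poa_residuals = rng.normal(0.0, 0.02, size=500)       # relative error, POA irradiance model
inverter_residuals = rng.normal(0.0, 0.005, size=500)  # relative error, DC-to-AC model

def predicted_energy(poa_err, inv_err, nominal_kwh=1000.0):
    # Toy model chain: each stage multiplies by (1 + sampled residual).
    return nominal_kwh * (1.0 + poa_err) * (1.0 + inv_err)

# Propagate by resampling each model's empirical residual distribution.
draws = predicted_energy(
    rng.choice(poa_residuals, size=10000),
    rng.choice(inverter_residuals, size=10000),
)
rel_sd = draws.std() / draws.mean()
print(f"relative uncertainty in daily energy: {rel_sd:.1%}")
```

Because the stages multiply, the POA residual pool (the wider one) dominates the spread of the output distribution, which is the qualitative conclusion the abstract reports.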
Assessing uncertainties in surface water security: An empirical multimodel approach
Rodrigues, Dulce B. B.; Gupta, Hoshin V.; Mendiondo, Eduardo M.; Oliveira, Paulo Tarso S.
2015-11-01
Various uncertainties are involved in the representation of processes that characterize interactions among societal needs, ecosystem functioning, and hydrological conditions. Here we develop an empirical uncertainty assessment of water security indicators that characterize scarcity and vulnerability, based on a multimodel and resampling framework. We consider several uncertainty sources including those related to (i) observed streamflow data; (ii) hydrological model structure; (iii) residual analysis; (iv) the method for defining Environmental Flow Requirement; (v) the definition of critical conditions for water provision; and (vi) the critical demand imposed by human activities. We estimate the overall hydrological model uncertainty by means of a residual bootstrap resampling approach, and by uncertainty propagation through different methodological arrangements applied to a 291 km² agricultural basin within the Cantareira water supply system in Brazil. Together, the two-component hydrograph residual analysis and the block bootstrap resampling approach result in a more accurate and precise estimate of the uncertainty (95% confidence intervals) in the simulated time series. We then compare the uncertainty estimates associated with water security indicators using a multimodel framework and the uncertainty estimates provided by each model uncertainty estimation approach. The range of values obtained for the water security indicators suggests that the models/methods are robust and perform well in a range of plausible situations. The method is general and can be easily extended, thereby forming the basis for meaningful support to end-users facing water resource challenges by enabling them to incorporate a viable uncertainty analysis into a robust decision-making process.
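The block-bootstrap step can be illustrated with a minimal sketch. The synthetic streamflow series, the noise level and the block length below are assumptions, not the study's data; resampling residuals in contiguous blocks preserves their autocorrelation, which is why the study prefers it to an ordinary bootstrap:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical simulated and "observed" daily streamflow (mm/day).
t = np.arange(365)
simulated = 2.0 + np.sin(2 * np.pi * t / 365)
observed = simulated + 0.3 * rng.standard_normal(365)
residuals = observed - simulated

def block_bootstrap(res, block_len=10, n_boot=2000, rng=rng):
    """Resample residuals in contiguous blocks to preserve autocorrelation."""
    n = len(res)
    starts = rng.integers(0, n - block_len, size=(n_boot, n // block_len + 1))
    return np.stack([
        np.concatenate([res[s:s + block_len] for s in row])[:n] for row in starts
    ])

boot = block_bootstrap(residuals)
# 95% uncertainty band around the simulated series.
lower = simulated + np.percentile(boot, 2.5, axis=0)
upper = simulated + np.percentile(boot, 97.5, axis=0)
coverage = np.mean((observed >= lower) & (observed <= upper))
print(f"empirical coverage of 95% band: {coverage:.2f}")
```

For well-behaved residuals the empirical coverage of the band should sit near the nominal 95%.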
Uncertainty and validation. Effect of model complexity on uncertainty estimates
International Nuclear Information System (INIS)
In the Model Complexity subgroup of BIOMOVS II, models of varying complexity have been applied to the problem of downward transport of radionuclides in soils. A scenario describing a case of surface contamination of a pasture soil was defined. Three different radionuclides with different environmental behavior and radioactive half-lives were considered: Cs-137, Sr-90 and I-129. The intention was to give a detailed specification of the parameters required by different kinds of model, together with reasonable values for the parameter uncertainty. A total of seven modelling teams participated in the study using 13 different models. Four of the modelling groups performed uncertainty calculations using nine different modelling approaches. The models used range in complexity from analytical solutions of a 2-box model using annual average data to numerical models coupling hydrology and transport using data varying on a daily basis. The complex models needed to consider all aspects of radionuclide transport in a soil with a variable hydrology are often impractical to use in safety assessments. Instead simpler models, often box models, are preferred. The comparison of predictions made with the complex models and the simple models for this scenario shows that the predictions in many cases are very similar, e.g. in the predictions of the evolution of the root zone concentration. However, in other cases differences of many orders of magnitude can appear. One example is the prediction of the flux to the groundwater of radionuclides being transported through the soil column. Some issues that have come into focus in this study: There are large differences in the predicted soil hydrology and as a consequence also in the radionuclide transport, which suggests that there are large uncertainties in the calculation of effective precipitation and evapotranspiration. The approach used for modelling the water transport in the root zone has an impact on the predictions of the decline in root
Methodological basis for analysis and accounting of NPP probabilistic safety analysis uncertainties
International Nuclear Information System (INIS)
The paper presents a classification of NPP probabilistic safety analysis uncertainties and defines their main sources. It sets forth methods to perform statistical and analytical analysis of the different uncertainty classes, and proposes a sequence of efforts related to the analysis and accounting of uncertainties in making decisions on NPP safety.
Expectation Particle Belief Propagation
Lienart, Thibaut; Teh, Yee Whye; Doucet, Arnaud
2015-01-01
We propose an original particle-based implementation of the Loopy Belief Propagation (LBP) algorithm for pairwise Markov Random Fields (MRF) on a continuous state space. The algorithm adaptively constructs efficient proposal distributions approximating the local beliefs at each node of the MRF. This is achieved by considering proposal distributions in the exponential family whose parameters are updated iteratively in an Expectation Propagation (EP) framework. The proposed particle scheme provid...
Kroc, Lukas; Sabharwal, Ashish; Selman, Bart
2012-01-01
Survey propagation (SP) is an exciting new technique that has been remarkably successful at solving very large hard combinatorial problems, such as determining the satisfiability of Boolean formulas. In a promising attempt at understanding the success of SP, it was recently shown that SP can be viewed as a form of belief propagation, computing marginal probabilities over certain objects called covers of a formula. This explanation was, however, shortly dismissed by experiments suggesting that...
Javed, A.; Kamphues, E.; Hartuc, T.; Pecnik, R.; Van Buijtenen, J.P.
2015-01-01
The compressor impellers for mass-produced turbochargers are generally die-cast and machined to their final configuration. Manufacturing uncertainties are inherently introduced as stochastic dimensional deviations in the impeller geometry. These deviations eventually propagate into the compressor
Aspects of uncertainty analysis in accident consequence modeling
International Nuclear Information System (INIS)
Mathematical models are frequently used to determine probable dose to man from an accidental release of radionuclides by a nuclear facility. With increased emphasis on the accuracy of these models, the incorporation of uncertainty analysis has become one of the most crucial and sensitive components in evaluating the significance of model predictions. In the present paper, we address three aspects of uncertainty in models used to assess the radiological impact to humans: uncertainties resulting from the natural variability in human biological parameters; the propagation of parameter variability by mathematical models; and comparison of model predictions to observational data
Deconvolution of variability and uncertainty in the Cassini safety analysis
International Nuclear Information System (INIS)
The standard method for propagation of uncertainty in a risk analysis requires rerunning the risk calculation numerous times with model parameters chosen from their uncertainty distributions. This was not practical for the Cassini nuclear safety analysis, due to the computationally intense nature of the risk calculation. A less computationally intense procedure was developed which requires only two calculations for each accident case. The first of these is the standard 'best-estimate' calculation. In the second calculation, variables and parameters change simultaneously. The mathematical technique of deconvolution is then used to separate out an uncertainty multiplier distribution, which can be used to calculate distribution functions at various levels of confidence
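The uncertainty-multiplier idea can be illustrated loosely as follows. This sketch draws the combined multiplier directly from assumed lognormal parameter uncertainties rather than deconvolving it from the two calculations the abstract describes; the best-estimate value and the spreads are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

best_estimate = 1.0e-6   # hypothetical best-estimate risk value

# Hypothetical lognormal uncertainty multipliers for a few parameters;
# the combined multiplier is their product, i.e. additive in log space.
log_sigmas = [0.3, 0.5, 0.2]
log_mult = sum(rng.normal(0.0, s, size=100_000) for s in log_sigmas)
multiplier = np.exp(log_mult)

# Risk at various confidence levels = best estimate x multiplier percentile.
for q in (50, 95, 99):
    print(f"{q}th percentile: {best_estimate * np.percentile(multiplier, q):.2e}")
```

The attraction of this structure is the one the abstract points to: once the multiplier distribution is known, confidence-level results cost one multiplication instead of rerunning the expensive risk calculation.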
Accurate orbit propagation with planetary close encounters
Baù, Giulio; Milani Comparetti, Andrea; Guerra, Francesca
2015-08-01
We tackle the problem of accurately propagating the motion of those small bodies that undergo close approaches with a planet. The literature is lacking on this topic and the reliability of the numerical results is not sufficiently discussed. The high-frequency components of the perturbation generated by a close encounter make the propagation particularly challenging both from the point of view of the dynamical stability of the formulation and the numerical stability of the integrator. In our approach a fixed step-size and order multistep integrator is combined with a regularized formulation of the perturbed two-body problem. When the propagated object enters the region of influence of a celestial body, the latter becomes the new primary body of attraction. Moreover, the formulation and the step-size will also be changed if necessary. We present: 1) the restarter procedure applied to the multistep integrator whenever the primary body is changed; 2) new analytical formulae for setting the step-size (given the order of the multistep, formulation and initial osculating orbit) in order to control the accumulation of the local truncation error and guarantee the numerical stability during the propagation; 3) a new definition of the region of influence in the phase space. We test the propagator with some real asteroids subject to the gravitational attraction of the planets, the Yarkovsky and relativistic perturbations. Our goal is to show that the proposed approach improves the performance of both the propagator implemented in the OrbFit software package (which is currently used by the NEODyS service) and of the propagator represented by a variable step-size and order multistep method combined with Cowell's formulation (i.e. direct integration of position and velocity in either the physical or a fictitious time).
Indian Academy of Sciences (India)
Rituparna Chutia; Supahi Mahanta; D Datta
2014-04-01
The parameters associated with an environmental dispersion model may include different kinds of variability, imprecision and uncertainty. Often, available information is interpreted in a probabilistic sense. Probability theory is a well-established theory for measuring such variability. However, not all available information, data or model parameters affected by variability, imprecision and uncertainty can be handled by traditional probability theory. Uncertainty or imprecision may occur due to incomplete information or data, measurement error, or data obtained from expert judgement or subjective interpretation of available data or information. Thus model parameters may be affected by subjective uncertainty, which traditional probability theory is inappropriate to represent. Possibility theory is used as a tool to describe parameters with insufficient knowledge. Based on the polynomial chaos expansion, the stochastic response surface method is utilized in this article for the uncertainty propagation of an atmospheric dispersion model under consideration of both probabilistic and possibilistic information. The proposed method is demonstrated through a hypothetical case study of atmospheric dispersion.
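The polynomial chaos step of a stochastic response surface can be sketched for a one-dimensional input. The `dispersion_model` function below is a hypothetical stand-in for an atmospheric dispersion code; the surrogate is a regression onto probabilists' Hermite polynomials of a standard normal input:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander

rng = np.random.default_rng(7)

def dispersion_model(xi):
    # Hypothetical stand-in for a dispersion code: concentration as a
    # nonlinear function of one standardized uncertain input.
    return np.exp(0.3 * xi) + 0.1 * xi**2

# Stochastic response surface: regress model output on probabilists'
# Hermite polynomials of the input (the polynomial chaos basis).
xi_train = rng.standard_normal(200)
order = 4
A = hermevander(xi_train, order)          # basis matrix He_0..He_4
coeffs, *_ = np.linalg.lstsq(A, dispersion_model(xi_train), rcond=None)

# Propagate uncertainty through the cheap surrogate instead of the model.
xi_new = rng.standard_normal(100_000)
surrogate = hermevander(xi_new, order) @ coeffs
direct = dispersion_model(xi_new)
print(f"surrogate mean {surrogate.mean():.3f} vs direct mean {direct.mean():.3f}")
```

After 200 model runs build the surrogate, the 100,000-sample propagation costs only polynomial evaluations, which is the practical point of the method.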
Uncertainty in wind climate parameters and their influence on wind turbine fatigue loads
DEFF Research Database (Denmark)
Toft, Henrik Stensgaard; Svenningsen, Lasse; Sørensen, John Dalsgaard;
2016-01-01
Highlights • Probabilistic framework for reliability assessment of site specific wind turbines. • Uncertainty in wind climate parameters propagated to structural loads directly. • Sensitivity analysis to estimate wind climate parameters influence on reliability.
Quantum-Gravity phenomenology and high energy particle propagation
International Nuclear Information System (INIS)
Quantum-gravity effects may introduce relevant consequences for the propagation and interaction of high energy cosmic rays particles. Assuming the space-time foamy structure results in an intrinsic uncertainty of energy and momentum of particles, we show how low energy (under GZK) observations can provide strong constraints on the role of the fluctuating space-time structure
Quantum-Gravity phenomenology and high energy particle propagation
Aloisio, R.; Blasi, P. (INAF Arcetri); Galante, A. (Univ. L'Aquila); Ghia, P. L. (CNR and INFN Torino); Grillo, A. F.
2004-01-01
Quantum-gravity effects may introduce relevant consequences for the propagation and interaction of high energy cosmic rays particles. Assuming the space-time foamy structure results in an intrinsic uncertainty of energy and momentum of particles, we show how low energy (under GZK) observations can provide strong constraints on the role of the fluctuating space-time structure.
Nuclear data uncertainties for local power densities in the Martin-Hoogenboom benchmark
International Nuclear Information System (INIS)
The recently developed method of fast Total Monte Carlo to propagate nuclear data uncertainties was applied to the Martin-Hoogenboom benchmark. This benchmark prescribes that one calculate local pin powers (of a light-water-cooled reactor) with a statistical uncertainty lower than 1% everywhere. Here we report, for the first time, an estimate of the nuclear data uncertainties for these local pin powers. For each of the more than 6 million local power tallies, the uncertainty due to nuclear data uncertainties was calculated, based on random variation of data for 235U, 238U, 239Pu and H in H2O thermal scattering. In the center of the core region, the nuclear data uncertainty is 0.9%. Towards the edges of the core, this uncertainty increases to roughly 3%. The nuclear data uncertainties have been shown to be larger than the statistical uncertainties that the benchmark prescribes
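The Total Monte Carlo idea reduces to rerunning the calculation under many random realizations of the nuclear data and reading the spread off the results. In this sketch, `pin_power` is a hypothetical stand-in for the full transport calculation, with made-up sensitivity coefficients:

```python
import numpy as np

rng = np.random.default_rng(3)

def pin_power(data_perturbation):
    # Hypothetical stand-in for a transport calculation: a local pin
    # power responding (mildly nonlinearly) to perturbed nuclear data.
    return 1.0 + 0.009 * data_perturbation + 0.002 * data_perturbation**2

# Total Monte Carlo: one run per random nuclear-data realization
# (standardized here), then the spread of the results is the
# nuclear-data uncertainty of the tally.
perturbations = rng.standard_normal(1000)
powers = pin_power(perturbations)
nd_uncertainty = powers.std() / powers.mean()
print(f"nuclear-data uncertainty in pin power: {nd_uncertainty:.1%}")
```

In the real benchmark each "perturbation" is an entire randomly varied data file, and each evaluation is a full Monte Carlo transport run, which is why the fast variant of the method matters.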
Uncertainties in Hauser-Feshbach Neutron Capture Calculations for Astrophysics
International Nuclear Information System (INIS)
The calculation of neutron capture cross sections in a statistical Hauser-Feshbach method has proved successful in numerous astrophysical applications. Of increasing interest is the uncertainty associated with the calculated Maxwellian averaged cross sections (MACS). Aspects of a statistical model that introduce a large amount of uncertainty are the level density model, the γ-ray strength function parameters, and the placement of E_low, the cut-off energy below which the Hauser-Feshbach method is not applicable. Utilizing the Los Alamos statistical model code CoH3, we investigate the appropriate treatment of these sources of uncertainty via systematics of nuclei in a local region for which experimental or evaluated data are available. In order to show the impact of uncertainty analysis on nuclear data for astrophysical applications, these new uncertainties will be propagated through the nucleosynthesis code NuGrid
A Web tool for calculating k0-NAA uncertainties
International Nuclear Information System (INIS)
The calculation of uncertainty budgets is becoming a standard step in reporting analytical results. This gives rise to the need for simple, easily accessed tools to calculate uncertainty budgets. An example of such a tool is the Excel spreadsheet approach of Robouch et al. An internet application which calculates uncertainty budgets for k0-NAA is presented. The Web application has built-in 'literature' values for standard isotopes and accepts as inputs fixed information, such as the thermal-to-epithermal neutron flux ratio, as well as experiment-specific data, such as the mass of the sample. The application calculates and displays intermediate uncertainties as well as the final combined uncertainty of the element concentration in the sample. The interface only requires access to a standard browser and is thus easily accessible to researchers and laboratories. This may facilitate and standardize the calculation of k0-NAA uncertainty budgets. (author)
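The core of such a budget calculation is the quadrature combination of relative component uncertainties. The component names and values below are hypothetical illustrations, not the tool's built-in data; the quadrature sum is valid when the measurement model is purely multiplicative:

```python
import math

# Hypothetical relative standard uncertainties (as fractions) of inputs
# to a k0-NAA element concentration.
components = {
    "counting_statistics": 0.015,
    "flux_ratio": 0.020,
    "sample_mass": 0.001,
    "detector_efficiency": 0.025,
    "k0_literature_value": 0.012,
}

# Combined standard uncertainty: quadrature sum of the relative
# components; expanded uncertainty uses a coverage factor k = 2.
u_c = math.sqrt(sum(u**2 for u in components.values()))
U = 2 * u_c
print(f"combined: {u_c:.1%}, expanded (k=2): {U:.1%}")
```

Displaying each `u**2` term alongside the total is what lets such a tool show which intermediate uncertainty dominates the budget.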
Uncertainty in biodiversity science, policy and management: a conceptual overview
Directory of Open Access Journals (Sweden)
Yrjö Haila
2014-10-01
The protection of biodiversity is a complex societal, political and ultimately practical imperative of current global society. The imperative builds upon scientific knowledge on human dependence on the life-support systems of the Earth. This paper aims at introducing the main types of uncertainty inherent in biodiversity science, policy and management, as an introduction to a companion paper summarizing practical experiences of scientists and scholars (Haila et al. 2014). Uncertainty is a cluster concept: the actual nature of uncertainty is inherently context-bound. We use semantic space as a conceptual device to identify key dimensions of uncertainty in the context of biodiversity protection; these relate to (i) data; (ii) proxies; (iii) concepts; (iv) policy and management; and (v) normative goals. Semantic space offers an analytic perspective for drawing critical distinctions between types of uncertainty, identifying fruitful resonances that help to cope with the uncertainties, and building up collaboration between different specialists to support mutual social learning.
International Nuclear Information System (INIS)
The paper covered the following topics: application of uncertainty evaluation methods in Germany; conclusions of the 'Uncertainty Methods Study'; a short overview of the international situation in licensing; further development support at GRS and future co-operation with FZK. The following conclusions were drawn from the performed uncertainty analyses: - determining uncertain input parameters imposes high requirements: selection of suitable experimental and analytical information to determine ranges and probability distributions, and transfer of measured data into these ranges and distributions; - determination of ranges of calculation results is recommended for codes developed at GRS and important accident scenarios; - uncertainty importance ranking guides led to improved knowledge and effective code improvements. Recommendations were also made with respect to the Uncertainty Methods Study (UMS): - collection, qualification, review and dissemination of uncertainty data should be made more systematic; - code manuals should include relevant uncertainty information; - more effort should be made to transform data measured in experiments and compared with post-test calculations into model parameters with uncertainties; - uncertainty analysis should be performed if useful conclusions are to be obtained from best-estimate codes; - new generations of codes should provide 'internal assessment of uncertainty'; - information gained in the UMS should be used to inform decisions on the conduct of uncertainty analyses, for example in the light of licensing requirements
DEFF Research Database (Denmark)
Christensen, Hanne Bjerre; Poulsen, Mette Erecius; Pedersen, Mikael
2003-01-01
The estimation of uncertainty of an analytical result has become important in analytical chemistry. It is especially difficult to determine uncertainties for multiresidue methods, e.g. for pesticides in fruit and vegetables, as the varieties of pesticide/commodity combinations are many....... In the present study, recommendations from the International Organisation for Standardisation's (ISO) Guide to the Expression of Uncertainty and the EURACHEM/CITAC guide Quantifying Uncertainty in Analytical Measurements were followed to estimate the expanded uncertainties for 153 pesticides in fruit...
Assessment of errors and uncertainty patterns in GIA modeling
DEFF Research Database (Denmark)
Barletta, Valentina Roberta; Spada, G.
2012-01-01
time-evolving shorelines and paleo coastlines. In this study we quantify these uncertainties and their propagation in GIA response using a Monte Carlo approach to obtain spatio-temporal patterns of GIA errors. A direct application is the error estimates in ice mass balance in Antarctica and Greenland...
Uncertainty, rationality, and agency
Hoek, Wiebe van der
2006-01-01
Goes across 'classical' borderlines of disciplines. Unifies logic, game theory, and epistemics and studies them in an agent setting. Combines classical and novel approaches to uncertainty, rationality, and agency.
International Nuclear Information System (INIS)
This report presents an oil project valuation under uncertainty by means of two well-known financial techniques: the Capital Asset Pricing Model (CAPM) and the Black-Scholes option pricing formula. CAPM gives a linear positive relationship between expected rate of return and risk, but does not take into consideration the aspect of flexibility, which is crucial for an irreversible investment such as an oil project. Introducing investment decision flexibility by using real options can increase the oil project value substantially. Some simple tests for the importance of stock market uncertainty for oil investments are performed. Uncertainty in stock returns is correlated with aggregate product market uncertainty according to Pindyck (1991). The results of the tests are not satisfactory due to the short data series, but introducing two additional explanatory variables, the interest rate and Gross Domestic Product, improves the results. 36 refs., 18 figs., 6 tabs
Evaluating prediction uncertainty
Energy Technology Data Exchange (ETDEWEB)
McKay, M.D. [Los Alamos National Lab., NM (United States)
1995-03-01
The probability distribution of a model prediction is presented as a proper basis for evaluating the uncertainty in a model prediction that arises from uncertainty in input values. Determination of important model inputs and subsets of inputs is made through comparison of the prediction distribution with conditional prediction probability distributions. Replicated Latin hypercube sampling and variance ratios are used in estimation of the distributions and in construction of importance indicators. The assumption of a linear relation between model output and inputs is not necessary for the indicators to be effective. A sequential methodology which includes an independent validation step is applied in two analysis applications to select subsets of input variables which are the dominant causes of uncertainty in the model predictions. Comparison with results from methods which assume linearity shows how those methods may fail. Finally, suggestions for treating structural uncertainty for submodels are presented.
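The variance-ratio importance indicator described above, Var(E[Y|Xi]) / Var(Y), can be sketched with a simple binning estimator. This is a stand-in for the paper's replicated Latin hypercube construction, and the three-input model is hypothetical, built so that the first input dominates:

```python
import numpy as np

rng = np.random.default_rng(11)

def model(x1, x2, x3):
    # Hypothetical model: x1 dominates, x2 is weakly nonlinear, x3 is
    # nearly inert. Note the x2 term is nonlinear on purpose: the
    # indicator does not assume a linear input-output relation.
    return 5.0 * x1 + 1.0 * x2**2 + 0.1 * x3

n = 20_000
xs = rng.uniform(-1, 1, size=(3, n))
y = model(*xs)

def importance(x, y, bins=20):
    """Estimate Var(E[Y|X]) / Var(Y) by binning X into quantile bins."""
    edges = np.quantile(x, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.digitize(x, edges[1:-1]), 0, bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(bins)])
    counts = np.bincount(idx, minlength=bins)
    return np.average((cond_means - y.mean())**2, weights=counts) / y.var()

ratios = [importance(x, y) for x in xs]
print("variance ratios:", [f"{r:.2f}" for r in ratios])
```

Inputs whose ratio is near 1 are the dominant causes of prediction uncertainty; inputs near 0 can be fixed at nominal values with little effect on the prediction distribution.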
Introduction to uncertainty quantification
Sullivan, T J
2015-01-01
Uncertainty quantification is a topic of increasing practical importance at the intersection of applied mathematics, statistics, computation, and numerous application areas in science and engineering. This text provides a framework in which the main objectives of the field of uncertainty quantification are defined, and an overview of the range of mathematical methods by which they can be achieved. Complete with exercises throughout, the book will equip readers with both theoretical understanding and practical experience of the key mathematical and algorithmic tools underlying the treatment of uncertainty in modern applied mathematics. Students and readers alike are encouraged to apply the mathematical methods discussed in this book to their own favourite problems to understand their strengths and weaknesses, also making the text suitable as a self-study. This text is designed as an introduction to uncertainty quantification for senior undergraduate and graduate students with a mathematical or statistical back...
Evaluating prediction uncertainty
International Nuclear Information System (INIS)
The probability distribution of a model prediction is presented as a proper basis for evaluating the uncertainty in a model prediction that arises from uncertainty in input values. Determination of important model inputs and subsets of inputs is made through comparison of the prediction distribution with conditional prediction probability distributions. Replicated Latin hypercube sampling and variance ratios are used in estimation of the distributions and in construction of importance indicators. The assumption of a linear relation between model output and inputs is not necessary for the indicators to be effective. A sequential methodology which includes an independent validation step is applied in two analysis applications to select subsets of input variables which are the dominant causes of uncertainty in the model predictions. Comparison with results from methods which assume linearity shows how those methods may fail. Finally, suggestions for treating structural uncertainty for submodels are presented
Decision making under uncertainty
International Nuclear Information System (INIS)
The theory of evidence and the theory of possibility are considered by some analysts as potential models for uncertainty. This paper discusses two issues: how formal probability theory has been relaxed to develop these uncertainty models; and the degree to which these models can be applied to risk assessment. The scope of the second issue is limited to an investigation of their compatibility for combining various pieces of evidence, which is an important problem in PRA
Uncertainty in Environmental Economics
Robert S. Pindyck
2006-01-01
In a world of certainty, the design of environmental policy is relatively straightforward, and boils down to maximizing the present value of the flow of social benefits minus costs. But the real world is one of considerable uncertainty -- over the physical and ecological impact of pollution, over the economic costs and benefits of reducing it, and over the discount rates that should be used to compute present values. The implications of uncertainty are complicated by the fact that most enviro...
Uncertainty calculations made easier
Energy Technology Data Exchange (ETDEWEB)
Hogenbirk, A.
1994-07-01
The results are presented of a neutron cross section sensitivity/uncertainty analysis performed in a complicated 2D model of the NET shielding blanket design inside the ITER torus design, surrounded by the cryostat/biological shield as planned for ITER. The calculations were performed with a code system developed at ECN Petten, with which sensitivity/uncertainty calculations become relatively simple. In order to check the deterministic neutron transport calculations (performed with DORT), calculations were also performed with the Monte Carlo code MCNP. Care was taken to model the 2.0 cm wide gaps between two blanket segments, as the neutron flux behind the vacuum vessel is largely determined by neutrons streaming through these gaps. The resulting neutron flux spectra are in excellent agreement up to the end of the cryostat. It is noted that at this position the attenuation of the neutron flux is about 11 orders of magnitude. The uncertainty in the energy integrated flux at the beginning of the vacuum vessel and at the beginning of the cryostat was determined in the calculations. The uncertainty appears to be strongly dependent on the exact geometry: if the gaps are filled with stainless steel, the neutron spectrum changes strongly, which results in an uncertainty of 70% in the energy integrated flux at the beginning of the cryostat in the no-gap-geometry, compared to an uncertainty of only 5% in the gap-geometry. Therefore, it is essential to take into account the exact geometry in sensitivity/uncertainty calculations. Furthermore, this study shows that an improvement of the covariance data is urgently needed in order to obtain reliable estimates of the uncertainties in response parameters in neutron transport calculations. (orig./GL).
Uncertainty calculations made easier
International Nuclear Information System (INIS)
The results are presented of a neutron cross section sensitivity/uncertainty analysis performed in a complicated 2D model of the NET shielding blanket design inside the ITER torus design, surrounded by the cryostat/biological shield as planned for ITER. The calculations were performed with a code system developed at ECN Petten, with which sensitivity/uncertainty calculations become relatively simple. In order to check the deterministic neutron transport calculations (performed with DORT), calculations were also performed with the Monte Carlo code MCNP. Care was taken to model the 2.0 cm wide gaps between two blanket segments, as the neutron flux behind the vacuum vessel is largely determined by neutrons streaming through these gaps. The resulting neutron flux spectra are in excellent agreement up to the end of the cryostat. It is noted that at this position the attenuation of the neutron flux is about 11 orders of magnitude. The uncertainty in the energy integrated flux at the beginning of the vacuum vessel and at the beginning of the cryostat was determined in the calculations. The uncertainty appears to be strongly dependent on the exact geometry: if the gaps are filled with stainless steel, the neutron spectrum changes strongly, which results in an uncertainty of 70% in the energy integrated flux at the beginning of the cryostat in the no-gap-geometry, compared to an uncertainty of only 5% in the gap-geometry. Therefore, it is essential to take into account the exact geometry in sensitivity/uncertainty calculations. Furthermore, this study shows that an improvement of the covariance data is urgently needed in order to obtain reliable estimates of the uncertainties in response parameters in neutron transport calculations. (orig./GL)
Conundrums with uncertainty factors.
Cooke, Roger
2010-03-01
The practice of uncertainty factors as applied to noncancer endpoints in the IRIS database harkens back to traditional safety factors. In the era before risk quantification, these were used to build in a "margin of safety." As risk quantification takes hold, the safety factor methods yield to quantitative risk calculations to guarantee safety. Many authors believe that uncertainty factors can be given a probabilistic interpretation as ratios of response rates, and that the reference values computed according to the IRIS methodology can thus be converted to random variables whose distributions can be computed with Monte Carlo methods, based on the distributions of the uncertainty factors. Recent proposals from the National Research Council echo this view. Based on probabilistic arguments, several authors claim that the current practice of uncertainty factors is overprotective. When interpreted probabilistically, uncertainty factors entail very strong assumptions on the underlying response rates. For example, the factor for extrapolating from animal to human is the same whether the dosage is chronic or subchronic. Together with independence assumptions, these assumptions entail that the covariance matrix of the logged response rates is singular. In other words, the accumulated assumptions entail a log-linear dependence between the response rates. This in turn means that any uncertainty analysis based on these assumptions is ill-conditioned; it effectively computes uncertainty conditional on a set of zero probability. The practice of uncertainty factors is due for a thorough review. Two directions are briefly sketched, one based on standard regression models, and one based on nonparametric continuous Bayesian belief nets. PMID:20030767
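The singularity claim can be reproduced directly: if the animal-to-human factor is assumed identical for chronic and subchronic endpoints, the logged response rates are linearly dependent, so their covariance matrix has a zero eigenvalue. A sketch with illustrative distributions (the spreads are hypothetical, chosen only to exhibit the structure):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
# Independent logged quantities (illustrative lognormal spreads):
a_chronic = rng.normal(0.0, 1.0, n)   # log animal chronic response
a_sub     = rng.normal(0.5, 0.8, n)   # log animal subchronic response
ah        = rng.normal(1.0, 0.5, n)   # log animal-to-human factor,
                                      # assumed the SAME for both endpoints
# Implied human responses under the uncertainty-factor assumptions:
h_chronic = a_chronic + ah
h_sub     = a_sub + ah

X = np.vstack([a_chronic, a_sub, h_chronic, h_sub])
C = np.cov(X)
eigvals = np.linalg.eigvalsh(C)       # ascending order
# h_chronic - a_chronic - (h_sub - a_sub) = 0 exactly, so C is singular:
smallest = eigvals[0]
```

Any Monte Carlo analysis built on these assumptions therefore conditions on an event of probability zero, which is the ill-conditioning the abstract refers to.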
Propagators of resonances and rescatterings of the decay products
Anisovich, A V; Matveev, M A; Sarantsev, A V; Semenova, A N; Nyiri, J
2016-01-01
Hadronic resonance propagators which take into account the analytical properties of decay processes are built in terms of the dispersion relation technique. Such propagators can describe multi-component systems, for example, those when quark degrees of freedom create a resonance state, and decay products correct the corresponding pole by adding hadronic deuteron-like components. Meson and baryon states are considered, examples of particles with different spins are presented.
Propagation characteristics of electromagnetic waves along a dense plasma filament
Energy Technology Data Exchange (ETDEWEB)
Nowakowska, H.; Zakrzewski, Z. [Institute of Fluid-Flow Machinery, Polish Academy of Sciences, Gdansk (Poland); Moisan, M. [Departement de Physique, Universite de Montreal, Montreal, PQ (Canada)
2001-05-21
The characteristics of electromagnetic waves propagating along dense plasma filaments, as encountered in atmospheric pressure discharges, are examined in the microwave frequency range; they turn out to be surface waves. Results of numerical calculations of the dependence of the phase and attenuation coefficients on the plasma parameters are presented. In the limit of large electron densities, this guided wave is akin to a Sommerfeld wave and the propagation can be described in an analytical form. (author)
Tsunami generation by ocean floor rupture front propagation: Hamiltonian description
Directory of Open Access Journals (Sweden)
V. I. Pavlov
2009-02-01
Full Text Available The Hamiltonian method is applied to the problem of tsunami generation caused by a propagating rupture front and deformation of the ocean floor. The method establishes an alternative framework for analyzing the tsunami generation process and produces analytical expressions for the power and directivity of tsunami radiation (in the far field) for two illustrative cases, with constant and gradually varying speeds of rupture front propagation.
Communicating spatial uncertainty to non-experts using R
Luzzi, Damiano; Sawicka, Kasia; Heuvelink, Gerard; de Bruin, Sytze
2016-04-01
Effective visualisation methods are important for the efficient use of uncertainty information for various groups of users. Uncertainty propagation analysis is often used with spatial environmental models to quantify the uncertainty within the information. A challenge arises when trying to effectively communicate the uncertainty information to non-experts (not statisticians) in a wide range of cases. Due to the growing popularity and applicability of the open source programming language R, we undertook a project to develop an R package that facilitates uncertainty propagation analysis in spatial environmental modelling. The package has implemented Monte Carlo algorithms for uncertainty propagation, the output of which is represented by an ensemble of model outputs (i.e. a sample from a probability distribution). Numerous visualisation methods exist that aim to present such spatial uncertainty information statically, dynamically and interactively. To provide the most universal visualisation tools for non-experts, we conducted a survey on a group of 20 university students and assessed the effectiveness of selected static and interactive methods for visualising uncertainty in spatial variables such as DEM and land cover. The static methods included adjacent maps and glyphs for continuous variables. Both allow for displaying maps with information about the ensemble mean, variance/standard deviation and prediction intervals. Adjacent maps were also used for categorical data, displaying maps of the most probable class, as well as its associated probability. The interactive methods included a graphical user interface, which in addition to displaying the previously mentioned variables also allowed for comparison of joint uncertainties at multiple locations. The survey indicated that users could understand the basics of the uncertainty information displayed in the static maps, with the interactive interface allowing for more in-depth information. Subsequently, the R
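The per-cell summaries behind the static adjacent maps (ensemble mean, standard deviation, prediction interval) are computed from the Monte Carlo ensemble. The actual package is written in R; the sketch below is a language-neutral numpy equivalent with a purely illustrative grid, error model and values:

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy DEM uncertainty: a 50x50 elevation grid with spatially
# uncorrelated Gaussian error (the real analysis would typically use
# spatially autocorrelated error fields; this is a minimal sketch).
nrow, ncol, nsim = 50, 50, 200
dem_mean = rng.uniform(100, 200, (nrow, ncol))   # hypothetical DEM, m
dem_sd = 5.0                                     # hypothetical error sd, m
ensemble = dem_mean + rng.normal(0, dem_sd, (nsim, nrow, ncol))

# Per-cell summaries used for the "adjacent maps":
mean_map = ensemble.mean(axis=0)
sd_map = ensemble.std(axis=0, ddof=1)
lo_map, hi_map = np.percentile(ensemble, [2.5, 97.5], axis=0)
```

Each summary array can then be rendered side by side, which is exactly the adjacent-maps layout evaluated in the survey.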
Measurement uncertainty in Total Reflection X-ray Fluorescence
International Nuclear Information System (INIS)
Total Reflection X-ray Fluorescence (TXRF) spectrometry is a multi-elemental technique using micro-volumes of sample. This work assessed the components contributing to the combined uncertainty budget associated with TXRF measurements using Cu and Fe concentrations in different spiked and natural water samples as an example. The results showed that an uncertainty estimation based solely on the count statistics of the analyte is not a realistic estimation of the overall uncertainty, since the depositional repeatability and the relative sensitivity between the analyte and the internal standard are important contributions to the uncertainty budget. The uncertainty on the instrumental repeatability and sensitivity factor could be estimated and, as such, potentially implemented relatively straightforwardly in the TXRF instrument software. However, the depositional repeatability varied significantly from sample to sample and between elemental ratios, and the controlling factors are not well understood. In the absence of a theoretical prediction of the depositional repeatability, the uncertainty budget can be based on repeat measurements using different reflectors. A simple approach to estimate the uncertainty was presented. The measurement procedure implemented and the uncertainty estimation processes developed were validated from the agreement with results obtained by inductively coupled plasma — optical emission spectrometry (ICP-OES) and/or reference/calculated values. - Highlights: • The uncertainty of TXRF cannot be realistically described by the counting statistics. • The depositional repeatability is an important contribution to the uncertainty. • Total combined uncertainties for Fe and Cu in waste/mine water samples were 4–8%. • Obtained concentrations agree within uncertainty with reference values
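Combining the components of such a budget follows the usual quadrature rule, u_c = sqrt(Σ u_i²). A sketch with hypothetical component values (not the paper's numbers), showing how a dominant depositional-repeatability term pulls the combined uncertainty well above the counting statistics alone:

```python
import math

# Illustrative relative uncertainty components (not the paper's values):
u_counting = 0.010   # counting statistics of the analyte
u_deposit  = 0.035   # depositional repeatability (dominant here)
u_sens     = 0.015   # relative sensitivity analyte / internal standard

# Combined standard uncertainty (uncorrelated components, in quadrature):
u_combined = math.sqrt(u_counting**2 + u_deposit**2 + u_sens**2)
# Expanded uncertainty with coverage factor k = 2 (~95 % level):
U = 2 * u_combined
```

With these illustrative inputs the combined relative uncertainty is about 4 %, and the counting-statistics term alone would understate it by roughly a factor of four.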
Measurement uncertainty in Total Reflection X-ray Fluorescence
Energy Technology Data Exchange (ETDEWEB)
Floor, G.H., E-mail: geerke.floor@gfz-potsdam.de [GFZ German Research Centre for Geosciences Section 3.4. Earth Surface Geochemistry, Telegrafenberg, 14473 Postdam (Germany); Queralt, I. [Institute of Earth Sciences Jaume Almera ICTJA-CSIC, Solé Sabaris s/n, 08028 Barcelona (Spain); Hidalgo, M.; Marguí, E. [Department of Chemistry, University of Girona, Campus Montilivi s/n, 17071 Girona (Spain)
2015-09-01
Total Reflection X-ray Fluorescence (TXRF) spectrometry is a multi-elemental technique using micro-volumes of sample. This work assessed the components contributing to the combined uncertainty budget associated with TXRF measurements using Cu and Fe concentrations in different spiked and natural water samples as an example. The results showed that an uncertainty estimation based solely on the count statistics of the analyte is not a realistic estimation of the overall uncertainty, since the depositional repeatability and the relative sensitivity between the analyte and the internal standard are important contributions to the uncertainty budget. The uncertainty on the instrumental repeatability and sensitivity factor could be estimated and, as such, potentially implemented relatively straightforwardly in the TXRF instrument software. However, the depositional repeatability varied significantly from sample to sample and between elemental ratios, and the controlling factors are not well understood. In the absence of a theoretical prediction of the depositional repeatability, the uncertainty budget can be based on repeat measurements using different reflectors. A simple approach to estimate the uncertainty was presented. The measurement procedure implemented and the uncertainty estimation processes developed were validated from the agreement with results obtained by inductively coupled plasma — optical emission spectrometry (ICP-OES) and/or reference/calculated values. - Highlights: • The uncertainty of TXRF cannot be realistically described by the counting statistics. • The depositional repeatability is an important contribution to the uncertainty. • Total combined uncertainties for Fe and Cu in waste/mine water samples were 4–8%. • Obtained concentrations agree within uncertainty with reference values.
Propagation in thermal explosions
International Nuclear Information System (INIS)
In a number of small-scale experiments, the propagation phenomena in thermal explosions caused by contact of a molten metal with water were studied. To investigate the rapid vapor-blanket collapse, a small amount of molten tin (800 deg C) was poured on to a crucible under water at decreased pressure. After pressurization to 1 bar the pressure rise in the vessel was measured and the occurring events were observed by cinephotography (8000 frames per second). The experiment showed that explosion propagation by blanket collapse is energetically possible. Similar experiments were performed with a larger interacting surface in a trough-shaped and in a thin tank-shaped arrangement, which demonstrated that propagation actually occurred; the propagation velocity could be estimated at about 5×10³ cm s⁻¹. The findings favour the interpretation that the explosion is driven by fragmentation rather than by superheat. Fragmentation or mixing can occur through self-driven collapse and possibly by penetration of coolant jets formed by the collapse in the blanket. In a continuously propagating explosion, Taylor and Kelvin-Helmholtz instabilities may take part in the mixing process
Multi-scenario modelling of uncertainty in stochastic chemical systems
International Nuclear Information System (INIS)
Uncertainty analysis has not been well studied at the molecular scale, despite extensive knowledge of uncertainty in macroscale systems. The ability to predict the effect of uncertainty allows for robust control of small scale systems such as nanoreactors, surface reactions, and gene toggle switches. However, it is difficult to model uncertainty in such chemical systems, as they are stochastic in nature and costly to simulate. To address this issue, a new model of uncertainty propagation in stochastic chemical systems, based on the Chemical Master Equation, is proposed in the present study. The uncertain solution is approximated by a composite state composed of the averaged effect of samples from the uncertain parameter distributions. This model is then used to study the effect of uncertainty on an isomerization system and a two gene regulation network called a repressilator. The results of this model show that uncertainty in stochastic systems is dependent on both the uncertain distribution, and the system under investigation. -- Highlights: •A method to model uncertainty on stochastic systems was developed. •The method is based on the Chemical Master Equation. •Uncertainty in an isomerization reaction and a gene regulation network was modelled. •Effects were significant and dependent on the uncertain input and reaction system. •The model was computationally more efficient than Kinetic Monte Carlo
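The composite-state idea, averaging stochastic solutions over samples of the uncertain parameters, can be sketched for the isomerization example with a basic Gillespie simulation. All rates, population sizes and the uncertain-parameter distribution below are illustrative, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(2)

def ssa_isomerization(k_f, k_b, a0=100, t_end=5.0):
    """Gillespie SSA for A <-> B; returns the final count of A."""
    a, b, t = a0, 0, 0.0
    while True:
        r_f, r_b = k_f * a, k_b * b     # propensities of A->B and B->A
        total = r_f + r_b
        if total == 0:
            break
        t += rng.exponential(1.0 / total)
        if t >= t_end:
            break
        if rng.random() < r_f / total:
            a, b = a - 1, b + 1
        else:
            a, b = a + 1, b - 1
    return a

# Uncertain forward rate: sample parameter values and average the
# stochastic solutions into a composite estimate.
k_samples = rng.normal(1.0, 0.2, 20).clip(min=0.1)
composite = float(np.mean([ssa_isomerization(k, 1.0) for k in k_samples]))
```

The paper's method avoids re-running a full kinetic Monte Carlo per sample by working at the Chemical Master Equation level; the sketch only illustrates the sampling-and-averaging structure of the uncertain solution.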
Uncertainties in the simulation of groundwater recharge at different scales
Directory of Open Access Journals (Sweden)
H. Bogena
2005-01-01
Full Text Available Digital spatial data always imply some kind of uncertainty. The source of this uncertainty can be found in their compilation as well as the conceptual design that causes a more or less exact abstraction of the real world, depending on the scale under consideration. Within the framework of hydrological modelling, in which numerous data sets from diverse sources of uneven quality are combined, the various uncertainties are accumulated. In this study, the GROWA model is taken as an example to examine the effects of different types of uncertainties on the calculated groundwater recharge. Distributed input errors are determined for the parameters slope and aspect using a Monte Carlo approach. Land cover classification uncertainties are analysed by using the conditional probabilities of a remote sensing classification procedure. The uncertainties of data ensembles at different scales and study areas are discussed. The present uncertainty analysis showed that the Gaussian error propagation method is a useful technique for analysing the influence of input data on the simulated groundwater recharge. The uncertainties involved in the land use classification procedure and the digital elevation model can be significant in some parts of the study area. However, for the specific model used in this study it was shown that the precipitation uncertainties have the greatest impact on the total groundwater recharge error.
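Gaussian (first-order) error propagation adds input variances weighted by squared partial derivatives. A sketch for a deliberately simplified water-balance recharge estimate (the GROWA model itself is more involved; all numbers are illustrative), showing how the precipitation term can dominate the total error:

```python
import numpy as np

# First-order (Gaussian) error propagation for R = P - ET - Q.
# Hypothetical annual water-balance terms, mm/yr:
P, sigma_P = 800.0, 40.0     # precipitation
ET, sigma_ET = 500.0, 25.0   # evapotranspiration
Q, sigma_Q = 150.0, 15.0     # direct runoff

R = P - ET - Q               # groundwater recharge estimate
# Partial derivatives are +1, -1, -1, so the variances simply add:
sigma_R = np.sqrt(sigma_P**2 + sigma_ET**2 + sigma_Q**2)
share_P = sigma_P**2 / sigma_R**2   # precipitation's variance share
```

With these illustrative inputs precipitation contributes about two thirds of the recharge variance, mirroring the study's qualitative finding.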
Multi-scenario modelling of uncertainty in stochastic chemical systems
Energy Technology Data Exchange (ETDEWEB)
Evans, R. David [Waterloo Institute for Nanotechnology, University of Waterloo, 200 University Avenue West, Waterloo, ON, N2L 3G1 (Canada); Ricardez-Sandoval, Luis A., E-mail: laricardezsandoval@uwaterloo.ca [Department of Chemical Engineering, University of Waterloo, 200 University Avenue West, Waterloo, ON, N2L 3G1 (Canada); Waterloo Institute for Nanotechnology, University of Waterloo, 200 University Avenue West, Waterloo, ON, N2L 3G1 (Canada)
2014-09-15
Uncertainty analysis has not been well studied at the molecular scale, despite extensive knowledge of uncertainty in macroscale systems. The ability to predict the effect of uncertainty allows for robust control of small scale systems such as nanoreactors, surface reactions, and gene toggle switches. However, it is difficult to model uncertainty in such chemical systems as they are stochastic in nature, and require a large computational cost. To address this issue, a new model of uncertainty propagation in stochastic chemical systems, based on the Chemical Master Equation, is proposed in the present study. The uncertain solution is approximated by a composite state comprised of the averaged effect of samples from the uncertain parameter distributions. This model is then used to study the effect of uncertainty on an isomerization system and a two gene regulation network called a repressilator. The results of this model show that uncertainty in stochastic systems is dependent on both the uncertain distribution, and the system under investigation. -- Highlights: •A method to model uncertainty on stochastic systems was developed. •The method is based on the Chemical Master Equation. •Uncertainty in an isomerization reaction and a gene regulation network was modelled. •Effects were significant and dependent on the uncertain input and reaction system. •The model was computationally more efficient than Kinetic Monte Carlo.
Uncertainty analysis in the applications of nuclear probabilistic risk assessment
International Nuclear Information System (INIS)
The aim of this thesis is to propose an approach to model parameter and model uncertainties affecting the results of risk indicators used in the applications of nuclear Probabilistic Risk Assessment (PRA). After studying the limitations of the traditional probabilistic approach to represent uncertainty in PRA models, a new approach based on the Dempster-Shafer theory has been proposed. The uncertainty analysis process of the proposed approach consists of five main steps. The first step aims to model input parameter uncertainties by belief and plausibility functions according to the data of the PRA model. The second step involves the propagation of parameter uncertainties through the risk model to lay out the uncertainties associated with output risk indicators. The model uncertainty is then taken into account in the third step by considering possible alternative risk models. The fourth step is intended firstly to provide decision makers with information needed for decision making under uncertainty (parametric and model) and secondly to identify the input parameters that have significant uncertainty contributions on the result. The final step allows the process to be continued in a loop by studying the updating of belief functions given new data. The proposed methodology was implemented on a real but simplified application of a PRA model. (author)
Including uncertainty in hazard analysis through fuzzy measures
International Nuclear Information System (INIS)
This paper presents a method for capturing the uncertainty expressed by a Hazard Analysis (HA) expert team when estimating the frequencies and consequences of accident sequences and provides a sound mathematical framework for propagating this uncertainty to the risk estimates for these accident sequences. The uncertainty is readily expressed as distributions that can visually aid the analyst in determining the extent and source of risk uncertainty in HA accident sequences. The results also can be expressed as single statistics of the distribution in a manner analogous to expressing a probabilistic distribution as a point-value statistic such as a mean or median. The study discussed here used data collected during the elicitation portion of an HA on a high-level waste transfer process to demonstrate the techniques for capturing uncertainty. These data came from observations of the uncertainty that HA team members expressed in assigning frequencies and consequences to accident sequences during an actual HA. This uncertainty was captured and manipulated using ideas from possibility theory. The result of this study is a practical method for displaying and assessing the uncertainty in the HA team estimates of the frequency and consequences for accident sequences. This uncertainty provides potentially valuable information about accident sequences that typically is lost in the HA process
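Possibility-theoretic propagation of elicited estimates can be sketched with triangular fuzzy numbers and alpha-cut interval arithmetic; the elicited frequency and consequence values below are hypothetical, and the product rule assumes positive intervals:

```python
import numpy as np

def alpha_cut(tri, alpha):
    """Interval of a triangular fuzzy number (low, mode, high) at level alpha."""
    lo, m, hi = tri
    return (lo + alpha * (m - lo), hi - alpha * (hi - m))

# Hypothetical elicited fuzzy estimates for one accident sequence:
freq = (1e-4, 1e-3, 5e-3)    # events / yr
cons = (10.0, 50.0, 200.0)   # consequence units

alphas = np.linspace(0.0, 1.0, 11)
risk_cuts = []
for a in alphas:
    f_lo, f_hi = alpha_cut(freq, a)
    c_lo, c_hi = alpha_cut(cons, a)
    # Product of positive intervals: endpoints multiply.
    risk_cuts.append((f_lo * c_lo, f_hi * c_hi))

core_lo, core_hi = risk_cuts[-1]   # alpha = 1: the modal risk value
```

The nested intervals `risk_cuts` define the fuzzy risk distribution; the alpha = 1 cut collapses to the modal point estimate, the analogue of reporting a single point-value statistic.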
On Uncertainty Quantification in Particle Accelerators Modelling
Adelmann, Andreas
2015-01-01
Using a cyclotron based model problem, we demonstrate for the first time the applicability and usefulness of an uncertainty quantification (UQ) approach in order to construct surrogate models for quantities such as emittance and energy spread, but also the halo parameter, and construct a global sensitivity analysis together with error propagation and $L_{2}$ error analysis. The model problem is selected in a way that it represents a template for general high intensity particle accelerator modelling tasks. The presented physics problem has to be seen as hypothetical, with the aim to demonstrate the usefulness and applicability of the presented UQ approach and not solving a particular problem. The proposed UQ approach is based on sparse polynomial chaos expansions and relies on a small number of high fidelity particle accelerator simulations. Within this UQ framework, the identification of most important uncertainty sources is achieved by performing a global sensitivity analysis via computing the so-called Sobols' ...
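The building block of such a global sensitivity analysis is the first-order Sobol index, S_i = Var(E[y | x_i]) / Var(y). It can be estimated directly from the definition by binning; the response function below is a toy stand-in for an accelerator quantity of interest (the paper instead evaluates the indices on sparse polynomial chaos surrogates):

```python
import numpy as np

rng = np.random.default_rng(3)

def model(x1, x2):
    # Toy stand-in for an expensive accelerator simulation response
    # (illustrative; not the paper's surrogate).
    return x1 + 0.3 * x2**2 + 0.1 * x1 * x2

n = 200_000
x1 = rng.uniform(-1, 1, n)
x2 = rng.uniform(-1, 1, n)
y = model(x1, x2)

# First-order Sobol index S1 = Var(E[y | x1]) / Var(y),
# estimated by binning x1 and averaging y within each bin.
bins = np.linspace(-1, 1, 51)
idx = np.digitize(x1, bins) - 1
cond_means = np.array([y[idx == b].mean() for b in range(50)])
S1 = cond_means.var() / y.var()
```

For this toy model x1 dominates, so S1 comes out close to 1; on a real accelerator problem the same index ranks the uncertainty sources.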
Hierarchical Affinity Propagation
Givoni, Inmar; Frey, Brendan J
2012-01-01
Affinity propagation is an exemplar-based clustering algorithm that finds a set of data-points that best exemplify the data, and associates each datapoint with one exemplar. We extend affinity propagation in a principled way to solve the hierarchical clustering problem, which arises in a variety of domains including biology, sensor networks and decision making in operational research. We derive an inference algorithm that operates by propagating information up and down the hierarchy, and is efficient despite the high-order potentials required for the graphical model formulation. We demonstrate that our method outperforms greedy techniques that cluster one layer at a time. We show that on an artificial dataset designed to mimic the HIV-strain mutation dynamics, our method outperforms related methods. For real HIV sequences, where the ground truth is not available, we show our method achieves better results, in terms of the underlying objective function, and show the results correspond meaningfully to geographi...
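The flat (non-hierarchical) affinity propagation message updates that the paper extends can be sketched in a few lines of numpy; the data, damping value and median-preference choice are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
# Two well-separated 2-D clusters (illustrative data).
X = np.vstack([rng.normal(0, 0.3, (15, 2)),
               rng.normal(5, 0.3, (15, 2))])
n = len(X)

# Similarity: negative squared distance; shared preference on the diagonal.
S = -((X[:, None] - X[None, :])**2).sum(-1)
np.fill_diagonal(S, np.median(S))

R = np.zeros((n, n))   # responsibilities
A = np.zeros((n, n))   # availabilities
damping = 0.5
for _ in range(100):
    # Responsibility update: r(i,k) = s(i,k) - max_{k'!=k}(a(i,k') + s(i,k'))
    AS = A + S
    idx = np.argmax(AS, axis=1)
    first = AS[np.arange(n), idx]
    AS[np.arange(n), idx] = -np.inf
    second = AS.max(axis=1)
    Rnew = S - first[:, None]
    Rnew[np.arange(n), idx] = S[np.arange(n), idx] - second
    R = damping * R + (1 - damping) * Rnew
    # Availability update: a(i,k) = min(0, r(k,k) + sum_{i'!=i,k} max(0, r(i',k)))
    Rp = np.maximum(R, 0)
    np.fill_diagonal(Rp, R.diagonal())
    Anew = Rp.sum(axis=0)[None, :] - Rp
    dA = Anew.diagonal().copy()
    Anew = np.minimum(Anew, 0)
    np.fill_diagonal(Anew, dA)
    A = damping * A + (1 - damping) * Anew

exemplars = np.flatnonzero((R + A).diagonal() > 0)
labels = np.argmax(S[:, exemplars], axis=1)
n_clusters = len(exemplars)
```

The hierarchical extension in the paper layers such message passing across levels; this sketch only shows the single-layer exemplar selection it builds on.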
Instability Versus Equilibrium Propagation of Laser Beam in Plasma
Lushnikov, P M; Lushnikov, Pavel M.; Rose, Harvey A.
2003-01-01
We obtain, for the first time, an analytic theory of the forward stimulated Brillouin scattering instability of a spatially and temporally incoherent laser beam, that controls the transition between statistical equilibrium and non-equilibrium (unstable) self-focusing regimes of beam propagation. The stability boundary may be used as a comprehensive guide for inertial confinement fusion designs. Well into the stable regime, an analytic expression for the angular diffusion coefficient is obtained, which provides an essential correction to a geometric optic approximation for beam propagation.
Compensation of On-call and Fixed-term Employment: the Role of Uncertainty
de Graaf-Zijl, Marloes
2005-01-01
In this paper I analyse the use and compensation of fixed-term and on-call employment contracts in the Netherlands. I use an analytical framework in which wage differentials result from two types of uncertainty. Quantity uncertainty originates from imperfect foresight in future product demand. I argue that workers who take over part of the quantity uncertainty from the employer get higher payments. Quality uncertainty on the other hand originates from the fact that employers are ex-ante unabl...
Uncertainty analyses in systems modeling process
International Nuclear Information System (INIS)
In the context of Probabilistic Safety Assessment (PSA), the uncertainty analyses play an important role. The objective is to ensure the qualitative evaluation and quantitative estimation of PSA level 1 results (the core damage frequency, the accident sequence frequencies, the top event probabilities, etc.). An application that enables uncertainty calculations by propagating probability distributions through the fault tree model has been developed. It uses the moment method and the Monte Carlo method. The application has been integrated into a computer program that allocates the reliability data, quantifies the human errors and uniquely labels the components. The reliability data used in the Institute for Nuclear Research (INR) Pitesti for Cernavoda Probabilistic Safety Evaluation (CPSE) studies is a generic data base. Taking into account the status of the reliability data base and the cases by which an error factor for a failure rate lognormal distribution is calculated, the data base has been completed with an error factor for each record. This paper presents the module that performs the uncertainty analysis and an example of uncertainty analysis at the fault tree level. (authors)
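Monte Carlo propagation through a fault tree, with lognormal basic-event distributions parameterized by an error factor in the usual PSA convention (EF = 95th percentile / median), can be sketched as follows; the tree structure and all numbers are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(5)

def sample_lognormal(median, error_factor, n):
    """PSA convention: EF = 95th percentile / median,
    so sigma = ln(EF) / 1.645."""
    sigma = np.log(error_factor) / 1.645
    return rng.lognormal(np.log(median), sigma, n)

n = 100_000
# Hypothetical basic-event failure probabilities:
pA = sample_lognormal(1e-3, 3, n)
pB = sample_lognormal(2e-3, 3, n)
pC = sample_lognormal(5e-4, 10, n)

# Fault tree: TOP = (A AND B) OR C, rare-event approximation.
top = pA * pB + pC
mean, median = top.mean(), np.median(top)
p5, p95 = np.percentile(top, [5, 95])
```

The sampled distribution of `top` gives the percentile band reported alongside the point estimate; the moment method mentioned in the abstract would instead combine the distribution moments analytically.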
Uncertainty in Seismic Capacity of Masonry Buildings
Directory of Open Access Journals (Sweden)
Nicola Augenti
2012-07-01
Full Text Available Seismic assessment of masonry structures is plagued by both inherent randomness and model uncertainty. The former is referred to as aleatory uncertainty, the latter as epistemic uncertainty because it depends on the knowledge level. Pioneering studies on reinforced concrete buildings have revealed a significant influence of modeling parameters on seismic vulnerability. However, confidence in mechanical properties of existing masonry buildings is much lower than in the case of reinforcing steel and concrete. This paper is aimed at assessing whether and how uncertainty propagates from material properties to seismic capacity of an entire masonry structure. A typical two-story unreinforced masonry building is analyzed. Based on previous statistical characterization of mechanical properties of existing masonry types, the following random variables have been considered in this study: unit weight, uniaxial compressive strength, shear strength at zero confining stress, Young’s modulus, shear modulus, and available ductility in shear. Probability density functions were implemented to generate a significant number of realizations and static pushover analysis of the case-study building was performed for each vector of realizations, load combination and lateral load pattern. Analysis results show a large dispersion in displacement capacity and lower dispersion in spectral acceleration capacity. This can directly affect decision-making because both design and retrofit solutions depend on seismic capacity predictions. Therefore, engineering judgment should always be used when assessing structural safety of existing masonry constructions against design earthquakes, based on a series of seismic analyses under uncertain parameters.
Retarded condensate freezing propagation on superhydrophobic surfaces patterned with micropillars
Zhao, Yugang; Yang, Chun
2016-02-01
Previous studies have shown ice delay on nano-structured or hierarchical surfaces with nanoscale roughness. Here we report retarded condensate freezing on superhydrophobic silicon substrates fabricated with patterned micropillars of small aspect ratio. We further investigated the pillar size effects on freezing propagation. We found that the velocity of freezing propagation on the surface patterned with proper micropillars can be reduced by one order of magnitude, compared to that on the smooth untreated silicon surface. Additionally, we developed an analytical model to describe the condensate freezing propagation on a structured surface with micropillars and the model predictions were compared with our experimental results.
Effects of laser beam propagation in a multilevel photoionization system
International Nuclear Information System (INIS)
When the intense laser pulse propagates in the atomic vapor over a long distance, the laser pulse shape, the carrier frequency and the propagating velocity are greatly modified during the propagation by the resonant and/or the near-resonant interactions with atoms. We have been investigating these effects on the laser beam propagation experimentally and analytically. The simulation code named CEALIS-P has been developed, which employs the coupled three-level Bloch-Maxwell equations to study the atomic excitation and laser beam propagation simultaneously. Several features of the resonant and near-resonant effects based on the self-induced transparency, the self-phase modulation and the nonlinear group velocity dispersion are described and the influences of such effects on the photoionization efficiency are analyzed.
Spatio-temporal propagation of cascading overload failures
Zhao, Jichang; Sanhedrai, Hillel; Cohen, Reuven; Havlin, Shlomo
2015-01-01
Different from the direct contact in epidemics spread, overload failures propagate through hidden functional dependencies. Many studies focused on the critical conditions and catastrophic consequences of cascading failures. However, to understand the network vulnerability and mitigate the cascading overload failures, the knowledge of how the failures propagate in time and space is essential but still missing. Here we study the spatio-temporal propagation behavior of cascading overload failures analytically and numerically. The cascading overload failures are found to spread radially from the center of the initial failure with an approximately constant velocity. The propagation velocity decreases with increasing tolerance, and can be well predicted by our theoretical framework with one single correction for all the tolerance values. This propagation velocity is found similar in various model networks and real network structures. Our findings may help to predict and mitigate the dynamics of cascading overload f...
Propagation of Waves
David, P
2013-01-01
Propagation of Waves focuses on the wave propagation around the earth, which is influenced by its curvature, surface irregularities, and by passage through atmospheric layers that may be refracting, absorbing, or ionized. This book begins by outlining the behavior of waves in the various media and at their interfaces, which simplifies the basic phenomena, such as absorption, refraction, reflection, and interference. Applications to the case of the terrestrial sphere are also discussed as a natural generalization. Following the deliberation on the diffraction of the "ground" wave around the earth
UNC32/33, Covariance Matrices from ENDF/B-5 Resonance Parameter Uncertainties
International Nuclear Information System (INIS)
1 - Description of program or function: The programs UNC 32/33 read uncertainty information from cross-section libraries in the ENDF/B-V format (auto-correlations) and convert these uncertainty data to a group structure selected by the user. In the conversion procedure a weighting neutron spectrum is needed. The converted cross-section uncertainty data can be used in adjustment programs and to calculate the uncertainty in calculated reaction rates. 2 - Method of solution: Straightforward application of uncertainty propagation relations. 3 - Restrictions on the complexity of the problem: None detected for the ENDF/B-V dosimetry file
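The group conversion performed by these programs amounts to a spectrum-weighted collapse of the fine-group covariance matrix onto the user-selected coarse structure. A sketch with illustrative numbers (the real input is ENDF/B-V covariance data and a physical weighting spectrum):

```python
import numpy as np

# Collapse a fine-group relative covariance matrix to a coarser
# group structure with a weighting neutron spectrum:
# C_IJ = sum_{g in I} sum_{h in J} w_g w_h C_gh / (W_I W_J)
C_fine = np.array([[0.04, 0.02, 0.00, 0.00],   # illustrative values
                   [0.02, 0.04, 0.01, 0.00],
                   [0.00, 0.01, 0.09, 0.03],
                   [0.00, 0.00, 0.03, 0.09]])
w = np.array([0.4, 0.3, 0.2, 0.1])             # weighting spectrum
groups = [[0, 1], [2, 3]]                       # fine -> coarse mapping

nc = len(groups)
C_coarse = np.zeros((nc, nc))
for I, gI in enumerate(groups):
    for J, gJ in enumerate(groups):
        wI, wJ = w[gI], w[gJ]
        C_coarse[I, J] = wI @ C_fine[np.ix_(gI, gJ)] @ wJ / (wI.sum() * wJ.sum())
```

The collapsed matrix can then feed adjustment codes or a sandwich-rule estimate of reaction-rate uncertainties, as described in the abstract.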
Temporal scaling in information propagation
Junming Huang; Chao Li; Wen-Qiang Wang; Hua-Wei Shen; Guojie Li; Xue-Qi Cheng
2014-01-01
For the study of information propagation, one fundamental problem is uncovering universal laws governing the dynamics of information propagation. This problem, from the microscopic perspective, is formulated as estimating the propagation probability that a piece of information propagates from one individual to another. Such a propagation probability generally depends on two major classes of factors: the intrinsic attractiveness of information and the interactions between individuals. Despite ...
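A common minimal formalization of the microscopic propagation probability is the independent-cascade model, in which each newly informed individual gets one chance to pass the information to each neighbour with probability p. A sketch on a random network (illustrative; as the abstract notes, real propagation probabilities also depend on content attractiveness and pairwise interactions):

```python
import numpy as np

rng = np.random.default_rng(6)

def cascade(adj, p, seed, rng):
    """Independent-cascade spread from one seed; returns cascade size."""
    informed = {seed}
    frontier = [seed]
    while frontier:
        nxt = []
        for u in frontier:
            for v in np.flatnonzero(adj[u]):
                if v not in informed and rng.random() < p:
                    informed.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(informed)

# Erdos-Renyi network, ~10 neighbours per node (illustrative).
n = 200
adj = rng.random((n, n)) < 0.05
adj = np.triu(adj, 1)
adj = adj | adj.T

# Sub- vs super-critical propagation probability:
sizes_low = [cascade(adj, 0.05, 0, rng) for _ in range(50)]
sizes_high = [cascade(adj, 0.5, 0, rng) for _ in range(50)]
mean_low, mean_high = np.mean(sizes_low), np.mean(sizes_high)
```

Below the critical propagation probability cascades stay tiny; above it they reach most of the network, which is why estimating p well matters for any macroscopic prediction.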
Working fluid selection for organic Rankine cycles - Impact of uncertainty of fluid properties
DEFF Research Database (Denmark)
Frutiger, Jerome; Andreasen, Jesper Graa; Liu, Wei;
2016-01-01
This study presents a generic methodology to select working fluids for ORC (Organic Rankine Cycles) taking into account property uncertainties of the working fluids. A Monte Carlo procedure is described as a tool to propagate the influence of the input uncertainty of the fluid parameters on the ORC...
Propagating Synchrony in Feed-Forward Networks
Directory of Open Access Journals (Sweden)
Sven eJahnke
2013-11-01
Full Text Available Coordinated patterns of precisely timed action potentials (spikes) emerge in a variety of neural circuits, but their dynamical origin is still not well understood. One hypothesis states that synchronous activity propagating through feed-forward chains of groups of neurons (synfire chains) may dynamically generate such spike patterns. Additionally, synfire chains offer the possibility of enabling reliable signal transmission. So far, mostly densely connected chains, often with all-to-all connectivity between groups, have been studied theoretically and computationally. Yet, such prominent feed-forward structures have not been observed experimentally. Here we analytically and numerically investigate under which conditions diluted feed-forward chains may exhibit synchrony propagation. In addition to conventional linear input summation, we study the impact of nonlinear, non-additive summation accounting for the effect of fast dendritic spikes. The non-linearities promote synchronous inputs to generate precisely timed spikes. We identify how non-additive coupling relaxes the conditions on connectivity such that it enables synchrony propagation at connectivities substantially lower than required for linearly coupled chains. Although the analytical treatment is based on a simple leaky integrate-and-fire neuron model, we show how to generalize our methods to biologically more detailed neuron models and verify our results by numerical simulations with, e.g., Hodgkin-Huxley-type neurons.
Uncertainty Quantification of Physical Model for Best Estimate Safety Analysis
International Nuclear Information System (INIS)
In this work, model calibration was performed to reduce the uncertainties in the simulation code's input parameters and, subsequently, the code's prediction uncertainties for design-constraining responses. The fidelity of each parameter/physical model was also identified to determine the major sources of modeling uncertainty. This analysis is important for deciding where additional effort should be devoted to improving the simulation model. The goal of this work is to develop a higher-fidelity model by completing experiments and performing uncertainty quantification. Thermal-hydraulic parameters were adjusted for both mildly nonlinear and highly nonlinear systems, and their a posteriori parameter uncertainties were propagated through the simulation model to predict a posteriori uncertainties of the key system attributes. To handle both the highly nonlinear and the mildly nonlinear problems, both deterministic and probabilistic methods were used to complete the uncertainty quantification. To accomplish this, a Bayesian approach modified by regularization is used for the mildly nonlinear problem to incorporate available information in quantifying uncertainties. The a priori information considered comprises the parameters and the experimental data, together with their uncertainties. The results indicate that substantial reductions in the uncertainties on the system responses can be achieved by using experimental data to obtain a posteriori input-parameter uncertainty distributions. The MCMC method was used for the highly nonlinear transient. Due to the computational burden, this method would not be applicable if there were many parameters, but it can provide the best solution, since the algorithm does not approximate the responses, whereas the deterministic approach assumes linearity of the responses with respect to their dependence on the parameters. Using MCMC, non-Gaussian a posteriori distributions of the parameters with reduced uncertainties were obtained due to the nonlinearity of the system sensitivity
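The probabilistic branch of such a calibration can be illustrated with a minimal Metropolis sampler: posterior samples of a parameter are drawn against noisy data and then pushed through the model to obtain a posteriori response uncertainty. The exponential-decay model, prior support, and data below are invented stand-ins, not the thermal-hydraulic system of the abstract.

```python
# Hedged sketch: Metropolis MCMC calibration of one parameter of a nonlinear
# model, followed by propagation of the posterior samples through the model.
import math
import random

random.seed(0)

def model(k, t):
    return math.exp(-k * t)            # toy nonlinear response

data = [(1.0, 0.61), (2.0, 0.37), (3.0, 0.22)]  # (time, observation) pairs
sigma = 0.05                           # assumed measurement noise

def log_post(k):
    if k <= 0:
        return -math.inf               # flat prior restricted to k > 0
    return -sum((y - model(k, t)) ** 2 for t, y in data) / (2 * sigma ** 2)

samples, k = [], 1.0
for step in range(5000):
    prop = k + random.gauss(0.0, 0.1)  # random-walk proposal
    if math.log(random.random()) < log_post(prop) - log_post(k):
        k = prop                       # accept
    if step >= 1000:                   # discard burn-in
        samples.append(k)

# A posteriori uncertainty of the response at t = 2
resp = [model(s, 2.0) for s in samples]
mean = sum(resp) / len(resp)
```

Because the sampler evaluates the model itself at every step, no linearity assumption is needed, which matches the abstract's point about MCMC versus the deterministic approach.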
Risk Aversion and Dynamic Games Between Hydroelectric Operators under Uncertainty
Abdessalem Abbassi; Ahlem Dakhlaoui; Tamini, Lota D.
2014-01-01
This article analyses management of hydropower dams within monopolistic and oligopolistic competition and when hydroelectricity producers are risk averse and face demand uncertainty. In each type of market structure we analytically determine the water release path in closed-loop equilibrium. We show how a monopoly can manage its hydropower dams by additional pumping or storage depending on the relative abundance of water between different regions to smooth the effect of uncertainty on electri...
OpenTURNS, an open source uncertainty engineering software
International Nuclear Information System (INIS)
The need to assess robust performance for complex systems has led to the emergence of a new industrial simulation challenge: taking uncertainties into account when dealing with complex numerical simulation frameworks. EDF has taken part in the development of an open-source software platform dedicated to uncertainty propagation by probabilistic methods, named OpenTURNS, for Open source Treatment of Uncertainty, Risk and Statistics. OpenTURNS includes a large variety of qualified algorithms for managing uncertainties in industrial studies, from the uncertainty quantification step (with the possibility of modeling stochastic dependence through copula theory and stochastic processes), to the uncertainty propagation step (with innovative simulation algorithms such as the ziggurat method for normal variables), and the sensitivity analysis step (with sensitivity indices based on the evaluation of means conditioned on the realization of a particular event). It also makes it possible to build response surfaces that can include the stochastic modeling (with the polynomial chaos method, for example). Generic wrappers to link OpenTURNS to modeling software are provided. Finally, OpenTURNS is extensively documented to provide guidance for both use and contribution
Fuzzy-probabilistic calculations of water-balance uncertainty
Energy Technology Data Exchange (ETDEWEB)
Faybishenko, B.
2009-10-01
Hydrogeological systems are often characterized by imprecise, vague, inconsistent, incomplete, or subjective information, which may limit the application of conventional stochastic methods in predicting hydrogeologic conditions and associated uncertainty. Instead, predictions and uncertainty analysis can be made using uncertain input parameters expressed as probability boxes, intervals, and fuzzy numbers. The objective of this paper is to present the theory for, and a case study as an application of, the fuzzy-probabilistic approach, combining probability and possibility theory for simulating soil water balance and assessing associated uncertainty in the components of a simple water-balance equation. The application of this approach is demonstrated using calculations with the RAMAS Risk Calc code to assess the propagation of uncertainty in calculating potential evapotranspiration, actual evapotranspiration, and infiltration in a case study at the Hanford site, Washington, USA. Propagation of uncertainty into the results of water-balance calculations was evaluated by changing the types of models of uncertainty incorporated into various input parameters. The results of these fuzzy-probabilistic calculations are compared to the conventional Monte Carlo simulation approach and estimates from field observations at the Hanford site.
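At a single possibility level, the interval part of this fuzzy-probabilistic machinery reduces to interval arithmetic on the water-balance relation. The sketch below uses invented numbers (not Hanford data) and is not the RAMAS Risk Calc code; it only illustrates how interval-valued inputs widen the output interval.

```python
# Hedged sketch: propagating interval-valued (possibilistic) inputs through a
# simple water-balance relation I = P - ET. Numbers are illustrative.

def interval_sub(a, b):
    """[a1, a2] - [b1, b2] = [a1 - b2, a2 - b1] (interval arithmetic)."""
    return (a[0] - b[1], a[1] - b[0])

precip = (160.0, 220.0)   # annual precipitation, mm (interval-valued input)
et = (120.0, 200.0)       # actual evapotranspiration, mm (interval-valued input)

infiltration = interval_sub(precip, et)
# The output interval is wider than either input: uncertainty accumulates,
# and the lower bound can be negative (ET exceeding precipitation).
```

A fuzzy number would be handled by repeating this subtraction over a family of nested alpha-cut intervals rather than a single interval.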
Verification of uncertainty budgets
DEFF Research Database (Denmark)
Heydorn, Kaj; Madsen, B.S.
2005-01-01
, because their influence requires samples taken at long intervals, e.g., the acquisition of a new calibrant. It is therefore recommended to include verification of the uncertainty budget in the continuous QA/QC monitoring; this will eventually lead to a test also for such rarely occurring effects....... full range of matrices and concentrations for which the budget is assumed to be valid. In this way the assumptions made in the uncertainty budget can be experimentally verified, both as regards sources of variability that are assumed negligible, and dominant uncertainty components. Agreement between...... observed and expected variability is tested by means of the T-test, which follows a chi-square distribution with a number of degrees of freedom determined by the number of replicates. Significant deviations between predicted and observed variability may be caused by a variety of effects, and examples will...
Commonplaces and social uncertainty
DEFF Research Database (Denmark)
Lassen, Inger
2008-01-01
This article explores the concept of uncertainty in four focus group discussions about genetically modified food. In the discussions, members of the general public interact with food biotechnology scientists while negotiating their attitudes towards genetic engineering. Their discussions offer...... an example of risk discourse in which the use of commonplaces seems to be a central feature (Myers 2004: 81). My analyses support earlier findings that commonplaces serve important interactional purposes (Barton 1999) and that they are used for mitigating disagreement, for closing topics and for facilitating...... risk discourse (Myers 2005; 2007). In addition, however, I argue that commonplaces are used to mitigate feelings of insecurity caused by uncertainty and to negotiate new codes of moral conduct. Keywords: uncertainty, commonplaces, risk discourse, focus groups, appraisal...
Nessel, James
2013-01-01
NASA Glenn Research Center has been involved in the characterization of atmospheric effects on space communications links operating at Ka-band and above for the past 20 years. This presentation reports out on the most recent activities of propagation characterization that NASA is currently involved in.
Urrutxua, Hodei; Sanjurjo-Rivo, Manuel; Peláez, Jesús
2016-01-01
In the year 2000 an in-house orbital propagator called DROMO (Peláez et al. in Celest Mech Dyn Astron 97:131-150, 2007. doi: 10.1007/s10569-006-9056-3) was developed by the Space Dynamics Group of the Technical University of Madrid, based on a set of redundant variables including Euler-Rodrigues parameters. An original deduction of the DROMO propagator is carried out, underlining its close relation with the ideal frame concept introduced by Hansen (Abh der Math-Phys Cl der Kon Sachs Ges der Wissensch 5:41-218, 1857). Based on the very same concept, Deprit (J Res Natl Bur Stand Sect B Math Sci 79B(1-2):1-15, 1975) proposed a formulation for orbit propagation. In this paper, similarities and differences with the theory carried out by Deprit are analyzed. Simultaneously, some improvements are introduced in the formulation that lead to a more synthetic and better performing propagator. Also, the long-term effect of the oblateness of the primary is studied in terms of DROMO variables, and new numerical results are presented to evaluate the performance of the method.
Strategies for Application of Isotopic Uncertainties in Burnup Credit
Energy Technology Data Exchange (ETDEWEB)
Gauld, I.C.
2002-12-23
Uncertainties in the predicted isotopic concentrations in spent nuclear fuel represent one of the largest sources of overall uncertainty in criticality calculations that use burnup credit. The methods used to propagate the uncertainties in the calculated nuclide concentrations to the uncertainty in the predicted neutron multiplication factor (k{sub eff}) of the system can have a significant effect on the uncertainty in the safety margin in criticality calculations and ultimately affect the potential capacity of spent fuel transport and storage casks employing burnup credit. Methods that can provide a more accurate and realistic estimate of the uncertainty may enable increased spent fuel cask capacity and fewer casks needing to be transported, thereby reducing the regulatory burden on licensees while maintaining safety for transporting spent fuel. This report surveys several different best-estimate strategies for considering the effects of nuclide uncertainties in burnup-credit analyses. The potential benefits of these strategies are illustrated for a prototypical burnup-credit cask design. The subcritical margin estimated using best-estimate methods is discussed in comparison to the margin estimated using conventional bounding methods of uncertainty propagation. To quantify the comparison, each of the strategies for estimating uncertainty has been performed using a common database of spent fuel isotopic assay measurements for pressurized light-water reactor fuels and predicted nuclide concentrations obtained using the current version of the SCALE code system. The experimental database applied in this study has been significantly expanded to include new high-enrichment and high-burnup spent fuel assay data recently published for a wide range of important burnup-credit actinides and fission products. Expanded rare earth fission-product measurements performed at the Khlopin Radium Institute in Russia contain the only known publicly-available measurement for {sup 103
Serenity in political uncertainty.
Doumit, Rita; Afifi, Rema A; Devon, Holli A
2015-01-01
College students are often faced with academic and personal stressors that threaten their well-being. Added to that may be political and environmental stressors such as acts of violence on the streets, interruptions in schooling, car bombings, targeted religious intimidations, financial hardship, and uncertainty of obtaining a job after graduation. Research on how college students adapt to the latter stressors is limited. The aims of this study were (1) to investigate the associations between stress, uncertainty, resilience, social support, withdrawal coping, and well-being for Lebanese youth during their first year of college and (2) to determine whether these variables predicted well-being. A sample of 293 first-year students enrolled in a private university in Lebanon completed a self-reported questionnaire in the classroom setting. The mean age of sample participants was 18.1 years, with nearly an equal percentage of males and females (53.2% vs 46.8%), who lived with their family (92.5%), and whose family reported high income levels (68.4%). Multiple regression analyses revealed that the best determinants of well-being are resilience, uncertainty, social support, and gender, which accounted for 54.1% of the variance. Despite living in an environment of frequent violence and political uncertainty, Lebanese youth in this study have a strong sense of well-being and are able to go on with their lives. This research adds to our understanding of how adolescents can adapt to stressors of frequent violence and political uncertainty. Further research is recommended to understand the mechanisms through which young people cope with political uncertainty and violence. PMID:25658930
Sources of uncertainties in modelling black carbon at the global scale
Vignati, E.; Karl, M; M. Krol; Wilson, J.(School of Physics and Astronomy, University of Birmingham, Birmingham, United Kingdom); Stier, P; F. Cavalli
2010-01-01
Our understanding of the global black carbon (BC) cycle is essentially qualitative due to uncertainties in our knowledge of its properties. This work investigates two sources of uncertainty in modelling black carbon: those due to the use of different schemes for BC ageing and its removal rate in the global transport-chemistry model TM5, and those due to uncertainties in the definition and quantification of the observations, which propagate through to both the emission inventories, and the...
Ben Abdallah, Nadia; Mouhous-Voyneau, Nassima; Denoeux, Thierry
2013-01-01
We present a methodology based on Dempster- Shafer theory to represent, combine and propagate statistical and epistemic uncertainties. This approach is first applied to estimate, via a semi-empirical model, the future sea level rise induced by global warming at the end of the century. Projections are affected by statistical uncertainties originating from model parameter estimation and epistemic uncertainties due to lack of knowledge of model inputs. We then study the overtopping response of a...
Instability Versus Equilibrium Propagation of Laser Beam in Plasma
Lushnikov, Pavel M.; Rose, Harvey A.
2003-01-01
We obtain, for the first time, an analytic theory of the forward stimulated Brillouin scattering instability of a spatially and temporally incoherent laser beam, that controls the transition between statistical equilibrium and non-equilibrium (unstable) self-focusing regimes of beam propagation. The stability boundary may be used as a comprehensive guide for inertial confinement fusion designs. Well into the stable regime, an analytic expression for the angular diffusion coefficient is obtain...
Uncertainty in artificial intelligence
Levitt, TS; Lemmer, JF; Shachter, RD
1990-01-01
Clearly illustrated in this volume is the current relationship between Uncertainty and AI. It has been said that research in AI revolves around five basic questions asked relative to some particular domain: What knowledge is required? How can this knowledge be acquired? How can it be represented in a system? How should this knowledge be manipulated in order to provide intelligent behavior? How can the behavior be explained? In this volume, all of these questions are addressed. From the perspective of the relationship of uncertainty to the basic questions of AI, the book divides naturally i
Traceability and Measurement Uncertainty
DEFF Research Database (Denmark)
Tosello, Guido; De Chiffre, Leonardo
2004-01-01
and motivating to this important group. The developed e-learning system consists on 12 different chapters dealing with the following topics: 1. Basics 2. Traceability and measurement uncertainty 3. Coordinate metrology 4. Form measurement 5. Surface testing 6. Optical measurement and testing 7. Measuring rooms 8....... Machine tool testing 9. The role of manufacturing metrology for QM 10. Inspection planning 11. Quality management of measurements incl. Documentation 12. Advanced manufacturing measurement technology The present report (which represents the section 2 - Traceability and Measurement Uncertainty – of the e...
Weighted Uncertainty Relations
Xiao, Yunlong; Jing, Naihuan; Li-Jost, Xianqing; Fei, Shao-Ming
2016-03-01
Recently, Maccone and Pati have given two stronger uncertainty relations based on the sum of variances, and one of them is nontrivial when the quantum state is not an eigenstate of the sum of the observables. We derive a family of weighted uncertainty relations to provide an optimal lower bound for all situations and remove the restriction on the quantum state. Generalization to multi-observable cases is also given, and an optimal lower bound for the weighted sum of the variances is obtained in the general quantum situation.
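For reference, the stronger sum-of-variances relations of Maccone and Pati that this work builds on take the following form, as stated in the literature; here $|\psi^{\perp}\rangle$ denotes any state orthogonal to the system state $|\psi\rangle$:

```latex
\Delta A^{2} + \Delta B^{2} \;\ge\; \pm i\,\langle [A,B] \rangle
  \;+\; \bigl| \langle \psi | \, (A \pm iB) \, | \psi^{\perp} \rangle \bigr|^{2}
```

The second term on the right is what keeps the bound nontrivial for states where the commutator expectation vanishes, which is the restriction the weighted relations above aim to remove.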
Sensitivity and uncertainty analysis
Cacuci, Dan G; Navon, Ionel Michael
2005-01-01
As computer-assisted modeling and analysis of physical processes have continued to grow and diversify, sensitivity and uncertainty analyses have become indispensable scientific tools. Sensitivity and Uncertainty Analysis. Volume I: Theory focused on the mathematical underpinnings of two important methods for such analyses: the Adjoint Sensitivity Analysis Procedure and the Global Adjoint Sensitivity Analysis Procedure. This volume concentrates on the practical aspects of performing these analyses for large-scale systems. The applications addressed include two-phase flow problems, a radiative c
Uncertainty and validation. Effect of user interpretation on uncertainty estimates
International Nuclear Information System (INIS)
Uncertainty in predictions of environmental transfer models arises from, among other sources, the adequacy of the conceptual model, the approximations made in coding the conceptual model, the quality of the input data, the uncertainty in parameter values, and the assumptions made by the user. In recent years efforts to quantify the confidence that can be placed in predictions have been increasing, but have concentrated on a statistical propagation of the influence of parameter uncertainties on the calculational results. The primary objective of this Working Group of BIOMOVS II was to test users' influence on model predictions on a more systematic basis than has been done before. The main goals were as follows: To compare differences between predictions from different people all using the same model and the same scenario description with the statistical uncertainties calculated by the model. To investigate the main reasons for different interpretations by users. To create a better awareness of the potential influence of the user on the modeling results. Terrestrial food chain models driven by deposition of radionuclides from the atmosphere were used. Three codes were obtained and run with three scenarios by a maximum of 10 users. A number of conclusions can be drawn, some of which are general and independent of the type of models and processes studied, while others are restricted to the few processes that were addressed directly: For any set of predictions, the variation in best estimates was greater than one order of magnitude. Often the range increased from deposition to pasture to milk, probably due to additional transfer processes. The 95% confidence intervals about the predictions calculated from the parameter distributions prepared by the participants did not always overlap the observations; similarly, sometimes the confidence intervals on the predictions did not overlap. Often the 95% confidence intervals of individual predictions were smaller than the
Uncertainty Analyses and Strategy
International Nuclear Information System (INIS)
The DOE identified a variety of uncertainties, arising from different sources, during its assessment of the performance of a potential geologic repository at the Yucca Mountain site. In general, the number and detail of process models developed for the Yucca Mountain site, and the complex coupling among those models, make the direct incorporation of all uncertainties difficult. The DOE has addressed these issues in a number of ways using an approach to uncertainties that is focused on producing a defensible evaluation of the performance of a potential repository. The treatment of uncertainties oriented toward defensible assessments has led to analyses and models with so-called ''conservative'' assumptions and parameter bounds, where conservative implies lower performance than might be demonstrated with a more realistic representation. The varying maturity of the analyses and models, and uneven level of data availability, result in total system level analyses with a mix of realistic and conservative estimates (for both probabilistic representations and single values). That is, some inputs have realistically represented uncertainties, and others are conservatively estimated or bounded. However, this approach is consistent with the ''reasonable assurance'' approach to compliance demonstration, which was called for in the U.S. Nuclear Regulatory Commission's (NRC) proposed 10 CFR Part 63 regulation (64 FR 8640 [DIRS 101680]). A risk analysis that includes conservatism in the inputs will result in conservative risk estimates. Therefore, the approach taken for the Total System Performance Assessment for the Site Recommendation (TSPA-SR) provides a reasonable representation of processes and conservatism for purposes of site recommendation. However, mixing unknown degrees of conservatism in models and parameter representations reduces the transparency of the analysis and makes the development of coherent and consistent probability statements about projected repository
Atomic Uncertainties and their Effects on Astrophysical Diagnostics
Sutherland, Robert; Loch, Stuart; Foster, Adam; Smith, Randall
2015-05-01
The astrophysics and laboratory plasma modeling communities have been requesting meaningful uncertainties on atomic data for some time. This would allow them to determine the uncertainties due to the atomic data on a range of plasma diagnostic quantities and explain some of the important discrepancies. In recent years there has been much talk, although relatively little progress, on this for theoretical cross-section calculations. We present here a method of generating ``baseline'' uncertainties on atomic data, for use in astrophysical modeling. The uncertainty data were used in a modified version of the APEC spectral emission code to carry these uncertainties on fundamental atomic data through to uncertainties in astrophysical diagnostics, such as fractional abundances and emissivities, providing uncertainties on line ratios. We use a Monte-Carlo method to propagate the uncertainties through to the emissivities, which was tested using a variety of distribution functions. As an illustration of the usefulness of the method, we show results for oxygen and compare with an existing line-ratio diagnostic that has a currently debated discrepancy.
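The Monte-Carlo propagation step can be sketched as follows. The two rate coefficients, their assumed 15% lognormal ``baseline'' uncertainties, and the line-ratio observable below are illustrative inventions, not actual APEC oxygen data.

```python
# Hedged sketch: perturb two emission rate coefficients within assumed
# lognormal baseline uncertainties and read off the spread in their line ratio.
import math
import random

random.seed(1)
rate1, rate2 = 3.0e-11, 1.2e-11   # rate coefficients, cm^3 s^-1 (assumed)
frac_err = 0.15                    # assumed 15% baseline uncertainty on each

ratios = []
for _ in range(20000):
    r1 = rate1 * math.exp(random.gauss(0.0, frac_err))  # lognormal perturbation
    r2 = rate2 * math.exp(random.gauss(0.0, frac_err))
    ratios.append(r1 / r2)

mean = sum(ratios) / len(ratios)
std = math.sqrt(sum((r - mean) ** 2 for r in ratios) / len(ratios))
# Independent fractional errors combine roughly in quadrature, so the
# relative spread of the ratio is about sqrt(2) * 0.15, i.e. ~21%.
```

Swapping the Gaussian draw for another distribution (uniform, truncated, correlated) reproduces the abstract's point that the propagated spread can be tested under a variety of distribution functions.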
Photon propagation in slowly varying electromagnetic fields
Karbstein, Felix
2016-01-01
We study the effective theory of soft photons in slowly varying electromagnetic background fields at one-loop order in QED. This is of relevance for the study of all-optical signatures of quantum vacuum nonlinearity in realistic electromagnetic background fields as provided by high-intensity lasers. The central result derived in this article is a new analytical expression for the photon polarization tensor in two linearly polarized counter-propagating pulsed Gaussian laser beams. As we treat the peak field strengths of both laser beams as free parameters this field configuration can be considered as interpolating between the limiting cases of a purely right- or left-moving laser beam (if one of the peak field strengths is set to zero) and the standing-wave type scenario with two counter-propagating beams of equal strength.
International Nuclear Information System (INIS)
This paper presents an efficient analytical Bayesian method for reliability and system response updating without using simulations. The method includes additional information such as measurement data via Bayesian modeling to reduce estimation uncertainties. Laplace approximation method is used to evaluate Bayesian posterior distributions analytically. An efficient algorithm based on inverse first-order reliability method is developed to evaluate system responses given a reliability index or confidence interval. Since the proposed method involves no simulations such as Monte Carlo or Markov chain Monte Carlo simulations, the overall computational efficiency improves significantly, particularly for problems with complicated performance functions. A practical fatigue crack propagation problem with experimental data, and a structural scale example are presented for methodology demonstration. The accuracy and computational efficiency of the proposed method are compared with traditional simulation-based methods.
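The first-order reliability ingredient of such methods admits a closed form in the simplest setting. As a hedged illustration only: a linear limit state with invented normal capacity and demand values, unrelated to the paper's fatigue-crack example.

```python
# Hedged sketch: Hasofer-Lind reliability index for a linear limit state
# g = R - S with independent normal capacity R and demand S. For this case the
# reliability index and failure probability are exact in closed form.
import math

mu_R, sd_R = 10.0, 1.5    # capacity mean / std (assumed values)
mu_S, sd_S = 6.0, 1.0     # demand mean / std (assumed values)

# Reliability index: distance from the mean point to the failure surface
# in standard-normal space.
beta = (mu_R - mu_S) / math.sqrt(sd_R ** 2 + sd_S ** 2)

# Failure probability P(g < 0) = Phi(-beta), via the error function.
pf = 0.5 * (1.0 - math.erf(beta / math.sqrt(2.0)))
```

Inverting this relation (finding the response level corresponding to a target beta) is the idea behind the inverse first-order reliability algorithm mentioned in the abstract; for nonlinear performance functions the index must instead be found by iteration.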
Uncertainty and the Great Recession
Born, Benjamin; Breuer, Sebastian; Elstner, Steffen
2014-01-01
Has heightened uncertainty been a major contributor to the Great Recession and the slow recovery in the U.S.? To answer this question, we identify exogenous changes in six uncertainty proxies and quantify their contributions to GDP growth and the unemployment rate. Our results are threefold. First, only a minor part of the rise in uncertainty measures during the Great Recession was driven by exogenous uncertainty shocks. Second, while increased uncertainty explains less than one percentage po...
Uncertainties in repository modeling
International Nuclear Information System (INIS)
The distant future is very difficult to predict. Unfortunately, our regulators are being encouraged to extend their regulatory period from the standard 10,000 years to 1 million years. Such overconfidence is not justified due to uncertainties in dating, calibration, and modeling
International Nuclear Information System (INIS)
The notion of 'risk' is discussed in its social and technological contexts, leading to an investigation of the terms factuality, hypotheticality, uncertainty, and vagueness, and to the problems of acceptance and acceptability especially in the context of political decision finding. (DG)
Uncertainties in repository modeling
Energy Technology Data Exchange (ETDEWEB)
Wilson, J.R.
1996-12-31
The distant future is very difficult to predict. Unfortunately, our regulators are being encouraged to extend their regulatory period from the standard 10,000 years to 1 million years. Such overconfidence is not justified due to uncertainties in dating, calibration, and modeling.
Proportional Representation with Uncertainty
Francesco De Sinopoli; Giovanna Iannantuoni; Elena Manzoni; Carlos Pimienta
2014-01-01
We introduce a model with strategic voting in a parliamentary election with proportional representation and uncertainty about voters’ preferences. In any equilibrium of the model, most voters only vote for those parties whose positions are extreme. In the resulting parliament, a consensus government forms and the policy maximizing the sum of utilities of the members of the government is implemented.
Uncertainty and nonseparability
de La Torre, A. C.; Catuogno, P.; Ferrando, S.
1989-06-01
A quantum covariance function is introduced whose real and imaginary parts are related to the independent contributions to the uncertainty principle: noncommutativity of the operators and nonseparability. It is shown that factorizability of states is a sufficient but not necessary condition for separability. It is suggested that all quantum effects could be considered to be a consequence of nonseparability alone.
DEFF Research Database (Denmark)
Greasley, David; Madsen, Jakob B.
2006-01-01
A severe collapse of fixed capital formation distinguished the onset of the Great Depression from other investment downturns between the world wars. Using a model estimated for the years 1890-2000, we show that the expected profitability of capital measured by Tobin's q, and the uncertainty surro...... depression: rather, its slump helped to propel the wider collapse...
GIS and uncertainty management
Czech Academy of Sciences Publication Activity Database
Klimešová, Dana
Enschede : ISPRS ITC, 2006, s. 98-101. [ISPRS. Mid-term Symposium 2006. Enschede (NL), 08.05.2006-11.05.2006] Institutional research plan: CEZ:AV0Z10750506 Keywords: spatial information sciences * GIS * uncertainty correction * web services Subject RIV: BC - Control Systems Theory
Institute of Scientific and Technical Information of China (English)
范梦璇
2015-01-01
Employee change-related uncertainty is a condition in which, under the current continually changing business environment, organizations also have to change; the changes include strategic direction, structure, and staffing levels to help the company remain competitive (Armenakis & Bedeian, 1999); however, these
Risk, Uncertainty and Entrepreneurship
DEFF Research Database (Denmark)
Koudstaal, Martin; Sloof, Randolph; Van Praag, Mirjam
Theory predicts that entrepreneurs have distinct attitudes towards risk and uncertainty, but empirical evidence is mixed. To better understand the unique behavioral characteristics of entrepreneurs and the causes of these mixed results, we perform a large ‘lab-in-the-field’ experiment comparing e...
Uncertainty In Quantum Computation
Kak, Subhash
2002-01-01
We examine the effect of previous history on starting a computation on a quantum computer. Specifically, we assume that the quantum register has some unknown state on it, and it is required that this state be cleared and replaced by a specific superposition state without any phase uncertainty, as needed by quantum algorithms. We show that, in general, this task is computationally impossible.
Propagation properties of partially polarized Gaussian Schell-model beams through an astigmatic lens
Pan, Liuzhan; Wang, Beizhan; Lu, Baida
2005-09-01
Based on the beam coherent-polarization (BCP) matrix approach and propagation law of partially coherent beams, analytical propagation equations of partially polarized Gaussian Schell-model (PGSM) beams through an astigmatic lens are derived, which enables us to study the propagation-induced polarization changes and irradiance distributions at any propagation distance of PGSM beams through an astigmatic lens within the framework of the paraxial approximation. Detailed numerical results for a PGSM beam passing through an astigmatic lens are presented. A comparison with the aberration-free case is made, and shows that the astigmatism affects the propagation properties of PGSM beams.
Sensitivity and uncertainty analysis of the PATHWAY radionuclide transport model
International Nuclear Information System (INIS)
Procedures were developed for the uncertainty and sensitivity analysis of a dynamic model of radionuclide transport through human food chains. Uncertainty in model predictions was estimated by propagation of parameter uncertainties using a Monte Carlo simulation technique. Sensitivity of model predictions to individual parameters was investigated using the partial correlation coefficient of each parameter with model output. Random values produced for the uncertainty analysis were used in the correlation analysis for sensitivity. These procedures were applied to the PATHWAY model which predicts concentrations of radionuclides in foods grown in Nevada and Utah and exposed to fallout during the period of atmospheric nuclear weapons testing in Nevada. Concentrations and time-integrated concentrations of iodine-131, cesium-136, and cesium-137 in milk and other foods were investigated. 9 figs., 13 tabs
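The two-step procedure described above (Monte Carlo propagation of parameter uncertainty, then a partial-correlation sensitivity ranking computed on the same random sample) can be sketched for a drastically simplified stand-in for the PATHWAY model. The parameter names, distributions, and values below are illustrative assumptions, not PATHWAY's actual inputs:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical parameters of a simplified milk-pathway model (illustrative):
# deposition (Bq/m^2), pasture interception fraction, feed-to-milk transfer (d/L)
dep  = rng.lognormal(mean=np.log(100.0), sigma=0.3, size=n)
intc = rng.lognormal(mean=np.log(0.4),   sigma=0.2, size=n)
fm   = rng.lognormal(mean=np.log(0.01),  sigma=0.4, size=n)

conc = dep * intc * fm          # toy stand-in for the transport model output

def partial_corr(x, y, others):
    """Correlate the parts of x and y not explained by the other parameters."""
    A = np.column_stack([np.ones_like(x)] + others)
    rx = x - A @ np.linalg.lstsq(A, x, rcond=None)[0]
    ry = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

params = {"deposition": dep, "interception": intc, "transfer": fm}
for name, p in params.items():
    others = [v for k, v in params.items() if k != name]
    print(f"{name:>12}: PCC = {partial_corr(p, conc, others):+.2f}")
```

Reusing the Monte Carlo sample for the correlation analysis, as the abstract notes, avoids a second sampling pass; the parameter with the widest distribution dominates the output uncertainty and receives the highest partial correlation coefficient.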
A Stochastic Nonlinear Water Wave Model for Efficient Uncertainty Quantification
Bigoni, Daniele; Eskilsson, Claes
2014-01-01
A major challenge in next-generation industrial applications is to improve numerical analysis by quantifying uncertainties in predictions. In this work we present a stochastic formulation of a fully nonlinear and dispersive potential flow water wave model for the probabilistic description of the evolution of waves. This model is discretized using the Stochastic Collocation Method (SCM), which provides an approximate surrogate of the model. This surrogate can be used to accurately and efficiently estimate the probability distribution of the unknown time-dependent stochastic solution after the forward propagation of uncertainties. We revisit experimental benchmarks often used for the validation of deterministic water wave models. We do this using a fully nonlinear and dispersive model and show how uncertainty in the model input can influence the model output. Based on numerical experiments and assumed uncertainties in boundary data, our analysis reveals that some of the known discrepancies from deterministic simulation in compa...
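The core idea of stochastic collocation (evaluate the deterministic solver only at carefully chosen nodes in the random input, then recover output statistics from a quadrature rule) can be sketched with a one-dimensional Gaussian input and Gauss-Hermite nodes. The toy nonlinear map below stands in for the water wave solver; the input distribution is an assumption for illustration:

```python
import numpy as np

# Uncertain input: wave amplitude a ~ N(mu, sigma^2) (illustrative assumption)
mu, sigma = 1.0, 0.1

def model(a):
    return a**3 + np.sin(a)     # placeholder for the deterministic wave model

# Gauss-Hermite rule: E[f(A)] = (1/sqrt(pi)) * sum_i w_i f(mu + sigma*sqrt(2)*x_i)
nodes, weights = np.polynomial.hermite.hermgauss(7)
samples = model(mu + sigma * np.sqrt(2.0) * nodes)
mean = np.sum(weights * samples) / np.sqrt(np.pi)
second = np.sum(weights * samples**2) / np.sqrt(np.pi)
var = second - mean**2
print(f"mean = {mean:.4f}, std = {np.sqrt(var):.4f}")
```

Seven solver evaluations here replace the thousands a Monte Carlo estimate would need for comparable accuracy on a smooth response, which is the efficiency argument the abstract makes; in higher stochastic dimensions, sparse grids serve the same role.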
Effect of material uncertainties on dynamic analysis of piezoelectric fans
Srivastava, Swapnil; Yadav, Shubham Kumar; Mukherjee, Sujoy
2015-04-01
A piezofan is a resonant device that uses a piezoceramic material to induce oscillations in a cantilever beam. In this study, lumped-mass modelling is used to analyze a piezoelectric fan. Uncertainties are associated with piezoelectric structures for several reasons, such as variations during the manufacturing process, temperature, and the presence of an adhesive layer between the piezoelectric actuator/sensor and the shim stock. The presence of uncertainty in the piezoelectric materials can influence the dynamic behavior of the piezoelectric fan, such as its natural frequency and tip deflection. Moreover, these quantities also affect the performance parameters of the piezoelectric fan. Uncertainty analysis is performed using classical Monte Carlo Simulation (MCS). It is found that the propagation of uncertainty causes significant deviations from the baseline deterministic predictions, which also affect the achievable performance of the piezofan. The numerical results in this paper provide useful bounds on several performance parameters of the cooling fan and will enhance confidence in the design process.
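A minimal version of this kind of Monte Carlo study can be sketched with a lumped cantilever model of the fan blade, propagating assumed scatter in the Young's modulus and shim thickness into the first natural frequency. Every dimension, material value, and coefficient of variation below is an assumption for illustration, not data from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# Nominal lumped-parameter values for a cantilever piezofan blade (assumed):
L, b, h = 60e-3, 12e-3, 0.4e-3        # length, width, thickness (m)
rho = 8500.0                          # brass shim density (kg/m^3)
E_nom = 100e9                         # Young's modulus (Pa)

# Manufacturing/material scatter: 5% CoV on E, 2% on thickness (assumed)
E = rng.normal(E_nom, 0.05 * E_nom, n)
t = rng.normal(h, 0.02 * h, n)

I = b * t**3 / 12.0                   # second moment of area
k = 3.0 * E * I / L**3                # cantilever tip stiffness
m_eff = (33.0 / 140.0) * rho * b * t * L   # effective tip mass, first mode
f1 = np.sqrt(k / m_eff) / (2.0 * np.pi)    # first natural frequency (Hz)

print(f"first natural frequency: {f1.mean():.1f} Hz +/- {f1.std():.1f} Hz")
```

The spread of `f1` across the sample is the kind of deviation from the deterministic baseline the abstract reports; since resonant piezofans are driven at their first natural frequency, this spread translates directly into bounds on achievable tip deflection and flow performance.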
Simulation of excitation and propagation of pico-second ultrasound
International Nuclear Information System (INIS)
This paper presents an analytic and numerical simulation of the generation and propagation of pico-second ultrasound with nano-scale wavelength, enabling the production of bulk waves in thin films. An analytic model of laser-matter interaction and elasto-dynamic wave propagation is introduced to calculate the elastic strain pulse in microstructures. The model includes the laser-pulse absorption on the material surface, heat transfer from a photon to the elastic energy of a phonon, and acoustic wave propagation to formulate the governing equations of ultra-short ultrasound. The excitation and propagation of acoustic pulses produced by ultra-short laser pulses are numerically simulated for an aluminum substrate using the finite-difference method and compared with the analytical solution. Furthermore, Fourier analysis was performed to investigate the frequency spectrum of the simulated elastic wave pulse. It is concluded that a pico-second bulk wave with a very high frequency of up to hundreds of gigahertz is successfully generated in metals using a 100-fs laser pulse and that it can be propagated in the direction of thickness for thickness less than 100 nm.
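The numerical side of such a simulation can be sketched as a one-dimensional explicit finite-difference scheme for the elastodynamic wave equation, with the laser-generated thermoelastic source idealized as an initial strain profile decaying over the optical absorption depth. The material values are standard for aluminum; the film thickness, grid, and source depth are illustrative assumptions:

```python
import numpy as np

c = 6420.0                # longitudinal sound speed in Al (m/s)
L = 200e-9                # film thickness, 200 nm (assumed)
nx = 2000
dx = L / nx
dt = 0.4 * dx / c         # CFL-stable time step (Courant number 0.4)
depth = 8e-9              # absorption depth of the laser source (assumed)

x = np.linspace(0.0, L, nx)
u_prev = np.exp(-(x / depth)**2)   # initial pulse localized at the surface
u = u_prev.copy()                  # zero initial velocity

nsteps = int(0.5 * L / c / dt)     # propagate roughly half the film thickness
r2 = (c * dt / dx)**2
for _ in range(nsteps):
    u_next = np.empty_like(u)
    u_next[1:-1] = 2*u[1:-1] - u_prev[1:-1] + r2*(u[2:] - 2*u[1:-1] + u[:-2])
    u_next[0], u_next[-1] = u_next[1], u_next[-2]   # stress-free surfaces
    u_prev, u = u, u_next

peak_x = x[np.argmax(np.abs(u[nx // 4:])) + nx // 4]
print(f"pulse peak after {nsteps*dt*1e12:.2f} ps: x = {peak_x*1e9:.1f} nm")
```

An 8 nm source depth gives a pulse duration on the order of depth/c, about a picosecond, so its spectrum extends to hundreds of gigahertz, consistent with the abstract's conclusion; the pulse travels through the film at the bulk sound speed.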
Simulation of excitation and propagation of pico-second ultrasound
Energy Technology Data Exchange (ETDEWEB)
Yang, Seung Yong; Kim, No Kyu [Dept. of Mechanical Engineering, Korea University of Technology and Education, Chunan (Korea, Republic of)
2014-12-15
Vegetative propagation of jojoba
Energy Technology Data Exchange (ETDEWEB)
Low, C.B.; Hackett, W.P.
1981-03-01
Development of jojoba as an economically viable crop requires improved methods of propagation and culture. Rooting experiments were performed on cutting material collected from wild jojoba plants. A striking seasonal fluctuation in rooting potential was found. Jojoba plants can be successfully propagated from stem cuttings made during spring, summer, and, to some extent, fall. Variability among jojoba plants may also play a role in rooting potential, although it is not as important as season. In general, the use of auxin (4,000 ppm indolebutyric acid) on jojoba cuttings during periods of high rooting potential promotes adventitious root formation, but during periods of low rooting potential it has no effect or is even slightly inhibitory. In the greenhouse, cutting-grown plants apparently reached reproductive maturity sooner than those grown from seed. If this observation holds true for plants transplanted into the field, earlier fruit production by cutting-grown plants would mean earlier recovery of initial planting and maintenance costs.
HIGH AMPLITUDE PROPAGATED CONTRACTIONS
Bharucha, Adil E.
2012-01-01
While most colonic motor activity is segmental and non-propulsive, colonic high amplitude propagated contractions (HAPC) can transfer colonic contents over long distances and often precede defecation. HAPC occur spontaneously, in response to pharmacological agents or colonic distention. In this issue of Neurogastroenterology and Motility, Rodriguez and colleagues report that anal relaxation during spontaneous and bisacodyl-induced HAPC exceeds anal relaxation during rectal distention in const...
Infrared finite electron propagator
International Nuclear Information System (INIS)
We investigate the properties of a dressed electron which reduces, in a particular class of gauges, to the usual fermion. A one-loop calculation of the propagator is presented. We show explicitly that an infrared finite, multiplicative, mass shell renormalization is possible for this dressed electron, or, equivalently, for the usual fermion in the above-mentioned gauges. The results are in complete accord with previous conjectures. copyright 1997 The American Physical Society
International Nuclear Information System (INIS)
The Analytic Hierarchy Process (AHP) has been used to help determine the importance of components and phenomena in thermal-hydraulic safety analyses of nuclear reactors. The AHP results are based, in part, on expert opinion. Therefore, it is prudent to evaluate the uncertainty of the AHP ranks of importance. Prior applications have addressed uncertainty with experimental data comparisons and bounding sensitivity calculations. These methods work well when a sufficient experimental database exists to justify the comparisons. However, in the case of limited or no experimental data, the size of the uncertainty is normally made conservatively large. Accordingly, the author has taken another approach, that of performing a statistically based uncertainty analysis. The new work is based on prior evaluations of the importance of components and phenomena in the thermal-hydraulic safety analysis of the Advanced Neutron Source Reactor (ANSR), a new facility now in the design phase. The uncertainty during large-break loss-of-coolant and decay heat removal scenarios is estimated by assigning a probability distribution function (pdf) to the potential error in the initial expert estimates of pair-wise importance between the components. Using a Monte Carlo sampling technique, the error pdfs are propagated through the AHP software solutions to determine a pdf of uncertainty in the system-wide importance of each component. To enhance the generality of the results, the study of one other problem having a different number of elements is reported, as are the effects of a larger assumed pdf error in the expert ranks. Validation of the Monte Carlo sample size and repeatability are also documented.
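The procedure described here can be sketched in miniature: derive AHP importance weights as the principal eigenvector of a pairwise-comparison matrix, then perturb each expert judgment with an assumed error pdf, re-solve, and collect the distribution of weights. The 3x3 matrix and the lognormal error magnitude below are hypothetical, chosen only to illustrate the mechanics:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 3x3 AHP pairwise-comparison matrix from expert judgment
# (values on Saaty's 1-9 scale; illustrative only).
A = np.array([[1.0,     3.0, 5.0],
              [1.0/3.0, 1.0, 2.0],
              [1.0/5.0, 0.5, 1.0]])

def ahp_weights(M):
    """Importance weights = normalized principal right eigenvector."""
    vals, vecs = np.linalg.eig(M)
    v = np.abs(vecs[:, np.argmax(vals.real)].real)
    return v / v.sum()

# Monte Carlo: perturb each upper-triangular judgment with lognormal error
# (sigma assumed), keep reciprocal symmetry, re-solve, collect the weight pdf.
n = 5000
W = np.empty((n, 3))
for s in range(n):
    M = np.ones((3, 3))
    for i in range(3):
        for j in range(i + 1, 3):
            M[i, j] = A[i, j] * rng.lognormal(0.0, 0.15)
            M[j, i] = 1.0 / M[i, j]
    W[s] = ahp_weights(M)

for i, (m, sd) in enumerate(zip(W.mean(axis=0), W.std(axis=0))):
    print(f"element {i}: weight = {m:.3f} +/- {sd:.3f}")
```

Enforcing the reciprocal structure M[j,i] = 1/M[i,j] before each re-solve matters: perturbing both triangles independently would produce matrices no expert could have stated, inflating the resulting uncertainty in the ranks.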