Hong, Jinglan; Shaked, Shanna; Rosenbaum, Ralph K.;
2010-01-01
Background, aim, and scope: Uncertainty information is essential for the proper use of Life Cycle Assessment (LCA) and environmental assessments in decision making. So far, parameter uncertainty propagation has mainly been studied using Monte Carlo techniques that are relatively computationally... approach to the comparison of two or more LCA scenarios. Since in LCA it is crucial to account for both common inventory processes and common impact assessment characterization factors among the different scenarios, we further develop the approach to address this dependency. We provide a method to easily... tested cases, we obtained a good concordance between the Monte Carlo and the Taylor series expansion methods regarding the probability that one scenario is better than the other. Discussion: The Taylor series expansion method addresses the crucial need of accounting for dependencies in LCA, both for...
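A compact way to picture what the Taylor series expansion method buys here (a schematic sketch in notation chosen for this listing, not the authors' own formulas): the first-order variance of a model output, and the covariance term that encodes shared inventory processes and characterization factors when two scenarios are compared.

```latex
% First-order (Taylor) propagation for an output f(x_1,\dots,x_n):
\operatorname{Var}(f) \approx \sum_i \left(\frac{\partial f}{\partial x_i}\right)^{\!2} \operatorname{Var}(x_i)
  + 2\sum_{i<j} \frac{\partial f}{\partial x_i}\,\frac{\partial f}{\partial x_j}\,\operatorname{Cov}(x_i, x_j)

% Comparing two scenario scores S_A and S_B that share parameters:
\operatorname{Var}\!\left(\ln\frac{S_A}{S_B}\right) \approx
  \operatorname{Var}(\ln S_A) + \operatorname{Var}(\ln S_B) - 2\operatorname{Cov}(\ln S_A, \ln S_B)
```

The covariance term is why the dependency treatment matters: common processes and factors make Cov(ln S_A, ln S_B) positive, which shrinks the uncertainty of the comparison and sharpens the probability that one scenario outperforms the other.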
Sciacchitano, Andrea; Wieneke, Bernhard
2016-08-01
This paper discusses the propagation of the instantaneous uncertainty of PIV measurements to statistical and instantaneous quantities of interest derived from the velocity field. The expression of the uncertainty of vorticity, velocity divergence, mean value and Reynolds stresses is derived. It is shown that the uncertainty of vorticity and velocity divergence requires the knowledge of the spatial correlation between the error of the x and y particle image displacement, which depends upon the measurement spatial resolution. The uncertainty of statistical quantities is often dominated by the random uncertainty due to the finite sample size and decreases with the square root of the effective number of independent samples. Monte Carlo simulations are conducted to assess the accuracy of the uncertainty propagation formulae. Furthermore, three experimental assessments are carried out. In the first experiment, a turntable is used to simulate a rigid rotation flow field. The estimated uncertainty of the vorticity is compared with the actual vorticity error root-mean-square, with differences between the two quantities within 5–10% for different interrogation window sizes and overlap factors. A turbulent jet flow is investigated in the second experimental assessment. The reference velocity, which is used to compute the reference value of the instantaneous flow properties of interest, is obtained with an auxiliary PIV system, which features a higher dynamic range than the measurement system. Finally, the uncertainty quantification of statistical quantities is assessed via PIV measurements in a cavity flow. The comparison between estimated uncertainty and actual error demonstrates the accuracy of the proposed uncertainty propagation methodology.
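The statistical part of this propagation can be illustrated in a few lines (a sketch under common assumptions, not the authors' code): the uncertainty of a mean estimated from correlated samples scales with the square root of an effective sample size, which can be estimated from the autocorrelation of the series.

```python
# Sketch: random uncertainty of a mean from N correlated samples,
# u(mean) = std / sqrt(N_eff), with N_eff estimated from the autocorrelation.
import numpy as np

def effective_sample_size(x):
    """N_eff = N / (1 + 2*sum(rho_k)), summing positive lags to the first zero crossing."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    acf = np.correlate(x, x, mode="full")[n - 1:] / (n * np.var(x))
    rho_sum = 0.0
    for rho in acf[1:]:
        if rho <= 0.0:
            break
        rho_sum += rho
    return n / (1.0 + 2.0 * rho_sum)

rng = np.random.default_rng(0)
u = np.empty(2000)
u[0] = 0.0
for i in range(1, len(u)):          # AR(1) series standing in for correlated velocity samples
    u[i] = 0.9 * u[i - 1] + rng.normal()
u = 10.0 + u                        # add a mean flow value

n_eff = effective_sample_size(u)
print(f"N = {len(u)}, N_eff = {n_eff:.0f}, "
      f"u(mean) = {np.std(u, ddof=1) / np.sqrt(n_eff):.3f}")
```

With strong correlation, N_eff is far smaller than N, so using N directly would understate the uncertainty of the mean, which is the effect the abstract highlights.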
Uncertainty propagation in nuclear forensics
Uncertainty propagation formulae are presented for age dating in support of nuclear forensics. The age of radioactive material in this context refers to the time elapsed since a particular radionuclide was chemically separated from its decay product(s). The decay of the parent radionuclide and ingrowth of the daughter nuclide are governed by statistical decay laws. Mathematical equations allow calculation of the age of specific nuclear material through the atom ratio between parent and daughter nuclides, or through the activity ratio provided that the daughter nuclide is also unstable. The derivation of the uncertainty formulae of the age may present some difficulty to the user community and so the exact solutions, some approximations, a graphical representation and their interpretation are presented in this work. Typical nuclides of interest are actinides in the context of non-proliferation commitments. The uncertainty analysis is applied to a set of important parent-daughter pairs and the need for more precise half-life data is examined.
Highlights:
• Uncertainty propagation formulae for age dating with nuclear chronometers.
• Applied to parent-daughter pairs used in nuclear forensics.
• Investigated need for better half-life data.
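A minimal numerical sketch of such a chronometer (illustrative only: the half-lives are rounded literature values for the 234U/230Th pair, the measured ratio and all uncertainties are invented), assuming complete separation at t = 0 and no initial daughter:

```python
# Sketch: age from the daughter/parent atom ratio, with first-order
# uncertainty propagation via numerical partial derivatives.
import numpy as np

LN2 = np.log(2.0)

def age(ratio, t_half_parent, t_half_daughter):
    lp, ld = LN2 / t_half_parent, LN2 / t_half_daughter
    # Invert R = lp/(ld-lp) * (1 - exp(-(ld-lp)*t)), from the Bateman equations.
    return -np.log(1.0 - ratio * (ld - lp) / lp) / (ld - lp)

def age_uncertainty(ratio, u_ratio, thp, u_thp, thd, u_thd):
    """Quadrature sum of (df/dx_i * u_i)^2 with forward differences."""
    t0 = age(ratio, thp, thd)
    u2 = 0.0
    for i, (val, unc) in enumerate([(ratio, u_ratio), (thp, u_thp), (thd, u_thd)]):
        args = [ratio, thp, thd]
        h = 1e-6 * val
        args[i] = val + h
        dfdx = (age(*args) - t0) / h
        u2 += (dfdx * unc) ** 2
    return np.sqrt(u2)

R = 1.40e-4                      # measured 230Th/234U atom ratio (invented)
t = age(R, 245500.0, 75380.0)    # half-lives in years (rounded literature values)
u_t = age_uncertainty(R, 0.02e-4, 245500.0, 600.0, 75380.0, 300.0)
print(f"age = {t:.1f} +/- {u_t:.1f} years")
```

Running the partial-derivative budget term by term shows directly whether the ratio measurement or the half-life data dominates, which is the question the highlights raise.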
Uncertainty Propagation in an Ecosystem Nutrient Budget.
New aspects and advancements in classical uncertainty propagation methods were used to develop a nutrient budget with associated error for a northern Gulf of Mexico coastal embayment. Uncertainty was calculated for budget terms by propagating the standard error and degrees of fr...
Stochastic and epistemic uncertainty propagation in LCA
Clavreul, Julie; Guyonnet, Dominique; Tonini, Davide;
2013-01-01
When performing uncertainty propagation, most LCA practitioners choose to represent uncertainties by single probability distributions and to propagate them using stochastic methods. However, the selection of single probability distributions often appears arbitrary when faced with scarce information... If the information is rich, then a purely statistical representation mode is adequate, but if the information is scarce, then it may be better conveyed by possibility distributions...
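The contrast drawn here can be made concrete with a toy model (all numbers and the model I = a * b are invented for illustration): rich information propagated stochastically versus scarce information propagated through the alpha-cuts of triangular possibility distributions.

```python
# Sketch: probabilistic (Monte Carlo) vs possibilistic (alpha-cut) propagation.
import numpy as np

rng = np.random.default_rng(1)

# Rich information: single probability distributions, stochastic propagation.
a = rng.lognormal(mean=np.log(2.0), sigma=0.1, size=100_000)
b = rng.normal(5.0, 0.5, size=100_000)
impact = a * b
q_lo, q_hi = np.percentile(impact, [2.5, 97.5])
print(f"probabilistic: mean={impact.mean():.2f}, 95% interval=[{q_lo:.2f}, {q_hi:.2f}]")

# Scarce information: triangular possibility distributions, interval propagation.
def alpha_cut(low, mode, high, alpha):
    """Interval of membership >= alpha for a triangular possibility distribution."""
    return (low + alpha * (mode - low), high - alpha * (high - mode))

for alpha in (0.0, 0.5, 1.0):
    a_lo, a_hi = alpha_cut(1.5, 2.0, 3.0, alpha)
    b_lo, b_hi = alpha_cut(4.0, 5.0, 6.5, alpha)
    # The model is monotone in both inputs, so endpoints map to endpoints.
    print(f"possibilistic alpha={alpha}: [{a_lo * b_lo:.2f}, {a_hi * b_hi:.2f}]")
```

The alpha = 0 cut gives the full support and alpha = 1 the most plausible value, so the possibilistic result is a nested family of intervals rather than a single distribution.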
The Role of Uncertainty, Awareness, and Trust in Visual Analytics.
Sacha, Dominik; Senaratne, Hansi; Kwon, Bum Chul; Ellis, Geoffrey; Keim, Daniel A
2016-01-01
Visual analytics supports humans in generating knowledge from large and often complex datasets. Evidence is collected, collated and cross-linked with our existing knowledge. In the process, a myriad of analytical and visualisation techniques are employed to generate a visual representation of the data. These often introduce their own uncertainties, in addition to the ones inherent in the data, and these propagated and compounded uncertainties can result in impaired decision making. The user's confidence or trust in the results depends on the extent of the user's awareness of the underlying uncertainties generated on the system side. This paper unpacks the uncertainties that propagate through visual analytics systems, illustrates how humans' perceptual and cognitive biases influence the user's awareness of such uncertainties, and how this affects the user's trust building. The knowledge generation model for visual analytics is used to provide a terminology and framework to discuss the consequences of these aspects in knowledge construction and, through examples, machine uncertainty is compared to human trust measures with provenance. Furthermore, guidelines for the design of uncertainty-aware systems are presented that can aid the user in better decision making. PMID:26529704
Towards a complete propagation of uncertainties in depletion calculations
Martinez, J.S. [Universidad Politecnica de Madrid (Spain). Dept. of Nuclear Engineering; Gesellschaft fuer Anlagen- und Reaktorsicherheit mbH (GRS), Garching (Germany); Zwermann, W.; Gallner, L.; Puente-Espel, Federico; Velkov, K.; Hannstein, V. [Gesellschaft fuer Anlagen- und Reaktorsicherheit mbH (GRS), Garching (Germany); Cabellos, O. [Universidad Politecnica de Madrid (Spain). Dept. of Nuclear Engineering
2013-07-01
Propagation of nuclear data uncertainties to calculated values is of interest for design purposes and library evaluation. XSUSA, developed at GRS, propagates cross-section uncertainties through nuclear calculations. In depletion simulations, fission yields and decay data are also involved and are a possible source of uncertainty that should be taken into account. We have developed tools to generate varied fission-yield and decay libraries and to propagate uncertainties through depletion in order to complete the XSUSA uncertainty assessment capabilities. A generic test to probe the methodology is defined and discussed. (orig.)
Probabilistic Methods for Uncertainty Propagation Applied to Aircraft Design
Green, Lawrence L.; Lin, Hong-Zong; Khalessi, Mohammad R.
2002-01-01
Three methods of probabilistic uncertainty propagation and quantification (the method of moments, Monte Carlo simulation, and a nongradient simulation search method) are applied to an aircraft analysis and conceptual design program to demonstrate design under uncertainty. The chosen example problems appear to have discontinuous design spaces and thus these examples pose difficulties for many popular methods of uncertainty propagation and quantification. However, specific implementation features of the first and third methods chosen for use in this study enable successful propagation of small uncertainties through the program. Input uncertainties in two configuration design variables are considered. Uncertainties in aircraft weight are computed. The effects of specifying required levels of constraint satisfaction with specified levels of input uncertainty are also demonstrated. The results show, as expected, that the designs under uncertainty are typically heavier and more conservative than those in which no input uncertainties exist.
Quantifying uncertainty in nuclear analytical measurements
The lack of international consensus on the expression of uncertainty in measurements was recognised by the late 1970s and led, after the issuance of a series of rather generic recommendations, to the publication of a general guide, known as GUM, the Guide to the Expression of Uncertainty in Measurement. This publication, issued in 1993, was based on co-operation over several years by the Bureau International des Poids et Mesures, the International Electrotechnical Commission, the International Federation of Clinical Chemistry, the International Organization for Standardization (ISO), the International Union of Pure and Applied Chemistry, the International Union of Pure and Applied Physics and the Organisation internationale de metrologie legale. The purpose was to promote full information on how uncertainty statements are arrived at and to provide a basis for harmonized reporting and the international comparison of measurement results. The need to provide more specific guidance to different measurement disciplines was soon recognized, and the field of analytical chemistry was addressed by EURACHEM in 1995 in the first edition of a guidance report on Quantifying Uncertainty in Analytical Measurements, produced by a group of experts from the field. That publication translated the general concepts of the GUM into specific applications for analytical laboratories and illustrated the principles with a series of selected examples as a didactic tool. Based on feedback from actual practice, the EURACHEM publication was extensively reviewed in 1997-1999 under the auspices of the Co-operation on International Traceability in Analytical Chemistry (CITAC), and a second edition was published in 2000. Still, except for a single example on the measurement of radioactivity in GUM, the field of nuclear and radiochemical measurements was not covered. The explicit requirement of ISO standard 17025:1999, General Requirements for the Competence of Testing and Calibration
Uncertainty propagation in fault trees using a quantile arithmetic methodology
A methodology based on Quantile Arithmetic, the probabilistic analog to Interval Analysis (Dempster 1969), is proposed for the computation of uncertainty propagation in Fault Tree Analysis (Apostolakis 1977). The basic events' continuous probability density functions are represented by equivalent discrete distributions through dividing them into a number of quantiles N. Quantile Arithmetic is then used to perform the binary arithmetical operations corresponding to the logical gates in the Boolean expression for the Top Event of a given Fault Tree. The computational characteristics of the proposed methodology as compared with the exact analytical solutions are discussed for the cases of the summation of M normal variables. It is further compared with the Monte Carlo method through the use of the efficiency ratio defined as the product of the labor and error ratios. (orig./HP)
Remaining Useful Life Estimation in Prognosis: An Uncertainty Propagation Problem
Sankararaman, Shankar; Goebel, Kai
2013-01-01
The estimation of remaining useful life is significant in the context of prognostics and health monitoring, and the prediction of remaining useful life is essential for online operations and decision-making. However, it is challenging to accurately predict the remaining useful life in practical aerospace applications due to the presence of various uncertainties that affect prognostic calculations, and in turn, render the remaining useful life prediction uncertain. It is challenging to identify and characterize the various sources of uncertainty in prognosis, understand how each of these sources of uncertainty affect the uncertainty in the remaining useful life prediction, and thereby compute the overall uncertainty in the remaining useful life prediction. In order to achieve these goals, this paper proposes that the task of estimating the remaining useful life must be approached as an uncertainty propagation problem. In this context, uncertainty propagation methods which are available in the literature are reviewed, and their applicability to prognostics and health monitoring are discussed.
Navacerrada Saturio, Maria Angeles; Díaz Sanchidrián, César; Pedrero González, Antonio; Iglesias Martínez, Luis
2008-01-01
The new Spanish Regulation in Building Acoustics establishes values and limits for the different acoustic magnitudes, whose fulfilment can be verified by means of field measurements. In this sense, an essential aspect of a field measurement is to report the measured magnitude and the uncertainty associated with that magnitude. In calculating the uncertainty it is very common to follow the uncertainty propagation method as described in the Guide to the Expression of Uncertainty in Measurements (GUM...
Uncertainties in Atomic Data and Their Propagation Through Spectral Models. I.
Bautista, M. A.; Fivet, V.; Quinet, P.; Dunn, J.; Gull, T. R.; Kallman, T. R.; Mendoza, C.
2013-01-01
We present a method for computing uncertainties in spectral models, i.e., level populations, line emissivities, and emission line ratios, based upon the propagation of uncertainties originating from atomic data. We provide analytic expressions, in the form of linear sets of algebraic equations, for the coupled uncertainties among all levels. These equations can be solved efficiently for any set of physical conditions and uncertainties in the atomic data. We illustrate our method applied to spectral models of O III and Fe II and discuss the impact of the uncertainties on atomic systems under different physical conditions. As to intrinsic uncertainties in theoretical atomic data, we propose that these uncertainties can be estimated from the dispersion in the results from various independent calculations. This technique provides excellent results for the uncertainties in A-values of forbidden transitions in [Fe II]. Key words: atomic data - atomic processes - line: formation - methods: data analysis - molecular data - molecular processes - techniques: spectroscopic
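Schematically (in notation chosen for this listing, not the paper's own), the "linear sets of algebraic equations" can be pictured as follows: if the level populations n solve a rate-matrix system built from the atomic data, a first-order perturbation of the rates yields a linear system of the same form for the population uncertainties.

```latex
% Steady-state populations from a rate matrix A built from the atomic data:
A\,\mathbf{n} = \mathbf{b}
% Perturbing the atomic data, A \to A + \delta A,\ \mathbf{b} \to \mathbf{b} + \delta\mathbf{b},
% and keeping first-order terms:
(A + \delta A)(\mathbf{n} + \delta\mathbf{n}) \simeq \mathbf{b} + \delta\mathbf{b}
\quad\Longrightarrow\quad
A\,\delta\mathbf{n} = \delta\mathbf{b} - \delta A\,\mathbf{n}
% i.e. one linear algebraic system coupling the uncertainties of all levels.
```

Because the same matrix A appears on the left, one factorization serves for any set of atomic-data perturbations, which is why the approach is efficient across physical conditions.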
Uncertainty propagation in locally damped dynamic systems
Cortes Mochales, Lluis; Ferguson, Neil S.; Bhaskar, Atul
2012-01-01
In the field of stochastic structural dynamics, perturbation methods are widely used to estimate the response statistics of uncertain systems. When large built-up systems are to be modelled in the mid-frequency range, perturbation methods are often combined with finite element model reduction techniques in order to considerably reduce the computation time of the response. Existing methods based on Component Mode Synthesis (CMS) allow the uncertainties in the system parameters to be treated ...
The uncertainty in the redshift distributions of galaxies has a significant potential impact on the cosmological parameter values inferred from multi-band imaging surveys. The accuracy of the photometric redshifts measured in these surveys depends not only on the quality of the flux data, but also on a number of modeling assumptions that enter into both the training set and spectral energy distribution (SED) fitting methods of photometric redshift estimation. In this work we focus on the latter, considering two types of modeling uncertainties: uncertainties in the SED template set and uncertainties in the magnitude and type priors used in a Bayesian photometric redshift estimation method. We find that SED template selection effects dominate over magnitude prior errors. We introduce a method for parameterizing the resulting ignorance of the redshift distributions, and for propagating these uncertainties to uncertainties in cosmological parameters.
New challenges on uncertainty propagation assessment of flood risk analysis
Martins, Luciano; Aroca-Jiménez, Estefanía; Bodoque, José M.; Díez-Herrero, Andrés
2016-04-01
Natural hazards such as floods cause considerable damage to human life and to material and functional assets every year around the world. Risk assessment procedures carry a set of uncertainties, mainly of two types: natural, derived from the stochastic character inherent in flood process dynamics; and epistemic, associated with a lack of knowledge or with inadequate procedures employed in the study of these processes. There is abundant scientific and technical literature on the estimation of uncertainties in each step of flood risk analysis (e.g. rainfall estimates, hydraulic modelling variables), but very little experience with the propagation of these uncertainties along the full flood risk assessment. Epistemic uncertainties are therefore the main goal of this work: in particular, understanding the extent to which uncertainties propagate throughout the process, from inundation studies to risk analysis, and how far they alter a proper analysis of the risk of flooding. Methodologies such as Polynomial Chaos Theory (PCT), the Method of Moments or Monte Carlo are used to evaluate different sources of error, such as data records (precipitation gauges, flow gauges...), hydrologic and hydraulic modelling (inundation estimation) and socio-demographic data (damage estimation), in order to evaluate the uncertainty propagation (UP) in design flood risk estimation, in both numerical and cartographic expression. In order to consider the total uncertainty and understand which factors contribute most to the final uncertainty, we used Polynomial Chaos Theory (PCT). It represents an interesting way to handle the inclusion of uncertainty in the modelling and simulation process. PCT allows for the development of a probabilistic model of the system in a deterministic setting. This is done by using random variables and polynomials to handle the effects of uncertainty. Results of the method application are more robust than those of traditional analysis
Quantile arithmetic methodology for uncertainty propagation in fault trees
A methodology based on quantile arithmetic, the probabilistic analog to interval analysis, is proposed for the computation of uncertainty propagation in fault tree analysis. The basic events' continuous probability density functions (pdf's) are represented by equivalent discrete distributions by dividing them into a number of quantiles N. Quantile arithmetic is then used to perform the binary arithmetical operations corresponding to the logical gates in the Boolean expression of the top event of a given fault tree. The computational advantage of the present methodology as compared with the widely used Monte Carlo method was demonstrated for the cases of summation of M normal variables through the efficiency ratio defined as the product of the labor and error ratios. The efficiency ratio values obtained by the suggested methodology for M = 2 were 2279 for N = 5, 445 for N = 25, and 66 for N = 45 when compared with the results for 19,200 Monte Carlo samples at the 40th percentile point. Another advantage of the approach is that the exact analytical value of the median is always obtained for the top event
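A toy rendition of the quantile-arithmetic idea (invented distributions, simplified to a single OR gate on independent events; not the authors' implementation):

```python
# Sketch: represent each basic-event pdf by N equi-probable quantile points,
# then apply the gate operation to all N*N pairs, each with weight 1/N^2.
import numpy as np

rng = np.random.default_rng(2)

def quantile_points(samples, n):
    """Discretize a pdf into N quantile midpoints, each carrying probability 1/N."""
    probs = (np.arange(n) + 0.5) / n
    return np.quantile(samples, probs)

N = 25
p1 = quantile_points(rng.lognormal(np.log(1e-3), 0.5, 50_000), N)
p2 = quantile_points(rng.lognormal(np.log(2e-3), 0.4, 50_000), N)

# OR gate on independent basic events: p = p1 + p2 - p1*p2, over all pairs.
top = (p1[:, None] + p2[None, :] - p1[:, None] * p2[None, :]).ravel()
print(f"top event: median={np.quantile(top, 0.5):.2e}, "
      f"90% band=[{np.quantile(top, 0.05):.2e}, {np.quantile(top, 0.95):.2e}]")
```

With N odd, the central quantile of each input carries probability mass symmetrically, which mirrors the median-preserving property noted in the abstract.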
Estimation and propagation of uncertainties associated with paleomagnetic directions
Heslop, David; Roberts, Andrew P.
2016-04-01
Principal component analysis (PCA) is a well-established technique in paleomagnetism and provides a means to estimate magnetic remanence directions from univectorial segments of stepwise demagnetization data. Derived directions constrain past geomagnetic field behavior and form the foundation of chronological and tectonic reconstructions. PCA of isolated remanence segments relies on estimates of the segment mean and covariance matrix, which can carry large uncertainties given the relatively small number of demagnetization data points used to characterize individual specimens. Traditional PCA does not, however, lend itself to quantification of these uncertainties, and inferences drawn from paleomagnetic reconstructions suffer from an inability to propagate uncertainties from individual specimens to higher levels, such as in calculations of paleomagnetic site mean directions and pole positions. In this study, we employ a probabilistic reformulation of PCA that represents the unknowns involved in the data fitting process as probability density functions. Such probability density functions represent our state of knowledge about the unknowns in the fitting process and provide a tractable framework with which to rigorously quantify uncertainties associated with remanence directions estimated from demagnetization data. These uncertainties can be propagated readily through each step of a paleomagnetic reconstruction to enable quantification of uncertainties for all stages of the data interpretation sequence, removing the need for arbitrary selection/rejection criteria at the specimen level. Rigorous uncertainty determination helps to protect against spurious inferences being drawn from uncertain data.
HPC Analytics Support. Requirements for Uncertainty Quantification Benchmarks
Paulson, Patrick R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Purohit, Sumit [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Rodriguez, Luke R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)
2015-05-01
This report outlines techniques for extending benchmark generation products so they support uncertainty quantification by benchmarked systems. We describe how uncertainty quantification requirements can be presented to candidate analytical tools supporting SPARQL. We describe benchmark data sets for evaluating uncertainty quantification, as well as an approach for using our benchmark generator to produce such data sets.
Investigation of Free Particle Propagator with Generalized Uncertainty Problem
Ghobakhloo, F
2016-01-01
We consider the Schrödinger equation with a generalized uncertainty principle for a free particle. We then transform the problem into a second-order ordinary differential equation and thereby obtain the corresponding propagator. The result of ordinary quantum mechanics is recovered for vanishing minimal length parameter.
Uncertainty propagation from raw data to final results
Reduction of data from raw numbers (counts per channel) to physically meaningful quantities (such as cross sections) is in itself a complicated procedure. Propagation of experimental uncertainties through that reduction process has sometimes been perceived as even more difficult, if not impossible. At the Oak Ridge Electron Linear Accelerator, a computer code ALEX has been developed to assist in the propagation process. The purpose of ALEX is to carefully and correctly propagate all experimental uncertainties through the entire reduction procedure, yielding the complete covariance matrix for the reduced data, while requiring little additional input from the experimentalist beyond that which is needed for the data reduction itself. The theoretical method used in ALEX is described, with emphasis on transmission measurements. Application to the natural iron and natural nickel measurements of D.C. Larson is shown
Uncertainty propagation in a cascade modelling approach to flood mapping
Rodríguez-Rincón, J. P.; Pedrozo-Acuña, A.; Breña Naranjo, J. A.
2014-07-01
The purpose of this investigation is to study the propagation of meteorological uncertainty within a cascade modelling approach to flood mapping. The methodology is comprised of a Numerical Weather Prediction Model (NWP), a distributed rainfall-runoff model and a standard 2-D hydrodynamic model. The cascade of models is used to reproduce an extreme flood event that took place in the Southeast of Mexico in November 2009. The event was selected because high-quality field data (e.g. rain gauges, discharge) and satellite imagery are available. Uncertainty in the meteorological model (Weather Research and Forecasting model) is evaluated through the use of a multi-physics ensemble technique, which considers twelve parameterization schemes to determine a given precipitation. The resulting precipitation fields are used as input to a distributed hydrological model, enabling the determination of different hydrographs associated with this event. Lastly, by means of a standard 2-D hydrodynamic model, the hydrographs are used as forcing conditions to study the propagation of the meteorological uncertainty to an estimated flooded area. Results show the utility of the selected modelling approach for investigating error propagation within a cascade of models. Moreover, the error associated with the determination of the runoff is shown to be lower than that obtained in the precipitation estimation, suggesting that uncertainty does not necessarily increase within a model cascade.
On analytic formulas of Feynman propagators in position space
ZHANG Hong-Hao; FENG Kai-Xi; QIU Si-Wei; ZHAO An; LI Xue-Song
2010-01-01
We correct an inaccurate result of previous work on the Feynman propagator in position space of a free Dirac field in (3+1)-dimensional spacetime; we derive the generalized analytic formulas of both the scalar Feynman propagator and the spinor Feynman propagator in position space in arbitrary (D+1)-dimensional spacetime; and we further find a recurrence relation among the spinor Feynman propagator in (D+1)-dimensional spacetime and the scalar Feynman propagators in (D+1)-, (D-1)- and (D+3)-dimensional spacetimes.
Uncertainty propagation within an integrated model of climate change
This paper demonstrates a methodology whereby stochastic dynamical systems are used to investigate a climate model's inherent capacity to propagate uncertainty over time. The usefulness of the methodology stems from its ability to identify the variables that account for most of the model's uncertainty. We accomplish this by reformulating a deterministic dynamical system capturing the structure of an integrated climate model into a stochastic dynamical system. Then, via computational techniques for stochastic differential equations, accurate uncertainty estimates of the model's variables are determined. The uncertainty is measured in terms of properties of probability distributions of the state variables. The starting characteristics of the uncertainty of the initial state and the random fluctuations are derived from estimates given in the literature. Two aspects of uncertainty are investigated: (1) the dependence on environmental scenario, which is determined by technological development and actions towards environmental protection; and (2) the dependence on the magnitude of the initial state measurement error, determined by the progress of climate change and the total magnitude of the system's random fluctuations as well as by our understanding of the climate system. Uncertainty of most of the system's variables is found to be nearly independent of the environmental scenario for the time period under consideration (1990-2100). Even conservative uncertainty estimates result in scenario overlap of several decades during which the consequences of any actions affecting the environment could be very difficult to identify with a sufficient degree of confidence. This fact may have fundamental consequences for the level of social acceptance of any restrictive measures against accelerating global warming. In general, the stochastic fluctuations contribute more to the uncertainty than the initial state measurements. The variables coupling all major climate elements
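The reformulation described here can be sketched for a single state variable (a toy relaxation model with invented parameters, not the integrated climate model itself): an ensemble of Euler-Maruyama paths carries both the initial-state uncertainty and the random fluctuations forward in time.

```python
# Sketch: dT = (forcing - T/tau) dt + sigma dW, propagated as an ensemble;
# the ensemble spread at any time is the propagated uncertainty.
import numpy as np

rng = np.random.default_rng(3)
n_paths, n_steps, dt = 10_000, 1100, 0.1      # ~110 model "years"
tau, forcing, sigma = 20.0, 0.05, 0.02         # invented parameters
T = rng.normal(0.0, 0.1, n_paths)              # uncertain initial state

for _ in range(n_steps):
    drift = forcing - T / tau
    T = T + drift * dt + sigma * np.sqrt(dt) * rng.normal(size=n_paths)

print(f"mean = {T.mean():.3f}, std = {T.std():.3f}")
```

Rerunning with a larger initial spread but smaller sigma (or vice versa) separates the two uncertainty sources the abstract compares: initial-state error versus stochastic fluctuations.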
Semi-analytical solution for soliton propagation in colloidal suspension
Senthilkumar Selvaraj
2013-04-01
We consider the propagation of solitons in colloidal nano-suspensions. We derive the semi-analytical solution for soliton propagation in colloidal nano-suspensions in both one and two spatial dimensions using the variational method, which employs both an averaged Lagrangian and suitable trial functions. Finally, we analyse the Rayleigh scattering loss during soliton propagation through the colloidal nano-suspensions.
Propagation of Uncertainty in Rigid Body Attitude Flows
Lee, Taeyoung; Chaturvedi, Nalin A.; Sanyal, Amit K.; Leok, Melvin; McClamroch, N. Harris
2007-01-01
Motivated by attitude control and attitude estimation problems for a rigid body, computational methods are proposed to propagate uncertainties in the angular velocity and the attitude. The nonlinear attitude flow is determined by Euler-Poincaré equations that describe the rotational dynamics of the rigid body acting under the influence of an attitude-dependent potential and by a reconstruction equation that describes the kinematics expressed in terms of an orthogonal matrix representing the...
Analysis of uncertainty propagation in nuclear fuel cycle scenarios
Nuclear scenario studies model a nuclear fleet over a given period. They enable the comparison of different options for the evolution of the reactor fleet and for the management of future fuel cycle materials, from mining to disposal, based on criteria such as installed capacity per reactor technology and mass inventories and flows in the fuel cycle and in the waste. Uncertainties associated with nuclear data and scenario parameters (fuel, reactor and facility characteristics) propagate along the isotopic chains in depletion calculations, and throughout the scenario history, which reduces the precision of the results. The aim of this work is to develop, implement and use a stochastic uncertainty propagation methodology adapted to scenario studies. The method chosen is based on the development of depletion computation surrogate models, which reduce the computation time of scenario studies and whose parameters include perturbations of the depletion model, and on the fabrication of an equivalence model which takes cross-section perturbations into account for the computation of fresh fuel enrichment. The uncertainty propagation methodology is then applied to different scenarios of interest, considering different evolution options for the French PWR fleet with SFR deployment. (author)
Uncertainty and Sensitivity Analyses of Duct Propagation Models
Nark, Douglas M.; Watson, Willie R.; Jones, Michael G.
2008-01-01
This paper presents results of uncertainty and sensitivity analyses conducted to assess the relative merits of three duct propagation codes. Results from this study are intended to support identification of a "working envelope" within which to use the various approaches underlying these propagation codes. This investigation considers a segmented liner configuration that models the NASA Langley Grazing Incidence Tube, for which a large set of measured data was available. For the uncertainty analysis, the selected input parameters (source sound pressure level, average Mach number, liner impedance, exit impedance, static pressure and static temperature) are randomly varied over a range of values. Uncertainty limits (95% confidence levels) are computed for the predicted values from each code, and are compared with the corresponding 95% confidence intervals in the measured data. Generally, the mean values of the predicted attenuation are observed to track the mean values of the measured attenuation quite well and predicted confidence intervals tend to be larger in the presence of mean flow. A two-level, six-factor sensitivity study is also conducted in which the six inputs are varied one at a time to assess their effect on the predicted attenuation. As expected, the results demonstrate the liner resistance and reactance to be the most important input parameters. They also indicate the exit impedance is a significant contributor to uncertainty in the predicted attenuation.
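The two-level, one-at-a-time procedure is easy to mimic (sketch only: the attenuation function below is an invented stand-in surrogate, not one of the duct propagation codes, and the factor set only loosely mirrors the abstract's inputs):

```python
# Sketch: two-level, six-factor one-at-a-time sensitivity study.
import numpy as np

def predicted_attenuation(spl, mach, resistance, reactance, exit_imp, temperature):
    # Placeholder surrogate with a liner-resonance-like term -- NOT a real code.
    return (30.0 - 5.0 * mach
            + 8.0 / (1.0 + (resistance - 1.0) ** 2 + reactance ** 2)
            + 0.01 * (spl - 120.0) + 0.5 * exit_imp
            - 0.005 * (temperature - 288.0))

lo = dict(spl=110.0, mach=0.0, resistance=0.5, reactance=-1.0, exit_imp=0.8, temperature=273.0)
hi = dict(spl=130.0, mach=0.5, resistance=3.0, reactance=1.0, exit_imp=1.2, temperature=310.0)

base = {k: 0.5 * (lo[k] + hi[k]) for k in lo}   # mid-level baseline
for factor in lo:
    args_lo, args_hi = dict(base), dict(base)
    args_lo[factor], args_hi[factor] = lo[factor], hi[factor]
    effect = predicted_attenuation(**args_hi) - predicted_attenuation(**args_lo)
    print(f"{factor:12s} effect = {effect:+.2f} dB")
```

Ranking the absolute effects reproduces the kind of conclusion stated above, e.g. that liner resistance and reactance dominate the predicted attenuation.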
Pulse propagation in tapered granular chains: An analytic study
Harbola, Upendra; Rosas, Alexandre; Esposito, Massimiliano; Lindenberg, Katja
2009-01-01
We study pulse propagation in one-dimensional tapered chains of spherical granules. Analytic results for the pulse velocity and other pulse features are obtained using a binary collision approximation. Comparisons with numerical results show that the binary collision approximation provides quantitatively accurate analytic results for these chains.
Uncertainty propagation in probabilistic safety analysis of nuclear power plants
Uncertainty propagation in the probabilistic safety analysis of nuclear power plants is performed. The minimal cut set methodology is implemented in the computer code SVALON and the results for several cases are compared with corresponding results obtained with the SAMPLE code, which employs the Monte Carlo method to propagate the uncertainties. The results show that, for a relatively small number of dominant minimal cut sets (n approximately 25) and error factors (r approximately 5), the SVALON code yields results which are comparable to those obtained with SAMPLE. An analysis of the unavailability of the low pressure recirculation system of Angra 1 for both the short and long term recirculation phases is presented. The results for the short term phase are in good agreement with the corresponding one given in WASH-1400. (E.G.)
Analytic solution for the propagation velocity in superconducting composites
The propagation velocity of normal zones in composite superconductors has been calculated analytically for the case of constant thermophysical properties, including the effects of current sharing. The solution is compared with that of a more elementary theory in which current sharing is neglected, i.e., in which there is a sharp transition from the superconducting to the normal state. The solution is also compared with experiment. This comparison demonstrates the important influence of transient heat transfer on the propagation velocity
Uncertainty Quantification and Propagation in Nuclear Density Functional Theory
Schunck, N; McDonnell, J D; Higdon, D; Sarich, J; Wild, S M
2015-03-17
Nuclear density functional theory (DFT) is one of the main theoretical tools used to study the properties of heavy and superheavy elements, or to describe the structure of nuclei far from stability. While on-going efforts seek to better root nuclear DFT in the theory of nuclear forces, energy functionals remain semi-phenomenological constructions that depend on a set of parameters adjusted to experimental data in finite nuclei. In this paper, we review recent efforts to quantify the related uncertainties, and propagate them to model predictions. In particular, we cover the topics of parameter estimation for inverse problems, statistical analysis of model uncertainties and Bayesian inference methods. Illustrative examples are taken from the literature.
Propagation of radar rainfall uncertainty in urban flood simulations
Liguori, Sara; Rico-Ramirez, Miguel
2013-04-01
This work discusses the results of the implementation of a novel probabilistic system designed to improve ensemble sewer flow predictions for the drainage network of a small urban area in the North of England. The probabilistic system has been developed to model the uncertainty associated with radar rainfall estimates and propagate it through radar-based ensemble sewer flow predictions. The assessment of this system aims at outlining the benefits of addressing the uncertainty associated with radar rainfall estimates in a probabilistic framework, to be potentially implemented in the real-time management of the sewer network in the study area. Radar rainfall estimates are affected by uncertainty due to various factors [1-3], and quality control and correction techniques have been developed in order to improve their accuracy. However, the hydrological use of radar rainfall estimates and forecasts remains challenging. A significant effort has been devoted by the international research community to the assessment of uncertainty propagation through probabilistic hydro-meteorological forecast systems [4-5], and various approaches have been implemented for the purpose of characterizing the uncertainty in radar rainfall estimates and forecasts [6-11]. A radar-based ensemble stochastic approach, similar to the one implemented for use in the Southern Alps by the REAL system [6], has been developed for the purpose of this work. An ensemble generator has been calibrated on the basis of the spatial-temporal characteristics of the residual error in radar estimates, assessed with reference to rainfall records from around 200 rain gauges available for the year 2007, previously post-processed and corrected by the UK Met Office [12-13]. Each ensemble member is determined by summing a perturbation field to the unperturbed radar rainfall field. The perturbations are generated by imposing the radar error spatial and temporal correlation structure on purely stochastic fields. A
Analytic Approximations for Transit Light Curve Observables, Uncertainties, and Covariances
Carter, Joshua A.; Yee, Jennifer C.; Eastman, Jason; Gaudi, B. Scott; Winn, Joshua N.
2008-01-01
The light curve of an exoplanetary transit can be used to estimate the planetary radius and other parameters of interest. Because accurate parameter estimation is a non-analytic and computationally intensive problem, it is often useful to have analytic approximations for the parameters as well as their uncertainties and covariances. Here we give such formulas, for the case of an exoplanet transiting a star with a uniform brightness distribution. We also assess the advantages of some relativel...
A new analytical framework for tidal propagation in estuaries
Cai, H.
2014-01-01
The ultimate aim of this thesis is to enhance our understanding of tidal wave propagation in convergent alluvial estuaries (of infinite length). In the process, a new analytical model has been developed as a function of externally defined dimensionless parameters describing friction, channel converg
Data Assimilation and Propagation of Uncertainty in Multiscale Cardiovascular Simulation
Schiavazzi, Daniele; Marsden, Alison
2015-11-01
Cardiovascular modeling is the application of computational tools to predict hemodynamics. State-of-the-art techniques couple a 3D incompressible Navier-Stokes solver with a boundary circulation model and can predict local and peripheral hemodynamics, analyze the post-operative performance of surgical designs and complement clinical data collection, minimizing invasive and risky measurement practices. The ability of these tools to make useful predictions is directly related to their accuracy in representing measured physiologies. Tuning of model parameters is therefore a topic of paramount importance and should include clinical data uncertainty, revealing how this uncertainty will affect the predictions. We propose a fully Bayesian, multi-level approach to data assimilation of uncertain clinical data in multiscale circulation models. To reduce the computational cost, we use a stable, condensed approximation of the 3D model built by linear sparse regression of the pressure/flow rate relationship at the outlets. Finally, we consider the problem of non-invasively propagating the uncertainty in model parameters to the resulting hemodynamics and compare Monte Carlo simulation with Stochastic Collocation approaches based on Polynomial or Multi-resolution Chaos expansions.
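The closing comparison can be illustrated on a scalar toy problem (the model g below is an invented placeholder, not a hemodynamic solver): Monte Carlo needs many model evaluations, while stochastic collocation on Gauss-Hermite nodes recovers the low-order statistics with a handful.

```python
# Sketch: Monte Carlo vs stochastic collocation for y = g(x), x ~ N(0,1).
import numpy as np

g = lambda x: np.exp(0.3 * x) + 0.1 * x**2      # placeholder "pressure" model

# Monte Carlo: many model evaluations.
rng = np.random.default_rng(4)
x = rng.normal(size=200_000)
y = g(x)
print(f"MC:          mean={y.mean():.5f}, var={y.var():.5f} (200000 model runs)")

# Stochastic collocation: a few Gauss-Hermite (probabilists') quadrature nodes.
nodes, weights = np.polynomial.hermite_e.hermegauss(8)   # weight exp(-x^2/2)
weights = weights / np.sqrt(2.0 * np.pi)                 # normalize to the N(0,1) pdf
mean = np.sum(weights * g(nodes))
var = np.sum(weights * (g(nodes) - mean) ** 2)
print(f"Collocation: mean={mean:.5f}, var={var:.5f} (8 model runs)")
```

For smooth models the quadrature converges spectrally fast, which is what makes collocation attractive when each "model run" is an expensive 3D simulation.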
Analytic structure of QCD propagators in Minkowski space
Siringo, Fabio
2016-01-01
Analytical functions for the propagators of QCD, including a set of chiral quarks, are derived by a one-loop massive expansion in the Landau gauge, deep in the infrared. By analytic continuation, the spectral functions are studied in Minkowski space, yielding a direct proof of positivity violation and confinement from first principles. The dynamical breaking of chiral symmetry is described on the same footing as gluon mass generation, providing a unified picture. While dealing with the exact Lagrangian, the expansion is based on massive free-particle propagators, is safe in the infrared and is equivalent to the standard perturbation theory in the UV. By dimensional regularization, all diverging mass terms cancel exactly without including mass counterterms that would spoil the gauge and chiral symmetry of the Lagrangian. Universal scaling properties are predicted for the inverse dressing functions and shown to be satisfied by the lattice data. Complex conjugated poles are found for the gluon propagator, in agre...
Ultrashort Optical Pulse Propagation in terms of Analytic Signal
Sh. Amiranashvili
2011-01-01
We demonstrate that ultrashort optical pulses propagating in a nonlinear dispersive medium are naturally described through the incorporation of the analytic signal for the electric field. To this end a second-order nonlinear wave equation is first simplified using a unidirectional approximation. Then the analytic signal is introduced, and all nonresonant nonlinear terms are eliminated. The derived propagation equation accounts for arbitrary dispersion, resonant four-wave mixing processes, weak absorption, and arbitrary pulse duration. The model applies to the complex electric field and is independent of the slowly varying envelope approximation. Still, the derived propagation equation possesses the universal structure of the generalized nonlinear Schrödinger equation (NSE). In particular, it can be solved numerically with only small changes of the standard split-step solver or more complicated spectral algorithms for NSE. We present exemplary numerical solutions describing supercontinuum generation with an ultrashort optical pulse.
Risk classification and uncertainty propagation for virtual water distribution systems
While the secrecy of real water distribution system data is crucial, it poses difficulty for research as results cannot be publicized. This data includes topological layouts of pipe networks, pump operation schedules, and water demands. Therefore, a library of virtual water distribution systems can be an important research tool for comparative development of analytical methods. A virtual city, 'Micropolis', has been developed, including a comprehensive water distribution system, as a first entry into such a library. This virtual city of 5000 residents is fully described in both geographic information systems (GIS) and EPANet hydraulic model frameworks. A risk classification scheme and Monte Carlo analysis are employed for an attempted water supply contamination attack. Model inputs to be considered include uncertainties in: daily water demand, seasonal demand, initial storage tank levels, the time of day a contamination event is initiated, duration of contamination event, and contaminant quantity. Findings show that reasonable uncertainties in model inputs produce high variability in exposure levels. It is also shown that exposure level distributions experience noticeable sensitivities to population clusters within the contaminant spread area. High uncertainties in exposure patterns lead to greater resources needed for more effective mitigation strategies.
Uncertainty propagation in orbital mechanics via tensor decomposition
Sun, Yifei; Kumar, Mrinal
2016-03-01
Uncertainty forecasting in orbital mechanics is an essential but difficult task, primarily because the underlying Fokker-Planck equation (FPE) is defined on a relatively high dimensional (6-D) state-space and is driven by the nonlinear perturbed Keplerian dynamics. In addition, an enormously large solution domain is required for numerical solution of this FPE (e.g. encompassing the entire orbit in the x-y-z subspace), of which the state probability density function (pdf) occupies a tiny fraction at any given time. This coupling of large size, high dimensionality and nonlinearity makes for a formidable computational task, and has caused the FPE for orbital uncertainty propagation to remain an unsolved problem. To the best of the authors' knowledge, this paper presents the first successful direct solution of the FPE for perturbed Keplerian mechanics. To tackle the dimensionality issue, the time-varying state pdf is approximated in the CANDECOMP/PARAFAC decomposition tensor form where all six spatial dimensions as well as the time dimension are separated from one another. The pdf approximation for all times is obtained simultaneously via the alternating least squares algorithm. Chebyshev spectral differentiation is employed for discretization on account of its spectral ("super-fast") convergence rate. To facilitate the tensor decomposition and control the solution domain size, system dynamics is expressed using spherical coordinates in a noninertial reference frame. Numerical results obtained on a regular personal computer are compared with Monte Carlo simulations.
Spin-Stabilized Spacecrafts: Analytical Attitude Propagation Using Magnetic Torques
Hélio Koiti Kuga; Maria Cecília F. P. S. Zanardi; Roberta Veloso Garcia
2009-01-01
An analytical approach for spin-stabilized satellite attitude propagation is presented, considering the influence of the residual magnetic torque and the eddy currents torque. Two approaches are used to examine the influence of external torques acting during the motion of the satellite, with the Earth's magnetic field described by the quadripole model. In the first approach, only the residual magnetic torque is included in the motion equations, with the satellites in circular or elliptical o...
Uncertainties in workplace external dosimetry - An analytical approach
The uncertainties associated with external dosimetry measurements at workplaces depend on the type of dosemeter used together with its performance characteristics and the information available on the measurement conditions. Performance characteristics were determined in the course of a type test, and information about the measurement conditions can either be general, e.g. 'research' and 'medicine', or specific, e.g. 'X-ray testing equipment for aluminium wheel rims'. This paper explains an analytical approach to determining the measurement uncertainty. It is based on the Draft IEC Technical Report IEC 62461 Radiation Protection Instrumentation - Determination of Uncertainty in Measurement. Neither this paper nor the report can eliminate the fact that the determination of the uncertainty requires a larger effort than performing the measurement itself. As a counterbalance, the process of determining the uncertainty results not only in a numerical value of the uncertainty but also produces the best estimate of the quantity to be measured, which may differ from the indication of the instrument. Thus it also improves the result of the measurement. (authors)
Approximate analytical solutions for excitation and propagation in cardiac tissue
Greene, D'Artagnan; Shiferaw, Yohannes
2015-04-01
It is well known that a variety of cardiac arrhythmias are initiated by a focal excitation in heart tissue. At the single cell level these currents are typically induced by intracellular processes such as spontaneous calcium release (SCR). However, it is not understood how the size and morphology of these focal excitations are related to the electrophysiological properties of cardiac cells. In this paper a detailed physiologically based ionic model is analyzed by projecting the excitation dynamics to a reduced one-dimensional parameter space. Based on this analysis we show that the inward current required for an excitation to occur is largely dictated by the voltage dependence of the inward rectifier potassium current (I_K1), and is insensitive to the detailed properties of the sodium current. We derive an analytical expression relating the size of a stimulus and the critical current required to induce a propagating action potential (AP), and argue that this relationship determines the necessary number of cells that must undergo SCR in order to induce ectopic activity in cardiac tissue. Finally, we show that, once a focal excitation begins to propagate, its propagation characteristics, such as the conduction velocity and the critical radius for propagation, are largely determined by the sodium and gap junction currents with a substantially lesser effect due to repolarizing potassium currents. These results reveal the relationship between ion channel properties and important tissue scale processes such as excitation and propagation.
Uncertainty propagation for systems of conservation laws, stochastic spectral methods
Uncertainty quantification through stochastic spectral methods has recently been applied to several kinds of stochastic PDEs. This thesis deals with stochastic systems of conservation laws. These systems are nonlinear and develop discontinuities in finite time: these difficulties can trigger the loss of hyperbolicity of the truncated system resulting from the application of sG-gPC (stochastic Galerkin-generalized Polynomial Chaos). We introduce a formalism based on both kinetic theory and moment theory in order to close the truncated system in such a way that its hyperbolicity is ensured. The idea is to close the truncated system obtained by Galerkin projection via the introduction of an entropy, a strictly convex function on the definition domain of our unknowns. In the case where this entropy is the mathematical entropy of the non-truncated system, hyperbolicity is ensured. We state several properties of this truncated system starting from a general non-truncated system of conservation laws. We then apply the method to the case of the stochastic inviscid Burgers' equation with random initial conditions and to the stochastic Euler system in one and two space dimensions. In the vicinity of discontinuities, the new method bounds the oscillations due to the Gibbs phenomenon to a certain range through the entropy of the system, without the use of any adaptive random space discretizations. It is found to be more precise than the stochastic Galerkin method for several test problems. In a final chapter, we present two prospective outlooks: we first suggest an uncertainty propagation method based on the coupling of intrusive and non-intrusive methods; we finally emphasize the modelling possibilities of the intrusive Polynomial Chaos methods for taking into account three-dimensional perturbations of a mean one-dimensional flow. (author)
Propagation of nuclear data uncertainty: Exact or with covariances
van Veen D.
2010-10-01
Two distinct methods for propagating basic nuclear data uncertainties to large scale systems will be presented and compared. The “Total Monte Carlo” method uses a statistical ensemble of nuclear data libraries randomly generated by means of a Monte Carlo approach with the TALYS system. These libraries are then directly used in a large number of reactor calculations (for instance with MCNP), after which the exact probability distribution for the reactor parameter is obtained. The second method makes use of available covariance files and can be done in a single reactor calculation (by using the perturbation method). In this exercise, both methods use consistent sets of data files, which implies that the covariance files used in the second method are directly obtained from the randomly generated nuclear data libraries of the first method. This is a unique and straightforward comparison allowing one to directly apprehend the advantages and drawbacks of each method. Comparisons for different reactions and criticality-safety benchmarks from 19F to actinides will be presented. We can thus conclude whether current methods for using covariance data are good enough or not.
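A toy model makes the contrast between the two routes concrete. The "reactor calculation" below is an invented stand-in (not TALYS or MCNP output), and the parameter values and covariance matrix are illustrative: TMC samples whole data sets and reruns the calculation, while the covariance route combines one nominal calculation with finite-difference sensitivities through the sandwich rule.

```python
# Toy comparison of Total Monte Carlo versus covariance-based propagation,
# assuming a cheap surrogate "reactor calculation" y = f(p) of three
# nuclear-data parameters p. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(1)

def reactor_calc(p):
    # stand-in for an expensive transport calculation
    return 1.0 + 0.8 * p[0] - 0.3 * p[1] + 0.5 * p[0] * p[2]

p0 = np.array([0.02, 0.05, 0.01])              # nominal nuclear data
C = np.diag([0.004, 0.006, 0.003]) ** 2        # covariance (uncorrelated here)

# --- route 1: Total Monte Carlo (exact distribution, many calculations) ---
libs = rng.multivariate_normal(p0, C, size=20000)   # "random data libraries"
y_tmc = np.array([reactor_calc(p) for p in libs])
print("TMC     : mean %.6f  std %.6f" % (y_tmc.mean(), y_tmc.std()))

# --- route 2: covariance file + perturbation (one calculation + sensitivities)
eps = 1e-6
S = np.array([(reactor_calc(p0 + eps * np.eye(3)[i]) - reactor_calc(p0)) / eps
              for i in range(3)])               # finite-difference sensitivities
var_cov = S @ C @ S                             # sandwich rule var(y) = S C S^T
print("Sandwich: mean %.6f  std %.6f" % (reactor_calc(p0), np.sqrt(var_cov)))
```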
Uncertainty-aware video visual analytics of tracked moving objects
Markus Höferlin
2011-01-01
Vast amounts of video data render manual video analysis infeasible, while recent automatic video analytics techniques suffer from insufficient performance. To alleviate these issues, we present a scalable and reliable approach exploiting the visual analytics methodology. This involves the user in the iterative process of exploration, hypothesis generation, and verification. Scalability is achieved by interactive filter definitions on trajectory features extracted by the automatic computer vision stage. We establish the interface between user and machine by adopting the VideoPerpetuoGram (VPG) for visualization and enable users to provide filter-based relevance feedback. Additionally, users are supported in deriving hypotheses by context-sensitive statistical graphics. To allow for reliable decision making, we gather uncertainties introduced by the computer vision step, communicate this information to users through uncertainty visualization, and allow fuzzy hypothesis formulation for interaction with the machine. Finally, we demonstrate the effectiveness of our approach on the video analysis mini challenge which was part of the IEEE Symposium on Visual Analytics Science and Technology 2009.
Propagating Uncertainty in Solar Panel Performance for Life Cycle Modeling in Early Stage Design
Honda, Tomonori; Chen, Heidi Qianyi; Chan, Kennis Y.; Yang, Maria
2011-01-01
One of the challenges in accurately applying metrics for life cycle assessment lies in accounting for both irreducible and inherent uncertainties in how a design will perform under real world conditions. This paper presents a preliminary study that compares two strategies, one simulation-based and one set-based, for propagating uncertainty in a system. These strategies for uncertainty propagation are then aggregated. This work is conducted in the context of an amorphou...
Assessment and Propagation of Input Uncertainty in Tree-based Option Pricing Models
Gzyl, Henryk; Molina, German; ter Horst, Enrique
2007-01-01
This paper aims to provide a practical example of the assessment and propagation of input uncertainty for option pricing when using tree-based methods. Input uncertainty is propagated into output uncertainty, reflecting that option prices are as unknown as the inputs they are based on. Option pricing formulas are tools whose validity is conditional not only on how closely the model represents reality, but also on the quality of the inputs they use, and those inputs are usually not observable. W...
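Since the abstract is truncated, the sketch below illustrates the general idea rather than the paper's specific treatment: a Cox-Ross-Rubinstein binomial tree prices a European call, and Monte Carlo sampling of an uncertain volatility estimate turns the single price into a distribution. All market parameters are hypothetical.

```python
# Input-uncertainty propagation for a tree-based option pricer: the
# volatility estimate is uncertain, so the CRR binomial price becomes a
# distribution rather than a single number. Numbers are illustrative.
import numpy as np

def crr_call(S0, K, r, sigma, T, n=200):
    """Cox-Ross-Rubinstein binomial price of a European call."""
    dt = T / n
    u = np.exp(sigma * np.sqrt(dt))
    d = 1.0 / u
    q = (np.exp(r * dt) - d) / (u - d)         # risk-neutral up probability
    disc = np.exp(-r * dt)
    j = np.arange(n + 1)
    values = np.maximum(S0 * u**j * d**(n - j) - K, 0.0)   # payoff at maturity
    for _ in range(n):                                     # backward induction
        values = disc * (q * values[1:] + (1 - q) * values[:-1])
    return values[0]

rng = np.random.default_rng(0)
S0, K, r, T = 100.0, 100.0, 0.05, 1.0
sigma_draws = rng.normal(0.20, 0.03, size=2000)            # uncertain volatility
sigma_draws = np.clip(sigma_draws, 0.01, None)
prices = np.array([crr_call(S0, K, r, s, T) for s in sigma_draws])
print("price mean %.3f, std %.3f, 90%% interval [%.3f, %.3f]"
      % (prices.mean(), prices.std(),
         np.percentile(prices, 5), np.percentile(prices, 95)))
```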
Preliminary Results on Uncertainty Quantification for Pattern Analytics
Stracuzzi, David John [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Brost, Randolph [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Chen, Maximillian Gene [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Malinas, Rebecca [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Peterson, Matthew Gregor [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Phillips, Cynthia A. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Robinson, David G. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Woodbridge, Diane [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)
2015-09-01
This report summarizes preliminary research into uncertainty quantification for pattern analytics within the context of the Pattern Analytics to Support High-Performance Exploitation and Reasoning (PANTHER) project. The primary focus of PANTHER was to make large quantities of remote sensing data searchable by analysts. The work described in this report adds nuance to both the initial data preparation steps and the search process. Search queries are transformed from “does the specified pattern exist in the data?” to “how certain is the system that the returned results match the query?” We show example results for both data processing and search, and discuss a number of possible improvements for each.
Díez, C. J.; Cabellos, O.; Martínez, J. S.
2015-01-01
Several approaches have been developed in recent decades to tackle nuclear data uncertainty propagation problems in burn-up calculations. One proposed approach is the Hybrid Method, where uncertainties in nuclear data are propagated only through the depletion part of a burn-up problem. Because only depletion is addressed, only one-group cross sections are necessary, and hence their collapsed one-group uncertainties. This approach has been applied successfully to several advanced reactor systems, like EFIT (an ADS-like reactor) or ESFR (a sodium fast reactor), to assess uncertainties on the isotopic composition. However, a comparison with multi-group energy structures has not been carried out, and must be performed in order to analyse the limitations of using one-group uncertainties.
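The collapse from multi-group to one-group uncertainty can be sketched with the usual sandwich rule; the group structure, flux spectrum, and covariance matrix below are invented for illustration.

```python
# Sketch of collapsing multi-group cross-section uncertainty to the
# one-group uncertainty consumed by a Hybrid-Method-style calculation.
import numpy as np

flux = np.array([0.1, 0.3, 0.4, 0.2])          # group fluxes (normalised below)
w = flux / flux.sum()                          # collapsing weights
sigma_g = np.array([1.8, 1.2, 0.9, 0.6])       # multi-group cross sections (barn)

# relative uncertainties and correlation of the multi-group data (illustrative)
rel_unc = np.array([0.08, 0.05, 0.04, 0.06])
corr = np.array([[1.0, 0.5, 0.2, 0.0],
                 [0.5, 1.0, 0.5, 0.2],
                 [0.2, 0.5, 1.0, 0.5],
                 [0.0, 0.2, 0.5, 1.0]])
cov = np.outer(rel_unc * sigma_g, rel_unc * sigma_g) * corr

sigma_1g = w @ sigma_g                          # collapsed one-group value
var_1g = w @ cov @ w                            # sandwich rule with flux weights
print("one-group sigma = %.4f +/- %.4f barn (%.1f%%)"
      % (sigma_1g, np.sqrt(var_1g), 100 * np.sqrt(var_1g) / sigma_1g))
```

Note that this assumes a fixed flux spectrum, which is exactly the simplification that motivates comparing against full multi-group propagation.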
An Analytical Study of the Mode Propagation along the Plasmaline
Szeremley, Daniel; Brinkmann, Ralf Peter; Mussenbrock, Thomas; Eremin, Denis; Theoretical Electrical Engineering Team
2014-10-01
Recent years have seen a growing market demand for bottles made of polyethylene terephthalate (PET). Therefore, fast and efficient sterilization processes as well as barrier coatings to decrease gas permeation are required. A specialized microwave plasma source, referred to as the plasmaline, has been developed to allow for treatment of the inner surface of such PET bottles. The plasmaline is a coaxial waveguide combined with a gas inlet which is inserted into the empty bottle and initiates a reactive plasma. To optimize and control the different surface processes, it is essential to fully understand the microwave power coupling to the plasma inside the bottle and thus the electromagnetic wave propagation along the plasmaline. In this contribution, we present a detailed dispersion analysis based on an analytical approach. We study under which conditions modes of guided waves propagate (if at all). The analytical results are supported by a series of self-consistent numerical simulations of the plasmaline and the plasma. The authors acknowledge funding by the Deutsche Forschungsgemeinschaft within the frame of SFB-TR 87.
The reliability of a system, notwithstanding its intended function, can be significantly affected by the uncertainty in the reliability estimates of the components that define the system. This paper implements the Unscented Transformation (UT) to quantify the effects of component reliability uncertainty through two approaches. The first approach is based on the concept of uncertainty propagation, which is the assessment of the effect that the variability of the component reliabilities produces on the variance of the system reliability. This UT-based assessment has been previously considered in the literature, but only for systems represented through series/parallel configurations. In this paper the assessment is extended to systems whose reliability cannot be represented through analytical expressions and requires, for example, Monte Carlo simulation. The second approach consists of evaluating the importance of components, i.e., identifying the components that contribute most to the variance of the system reliability. An extension of the UT is proposed to evaluate the so-called “main effects” of each component, as well as to assess higher-order component interactions. Several examples with excellent results illustrate the proposed approach. - Highlights: • Simulation based approach for computing reliability estimates. • Computation of reliability variance via 2n+1 points. • Immediate computation of component importance. • Application to network systems
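A minimal sketch of the first approach, under assumed numbers: 2n+1 sigma points propagate the mean and variance of three component reliabilities through a small series-parallel system, with Monte Carlo as the reference that UT is meant to replace when many samples are too expensive. The system structure, the moments, and the choice kappa=1 are illustrative.

```python
# Unscented Transformation: 2n+1 sigma points propagate the mean and
# variance of component reliabilities through a system reliability function.
import numpy as np

def system_reliability(R):
    # components 1 and 2 in series, in parallel with component 3 (illustrative)
    return 1.0 - (1.0 - R[0] * R[1]) * (1.0 - R[2])

def unscented_propagate(f, mu, cov, kappa=1.0):
    n = len(mu)
    L = np.linalg.cholesky((n + kappa) * cov)   # matrix square root
    pts = [mu] + [mu + L[:, i] for i in range(n)] + [mu - L[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    y = np.array([f(p) for p in pts])
    mean = w @ y
    var = w @ (y - mean) ** 2
    return mean, var

mu = np.array([0.95, 0.90, 0.85])               # component reliability means
cov = np.diag([0.02, 0.03, 0.04]) ** 2          # and their variances

m_ut, v_ut = unscented_propagate(system_reliability, mu, cov)

# Monte Carlo reference (the route needed when no closed form exists)
rng = np.random.default_rng(0)
R = rng.multivariate_normal(mu, cov, size=200000)
y = 1.0 - (1.0 - R[:, 0] * R[:, 1]) * (1.0 - R[:, 2])
print("UT : mean %.5f var %.3e" % (m_ut, v_ut))
print("MC : mean %.5f var %.3e" % (y.mean(), y.var()))
```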
A Multi-Model Approach for Uncertainty Propagation and Model Calibration in CFD Applications
Wang, Jian-xun; Xiao, Heng
2015-01-01
Proper quantification and propagation of uncertainties in computational simulations are of critical importance. This issue is especially challenging for CFD applications. A particular obstacle for uncertainty quantification in CFD problems is the large model discrepancy associated with the CFD models used for uncertainty propagation. Neglecting or improperly representing the model discrepancies leads to inaccurate and distorted uncertainty distributions for the Quantities of Interest. High-fidelity models, being accurate yet expensive, can accommodate only a small ensemble of simulations and thus lead to large interpolation errors and/or sampling errors; low-fidelity models can propagate a large ensemble, but can introduce large modeling errors. In this work, we propose a multi-model strategy to account for the influences of model discrepancies in uncertainty propagation and to reduce their impact on the predictions. Specifically, we take advantage of CFD models of multiple fidelities to estimate the model ...
Analytical probabilistic proton dose calculation and range uncertainties
We introduce the concept of analytical probabilistic modeling (APM) to calculate the mean and the standard deviation of intensity-modulated proton dose distributions under the influence of range uncertainties in closed form. For APM, range uncertainties are modeled with a multivariate Normal distribution p(z) over the radiological depths z. A pencil beam algorithm that parameterizes the proton depth dose d(z) with a weighted superposition of ten Gaussians is used. Hence, the integrals ∫dz p(z)d(z) and ∫dz p(z)d(z)² required for the calculation of the expected value and standard deviation of the dose remain analytically tractable and can be efficiently evaluated. The means μ_k, widths δ_k, and weights ω_k of the Gaussian components parameterizing the depth dose curves are found with least squares fits for all available proton ranges. We observe less than 0.3% average deviation of the Gaussian parameterizations from the original proton depth dose curves. Consequently, APM yields high accuracy estimates for the expected value and standard deviation of intensity-modulated proton dose distributions for two dimensional test cases. APM can accommodate arbitrary correlation models and account for the different nature of random and systematic errors in fractionated radiation therapy. Beneficial applications of APM in robust planning are feasible.
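The analytical tractability claimed above rests on the fact that a Gaussian averaged against a Gaussian is again Gaussian. The sketch below evaluates the closed-form expected value of a three-component Gaussian "depth dose" under Normal range uncertainty and checks it against Gauss-Hermite quadrature; the mixture parameters are invented, not a fitted proton curve.

```python
# Closed-form expected dose for a Gaussian-mixture depth-dose model under
# normally distributed radiological depth (Gaussian convolution identity).
import numpy as np

omega = np.array([0.5, 1.0, 1.6])              # component weights  omega_k
mu    = np.array([5.0, 8.0, 9.5])              # component means    mu_k (cm)
delta = np.array([2.0, 1.0, 0.4])              # component widths   delta_k (cm)

def dose(z):
    z = np.atleast_1d(z)[:, None]
    return np.sum(omega * np.exp(-(z - mu) ** 2 / (2 * delta ** 2)), axis=1)

def expected_dose(z0, sigma):
    """E[d(z)] for z ~ N(z0, sigma^2), in closed form."""
    s2 = delta ** 2 + sigma ** 2
    return np.sum(omega * delta / np.sqrt(s2) * np.exp(-(z0 - mu) ** 2 / (2 * s2)))

z0, sigma = 9.0, 0.5                           # nominal depth and range spread

# numerical check by Gauss-Hermite quadrature over z ~ N(z0, sigma^2)
t, w = np.polynomial.hermite_e.hermegauss(40)
num = np.sum(w * dose(z0 + sigma * t)) / np.sqrt(2 * np.pi)
print("analytic E[d] = %.6f, quadrature E[d] = %.6f"
      % (expected_dose(z0, sigma), num))
```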
A comparative study: top event unavailability by point estimates and uncertainty propagation
The results of five cases studied are presented to identify how close the cumulative value represented by the point estimate is to the corresponding statistics of the top event distribution. The computer code FTA-J is used for quantification of the fault trees studied, including top event unavailabilities, moments and cumulative probability distributions. FTA-J demonstrates its usefulness for large trees. In all cases, the point estimate unavailability of the top event based on the median values of the basic events, which has been widely and commonly used in risk assessment for the sake of its simplicity, is lower than the median unavailability obtained by uncertainty propagation. The top event unavailability thus obtained can represent much lower values: i.e. the system would appear much better than it actually is. The point estimate based on the mean values, however, is shown, both numerically and analytically, to be the same as that obtained by uncertainty propagation. The mean of the top event can be well approximated by forming the product of the means of the components in each cut set, then summing these products. The point estimate cannot represent all of the probability distribution characteristics of the top event, so the estimation of probability intervals for the top event unavailability should be made either by Monte Carlo simulation or by another analytical method. When the top event unavailability must be calculated only by a point estimate, it is the mean value of the component failure data that should be used for its quantification. (author)
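The paper's central observation can be reproduced numerically on a made-up example: for a top event written as a rare-event sum of minimal-cut-set products of independent lognormal component unavailabilities, the median-based point estimate falls below the propagated median, while the mean-based point estimate matches the propagated mean.

```python
# Point estimates versus uncertainty propagation for a top event built from
# minimal cut sets of independent lognormal component unavailabilities.
import numpy as np

rng = np.random.default_rng(2)
cut_sets = [(0, 1), (2, 3), (0, 4)]            # minimal cut sets (component ids)
med = np.array([1e-3, 2e-3, 5e-4, 1e-3, 3e-3]) # component median unavailabilities
ef = 3.0                                       # error factor (95th / median)
sig = np.log(ef) / 1.645                       # lognormal sigma of log

def top(q):                                    # rare-event approximation
    return sum(np.prod(q[..., list(cs)], axis=-1) for cs in cut_sets)

# point estimates
print("point estimate (medians):", top(med))
print("point estimate (means)  :", top(med * np.exp(sig ** 2 / 2)))

# uncertainty propagation by Monte Carlo
q = med * np.exp(sig * rng.standard_normal((200000, 5)))
Q = top(q)
print("propagated median       :", np.median(Q))
print("propagated mean         :", Q.mean())
```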
An analytical approach for the Propagation Saw Test
Benedetti, Lorenzo; Fischer, Jan-Thomas; Gaume, Johan
2016-04-01
The Propagation Saw Test (PST) [1, 2] is an experimental in-situ technique that has been introduced to assess crack propagation propensity in weak snowpack layers buried below cohesive snow slabs. This test has attracted the interest of a large number of practitioners, being relatively easy to perform and providing useful insights for the evaluation of snow instability. The PST procedure requires isolating a snow column 30 centimeters in width and at least 1 meter long in the downslope direction. Then, once the stratigraphy is known (e.g. from a manual snow profile), a saw is used to cut a weak layer which could fail, potentially leading to the release of a slab avalanche. If the length of the saw cut reaches the so-called critical crack length, the onset of crack propagation occurs. Furthermore, depending on snow properties, the crack in the weak layer can initiate the fracture and detachment of the overlying slab. Statistical studies over a large set of field data have confirmed the relevance of the PST, highlighting the positive correlation between test results and the likelihood of avalanche release [3]. Recent works provided key information on the conditions for the onset of crack propagation [4] and on the evolution of slab displacement during the test [5]. In addition, experimental studies [6] and simplified models [7] focused on the qualitative description of snowpack properties leading to different failure types, namely full propagation or fracture arrest (with or without slab fracture). However, besides current numerical studies utilizing discrete element methods [8], little attention has been devoted to a detailed analytical description of the PST able to give a comprehensive mechanical framework of the sequence of processes involved in the test. Consequently, this work aims to give a quantitative tool for an exhaustive interpretation of the PST, drawing attention to the important parameters that influence the test outcomes. First, starting from a pure
Measuring the Gas Constant "R": Propagation of Uncertainty and Statistics
Olsen, Robert J.; Sattar, Simeen
2013-01-01
Determining the gas constant "R" by measuring the properties of hydrogen gas collected in a gas buret is well suited for comparing two approaches to uncertainty analysis using a single data set. The brevity of the experiment permits multiple determinations, allowing for statistical evaluation of the standard uncertainty u[subscript…
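Although the abstract is truncated, the two approaches it compares are standard and can be sketched with invented measurement values: first-order propagation of uncertainty applied to R = PV/(nT), and a statistical evaluation over repeated determinations (simulated here).

```python
# Two routes to the uncertainty of R = PV/(nT): first-order propagation
# for one determination versus statistics over replicates. Numbers invented.
import numpy as np

# single determination with standard uncertainties (made-up values)
P, uP = 101.3e3, 0.2e3        # Pa
V, uV = 2.45e-4, 0.03e-4      # m^3
n, un = 1.00e-2, 0.01e-2      # mol
T, uT = 296.0, 0.5            # K

R = P * V / (n * T)
# for a pure product/quotient, relative uncertainties add in quadrature
uR = R * np.sqrt((uP/P)**2 + (uV/V)**2 + (un/n)**2 + (uT/T)**2)
print("propagation: R = %.3f +/- %.3f J/(mol K)" % (R, uR))

# statistics over repeated determinations (simulated replicates)
rng = np.random.default_rng(3)
reps = (rng.normal(P, uP, 12) * rng.normal(V, uV, 12)
        / (rng.normal(n, un, 12) * rng.normal(T, uT, 12)))
print("statistics : mean %.3f, standard uncertainty of mean %.3f"
      % (reps.mean(), reps.std(ddof=1) / np.sqrt(12)))
```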
'spup' - an R package for uncertainty propagation in spatial environmental modelling
Sawicka, Kasia; Heuvelink, Gerard
2016-04-01
Computer models have become a crucial tool in engineering and environmental sciences for simulating the behaviour of complex static and dynamic systems. However, while many models are deterministic, the uncertainty in their predictions needs to be estimated before they are used for decision support. Currently, advances in uncertainty propagation and assessment have been paralleled by a growing number of software tools for uncertainty analysis, but none has gained recognition as universally applicable, including case studies with spatial models and spatial model inputs. Due to the growing popularity and applicability of the open source R programming language, we undertook a project to develop an R package that facilitates uncertainty propagation analysis in spatial environmental modelling. In particular, the 'spup' package provides functions for examining uncertainty propagation starting from input data and model parameters, via the environmental model, onto model predictions. The functions include uncertainty model specification, stochastic simulation and propagation of uncertainty using Monte Carlo (MC) techniques, as well as several uncertainty visualization functions. Uncertain environmental variables are represented in the package as objects whose attribute values may be uncertain and described by probability distributions. Both numerical and categorical data types are handled. Spatial auto-correlation within an attribute and cross-correlation between attributes are also accommodated. For uncertainty propagation the package implements the MC approach with efficient sampling algorithms, i.e. stratified random sampling and Latin hypercube sampling. The design includes facilitation of parallel computing to speed up MC computation. The MC realizations may be used as input to the environmental models called from R, or externally. Selected static and interactive visualization methods that are understandable by non-experts with limited background in
Propagation of nuclear data uncertainties in fuel cycle calculations using Monte-Carlo technique
Nowadays, knowledge of uncertainty propagation in depletion calculations is a critical issue because of its bearing on the safety and economic performance of fuel cycles. Response magnitudes such as decay heat, radiotoxicity and isotopic inventory, together with their uncertainties, should be known to handle spent fuel in present fuel cycles (e.g. the high burn-up fuel programme) and furthermore in new fuel cycle designs (e.g. fast breeder reactors and ADS). To deal with this task, there are different error propagation techniques, deterministic (adjoint/forward sensitivity analysis) and stochastic (Monte Carlo technique), to evaluate the error in response magnitudes due to nuclear data uncertainties. In our previous works, cross-section uncertainties were propagated using a Monte Carlo technique to calculate the uncertainty of response magnitudes such as decay heat and neutron emission. The propagation of decay data, fission yield and cross-section uncertainties was also performed, but the isotopic composition was the only response magnitude calculated. Following the previous technique, the nuclear data uncertainties are here taken into account and propagated to the response magnitudes decay heat and radiotoxicity. These uncertainties are assessed during cooling time. To evaluate this Monte Carlo technique, two different applications are performed. First, a fission pulse decay heat calculation is carried out to check the Monte Carlo technique, using decay data and fission yield uncertainties; the results are then compared with experimental data and a reference calculation (JEFF Report 20). Second, we assess the impact of basic nuclear data (activation cross-section, decay data and fission yields) uncertainties on relevant fuel cycle parameters (decay heat and radiotoxicity) for a conceptual design of a modular European Facility for Industrial Transmutation (EFIT) fuel cycle. After identifying which time steps have higher uncertainties, an assessment of which uncertainties are most relevant is performed
Pragmatic aspects of uncertainty propagation: A conceptual review
Thacker, W.Carlisle
2015-09-11
When quantifying the uncertainty of the response of a computationally costly oceanographic or meteorological model stemming from the uncertainty of its inputs, practicality demands getting the most information using the fewest simulations. It is widely recognized that, by interpolating the results of a small number of simulations, results of additional simulations can be inexpensively approximated to provide a useful estimate of the variability of the response. Even so, as computing the simulations to be interpolated remains the biggest expense, the choice of these simulations deserves attention. When making this choice, two requirements should be considered: (i) the nature of the interpolation and (ii) the available information about input uncertainty. Examples comparing polynomial interpolation and Gaussian process interpolation are presented for three different views of input uncertainty.
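The two interpolation choices can be compared on a stand-in model: fit both a polynomial and a Gaussian-process interpolant to six "simulations", then push an assumed input distribution through each cheap interpolant. The test function, the GP length scale, and the input uncertainty are illustrative assumptions.

```python
# Polynomial versus Gaussian-process interpolation of a few expensive
# simulations, used as cheap surrogates for estimating response variability.
import numpy as np

def expensive_model(x):                       # stand-in for a costly simulation
    return np.sin(3.0 * x) + 0.5 * x

x_runs = np.linspace(-1.0, 1.0, 6)            # the few affordable simulations
y_runs = expensive_model(x_runs)

# (a) polynomial interpolation of the runs
coef = np.polyfit(x_runs, y_runs, deg=5)
poly = lambda x: np.polyval(coef, x)

# (b) zero-mean Gaussian-process interpolation with an RBF kernel
ell = 0.4                                     # length scale (a modelling choice)
k = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)
alpha = np.linalg.solve(k(x_runs, x_runs) + 1e-10 * np.eye(6), y_runs)
gp = lambda x: k(np.atleast_1d(x), x_runs) @ alpha

# propagate the input uncertainty through each cheap interpolant
rng = np.random.default_rng(4)
xs = rng.normal(0.0, 0.3, 100000)             # assumed input uncertainty
print("poly surrogate: std of response %.4f" % poly(xs).std())
print("GP surrogate  : std of response %.4f" % gp(xs).std())
print("true model    : std of response %.4f" % expensive_model(xs).std())
```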
Analytic Matrix Method for the Study of Propagation Characteristics of a Bent Planar Waveguide
LIU Qing; CAO Zhuang-Qi; SHEN Qi-Shun; DOU Xiao-Ming; CHEN Ying-Li
2000-01-01
An analytic matrix method is used to analyze and accurately calculate the propagation constant and bending losses of a bent planar waveguide. This method gives not only a dispersion equation with explicit physical insight, but also accurate complex propagation constants.
Monte Carlo uncertainty propagation approaches in ADS burn-up calculations
Highlights: ► Two Monte Carlo uncertainty propagation approaches are compared. ► How to make both approaches equivalent is presented and applied. ► An ADS burn-up calculation is selected as the application of the approaches. ► The cross-section uncertainties of 239Pu and 241Pu are propagated. ► Cross-correlations appear as a source of differences between approaches. - Abstract: In activation calculations, there are several approaches to quantify uncertainties: deterministic by means of sensitivity analysis, and stochastic by means of Monte Carlo. Here, two different Monte Carlo approaches for nuclear data uncertainty are presented. The first one is the Total Monte Carlo (TMC). The second one uses Monte Carlo sampling of the covariance information included in the nuclear data libraries to propagate these uncertainties throughout the activation calculations; this latter approach is what we have named Covariance Uncertainty Propagation (CUP). This work presents both approaches and their differences. They are also compared by means of an activation calculation, in which the cross-section uncertainties of 239Pu and 241Pu are propagated in an ADS activation calculation.
An algorithm to improve sampling efficiency for uncertainty propagation using sampling based method
Campolina, Daniel; Lima, Paulo Rubens I., E-mail: campolina@cdtn.br, E-mail: pauloinacio@cpejr.com.br [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG), Belo Horizonte, MG (Brazil). Servico de Tecnologia de Reatores; Pereira, Claubia; Veloso, Maria Auxiliadora F., E-mail: claubia@nuclear.ufmg.br, E-mail: dora@nuclear.ufmg.br [Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, MG (Brazil). Dept. de Engenharia Nuclear
2015-07-01
Sample size and computational uncertainty were varied in order to investigate the sample efficiency and convergence of the sampling-based method for uncertainty propagation. The transport code MCNPX was used to simulate a LWR model and allow the mapping from uncertain inputs of the benchmark experiment to uncertain outputs. Random sampling efficiency was improved through the use of an algorithm for selecting distributions. Mean range, standard deviation range and skewness were verified in order to obtain a better representation of uncertainty figures. A standard deviation of 5 pcm in the propagated uncertainties for 10 n-sample replicates was adopted as the convergence criterion of the method. An estimate of 75 pcm uncertainty on the reactor keff was obtained by using a sample of size 93 and a computational uncertainty of 28 pcm to propagate the 1σ uncertainty of the burnable poison radius. For a fixed computational time, in order to reduce the variance of the propagated uncertainty, it was found, for the example under investigation, that it is preferable to double the sample size rather than to double the number of particles followed in the MCNPX Monte Carlo process. (author)
Wansik Yu; Eiichi Nakakita; Sunmin Kim; Kosei Yamaguchi
2016-01-01
The common approach to quantifying the precipitation forecast uncertainty is ensemble simulations where a numerical weather prediction (NWP) model is run for a number of cases with slightly different initial conditions. In practice, the spread of ensemble members in terms of flood discharge is used as a measure of forecast uncertainty due to uncertain precipitation forecasts. This study presents the uncertainty propagation of rainfall forecast into hydrological response with catchment scale t...
Pedroni, Nicola; Zio, Enrico; Ferrario, Elisa; Pasanisi, Alberto; Couplet, Mathieu
2013-01-01
We consider a model for the risk-based design of a flood protection dike, and use probability distributions to represent aleatory uncertainty and possibility distributions to describe the epistemic uncertainty associated with the poorly known parameters of such probability distributions. A hybrid method is introduced to hierarchically propagate the two types of uncertainty, and the results are compared with those of a Monte Carlo-based Dempster-Shafer approach employing independent random sets ...
Propagation of uncertainties in the nuclear DFT models
Kortelainen, Markus
2014-01-01
Parameters of nuclear density functional theory (DFT) models are usually adjusted to experimental data. As a result they carry a certain theoretical error, which, as a consequence, carries through to the predicted quantities. In this work we address the propagation of theoretical error, within the nuclear DFT models, from the model parameters to the predicted observables. In particular, the focus is set on the Skyrme energy density functional models.
Propagation of Computational Uncertainty Using the Modern Design of Experiments
DeLoach, Richard
2007-01-01
This paper describes the use of formally designed experiments to aid in the error analysis of a computational experiment. A method is described by which the underlying code is approximated with relatively low-order polynomial graduating functions represented by truncated Taylor series approximations to the true underlying response function. A resource-minimal approach is outlined by which such graduating functions can be estimated from a minimum number of case runs of the underlying computational code. Certain practical considerations are discussed, including ways and means of coping with high-order response functions. The distributional properties of prediction residuals are presented and discussed. A practical method is presented for quantifying that component of the prediction uncertainty of a computational code that can be attributed to imperfect knowledge of independent variable levels. This method is illustrated with a recent assessment of uncertainty in computational estimates of Space Shuttle thermal and structural reentry loads attributable to ice and foam debris impact on ascent.
Error Estimation and Uncertainty Propagation in Computational Fluid Mechanics
Zhu, J. Z.; He, Guowei; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
Numerical simulation has now become an integral part of the engineering design process. Critical design decisions are routinely made based on simulation results and conclusions. Verification and validation of the reliability of the numerical simulation is therefore vitally important in the engineering design process. We propose to develop theories and methodologies that can automatically provide quantitative information about the reliability of the numerical simulation, by estimating the numerical approximation error, computational model induced errors and the uncertainties contained in the mathematical models, so that the reliability of the numerical simulation can be verified and validated. We also propose to develop and implement methodologies and techniques that can control the error and uncertainty during the numerical simulation so that the reliability of the numerical simulation can be improved.
Soft computing approaches to uncertainty propagation in environmental risk mangement
Kumar, Vikas
2008-01-01
Real-world problems, especially those that involve natural systems, are complex and composed of many nondeterministic components having non-linear coupling. It turns out that in dealing with such systems, one has to face a high degree of uncertainty and tolerate imprecision. Classical system models based on numerical analysis, crisp logic or binary logic have characteristics of precision and categoricity and are classified as hard computing approaches. In contrast, soft computing approaches like pro...
Gadomski, P. J.; Deems, J. S.; Glennie, C. L.; Hartzell, P. J.; Butler, H.; Finnegan, D. C.
2015-12-01
The use of high-resolution topographic data in the form of three-dimensional point clouds obtained from laser scanning systems (LiDAR) is becoming common across scientific disciplines. However, little consideration has typically been given to the accuracy and precision of LiDAR-derived measurements at the individual point scale. Numerous disparate sources contribute to the aggregate precision of each point measurement, including uncertainties in the range measurement, measurement of the attitude and position of the LiDAR collection platform, uncertainties associated with the interaction between the laser pulse and the target surface, and more. We have implemented open-source software tools to calculate per-point stochastic measurement errors for a point cloud using the general LiDAR georeferencing equation. We demonstrate the use of these propagated uncertainties by applying our methods to data collected by the Airborne Snow Observatory ALS, a NASA JPL project using a combination of airborne hyperspectral and LiDAR data to estimate snow-water equivalent distributions over full river basins. We present basin-scale snow depth maps with associated uncertainties, and demonstrate the propagation of those uncertainties to snow volume and snow-water equivalent calculations.
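The georeferencing equation referred to above involves platform position and attitude terms; a stripped-down two-dimensional analogue already shows the mechanics of per-point propagation. Below, a range/angle measurement maps to coordinates and the measurement covariance is pushed through the Jacobian, with a Monte Carlo check; all numbers are invented.

```python
# First-order per-point error propagation for a toy 2-D georeferencing
# equation x = r cos(theta), y = r sin(theta).
import numpy as np

r, theta = 1200.0, np.deg2rad(15.0)            # range (m) and scan angle
sr, st = 0.03, np.deg2rad(0.01)                # 1-sigma range and angle errors

J = np.array([[np.cos(theta), -r * np.sin(theta)],
              [np.sin(theta),  r * np.cos(theta)]])
C_meas = np.diag([sr ** 2, st ** 2])
C_xy = J @ C_meas @ J.T                        # propagated point covariance
print("propagated 1-sigma x,y (m):", np.sqrt(np.diag(C_xy)))

# Monte Carlo check
rng = np.random.default_rng(5)
rs = rng.normal(r, sr, 100000)
ts = rng.normal(theta, st, 100000)
pts = np.column_stack([rs * np.cos(ts), rs * np.sin(ts)])
print("Monte Carlo 1-sigma x,y (m):", pts.std(axis=0))
```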
Epistemic uncertainty propagation in energy flows between structural vibrating systems
Xu, Menghui; Du, Xiaoping; Qiu, Zhiping; Wang, Chong
2016-03-01
A dimension-wise method for predicting fuzzy energy flows between structural vibrating systems coupled by joints with epistemic uncertainties is established. Based on its Legendre polynomial approximation at α=0, both the minimum and maximum point vectors of the energy flow of interest are calculated dimension by dimension within the space spanned by the interval parameters determined by the fuzzy parameters at α=0, and the resulting interval bounds are used to assemble the fuzzy energy flows of interest. Besides the proposed method, the vertex method as well as two current methods are also applied. Comparisons among the results of the different methods are carried out on two numerical examples, and the accuracy of all methods is simultaneously verified by Monte Carlo simulation.
Servin, Christian
2015-01-01
On various examples ranging from geosciences to environmental sciences, this book explains how to generate an adequate description of uncertainty, how to justify semiheuristic algorithms for processing uncertainty, and how to make these algorithms more computationally efficient. It explains in what sense the existing approach to uncertainty as a combination of random and systematic components is only an approximation, presents a more adequate three-component model with an additional periodic error component, and explains how uncertainty propagation techniques can be extended to this model. The book provides a justification for a practically efficient heuristic technique (based on fuzzy decision-making). It explains how the computational complexity of uncertainty processing can be reduced. The book also shows how to take into account that in real life, the information about uncertainty is often only partially known, and, on several practical examples, explains how to extract the missing information about uncer...
For all the physical components that comprise a nuclear system there is an uncertainty. Assessing the impact of uncertainties in the simulation of fissionable material systems is essential for best-estimate calculations, which have been replacing conservative model calculations as computational power increases. The propagation of uncertainty in a simulation using a Monte Carlo code by sampling the input parameters is recent because of the huge computational effort required. In this work a sample space of MCNPX calculations was used to propagate the uncertainty. The sample size was optimized using the Wilks formula for a 95th percentile and a two-sided statistical tolerance interval of 95%. Uncertainties in input parameters of the reactor considered included geometry dimensions and densities. The capacity of the sampling-based method for burn-up calculations was demonstrated when the sample size is optimized and many parameter uncertainties are investigated together in the same input. In particular, it was shown that during burn-up the variance obtained when considering all parameter uncertainties together is equivalent to the sum of the variances obtained when the parameter uncertainties are sampled separately.
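The Wilks sample sizes used in such studies follow from order statistics alone. The sketch below searches for the smallest N satisfying the one-sided and two-sided 95%/95% criteria; the two-sided case reproduces the sample size of 93 quoted elsewhere in this collection.

```python
# Wilks sample-size rule: smallest number of code runs N such that the
# sample extremes form a 95%/95% (quantile/confidence) tolerance bound.

def wilks_one_sided(quantile=0.95, confidence=0.95):
    N = 1
    while 1.0 - quantile ** N < confidence:
        N += 1
    return N

def wilks_two_sided(quantile=0.95, confidence=0.95):
    N = 2
    while (1.0 - quantile ** N
           - N * (1.0 - quantile) * quantile ** (N - 1)) < confidence:
        N += 1
    return N

print("one-sided 95/95:", wilks_one_sided())   # 59 runs
print("two-sided 95/95:", wilks_two_sided())   # 93 runs
```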
Rodríguez-Rincón, J. P.; Pedrozo-Acuña, A.; Breña-Naranjo, J. A.
2015-01-01
This investigation aims to study the propagation of meteorological uncertainty within a cascade modelling approach to flood prediction. The methodology comprised a numerical weather prediction (NWP) model, a distributed rainfall–runoff model and a 2-D hydrodynamic model. The uncertainty evaluation was carried out at the meteorological and hydrological levels of the model chain, which enabled the investigation of how errors that originated in the rainfall prediction ...
Myers, Casey A.; Laz, Peter J.; Shelburne, Kevin B.; Davidson, Bradley S.
2014-01-01
Uncertainty that arises from measurement error and parameter estimation can significantly affect the interpretation of musculoskeletal simulations; however, these effects are rarely addressed. The objective of this study was to develop an open-source probabilistic musculoskeletal modeling framework to assess how measurement error and parameter uncertainty propagate through a gait simulation. A baseline gait simulation was performed for a male subject using OpenSim for three stages: inverse ki...
Understanding uncertainty propagation in life cycle assessments of waste management systems
Bisinella, Valentina; Conradsen, Knut; Christensen, Thomas Højlund; Astrup, Thomas Fruergaard
2015-01-01
Uncertainty analysis in Life Cycle Assessments (LCAs) of waste management systems often proves obscure and complex, with key parameters rarely determined on a case-by-case basis. The paper shows an application of a simplified approach to uncertainty coupled with a Global Sensitivity Analysis (GSA......) perspective on three alternative waste management systems for Danish single-family household waste. The approach provides a fast and systematic method to select the most important parameters in the LCAs, understand their propagation and contribution to uncertainty....
Comparison of nuclear data uncertainty propagation methodologies for PWR burn-up simulations
Diez, Carlos Javier; Hoefer, Axel; Porsch, Dieter; Cabellos, Oscar
2014-01-01
Several methodologies using different levels of approximation have been developed for propagating nuclear data uncertainties in nuclear burn-up simulations. Most methods fall into the two broad classes of Monte Carlo approaches, which are exact apart from statistical uncertainties but require additional computation time, and first-order perturbation theory approaches, which are efficient for not too large numbers of considered response functions but only applicable for sufficiently small nuclear data uncertainties. Some methods neglect isotopic composition uncertainties induced by the depletion steps of the simulations, others neglect neutron flux uncertainties, and the accuracy of a given approximation is often very hard to quantify. In order to get a better sense of the impact of different approximations, this work aims to compare results obtained with different approximate methodologies against an exact method, namely the NUDUNA Monte Carlo based approach developed by AREVA GmbH. In addition, the impact ...
Propagation of Nuclear Data Uncertainties for ELECTRA Burn-up Calculations
Sjöstrand, H.; Alhassan, E.; Duan, J.; Gustavsson, C.; Koning, A. J.; Pomp, S.; Rochman, D.; Österlund, M.
2014-04-01
The European Lead-Cooled Training Reactor (ELECTRA) has been proposed as a training reactor for fast systems within the Swedish nuclear program. It is a low-power fast reactor cooled by pure liquid lead. In this work, we propagate the uncertainties in 239Pu transport data to uncertainties in the fuel inventory of ELECTRA during the reactor lifetime using the Total Monte Carlo approach (TMC). Within the TENDL project, nuclear model input parameters were randomized within their uncertainties and 740 239Pu nuclear data libraries were generated. These libraries are used as inputs to reactor codes, in our case SERPENT, to perform uncertainty analysis of the nuclear reactor inventory during burn-up. The uncertainty in the inventory determines uncertainties in: the long-term radio-toxicity, the decay heat, the evolution of reactivity parameters, gas pressure and volatile fission product content. In this work, a methodology called fast TMC is utilized, which reduces the overall calculation time. The uncertainties of some minor actinides were observed to be rather large and therefore their impact on multiple recycling should be investigated further. It was also found that criticality benchmarks can be used to reduce inventory uncertainties due to nuclear data. Further studies are needed to include fission yield uncertainties, more isotopes, and a larger set of benchmarks.
Mullor, R. [Dpto. Estadistica e Investigacion Operativa, Universidad Alicante (Spain); Sanchez, A., E-mail: aisanche@eio.upv.e [Dpto. Estadistica e Investigacion Operativa Aplicadas y Calidad, Universidad Politecnica Valencia, Camino de Vera s/n 46022 (Spain); Martorell, S. [Dpto. Ingenieria Quimica y Nuclear, Universidad Politecnica Valencia (Spain); Martinez-Alzamora, N. [Dpto. Estadistica e Investigacion Operativa Aplicadas y Calidad, Universidad Politecnica Valencia, Camino de Vera s/n 46022 (Spain)
2011-06-15
Safety-related systems performance optimization is classically based on quantifying the effects that testing and maintenance activities have on reliability and cost (R+C). However, R+C quantification is often incomplete in the sense that important uncertainties may not be considered. A considerable number of studies have been published in the last decade in the field of R+C based optimization considering uncertainties. They have demonstrated that the inclusion of uncertainties in the optimization gives the decision maker insight into how uncertain the R+C results are and how this uncertainty matters, as it can result in differences in the outcome of the decision making process. Several methods of uncertainty propagation based on the theory of tolerance regions have been proposed in the literature, depending on the particular characteristics of the variables in the output and their relations. In this context, this paper focuses on the application of non-parametric and parametric methods to analyze uncertainty propagation, implemented on a multi-objective optimization problem where reliability and cost act as decision criteria and maintenance intervals act as decision variables. Finally, a comparison of the results of these applications and the conclusions obtained are presented.
Wansik Yu
2016-01-01
The common approach to quantifying precipitation forecast uncertainty is ensemble simulation, where a numerical weather prediction (NWP) model is run for a number of cases with slightly different initial conditions. In practice, the spread of ensemble members in terms of flood discharge is used as a measure of forecast uncertainty due to uncertain precipitation forecasts. This study presents the propagation of rainfall forecast uncertainty into the hydrological response at catchment scale through distributed rainfall-runoff modelling based on the forecasted ensemble rainfall of an NWP model. First, the forecast rainfall error based on the BIAS is compared with the flood forecast error to assess the error propagation. Second, the variability of flood forecast uncertainty with catchment scale is discussed using the ensemble spread. We then also assess the flood forecast uncertainty with catchment scale using an estimated regression equation between ensemble rainfall BIAS and discharge BIAS. Finally, the flood forecast uncertainty with RMSE using specific discharge at catchment scale is discussed. Our study is carried out and verified using the largest flood event, caused by typhoon “Talas” of 2011, over the 33 subcatchments of the Shingu river basin (2,360 km²), which is located in the Kii Peninsula, Japan.
Parker, Jack C.; Park, Eungyu; Tang, Guoping
2008-11-01
A vertically-integrated analytical model for dissolved phase transport is described that considers a time-dependent DNAPL source based on the upscaled dissolution kinetics model of Parker and Park with extensions to consider time-dependent source zone biodecay, partial source mass reduction, and remediation-enhanced source dissolution kinetics. The model also considers spatial variability in aqueous plume decay, which is treated as the sum of aqueous biodecay and volatilization due to diffusive transport and barometric pumping through the unsaturated zone. The model is implemented in Excel/VBA coupled with (1) an inverse solution that utilizes prior information on model parameters and their uncertainty to condition the solution, and (2) an error analysis module that computes parameter covariances and total prediction uncertainty due to regression error and parameter uncertainty. A hypothetical case study is presented to evaluate the feasibility of calibrating the model from limited noisy field data. The results indicate that prediction uncertainty increases significantly over time following calibration, primarily due to propagation of parameter uncertainty. However, differences between the predicted performance of source zone partial mass reduction and the known true performance were reasonably small. Furthermore, a clear difference is observed between the predicted performance for the remedial action scenario versus that for a no-action scenario, which is consistent with the true system behavior. The results suggest that the model formulation can be effectively utilized to assess monitored natural attenuation and source remediation options if careful attention is given to model calibration and prediction uncertainty issues.
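A toy calibration problem illustrates the error-analysis idea (not the paper's DNAPL model): fit a two-parameter decay model to noisy data, then propagate the estimated parameter covariance to predictions. The relative contribution of parameter uncertainty grows as the forecast moves beyond the calibration window, mirroring the growth of prediction uncertainty reported above.

```python
# Calibrate a simple model, then propagate parameter covariance to the
# relative prediction uncertainty via var(y_hat) = J pcov J^T.
import numpy as np
from scipy.optimize import curve_fit

def model(t, a, b):
    return a * np.exp(-b * t)

rng = np.random.default_rng(6)
t_obs = np.linspace(0.0, 5.0, 15)              # calibration window
sigma_obs = 0.05
y_obs = model(t_obs, 2.0, 0.4) + rng.normal(0.0, sigma_obs, t_obs.size)

popt, pcov = curve_fit(model, t_obs, y_obs, p0=[1.0, 1.0])

for t in [2.0, 10.0, 20.0]:                    # inside and beyond the data
    eps = 1e-6                                 # finite-difference Jacobian
    J = np.array([(model(t, popt[0] + eps, popt[1]) - model(t, *popt)) / eps,
                  (model(t, popt[0], popt[1] + eps) - model(t, *popt)) / eps])
    var_p = J @ pcov @ J                       # parameter contribution only
    y_hat = model(t, *popt)
    print("t=%5.1f  prediction %.4f  relative 1-sigma from parameters %5.1f%%"
          % (t, y_hat, 100 * np.sqrt(var_p) / y_hat))
```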
Development of analytical orbit propagation technique with drag
1979-01-01
Two orbit computation methods were used: (1) a numerical method, in which the satellite differential equations were solved in a step-by-step manner using a mathematical algorithm taken from numerical analysis; and (2) an analytical method, in which the solution was expressed by explicit functions of the independent variable. Analytical drag modules, a tesseral terms initialization module, a second order and long period terms module, and verification testing of the ASOP program were also considered.
Propagation of statistical and nuclear data uncertainties in Monte Carlo burn-up calculations
Garcia-Herranz, Nuria [Departamento de Ingenieria Nuclear, Universidad Politecnica de Madrid, UPM (Spain)], E-mail: nuria@din.upm.es; Cabellos, Oscar [Departamento de Ingenieria Nuclear, Universidad Politecnica de Madrid, UPM (Spain); Sanz, Javier [Departamento de Ingenieria Energetica, Universidad Nacional de Educacion a Distancia, UNED (Spain); Juan, Jesus [Laboratorio de Estadistica, Universidad Politecnica de Madrid, UPM (Spain); Kuijper, Jim C. [NRG - Fuels, Actinides and Isotopes Group, Petten (Netherlands)
2008-04-15
Two methodologies to propagate the uncertainties on the nuclide inventory in combined Monte Carlo-spectrum and burn-up calculations are presented, based on sensitivity/uncertainty and random sampling techniques (uncertainty Monte Carlo method). Both enable the assessment of the impact of uncertainties in the nuclear data as well as uncertainties due to the statistical nature of the Monte Carlo neutron transport calculation. The methodologies are implemented in our MCNP-ACAB system, which combines the neutron transport code MCNP-4C and the inventory code ACAB. A high burn-up benchmark problem is used to test the MCNP-ACAB performance in inventory predictions, with no uncertainties. A good agreement is found with the results of other participants. This benchmark problem is also used to assess the impact of nuclear data uncertainties and statistical flux errors in high burn-up applications. A detailed calculation is performed to evaluate the effect of cross-section uncertainties in the inventory prediction, taking into account the temporal evolution of the neutron flux level and spectrum. Very large uncertainties are found at the unusually high burn-up of this exercise (800 MWd/kgHM). To compare the impact of the statistical errors in the calculated flux with respect to the cross-section uncertainties, a simplified problem is considered, taking a constant neutron flux level and spectrum. It is shown that, provided the flux statistical deviations in the Monte Carlo transport calculation do not exceed a given value, the effect of the flux errors on the calculated isotopic inventory is negligible (even at very high burn-up) compared to the effect of the large cross-section uncertainties available at present in the data files.
This thesis presents a comprehensive sensitivity/uncertainty analysis of reactor performance parameters (e.g. the k-effective) with respect to the base nuclear data from which they are computed. The analysis starts at the fundamental step, the Evaluated Nuclear Data Files and the uncertainties inherently associated with the data they contain, available in the form of variance/covariance matrices. We show that when a methodical and consistent computation of sensitivity is performed, conventional deterministic formalisms can be sufficient to propagate nuclear data uncertainties with the level of accuracy obtained by the most advanced tools, such as state-of-the-art Monte Carlo codes. By applying our developed methodology to three exercises proposed by the OECD (Uncertainty Analysis for Criticality Safety Assessment Benchmarks), we provide insights into the underlying physical phenomena associated with the formalisms used. (author)
Analysis of the analytical uncertainties of the methodology for simulating the processes that determine the final isotopic inventory of spent fuel; the ARIANE experiment explores the burn-up simulation part.
Analytical Model for Fictitious Crack Propagation in Concrete Beams
Ulfkjær, J. P.; Krenk, Steen; Brincker, Rune
1995-01-01
An analytical model for load-displacement curves of concrete beams is presented. The load-displacement curve is obtained by combining two simple models. The fracture is modeled by a fictitious crack in an elastic layer around the midsection of the beam. Outside the elastic layer the deformations are modeled by beam theory. The state of stress in the elastic layer is assumed to depend bilinearly on local elongation, corresponding to a linear softening relation for the fictitious crack. Results from the analytical model are compared with results from a more detailed model based on numerical methods for different beam sizes. The analytical model is shown to be in agreement with the numerical results if the thickness of the elastic layer is taken as half the beam depth. It is shown that the point on the load-displacement curve where the fictitious crack starts to develop and the point where...
Analytical and numerical methods for wave propagation in fluid media
Murawski, K
2002-01-01
This book surveys analytical and numerical techniques appropriate to the description of fluid motion, with an emphasis on the most widely used techniques exhibiting the best performance. Analytical and numerical solutions to hyperbolic systems of wave equations are the primary focus of the book. In addition, many interesting wave phenomena in fluids are considered using examples such as acoustic waves, the emission of air pollutants, magnetohydrodynamic waves in the solar corona, solar wind interaction with the planet Venus, and ion-acoustic solitons.
Propagation of nuclear data uncertainties for ELECTRA burn-up calculations
Sjöstrand, H.; Duan, J.; Gustavsson, C.; Koning, A.; Pomp, S.; Rochman, D.; Österlund, M.
2013-01-01
The European Lead-Cooled Training Reactor (ELECTRA) has been proposed as a training reactor for fast systems within the Swedish nuclear program. It is a low-power fast reactor cooled by pure liquid lead. In this work, we propagate the uncertainties in Pu-239 transport data to uncertainties in the fuel inventory of ELECTRA during the reactor life using the Total Monte Carlo approach (TMC). Within the TENDL project the nuclear model input parameters were randomized within their uncertainties and 740 Pu-239 nuclear data libraries were generated. These libraries are used as inputs to reactor codes, in our case SERPENT, to perform uncertainty analysis of the nuclear reactor inventory during burn-up. The uncertainty in the inventory determines uncertainties in: the long-term radio-toxicity, the decay heat, the evolution of reactivity parameters, gas pressure and volatile fission product content. In this work, a methodology called fast TMC is utilized, which reduces the overall calculation time. The uncertainty in the ...
The report describes how to use the codes: MUP (Monte Carlo Uncertainty Propagation) for uncertainty analysis by Monte Carlo simulation, including correlation analysis, extreme value identification and study of selected ranges of the variable space; CEC-DES (Central Composite Design) for building experimental matrices according to the requirements of Central Composite and Factorial Experimental Designs; and STRADE (Stratified Random Design) for experimental designs based on Latin Hypercube Sampling techniques. Application fields of the codes are probabilistic risk assessment, experimental design, sensitivity analysis and system identification problems
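As a flavor of the stratified designs STRADE automates, here is a minimal Latin hypercube sampler: each dimension is split into n equiprobable strata, one draw is taken per stratum, and columns are shuffled independently before mapping to the physical input distributions. The distributions used below are arbitrary examples.

```python
# Minimal Latin hypercube sampler in the spirit of stratified random designs.
import numpy as np
from scipy.stats import norm

def latin_hypercube(n, d, rng):
    """n samples in [0,1]^d, exactly one per stratum in each dimension."""
    u = (np.arange(n)[:, None] + rng.random((n, d))) / n   # stratified draws
    for j in range(d):                                     # decouple dimensions
        rng.shuffle(u[:, j])
    return u

rng = np.random.default_rng(7)
u = latin_hypercube(8, 2, rng)

# map uniform strata to the physical input distributions (examples)
x1 = norm.ppf(u[:, 0], loc=10.0, scale=2.0)    # N(10, 2^2) input
x2 = 0.5 + 1.5 * u[:, 1]                       # U(0.5, 2.0) input
print(np.column_stack([x1, x2]))
```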
Analytical solution to an investment problem under uncertainties with shocks
Cl\\'audia Nunes; Rita Pimentel
2015-01-01
We derive the optimal investment decision in a project where both demand and investment costs are stochastic processes, possibly subject to shocks. We extend the approach used in Dixit and Pindyck (1994), chapter 6.5, to deal with two sources of uncertainty, but assume that the underlying processes are no longer geometric Brownian diffusions but rather jump diffusion processes. For the class of isoelastic functions that we address in this paper, it is still possible to derive a closed exp...
Non-parametric order statistics method applied to uncertainty propagation in fuel rod calculations
Advances in modeling fuel rod behavior and the accumulation of adequate experimental data have made possible the introduction of quantitative methods to estimate the uncertainty of predictions made with best-estimate fuel rod codes. The uncertainty range of the input variables is characterized by a truncated distribution, typically a normal, lognormal, or uniform distribution. While the distribution for fabrication parameters is defined to cover the design or fabrication tolerances, the distribution of modeling parameters is inferred from the experimental database consisting of separate effects tests and global tests. The final step of the methodology uses a Monte Carlo type of random sampling of all relevant input variables and performs best-estimate code calculations to propagate these uncertainties in order to evaluate the uncertainty range of outputs of interest for design analysis, such as internal rod pressure and fuel centerline temperature. The statistical method underlying this Monte Carlo sampling is non-parametric order statistics, which is perfectly suited to evaluate quantiles of populations with unknown distribution. The application of this method is straightforward in the case of one single fuel rod, when a 95/95 statement is applicable: 'with a probability of 95% and confidence level of 95% the values of the output of interest are below a certain value'. Therefore, the 0.95-quantile is estimated for the distribution of all possible values of one fuel rod with a statistical confidence of 95%. On the other hand, a more elaborate procedure is required if all the fuel rods in the core are being analyzed. In this case, the aim is to evaluate the following global statement: with 95% confidence level, the expected number of fuel rods which do not exceed a certain value is all the fuel rods in the core except only a few fuel rods. In both cases, the thresholds determined by the analysis should be below the acceptable safety design limit. An indirect
Analytical Model for Fictitious Crack Propagation in Concrete Beams
Ulfkjær, J. P.; Krenk, S.; Brincker, Rune
An analytical model for load-displacement curves of unreinforced notched and un-notched concrete beams is presented. The load-displacement curve is obtained by combining two simple models. The fracture is modelled by a fictitious crack in an elastic layer around the mid-section of the beam. Outside the elastic layer the deformations are modelled by the Timoshenko beam theory. The state of stress in the elastic layer is assumed to depend bi-linearly on local elongation, corresponding to a linear softening relation for the fictitious crack. For different beam sizes, results from the analytical model...... the load-displacement curve where the fictitious crack starts to develop, and the point where the real crack starts to grow, will always correspond to the same bending moment. Closed-form solutions for the maximum size of the fracture zone and the minimum slope of the load-displacement curve are given...
Development of depletion code surrogate models for uncertainty propagation in scenario studies
Transition scenario studies are necessary to compare different options for the evolution of the reactor fleet. The COSI code, developed by CEA, is used to perform scenario calculations. It can model any fuel type, reactor fleet and fuel facility, and permits the tracking of U, Pu, minor actinide and fission product nuclides on a large time scale. COSI is coupled with the CESAR code, which performs the depletion calculations based on one-group cross-section libraries and nuclear data. Different types of uncertainties have an impact on scenario studies: nuclear data and scenario assumptions. It is therefore necessary to evaluate their impact on the major scenario results. The methodology adopted to propagate these uncertainties throughout the scenario calculations is a stochastic approach. Considering the number of inputs to be sampled in order to perform a stochastic calculation of the propagated uncertainty, it is necessary to reduce the calculation time. Given that evolution calculations represent approximately 95% of the total scenario simulation time, an optimization can be made by developing and implementing a library of surrogate models of CESAR in COSI. The input parameters of CESAR are sampled with URANIE, the CEA uncertainty platform, and for every sample the isotopic composition after evolution evaluated with CESAR is stored. Statistical analysis of the input and output tables then allows the behavior of CESAR to be modeled for each CESAR library, i.e., a surrogate model is built. Several quality tests are performed on each surrogate model to ensure that its predictive power is satisfactory. Afterwards, a new routine implemented in COSI reads these surrogate models and uses them in place of CESAR calculations. A preliminary study of the gain in calculation time shows that the use of surrogate models makes stochastic calculation of the propagated uncertainty feasible. Once the set of surrogate models is complete, one of the first expected results will be the
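A minimal surrogate-building sketch of the same workflow, with a quadratic least-squares fit standing in for the CESAR surrogate library and a hypothetical cesar() function standing in for the depletion code; the quality test on held-out samples mirrors the paper's predictive-power checks.

```python
import numpy as np

rng = np.random.default_rng(2)

def cesar(x):
    """Placeholder for a depletion calculation (hypothetical response)."""
    return np.exp(-0.5 * x[:, 0]) * (1.0 + 0.1 * x[:, 1])

X = rng.uniform(0.0, 2.0, size=(200, 2))   # sampled inputs (e.g. via URANIE)
y = cesar(X)

def features(X):
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1 * x2, x1**2, x2**2])

# Fit on 150 samples, test predictivity on the remaining 50
coef, *_ = np.linalg.lstsq(features(X[:150]), y[:150], rcond=None)
resid = y[150:] - features(X[150:]) @ coef
q2 = 1.0 - resid.var() / y[150:].var()
print(f"predictivity Q2 = {q2:.4f}")   # accept the surrogate only if Q2 is high
```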
Sin, Gürkan; Gernaey, Krist; Eliasson Lantz, Anna
2009-01-01
The uncertainty and sensitivity analysis are evaluated for their usefulness as part of the model-building within Process Analytical Technology applications. A mechanistic model describing a batch cultivation of Streptomyces coelicolor for antibiotic production was used as case study. The input...... compared to the large uncertainty observed in the antibiotic and off-gas CO2 predictions. The output uncertainty was observed to be lower during the exponential growth phase and higher in the stationary and death phases, meaning the model describes some periods better than others. To understand which input parameters are responsible for the output uncertainty, three sensitivity methods (Standardized Regression Coefficients, Morris and differential analysis) were evaluated and compared. The results from these methods were mostly in agreement with each other and revealed that only a few parameters......
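A small sketch of one of the three sensitivity methods named above, Standardized Regression Coefficients: regress standardized outputs on standardized inputs; the synthetic model below is illustrative, not the cultivation model.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 4))                       # sampled model inputs
y = 3.0 * X[:, 0] + 0.5 * X[:, 2] + 0.1 * rng.normal(size=500)

Xs = (X - X.mean(0)) / X.std(0)                     # standardize inputs
ys = (y - y.mean()) / y.std()                       # standardize output
src, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
print(np.round(src, 3))   # large |SRC| flags the few parameters driving uncertainty
```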
Nuclear data uncertainty propagation in a lattice physics code using stochastic sampling
A methodology is presented for 'black box' nuclear data uncertainty propagation in a lattice physics code using stochastic sampling. The methodology has 4 components: i) processing nuclear data variance/covariance matrices including converting the native group structure to a group structure 'compatible' with the lattice physics code, ii) generating (relative) random samples of nuclear data, iii) perturbing the lattice physics code nuclear data according to the random samples, and iv) analyzing the distribution of outputs to estimate the uncertainty. The scheme is described as implemented at PSI, in a modified version of the lattice physics code CASMO-5M, including all relevant practical details. Uncertainty results are presented for a BWR pin-cell at hot zero power conditions and a PWR assembly at hot full power conditions with depletion. Results are presented for uncertainties in eigenvalue, 1-group microscopic cross sections, 2-group macroscopic cross sections, and isotopics. Interesting behavior is observed with burnup, including a minimum uncertainty due to the presence of fertile U-238 and a global effect described as 'synergy', observed when comparing the uncertainty resulting from simultaneous and one-at-a-time variations of nuclear data. (authors)
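Component ii) of this methodology in miniature: drawing (relative) random samples of nuclear data from a variance/covariance matrix via its Cholesky factor; the matrix values below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
mean = np.array([1.0, 1.0, 1.0])                  # relative cross sections
cov = np.array([[0.0004, 0.0001, 0.0],
                [0.0001, 0.0009, 0.0002],
                [0.0,    0.0002, 0.0016]])        # assumed covariance matrix

L = np.linalg.cholesky(cov)                       # cov = L @ L.T
samples = mean + rng.standard_normal((1000, 3)) @ L.T
print(samples.std(0))   # ~ [0.02, 0.03, 0.04], the square roots of the variances
```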
Propagation of systematic uncertainty due to data reduction in transmission measurement of iron
A technique of determinantal inequalities to estimate the bounds of statistical and systematic uncertainties in neutron cross-section measurements has been developed. In the measurement of a neutron cross section, correlation is manifested through the process of measurement and through many systematic components such as the geometrical factor, half-life, backscattering, etc. The propagation of experimental uncertainties through the reduction of cross-section data is itself a complicated procedure and has been attracting attention in recent times. In this paper, the concept of determinantal inequalities is applied to a transmission measurement of the iron cross section, demonstrating how the systematic uncertainty dominates the statistical one in such data reduction procedures and estimating their individual bounds. (author). 2 refs., 1 tab
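A worked example (with illustrative numbers, not the paper's data) of first-order propagation through the standard transmission-to-cross-section reduction sigma = -ln(T)/n, showing how separate statistical and systematic components of the transmission uncertainty map to the cross section:

```python
import numpy as np

n = 0.42          # areal number density (atoms/barn), assumed
T = 0.60          # measured transmission
dT_stat = 0.005   # statistical (counting) uncertainty of T, assumed
dT_syst = 0.012   # systematic component (geometry, backscattering, ...), assumed

sigma = -np.log(T) / n
# |d(sigma)/dT| = 1 / (n * T), so each component of dT scales the same way
dsigma_stat = dT_stat / (n * T)
dsigma_syst = dT_syst / (n * T)
print(f"sigma = {sigma:.3f} b, stat = {dsigma_stat:.3f} b, syst = {dsigma_syst:.3f} b")
# As in the paper, the systematic component dominates the statistical one.
```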
Uncertainty propagation in a 3-D thermal code for performance assessment of a nuclear waste disposal
Given the very large time scale involved, the performance assessment of a nuclear waste repository requires numerical modelling. Because we are uncertain of the exact values of the input parameters, we have to analyse the impact of these uncertainties on the outcome of the physical models. The EDF Research and Development Division has developed a reliability method to propagate these uncertainties or variabilities through models, which requires far fewer physical simulations than the usual simulation methods. We apply the reliability method MEFISTO to a base case modelling the heat transfers in a virtual disposal at the future site of the French underground research laboratory in the East of France. This study is conducted in collaboration with ANDRA, the French nuclear waste management agency. With this exercise, we want to evaluate how the thermal behaviour of a concept relates to the variation of the physical parameters and their uncertainty. (author)
This PhD study is in the field of nuclear energy, the back end of the nuclear fuel cycle and uncertainty calculations. The CEA must design the prototype ASTRID, a sodium-cooled fast reactor (SFR) and one of the concepts selected by the Generation IV forum, for which the calculation of the value and uncertainty of the decay heat has a significant impact. In this study, a code for propagating nuclear data uncertainties to the decay heat in SFRs is developed. The work took place in three stages. The first step limited the number of parameters involved in the calculation of the decay heat. For this, a decay heat experiment on the PHENIX reactor (PUIREX 2008) was studied to validate the DARWIN package experimentally for SFRs and to quantify the source terms of the decay heat. The second step aimed at developing a code for the propagation of uncertainties: CyRUS (Cycle Reactor Uncertainty and Sensitivity). A deterministic propagation method was chosen because the calculations are fast and reliable. The assumptions of linearity and normality have been validated theoretically. The code has also been successfully compared with a stochastic code on the example of the thermal burst fission curve of 235U. The last part was an application of the code to several experiments: the decay heat of a reactor, the isotopic composition of a fuel pin and the burst fission curve of 235U. The code demonstrated the possibility of feedback on the nuclear data that drive the uncertainty of this problem. Two main results were highlighted. Firstly, the simplifying assumptions of deterministic codes are compatible with a precise calculation of the uncertainty of the decay heat. Secondly, the developed method is intrusive and allows feedback on nuclear data from experiments on the back end of the nuclear fuel cycle. In particular, this study showed how important it is to measure independent fission yields precisely, along with their covariance matrices, in order to improve the accuracy of the calculation of the
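A minimal sketch of the standard first-order ("sandwich") propagation that a deterministic code of this kind relies on, var(R) = S C S^T, with S the sensitivities of the decay heat to the nuclear data parameters and C their covariance matrix; all numbers below are illustrative.

```python
import numpy as np

S = np.array([[0.8, -0.3, 0.1]])         # dR/dp, relative sensitivities (assumed)
C = np.diag([0.02, 0.05, 0.01]) ** 2     # assumed relative covariance matrix

var_R = S @ C @ S.T                      # sandwich rule
print(f"relative uncertainty on decay heat: {np.sqrt(var_R[0, 0]):.4f}")
```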
Sandric, Ionut Cosmin; Chitu, Zenaida; Jurchescu, Marta; Malet, Jean-Philippe; Ciprian Margarint, Mihai; Micu, Mihai
2015-04-01
An increasing number of free and open-access global digital elevation models has become available in the past 15 years, and these DEMs have been widely used for the assessment of landslide susceptibility at medium and small scales. Even though the global vertical and horizontal accuracies of each DEM are known, what is still unknown is the uncertainty that propagates from the first and second derivatives of the DEMs, such as slope gradient, into the final landslide susceptibility map. For the present study we focused on the assessment of the uncertainty propagated from the following digital elevation models: SRTM at 90 m spatial resolution, ASTER GDEM at 30 m, EU-DEM at 30 m and the latest release of SRTM at 30 m. From each DEM dataset the slope gradient was generated and used in the landslide susceptibility analysis. A restricted number of spatial predictors is used for the landslide susceptibility assessment, represented by lithology, land cover and slope, where slope is the only predictor that changes with each DEM. The study makes use of the first national landslide inventory (Micu et al., 2014), obtained by compiling literature data and personal or institutional landslide inventories. The landslide inventory contains more than 27,900 cases classified in three main categories: slides, flows and falls. The results present landslide susceptibility maps obtained from each DEM and from combinations of the DEM datasets. Maps of the propagated uncertainty at country level, differentiated by the topographic regions of Romania and by landslide typology (slides, flows and falls), are obtained for each DEM dataset and for their combinations. An objective evaluation of each DEM dataset and a final map of landslide susceptibility with its associated uncertainty are provided.
Myers, Casey A; Laz, Peter J; Shelburne, Kevin B; Davidson, Bradley S
2015-05-01
Uncertainty that arises from measurement error and parameter estimation can significantly affect the interpretation of musculoskeletal simulations; however, these effects are rarely addressed. The objective of this study was to develop an open-source probabilistic musculoskeletal modeling framework to assess how measurement error and parameter uncertainty propagate through a gait simulation. A baseline gait simulation was performed for a male subject using OpenSim for three stages: inverse kinematics, inverse dynamics, and muscle force prediction. A series of Monte Carlo simulations were performed that considered intrarater variability in marker placement, movement artifacts in each phase of gait, variability in body segment parameters, and variability in muscle parameters calculated from cadaveric investigations. Propagation of uncertainty was performed by also using the output distributions from one stage as input distributions to subsequent stages. Confidence bounds (5-95%) and sensitivity of outputs to model input parameters were calculated throughout the gait cycle. The combined impact of uncertainty resulted in mean bounds that ranged from 2.7° to 6.4° in joint kinematics, 2.7 to 8.1 N m in joint moments, and 35.8 to 130.8 N in muscle forces. The impact of movement artifact was 1.8 times larger than any other propagated source. Sensitivity to specific body segment parameters and muscle parameters were linked to where in the gait cycle they were calculated. We anticipate that through the increased use of probabilistic tools, researchers will better understand the strengths and limitations of their musculoskeletal simulations and more effectively use simulations to evaluate hypotheses and inform clinical decisions. PMID:25404535
Rodríguez-Rincón, J. P.; Pedrozo-Acuña, A.; Breña-Naranjo, J. A.
2015-07-01
This investigation aims to study the propagation of meteorological uncertainty within a cascade modelling approach to flood prediction. The methodology was comprised of a numerical weather prediction (NWP) model, a distributed rainfall-runoff model and a 2-D hydrodynamic model. The uncertainty evaluation was carried out at the meteorological and hydrological levels of the model chain, which enabled the investigation of how errors that originated in the rainfall prediction interact at a catchment level and propagate to an estimated inundation area and depth. For this, a hindcast scenario is utilised removing non-behavioural ensemble members at each stage, based on the fit with observed data. At the hydrodynamic level, an uncertainty assessment was not incorporated; instead, the model was setup following guidelines for the best possible representation of the case study. The selected extreme event corresponds to a flood that took place in the southeast of Mexico during November 2009, for which field data (e.g. rain gauges; discharge) and satellite imagery were available. Uncertainty in the meteorological model was estimated by means of a multi-physics ensemble technique, which is designed to represent errors from our limited knowledge of the processes generating precipitation. In the hydrological model, a multi-response validation was implemented through the definition of six sets of plausible parameters from past flood events. Precipitation fields from the meteorological model were employed as input in a distributed hydrological model, and resulting flood hydrographs were used as forcing conditions in the 2-D hydrodynamic model. The evolution of skill within the model cascade shows a complex aggregation of errors between models, suggesting that in valley-filling events hydro-meteorological uncertainty has a larger effect on inundation depths than that observed in estimated flood inundation extents.
Fuzzy probability based fault tree analysis to propagate and quantify epistemic uncertainty
Highlights: • Fuzzy probability based fault tree analysis evaluates epistemic uncertainty in fuzzy fault tree analysis. • Fuzzy probabilities represent the likelihood of occurrence of all events in a fault tree. • A fuzzy multiplication rule quantifies the epistemic uncertainty of minimal cut sets. • A fuzzy complement rule estimates the epistemic uncertainty of the top event. • The proposed FPFTA has been successfully evaluated on the U.S. Combustion Engineering RPS. - Abstract: A number of fuzzy fault tree analysis approaches, which integrate fuzzy concepts into the quantitative phase of conventional fault tree analysis, have been proposed to study the reliability of engineering systems. These approaches apply expert judgments to overcome the limitation of conventional fault tree analysis when basic events do not have probability distributions. Since expert judgments may come with epistemic uncertainty, it is important to quantify the overall uncertainties of the fuzzy fault tree analysis. Monte Carlo simulation is commonly used to quantify the overall uncertainties of conventional fault tree analysis. However, since Monte Carlo simulation is based on probability distributions, this technique is not appropriate for fuzzy fault tree analysis, which is based on fuzzy probabilities. The objective of this study is to develop a fuzzy probability based fault tree analysis that overcomes this limitation. To demonstrate the applicability of the proposed approach, a case study is performed and its results are compared to the results of a conventional fault tree analysis. The results confirm that the proposed fuzzy probability based fault tree analysis is able to propagate and quantify epistemic uncertainties in fault tree analysis.
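One common realization of the multiplication and complement rules mentioned in the highlights uses triangular fuzzy probabilities and alpha-cut interval arithmetic; the sketch below (with assumed membership functions, and possibly differing in detail from the paper's rules) evaluates a top event with two minimal cut sets {e1, e2} and {e3}.

```python
def alpha_cut(tri, a):
    """Interval of a triangular fuzzy number (low, mode, high) at level a."""
    lo, mode, hi = tri
    return lo + a * (mode - lo), hi - a * (hi - mode)

e1, e2, e3 = (0.01, 0.02, 0.04), (0.05, 0.10, 0.20), (0.001, 0.002, 0.005)

for a in (0.0, 0.5, 1.0):
    (l1, u1), (l2, u2), (l3, u3) = (alpha_cut(e, a) for e in (e1, e2, e3))
    m1 = (l1 * l2, u1 * u2)        # multiplication rule for cut set {e1, e2}
    m2 = (l3, u3)                  # single-event cut set {e3}
    top = (1 - (1 - m1[0]) * (1 - m2[0]),
           1 - (1 - m1[1]) * (1 - m2[1]))   # complement rule for the top event
    print(f"alpha={a:.1f}: top event probability in [{top[0]:.5f}, {top[1]:.5f}]")
```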
Heo, Jaeseok, E-mail: jheo@kaeri.re.kr; Kim, Kyung Doo, E-mail: kdkim@kaeri.re.kr
2015-10-15
Highlights: • We developed an interface between an engineering simulation code and statistical analysis software. • Multiple packages of the sensitivity analysis, uncertainty quantification, and parameter estimation algorithms are implemented in the framework. • Parallel computing algorithms are also implemented in the framework to solve multiple computational problems simultaneously. - Abstract: This paper introduces a statistical data analysis toolkit, PAPIRUS, designed to perform the model calibration, uncertainty propagation, Chi-square linearity test, and sensitivity analysis for both linear and nonlinear problems. The PAPIRUS was developed by implementing multiple packages of methodologies, and building an interface between an engineering simulation code and the statistical analysis algorithms. A parallel computing framework is implemented in the PAPIRUS with multiple computing resources and proper communications between the server and the clients of each processor. It was shown that even though a large amount of data is considered for the engineering calculation, the distributions of the model parameters and the calculation results can be quantified accurately with significant reductions in computational effort. A general description about the PAPIRUS with a graphical user interface is presented in Section 2. Sections 2.1–2.5 present the methodologies of data assimilation, uncertainty propagation, Chi-square linearity test, and sensitivity analysis implemented in the toolkit with some results obtained by each module of the software. Parallel computing algorithms adopted in the framework to solve multiple computational problems simultaneously are also summarized in the paper.
Wave-like warp propagation in circumbinary discs I. Analytic theory and numerical simulations
Facchini, Stefano; Lodato, Giuseppe; Price, Daniel J.
2013-01-01
In this paper we analyse the propagation of warps in protostellar circumbinary discs. We use these systems as a test environment in which to study warp propagation in the bending-wave regime, with the addition of an external torque due to the binary gravitational potential. In particular, we want to test the linear regime, for which an analytic theory has been developed. In order to do so, we first compute analytically the steady state shape of an inviscid disc subject to the binary torques. ...
Watson, Cameron S.; Carrivick, Jonathan; Quincey, Duncan
2015-10-01
Modelling glacial lake outburst floods (GLOFs), or 'jökulhlaups', necessarily involves the propagation of large and often stochastic uncertainties throughout the source-to-impact process chain. Since flood routing is primarily a function of underlying topography, communication of digital elevation model (DEM) uncertainty should accompany such modelling efforts. Here, a new stochastic first-pass assessment technique was evaluated against an existing GIS-based model and an existing 1D hydrodynamic model, using three DEMs with different spatial resolutions. The analysis revealed the effect of DEM uncertainty and model choice on several flood parameters and on the prediction of socio-economic impacts. Our new model, which we call MC-LCP (Monte Carlo Least Cost Path) and which is distributed in the supplementary information, demonstrated enhanced 'stability' when compared to the two existing methods, and this 'stability' was independent of DEM choice. The MC-LCP model outputs an uncertainty continuum within its extent, from which relative socio-economic risk can be evaluated. In a comparison of all DEM and model combinations, results based on the Shuttle Radar Topography Mission (SRTM) DEM exhibited fewer artefacts than those based on the Advanced Spaceborne Thermal Emission and Reflection Radiometer Global Digital Elevation Model (ASTER GDEM), and were comparable to those based on a finer-resolution Advanced Land Observing Satellite Panchromatic Remote-sensing Instrument for Stereo Mapping (ALOS PRISM) derived DEM. Overall, we contend that the variability we find between flood routing model results suggests that consideration of DEM uncertainty and pre-processing methods is important when assessing flow routing and when evaluating the potential socio-economic implications of a GLOF event. Incorporation of a stochastic variable provides an illustration of uncertainty that is important when modelling and communicating assessments of an inherently complex process.
Nikolopoulos, Efthymios I.; Polcher, Jan; Anagnostou, Emmanouil N.; Eisner, Stephanie; Fink, Gabriel; Kallos, George
2016-04-01
Precipitation is arguably one of the most important forcing variables that drive terrestrial water cycle processes. The process of precipitation exhibits significant variability in space and time, is associated with different water phases (liquid or solid) and depends on several other factors (aerosols, orography, etc.), which make estimation and modeling of this process a particularly challenging task. As such, precipitation information from different sensors/products is associated with uncertainty. Propagation of this uncertainty into hydrologic simulations can have a considerable impact on the accuracy of the simulated hydrologic variables. Therefore, to make hydrologic predictions more useful, it is important to investigate and assess the impact of precipitation uncertainty in hydrologic simulations in order to be able to quantify it and identify ways to minimize it. In this work we investigate the impact of precipitation uncertainty in hydrologic simulations using land surface models (e.g. ORCHIDEE) and global hydrologic models (e.g. WaterGAP3) for the simulation of several hydrologic variables (soil moisture, ET, runoff) over the Iberian Peninsula. Uncertainty in precipitation is assessed by utilizing various sources of precipitation input that include one reference precipitation dataset (SAFRAN), three widely-used satellite precipitation products (TRMM 3B42v7, CMORPH, PERSIANN) and a state-of-the-art reanalysis product (WFDEI) based on the ECMWF ERA-Interim reanalysis. Comparative analysis is based on using the SAFRAN simulations as reference and is carried out at different spatial (0.5deg or regional average) and temporal (daily or seasonal) scales. Furthermore, as an independent verification, simulated discharge is compared against available discharge observations for selected major rivers of the Iberian region. Results allow us to draw conclusions regarding the impact of precipitation uncertainty with respect to i) hydrologic variable of interest, ii
Part of the application for a license for a high-level radioactive waste repository is an assessment of repository performance over thousands of years, which will inevitably have to treat uncertainties. One source of uncertainty is in the conceptualization of the natural repository system. Generally, conceptual models are developed based on interpretation of existing data using expert judgment. Uncertainties in conceptual models, which are propagated through the performance assessment calculations, are introduced when simplifying assumptions are made about the behavior of the real system. These assumptions are made because the data, knowledge that is based on the data, and other information considered in the interpretation are incomplete. Additionally, any relationships that exist among the data are generally inexact or may be undefined. In this work, causal networks have been applied to the conceptual model development process. This representation of the conceptual model expresses existing knowledge about the real system in a graphical form and extracts the qualitative dependency relationships among the underlying data and assumptions. Strict probabilistic reasoning is used to quantitatively explore these relationships. This probabilistic network provides a means by which to quantify, propagate, and reduce the pervading uncertainty in a coherent probabilistic manner. The conceptualization of the Avra Valley regional ground water flow system in Arizona and the ground water flow system of the proposed high-level radioactive waste repository site at Yucca Mountain in Nevada have been investigated to develop a preliminary data base of important assumptions and their relationships. Based on the conceptual models for these sites, a prototype version of the probabilistic network for the development of conceptual models is under development on a microExplorer Lisp workstation. 9 refs., 1 fig., 1 tab
Wijnant, Ysbrand; Spiering, Ruud; van Blijderveen, Maarten; de Boer, André
2006-01-01
Previous research has shown that viscothermal wave propagation in narrow gaps can efficiently be described by means of the low reduced frequency model. For simple geometries and boundary conditions, analytical solutions are available. For example, Beltman [4] gives the acoustic pressure in the gap b
Uncertainty Propagation in a Fundamental Climate Data Record derived from Meteosat Visible Band Data
Rüthrich, Frank; John, Viju; Roebeling, Rob; Wagner, Sebastien; Viticchie, Bartolomeo; Hewison, Tim; Govaerts, Yves; Quast, Ralf; Giering, Ralf; Schulz, Jörg
2016-04-01
The series of Meteosat First Generation (MFG) satellites provides a unique opportunity for the monitoring of climate variability and of possible changes. Six satellites were operationally employed, all equipped with identical MVIRI radiometers. The time series now covers, for some parts of the globe, more than 34 years with a high temporal (30 minutes) and spatial (2.5 x 2.5 km²) resolution for the visible band. However, subtle differences between the radiometers in terms of the silicon photodiodes, sensor spectral ageing and variability due to other sources of uncertainty have so far limited the thorough exploitation of this unique time series. For instance, upper-level wind fields and surface albedo data records could be derived and used for assimilation into Numerical Weather Prediction models for re-analysis and for climate studies, respectively. However, the derivation of aerosol depth with high quality has not been possible so far. In order to enhance the quality of MVIRI reflectances and enable an aerosol data record and an improved surface albedo data record, it is necessary to perform a re-calibration of the visible bands of the MVIRI instruments that corrects for the above-mentioned effects and results in an improved Fundamental Climate Data Record (FCDR) of Meteosat/MVIRI radiance data. This re-calibration has to be consistent over the entire period, to account for the ageing of the sensors' spectral response functions and to add accurate information about the combined uncertainty of the radiances. Therefore, the uncertainties from all the different sources have to be thoroughly investigated and propagated into the final product. This presentation introduces all sources of uncertainty present in MVIRI visible data and points out the major mechanisms of uncertainty propagation. An outlook will be given on the enhancements of the calibration procedure as it will be carried out at EUMETSAT in the course of the EU Horizon 2020 FIDUCEO project (FIDelity and Uncertainty in Climate data
The present document constitutes my Habilitation thesis report. It reviews my scientific activity over the last twelve years, from my PhD thesis to the work completed as a research engineer at CEA Cadarache. The two main chapters of this document correspond to two different research fields, both related to the treatment of uncertainty in engineering problems. The first chapter is a synthesis of my work on high-frequency wave propagation in random media. It relates more specifically to the study of the statistical fluctuations of acoustic wave travel times in random and/or turbulent media. The new results mainly concern the introduction of the statistical anisotropy of the velocity field into the analytical expressions of the travel-time statistical moments as functions of those of the velocity field. This work was primarily driven by requirements in geophysics (oil exploration and seismology). The second chapter concerns the probabilistic techniques used to study the effect of input variable uncertainties on numerical models. My main applications in this chapter relate to the nuclear engineering domain, which offers a large variety of uncertainty problems to be treated. First of all, a complete synthesis is carried out of the statistical methods for sensitivity analysis and global exploration of numerical models. The construction and use of a meta-model (an inexpensive mathematical function replacing an expensive computer code) are then illustrated by my work on the Gaussian process model (kriging). Two additional topics are finally addressed: the estimation of high quantiles of a computer code output and the analysis of stochastic computer codes. We conclude this report with some perspectives on numerical simulation and the use of predictive models in industry. This context is extremely favourable for future research and application developments. (author)
[No author listed]
2010-01-01
For a structural system with random basic variables as well as fuzzy basic variables, the propagation of uncertainty from the two kinds of basic variables to the response of the structure is investigated. A novel algorithm for obtaining the membership function of fuzzy reliability is presented with a saddlepoint approximation (SA) based line sampling method. In the presented method, the value domain of the fuzzy basic variables under a given membership level is first obtained according to their membership functions. In the value domain of the fuzzy basic variables corresponding to the given membership level, bounds on the reliability of the structure response satisfying the safety requirement are obtained by employing the SA based line sampling method in the reduced space of the random variables. In this way the uncertainty of the basic variables is propagated to the safety measure of the structure, and the fuzzy membership function of the reliability is obtained. Compared to the direct Monte Carlo method for propagating the uncertainties of the fuzzy and random basic variables, the presented method can considerably improve computational efficiency with acceptable precision. The presented method also has wider applicability than the transformation method, because it does not restrict the distribution of the variables or require an explicit expression of the performance function, and no approximation of the performance function is made during the computation. Additionally, the presented method can easily treat performance functions with cross terms of the fuzzy and random variables, which are not suitably approximated by the existing transformation methods. Several examples are provided to illustrate the advantages of the presented method.
Statistical approaches to uncertainty quantification and sensitivity analysis are very important in estimating the safety margins for an engineering design application. This paper presents a system analysis and optimization toolkit developed by the Korea Atomic Energy Research Institute (KAERI), which includes multiple packages of sensitivity analysis and uncertainty quantification algorithms. In order to reduce the computing demand, multiple compute resources, including multiprocessor computers and a network of workstations, are used simultaneously. A Graphical User Interface (GUI) was also developed within the parallel computing framework for users to readily employ the toolkit for an engineering design and optimization problem. The goal of this work is to develop a GUI framework for engineering design and scientific analysis problems by implementing multiple packages of system analysis methods in the parallel computing toolkit. This was done by building an interface between an engineering simulation code and the system analysis software packages. The methods and strategies in the framework were designed to exploit parallel computing resources such as those found in a desktop multiprocessor workstation or a network of workstations. The available approaches in the framework include statistical and mathematical algorithms for use in science and engineering design problems. Currently the toolkit has six modules of system analysis methodologies: deterministic and probabilistic approaches to data assimilation, uncertainty propagation, the Chi-square linearity test, sensitivity analysis, and FFTBM.
Highlights: • We performed burnup calculations of PWR and BWR benchmarks using ALEPH and SCALE. • We propagated nuclear data uncertainties and correlations using different procedures and codes. • Decay data uncertainties have a negligible impact on nuclide densities. • Uncorrelated fission yields play a major role in the uncertainties of fission products. • The impact of fission yields is strongly reduced by the introduction of correlations. - Abstract: Two fuel assemblies, one belonging to the Takahama-3 PWR and the other to the Fukushima-Daini-2 BWR, were modelled and the fuel irradiation was simulated with the TRITON module of SCALE 6.2 and with the ALEPH-2 code. Our results were compared to the experimental measurements of four samples: SF95-4 and SF96-4 were taken from the Takahama-3 reactor, while samples SF98-6 and SF99-6 belonged to the Fukushima-Daini-2. We then propagated the uncertainties coming from the nuclear data to the isotopic inventory of sample SF95-4. We used the ALEPH-2 adjoint procedure to propagate the decay constant uncertainties; their impact was negligible. The cross-section covariance information was propagated with the SAMPLER module of the beta3 version of SCALE 6.2. This contribution mostly affected the uncertainties of the actinides. Finally, the uncertainties of the fission yields were propagated through both ALEPH-2 and TRITON with a Monte Carlo sampling approach and appeared to have the largest impact on the uncertainties of the fission products. However, the lack of fission yield correlations results in a serious overestimation of the response uncertainties
eHabitat - A web service for habitat similarity modeling with uncertainty propagation
Skøien, Jon Olav; Schulz, Michael; Dubois, Gregoire; Heuvelink, Gerard
2013-04-01
We are developing eHabitat, a Web Processing Service (WPS) that can model current and future habitat similarity for point observations, polygons defining an existing or hypothetical protected area, or sets of polygons defining the estimated ranges for one or more species. A range of Web Clients makes it easy to use the WPS with predefined data for predictions of the current or future climatic niche. The WPS is also able to document propagating uncertainties of the input data to the estimated similarity maps, if such information is available. The presentation will focus on the architecture of the service and the clients, on how uncertainties are handled by the model and on the presentation of uncertain results. The idea behind eHabitat is that one can classify the similarity between a reference geometry (point locations or polygons) and the surroundings based on one or more species distribution models (SDMs) and a set of ecological indicators. The ecological indicators are typically raster bioclimatic data (DEMs, climate data, vegetation maps …) describing important features for the species or habitats of interest. All these data sets have uncertainties, which can usually be described by treating the value of each pixel as a mean with a standard deviation. As the standard deviation will also be pixel based, it can be given as rasters. If standard deviations of the rasters are not available in the input data, this can also be guesstimated by the service to allow end-users to generate uncertainty scenarios. Rasters of standard deviations are used for simulating a set of spatially correlated maps of the input data, which are then used in the SDM. Additionally, the service can do bootstrapping samples from the input data, which is one of the classic methods for assessing uncertainty of SDMs. The two methods can also be combined, a convenient solution considering that simulation is a computationally much slower process than bootstrapping. Uncertainties in the results
On the uncertainty of stream networks derived from elevation data: the error propagation approach
T. Hengl
2010-01-01
The DEM error propagation methodology is extended to the derivation of vector-based objects (stream networks) using geostatistical simulations. First, point-sampled elevations are used to fit a variogram model. Next, 100 DEM realizations are generated using conditional sequential Gaussian simulation; the stream network map is extracted for each of these realizations, and the collection of stream networks is analyzed to quantify the error propagation. At each grid cell, the probability of the occurrence of a stream and the propagated error are estimated. The method is illustrated using two small data sets: Baranja hill (30 m grid cell size; 16,512 pixels; 6,367 sampled elevations) and Zlatibor (30 m grid cell size; 15,000 pixels; 2,051 sampled elevations). All computations are run in the open-source software for statistical computing R: package geoR is used to fit the variogram; package gstat is used to run the sequential Gaussian simulation; streams are extracted using the open-source GIS SAGA via the RSAGA library. The resulting stream error map (information entropy of a Bernoulli trial) clearly depicts areas where the extracted stream network is least precise - usually areas of low local relief that are slightly concave. In both cases, significant parts of the study area (17.3% for Baranja hill; 6.2% for Zlatibor) show a high error (H>0.5) of locating streams. By correlating the propagated uncertainty of the derived stream network with various land surface parameters, the sampling of height measurements can be optimized so that delineated streams satisfy a required accuracy level. The remaining issue to be tackled is the computational burden of geostatistical simulations: this framework is at the moment limited to small and moderate data sets with several hundreds of points. Scripts and data sets used in this article are available on-line via the http://www.geomorphometry.org/ website and can be easily adopted
A Framework for Propagation of Uncertainties in the Kepler Data Analysis Pipeline
Clarke, Bruce D.; Allen, Christopher; Bryson, Stephen T.; Caldwell, Douglas A.; Chandrasekaran, Hema; Cote, Miles T.; Girouard, Forrest; Jenkins, Jon M.; Klaus, Todd C.; Li, Jie; Middour, Chris; McCauliff, Sean; Quintana, Elisa V.; Tenebaum, Peter; Twicken, Joseph D.; Wohler, Bill; Wu, Hayley
2010-01-01
The Kepler space telescope is designed to detect Earth-like planets around Sun-like stars using transit photometry by simultaneously observing 100,000 stellar targets nearly continuously over a three and a half year period. The 96-megapixel focal plane consists of 42 charge-coupled devices (CCD) each containing two 1024 x 1100 pixel arrays. Cross-correlations between calibrated pixels are introduced by common calibrations performed on each CCD requiring downstream data products access to the calibrated pixel covariance matrix in order to properly estimate uncertainties. The prohibitively large covariance matrices corresponding to the 75,000 calibrated pixels per CCD preclude calculating and storing the covariance in standard lock-step fashion. We present a novel framework used to implement standard propagation of uncertainties (POU) in the Kepler Science Operations Center (SOC) data processing pipeline. The POU framework captures the variance of the raw pixel data and the kernel of each subsequent calibration transformation allowing the full covariance matrix of any subset of calibrated pixels to be recalled on-the-fly at any step in the calibration process. Singular value decomposition (SVD) is used to compress and low-pass filter the raw uncertainty data as well as any data dependent kernels. The combination of POU framework and SVD compression provide downstream consumers of the calibrated pixel data access to the full covariance matrix of any subset of the calibrated pixels traceable to pixel level measurement uncertainties without having to store, retrieve and operate on prohibitively large covariance matrices. We describe the POU Framework and SVD compression scheme and its implementation in the Kepler SOC pipeline.
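A toy analogue of the POU idea (not the Kepler pipeline's actual kernels): propagate the raw-pixel covariance through a linear calibration y = A @ x, and keep only the leading SVD modes of the smoothing kernel so the calibrated covariance can be rebuilt on the fly instead of stored; sizes and the kernel width are assumed.

```python
import numpy as np

rng = np.random.default_rng(5)
n, w = 200, 10.0
i = np.arange(n)
A = np.exp(-0.5 * ((i[:, None] - i[None, :]) / w) ** 2)
A /= A.sum(axis=1, keepdims=True)        # smoothing "calibration" transformation
Cx = np.diag(rng.uniform(0.5, 1.5, n))   # raw pixel variances (diagonal covariance)

U, s, Vt = np.linalg.svd(A)
k = 20                                   # retained modes (compression / low-pass)
A_k = (U[:, :k] * s[:k]) @ Vt[:k]        # compressed kernel

Cy_full = A @ Cx @ A.T                   # full propagated covariance
Cy_comp = A_k @ Cx @ A_k.T               # rebuilt from the compressed kernel
print(f"max abs covariance error after rank-{k} compression: "
      f"{np.abs(Cy_full - Cy_comp).max():.2e}")
```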
Analytical Algorithms to Quantify the Uncertainty in Remaining Useful Life Prediction
Sankararaman, Shankar; Saxena, Abhinav; Daigle, Matthew; Goebel, Kai
2013-01-01
This paper investigates the use of analytical algorithms to quantify the uncertainty in the remaining useful life (RUL) estimate of components used in aerospace applications. The prediction of RUL is affected by several sources of uncertainty, and it is important to systematically quantify their combined effect by computing the uncertainty in the RUL prediction in order to aid risk assessment, risk mitigation, and decision-making. While sampling-based algorithms have been conventionally used for quantifying the uncertainty in RUL, analytical algorithms are computationally cheaper and, sometimes, better suited for online decision-making. While exact analytical algorithms are available only for certain special cases (e.g., linear models with Gaussian variables), effective approximations can be made using the first-order second moment method (FOSM), the first-order reliability method (FORM), and the inverse first-order reliability method (inverse FORM). These methods can be used not only to calculate the entire probability distribution of RUL but also to obtain probability bounds on RUL. This paper explains these three methods in detail and illustrates them using the state-space model of a lithium-ion battery.
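A FOSM sketch on a deliberately simple degradation model (not the paper's lithium-ion state-space model): the state x(t) = x0 - r*t fails at threshold tau, so RUL = (x0 - tau)/r; x0 and r are uncertain with assumed means and covariance.

```python
import numpy as np

mu = np.array([1.0, 0.01])        # means of [x0, r] (assumed)
C = np.diag([0.02, 0.002]) ** 2   # covariance of [x0, r] (assumed)
tau = 0.7                         # failure threshold

x0, r = mu
rul = (x0 - tau) / r
grad = np.array([1.0 / r, -(x0 - tau) / r**2])   # d(RUL)/d[x0, r] at the mean
std = np.sqrt(grad @ C @ grad)                   # first-order variance propagation
print(f"RUL = {rul:.1f} +/- {std:.1f} (FOSM approximation)")
```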
Propagation of Isotopic Bias and Uncertainty to Criticality Safety Analyses of PWR Waste Packages
Radulescu, Georgeta [ORNL]
2010-06-01
predicted spent fuel compositions (i.e., determine the penalty in reactivity due to isotopic composition bias and uncertainty) for use in disposal criticality analysis employing burnup credit. The method used in this calculation to propagate the isotopic bias and bias-uncertainty values to keff is the Monte Carlo uncertainty sampling method. The development of this report is consistent with 'Test Plan for: Isotopic Validation for Postclosure Criticality of Commercial Spent Nuclear Fuel'. This calculation report has been developed in support of burnup credit activities for the proposed repository at Yucca Mountain, Nevada, and provides a methodology that can be applied to other criticality safety applications employing burnup credit.
Vecherin, S.; Ketcham, S.; Parker, M.; Picucci, J.
2015-12-01
To make a prediction for the propagation of seismic pulses, one needs to specify the physical properties and subsurface ground structure of the site. This information is frequently unknown or estimated with significant uncertainty. We developed a methodology for the ensemble prediction of the propagation of weak seismic pulses over short ranges. The ranges of interest are 10 to 100 meters, and the pulse bandwidth is up to 200 Hz. Instead of specifying fixed values for the viscoelastic site properties, the methodology operates with probability distribution functions of the inputs. This yields ensemble realizations of the pulse at specified locations, from which mean, median, and maximum likelihood predictions can be made and confidence intervals estimated. Starting from the site's Vs30, the methodology creates an ensemble of plausible vertically stratified Vs profiles for the site. The number and thickness of the layers are modeled using an inhomogeneous Poisson process, and the Vs values in the layers are modeled by a correlated Gaussian process. The Poisson expectation rate and the Vs correlation between adjacent layers take into account layer depth and thickness, and are specific to a site class as defined by the Federal Emergency Management Agency (FEMA). The high-fidelity three-dimensional thin-layer method (TLM) is used to yield an ensemble of frequency response functions. Comparison with experiments revealed that measured signals are not always within the predicted ensemble. Variance-based global sensitivity analysis has shown that the most significant parameter in the TLM for the prediction of the pulse energy is the shear quality factor, Qs. Some strategies for accounting for the significant uncertainty in this parameter and improving the accuracy of the ensemble predictions for a specific site are investigated and discussed.
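A sketch of the stochastic profile generator described above: layer interfaces drawn from a Poisson process and layer velocities from a first-order autoregressive (correlated Gaussian) sequence; the rate, mean, and correlation values are assumed, not the report's calibrated site-class values.

```python
import numpy as np

rng = np.random.default_rng(8)
depth, rate = 30.0, 0.2                   # profile depth (m), interfaces per metre
thick = rng.exponential(1.0 / rate, size=50)
tops = np.cumsum(thick)
tops = tops[tops < depth]                 # Poisson-process layer interfaces

rho, mean_vs, sd_vs = 0.7, 300.0, 60.0    # correlation between adjacent layers
n_layers = len(tops) + 1
vs = np.empty(n_layers)
vs[0] = mean_vs + sd_vs * rng.standard_normal()
for k in range(1, n_layers):              # AR(1) sequence: correlated layer Vs
    vs[k] = mean_vs + rho * (vs[k - 1] - mean_vs) \
            + sd_vs * np.sqrt(1 - rho**2) * rng.standard_normal()
print(np.round(vs, 1))                    # one realization of a layered Vs profile
```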
Tripathy, Rohit; Bilionis, Ilias; Gonzalez, Marcial
2016-09-01
Uncertainty quantification (UQ) tasks, such as model calibration, uncertainty propagation, and optimization under uncertainty, typically require several thousand evaluations of the underlying computer codes. To cope with the cost of simulations, one replaces the real response surface with a cheap surrogate based, e.g., on polynomial chaos expansions, neural networks, support vector machines, or Gaussian processes (GP). However, the number of simulations required to learn a generic multivariate response grows exponentially as the input dimension increases. This curse of dimensionality can only be addressed if the response exhibits some special structure that can be discovered and exploited. A wide range of physical responses exhibit a special structure known as an active subspace (AS). An AS is a linear manifold of the stochastic space characterized by maximal response variation. The idea is that one should first identify this low dimensional manifold, project the high-dimensional input onto it, and then link the projection to the output. If the dimensionality of the AS is low enough, then learning the link function is a much easier problem than the original problem of learning a high-dimensional function. The classic approach to discovering the AS requires gradient information, a fact that severely limits its applicability. Furthermore, and partly because of its reliance on gradients, it is not able to handle noisy observations. The latter is an essential trait if one wants to be able to propagate uncertainty through stochastic simulators, e.g., through molecular dynamics codes. In this work, we develop a probabilistic version of AS which is gradient-free and robust to observational noise. Our approach relies on a novel Gaussian process regression with built-in dimensionality reduction. In particular, the AS is represented as an orthogonal projection matrix that serves as yet another covariance function hyper-parameter to be estimated from the data. To train the
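The classic, gradient-based construction the abstract refers to (the baseline that the gradient-free method improves on) can be sketched in a few lines: eigendecompose the average outer product of gradients; a synthetic ridge function f(x) = sin(w^T x) with a known active direction w is used for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)
d = 10
w = np.zeros(d); w[0], w[1] = 0.8, 0.6           # true 1-D active direction

def grad_f(x):                                    # gradient of f(x) = sin(w @ x)
    return np.cos(x @ w)[:, None] * w[None, :]

X = rng.standard_normal((2000, d))
G = grad_f(X)
C = G.T @ G / len(X)                              # estimate of E[grad f grad f^T]
eigval, eigvec = np.linalg.eigh(C)
print(np.round(eigval[::-1][:3], 4))              # one dominant eigenvalue
print(np.round(eigvec[:, -1][:3], 3))             # recovered direction ~ w (up to sign)
```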
This paper presents a 3D uncertainty propagation methodology and its application to the case of a small heterogeneous reactor system (the 'slab' reactor benchmark). Key neutron parameters (keff, reactivity worth, local power, ...) and their corresponding cross-section sensitivities are derived by using the French calculation route APOLLO2 (2D transport lattice code), CRONOS2 (3D diffusion code) and TRIPOLI4 (3D Monte Carlo reference calculations) with consistent JEF2.2 cross-section libraries (pointwise or CEA93 multigroup cross sections) and adapted perturbation methods (the Heuristically-based Generalized Perturbation Theory implemented in the framework of the CRONOS2 diffusion method, or the correlation techniques used in Monte Carlo simulations). The investigation of the slab system revealed notable differences between the 2D and 3D computed sensitivity coefficients and consequently in the a priori uncertainties (when the sensitivity coefficients are combined with covariance matrices, the discrepancies rise up to 20% due to thermal and fast flux variations). In addition, the local power effect induced by nuclear data perturbations (JEF-2.2 vs. the Leal-Derrien-Wright-Larson 235U evaluation) could be correctly estimated with the standard 3D CRONOS2 depletion calculations. For industrial applications (PWR neutron parameter optimization problems, R and D studies dealing with the design of future fission reactors, ...), the same calculation route could be advantageously applied to infer the target accuracies (knowing the required safety criteria) of future nuclear data evaluations (the JEFF-3 data library for instance). (author)
Long-time uncertainty propagation using generalized polynomial chaos and flow map composition
We present an efficient and accurate method for long-time uncertainty propagation in dynamical systems. Uncertain initial conditions and parameters are both addressed. The method approximates the intermediate short-time flow maps by spectral polynomial bases, as in the generalized polynomial chaos (gPC) method, and uses flow map composition to construct the long-time flow map. In contrast to the gPC method, this approach has spectral error convergence for both short and long integration times. The short-time flow map is characterized by small stretching and folding of the associated trajectories and hence can be well represented by a relatively low-degree basis. The composition of these low-degree polynomial bases then accurately describes the uncertainty behavior for long integration times. The key to the method is that the degree of the resulting polynomial approximation increases exponentially in the number of time intervals, while the number of polynomial coefficients either remains constant (for an autonomous system) or increases linearly in the number of time intervals (for a non-autonomous system). The findings are illustrated on several numerical examples including a nonlinear ordinary differential equation (ODE) with an uncertain initial condition, a linear ODE with an uncertain model parameter, and a two-dimensional, non-autonomous double gyre flow
Brault, A; Lucor, D
2016-01-01
This work aims at quantifying the effect of inherent uncertainties in cardiac output on the sensitivity of the response of a human compliant arterial network, based on stochastic simulations of a reduced-order pulse wave propagation model. A simple pulsatile output form is utilized to reproduce the most relevant cardiac features with a minimum number of parameters associated with left ventricle dynamics. Another source of critical uncertainty is the spatial heterogeneity of the aortic compliance, which plays a key role in the propagation and damping of the pulse waves generated at each cardiac cycle. A continuous representation of the aortic stiffness in the form of a generic random field of prescribed spatial correlation is then considered. Resorting to a stochastic sparse pseudospectral method, we investigate the spatial sensitivity of the pulse pressure and wave reflection magnitude with respect to the different model uncertainties. Results indicate that uncertainties related to the shape and magnitude of th...
Sampling of systematic errors to estimate likelihood weights in nuclear data uncertainty propagation
Helgesson, P.; Sjöstrand, H.; Koning, A. J.; Rydén, J.; Rochman, D.; Alhassan, E.; Pomp, S.
2016-01-01
In methodologies for nuclear data (ND) uncertainty assessment and propagation based on random sampling, likelihood weights can be used to infer experimental information into the distributions for the ND. As the included number of correlated experimental points grows large, the computational time for the matrix inversion involved in obtaining the likelihood can become a practical problem. There are also other problems related to the conventional computation of the likelihood, e.g., the assumption that all experimental uncertainties are Gaussian. In this study, a way to estimate the likelihood which avoids matrix inversion is investigated; instead, the experimental correlations are included by sampling of systematic errors. It is shown that the model underlying the sampling methodology (using univariate normal distributions for random and systematic errors) implies a multivariate Gaussian for the experimental points (i.e., the conventional model). It is also shown that the likelihood estimates obtained through sampling of systematic errors approach the likelihood obtained with matrix inversion as the sample size for the systematic errors grows large. In studied practical cases, it is seen that the estimates for the likelihood weights converge impractically slowly with the sample size, compared to matrix inversion. The computational time is estimated to be greater than for matrix inversion in cases with more experimental points, too. Hence, the sampling of systematic errors has little potential to compete with matrix inversion in cases where the latter is applicable. Nevertheless, the underlying model and the likelihood estimates can be easier to intuitively interpret than the conventional model and the likelihood function involving the inverted covariance matrix. Therefore, this work can both have pedagogical value and be used to help motivating the conventional assumption of a multivariate Gaussian for experimental data. The sampling of systematic errors could also
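A sketch of the two likelihood routes compared in this study, for one fully correlated systematic component: exact matrix inversion with C = D + s s^T versus averaging univariate Gaussians over sampled systematic errors; residuals and uncertainties below are made up.

```python
import numpy as np

rng = np.random.default_rng(7)
resid = np.array([0.3, -0.1, 0.2])   # model-minus-experiment residuals (assumed)
d = np.array([0.2, 0.2, 0.3])        # random (uncorrelated) standard deviations
s = np.array([0.1, 0.1, 0.1])        # one systematic component, shared by all points

# Route 1: conventional multivariate Gaussian with inverted covariance
C = np.diag(d**2) + np.outer(s, s)
L_exact = np.exp(-0.5 * resid @ np.linalg.solve(C, resid)) / \
          np.sqrt((2 * np.pi) ** 3 * np.linalg.det(C))

# Route 2: sample the systematic error, average products of univariate Gaussians
eta = rng.standard_normal(100_000)
terms = np.exp(-0.5 * ((resid - eta[:, None] * s) / d) ** 2) / (np.sqrt(2 * np.pi) * d)
L_sampled = terms.prod(axis=1).mean()
print(L_exact, L_sampled)   # the two estimates agree as the sample size grows
```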
A Semi-Analytical Orbit Propagator Program for Highly Elliptical Orbits
Lara, M.; San-Juan, J. F.; Hautesserres, D.
2016-05-01
A semi-analytical orbit propagator to study the long-term evolution of spacecraft in Highly Elliptical Orbits is presented. The perturbation model taken into account includes the gravitational effects produced by the first nine zonal harmonics of Earth's gravitational potential and the main tesseral harmonics affecting the 2:1 resonance, which has an impact on Molniya-type orbits; the mass-point approximation for third-body perturbations, which only includes the Legendre polynomial of second order for the Sun and the polynomials from second to sixth order for the Moon; solar radiation pressure; and atmospheric drag. The Hamiltonian formalism is used to model the forces of gravitational nature, and to avoid time-dependence issues the problem is formulated in the extended phase space. The solar radiation pressure is modeled as a potential and included in the Hamiltonian, whereas the atmospheric drag is added as a generalized force. The semi-analytical theory is developed using perturbation techniques based on Lie transforms. Deprit's perturbation algorithm is applied up to second order in the second zonal harmonic, J2, including Kozai-type terms in the mean-elements Hamiltonian to get 'centered' elements. The transformation is developed in closed form in the eccentricity, except for tesseral resonances, and the coupling between J2 and the Moon's disturbing effects is neglected. This paper describes the semi-analytical theory, the semi-analytical orbit propagator program and some of the numerical validations.
S.V. Bystrov
2016-05-01
Subject of Research. We present research results for the signal uncertainty problem that naturally arises for the developers of servomechanisms, including the analytical design of serial compensators delivering the required quality indexes for servomechanisms. Method. The problem was solved with the use of Besekerskiy's engineering approach, formulated in 1958. This made it possible to reduce the requirements for the input signal composition of servomechanisms by using only two of its quantitative characteristics, maximum speed and maximum acceleration. Information about the input signal's maximum speed and acceleration allows introducing an equivalent harmonic input signal with calculated amplitude and frequency. In combination with the requirements for maximum tracking error, the amplitude and frequency of the equivalent harmonic input make it possible to estimate analytically the error amplitude characteristic of the system and then convert it to the amplitude characteristic of the open-loop system transfer function. While the Besekerskiy approach was previously used mainly with the apparatus of logarithmic characteristics, we use it for the analytical synthesis of consecutive compensators. Main Results. The proposed technique is used to create analytical representations of the "input–output" and "error–output" polynomial dynamic models of the designed system. In turn, the desired model of the designed system in the "error–output" form of analytical transfer-function representation is the basis for the design of a consecutive compensator that delivers the desired placement of the state-matrix eigenvalues and, consequently, the necessary set of dynamic indexes for the designed system. The given procedure of consecutive compensator analytical design on the basis of the Besekerskiy engineering approach under conditions of signal uncertainty is illustrated by an example. Practical Relevance. The obtained theoretical results are...
Soheil Salahshour
2015-02-01
In this paper, we apply the concept of Caputo's H-differentiability, constructed based on the generalized Hukuhara difference, to solve the fuzzy fractional differential equation (FFDE) with uncertainty. This is in contrast to conventional solutions that either require knowledge of the fractional derivatives of the unknown solution at the initial point (Riemann–Liouville) or produce a solution with increasing length of its support (Hukuhara difference). Then, in order to solve the FFDE analytically, we introduce the fuzzy Laplace transform of the Caputo H-derivative. To the best of our knowledge, there is limited research devoted to analytical methods for solving the FFDE under fuzzy Caputo fractional differentiability. An analytical solution is presented to confirm the capability of the proposed method.
Antoshchenkova, Ekaterina; Imbert, David; Richet, Yann; Bardet, Lise; Duluc, Claire-Marie; Rebour, Vincent; Gailler, Audrey; Hébert, Hélène
2016-04-01
The aim of this study is to assess the tsunamigenic potential of the Azores-Gibraltar Fracture Zone (AGFZ). This work is part of the French project TANDEM (Tsunamis in the Atlantic and English ChaNnel: Definition of the Effects through numerical Modeling; www-tandem.cea.fr); special attention is paid to the French Atlantic coasts. Structurally, the AGFZ region is complex and not well understood. However, many of its faults produce earthquakes with significant vertical slip, of a type that can result in tsunami. We use the major tsunami event of the AGFZ to obtain a regional estimate of the tsunamigenic potential of this zone. The major reported event for this zone is the 1755 Lisbon event. There are large uncertainties concerning the source location and focal mechanism of this earthquake. Hence, a simple deterministic approach is not sufficient to cover, on the one side, the whole AGFZ with its geological complexity and, on the other side, the lack of information concerning the 1755 Lisbon tsunami. The parametric modeling environment Promethée (promethee.irsn.org/doku.php) was coupled to tsunami simulation software based on the shallow water equations in order to propagate uncertainties. Such a statistical point of view allows us to work with multiple hypotheses simultaneously. In our work we introduce the seismic source parameters in the form of distributions, thus generating a database of thousands of tsunami scenarios and tsunami wave height distributions. Exploring this database, we present preliminary results for France. Tsunami wave heights (within one standard deviation of the mean) can be about 0.5-1 m for the Atlantic coast and approaching 0.3 m for the English Channel.
Hutton, Christopher; Brazier, Richard
2012-06-01
Advances in remote sensing technology, notably in airborne Light Detection And Ranging (LiDAR), have facilitated the acquisition of high-resolution topographic and vegetation datasets over increasingly large areas. Whilst such datasets may provide quantitative information on surface morphology and vegetation structure in riparian zones, existing approaches for processing raw LiDAR data perform poorly in riparian channel environments. A new algorithm for separating vegetation from topography in raw LiDAR data, and the performance of the Elliptical Inverse Distance Weighting (EIDW) procedure for interpolating the remaining ground points, are evaluated using data derived from a semi-arid ephemeral river. The filtering procedure, which first applies a threshold (either slope or elevation) to classify vegetation high-points, and second a region-growing algorithm from these high-points, avoids the classification of high channel banks as vegetation, preserving existing channel morphology for subsequent interpolation (2.90-9.21% calibration error; 4.53-7.44% error in evaluation for the slope threshold). EIDW, which accounts for surface anisotropy by converting the remaining elevation points to streamwise co-ordinates, can outperform isotropic interpolation (IDW) on channel banks; however, it performs less well in isotropic conditions and when local anisotropy differs from that of the main channel. A key finding of this research is that filtering parameter uncertainty affects the performance of the interpolation procedure; resultant errors may propagate into the Digital Elevation Model (DEM) and subsequently derived products, such as Canopy Height Models (CHMs). Consequently, it is important that this uncertainty is assessed. Understanding and developing methods to deal with such errors is important to inform users of the true quality of laser scanning products, such that they can be used effectively in hydrological applications.
Korun, M
2001-11-01
Explicit expressions are derived describing the variance of the counting efficiency for a homogeneous cylindrical sample, placed coaxially on the detector's symmetry axis, in terms of the variances of the sample properties: thickness, density and composition. The derivation assumes emission of gamma-rays parallel to the sample axis and an efficiency for an area source proportional to the solid angle subtended by the source from the effective point of interaction of the gamma-rays within the detector crystal. For the uncertainties of the mass attenuation coefficients, as well as for the uncertainties of the concentrations of admixtures to the sample matrix, constant relative uncertainties are assumed.
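Variance expressions of this kind follow from first-order (Taylor) propagation of the input variances. The sketch below shows the generic numerical recipe in Python; the efficiency model is a toy self-attenuation factor standing in for the paper's actual detector geometry, and all parameter values are invented.

import numpy as np

# Toy efficiency model (illustrative only): self-attenuation of gamma-rays
# over half the sample thickness.
def efficiency(thickness, density, mu):   # mu: mass attenuation coeff. (cm^2/g)
    return np.exp(-mu * density * thickness / 2.0)

p     = np.array([1.0, 1.5, 0.08])        # thickness (cm), density (g/cm^3), mu
sigma = np.array([0.02, 0.03, 0.004])     # standard uncertainties of the above

# var(eff) ~ sum_i (d eff / d p_i)^2 * var(p_i), with central differences.
var = 0.0
for i in range(len(p)):
    h = 1e-6 * max(abs(p[i]), 1.0)
    hi, lo = p.copy(), p.copy()
    hi[i] += h
    lo[i] -= h
    d = (efficiency(*hi) - efficiency(*lo)) / (2 * h)
    var += d**2 * sigma[i]**2

print(efficiency(*p), np.sqrt(var))       # efficiency and its standard uncertainty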
Sega, Michela; Pennecchi, Francesca; Rinaldi, Sarah; Rolle, Francesca
2016-05-12
A proper evaluation of the uncertainty associated with the quantification of micropollutants in the environment, like Polycyclic Aromatic Hydrocarbons (PAHs), is crucial for the reliability of the measurement results. The present work describes a comparison between the uncertainty evaluation carried out according to the GUM uncertainty framework and the Monte Carlo (MC) method. This comparison was carried out starting from real data sets obtained from the quantification of benzo[a]pyrene (BaP), spiked on filters commonly used for airborne particulate matter sampling. BaP was chosen as target analyte as it is listed in the current European legislation as a marker of the carcinogenic risk for the whole class of PAHs. The MC method, being useful for nonlinear models and when the resulting output distribution for the measurand is non-symmetric, is particularly suited to cases in which the results of intrinsically positive quantities are very small and the lower limit of a desired coverage interval, obtained according to the GUM uncertainty framework, can be dramatically close to zero, if not even negative. In the case under study, it was observed that the two approaches provide different results for BaP masses in samples containing different masses of the analyte, the MC method giving larger coverage intervals. In addition, for analyte masses close to zero, the GUM uncertainty framework can give even a negative lower limit of the uncertainty coverage interval for the measurand, an unphysical result which is avoided when using the MC method. MC simulations, indeed, can be configured in a way that only positive values are generated, thus obtaining a coverage interval for the measurand that is always positive.
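The qualitative difference between the two frameworks near zero can be reproduced with a toy calculation. The sketch below assumes a simple blank-corrected measurement model with invented values, not the paper's actual HPLC data; rejecting negative draws is a crude stand-in for the paper's positivity-preserving MC configuration.

import numpy as np

rng = np.random.default_rng(1)

# Illustrative model: mass = (signal - blank) / sensitivity
signal, u_signal = 0.030, 0.010
blank,  u_blank  = 0.020, 0.008
sens,   u_sens   = 1.00,  0.05

# GUM framework: first-order propagation, symmetric ~95% interval.
mass = (signal - blank) / sens
u = np.sqrt((u_signal**2 + u_blank**2) / sens**2
            + ((signal - blank) / sens**2 * u_sens)**2)
print("GUM:", mass - 2*u, mass + 2*u)        # lower limit goes negative here

# Monte Carlo: propagate the distributions, keep only physical masses.
n = 1_000_000
draws = (rng.normal(signal, u_signal, n) - rng.normal(blank, u_blank, n)) \
        / rng.normal(sens, u_sens, n)
draws = draws[draws > 0]
print("MC:", np.percentile(draws, [2.5, 97.5]))  # always-positive interval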
Cardiff, Michael; Liu, Xiaoyi; Kitanidis, Peter K.; Parker, Jack; Kim, Ungtae
2010-04-01
Dense non-aqueous phase liquid (DNAPL) spills represent a potential long-term source of aquifer contamination, and successful low-cost remediation may require a combination of both plume management and source treatment. In addition, substantial uncertainty exists in many of the parameters that control field-scale behavior of DNAPL sources and plumes. For these reasons, cost optimization of DNAPL cleanup needs to consider multiple treatment options and their associated costs while also gauging the influence of prediction uncertainty on expected costs. In this paper, we present a management methodology for field-scale DNAPL source and plume management under uncertainty. Using probabilistic methods, historical data and prior information are combined to produce a set of equally likely realizations of true field conditions (i.e., parameter sets). These parameter sets are then used in a simulation-optimization framework to produce DNAPL cleanup solutions that have the lowest possible expected net present value (ENPV) cost and that are suitably cautious in the presence of high uncertainty. For simulation, we utilize a fast-running semi-analytic field-scale model of DNAPL source and plume evolution that also approximates the effects of remedial actions. The degree of model prediction uncertainty is gauged using a restricted maximum likelihood method, which helps to produce suitably cautious remediation strategies. We test our methodology on a synthetic field-scale problem with multiple source architectures, for which source zone thermal treatment and electron donor injection are considered as remedial actions. The lowest cost solution found utilizes a combination of source and plume remediation methods, and is able to successfully meet remediation constraints for a majority of possible scenarios. Comparisons with deterministic optimization results show that not taking into account uncertainty can result in optimization strategies that are not aggressive enough and result
This paper presents an evaluation of the uncertainty associated with the analytical measurement of eighteen polycyclic aromatic compounds (PACs) in ambient air by liquid chromatography with fluorescence detection (HPLC/FD). The study was focused on analyses of PM10, PM2.5 and gas phase fractions. The main analytical uncertainty was estimated for eleven polycyclic aromatic hydrocarbons (PAHs), four nitro polycyclic aromatic hydrocarbons (nitro-PAHs) and two hydroxy polycyclic aromatic hydrocarbons (OH-PAHs) based on the analytical determination, reference material analysis and extraction step. The main contributions reached 15-30% and came from the extraction process of real ambient samples, with those for nitro-PAHs being the highest (20-30%). The range and mean of PAC mass concentrations measured in the gas phase and PM10/PM2.5 particle fractions during a full year are also presented. Concentrations of OH-PAHs were about 2-4 orders of magnitude lower than those of their parent PAHs and comparable to those sparsely reported in the literature.
Ryerson, F. J.; Ezzedine, S. M.; Antoun, T.
2013-12-01
The successful implementation and execution of numerous subsurface energy technologies, such as shale gas extraction, geothermal energy and underground coal gasification, rely on detailed characterization of the geology and the subsurface properties. For example, the spatial variability of subsurface permeability controls multi-phase flow, and hence impacts the prediction of reservoir performance. Subsurface properties can vary significantly over several length scales, making detailed subsurface characterization infeasible, if not impossible. Therefore, in common practice, only sparse measurements are available to image or characterize the entire reservoir. For example, pressure, P, permeability, k, and production rate, Q, measurements are only available at the monitoring and operational wells. Elsewhere, the spatial distribution of k is determined by various deterministic or stochastic interpolation techniques, and P and Q are calculated from the governing forward mass balance equation assuming k is given at all locations. Several uncertainty quantification (UQ) drivers, such as PSUADE, are then used to propagate and quantify the uncertainty of quantities of interest using forward solvers. Unfortunately, forward-solver techniques and other interpolation schemes are rarely constrained by the inverse problem itself: given P and Q at observation points, determine the spatially variable map of k. The approach presented here, motivated by fluid imaging for subsurface characterization and monitoring, was developed by progressively solving increasingly complex realistic problems. The essence of this novel approach is that the forward and inverse partial differential equations are the interpolators themselves for P, k and Q, rather than extraneous and sometimes ad hoc schemes. Three cases with different sparsity of data are investigated. In the simplest case, a sufficient number of passive pressure data (pre-production pressure gradients) are given. Here, only the inverse hyperbolic...
Efficiency of analytical methodologies in uncertainty analysis of seismic core damage frequency
Fault Tree and Event Tree analysis is almost exclusively relied upon in assessments of seismic Core Damage Frequency (CDF). In this approach, the Direct Quantification of Fault tree using Monte Carlo simulation (DQFM) method, also simply called the Monte Carlo (MC) method, and the Binary Decision Diagram (BDD) method were introduced as alternatives to a traditional approximation method, namely the Minimal Cut Set (MCS) method. However, there is still no agreement as to which method should be used in a risk assessment of seismic CDF, especially for uncertainty analysis. The purpose of this study is to examine the efficiencies of the three methods in uncertainty analysis as well as in point estimation, so that the decision of selecting a proper method can be made effectively. The results show that the most efficient method is the BDD method in terms of accuracy and computational time. However, it will be discussed that the BDD method is not always applicable to PSA models, while the MC method is so in theory. In turn, the MC method was confirmed to agree with the exact solution obtained by the BDD method, but it took a large amount of time, in particular for uncertainty analysis. On the other hand, it was shown that the approximation error of the MCS method may not be as bad in uncertainty analysis as it is in point estimation. Based on these results and previous works, this paper proposes a scheme to select an appropriate analytical method for a seismic PSA study. Throughout this study, the SECOM2-DQFM code was expanded to be able to utilize the BDD method and to conduct uncertainty analysis with both the MC and BDD methods.
Gosset, Marielle; Casse, Claire; Peugeot, Christophe; Boone, Aaron; Pedinotti, Vanessa
2015-04-01
Global measurement of rainfall offers new opportunities for hydrological monitoring, especially for some of the largest tropical rivers where the rain gauge network is sparse and radar is not available. A member of the GPM constellation, the new French-Indian satellite mission Megha-Tropiques (MT), dedicated to the water and energy budget in the tropical atmosphere, contributes to better monitoring of rainfall in the inter-tropical zone. As part of this mission, research is being developed on the use of satellite rainfall products for hydrological research or operational applications such as flood monitoring. A key issue for such applications is how to account for rainfall product biases and uncertainties, and how to propagate them into the end-user models. Another important question is how to choose the best space-time resolution for the rainfall forcing, given that both model performance and rain-product uncertainties are resolution dependent. This paper analyses the potential of satellite rainfall products combined with hydrological modeling to monitor the Niger river floods in the city of Niamey, Niger. A dramatic increase of these floods has been observed in the last decades. The study focuses on the 125000 km2 area in the vicinity of Niamey, where local runoff is responsible for the most extreme floods recorded in recent years. Several rainfall products are tested as forcing to the SURFEX-TRIP hydrological simulations. Differences in terms of rainfall amount, number of rainy days, spatial extension of the rainfall events and frequency distribution of the rain rates are found among the products. Their impacts on the simulated outflow are analyzed. The simulations based on the real-time estimates produce an excess in the discharge. For flood prediction, the problem can be overcome by a prior adjustment of the products, as done here with probability matching, or by analysing the simulated discharge in terms of percentile or anomaly. All tested products exhibit some...
Groves, Curtis E.; Ilie, Marcel; Shallhorn, Paul A.
2014-01-01
Computational Fluid Dynamics (CFD) is the standard numerical tool used by fluid dynamicists to estimate solutions to many problems in academia, government, and industry. CFD is known to have errors and uncertainties, and there is no universally adopted method to estimate such quantities. This paper describes an approach to estimate CFD uncertainties strictly numerically using inputs and the Student-t distribution. The approach is compared to an exact analytical solution of fully developed, laminar flow between infinite, stationary plates. It is shown that treating all CFD input parameters as oscillatory uncertainty terms coupled with the Student-t distribution can encompass the exact solution.
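The abstract does not spell out its estimator, but the standard small-sample construction it builds on is the Student-t confidence interval over repeated runs. A minimal sketch, with invented output values standing in for the CFD results:

import numpy as np
from scipy import stats

# Hypothetical quantity of interest from N CFD runs with perturbed inputs.
q = np.array([104.2, 101.8, 103.5, 102.9, 105.1, 103.0, 102.2, 104.0])
n = len(q)

mean = q.mean()
s = q.std(ddof=1)                          # sample standard deviation
t = stats.t.ppf(0.975, df=n - 1)           # two-sided 95%, small sample

print(f"{mean:.2f} +/- {t * s / np.sqrt(n):.2f}  (95% CI on the mean)")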
Pal Verma, Mahendra [Instituto de Investigaciones Electricas, Cuernavaca, Morelos (Mexico)
2008-07-01
A procedure was developed to estimate the analytical uncertainty in each parameter of the geochemical analysis of geothermal fluid. The estimation of the uncertainty is based on the results of the geochemical analyses of geothermal fluids (numbered from 0 to 14) obtained within the framework of the inter-comparison program among geochemical laboratories over the last 30 years. The analytical uncertainty was also propagated into the calculation of the parameters of the geothermal fluid in the reservoir, through the uncertainty-interval and GUM (Guide to the Expression of Uncertainty in Measurement) methods. The application of the methods is illustrated by the pH calculation of the geothermal fluid in the reservoir, considering samples 10 and 11 as separated waters at atmospheric conditions.
Vinai, Paolo [Paul Scherrer Institute, Villigen (Switzerland); Ecole Polytechnique Federale de Lausanne, Lausanne (Switzerland); Chalmers University of Technology, Goeteborg (Sweden); Macian-Juan, Rafael [Technische Universitaet Muenchen, Garching (Germany); Chawla, Rakesh [Paul Scherrer Institute, Villigen (Switzerland); Ecole Polytechnique Federale de Lausanne, Lausanne (Switzerland)
2008-07-01
The paper describes the propagation of void fraction uncertainty, as quantified by employing a novel methodology developed at PSI, in the RETRAN-3D simulation of the Peach Bottom turbine trip test. Since the transient considered is characterized by strong coupling between thermal-hydraulics and neutronics, the accuracy of the void fraction model has a very important influence on the prediction of the power history and, in particular, of the maximum power reached. It has been shown that the objective measures used for the void fraction uncertainty, based on the direct comparison between experimental and predicted values extracted from a database of appropriate separate-effect tests, provide power uncertainty bands that are narrower and more realistic than those based, for example, on expert opinion. The applicability of such an approach to NPP transient best-estimate analysis has thus been demonstrated.
Mishra, S.; Schwab, Ch.; Šukys, J.
2016-05-01
We consider the very challenging problem of efficient uncertainty quantification for acoustic wave propagation in a highly heterogeneous, possibly layered, random medium, characterized by possibly anisotropic, piecewise log-exponentially distributed Gaussian random fields. A multi-level Monte Carlo finite volume method is proposed, along with a novel, bias-free upscaling technique that makes it possible to represent the input random fields, generated using spectral FFT methods, efficiently. Combined with a recently developed dynamic load balancing algorithm that scales to massively parallel computing architectures, the proposed method is able to robustly compute uncertainty for highly realistic random subsurface formations that can contain a very high number (millions) of sources of uncertainty. Numerical experiments, in both two and three space dimensions, illustrate the efficiency of the method.
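The core of the multi-level Monte Carlo idea is the telescoping sum over discretization levels, with many cheap coarse samples and few expensive fine ones. A minimal sketch, using a toy level-dependent function in place of the finite volume solve:

import numpy as np

rng = np.random.default_rng(2)

# Stand-in "solver": Q(l, omega) approximates a functional of the random
# medium at level l, with bias decaying as the mesh is refined.
def Q(level, omega):
    h = 2.0 ** (-level)
    return np.sin(omega) + h * np.cos(3 * omega)   # toy model, not a PDE

def mlmc(levels, samples_per_level):
    est = 0.0
    for l, n in zip(levels, samples_per_level):
        om = rng.standard_normal(n)                # same samples for both levels
        if l == 0:
            est += Q(0, om).mean()
        else:
            est += (Q(l, om) - Q(l - 1, om)).mean()   # telescoping correction
    return est

# Many cheap coarse solves, few expensive fine ones.
print(mlmc(levels=[0, 1, 2, 3], samples_per_level=[40000, 8000, 1500, 300]))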
Abdul Samad, Noor Asma Fazli Bin; Sin, Gürkan; Gernaey, Krist;
2013-01-01
, while for sensitivity analysis, global methods including the standardized regression coefficients (SRC) and Morris screening are used to identify the most significant parameters. The potassium dihydrogen phosphate (KDP) crystallization process is used as a case study, both in open-loop and closed......This paper presents the application of uncertainty and sensitivity analysis as part of a systematic model-based process monitoring and control (PAT) system design framework for crystallization processes. For the uncertainty analysis, the Monte Carlo procedure is used to propagate input uncertainty...
Fast and accurate analytical model to solve inverse problem in SHM using Lamb wave propagation
Poddar, Banibrata; Giurgiutiu, Victor
2016-04-01
Lamb wave propagation is at the center of attention of researchers for structural health monitoring of thin-walled structures. This is due to the fact that Lamb wave modes are natural modes of wave propagation in these structures, with long travel distances and without much attenuation. This brings the prospect of monitoring large structures with few sensors/actuators. However, the problem of damage detection and identification is an "inverse problem" where we do not have the luxury of knowing the exact mathematical model of the system. On top of that, the problem is more challenging due to the confounding factors of statistical variation of the material and geometric properties. Typically this problem may also be ill-posed. Due to all these complexities, the direct solution of the problem of damage detection and identification in SHM is impossible. Therefore an indirect method using the solution of the "forward problem" is popular for solving the "inverse problem". This requires a fast forward-problem solver. Due to the complexities involved with the forward problem of scattering of Lamb waves from damage, researchers rely primarily on numerical techniques such as FEM, BEM, etc. But these methods are slow and practically impossible to use in structural health monitoring. We have developed a fast and accurate analytical forward-problem solver for this purpose. This solver, CMEP (complex modes expansion and vector projection), can simulate scattering of Lamb waves from all types of damage in thin-walled structures quickly and accurately to assist the inverse-problem solver.
Rose, K.; Bauer, J. R.; Baker, D. V.
2015-12-01
As big data computing capabilities are increasingly paired with spatial analytical tools and approaches, there is a need to ensure that the uncertainty associated with the datasets used in these analyses is adequately incorporated and portrayed in results. Often the products of spatial analyses, big data and otherwise, are developed using discontinuous, sparse, and often point-driven data to represent continuous phenomena. Results from these analyses are generally presented without clear explanations of the uncertainty associated with the interpolated values. The Variable Grid Method (VGM) offers users a flexible approach designed for application to a variety of analyses where there is a need to study, evaluate, and analyze spatial trends and patterns while maintaining connection to and communicating the uncertainty in the underlying spatial datasets. The VGM outputs a simultaneous visualization representative of the spatial data analyses and a quantification of the underlying uncertainties, which can be calculated using data related to sample density, sample variance, interpolation error, or uncertainty calculated from multiple simulations. In this presentation we will show how we are utilizing Hadoop to store and perform spatial analysis through the development of custom Spark and MapReduce applications that incorporate ESRI Hadoop libraries. The team will present custom 'Big Data' geospatial applications that run on the Hadoop cluster and integrate ESRI ArcMap with the team's probabilistic VGM approach. The VGM-Hadoop tool has been specially built as a multi-step MapReduce application running on the Hadoop cluster for the purpose of data reduction. This reduction is accomplished by generating multi-resolution, non-overlapping, attributed topology that is then further processed using ESRI's geostatistical analyst to convey a probabilistic model of a chosen study region. Finally, we will share our approach for the implementation of data reduction and topology generation.
NAJI, Noor Ezzulddin
2011-01-01
Presented is a derivation of an analytical expression for the mode-coherence coefficients of a uniformly distributed wave propagating within different homogeneous media, as in the case of hyperbolic Gaussian beams, and a simple method involving the superposition of two such beams is proposed. The results obtained from this work are very applicable to the study and analysis of Hermite-Gaussian beam propagation, especially in problems of radiation-matter interaction and laser beam propagation...
On the propagation of diel signals in river networks using analytic solutions of flow equations
Fonley, M.; Mantilla, R.; Small, S. J.; Curtu, R.
2015-08-01
Two hypotheses have been put forth to explain the magnitude and timing of diel streamflow oscillations during low flow conditions. The first suggests that delays between the peaks and troughs of streamflow and daily evapotranspiration are due to processes occurring in the soil as water moves toward the channels in the river network. The second posits that they are due to the propagation of the signal through the channels as water makes its way to the outlet of the basin. In this paper, we design and implement a theoretical experiment to test these hypotheses. We impose a baseflow signal entering the river network and use a linear transport equation to represent flow along the network. We develop analytic streamflow solutions for two cases: uniform and nonuniform velocities in space over all river links. We then use our analytic solutions to simulate streamflows along a self-similar river network for different flow velocities. Our results show that the amplitude and time delay of the streamflow solution are heavily influenced by transport in the river network. Moreover, our equations show that the geomorphology and topology of the river network play important roles in determining how amplitude and signal delay are reflected in streamflow signals. Finally, our results are consistent with empirical observations that delays are more significant as low flow decreases.
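For a single link modeled as a linear store forced by a sinusoidal baseflow input, the amplitude damping and time delay have closed forms. The sketch below illustrates this, assuming a linear-reservoir link equation dq/dt = k(q_in - q) and an invented residence time; the paper's network-wide analytic solutions generalize this behavior along the river network.

import numpy as np

omega = 2 * np.pi / 86400.0     # diel frequency (rad/s)
k = 1.0 / (6 * 3600.0)          # inverse link residence time (1/s), assumed

gain  = 1.0 / np.sqrt(1.0 + (omega / k) ** 2)   # amplitude ratio out/in
delay = np.arctan(omega / k) / omega            # time delay of the peak (s)

print(f"amplitude ratio {gain:.3f}, delay {delay / 3600:.2f} h")

Slower links (smaller k) damp the diel signal more strongly and delay it further, consistent with the finding that transport in the network heavily influences amplitude and timing.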
Gurdak, Jason J.; Qi, Sharon L.; Geisler, Michael L.
2009-01-01
The U.S. Geological Survey Raster Error Propagation Tool (REPTool) is a custom tool for use with the Environmental Systems Research Institute (ESRI) ArcGIS Desktop application to estimate error propagation and prediction uncertainty in raster processing operations and geospatial modeling. REPTool is designed to introduce concepts of error and uncertainty in geospatial data and modeling and to provide users of ArcGIS Desktop a geoprocessing tool and methodology to consider how error affects geospatial model output. Similar to other geoprocessing tools available in ArcGIS Desktop, REPTool can be run from a dialog window, from the ArcMap command line, or from a Python script. REPTool consists of public-domain, Python-based packages that implement Latin Hypercube Sampling within a probabilistic framework to track error propagation in geospatial models and quantitatively estimate the uncertainty of the model output. Users may specify error for each input raster or model coefficient represented in the geospatial model. The error for the input rasters may be specified as either spatially invariant or spatially variable across the spatial domain. Users may specify model output as a distribution of uncertainty for each raster cell. REPTool uses the Relative Variance Contribution method to quantify the relative error contribution from the two primary components in the geospatial model: errors in the model input data and coefficients of the model variables. REPTool is appropriate for many types of geospatial processing operations, modeling applications, and related research questions, including applications that consider spatially invariant or spatially variable error in geospatial data.
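A stripped-down version of the sampling step can be written with scipy's quasi-Monte Carlo module. The sketch below propagates invented Gaussian errors for one raster cell through a toy slope calculation; REPTool's own implementation, per-cell error maps and Relative Variance Contribution bookkeeping are not reproduced here.

import numpy as np
from scipy.stats import norm, qmc

# Latin Hypercube samples in [0,1)^2, mapped to Gaussian input errors.
sampler = qmc.LatinHypercube(d=2, seed=3)
u = sampler.random(n=1000)

dz  = 10.0  + norm.ppf(u[:, 0]) * 0.5    # elevation difference (m), sigma 0.5
run = 100.0 + norm.ppf(u[:, 1]) * 2.0    # horizontal distance (m), sigma 2.0

slope = dz / run                          # toy geospatial operation
print(slope.mean(), slope.std(ddof=1))    # output uncertainty for the cell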
Wave-like warp propagation in circumbinary discs - I. Analytic theory and numerical simulations
Facchini, Stefano; Lodato, Giuseppe; Price, Daniel J.
2013-08-01
In this paper we analyse the propagation of warps in protostellar circumbinary discs. We use these systems as a test environment in which to study warp propagation in the bending-wave regime, with the addition of an external torque due to the binary gravitational potential. In particular, we want to test the linear regime, for which an analytic theory has been developed. In order to do so, we first compute analytically the steady-state shape of an inviscid disc subject to the binary torques. The steady-state tilt is a monotonically increasing function of radius, but misalignment is found at the disc inner edge. In the absence of viscosity, the disc does not present any twist. Then, we compare the time-dependent evolution of the warped disc calculated via the known linearized equations both with the analytic solutions and with full 3D numerical simulations. The simulations have been performed with the PHANTOM smoothed particle hydrodynamics (SPH) code using two million particles. We find a good agreement both in the tilt and in the phase evolution for small inclinations, even at very low viscosities. Moreover, we have verified that the linearized equations are able to reproduce the diffusive behaviour when α > H/R, where α is the disc viscosity parameter. Finally, we have used the 3D simulations to explore the non-linear regime. We observe a strongly non-linear behaviour, which leads to the breaking of the disc. Then, the inner disc starts precessing with its own precessional frequency. This behaviour has already been observed with numerical simulations in accretion discs around spinning black holes. The evolution of circumstellar accretion discs strongly depends on the warp evolution. Therefore, the issue explored in this paper could be of fundamental importance in order to understand the evolution of accretion discs in crowded environments, when the gravitational interaction with other stars is highly likely, and in multiple systems. Moreover, the evolution of
The control of uncertainties in the field of reactor physics and their propagation in best-estimate modeling are a major issue in safety analysis. In this framework, the CEA develops a methodology to perform multi-physics simulations including uncertainty analysis. The present paper aims to present and apply this methodology to the analysis of an accidental situation such as a REA (Rod Ejection Accident). This accident is characterized by a strong interaction between the different areas of reactor physics (neutronics, fuel thermal behavior and thermal hydraulics). The modeling is performed with the CRONOS2 code. The uncertainty analysis has been conducted with the URANIE platform developed by the CEA: for each identified response from the modeling (output), and considering a set of key parameters with their uncertainties (input), a surrogate model in the form of a neural network has been produced. The set of neural networks is then used to carry out a sensitivity analysis, which consists of a global variance analysis with the determination of the Sobol indices for all responses. The sensitivity indices are obtained for the input parameters by an approach based on the use of polynomial chaos. The present exercise helped to develop a methodological flow scheme and to consolidate the use of the URANIE tool in the framework of parallel calculations. Finally, the use of polynomial chaos allowed the computation of high-order sensitivity indices, thus highlighting and classifying the influence of the identified uncertainties on each response of the analysis (single and interaction effects).
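Given a cheap surrogate, first-order Sobol indices can also be estimated with the classic pick-and-freeze (Saltelli-type) scheme rather than polynomial chaos. A minimal sketch with a toy three-input response standing in for the neural-network surrogates:

import numpy as np

rng = np.random.default_rng(4)

def model(x):                                   # toy surrogate response
    return x[:, 0] + 0.5 * x[:, 1] ** 2 + 0.1 * x[:, 0] * x[:, 2]

n, d = 100_000, 3
A = rng.standard_normal((n, d))
B = rng.standard_normal((n, d))
fA, fB = model(A), model(B)
varY = np.var(np.concatenate([fA, fB]))

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                         # column i replaced from B
    S_i = np.mean(fB * (model(ABi) - fA)) / varY
    print(f"S_{i+1} = {S_i:.3f}")               # first-order Sobol index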
The known analytic expressions for the evolution of the polarization of electromagnetic waves propagating in a plasma with uniformly sheared magnetic field are extended to the case where the shear is not constant. Exact analytic expressions are found for the case when the space variations of the medium are such that the magnetic field components and the plasma density satisfy a particular condition (eq. 13), possibly in a convenient reference frame of polarization space
Highlights: • Fission yield data and uncertainty comparison between major nuclear data libraries. • Fission yield covariance generation through a Bayesian technique. • Study of the effect of fission yield correlations on decay heat calculations. • Covariance information contributes to reducing fission pulse decay heat uncertainty. - Abstract: Fission product yields are fundamental parameters in burnup/activation calculations, and the impact of their uncertainties has been widely studied in the past. Evaluations of these uncertainties were released, still without covariance data. Therefore, the nuclear community expressed the need for full fission yield covariance matrices to be able to produce inventory calculation results that take into account the complete uncertainty data. State-of-the-art fission yield data and methodologies for fission yield covariance generation were researched in this work. Covariance matrices were generated and compared to the original data stored in the library. Then, we focused on the effect of fission yield covariance information on fission pulse decay heat results for thermal fission of 235U. Calculations were carried out using different libraries and codes (ACAB and ALEPH-2) after introducing the new covariance values. Results were compared with those obtained with the uncertainty data currently provided by the libraries. The uncertainty quantification was performed first with Monte Carlo sampling and then compared with linear perturbation. Indeed, correlations between fission yields strongly affect the uncertainty of decay heat. Finally, a sensitivity analysis of fission product yields to fission pulse decay heat was performed in order to provide a full set of the most sensitive nuclides for such a calculation.
Nuclear Data Uncertainty Propagation to Reactivity Coefficients of a Sodium Fast Reactor
Herrero, J. J.; Ochoa, R.; Martínez, J. S.; Díez, C. J.; García-Herranz, N.; Cabellos, O.
2014-04-01
The assessment of the uncertainty levels on the design and safety parameters of the innovative European Sodium Fast Reactor (ESFR) is mandatory. Some of these relevant safety quantities are the Doppler and void reactivity coefficients, whose uncertainties are quantified here. In addition, the nuclear reaction data whose improvement would most benefit the design accuracy are identified. This work has been performed with the SCALE 6.1 code suite and its multigroup cross-section library based on the ENDF/B-VII.0 evaluation.
Propagation of uncertainty and sensitivity analysis in an integral oil-gas plume model
Wang, Shitao; Iskandarani, Mohamed; Srinivasan, Ashwanth; Thacker, W. Carlisle; Winokur, Justin; Knio, Omar M.
2016-05-01
Polynomial Chaos expansions are used to analyze uncertainties in an integral oil-gas plume model simulating the Deepwater Horizon oil spill. The study focuses on six uncertain input parameters (two entrainment parameters, the gas to oil ratio, two parameters associated with the droplet-size distribution, and the flow rate) that impact the model's estimates of the plume's trap and peel heights, and of its various gas fluxes. The ranges of the uncertain inputs were determined by experimental data. Ensemble calculations were performed to construct polynomial chaos-based surrogates that describe the variations in the outputs due to variations in the uncertain inputs. The surrogates were then used to estimate reliably the statistics of the model outputs, and to perform an analysis of variance. Two experiments were performed to study the impacts of high and low flow rate uncertainties. The analysis shows that in the former case the flow rate is the largest contributor to output uncertainties, whereas in the latter case, with the uncertainty range constrained by a posteriori analyses, the flow rate's contribution becomes negligible. The trap and peel heights uncertainties are then mainly due to uncertainties in the 95% percentile of the droplet size and in the entrainment parameters.
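The regression (non-intrusive) route to such surrogates can be shown in a few lines for a single standard-normal input; the toy response below merely stands in for the plume model, and the order and sample size are arbitrary choices.

import numpy as np
from numpy.polynomial.hermite_e import hermevander
from math import factorial

rng = np.random.default_rng(5)

f = lambda xi: np.exp(0.3 * xi) + 0.1 * xi**2   # stand-in for a model output

xi = rng.standard_normal(500)                   # ensemble of model runs
P = 6                                           # polynomial chaos order
Psi = hermevander(xi, P)                        # probabilists' Hermite basis
c, *_ = np.linalg.lstsq(Psi, f(xi), rcond=None) # regression coefficients

# Orthogonality E[He_m He_n] = n! delta_mn yields moments from coefficients.
mean = c[0]
var = sum(c[n] ** 2 * factorial(n) for n in range(1, P + 1))
print(mean, var)

The variance decomposition over coefficients is also what makes the analysis of variance mentioned in the abstract essentially free once the surrogate is built.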
The prompt fission neutron spectrum (PFNS) uncertainties in the n+239Pu fission reaction are used to study the impact on several fast critical assemblies modeled in the MCNP6.1 code. The newly developed sensitivity capability in MCNP6.1 is used to compute the keff sensitivity coefficients with respect to the PFNS. In comparison, the covariance matrix given in the ENDF/B-VII.1 library is decomposed, and randomly sampled realizations of the PFNS are propagated through the criticality calculation, preserving the PFNS covariance matrix. The information gathered from both approaches, including the overall keff uncertainty, is statistically analyzed. Overall, the forward and backward approaches agree as expected. The results appear to be limited by the process used to evaluate the PFNS rather than by a flaw of the new method itself. Final thoughts and directions for future work are suggested.
Analytical propagation of errors in dynamic SPECT: estimators, degrading factors, bias and noise
Dynamic SPECT is a relatively new technique that may potentially benefit many imaging applications. Though similar to dynamic PET, the accuracy and precision of dynamic SPECT parameter estimates are degraded by factors that differ from those encountered in PET. In this work we formulate a methodology for analytically studying the propagation of errors from dynamic projection data to kinetic parameter estimates. This methodology is used to study the relationships between reconstruction estimators, image degrading factors, bias and statistical noise for the application of dynamic cardiac imaging with 99mTc-teboroxime. Dynamic data were simulated for a torso phantom, and the effects of attenuation, detector response and scatter were successively included to produce several data sets. The data were reconstructed to obtain both weighted and unweighted least squares solutions, and the kinetic rate parameters for a two-compartment model were estimated. The expected values and standard deviations describing the statistical distribution of parameters that would be estimated from noisy data were calculated analytically. The results of this analysis present several interesting implications for dynamic SPECT. Statistically weighted estimators performed only marginally better than unweighted ones, implying that more computationally efficient unweighted estimators may be appropriate. This also suggests that it may be beneficial to focus future research efforts upon regularization methods with beneficial bias-variance trade-offs. Other aspects of the study describe the fundamental limits of the bias-variance trade-off regarding physical degrading factors and their compensation. The results characterize the effects of attenuation, detector response and scatter, and they are intended to guide future research into dynamic SPECT reconstruction and compensation methods.
Zhang, Yaoju
2007-10-10
A simple and rigorous analytical expression for the propagating field behind an axicon illuminated by an azimuthally polarized beam has been deduced by use of the vector interference theory. This analytical expression can easily be used to calculate accurately the propagating field distribution of azimuthally polarized beams throughout the whole space behind an axicon with a base angle of any size, not restricted to the geometric focal region as is the Fresnel diffraction integral. The numerical results show that the pattern of the beam produced by an azimuthally polarized Gaussian beam passing through an axicon is a multiring, almost-equal-intensity, propagation-invariant interference beam in the geometric focal region. The number of bright rings increases with the propagation distance, reaching its maximum at half of the geometric focal length and then decreasing. The intensity of the bright rings gradually decreases with the propagation distance in the geometric focal region. However, in the far-field (noninterference) region, only a single-ring pattern is produced and the dark spot size expands rapidly with propagation distance.
Xu, Yanlong
2015-08-01
The coupled mode theory, with coupling of diffraction modes and waveguide modes, is usually used for calculations of transmission and reflection coefficients for electromagnetic waves traveling through periodic sub-wavelength structures. In this paper, I extend this method to derive analytical solutions of high-order dispersion relations for shear horizontal (SH) wave propagation in elastic plates with periodic stubs. In the long-wavelength regime, an explicit expression is obtained by this theory and derived specially by employing an effective medium. This indicates that the periodic stubs are equivalent to an effective homogeneous layer at long wavelengths. Notably, in the short-wavelength regime, high-order diffraction modes in the plate and high-order waveguide modes in the stubs are considered, with mode coupling, to compute the band structures. Numerical results of the coupled mode theory agree well with the results of the finite element method (FEM). In addition, the evolution of the band structures with the height of the stubs and the thickness of the plate shows clearly that the method can predict well the Bragg band gaps, locally resonant band gaps and high-order symmetric and anti-symmetric thickness-twist modes for the periodically structured plates.
Calibration and Forward Uncertainty Propagation for Large-eddy Simulations of Engineering Flows
Templeton, Jeremy Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Blaylock, Myra L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Domino, Stefan P. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hewson, John C. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Kumar, Pritvi Raj [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Ling, Julia [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Najm, Habib N. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Ruiz, Anthony [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Safta, Cosmin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Sargsyan, Khachik [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Stewart, Alessia [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Wagner, Gregory [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2015-09-01
The objective of this work is to investigate the efficacy of using calibration strategies from Uncertainty Quantification (UQ) to determine model coefficients for LES. As the target methods are for engineering LES, uncertainty from numerical aspects of the model must also be quantified. The ultimate goal of this research thread is to generate a cost-versus-accuracy curve for LES such that the cost could be minimized given an accuracy prescribed by an engineering need. Realization of this goal would enable LES to serve as a predictive simulation tool within the engineering design process.
Diky, Vladimir; Chirico, Robert D.; Muzny, Chris;
. However, the accuracy of such calculations is generally unknown, which often leads to overdesign of the operational units and results in significant additional cost. TDE provides a tool for the analysis of uncertainty of property calculations for multi-component streams. A process stream in TDE can be......ThermoData Engine (TDE, NIST Standard Reference Databases 103a and 103b) is the first product that implements the concept of Dynamic Data Evaluation in the fields of thermophysics and thermochemistry, which includes maintaining the comprehensive and up-to-date database of experimentally measured...... variations). Predictions can be compared to the available experimental data, and uncertainties are estimated for all efficiency criteria. Calculations of the properties of multi-component streams including composition at phase equilibria (flash calculations) are at the heart of process simulation engines...
ZHAO Yan-Zhong; SUN Hua-Yan; ZHENG Yong-Hui
2011-01-01
Based on the generalized diffraction integral formula and the idea that the angle misalignment of the cat-eye optical lens can be transformed into a displacement misalignment, an approximate analytical propagation formula for Gaussian beams through a cat-eye optical lens under large incidence angle conditions is derived. Numerical results show that the diffraction effect of the apertures of the cat-eye optical lens becomes stronger with increasing incidence angle. The results are also compared with those from an angular spectrum diffraction integral and from experiment to illustrate the applicability and validity of our theoretical formula. It is shown that the approximation is good enough for the application of a cat-eye optical lens with a radius of 20 mm and a propagation distance of 100 m, and that the approximation improves with increasing radius of the cat-eye optical lens and propagation distance.
Study of propagation along the body at 60 GHz with analytical models and skin-equivalent phantoms
Valerio, Guido; Chahat, Nacer; Zhadobov, Maxim; Sauleau, Ronan
2013-01-01
Propagation on the surface of the human body is investigated for the first time in the mm-wave frequency range. The study, motivated by the increasing number of applications of body area networks, is performed through an accurate analytical model for the fields excited by a small source in the proximity of the human body. New asymptotic expressions are derived for the fields, uniformly valid for the range of values of the physical and geometrical parameters of interest. The theoretical analysis...
Gustafsson, Johan; Brolin, Gustav; Cox, Maurice; Ljungberg, Michael; Johansson, Lena; Sjögreen Gleisner, Katarina
2015-11-01
A computer model of a patient-specific clinical 177Lu-DOTATATE therapy dosimetry system is constructed and used for investigating the variability of renal absorbed dose and biologically effective dose (BED) estimates. As patient models, three anthropomorphic computer phantoms coupled to a pharmacokinetic model of 177Lu-DOTATATE are used. Aspects included in the dosimetry-process model are the gamma-camera calibration via measurement of the system sensitivity, selection of imaging time points, generation of mass-density maps from CT, SPECT imaging, volume-of-interest delineation, calculation of absorbed-dose rate via a combination of local energy deposition for electrons and Monte Carlo simulations of photons, curve fitting and integration to absorbed dose and BED. By introducing variabilities in these steps the combined uncertainty in the output quantity is determined. The importance of different sources of uncertainty is assessed by observing the decrease in standard deviation when removing a particular source. The obtained absorbed dose and BED standard deviations are approximately 6% and slightly higher if considering the root mean square error. The most important sources of variability are the compensation for partial volume effects via a recovery coefficient and the gamma-camera calibration via the system sensitivity.
Alhassan, Erwin; Duan, Junfeng; Gustavsson, Cecilia; Koning, Arjan; Pomp, Stephan; Rochman, Dimitri; Österlund, Michael
2013-01-01
Analyses are carried out to assess the impact of nuclear data uncertainties on keff for the European Lead Cooled Training Reactor (ELECTRA) using the Total Monte Carlo method. A large number of Pu-239 random ENDF-formatted libraries, generated using the TALYS based system, were processed into ACE format with the NJOY99.336 code and used as input to the Serpent Monte Carlo neutron transport code to obtain a distribution in keff. The keff distribution obtained was compared with the latest major nuclear data libraries - JEFF-3.1.2, ENDF/B-VII.1 and JENDL-4.0. A method is proposed for the selection of benchmarks for specific applications using the Total Monte Carlo approach. Finally, an accept/reject criterion was investigated based on chi-square values obtained using the Pu-239 Jezebel criticality benchmark. It was observed that nuclear data uncertainties in keff were reduced considerably, from 748 to 443 pcm, by applying a more rigid acceptance criterion.
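The accept/reject step can be sketched independently of the transport calculations. Below, the chi-square scores and keff values per random library are synthetic and loosely correlated purely for illustration; in the actual methodology both come from Serpent runs and the Jezebel benchmark.

import numpy as np

rng = np.random.default_rng(6)

n_files = 500
chi2 = rng.chisquare(df=20, size=n_files)                 # synthetic scores
keff = rng.normal(1.0000, 0.0075, n_files) + 4e-4 * (chi2 - 20)

for cut in (np.inf, 30.0, 22.0):          # tightening the acceptance criterion
    kept = keff[chi2 < cut]
    print(f"chi2 < {cut}: {kept.std(ddof=1) * 1e5:.0f} pcm "
          f"from {kept.size} files")

Tightening the criterion shrinks the keff spread, mirroring the reported reduction from 748 to 443 pcm.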
Applied Analytical Methods for Solving Some Problems of Wave Propagation in the Coastal Areas
Gagoshidze, Shalva; Kodua, Manoni
2016-04-01
Analytical methods, easy to apply, are proposed for the solution of the following four classical problems of coastline hydromechanics: 1. Refraction of waves on coast slopes of arbitrary steepness; 2. Wave propagation in tapering water areas; 3. Longitudinal waves in open channels; 4. Long waves on uniform and non-uniform flows of water. The first three of these problems are solved by the direct Galerkin-Kantorovich method with a choice of basis functions which completely satisfy all boundary conditions. This approach leads to new evolutionary equations which can be solved asymptotically by the WKB method. The WKB solution of the first problem enables us to easily determine the three-dimensional field of velocities and to construct the refraction picture of the wave surface near a coast having an arbitrary angle of slope to the horizon varying from 0° to 180°. This solution, in particular for a vertical cliff, fully agrees with Stoker's particular but difficult solution. Moreover, it is shown for the first time that our Schrödinger-type evolutionary equation leads to the formation of so-called "potential wells" if the angle of the coast slope to the horizon exceeds 45°, while the angle given at infinity (i.e. at a large distance from the shore) between the wave crests and the coastline exceeds 75°. This theoretical result, expressed in terms of elementary functions, is well consistent with experimental observations and with many aerial photographs of waves in the coastal zones of the oceans [1,2]. For the second problem we introduce the notions of "wide" and "narrow" water areas. It is shown that Green's law on wave height growth holds only for the narrow part of the water area, whereas in the wide part the tapering of the water area leads to an insignificant decrease of the wave height. For the third problem, the bank slopes of trapezoidal channels are assumed to have an arbitrary angle of steepness. So far we have known the...
Gates, Robert L
2015-01-01
This work proposes a scheme for significantly reducing the computational complexity of discretized problems involving the non-smooth forward propagation of uncertainty by combining the adaptive hierarchical sparse grid stochastic collocation method (ALSGC) with a hierarchy of successively finer spatial discretizations (e.g. finite elements) of the underlying deterministic problem. To achieve this, we build strongly upon ideas from the Multilevel Monte Carlo method (MLMC), which represents a well-established technique for the reduction of computational complexity in problems affected by both deterministic and stochastic error contributions. The resulting approach is termed the Multilevel Adaptive Sparse Grid Collocation (MLASGC) method. Preliminary results for a low-dimensional, non-smooth parametric ODE problem are promising: the proposed MLASGC method exhibits an error/cost-relation of $\\varepsilon \\sim t^{-0.95}$ and therefore significantly outperforms the single-level ALSGC ($\\varepsilon \\sim t^{-0.65}$) a...
To fulfill the needs of the probabilistic safety assessment for the Angra 1 nuclear power plant, a computer code for performing event tree analyses, ETAP2, has been developed in the PASCAL language for the Burroughs B-6700 computer. The code employs the impact vector method. A dependency matrix is defined which allows for proper consideration of all relevant intersystem dependencies. The analyses are carried out to the subsystem (train or channel) level. The uncertainty analysis is performed on the dominant accident sequences, lumped in two different groups: one for assessing the core-degradation-class frequencies and another for obtaining the core-degradation frequency associated with the initiator under analysis. For this purpose we use a discrete Monte Carlo algorithm which is faster than others available, besides furnishing reliable results. The loss-of-offsite-power initiator analysis is presented for illustration purposes. The current version of ETAP2 is implemented on the VAX 11/780 computer. (author)
This work uses a new methodology to evaluate DNBR, named mini-RTDP. The standard thermal design procedure (STDP) currently in use establishes a limit design value which cannot be surpassed. This limit value is determined taking into account the uncertainties of the empirical correlation used in the COBRA IIIC/MIT code, modified to Angra 1 conditions. The correlation used is Westinghouse's W-3, and the minimum DNBR (MDNBR) value cannot be less than 1.3. The new methodology reduces the excessive level of conservatism associated with the parameters used in the DNBR calculation, which take their most unfavorable values in the STDP methodology, by using their best-estimate values instead. The final goal is to obtain a new DNBR design limit which will provide a margin gain due to the more realistic parameter values used in the methodology. (author). 11 refs., 2 tabs
Bilionis, Ilias; Gonzalez, Marcial
2016-01-01
The prohibitive cost of performing Uncertainty Quantification (UQ) tasks with a very large number of input parameters can be addressed, if the response exhibits some special structure that can be discovered and exploited. Several physical responses exhibit a special structure known as an active subspace (AS), a linear manifold of the stochastic space characterized by maximal response variation. The idea is that one should first identify this low dimensional manifold, project the high-dimensional input onto it, and then link the projection to the output. In this work, we develop a probabilistic version of AS which is gradient-free and robust to observational noise. Our approach relies on a novel Gaussian process regression with built-in dimensionality reduction with the AS represented as an orthogonal projection matrix that serves as yet another covariance function hyper-parameter to be estimated from the data. To train the model, we design a two-step maximum likelihood optimization procedure that ensures the ...
Camici, Stefania; Tito Aronica, Giuseppe; Tarpanelli, Angelica; Moramarco, Tommaso
2013-04-01
Hydraulic models are an essential tool in many fields, e.g. civil engineering, flood hazard and risk assessment, and the evaluation of flood control measures. Nowadays many models of different complexity are available, regarding both the mathematical foundation and the spatial dimensions, and most of them are comparatively easy to operate due to sophisticated tools for model setup and control. However, the calibration of these models is still underdeveloped in contrast to other models, e.g. hydrological models or models used in ecosystem analysis. There are basically two reasons for this. The first is the lack of data necessary for model calibration: flood events are very rarely monitored, owing to the disturbances they inflict and to the lack of appropriate measuring equipment in place. The second reason is related to the choice of suitable performance measures for calibrating and evaluating model predictions in a credible and consistent way (and for reducing the uncertainty). This study considers a well-documented flood event of November 2012 in the Paglia river basin (Central Italy). For this area, a detailed description of the main channel morphology, obtained from accurate topographical surveys and from a DEM with a spatial resolution of 2 m, and several points within the floodplain areas at which the maximum water level had been measured, were available for the post-event analysis. On the basis of this information, a two-dimensional inertial finite-element hydraulic model was set up and calibrated using different performance measures. Manning roughness coefficients obtained from the different calibrations were then used for the delineation of inundation maps, including their uncertainty. The water levels at three hydrometric stations and the flooded area extent, derived from video recordings made the day after the flood event, were used for the validation of the model.
This paper summarizes the results of the dynamic response analysis of the Zion reactor containment building using three different soil-structure interaction (SSI) analytical procedures which are: the substructure method, CLASSI; the equivalent linear finite element approach, ALUSH; and the nonlinear finite element procedure, DYNA3D. Uncertainties in analyzing a soil-structure system due to SSI analysis procedures were investigated. Responses at selected locations in the structure were compared through peak accelerations and response spectra
Ramin Shamshiri
2014-01-01
Wave propagation and heat distribution are both governed by second-order linear constant-coefficient partial differential equations, yet their solutions have very different properties. This study presents a comprehensive comparison between the hyperbolic wave equation and the parabolic heat equation. Issues such as conservation of the wave profile versus averaging, transport of information, finite versus infinite propagation speed, time reversibility versus irreversibility, and propagation of singularities versus instantaneous smoothing are addressed, followed by examples and graphical evidence from computer simulations to support the arguments.
One of the most important thermal-hydraulic safety parameters is the DNBR (Departure from Nucleate Boiling Ratio). The current methodology in use at Eletronuclear to determine the DNBR is extremely conservative and may result in penalties to the reactor power due to an increased plugging level of steam generator tubes. This work uses a new methodology to evaluate DNBR, named mini-RTDP. The standard methodology (STDP) currently in use establishes a limit design value which cannot be surpassed. This limit value is determined taking into account the uncertainties of the empirical correlation used in the COBRA IIIC/MIT code, modified to Angra 1 conditions. The correlation used is Westinghouse's W-3, and the minimum DNBR (MDNBR) value cannot be less than 1.3. The new methodology reduces the excessive level of conservatism associated with the parameters used in the DNBR calculation, which take their most unfavorable values in the STDP methodology, by using their best-estimate values instead. The final goal is to obtain a new DNBR design limit which will provide a margin gain due to the more realistic parameter values used in the methodology. (author)
Stolarski, R. S.; Douglass, A. R.
1986-01-01
Models of stratospheric photochemistry are generally tested by comparing their predictions for the composition of the present atmosphere with measurements of species concentrations. These models are then used to make predictions of the atmospheric sensitivity to perturbations. Here the problem of the sensitivity of such a model to chlorine perturbations, ranging from the present influx of chlorine-containing compounds to several times that influx, is addressed. The effects of uncertainties in input parameters, including reaction rate coefficients, cross sections, solar fluxes, and boundary conditions, are evaluated using a Monte Carlo method in which the values of the input parameters are randomly selected. The results are probability distributions for present atmospheric concentrations and for calculated perturbations due to chlorine from fluorocarbons. For more than 300 Monte Carlo runs the calculated ozone perturbation for continued emission of fluorocarbons at today's rates had a mean value of -6.2 percent, with a 1-sigma width of 5.5 percent. Using the same runs but only allowing the cases in which the calculated present-atmosphere values of NO, NO2, and ClO at 25 km altitude fell within the range of measurements yielded a mean ozone depletion of -3 percent, with a 1-sigma deviation of 2.2 percent. The model showed a nonlinear behavior as a function of added fluorocarbons. The mean of the Monte Carlo runs was less nonlinear than the model run using the mean values of the input parameters.
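The constrained Monte Carlo procedure (keep only the runs whose simulated present-day observables fall within the measured range, then recompute the perturbation statistics) can be sketched as follows; the toy model, the number of inputs and the acceptance window are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300  # comparable to the ~300 Monte Carlo runs of the study

# Toy stand-ins for uncertain inputs: randomly perturbed "rate
# coefficients" (log-normal, as kinetic uncertainties often are).
k = rng.lognormal(mean=0.0, sigma=0.3, size=(n, 3))

# Hypothetical diagnostics: a modeled present-day observable (e.g. ClO
# at 25 km) and a modeled ozone perturbation, both toy functions.
clo_25km = 0.4 * k[:, 0] / k[:, 1]
dO3 = -6.2 * k[:, 0] * k[:, 2] / k[:, 1]          # percent change, toy

print(f"all runs:      mean={dO3.mean():+.1f}%  sigma={dO3.std():.1f}%")

# Keep only runs whose simulated observable matches "measurements".
ok = (clo_25km > 0.25) & (clo_25km < 0.55)
print(f"filtered runs: mean={dO3[ok].mean():+.1f}%  sigma={dO3[ok].std():.1f}%")
```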
Kuzmin, Evgeny Anatol'evich
2012-01-01
Uncertainty and certainty are integral properties of organizational-economic systems. The existence and development of any object under stochastic conditions is impossible without the presence of both uncertain conditions and the certain factors that determine the subsequent states of the organizational-economic system. The representation and substantiation of the methodological apparatus for estimating uncertainty and certainty, stated earlier by the author in a publication...
horizontal branch (HB) stars, but not by appealing to inadequacies in either theoretical stellar atmospheres or canonical evolutionary phases (e.g., the main-sequence turnoff). The different model predictions in the near-IR for intermediate age systems are due to different treatments of the thermally pulsating asymptotic giant branch stellar evolutionary phase. We emphasize that due to a lack of calibrating star cluster data in regions of the metallicity-age plane relevant for galaxies, all of these models continue to suffer from serious uncertainties that are difficult to quantify.
Uncertainty in soil-structure interaction analysis arising from differences in analytical techniques
This study addresses uncertainties arising from variations in different modeling approaches to soil-structure interaction of massive structures at a nuclear power plant. To perform a comprehensive systems analysis, it is necessary to quantify, for each phase of the traditional analysis procedure, both the realistic seismic response and the uncertainties associated with them. In this study two linear soil-structure interaction techniques were used to analyze the Zion, Illinois nuclear power plant: a direct method using the FLUSH computer program and a substructure approach using the CLASSI family of computer programs. In-structure response from two earthquakes, one real and one synthetic, was compared. Structure configurations from relatively simple to complicated multi-structure cases were analyzed. The resulting variations help quantify uncertainty in structure response due to analysis procedures
An Analytic Solution to the Propagation of Cylindrical Blast Waves in a Radiative Gas
B.G Verma
1977-01-01
In this paper, we have obtained a set of non-similarity solutions in closed form for the propagation of a cylindrical blast wave in a radiative gas. An explosion in a gas of constant density and pressure has been considered, assuming the existence of an initial uniform magnetic field in the axial direction. The disturbance is supposed to be headed by a shock surface of variable strength, and the total energy of the wave varies with time.
Accounting for the analytical properties of the quark propagator from Dyson-Schwinger equation
Dorkin, S M; Kampfer, B
2014-01-01
An approach based on combined solutions of the Bethe-Salpeter (BS) and Dyson-Schwinger (DS) equations within the ladder-rainbow approximation in the presence of singularities is proposed to describe the meson spectrum as quark-antiquark bound states. We consistently implement into the BS equation the quark propagator functions from the DS equation, with and without pole-like singularities, and show that, by knowing the precise positions of the poles and their residues, one is able to develop reliable methods of obtaining finite interaction BS kernels and to solve the BS equation numerically. We show that for bound states with masses $M \lesssim 1$ GeV the solutions are only weakly affected by such singularities; for heavier states, however, the propagator functions reveal pole-like structures. Consequently, for each type of meson (unflavored, strange and charmed) we analyze the relevant intervals of $M$ where the pole-like singularities of the corresponding quark propagator influence the solution of the BS equation, and develop a framework within which they can be consistently accounted for. The...
Raupach, Rainer; Flohr, Thomas G, E-mail: rainer.raupach@siemens.com [Siemens AG Healthcare Sector, H IM CT R and D PA, Siemensstrasse 1, D-91301 Forchheim (Germany)
2011-04-07
We analyze the signal and noise propagation of differential phase-contrast computed tomography (PCT) compared with conventional attenuation-based computed tomography (CT) from a theoretical point of view. This work focuses on grating-based differential phase-contrast imaging. A mathematical framework is derived that is able to analytically predict the relative performance of both imaging techniques in the sense of the relative contrast-to-noise ratio for the contrast of any two materials. Two fundamentally different properties of PCT compared with CT are identified. First, the noise power spectra show qualitatively different characteristics, implying a resolution-dependent performance ratio. The break-even point is derived analytically as a function of system parameters such as geometry and visibility. A superior performance of PCT compared with CT can only be achieved at a sufficiently high spatial resolution. Second, due to the periodicity of phase information, which is non-ambiguous only in a bounded interval, statistical phase wrapping can occur. This effect causes a collapse of information propagation for low signals, which limits the applicability of phase-contrast imaging at low dose.
Based on the generalized diffraction integral formula and the idea that the angle misalignment of the cat-eye optical lens can be transformed into a displacement misalignment, an approximate analytical propagation formula for Gaussian beams through a cat-eye optical lens under large incidence angle conditions is derived. Numerical results show that the diffraction effect of the apertures of the cat-eye optical lens becomes stronger as the incidence angle increases. The results are also compared with those from an angular spectrum diffraction integral and from experiment to illustrate the applicability and validity of our theoretical formula. It is shown that the approximation is good enough for the application of a cat-eye optical lens with a radius of 20 mm and a propagation distance of 100 m, and that the approximation becomes better as the radius of the cat-eye optical lens and the propagation distance increase.
A characteristic that sets radioactivity measurements apart from most spectrometries is that the precision of a single determination can be estimated from Poisson statistics. This easily calculated counting uncertainty permits the detection of other sources of uncertainty by comparing observed with a priori precision. A good way to test the many underlying assumptions in radiochemical measurements is to strive for high accuracy. For example, a measurement by instrumental neutron activation analysis (INAA) of gold film thickness in our laboratory revealed the need for pulse pileup correction even at modest dead times. Recently, the International Organization for Standardization (ISO) and other international bodies have formalized the quantitative determination and statement of uncertainty so that the weaknesses of each measurement are exposed for improvement. In the INAA certification measurement of ion-implanted arsenic in silicon (Standard Reference Material 2134), we recently achieved an expanded (95 % confidence) relative uncertainty of 0.38 % for 90 ng of arsenic per sample. A complete quantitative error analysis was performed. This measurement meets the CCQM definition of a primary ratio method. (author)
The relativistic dispersion relation of a nearly perpendicularly injected electron cyclotron wave is solved in different regions. The coupling of the O-mode and the X-mode is described by a correct expression, qualitatively different from that obtained from the non-relativistic approximation. The damping factor shows that wave absorption is due to two mechanisms: relativistic O-mode damping and coupled X-mode damping. Analytic expressions for these dampings are obtained
Addressing analytical uncertainties in the determination of trichloroacetic acid in soil
Dickey, Catherine A; Heal, Kate V.; Cape, Neil; Stidson, Ruth; Reeves, Nicholas; Heal, Mathew R.
2005-01-01
Soil is an important compartment in the environmental cycling of trichloroacetic acid (TCA), but soil TCA concentration is a methodologically defined quantity; analytical methods either quantify TCA in an aqueous extract of the soil, or thermally decarboxylate TCA to chloroform in the whole soil sample. The former may underestimate the total soil TCA, whereas the latter may overestimate TCA if other soil components (e.g. humic material) liberate chloroform under the decarboxylation conditions...
M. W. Rotach
2012-08-01
D-PHASE was a Forecast Demonstration Project of the World Weather Research Programme (WWRP) related to the Mesoscale Alpine Programme (MAP). Its goal was to demonstrate the reliability and quality of operational forecasting of orographically influenced (determined) precipitation in the Alps and its consequences on the distribution of run-off characteristics. A special focus was, of course, on heavy-precipitation events.
The D-PHASE Operations Period (DOP) ran from June to November 2007, during which an end-to-end forecasting system was operated covering many individual catchments in the Alps, with their water authorities, civil protection organizations or other end users. The forecasting system's core piece was a Visualization Platform where precipitation and flood warnings from some 30 atmospheric and 7 hydrological models (both deterministic and probabilistic) and corresponding model fields were displayed in uniform and comparable formats. Also, meteograms, nowcasting information and end-user communication were made available to all the forecasters, users and end users. D-PHASE information was assessed and used by some 50 different groups, ranging from atmospheric forecasters to civil protection authorities or water management bodies.
In the present contribution, D-PHASE is briefly presented along with its outstanding scientific results and, in particular, the lessons learnt with respect to uncertainty propagation. A particular focus is on the transfer of ensemble prediction information into the hydrological community and its use with respect to other aspects of societal impact. Objective verification of forecast quality is contrasted with subjective quality assessments made during the project (end-user workshops, questionnaires), and some general conclusions concerning forecast demonstration projects are drawn.
Starting from the path-integral representation for the electron propagator without fermion loops in QED, we analytically investigate the strong-coupling behavior in an arbitrary background electromagnetic field through a series expansion in powers of 1/e. Contrary to the perturbation-theory expansion in e, the new series only contains positive powers of the derivative operator p. Due to infrared singularities in the path integral the series does not exist beyond the lowest orders, although one can build a systematic expansion in powers of p (not 1/e) which can be calculated up to any order. To handle infinities we regularize using a Pauli-Villars approach. The introduction of fermion loops would not correspond to higher orders in 1/e, so a priori our results are only pertinent to the sector of QED we have chosen. 17 refs., 1 fig
Carlotti, A.; Pueyo, L.
2011-10-01
Since the radius of curvature of a mirror cannot be zero, the apodization that is created by a phase-induced amplitude apodizer (PIAA) formed by a pair of mirrors cannot be zero at the edge of the pupil. If contrasts lower than $10^{-10}$ must be obtained, then an additional apodizer must be used with the PIAA mirrors. This has a consequence on the throughput of the system, as well as on its inner working angle (IWA). The intensity distribution in the final pupil plane computed in the ray-optics approximation is misleading, and diffraction must be taken into account to evaluate the true performance of the system. We compute the propagated electric field using two different tools: the semi-analytical model developed by Pueyo and a purely numerical model based on the Huygens integral. It is shown that for higher Fresnel numbers, the agreement between the beams computed using both propagators is stronger, and that for too low Fresnel numbers, the contrast computed using the semi-analytical model can be 2 orders of magnitude higher than the one computed by a numerical evaluation of the Huygens integral. We then study the impact of surface aberrations introduced on the mirrors of the PIAA. The surface quality of the mirrors limits the performance of the system, and the IWA increases linearly with the root-mean-square (RMS) of the aberrations. For a typical set of mirrors, errors of 10 nm RMS can increase the IWA by 0.5 to 1 λ/D for a contrast of $10^{-10}$, and, in the case of a contrast of $10^{-8}$, the IWA is maintained at 2 λ/D as long as the errors are smaller than 20 nm RMS.
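A minimal numerical illustration of the kind of propagator comparison described above, reduced to one dimension: a transfer-function Fresnel propagator versus a direct (brute-force) Huygens-Fresnel quadrature. The wavelength, aperture and distance are arbitrary choices; with adequate sampling and a moderate Fresnel number the two should agree closely, up to FFT wrap-around artifacts at the window edges.

```python
import numpy as np

wl = 633e-9                  # wavelength [m], illustrative
N, L = 1024, 5e-3            # samples, window width [m]
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]
u0 = (np.abs(x) < 1e-3).astype(complex)   # slit aperture
z = 0.2                                    # propagation distance [m]
k = 2 * np.pi / wl

# Fresnel propagation via FFT (transfer-function method).
fx = np.fft.fftfreq(N, dx)
H = np.exp(1j * k * z) * np.exp(-1j * np.pi * wl * z * fx**2)
u_fresnel = np.fft.ifft(np.fft.fft(u0) * H)

# Direct Huygens-Fresnel quadrature with the paraxial kernel, O(N^2).
xs, xo = np.meshgrid(x, x)                 # source and observation points
kernel = np.exp(1j * k * z + 1j * k * (xo - xs) ** 2 / (2 * z))
u_huygens = kernel @ u0 * dx * np.sqrt(1 / (1j * wl * z))

print("max |difference|:", np.abs(u_fresnel - u_huygens).max())
```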
Highlights: ► Response of RC structures to macrocell corrosion of a rebar is studied analytically. ► The problem is solved prior to the onset of microcrack propagation. ► Suitable Love's potential functions are used to study the steel-rust-concrete media. ► The role of crucial factors in the time of onset of concrete cracking is examined. ► The effect of vital factors on the maximum radial stress of concrete is explored. - Abstract: Assessment of the macrocell corrosion which deteriorates reinforced concrete (RC) structures has attracted the attention of many researchers during recent years. In this type of rebar corrosion, the reduction in cross-section of the rebar is significantly accelerated due to the large ratio of the cathode's area to the anode's area. In order to examine the problem, an analytical solution is proposed for predicting the response of the RC structure from the time of steel depassivation to the stage just prior to the onset of microcrack propagation. To this end, a circular cylindrical RC member under axisymmetric macrocell corrosion of the reinforcement is considered. Both cases of symmetric and asymmetric rebar corrosion along the length of the anode zone are studied. According to the experimentally observed data, corrosion products are modeled as a thin layer with a nonlinear stress–strain relation. The exact expressions of the elastic fields associated with the steel and concrete media are obtained using Love's potential function. By imposing the boundary conditions, the resulting set of nonlinear equations is solved in each time step by Newton's method. The effects of the key parameters which have a dominating role in the time of the onset of concrete cracking and in the maximum radial stress field of the concrete are examined.
Valier-Brasier, Tony; Conoir, Jean-Marc; Coulouvrat, François; Thomas, Jean-Louis
2015-10-01
Sound propagation in dilute suspensions of small spheres is studied using two models: a hydrodynamic model based on the coupled phase equations and an acoustic model based on the ECAH (ECAH: Epstein-Carhart-Allegra-Hawley) multiple scattering theory. The aim is to compare both models through the study of three fundamental kinds of particles: rigid particles, elastic spheres, and viscous droplets. The hydrodynamic model is based on a Rayleigh-Plesset-like equation generalized to elastic spheres and viscous droplets. The hydrodynamic forces for elastic spheres are introduced by analogy with those of droplets. The ECAH theory is also modified in order to take into account the velocity of rigid particles. Analytical calculations performed for long wavelength, low dilution, and weak absorption in the ambient fluid show that both models are strictly equivalent for the three kinds of particles studied. The analytical calculations show that dilatational and translational mechanisms are modeled in the same way by both models. The effective parameters of dilute suspensions are also calculated. PMID:26520342
Sergey F Pravdin
We develop a numerical approach based on our recent analytical model of fiber structure in the left ventricle of the human heart. A special curvilinear coordinate system is proposed to analytically include realistic ventricular shape and myofiber directions. With this anatomical model, electrophysiological simulations can be performed on a rectangular coordinate grid. We apply our method to study the effect of fiber rotation and electrical anisotropy of cardiac tissue (i.e., the ratio of the conductivity coefficients along and across the myocardial fibers) on wave propagation, using the ten Tusscher-Panfilov (2006) ionic model for human ventricular cells. We show that fiber rotation increases the speed of cardiac activation and attenuates the effects of anisotropy. Our results show that fiber rotation in the heart is an important factor underlying cardiac excitation. We also study scroll wave dynamics in our model and show the drift of a scroll wave filament whose velocity depends non-monotonically on the fiber rotation angle; the period of scroll wave rotation decreases with an increase of the fiber rotation angle; and an increase in anisotropy may cause the breakup of a scroll wave, similar to the mother rotor mechanism of ventricular fibrillation.
Analytic result for the one-loop scalar pentagon integral with massless propagators
The method of dimensional recurrences proposed by one of the authors (O. V. Tarasov, 1996) is applied to the evaluation of the pentagon-type scalar integral with on-shell external legs and massless internal lines. For the first time, an analytic result valid for arbitrary space-time dimension $d$ and five arbitrary kinematic variables is presented. An explicit expression in terms of the Appell hypergeometric function $F_3$ and the Gauss hypergeometric function $_2F_1$, both admitting one-fold integral representations, is given. In the case when one kinematic variable vanishes, the integral reduces to a combination of Gauss hypergeometric functions $_2F_1$. For the case when one scalar invariant is large compared to the others, the asymptotic values of the integral in terms of Gauss hypergeometric functions $_2F_1$ are presented in $d=2-2\varepsilon$, $4-2\varepsilon$, and $6-2\varepsilon$ dimensions. For multi-Regge kinematics, the asymptotic value of the integral in $d=4-2\varepsilon$ dimensions is given in terms of the Appell function $F_3$ and the Gauss hypergeometric function $_2F_1$. (orig.)
Post van der Burg, Max; Cullinane Thomas, Catherine; Holcombe, Tracy R.; Nelson, Richard D.
2016-01-01
The Landscape Conservation Cooperatives (LCCs) are a network of partnerships throughout North America that are tasked with integrating science and management to support more effective delivery of conservation at a landscape scale. In order to achieve this integration, some LCCs have adopted the approach of providing their partners with better scientific information in an effort to facilitate more effective and coordinated conservation decisions. Taking this approach has led many LCCs to begin funding research to provide the information for improved decision making. To ensure that funding goes to research projects with the highest likelihood of leading to more integrated broad scale conservation, some LCCs have also developed approaches for prioritizing which information needs will be of most benefit to their partnerships. We describe two case studies in which decision analytic tools were used to quantitatively assess the relative importance of information for decisions made by partners in the Plains and Prairie Potholes LCC. The results of the case studies point toward a few valuable lessons in terms of using these tools with LCCs. Decision analytic tools tend to help shift focus away from research oriented discussions and toward discussions about how information is used in making better decisions. However, many technical experts do not have enough knowledge about decision making contexts to fully inform the latter type of discussion. When assessed in the right decision context, however, decision analyses can point out where uncertainties actually affect optimal decisions and where they do not. This helps technical experts understand that not all research is valuable in improving decision making. But perhaps most importantly, our results suggest that decision analytic tools may be more useful for LCCs as way of developing integrated objectives for coordinating partner decisions across the landscape, rather than simply ranking research priorities.
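The core decision-analytic computation behind such information-need prioritization is the expected value of perfect information (EVPI): the gap between the expected payoff achievable with and without resolving uncertainty, which bounds what any research project can be worth to the decision. A minimal sketch with a hypothetical payoff matrix:

```python
import numpy as np

# Rows = candidate conservation actions, columns = states of the system
# (all numbers hypothetical).
payoff = np.array([[10.0, 2.0],    # action A
                   [ 6.0, 6.0],    # action B
                   [ 1.0, 9.0]])   # action C
p = np.array([0.6, 0.4])           # current belief about the states

ev_no_info = (payoff @ p).max()    # act on current information

# With perfect information: pick the best action in each state,
# then average over states.
ev_perfect = (payoff.max(axis=0) * p).sum()

print(f"EV without new information: {ev_no_info:.2f}")
print(f"EV with perfect information: {ev_perfect:.2f}")
print(f"EVPI (upper bound on research value): {ev_perfect - ev_no_info:.2f}")
```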
Karakoylu, E.; Franz, B.
2016-01-01
A first attempt at quantifying uncertainties in ocean remote-sensing reflectance satellite measurements, based on 1000 Monte Carlo iterations. The data source is a SeaWiFS 4-day composite from 2003. The uncertainty is given for the remote-sensing reflectance (Rrs) at 443 nm.
B. Scherllin-Pirscher
2011-05-01
Due to the measurement principle of the radio occultation (RO) technique, RO data are highly suitable for climate studies. Single RO profiles can be used to build climatological fields of different atmospheric parameters like bending angle, refractivity, density, pressure, geopotential height, and temperature. RO climatologies are affected by random (statistical) errors, sampling errors, and systematic errors, which together yield the total climatological error. Based on empirical error estimates, we provide a simple analytical error model for these error components, which accounts for vertical, latitudinal, and seasonal variations. The vertical structure of each error component is modeled as constant around the tropopause region. Above this region the error increases exponentially; below it, the increase follows an inverse height power-law. The statistical error strongly depends on the number of measurements. It is found to be the smallest error component for monthly mean 10° zonal mean climatologies with more than 600 measurements per bin. Owing to the small atmospheric variability there, the sampling error is found to be smallest at low latitudes, equatorwards of 40°. Beyond 40°, this error increases roughly linearly, with a stronger increase in hemispheric winter than in hemispheric summer. The sampling error model accounts for this hemispheric asymmetry. However, we recommend subtracting the sampling error when using RO climatologies for climate research, since the residual sampling error remaining after such subtraction is estimated to be 50 % of the sampling error for bending angle and 30 % or less for the other atmospheric parameters. The systematic error accounts for potential residual biases in the measurements as well as in the retrieval process and generally dominates the total climatological error. Overall the total error in monthly means is estimated to be smaller than 0.07 % in refractivity and 0.15 K in temperature at low to mid latitudes, increasing towards
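The vertical structure of the error model (constant around the tropopause, exponential growth above, inverse height power-law below, with the components combined in quadrature into a total error) can be sketched as follows; all parameter values are illustrative placeholders, not the fitted ones from the paper.

```python
import numpy as np

def ro_error(z, e_tp=0.05, z_lo=10.0, z_hi=20.0, H=15.0, p=1.5):
    """Vertical error profile: constant e_tp in the tropopause region
    [z_lo, z_hi] km, exponential increase (scale height H) above it,
    inverse height power-law increase below it. Values illustrative."""
    z = np.asarray(z, dtype=float)
    err = np.full_like(z, e_tp)
    err[z > z_hi] = e_tp * np.exp((z[z > z_hi] - z_hi) / H)
    err[z < z_lo] = e_tp * (z_lo / z[z < z_lo]) ** p
    return err

z = np.array([2.0, 5.0, 15.0, 30.0, 40.0])   # altitude [km]
statistical = ro_error(z)                    # e.g. refractivity error [%]
sampling = 0.5 * ro_error(z)                 # toy component ratios
systematic = 0.3 * ro_error(z)

# The error components are assumed independent and add in quadrature.
total = np.sqrt(statistical**2 + sampling**2 + systematic**2)
print(np.round(total, 3))
```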
The uncertainty of the half-life
Pommé, S.
2015-06-01
Half-life measurements of radionuclides are undeservedly perceived as ‘easy’ and the experimental uncertainties are commonly underestimated. Data evaluators, scanning the literature, are faced with bad documentation, lack of traceability, incomplete uncertainty budgets and discrepant results. Poor control of uncertainties has implications for the end-user community, ranging from limitations to the accuracy and reliability of nuclear-based analytical techniques to the fundamental question of whether half-lives are invariable or not. This paper addresses some issues from the viewpoints of the user community and of the decay data provider. It addresses the propagation of the uncertainty of the half-life in activity measurements and discusses different types of half-life measurements, typical parameters influencing their uncertainty, a tool to propagate the uncertainties, and suggestions for a more complete reporting style. Problems and solutions are illustrated with striking examples from the literature.
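The propagation of the half-life uncertainty into a decay-corrected activity follows from standard first-order (GUM-style) propagation: with $f = \exp(-\ln 2\, t/T_{1/2})$, the relative contribution is $u(f)/f = \ln 2 \cdot t\, u(T)/T^2 = \lambda t \cdot u(T)/T$, so it grows linearly with the elapsed decay time. A minimal sketch with arbitrary numbers, cross-checked by Monte Carlo:

```python
import numpy as np

LN2 = np.log(2.0)

def decay_factor(t, half_life):
    return np.exp(-LN2 * t / half_life)

# Illustrative numbers: decay correction over time t with a half-life
# known to 0.1 % (not data for any specific nuclide).
t, T, u_T = 50.0, 100.0, 0.1          # same (arbitrary) time units

# First-order (GUM) propagation: |df/dT| * u_T, with
# d/dT exp(-ln2 t/T) = exp(-ln2 t/T) * ln2 * t / T^2.
f = decay_factor(t, T)
u_f_gum = f * LN2 * t / T**2 * u_T
print(f"relative uncertainty, GUM: {u_f_gum / f:.3%}")

# Monte Carlo check of the linearization.
rng = np.random.default_rng(7)
f_mc = decay_factor(t, rng.normal(T, u_T, 200000))
print(f"relative uncertainty, MC : {f_mc.std() / f_mc.mean():.3%}")
```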
Pedroni, Nicola; Zio, Enrico
2012-01-01
Risk analysis models describing aleatory (i.e., random) events contain parameters (e.g., probabilities, failure rates, ...) that are epistemically-uncertain, i.e., known with poor precision. Whereas aleatory uncertainty is always described by probability distributions, epistemic uncertainty may be represented in different ways (e.g., probabilistic or possibilistic), depending on the information and data available. The work presented in this paper addresses the issue of accounting for (in)depe...
Jumper, Kevin; Fisher, Robert
2012-03-01
Type Ia supernovae are astronomical events in which a white dwarf, the cold remnant of a star that has exhausted its hydrogen fuel, detonates and briefly produces an explosion brighter than most galaxies. Many researchers think that they could occur as the white dwarf approaches a critical mass of 1.4 solar masses by accreting matter from a companion main sequence star, a scenario that is referred to as the single-degenerate channel. Assuming such a progenitor, we construct a semi-analytic model of the propagation of a flame bubble ignited at a single off-center point within the white dwarf. The bubble then rises under the influences of buoyancy and drag, burning the surrounding fuel material in a process called deflagration. We contrast the behavior of the deflagration phase in the presence of a physically high Reynolds number regime with the low Reynolds number regimes inherent to three-dimensional simulations, which are a consequence of numerical viscosity. Our work may help validate three-dimensional deflagration results over a range of initial conditions.
Goulden, T.; Hopkinson, C.
2013-12-01
The quantification of LiDAR sensor measurement uncertainty is important for evaluating the quality of derived DEM products, compiling risk assessment of management decisions based from LiDAR information, and enhancing LiDAR mission planning capabilities. Current quality assurance estimates of LiDAR measurement uncertainty are limited to post-survey empirical assessments or vendor estimates from commercial literature. Empirical evidence can provide valuable information for the performance of the sensor in validated areas; however, it cannot characterize the spatial distribution of measurement uncertainty throughout the extensive coverage of typical LiDAR surveys. Vendor advertised error estimates are often restricted to strict and optimal survey conditions, resulting in idealized values. Numerical modeling of individual pulse uncertainty provides an alternative method for estimating LiDAR measurement uncertainty. LiDAR measurement uncertainty is theoretically assumed to fall into three distinct categories, 1) sensor sub-system errors, 2) terrain influences, and 3) vegetative influences. This research details the procedures for numerical modeling of measurement uncertainty from the sensor sub-system (GPS, IMU, laser scanner, laser ranger) and terrain influences. Results show that errors tend to increase as the laser scan angle, altitude or laser beam incidence angle increase. An experimental survey over a flat and paved runway site, performed with an Optech ALTM 3100 sensor, showed an increase in modeled vertical errors of 5 cm, at a nadir scan orientation, to 8 cm at scan edges; for an aircraft altitude of 1200 m and half scan angle of 15°. In a survey with the same sensor, at a highly sloped glacial basin site absent of vegetation, modeled vertical errors reached over 2 m. Validation of error models within the glacial environment, over three separate flight lines, respectively showed 100%, 85%, and 75% of elevation residuals fell below error predictions. Future
A new methodology, referred to as manufacturing and technological parameters uncertainty quantification (MTUQ), is under development at the Paul Scherrer Institut (PSI). Based on uncertainty and global sensitivity analysis methods, MTUQ aims at advancing the state of the art in the treatment of geometrical/material uncertainties in light water reactor computations using the MCNPX Monte Carlo neutron transport code. The development is currently focused primarily on criticality safety evaluations (CSE). In that context, the key components are a dedicated modular interface with the MCNPX code and a user-friendly interface to model functional relationships between system variables. A unique feature is an automatic capability to parameterize variables belonging to so-called “repeated structures”, so as to allow for perturbations of each individual element of a given system modelled with MCNPX. The statistical analysis capabilities are currently implemented through an interface with the ROOT platform to handle the random sampling design. This paper presents the current status of the MTUQ methodology development and a first assessment of an ongoing Organisation for Economic Co-operation and Development/Nuclear Energy Agency benchmark dedicated to uncertainty analyses for CSE. The presented results illustrate the overall capabilities of MTUQ and underline its relevance in predicting more realistic results compared to a methodology previously applied at PSI for this particular benchmark. (author)
Thelen, Brian J.; Rickerd, Chris J.; Burns, Joseph W.
2014-06-01
With all of the new remote sensing modalities available, with ever increasing capabilities, there is a constant desire to extend the current state of the art in physics-based feature extraction and to introduce new and innovative techniques that enable the exploitation within and across modalities, i.e., fusion. A key component of this process is finding the associated features from the various imaging modalities that provide key information in terms of exploitative fusion. Further, it is desired to have an automatic methodology for assessing the information in the features from the various imaging modalities, in the presence of uncertainty. In this paper we propose a novel approach for assessing, quantifying, and isolating the information in the features via a joint statistical modeling of the features with the Gaussian Copula framework. This framework allows for a very general modeling of distributions on each of the features while still modeling the conditional dependence between the features, and the final output is a relatively accurate estimate of the information-theoretic J-divergence metric, which is directly related to discriminability. A very useful aspect of this approach is that it can be used to assess which features are most informative, and what is the information content as a function of key uncertainties (e.g., geometry) and collection parameters (e.g., SNR and resolution). We show some results of applying the Gaussian Copula framework and estimating the J-Divergence on HRR data as generated from the AFRL public release data set known as the Backhoe Data Dome.
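One simple variant of the Gaussian copula construction can be sketched as follows: map each feature to normal scores through the empirical CDF of the pooled sample, fit a Gaussian (mean and covariance) per class in the transformed space, and evaluate the closed-form J-divergence between the two Gaussians. The two-class mock data are placeholders for real features (e.g., HRR); this is a sketch of the general framework, not the authors' exact estimator.

```python
import numpy as np
from scipy.stats import norm

def pooled_normal_scores(X, pool):
    """Copula transform: empirical CDF of the pooled sample mapped to
    Gaussian quantiles, so class-location differences survive."""
    n = pool.shape[0]
    Z = np.empty_like(X, dtype=float)
    for j in range(X.shape[1]):
        ref = np.sort(pool[:, j])
        Z[:, j] = norm.ppf(np.searchsorted(ref, X[:, j], side="right") / (n + 1))
    return Z

def j_divergence_gaussian(m1, S1, m2, S2):
    """Symmetrized Kullback-Leibler (J-) divergence of two Gaussians."""
    d, P1, P2 = m1 - m2, np.linalg.inv(S1), np.linalg.inv(S2)
    return 0.5 * (np.trace(P1 @ S2 + P2 @ S1) - 2 * len(m1) + d @ (P1 + P2) @ d)

rng = np.random.default_rng(3)
# Mock two-class feature sets (hypothetical stand-ins).
A = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.6], [0.6, 1.0]], 400)
B = rng.multivariate_normal([1.0, 0.0], [[1.0, -0.2], [-0.2, 1.0]], 400)

pool = np.vstack([A, B])
Za, Zb = pooled_normal_scores(A, pool), pooled_normal_scores(B, pool)
print("estimated J-divergence:",
      j_divergence_gaussian(Za.mean(0), np.cov(Za.T), Zb.mean(0), np.cov(Zb.T)))
```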
Leaks in pressurized tubes generate acoustic waves that propagate through the walls of these tubes and can be captured by accelerometers or by acoustic emission sensors. Knowledge of how these walls vibrate, or, put another way, of how these acoustic waves propagate in this material, is fundamental to the process of detecting and localizing the leak source. In this work an analytic model was implemented, through the equations of motion of a cylindrical shell, with the objective of understanding the behavior of the tube surface excited by a point source. Since the cylindrical surface is closed in the circumferential direction, waves that are beginning their trajectory meet others that have already completed a turn around the cylindrical shell, in both the clockwise and counter-clockwise directions, generating constructive and destructive interferences. After enough propagation time, peaks and valleys form on the shell surface, which can be visualized through a graphic representation of the analytic solution created. The theoretical results were verified through measurements accomplished in an experimental setup composed of a steel tube terminating in a sand box, simulating the condition of an infinite tube. To determine the location of the point source on the surface, an inverse-solution process was adopted; that is, given the signals of the sensors placed on the tube surface, the theoretical model is used to determine where the source that generated these signals is located. (author)
Holland, Michael K. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); O' Rourke, Patrick E. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)
2016-05-04
An SRNL H-Canyon Test Bed performance evaluation project was completed jointly by SRNL and LANL on a prototype monochromatic energy dispersive x-ray fluorescence instrument, the hiRX. A series of uncertainty propagations were generated based upon plutonium and uranium measurements performed using the alpha-prototype hiRX instrument. Data reduction and uncertainty modeling provided in this report were performed by the SRNL authors. Observations and lessons learned from this evaluation were also used to predict the expected uncertainties that should be achievable at multiple plutonium and uranium concentration levels provided instrument hardware and software upgrades being recommended by LANL and SRNL are performed.
Based on the Collins formula in a cylindrical coordinate system and the method of expanding a hard aperture function into a finite sum of complex Gaussian functions, an approximate three-dimensional analytical formula for oblique and off-axis Gaussian beams propagating through a cat-eye optical lens is derived. Numerical results show that a reasonable choice of the obliquity factor results in a better focused beam with a higher central intensity at the return place than that without obliquity, whereas the previous conclusion based on geometrical optics was that the highest central intensity is obtained when there is no obliquity.
Morales Prieto, M.; Ortega Saiz, P.
2011-07-01
An analysis of the analytical uncertainties of the methodology for simulating the processes that yield the final isotopic inventory of spent fuel; the ARIANE experiment explores the burnup-simulation part.
Shakas, Alexis; Linde, Niklas
2015-05-01
We propose a new approach to model ground penetrating radar signals that propagate through a homogeneous and isotropic medium, and are scattered at thin planar fractures of arbitrary dip, azimuth, thickness and material filling. We use analytical expressions for the Maxwell equations in a homogeneous space to describe the propagation of the signal in the rock matrix, and account for frequency-dependent dispersion and attenuation through the empirical Jonscher formulation. We discretize fractures into elements that are linearly polarized by the incoming electric field that arrives from the source to each element, locally, as a plane wave. To model the effective source wavelet we use a generalized Gamma distribution to define the antenna dipole moment. We combine microscopic and macroscopic Maxwell's equations to derive an analytic expression for the response of each element, which describes the full electric dipole radiation patterns along with effective reflection coefficients of thin layers. Our results compare favorably with finite-difference time-domain modeling in the case of constant electrical parameters of the rock-matrix and fracture filling. Compared with traditional finite-difference time-domain modeling, the proposed approach is faster and more flexible in terms of fracture orientations. A comparison with published laboratory results suggests that the modeling approach can reproduce the main characteristics of the reflected wavelet.
Mazoyer, Johan; Norman, Colin; N'Diaye, Mamadou; van der Marel, Roeland P; Soummer, Rémi
2015-01-01
The new frontier in the quest for the highest contrast levels in the focal plane of a coronagraph is now the correction of the large diffractive artifacts introduced at the science camera by apertures of increasing complexity. The coronagraph for the WFIRST/AFTA mission will be the first such instrument in space with a two-deformable-mirror wavefront control system. Regardless of the control algorithm used with these multiple deformable mirrors, it will have to rely on quick and accurate simulation of the propagation effects introduced by the out-of-pupil surface. In the first part of this paper, we present the analytical description of the different approximations used to simulate these propagation effects. In Annex A, we prove analytically that, in the special case of surfaces inducing a converging beam, the Fresnel method yields high fidelity for simulations of these effects. We provide numerical simulations showing this effect. In the second part, we use these tools in the framework of the Active Compens...
Cinzia Caliendo
2015-01-01
The propagation of the fundamental symmetric Lamb mode S0 along wz-BN/AlN thin composite plates suitable for telecommunication and sensing applications is studied. The investigation of the acoustic field profile across the plate thickness revealed the presence of modes having longitudinal polarization, the Anisimkin Jr. plate modes (AMs), travelling at a phase velocity close to that of the wz-BN longitudinal bulk acoustic wave propagating in the same direction. The study of the S0 mode phase velocity and coupling coefficient (K2) dispersion curves, for different electrical boundary conditions, has shown that eight different coupling configurations are allowable that exhibit a K2 as high as about 4% and a very high phase velocity (up to about 16,700 m/s). The effect of the thickness and material type of the metal floating electrode on the K2 dispersion curves has also been investigated, specifically addressing the design of an enhanced-coupling device. The gravimetric sensitivity of the BN/AlN-based acoustic waveguides was then calculated for both the AMs and the elliptically polarized S0 modes; the AM-based sensor velocity and attenuation shifts due to the viscosity of a surrounding liquid were theoretically predicted. The performed investigation suggests that wz-BN/AlN is a very promising substrate material for developing GHz-band devices with enhanced electroacoustic coupling efficiency, suitable for application in the telecommunications and sensing fields.
Arika Ligmann-Zielinska
Agent-based models (ABMs) have been widely used to study socioecological systems. They are useful for studying such systems because of their ability to incorporate micro-level behaviors among interacting agents, and to understand emergent phenomena due to these interactions. However, ABMs are inherently stochastic and require proper handling of uncertainty. We propose a simulation framework based on quantitative uncertainty and sensitivity analyses to build parsimonious ABMs that serve two purposes: exploration of the outcome space to simulate low-probability but high-consequence events that may have significant policy implications, and explanation of model behavior to describe the system with higher accuracy. The proposed framework is applied to the problem of modeling farmland conservation resulting in land use change. We employ output variance decomposition based on quasi-random sampling of the input space and perform three computational experiments. First, we perform uncertainty analysis to improve model legitimacy, where the distribution of results informs us about the expected value that can be validated against independent data, and provides information on the variance around this mean as well as on the extreme results. In our last two computational experiments, we employ sensitivity analysis to produce two simpler versions of the ABM. First, the input space is reduced to only those inputs that produced the variance of the initial ABM, resulting in a model with an output distribution similar to the initial model. Second, we refine the value of the most influential input, producing a model that maintains the mean of the output of the initial ABM but with less spread. These simplifications can be used to (1) efficiently explore model outcomes, including outliers that may be important considerations in the design of robust policies, and (2) conduct explanatory analysis that exposes the smallest number of inputs influencing the steady state of the modeled system.
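The output variance decomposition used here is, in essence, a Sobol' sensitivity analysis. A minimal sketch using the standard Saltelli (first-order) and Jansen (total-order) estimators on a toy stand-in for the ABM response follows; plain random sampling keeps the sketch short where quasi-random sequences would normally be preferred.

```python
import numpy as np

rng = np.random.default_rng(11)

def model(X):
    """Toy stand-in for an ABM response (e.g., acres of farmland conserved)."""
    return X[:, 0] + 0.5 * X[:, 1] ** 2 + 0.1 * X[:, 0] * X[:, 2]

k, N = 3, 2 ** 14
A = rng.random((N, k))          # two independent sample matrices
B = rng.random((N, k))

fA, fB = model(A), model(B)
V = np.var(np.concatenate([fA, fB]))

for i in range(k):
    ABi = A.copy()
    ABi[:, i] = B[:, i]          # A with column i taken from B
    fABi = model(ABi)
    Si = np.mean(fB * (fABi - fA)) / V          # first-order (Saltelli 2010)
    STi = 0.5 * np.mean((fA - fABi) ** 2) / V   # total-order (Jansen 1999)
    print(f"x{i}: S={Si:.2f}  ST={STi:.2f}")
```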
Heydorn, Kaj; Anglov, Thomas
2002-01-01
Methods recommended by the International Standardization Organisation and Eurachem are not satisfactory for the correct estimation of calibration uncertainty. A novel approach is introduced and tested on actual calibration data for the determination of Pb by ICP-AES. The improved calibration...... uncertainty was verified from independent measurements of the same sample by demonstrating statistical control of analytical results and the absence of bias. The proposed method takes into account uncertainties of the measurement, as well as of the amount of calibrant. It is applicable to all types of...
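The key point, that the calibration uncertainty must include the uncertainty of the calibrant amounts as well as that of the measured signals, can be illustrated with a small Monte Carlo over a linear calibration; all numbers below are hypothetical, and the approach is a generic sketch rather than the authors' procedure.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical Pb calibration standards: nominal amounts and mean signals.
x_nom = np.array([0.0, 2.0, 5.0, 10.0, 20.0])     # e.g. mg/L
y_obs = np.array([0.02, 2.05, 4.90, 10.3, 19.8])  # instrument response
u_y = 0.08                                        # signal repeatability
u_x_rel = 0.01                                    # 1 % calibrant uncertainty
y_sample = 7.4                                    # unknown sample signal

def fit_and_predict(x, y, ys):
    b, a = np.polyfit(x, y, 1)       # slope, intercept
    return (ys - a) / b              # predicted concentration

# Monte Carlo over BOTH the measured signals and the calibrant amounts.
c = np.array([fit_and_predict(x_nom * rng.normal(1, u_x_rel, x_nom.size),
                              y_obs + rng.normal(0, u_y, y_obs.size),
                              y_sample + rng.normal(0, u_y))
              for _ in range(20000)])
print(f"c = {c.mean():.3f} +/- {c.std():.3f}")
```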
Hu, Huayu
2015-01-01
Nonperturbative calculation of QED processes involving a strong electromagnetic field, especially as provided by strong laser facilities at present and in the near future, generally resorts to the Furry picture with the use of analytical solutions of the particle dynamical equation, such as the Klein-Gordon equation and the Dirac equation. However, only for limited field configurations, such as a plane-wave field, can the equations be solved analytically. Studies have shown significant interest in QED processes in a strong field composed of two counter-propagating laser waves, but exact solutions in such a field are out of reach. In this paper, inspired by the observation of the structure of the solutions in a plane-wave field, we develop a new method and obtain the analytical solution of the Klein-Gordon equation, and equivalently the action function of the solution of the Dirac equation, in this field, under a largest-dynamical-parameter condition that there exists an inertial frame in which the particl...
Plósz, Benedek Gy; De Clercq, Jeriffa; Nopens, Ingmar; Benedetti, Lorenzo; Vanrolleghem, Peter A
2011-01-01
In WWTP models, the accurate assessment of solids inventory in bioreactors equipped with solid-liquid separators, mostly described using one-dimensional (1-D) secondary settling tank (SST) models, is the most fundamental requirement of any calibration procedure. Scientific knowledge on characterising particulate organics in wastewater and on bacteria growth is well-established, whereas 1-D SST models and their impact on biomass concentration predictions are still poorly understood. A rigorous assessment of two 1-D SST models is thus presented: one based on hyperbolic (the widely used Takács-model) and one based on parabolic (the more recently presented Plósz-model) partial differential equations. The former model, using numerical approximation to yield realistic behaviour, is currently the most widely used by wastewater treatment process modellers. The latter is a convection-dispersion model that is solved in a numerically sound way. First, the explicit dispersion in the convection-dispersion model and the numerical dispersion of both SST models are calculated. Second, simulation results for the effluent suspended solids concentration (XTSS,Eff), the sludge recirculation stream (XTSS,RAS) and the sludge blanket height (SBH) are used to demonstrate the distinct behaviour of the models. A thorough scenario analysis is carried out using the SST feed flow rate, solids concentration, and overflow rate as degrees of freedom, spanning a broad loading spectrum. A comparison between the measurements and the simulation results demonstrates a considerably improved 1-D model realism using the convection-dispersion model in terms of SBH, XTSS,RAS and XTSS,Eff. Third, to assess the propagation of uncertainty derived from the settler model structure to the biokinetic model, the impact of the SST model as a sub-model in a plant-wide model on the overall model performance is evaluated. A long-term simulation of a bulking event is conducted that spans the temperature evolution throughout a summer
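For orientation, a crude explicit upwind sketch of a 1-D convection-dispersion settling column (here a batch case with a Vesilind settling velocity) shows the type of PDE involved; it is not the Plósz model itself, and every parameter value is illustrative.

```python
import numpy as np

# dX/dt = -d(v_s(X) X)/dz + D d2X/dz2, z positive downwards.
def v_s(X, v0=200.0, n=4.26e-4):          # Vesilind velocity [m/d], [m3/g]
    return v0 * np.exp(-n * X)

nz, H, D = 100, 4.0, 0.05                 # layers, depth [m], dispersion [m2/d]
dz = H / nz
X = np.full(nz, 3000.0)                   # uniform initial solids [g/m3]

# Conservative explicit time step: CFL for convection and diffusion.
dt = min(0.4 * dz**2 / (2 * D), 0.5 * dz / v_s(0.0))
for _ in range(20000):                    # ~2 days of settling
    G = v_s(X) * X                        # downward gravity flux [g/m2/d]
    G[-1] = 0.0                           # solid bottom: no flux out
    inflow = np.concatenate(([0.0], G[:-1]))   # upwind differencing
    lap = np.zeros(nz)
    lap[1:-1] = (X[2:] - 2 * X[1:-1] + X[:-2]) / dz**2
    X = np.maximum(X + dt * ((inflow - G) / dz + D * lap), 0.0)

print("top/bottom solids [g/m3]:", round(X[0]), round(X[-1]))
```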
Romero-Garcia, V [Instituto de Ciencia de Materiales de Madrid, Consejo Superior de Investigaciones Cientificas (Spain); Sanchez-Perez, J V [Centro de Tecnologias Fisicas: Acustica, Materiales y Astrofisica, Universidad Politecnica de Valencia (Spain); Garcia-Raffi, L M, E-mail: virogar1@gmail.com [Instituto Universitario de Matematica Pura y Aplicada, Universidad Politecnica de Valencia (Spain)
2011-07-06
The use of sonic crystals (SCs) as environmental noise barriers has certain advantages from both the acoustical and the constructive points of view with regard to conventional ones. However, the interaction between the SCs and the ground has not been studied yet. In this work we are reporting a semi-analytical model, based on the multiple scattering theory and on the method of images, to study this interaction considering the ground as a finite impedance surface. The results obtained here show that this model could be used to design more effective noise barriers based on SCs because the excess attenuation of the ground could be modelled in order to improve the attenuation properties of the array of scatterers. The results are compared with experimental data and numerical predictions thus finding good agreement between them.
Vršnak, B.; Žic, T.; Dumbović, M. [Hvar Observatory, Faculty of Geodesy, University of Zagreb, Kačićeva 26, HR-10000 Zagreb (Croatia); Temmer, M.; Möstl, C.; Veronig, A. M. [Kanzelhöhe Observatory—IGAM, Institute of Physics, University of Graz, Universitätsplatz 5, A-8010 Graz (Austria); Taktakishvili, A.; Mays, M. L. [NASA Goddard Space Flight Center, Greenbelt, MD 20771 (United States); Odstrčil, D., E-mail: bvrsnak@geof.hr, E-mail: tzic@geof.hr, E-mail: mdumbovic@geof.hr, E-mail: manuela.temmer@uni-graz.at, E-mail: christian.moestl@uni-graz.at, E-mail: astrid.veronig@uni-graz.at, E-mail: aleksandre.taktakishvili-1@nasa.gov, E-mail: m.leila.mays@nasa.gov, E-mail: dusan.odstrcil@nasa.gov [George Mason University, Fairfax, VA 22030 (United States)
2014-08-01
Real-time forecasting of the arrival of coronal mass ejections (CMEs) at Earth, based on remote solar observations, is one of the central issues of space-weather research. In this paper, we compare arrival-time predictions calculated applying the numerical "WSA-ENLIL+Cone model" and the analytical "drag-based model" (DBM). Both models use coronagraphic observations of CMEs as input data, thus providing an early space-weather forecast two to four days before the arrival of the disturbance at the Earth, depending on the CME speed. It is shown that both methods give very similar results if the drag parameter Γ = 0.1 is used in DBM in combination with a background solar-wind speed of w = 400 km s⁻¹. For this combination, the mean value of the difference between arrival times calculated by ENLIL and DBM is $\bar{\Delta} = 0.09 \pm 9.0$ hr, with an average of the absolute-value differences of $\overline{|\Delta|} = 7.1$ hr. Comparing the observed arrivals (O) with the calculated ones (C) for ENLIL gives O – C = –0.3 ± 16.9 hr and, analogously, O – C = +1.1 ± 19.1 hr for DBM. Applying Γ = 0.2 with w = 450 km s⁻¹ in DBM, one finds O – C = –1.7 ± 18.3 hr, with an average of the absolute-value differences of 14.8 hr, which is similar to that for ENLIL, 14.1 hr. Finally, we demonstrate that the prediction accuracy significantly degrades with increasing solar activity.
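As an illustration of the analytical DBM referred to above, the standard drag-based equation of motion dv/dt = −γ(v − w)|v − w| has a closed-form solution for v0 > w, which the Python sketch below uses to bisect for the 1 AU arrival time. The initial distance and speed are arbitrary example values, with γ = Γ × 10⁻⁷ km⁻¹:

```python
import numpy as np

AU = 1.496e8            # km
GAMMA = 0.1e-7          # drag parameter (km^-1): Gamma = 0.1 in the paper's units
W = 400.0               # ambient solar-wind speed (km/s)

def dbm_distance(t, r0, v0, w=W, gamma=GAMMA):
    """Heliocentric distance (km) after time t (s) for a decelerating CME (v0 > w)."""
    return r0 + w * t + np.log(1.0 + gamma * (v0 - w) * t) / gamma

def arrival_time(r0=20.0 * 6.96e5, v0=1000.0):
    """Bisect for the 1 AU arrival time; r0 = 20 solar radii, v0 in km/s."""
    lo, hi = 0.0, 10.0 * 86400.0          # such a CME surely arrives within 10 days
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if dbm_distance(mid, r0, v0) < AU else (lo, mid)
    return 0.5 * (lo + hi)

print(f"transit time: {arrival_time() / 3600.0:.1f} h")
```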
Evaluation of Measurement Uncertainty in Neutron Activation Analysis using Research Reactor
Chung, Y. S.; Moon, J. H.; Sun, G. M.; Kim, S. H.; Baek, S. Y.; Lim, J. M.; Lee, Y. N.; Kim, H. R
2007-02-15
This report summarizes the general and technical requirements, methods and results of the measurement uncertainty assessment needed to maintain the quality assurance and traceability that should accompany the NAA technique using the HANARO research reactor. It will be used as basic information to support accredited analytical services effectively in the future. For the assessment of measurement uncertainty, environmental certified reference materials are analysed and the results evaluated using the ISO-GUM and Monte Carlo simulation (MCS) methods. First, the standard uncertainties of the predominant parameters in NAA are evaluated for the quantitatively measured element values, and the combined uncertainty is then calculated by applying the rule of uncertainty propagation. In addition, the contributions of the individual standard uncertainties to the combined uncertainty are estimated, and ways to minimize them are reviewed.
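As a hedged illustration of the two evaluation routes named above (ISO-GUM versus Monte Carlo), the sketch below applies both to a simplified relative-comparator model; the model form, values and uncertainties are hypothetical, not those of the HANARO work:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical comparator model: c = c_std * (A_smp / A_std) * (m_std / m_smp)
vals  = dict(c_std=100.0, A_smp=5.0e4, A_std=4.8e4, m_std=0.100, m_smp=0.102)
rel_u = dict(c_std=0.010, A_smp=0.008, A_std=0.008, m_std=0.002, m_smp=0.002)

def model(p):
    return p["c_std"] * p["A_smp"] / p["A_std"] * p["m_std"] / p["m_smp"]

# ISO-GUM: for a pure product/quotient the relative variances add in quadrature.
u_gum = model(vals) * np.sqrt(sum(u ** 2 for u in rel_u.values()))

# Monte Carlo simulation: sample each input as an independent Gaussian.
smp = {k: rng.normal(v, v * rel_u[k], 100_000) for k, v in vals.items()}
print(f"GUM: {u_gum:.2f}   MCS: {model(smp).std():.2f}")
```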
A comprehensive study is performed to evaluate the impact of activation cross-section uncertainties on the actinide composition of irradiated fuel in representative ADS (Accelerator Driven System) irradiation scenarios. Some of the most recent sources/compilations of uncertainty data are used, and the results obtained from them are compared. The ANL covariance matrices are taken as the reference data for the calculations. The complete set of cross-section uncertainties provided in the EAF2005 data library is also used for comparison purposes. In this study, the inventory code ACAB is used to analyse the following questions: the impact of different correlation structures using fixed uncertainties/variances; the effect of irradiation time/burn-up on the concentration uncertainties; and the applicability of Monte Carlo (MC) and sensitivity-uncertainty (SU) approaches over the whole range of burn-up/irradiation times of interest in ADS designs. When comparing the results of calculations using the ANL versus the EAF2005/UN uncertainty data, we found very significant differences in the concentration uncertainties. Both the MC and SU approaches are found to be applicable over the whole range of irradiation times
Mazoyer, Johan; Pueyo, Laurent; Norman, Colin; N'Diaye, Mamadou; van der Marel, Roeland P.; Soummer, Rémi
2016-03-01
The new frontier in the quest for the highest contrast levels in the focal plane of a coronagraph is now the correction of the large diffraction artifacts introduced at the science camera by apertures of increasing complexity. Indeed, the future generation of space- and ground-based coronagraphic instruments will be mounted on on-axis and/or segmented telescopes; the design of coronagraphic instruments for such observatories is currently a domain undergoing rapid progress. One approach consists of using two sequential deformable mirrors (DMs) to correct for aberrations introduced by secondary-mirror structures and segmentation of the primary mirror. The coronagraph for the WFIRST-AFTA mission will be the first such instrument in space with a two-DM wavefront control system. Regardless of the specific control algorithm for these multiple DMs, it will have to rely on quick and accurate simulation of the propagation effects introduced by the out-of-pupil surface. In the first part of this paper, we present the analytical description of the different approximations used to simulate these propagation effects. In Appendix A, we prove analytically that in the special case of surfaces inducing a converging beam, the Fresnel method yields high fidelity for simulations of these effects, and we provide numerical simulations showing this. In the second part, we use these tools in the framework of the active compensation of aperture discontinuities (ACAD) technique applied to pupil geometries similar to WFIRST-AFTA. We present these simulations in the context of the optical layout of the high-contrast imager for complex aperture telescopes, which will test ACAD on an optical bench. The results of this analysis show that using the ACAD method, an apodized pupil Lyot coronagraph, and the performance of our current DMs, we are able to obtain, in numerical simulations, a dark hole with a WFIRST-AFTA-like aperture. Our numerical simulation shows that we can obtain contrast better than 2×10⁻⁹ in
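For background on the propagation simulations discussed, a minimal paraxial (Fresnel) transfer-function propagator is sketched below; the grid, wavelength and propagation distance are arbitrary illustration values, unrelated to the instrument designs in the paper:

```python
import numpy as np

def fresnel_propagate(field, wavelength, dx, z):
    """Propagate a sampled complex field a distance z using the Fresnel
    transfer function (paraxial); dx is the grid pitch. The constant
    exp(i*k*z) piston phase is dropped."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    H = np.exp(-1j * np.pi * wavelength * z * (FX ** 2 + FY ** 2))
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Example: a 1 mm circular aperture propagated 0.1 m at 600 nm.
n, dx = 512, 1e-5
x = (np.arange(n) - n / 2) * dx
X, Y = np.meshgrid(x, x)
aperture = (X ** 2 + Y ** 2 < (1e-3) ** 2).astype(complex)
print(np.abs(fresnel_propagate(aperture, 600e-9, dx, 0.1)).max())
```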
Transionospheric Propagation Code (TIPC)
Roussel-Dupre, R.; Kelley, T.A.
1990-10-01
The Transionospheric Propagation Code is a computer program developed at Los Alamos National Laboratory to perform certain tasks related to the detection of VHF signals following propagation through the ionosphere. The code is written in Fortran 77, runs interactively, and was designed to be as machine-independent as possible. A menu format, in which the user is prompted to supply appropriate parameters for a given task, has been adopted for the input, while the output is primarily in the form of graphics. The user has the option of selecting from five basic tasks, namely transionospheric propagation, signal filtering, signal processing, DTOA study, and DTOA uncertainty study. For the first task, a specified signal is convolved with the impulse response function of the ionosphere to obtain the transionospheric signal. The user is given a choice of four analytic forms for the input pulse or of supplying a tabular form. The option of adding Gaussian-distributed white noise or spectral noise to the input signal is also provided. The deterministic ionosphere is characterized to first order in terms of a total electron content (TEC) along the propagation path. In addition, a scattering model parameterized in terms of a frequency coherence bandwidth is also available. In the second task, detection is simulated by convolving a given filter response with the transionospheric signal. The user is given a choice of a wideband filter or a narrowband Gaussian filter; it is also possible to input a filter response. The third task provides for quadrature detection, envelope detection, and three different techniques for time-tagging the arrival of the transionospheric signal at specified receivers. The latter algorithms can be used to determine a TEC and thus take out the effects of the ionosphere to first order. Task four allows the user to construct a table of delta-times-of-arrival (DTOAs) vs TECs for a specified pair of receivers.
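The first-order TEC characterization mentioned above rests on the standard ionospheric group delay Δt = 40.3·TEC/(c·f²); the short Python sketch below (frequencies and TEC chosen arbitrarily) shows how the DTOA between two frequencies constrains TEC:

```python
C = 2.998e8  # speed of light, m/s

def group_delay(tec, f_hz):
    """First-order ionospheric group delay (s); TEC in electrons/m^2."""
    return 40.3 * tec / (C * f_hz ** 2)

tec = 50.0 * 1e16                    # 50 TEC units
for f in (50e6, 100e6):              # two example VHF frequencies
    print(f"{f / 1e6:.0f} MHz: {group_delay(tec, f) * 1e6:.1f} us")

# The delay difference between the two frequencies determines TEC to first order.
dtoa = group_delay(tec, 50e6) - group_delay(tec, 100e6)
print(f"DTOA: {dtoa * 1e6:.1f} us")
```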
Barbara Mickowska; Anna Sadowska-Rociek; Ewa Cieślik
2013-01-01
The aim of this study was to assess the importance of validation and uncertainty estimation related to the results of amino acid analysis using ion-exchange chromatography with post-column derivatization. The method was validated, and the components of standard uncertainty were identified and quantified to recognize the major contributions to the uncertainty of the analysis. The estimated relative expanded uncertainty (k=2, P=95%) varied from 9.03% to 12.68%. Quantification of the u...
Leśniewska, Barbara; Kisielewska, Katarzyna; Wiater, Józefa; Godlewska-Żyłkiewicz, Beata
2016-01-01
A new fast method for the determination of mobile zinc fractions in soil is proposed in this work. The three-stage modified BCR procedure used for the fractionation of zinc in soil was accelerated using ultrasound. The working parameters of the ultrasound probe, the power and the sonication time, were optimized so that the analyte content of soil extracts obtained by ultrasound-assisted sequential extraction (USE) was consistent with that obtained by the conventional modified Community Bureau of Reference (BCR) procedure. The zinc content of the extracts was determined by flame atomic absorption spectrometry. The developed USE procedure shortened the total extraction time from 48 h to 27 min compared to the conventional modified BCR procedure. The method was fully validated, and the uncertainty budget was evaluated. The trueness and reproducibility of the developed method were confirmed by analysis of the certified reference material of lake sediment BCR-701. The applicability of the procedure for the fast, low-cost and reliable determination of the mobile zinc fraction in soil, which may be useful for assessing anthropogenic impacts on natural resources and for environmental monitoring purposes, was proved by analysis of different types of soil collected from Podlaskie Province (Poland).
Taming systematic uncertainties at the LHC with the central limit theorem
Fichet, Sylvain
2016-01-01
We study the simplifications occurring in any likelihood function in the presence of a large number of small systematic uncertainties. We find that the marginalisation of these uncertainties can be done analytically by means of second-order error propagation, error combination, the Lyapunov central limit theorem, and mild approximations which are typically satisfied for LHC likelihoods. The outcomes of this analysis are (i) a very light treatment of systematic uncertainties and (ii) a convenient way of reporting the main effects of systematic uncertainties, such as the detector effects occurring in LHC measurements.
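A toy numerical check of the central-limit behaviour this analysis exploits: many small, independent (and here deliberately non-Gaussian) nuisance shifts combine into an approximately Gaussian total whose variance is the sum of the individual variances. The counts and scales are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)

n_sys = 200                                  # many small systematics
scales = rng.uniform(0.01, 0.05, n_sys)      # their individual sigmas

# Error combination: one effective Gaussian nuisance parameter.
sigma_eff = np.sqrt(np.sum(scales ** 2))

# Monte Carlo check with uniform (non-Gaussian) nuisances of unit variance.
draws = rng.uniform(-np.sqrt(3), np.sqrt(3), (100_000, n_sys)) * scales
print(f"analytic sigma: {sigma_eff:.4f}   MC sigma: {draws.sum(axis=1).std():.4f}")
```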
More than 150 researchers and engineers from universities and industry met to discuss the new methodologies developed for assessing uncertainty. About 20 papers were presented; the main topics were methods to study the propagation of uncertainties, sensitivity analysis, nuclear data covariances, and multi-parameter optimisation. This report gathers the contributions of CEA researchers and engineers.
One of the most important aspects of quality assurance in any analytical activity is the estimation of measurement uncertainty. There is general agreement that 'the expression of the result of a measurement is not complete without specifying its associated uncertainty'. An analytical process is the mechanism for obtaining methodological information (the measurand) about a material system (the population). This implies the need to define the problem, to choose the methods for sampling and measurement, and to execute these activities properly in order to obtain the information. The result of a measurement is only an approximation or estimate of the value of the measurand, and it is complete only when accompanied by an estimate of the uncertainty of the analytical process. According to the 'Vocabulary of Basic and General Terms in Metrology', measurement uncertainty is the parameter associated with the result of a measurement that characterizes the dispersion of the values that could reasonably be attributed to the measurand (or magnitude). This parameter could be a standard deviation or a confidence interval. The uncertainty evaluation requires a detailed look at all possible sources, but not a disproportionate one: a good estimate can be made by concentrating effort on the largest contributions. The key steps in determining the uncertainty of a measurement are: the specification of the measurand; the identification of the sources of uncertainty; the quantification of the individual components of uncertainty; the calculation of the combined standard uncertainty; and the reporting of the uncertainty.
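A minimal sketch of the quantification and combination steps listed above, assuming uncorrelated inputs and the first-order law of propagation of uncertainty with numerically estimated sensitivities (the measurand and numbers are illustrative):

```python
import numpy as np

def combined_uncertainty(f, x, u, eps=1e-6):
    """u_c^2 = sum_i (df/dx_i)^2 u_i^2 for uncorrelated inputs;
    sensitivities estimated by central differences."""
    x = np.asarray(x, float)
    grads = np.empty_like(x)
    for i in range(x.size):
        h = eps * max(abs(x[i]), 1.0)
        xp, xm = x.copy(), x.copy()
        xp[i] += h
        xm[i] -= h
        grads[i] = (f(xp) - f(xm)) / (2.0 * h)
    return np.sqrt(np.sum((grads * np.asarray(u)) ** 2))

# Example measurand: density rho = m / V
rho_u = combined_uncertainty(lambda p: p[0] / p[1], x=[8.0, 2.0], u=[0.01, 0.005])
print(f"u_c(rho) = {rho_u:.4f}")
```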
Development of the Calculation Module for Uncertainty of Internal Dose Coefficients
Since the ICRP (International Commission on Radiological Protection) provides dose coefficients as point values without uncertainties, it is important to understand the sources of uncertainty in the derivation of the coefficients. When internal dose coefficients are calculated, numerous factors are involved, such as transfer rates in biokinetic models, absorption rates and deposition in the respiratory tract model, fractional absorption in the alimentary tract model, absorbed fractions (AF), nuclide information and organ mass. Each of these factors has its own uncertainty, which increases the uncertainty of the internal dose coefficients through uncertainty propagation. Since the procedure for calculating internal dose coefficients is somewhat complicated, it is difficult to propagate each uncertainty analytically. In this study, we developed a calculation module for the uncertainty of internal dose coefficients; the module development and the calculations were performed in MATLAB. In this module, the uncertainty of the various factors used to calculate the internal dose coefficient can be considered using the Monte Carlo sampling method. After developing the module, we calculated the internal dose coefficient for inhalation of 90Sr with its uncertainty and obtained the distribution and percentile values. It is expected that this study will contribute greatly to uncertainty research on internal dosimetry. In the future, we will update the module to consider more uncertainties.
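The module described is in MATLAB; the Python sketch below conveys the same Monte Carlo sampling idea for a chain of multiplicative factors. The factor list, lognormal spreads and point value are hypothetical placeholders, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 50_000

# Hypothetical multiplicative factors in a dose-coefficient calculation,
# each sampled lognormally about its point value (GSDs are invented).
gsd = dict(transfer=1.3, absorption=1.5, absorbed_fraction=1.2)
coeff_point = 2.8e-8  # illustrative point value (Sv/Bq), not an ICRP number

samples = coeff_point * np.prod(
    [rng.lognormal(0.0, np.log(g), N) for g in gsd.values()], axis=0)
p5, p50, p95 = np.percentile(samples, [5, 50, 95])
print(f"5th {p5:.2e}  median {p50:.2e}  95th {p95:.2e}  Sv/Bq")
```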
Efficient Quantification of Uncertainties in Complex Computer Code Results Project
National Aeronautics and Space Administration — Propagation of parameter uncertainties through large computer models can be very resource intensive. Frameworks and tools for uncertainty quantification are...
Conroy, Charlie; White, Martin
2008-01-01
The stellar masses, mean ages, metallicities, and star formation histories of galaxies are now commonly estimated via stellar population synthesis (SPS) techniques. SPS relies on stellar evolution calculations from the main sequence to stellar death, stellar spectral libraries, phenomenological dust models, and stellar initial mass functions (IMFs). The present work is the first in a series that explores the impact of uncertainties in key phases of stellar evolution and in the IMF on the derived physical properties of galaxies and on the expected luminosity evolution for a passively evolving set of stars. A Markov chain Monte Carlo approach is taken to fit near-UV through near-IR photometry of a representative sample of low- and high-redshift galaxies with this new SPS model. Significant results include the following: (1) including uncertainties in stellar evolution, stellar masses at z~0 carry errors of ~0.3 dex at 95% CL with little dependence on luminosity or color, while at z~2, the masses of bright red galaxies...
Rundel, R. D.; Butler, D. M.; Stolarski, R. S.
1978-01-01
The paper discusses the development of a concise stratospheric model which uses iteration to obtain coupling between interacting species. The one-dimensional, steady-state, diurnally-averaged model generates diffusion equations with appropriate sources and sinks for species odd oxygen, H2O, H2, CO, N2O, odd nitrogen, CH4, CH3Cl, CCl4, CF2Cl2, CFCl3, and odd chlorine. The model evaluates steady-state perturbations caused by injections of chlorine and NO(x) and may be used to predict ozone depletion. The model is used in a Monte Carlo study of the propagation of reaction-rate imprecisions by calculating an ozone perturbation caused by the addition of chlorine. Since the model is sensitive to only 10 of the more than 50 reaction rates considered, only about 1000 Monte Carlo cases are required to span the space of possible results.
S. Bönisch
2004-02-01
The objectives of this work were to use indicator kriging to spatialize soil properties expressed by categorical attributes, to generate a representation accompanied by a spatial measure of uncertainty, and to model the propagation of uncertainty through map algebra by means of Boolean procedures. The studied attributes were exchangeable potassium (K) and aluminum (Al) contents, sum of bases (SB), cation exchange capacity (CEC), base saturation (V), texture (Tx) and classes of relief (RC), effective soil depth, internal drainage, and stoniness and/or rockiness, extracted from 222 pedological profiles and 219 extra samples of soils of the State of Santa Catarina, Brazil. The spatialized uncertainties evidenced the spatial variability of the data, which was related to the origin of the samples and the behaviour of each attribute. The attributes SB, V, K, and RC presented a higher degree of uncertainty than Tx and CEC, and the uncertainty increased when categorical representations were integrated.
Andres, T.H
2002-05-01
This guide applies to the estimation of uncertainty in quantities calculated by scientific, analysis and design computer programs that fall within the scope of AECL's software quality assurance (SQA) manual. The guide weaves together rational approaches from the SQA manual and three other diverse sources: (a) the CSAU (Code Scaling, Applicability, and Uncertainty) evaluation methodology; (b) the ISO Guide for the Expression of Uncertainty in Measurement; and (c) the SVA (Systems Variability Analysis) method of risk analysis. This report describes the manner in which random and systematic uncertainties in calculated quantities can be estimated and expressed. Random uncertainty in model output can be attributed to uncertainties of inputs. The propagation of these uncertainties through a computer model can be represented in a variety of ways, including exact calculations, series approximations and Monte Carlo methods. Systematic uncertainties emerge from the development of the computer model itself, through simplifications and conservatisms, for example. These must be estimated and combined with random uncertainties to determine the combined uncertainty in a model output. This report also addresses the method by which uncertainties should be employed in code validation, in order to determine whether experiments and simulations agree, and whether or not a code satisfies the required tolerance for its application.
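To illustrate the difference between the series approximations and Monte Carlo methods named in the guide, the example below propagates a Gaussian input through y = exp(x): the first-order series estimate matches Monte Carlo for small input uncertainty and degrades as the uncertainty grows:

```python
import numpy as np

rng = np.random.default_rng(3)

# y = exp(x), x ~ N(mu, sigma^2). First order: u_y ~ exp(mu) * sigma.
mu = 1.0
for sigma in (0.05, 0.5):
    series = np.exp(mu) * sigma
    mc = np.exp(rng.normal(mu, sigma, 200_000)).std()
    print(f"sigma = {sigma}: series {series:.3f}  vs  Monte Carlo {mc:.3f}")
```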
Barrado, A. I.; Garcia, S.; Perez, R. M.
2013-06-01
This paper presents an evaluation of the uncertainty associated with the analytical measurement of eighteen polycyclic aromatic compounds (PACs) in ambient air by liquid chromatography with fluorescence detection (HPLC/FD). The study focused on analyses of the PM10, PM2.5 and gas-phase fractions. The main analytical uncertainty was estimated for eleven polycyclic aromatic hydrocarbons (PAHs), four nitro polycyclic aromatic hydrocarbons (nitro-PAHs) and two hydroxy polycyclic aromatic hydrocarbons (OH-PAHs) based on the analytical determination, reference material analysis and extraction step. The main contributions reached 15-30% and came from the extraction of real ambient samples, with those for nitro-PAHs the highest (20-30%). The range and mean of PAC mass concentrations measured in the gas phase and in the PM10/PM2.5 particle fractions during a full year are also presented. Concentrations of OH-PAHs were about 2-4 orders of magnitude lower than those of their parent PAHs and comparable to those sparsely reported in the literature.
Bursik, Marcus; Jones, Matthew; Carn, Simon; Dean, Ken; Patra, Abani; Pavolonis, Michael; Pitman, E. Bruce; Singh, Tarunraj; Singla, Puneet; Webley, Peter; Bjornsson, Halldor; Ripepe, Maurizio
2012-12-01
Data on source conditions for the 14 April 2010 paroxysmal phase of the Eyjafjallajökull eruption, Iceland, have been used as inputs to a trajectory-based eruption column model, bent. This model has in turn been adapted to generate output suitable as input to the volcanic ash transport and dispersal model, puff, which was used to propagate the paroxysmal ash cloud toward and over Europe over the following days. Some of the source parameters, specifically vent radius, vent source velocity, mean grain size of ejecta, and standard deviation of ejecta grain size have been assigned probability distributions based on our lack of knowledge of exact conditions at the source. These probability distributions for the input variables have been sampled in a Monte Carlo fashion using a technique that yields what we herein call the polynomial chaos quadrature weighted estimate (PCQWE) of output parameters from the ash transport and dispersal model. The advantage of PCQWE over Monte Carlo is that since it intelligently samples the input parameter space, fewer model runs are needed to yield estimates of moments and probabilities for the output variables. At each of these sample points for the input variables, a model run is performed. Output moments and probabilities are then computed by properly summing the weighted values of the output parameters of interest. Use of a computational eruption column model coupled with known weather conditions as given by radiosonde data gathered near the vent allows us to estimate that initial mass eruption rate on 14 April 2010 may have been as high as 10⁸ kg/s and was almost certainly above 10⁷ kg/s. This estimate is consistent with the probabilistic envelope computed by PCQWE for the downwind plume. The results furthermore show that statistical moments and probabilities can be computed in a reasonable time by using 9⁴ = 6,561 PCQWE model runs as opposed to millions of model runs that might be required by standard Monte Carlo techniques. The
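A one-dimensional toy analogue of the quadrature idea behind PCQWE: moments of a model output under a Gaussian input are estimated from a handful of Gauss-Hermite nodes and compared with plain Monte Carlo. The toy model is invented for illustration and is unrelated to bent or puff:

```python
import numpy as np

def model(v):
    """Toy 'dispersal model' output as a function of one uncertain input."""
    return np.exp(0.3 * v) + 0.1 * v ** 2

# Quadrature: a few weighted model runs at Gauss-Hermite nodes (v ~ N(0, 1)).
nodes, weights = np.polynomial.hermite_e.hermegauss(7)
w = weights / np.sqrt(2.0 * np.pi)          # normalise to a probability measure
mean_q = np.sum(w * model(nodes))
var_q = np.sum(w * (model(nodes) - mean_q) ** 2)

# Plain Monte Carlo needs orders of magnitude more model runs.
mc = model(np.random.default_rng(4).normal(0.0, 1.0, 200_000))
print(f"quadrature: mean {mean_q:.4f}, var {var_q:.4f}")
print(f"MC        : mean {mc.mean():.4f}, var {mc.var():.4f}")
```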
A R Banai-Kashani
1990-01-01
A planning simulation approach sensitive to the behavioral and contextual environment of industrial location decisionmaking is developed. Its conceptual and methodological basis is in sharp contrast to the neoclassical premise of location theory, in dealing with contingency, collectivity, multiplicity, and uncertainty. Industrial location decisionmaking is approximated with a multidimensional simulation of (intraurban) locational choice. The likelihood of the location of high-technology firms...
Thomas, R.E.
1982-03-01
An evaluation is made of the suitability of analytical and statistical sampling methods for making uncertainty analyses. The adjoint method is found to be well-suited for obtaining sensitivity coefficients for computer programs involving large numbers of equations and input parameters. For this purpose the Latin Hypercube Sampling method is found to be inferior to conventional experimental designs. The Latin hypercube method can be used to estimate output probability density functions, but requires supplementary rank transformations followed by stepwise regression to obtain uncertainty information on individual input parameters. A simple Cork and Bottle problem is used to illustrate the efficiency of the adjoint method relative to certain statistical sampling methods. For linear models of the form Ax=b it is shown that a complete adjoint sensitivity analysis can be made without formulating and solving the adjoint problem. This can be done either by using a special type of statistical sampling or by reformulating the primal problem and using suitable linear programming software.
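For context, a small demonstration of the adjoint identity underlying the linear-model discussion: for Ax = b and a response R = cᵀx, a single adjoint solve Aᵀλ = c yields every sensitivity ∂R/∂bᵢ = λᵢ at once (the matrix and vectors below are random test data):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 5
A = rng.normal(size=(n, n)) + n * np.eye(n)   # well-conditioned test matrix
b, c = rng.normal(size=n), rng.normal(size=n)

lam = np.linalg.solve(A.T, c)                 # one adjoint solve
x = np.linalg.solve(A, b)

# Verify against brute-force forward perturbations of each b_i.
for i in range(n):
    db = np.zeros(n); db[i] = 1e-6
    dR = c @ np.linalg.solve(A, b + db) - c @ x
    assert abs(dR / 1e-6 - lam[i]) < 1e-4
print("adjoint sensitivities dR/db:", lam)
```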
Facets of Uncertainty in Digital Elevation and Slope Modeling
ZHANG Jingxiong; LI Deren
2005-01-01
This paper investigates the differences that result from applying different approaches to uncertainty modeling and reports an experiment examining error estimation and propagation in elevation and slope, with the latter derived from the former. It is confirmed that significant differences exist between uncertainty descriptors, and that the propagation of uncertainty to end products is immensely affected by the specification of the source uncertainty.
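A minimal sketch, in the spirit of the experiment described, of propagating elevation error to slope by Monte Carlo: perturb a synthetic DEM with Gaussian elevation noise, recompute the slope, and summarise the spread (grid, cell size and error magnitude are illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)
cell, sigma_z, runs = 10.0, 0.5, 1000         # 10 m cells, 0.5 m elevation error

# Synthetic tilted-plane DEM (64 x 64 cells).
z = np.add.outer(np.linspace(0, 50, 64), np.linspace(0, 20, 64))

def slope_deg(dem):
    gy, gx = np.gradient(dem, cell)
    return np.degrees(np.arctan(np.hypot(gx, gy)))

stack = np.stack([slope_deg(z + rng.normal(0.0, sigma_z, z.shape))
                  for _ in range(runs)])
print(f"slope of error-free DEM: {slope_deg(z).mean():.2f} deg")
print(f"mean per-cell slope std from MC: {stack.std(axis=0).mean():.2f} deg")
```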
CANDU lattice uncertainties during burnup
Uncertainties associated with fundamental nuclear data accompany evaluated nuclear data libraries in the form of covariance matrices. As nuclear data are important parameters in reactor physics calculations, any associated uncertainty causes a loss of confidence in the calculation results. The quantification of output uncertainties is necessary to adequately establish safety margins of nuclear facilities. In this work, microscopic cross-section uncertainties have been propagated through lattice burnup calculations applied to a generic CANDU® model. It was found that substantial uncertainty emerges during burnup even when fission yield fraction and decay rate uncertainties are neglected.
Uncertainty in hydrological signatures
Westerberg, I. K.; McMillan, H. K.
2015-09-01
Information about rainfall-runoff processes is essential for hydrological analyses, modelling and water-management applications. A hydrological, or diagnostic, signature quantifies such information from observed data as an index value. Signatures are widely used, e.g. for catchment classification, model calibration and change detection. Uncertainties in the observed data - including measurement inaccuracy and representativeness as well as errors relating to data management - propagate to the signature values and reduce their information content. Subjective choices in the calculation method are a further source of uncertainty. We review the uncertainties relevant to different signatures based on rainfall and flow data. We propose a generally applicable method to calculate these uncertainties based on Monte Carlo sampling and demonstrate it in two catchments for common signatures including rainfall-runoff thresholds, recession analysis and basic descriptive signatures of flow distribution and dynamics. Our intention is to contribute to awareness and knowledge of signature uncertainty, including typical sources, magnitude and methods for its assessment. We found that the uncertainties were often large (i.e. typical intervals of ±10-40 % relative uncertainty) and highly variable between signatures. There was greater uncertainty in signatures that use high-frequency responses, small data subsets, or subsets prone to measurement errors. There was lower uncertainty in signatures that use spatial or temporal averages. Some signatures were sensitive to particular uncertainty types such as rating-curve form. We found that signatures can be designed to be robust to some uncertainty sources. Signature uncertainties of the magnitudes we found have the potential to change the conclusions of hydrological and ecohydrological analyses, such as cross-catchment comparisons or inferences about dominant processes.
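As a hedged sketch of the proposed Monte Carlo approach for one basic signature, the runoff ratio, the example below perturbs synthetic rainfall and flow totals with multiplicative data errors; the data and error magnitudes are illustrative, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic daily rainfall and flow (mm) for one year.
p = rng.gamma(0.7, 6.0, 365)
q = (0.4 * p + rng.normal(0.0, 0.2, 365)).clip(0.0)

# Monte Carlo over multiplicative data errors (~10 % rainfall, ~15 % flow).
runs = 10_000
rr = (q.sum() * rng.normal(1.0, 0.15, runs)) / (p.sum() * rng.normal(1.0, 0.10, runs))
lo, hi = np.percentile(rr, [5, 95])
print(f"runoff ratio {q.sum() / p.sum():.3f}  (90% interval {lo:.3f}-{hi:.3f})")
```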