Analytical propagation of uncertainties through fault trees
International Nuclear Information System (INIS)
Hauptmanns, Ulrich
2002-01-01
A method is presented which enables one to propagate uncertainties described by uniform probability density functions through fault trees. The approach is analytical. It is based on calculating the expected value and the variance of the top event probability. These two parameters are then equated with the corresponding ones of a beta-distribution. An example calculation comparing the analytically calculated beta-pdf (probability density function) with the top event pdf obtained using the Monte-Carlo method shows excellent agreement at a much lower expense of computing time.
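The moment-matching idea above can be sketched in a few lines of numpy. This is a minimal illustration, not the paper's actual fault tree: a two-event OR gate with hypothetical uniform bounds, closed-form mean and variance of the top event, a beta fit by equating moments, and a Monte Carlo cross-check.

```python
import numpy as np

# First two moments of p ~ U(a, b): E[p] and E[p^2]
def uniform_moments(a, b):
    m = 0.5 * (a + b)
    return m, m**2 + (b - a)**2 / 12.0

# Two independent basic events with uniform uncertainty (illustrative numbers)
s1, q1 = uniform_moments(0.01, 0.03)
s2, q2 = uniform_moments(0.02, 0.06)

# OR gate: p_top = p1 + p2 - p1*p2; expected value and variance in closed form
mean = s1 + s2 - s1 * s2
second = q1 + q2 + q1 * q2 + 2 * s1 * s2 - 2 * q1 * s2 - 2 * s1 * q2
var = second - mean**2

# Equate the two moments with those of a beta distribution
k = mean * (1 - mean) / var - 1
alpha, beta = mean * k, (1 - mean) * k

# Monte Carlo cross-check
rng = np.random.default_rng(0)
p1 = rng.uniform(0.01, 0.03, 200_000)
p2 = rng.uniform(0.02, 0.06, 200_000)
p_top = 1 - (1 - p1) * (1 - p2)
```

The analytical mean and variance agree with the sampled ones, at a tiny fraction of the sampling cost.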
Wahl, N.; Hennig, P.; Wieser, H. P.; Bangert, M.
2017-07-01
The sensitivity of intensity-modulated proton therapy (IMPT) treatment plans to uncertainties can be quantified and mitigated with robust/min-max and stochastic/probabilistic treatment analysis and optimization techniques. Those methods usually rely on sparse random, importance, or worst-case sampling. Inevitably, this imposes a trade-off between computational speed and accuracy of the uncertainty propagation. Here, we investigate analytical probabilistic modeling (APM) as an alternative for uncertainty propagation and minimization in IMPT that does not rely on scenario sampling. APM propagates probability distributions over range and setup uncertainties via a Gaussian pencil-beam approximation into moments of the probability distributions over the resulting dose in closed form. It supports arbitrary correlation models and allows for efficient incorporation of fractionation effects regarding random and systematic errors. We evaluate the trade-off between run-time and accuracy of APM uncertainty computations on three patient datasets. Results are compared against reference computations facilitating importance and random sampling. Two approximation techniques to accelerate uncertainty propagation and minimization based on probabilistic treatment plan optimization are presented. Runtimes are measured on CPU and GPU platforms, dosimetric accuracy is quantified in comparison to a sampling-based benchmark (5000 random samples). APM accurately propagates range and setup uncertainties into dose uncertainties at competitive run-times (GPU ≤ 5 min). The resulting standard deviation (expectation value) of dose shows average global γ(3%/3 mm) pass rates between 94.2% and 99.9% (98.4% and 100.0%). All investigated importance sampling strategies provided less accuracy at higher run-times considering only a single fraction. Considering fractionation, APM uncertainty propagation and treatment plan optimization was proven to be possible at constant time complexity.
DEFF Research Database (Denmark)
Hong, Jinglan; Shaked, Shanna; Rosenbaum, Ralph K.
2010-01-01
to develop and apply to both inventory and impact assessment an explicit and transparent analytical approach to uncertainty. This approach applies Taylor series expansions to the uncertainty propagation of lognormally distributed parameters. Materials and methods We first apply the Taylor series expansion...... determine a range and a best estimate of a) the squared geometric standard deviation on the ratio of the two scenario scores, "A/B", and b) the degree of confidence in the prediction that the impact of scenario A is lower than that of B (i.e., the probability that A/B < 1 is higher than 75%). For the aluminum panel, the electricity...... and aluminum primary production, as well as the light oil consumption, are the dominant contributors to the uncertainty. The developed approach for scenario comparisons, differentiating between common and independent parameters, leads to results similar to those of a Monte Carlo analysis; for all tested cases......
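The Taylor-series treatment of lognormal parameters can be illustrated with a toy case. The sketch below assumes a purely multiplicative score, the situation where the expansion is exact: the log-variances of independent lognormal factors add, giving the squared geometric standard deviation (GSD²) of the output in closed form. The parameter names and numbers are made up for illustration, not taken from the study.

```python
import numpy as np

# Hypothetical lognormal inputs given as (median, squared geometric std dev)
params = {"electricity": (2.0, 1.5**2), "light_oil": (0.8, 2.0**2)}

# Purely multiplicative score: ln(GSD)^2 terms of independent factors add
ln_var = sum(np.log(g2) ** 2 / 4 for _, g2 in params.values())
gsd2_out = np.exp(2 * np.sqrt(ln_var))        # squared GSD of the output score

# Monte Carlo cross-check of the analytical result
rng = np.random.default_rng(1)
score = np.ones(100_000)
for med, g2 in params.values():
    score *= rng.lognormal(np.log(med), 0.5 * np.log(g2), 100_000)
gsd2_mc = np.exp(2 * np.log(score).std())
```

For sums of lognormal contributions (the usual LCA case) the same machinery applies approximately, via a first-order expansion around the means.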
Analytical Propagation of Uncertainty in Life Cycle Assessment Using Matrix Formulation
DEFF Research Database (Denmark)
Imbeault-Tétreault, Hugues; Jolliet, Olivier; Deschênes, Louise
2013-01-01
with Monte Carlo results. The sensitivity and contribution of input parameters to output uncertainty were also analytically calculated. This article outlines an uncertainty analysis of the comparison between two case study scenarios. We conclude that the analytical method provides a good approximation...... on uncertainty calculation. This article shows the importance of the analytical method in uncertainty calculation, which could lead to a more complete uncertainty analysis in LCA practice....... uncertainty assessment is not a regular step in LCA. An analytical approach based on Taylor series expansion constitutes an effective means to overcome the drawbacks of the Monte Carlo method. This project aimed to test the approach on a real case study, and the resulting analytical uncertainty was compared...
Uncertainty Propagation in OMFIT
Smith, Sterling; Meneghini, Orso; Sung, Choongki
2017-10-01
A rigorous comparison of power balance fluxes and turbulent model fluxes requires the propagation of uncertainties in the kinetic profiles and their derivatives. Making extensive use of the python uncertainties package, the OMFIT framework has been used to propagate covariant uncertainties to provide an uncertainty in the power balance calculation from the ONETWO code, as well as through the turbulent fluxes calculated by the TGLF code. The covariant uncertainties arise from fitting 1D (constant on flux surface) density and temperature profiles and associated random errors with parameterized functions such as a modified tanh. The power balance and model fluxes can then be compared with quantification of the uncertainties. No effort is made at propagating systematic errors. A case study will be shown for the effects of resonant magnetic perturbations on the kinetic profiles and fluxes at the top of the pedestal. A separate attempt at modeling the random errors with Monte Carlo sampling will be compared to the method of propagating the fitting function parameter covariant uncertainties. Work supported by US DOE under DE-FC02-04ER54698, DE-FG2-95ER-54309, DE-SC 0012656.
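The covariant propagation step described above can be sketched with plain numpy: linearize the fitted profile around the parameter vector and form diag(J C Jᵀ) from the fit covariance. The profile shape, parameter values and covariance below are hypothetical stand-ins, not the actual OMFIT fit; in practice the python uncertainties package (e.g. its `correlated_values` helper) carries the covariances through automatically.

```python
import numpy as np

def mtanh(psi, core, ped, pos, width):
    # Simplified "modified tanh" pedestal profile (illustrative form only)
    return ped + 0.5 * (core - ped) * (1 + np.tanh((pos - psi) / width))

theta = np.array([3.0, 0.5, 0.95, 0.04])          # hypothetical fitted parameters
cov = np.diag([0.05, 0.02, 0.005, 0.004]) ** 2    # parameter covariance from the fit
cov[0, 1] = cov[1, 0] = 0.3 * 0.05 * 0.02         # assumed core/ped correlation

psi = np.linspace(0.85, 1.0, 4)                   # evaluation points near the pedestal

# Finite-difference Jacobian dT/dtheta, then sigma^2 = diag(J C J^T)
eps = 1e-6
J = np.empty((psi.size, theta.size))
for j in range(theta.size):
    d = np.zeros_like(theta)
    d[j] = eps
    J[:, j] = (mtanh(psi, *(theta + d)) - mtanh(psi, *(theta - d))) / (2 * eps)
sigma_T = np.sqrt(np.einsum('ij,jk,ik->i', J, cov, J))
```

The resulting profile uncertainty band is what gets pushed through the power balance and flux calculations.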
Methodologies of Uncertainty Propagation Calculation
International Nuclear Information System (INIS)
Chojnacki, Eric
2002-01-01
After recalling the theoretical principles and the practical difficulties of uncertainty propagation methodologies, the author discussed how to propagate input uncertainties. He distinguished two kinds of input uncertainty: variability (uncertainty due to heterogeneity) and lack of knowledge (uncertainty due to ignorance). It is therefore necessary to use two different propagation methods. He demonstrated this in a simple example, which he then generalised, treating the variability uncertainty with probability theory and the lack-of-knowledge uncertainty with fuzzy theory. He cautioned, however, against the systematic use of probability theory, which may lead to unjustifiably and illegitimately precise answers. Mr Chojnacki's conclusions were that the importance of distinguishing variability from lack of knowledge increases as the problem becomes more complex in terms of the number of parameters or time steps, and that it is necessary to develop uncertainty propagation methodologies combining probability theory and fuzzy theory.
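The two-channel propagation the author advocates can be sketched as follows: the variability input is sampled by Monte Carlo, while the lack-of-knowledge input is a triangular fuzzy number propagated by alpha-cuts. The model, numbers and the choice of the 95th percentile as output quantity are all illustrative assumptions.

```python
import numpy as np

def model(v, k):          # toy model: variability v times poorly known factor k
    return v * k

# Variability (aleatory): probability distribution, sampled by Monte Carlo
rng = np.random.default_rng(2)
v = rng.normal(10.0, 1.0, 50_000)

# Lack of knowledge (epistemic): triangular fuzzy number k = (low, mode, high)
low, mode, high = 1.0, 1.5, 2.5
alphas = np.linspace(0, 1, 11)
bands = []
for a in alphas:
    k_lo = low + a * (mode - low)          # alpha-cut of the fuzzy interval
    k_hi = high - a * (high - mode)
    # model is increasing in k, so the cut maps directly to an output interval
    bands.append((np.quantile(model(v, k_lo), 0.95),
                  np.quantile(model(v, k_hi), 0.95)))
bands = np.array(bands)   # fuzzy interval of the 95th percentile, per alpha level
```

The result is not a single precise number but a nested family of intervals, which is exactly the "honest" answer the author argues for when knowledge is lacking.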
Propagation of dynamic measurement uncertainty
International Nuclear Information System (INIS)
Hessling, J P
2011-01-01
The time-dependent measurement uncertainty has been evaluated in a number of recent publications, starting from a known uncertain dynamic model. This could be defined as the 'downward' propagation of uncertainty from the model to the targeted measurement. The propagation of uncertainty 'upward' from the calibration experiment to a dynamic model traditionally belongs to system identification. The use of different representations (time, frequency, etc) is ubiquitous in dynamic measurement analyses. An expression of uncertainty in dynamic measurements is formulated for the first time in this paper independently of representation, joining upward as well as downward propagation. For applications in metrology, the high quality of the characterization may prevent any reasonably large and robust model from passing the whiteness test. This test is therefore relaxed by not directly requiring small systematic model errors in comparison to the randomness of the characterization. Instead, the systematic error of the dynamic model is propagated to the uncertainty of the measurand, analogously but differently to how stochastic contributions are propagated. The pass criterion of the model is thereby transferred from the identification to acceptance of the total accumulated uncertainty of the measurand. This increases the relevance of the test of the model, as it relates to its final use rather than the quality of the calibration. The propagation of uncertainty hence includes the propagation of systematic model errors. For illustration, the 'upward' propagation of uncertainty is applied to determine if an appliance box is damaged in an earthquake experiment. In this case, relaxation of the whiteness test was required to reach a conclusive result.
Simplified propagation of standard uncertainties
International Nuclear Information System (INIS)
Shull, A.H.
1997-01-01
An essential part of any measurement control program is adequate knowledge of the uncertainties of the measurement system standards. Only with an estimate of the standards' uncertainties can one determine if the standard is adequate for its intended use or calculate the total uncertainty of the measurement process. Purchased standards usually have estimates of uncertainty on their certificates. However, when standards are prepared and characterized by a laboratory, variance propagation is required to estimate the uncertainty of the standard. Traditional variance propagation typically involves tedious use of partial derivatives, unfriendly software and the availability of statistical expertise. As a result, the uncertainty of prepared standards is often not determined, or determined incorrectly. For situations meeting stated assumptions, easier shortcut methods of estimation are now available which eliminate the need for partial derivatives and require only a spreadsheet or calculator. A system of simplifying the calculations by dividing them into subgroups of absolute and relative uncertainties is utilized. These methods also incorporate the International Organization for Standardization (ISO) concepts for combining systematic and random uncertainties as published in its Guide to the Expression of Uncertainty in Measurement. Details of the simplified methods and examples of their use are included in the paper.
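The absolute/relative grouping amounts to this: for a multiplicative model, relative standard uncertainties combine in quadrature, with no partial derivatives needed. A minimal sketch for a hypothetical prepared standard (mass, purity and volume values are invented):

```python
import math

# Preparing a reference standard: concentration c = m * P / V
m, u_m = 100.0, 0.05      # solute mass [mg] and its standard uncertainty
P, u_P = 0.999, 0.0005    # purity (mass fraction) and its uncertainty
V, u_V = 1.0, 0.002       # solution volume [L] and its uncertainty

c = m * P / V             # concentration [mg/L]

# Multiplicative model: relative uncertainties combine in quadrature
rel = math.sqrt((u_m / m) ** 2 + (u_P / P) ** 2 + (u_V / V) ** 2)
u_c = c * rel             # back to an absolute standard uncertainty
```

For additive steps (e.g. tare and gross weighings) the absolute variances are summed instead; the shortcut is to keep each subgroup in whichever form makes its terms simply additive.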
Uncertainty propagation in nuclear forensics
International Nuclear Information System (INIS)
Pommé, S.; Jerome, S.M.; Venchiarutti, C.
2014-01-01
Uncertainty propagation formulae are presented for age dating in support of nuclear forensics. The age of radioactive material in this context refers to the time elapsed since a particular radionuclide was chemically separated from its decay product(s). The decay of the parent radionuclide and ingrowth of the daughter nuclide are governed by statistical decay laws. Mathematical equations allow calculation of the age of specific nuclear material through the atom ratio between parent and daughter nuclides, or through the activity ratio provided that the daughter nuclide is also unstable. The derivation of the uncertainty formulae of the age may present some difficulty to the user community and so the exact solutions, some approximations, a graphical representation and their interpretation are presented in this work. Typical nuclides of interest are actinides in the context of non-proliferation commitments. The uncertainty analysis is applied to a set of important parent–daughter pairs and the need for more precise half-life data is examined. - Highlights: • Uncertainty propagation formulae for age dating with nuclear chronometers. • Applied to parent–daughter pairs used in nuclear forensics. • Investigated need for better half-life data
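The kind of propagation described can be sketched numerically. The sketch below assumes the standard Bateman relation for a daughter grown in from a freshly separated parent, inverts it for the age, and propagates the uncertainties of the atom ratio and both decay constants by first-order (central-difference) derivatives. The decay constants and uncertainties are hypothetical, not real nuclide data, and the paper's exact closed-form formulae are not reproduced here.

```python
import math

# Bateman ingrowth from a freshly separated parent:
#   R(t) = N_d/N_p = lp/(ld - lp) * (1 - exp((lp - ld) * t))
# which inverts to t = ln(1 - R*(ld - lp)/lp) / (lp - ld).
def age(R, lp, ld):
    return math.log(1.0 - R * (ld - lp) / lp) / (lp - ld)

# Hypothetical chronometer values (decay constants in 1/yr)
lp, u_lp = 1.0e-5, 1.0e-8     # parent decay constant and its uncertainty
ld, u_ld = 3.0e-2, 3.0e-5     # daughter decay constant and its uncertainty
R, u_R = 2.0e-4, 1.0e-6       # measured atom ratio N_d/N_p

t = age(R, lp, ld)            # age since separation [yr]

# First-order propagation with central-difference partial derivatives
def partial(i, h):
    lo, hi = [R, lp, ld], [R, lp, ld]
    lo[i] -= h
    hi[i] += h
    return (age(*hi) - age(*lo)) / (2.0 * h)

u_t = math.sqrt((partial(0, 1e-9) * u_R) ** 2 +
                (partial(1, 1e-11) * u_lp) ** 2 +
                (partial(2, 1e-7) * u_ld) ** 2)
```

Evaluating the decay-constant terms separately, as in the paper's sensitivity analysis, shows directly whether better half-life data would tighten the age.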
Uncertainty Propagation in an Ecosystem Nutrient Budget.
New aspects and advancements in classical uncertainty propagation methods were used to develop a nutrient budget with associated error for a northern Gulf of Mexico coastal embayment. Uncertainty was calculated for budget terms by propagating the standard error and degrees of fr...
Uncertainty and its propagation in dynamics models
International Nuclear Information System (INIS)
Devooght, J.
1994-01-01
The purpose of this paper is to bring together some characteristics of uncertainty that arise when dealing with dynamic models, and hence with the propagation of uncertainty. The respective roles of uncertainty and inaccuracy are examined. A mathematical formalism based on the Chapman-Kolmogorov equation allows one to define a "subdynamics" where the evolution equation takes the uncertainty into account. The problem of choosing or combining models is examined through a loss function associated with a decision.
Uncertainty Propagation in Monte Carlo Depletion Analysis
International Nuclear Information System (INIS)
Shim, Hyung Jin; Kim, Yeong-il; Park, Ho Jin; Joo, Han Gyu; Kim, Chang Hyo
2008-01-01
A new formulation is presented, aimed at quantifying the uncertainties of Monte Carlo (MC) tallies such as k_eff, the microscopic reaction rates of nuclides and nuclide number densities in MC depletion analysis, and at examining their propagation behaviour as a function of the depletion time step (DTS). It is shown that the variance of a given MC tally, used as a measure of its uncertainty in this formulation, arises from four sources: the statistical uncertainty of the MC tally, uncertainties of microscopic cross sections, uncertainties of nuclide number densities, and the cross correlations between them; the contribution of the latter three sources can be determined by computing the correlation coefficients between the uncertain variables. It is also shown that the variance of any given nuclide number density at the end of each DTS stems from uncertainties of the nuclide number densities (NND) and microscopic reaction rates (MRR) of nuclides at the beginning of each DTS, and these are determined by computing correlation coefficients between the two uncertain variables. To test the viability of the formulation, we conducted MC depletion analysis for two sample depletion problems involving a simplified 7x7 fuel assembly (FA) and a 17x17 PWR FA, determined the number densities of uranium and plutonium isotopes and their variances, as well as k_∞ and its variance, as a function of DTS, and demonstrated the applicability of the new formulation to the uncertainty propagation analysis that needs to be carried out in MC depletion computations. (authors)
Stochastic and epistemic uncertainty propagation in LCA
DEFF Research Database (Denmark)
Clavreul, Julie; Guyonnet, Dominique; Tonini, Davide
2013-01-01
of epistemic uncertainty representation using fuzzy intervals. The propagation methods used are the Monte Carlo analysis for probability distribution and an optimisation on alpha-cuts for fuzzy intervals. The proposed method (noted as Independent Random Set, IRS) generalizes the process of random sampling...... to probability distributions as well as fuzzy intervals, thus making the simultaneous use of both representations possible.The results highlight the fundamental difference between the probabilistic and possibilistic representations: while the Monte Carlo analysis generates a single probability distribution...... or expert judgement (epistemic uncertainty). The possibility theory has been developed over the last decades to address this problem. The objective of this study is to present a methodology that combines probability and possibility theories to represent stochastic and epistemic uncertainties in a consistent...
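The Independent Random Set idea of sampling probability distributions and fuzzy intervals simultaneously can be sketched as follows. Each draw pairs a random value of the stochastic input with a random alpha-cut of the fuzzy input, so every sample yields an output interval rather than a point; the interval end-points give lower/upper probability bounds. The model and all numbers are illustrative assumptions, not the study's inventory.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20_000

# Stochastic input (aleatory): lognormal emission factor
e = rng.lognormal(0.0, 0.3, n)

# Epistemic input: triangular fuzzy number (low, mode, high)
low, mode, high = 2.0, 3.0, 5.0
a = rng.uniform(0, 1, n)                   # random alpha level per sample
k_lo = low + a * (mode - low)              # alpha-cut lower bound
k_hi = high - a * (high - mode)            # alpha-cut upper bound

# Model y = e * k is increasing in k: propagate the interval end-points
y_lo, y_hi = e * k_lo, e * k_hi            # one output interval per sample

# Bounds on P(Y <= threshold): belief uses y_hi, plausibility uses y_lo
thr = 4.0
p_lower = np.mean(y_hi <= thr)             # belief (lower probability)
p_upper = np.mean(y_lo <= thr)             # plausibility (upper probability)
```

Instead of one cumulative distribution, the output is the pair (p_lower, p_upper): the gap between them is the footprint of the epistemic (fuzzy) input, which a pure Monte Carlo treatment would hide.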
Uncertainty Propagation in Hypersonic Vehicle Aerothermoelastic Analysis
Lamorte, Nicolas Etienne
Hypersonic vehicles face a challenging flight environment. The aerothermoelastic analysis of its components requires numerous simplifying approximations. Identifying and quantifying the effect of uncertainties pushes the limits of the existing deterministic models, and is pursued in this work. An uncertainty quantification framework is used to propagate the effects of identified uncertainties on the stability margins and performance of the different systems considered. First, the aeroelastic stability of a typical section representative of a control surface on a hypersonic vehicle is examined. Variability in the uncoupled natural frequencies of the system is modeled to mimic the effect of aerodynamic heating. Next, the stability of an aerodynamically heated panel representing a component of the skin of a generic hypersonic vehicle is considered. Uncertainty in the location of transition from laminar to turbulent flow and the heat flux prediction is quantified using CFD. In both cases significant reductions of the stability margins are observed. A loosely coupled airframe--integrated scramjet engine is considered next. The elongated body and cowl of the engine flow path are subject to harsh aerothermodynamic loading which causes it to deform. Uncertainty associated with deformation prediction is propagated to the engine performance analysis. The cowl deformation is the main contributor to the sensitivity of the propulsion system performance. Finally, a framework for aerothermoelastic stability boundary calculation for hypersonic vehicles using CFD is developed. The usage of CFD enables one to consider different turbulence conditions, laminar or turbulent, and different models of the air mixture, in particular real gas model which accounts for dissociation of molecules at high temperature. The system is found to be sensitive to turbulence modeling as well as the location of the transition from laminar to turbulent flow. Real gas effects play a minor role in the
Uncertainty propagation in probabilistic risk assessment: A comparative study
International Nuclear Information System (INIS)
Ahmed, S.; Metcalf, D.R.; Pegram, J.W.
1982-01-01
Three uncertainty propagation techniques, namely method of moments, discrete probability distribution (DPD), and Monte Carlo simulation, generally used in probabilistic risk assessment, are compared and conclusions drawn in terms of the accuracy of the results. For small uncertainty in the basic event unavailabilities, the three methods give similar results. For large uncertainty, the method of moments is in error, and the appropriate method is to propagate uncertainty in the discrete form either by DPD method without sampling or by Monte Carlo. (orig.)
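The small-uncertainty agreement between the method of moments and Monte Carlo is easy to reproduce on a toy AND gate, where the top event is a product of basic-event unavailabilities and the moments are available in closed form. The lognormal medians and geometric standard deviation below are illustrative, not from the paper.

```python
import numpy as np

# Two independent lognormal basic-event unavailabilities (median, GSD)
mu1, sg1 = np.log(1e-3), np.log(1.2)     # small-uncertainty case
mu2, sg2 = np.log(2e-3), np.log(1.2)

# AND gate: Q = q1 * q2. Method of moments (exact for a product):
mean_mom = np.exp(mu1 + sg1**2 / 2) * np.exp(mu2 + sg2**2 / 2)
second = np.exp(2 * mu1 + 2 * sg1**2) * np.exp(2 * mu2 + 2 * sg2**2)
var_mom = second - mean_mom**2

# Monte Carlo reference
rng = np.random.default_rng(4)
q = rng.lognormal(mu1, sg1, 300_000) * rng.lognormal(mu2, sg2, 300_000)
```

With large GSDs the first two moments stop characterizing the skewed top-event distribution well, which is the regime where the paper recommends DPD or Monte Carlo instead.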
Uncertainty propagation in urban hydrology water quality modelling
Torres Matallana, Arturo; Leopold, U.; Heuvelink, G.B.M.
2016-01-01
Uncertainty is often ignored in urban hydrology modelling. Engineering practice typically ignores uncertainties and uncertainty propagation. This can have large impacts, such as the wrong dimensioning of urban drainage systems and the inaccurate estimation of pollution in the environment caused
Quantifying uncertainty in nuclear analytical measurements
International Nuclear Information System (INIS)
2004-07-01
The lack of international consensus on the expression of uncertainty in measurements was recognised by the late 1970s and led, after the issuance of a series of rather generic recommendations, to a general publication, known as GUM, the Guide to the Expression of Uncertainty in Measurement. This publication, issued in 1993, was based on co-operation over several years by the Bureau International des Poids et Mesures, the International Electrotechnical Commission, the International Federation of Clinical Chemistry, the International Organization for Standardization (ISO), the International Union of Pure and Applied Chemistry, the International Union of Pure and Applied Physics and the Organisation internationale de metrologie legale. The purpose was to promote full information on how uncertainty statements are arrived at and to provide a basis for harmonized reporting and the international comparison of measurement results. The need to provide more specific guidance to different measurement disciplines was soon recognized, and the field of analytical chemistry was addressed by EURACHEM in 1995 in the first edition of a guidance report on Quantifying Uncertainty in Analytical Measurements, produced by a group of experts from the field. That publication translated the general concepts of the GUM into specific applications for analytical laboratories and illustrated the principles with a series of selected examples as a didactic tool. Based on feedback from actual practice, the EURACHEM publication was extensively reviewed in 1997-1999 under the auspices of the Co-operation on International Traceability in Analytical Chemistry (CITAC), and a second edition was published in 2000. Still, except for a single example on the measurement of radioactivity in GUM, the field of nuclear and radiochemical measurements was not covered. The explicit requirement of ISO standard 17025:1999, General Requirements for the Competence of Testing and Calibration
Manufacturing Data Uncertainties Propagation Method in Burn-Up Problems
Directory of Open Access Journals (Sweden)
Thomas Frosio
2017-01-01
A nuclear data-based uncertainty propagation methodology is extended to enable the propagation of manufacturing/technological data (TD) uncertainties in a burn-up calculation problem, taking into account correlation terms between Boltzmann and Bateman terms. The methodology is applied to reactivity and power distributions in a Material Testing Reactor benchmark. Due to the inherent statistical behavior of manufacturing tolerances, a Monte Carlo sampling method is used for determining output perturbations on integral quantities. A global sensitivity analysis (GSA) is performed for each manufacturing parameter and allows identifying and ranking the influential parameters whose tolerances need to be better controlled. We show that the overall impact of some TD uncertainties, such as uranium enrichment or fuel plate thickness, on the reactivity is negligible because the different core areas induce compensating effects on the global quantity. However, local quantities, such as power distributions, are strongly impacted by TD uncertainty propagation. For isotopic concentrations, no clear trends appear in the results.
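The sample-and-rank workflow can be sketched with a linearized toy response in place of the coupled Boltzmann/Bateman calculation. The tolerance values and sensitivity coefficients below are hypothetical; the ranking step uses standardized regression coefficients (SRC), one common GSA estimator for near-linear responses.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 5_000

# Hypothetical manufacturing tolerances: enrichment, plate thickness, density
X = np.column_stack([
    rng.normal(19.75, 0.05, n),    # U-235 enrichment [%]
    rng.normal(1.27, 0.01, n),     # plate thickness [mm]
    rng.normal(10.3, 0.05, n),     # fuel density [g/cm3]
])

# Toy linear response standing in for the neutronics/depletion calculation
y = (1.0 + 0.004 * (X[:, 0] - 19.75)
         - 0.020 * (X[:, 1] - 1.27)
         + 0.010 * (X[:, 2] - 10.3)
         + rng.normal(0, 5e-5, n))          # small numerical noise

# Standardized regression coefficients rank the influential parameters
Z = (X - X.mean(0)) / X.std(0)
design = np.column_stack([np.ones(n), Z])
beta, *_ = np.linalg.lstsq(design, (y - y.mean()) / y.std(), rcond=None)
src = beta[1:]                              # one coefficient per parameter
```

For a near-linear model the squared SRCs sum to roughly one and apportion the output variance among the tolerances, which is exactly the "which tolerances need tighter control" question.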
Calibration and Propagation of Uncertainty for Independence
Energy Technology Data Exchange (ETDEWEB)
Holland, Troy Michael [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Kress, Joel David [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Bhat, Kabekode Ghanasham [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-06-30
This document reports on progress and methods for the calibration and uncertainty quantification of the Independence model developed at UT Austin. The Independence model is an advanced thermodynamic and process model framework for piperazine solutions as a high-performance CO2 capture solvent. Progress is presented within the CCSI standard basic data model inference framework. Recent work has largely focused on the thermodynamic submodels of Independence.
New challenges on uncertainty propagation assessment of flood risk analysis
Martins, Luciano; Aroca-Jiménez, Estefanía; Bodoque, José M.; Díez-Herrero, Andrés
2016-04-01
Natural hazards such as floods cause considerable damage to human life and to material and functional assets every year, around the world. Risk assessment procedures carry a set of uncertainties, mainly of two types: natural, derived from the stochastic character inherent in flood process dynamics; and epistemic, associated with lack of knowledge or with inadequate procedures employed in the study of these processes. There is abundant scientific and technical literature on uncertainty estimation in each step of flood risk analysis (e.g. rainfall estimates, hydraulic modelling variables), but very little experience on the propagation of uncertainties along the whole flood risk assessment. Epistemic uncertainties are therefore the main goal of this work: in particular, understanding the extent of the propagation of uncertainties throughout the process, from inundation studies to risk analysis, and how far it affects a proper analysis of the risk of flooding. Methodologies such as Polynomial Chaos Theory (PCT), the Method of Moments or Monte Carlo are used to evaluate different sources of error, such as data records (precipitation gauges, flow gauges...), hydrologic and hydraulic modelling (inundation estimation) and socio-demographic data (damage estimation), in order to evaluate the uncertainty propagation (UP) in design flood risk estimation, in both numerical and cartographic expression. In order to account for the total uncertainty and understand which factors contribute most to the final uncertainty, we used Polynomial Chaos Theory (PCT). It represents an interesting way to handle the inclusion of uncertainty in the modelling and simulation process: PCT allows the development of a probabilistic model of the system in a deterministic setting, using random variables and polynomials to handle the effects of uncertainty. Results of the method's application show better robustness than those of traditional analysis.
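A one-dimensional polynomial chaos sketch shows the mechanics: expand a response of a standard normal input on Hermite polynomials by quadrature projection, then read the mean and variance directly off the coefficients. The exponential response is a toy stand-in (chosen because its exact moments are known), not the flood risk model.

```python
import math
import numpy as np

# 1-D polynomial chaos: y = f(xi), xi ~ N(0,1), expanded on Hermite He_k
def f(xi):                         # toy response (illustrative only)
    return np.exp(0.3 * xi)

order = 6
nodes, weights = np.polynomial.hermite_e.hermegauss(30)  # probabilists' rule
weights = weights / np.sqrt(2 * np.pi)                   # normalize to N(0,1)

# Projection: c_k = E[f(xi) * He_k(xi)] / k!  (He_k orthogonal with norm k!)
coeff = []
for k in range(order + 1):
    He_k = np.polynomial.hermite_e.hermeval(nodes, [0.0] * k + [1.0])
    coeff.append(np.sum(weights * f(nodes) * He_k) / math.factorial(k))

# Mean and variance come straight from the coefficients, no sampling needed
mean_pce = coeff[0]
var_pce = sum(math.factorial(k) * coeff[k] ** 2 for k in range(1, order + 1))
```

The deterministic quadrature replaces thousands of Monte Carlo model runs, which is the computational appeal of PCT for expensive hydraulic models.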
International Nuclear Information System (INIS)
Abrahamse, Augusta; Knox, Lloyd; Schmidt, Samuel; Thorman, Paul; Anthony Tyson, J.; Zhan Hu
2011-01-01
The uncertainty in the redshift distributions of galaxies has a significant potential impact on the cosmological parameter values inferred from multi-band imaging surveys. The accuracy of the photometric redshifts measured in these surveys depends not only on the quality of the flux data, but also on a number of modeling assumptions that enter into both the training set and spectral energy distribution (SED) fitting methods of photometric redshift estimation. In this work we focus on the latter, considering two types of modeling uncertainties: uncertainties in the SED template set and uncertainties in the magnitude and type priors used in a Bayesian photometric redshift estimation method. We find that SED template selection effects dominate over magnitude prior errors. We introduce a method for parameterizing the resulting ignorance of the redshift distributions, and for propagating these uncertainties to uncertainties in cosmological parameters.
Quantile arithmetic methodology for uncertainty propagation in fault trees
International Nuclear Information System (INIS)
Abdelhai, M.; Ragheb, M.
1986-01-01
A methodology based on quantile arithmetic, the probabilistic analog of interval analysis, is proposed for the computation of uncertainty propagation in fault tree analysis. The basic events' continuous probability density functions (pdf's) are represented by equivalent discrete distributions obtained by dividing them into a number of quantiles N. Quantile arithmetic is then used to perform the binary arithmetical operations corresponding to the logical gates in the Boolean expression of the top event of a given fault tree. The computational advantage of the present methodology as compared with the widely used Monte Carlo method was demonstrated for the case of summation of M normal variables through the efficiency ratio, defined as the product of the labor and error ratios. The efficiency ratio values obtained by the suggested methodology for M = 2 were 2279 for N = 5, 445 for N = 25, and 66 for N = 45 when compared with the results for 19,200 Monte Carlo samples at the 40th percentile point. Another advantage of the approach is that the exact analytical value of the median is always obtained for the top event.
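A minimal sketch of one quantile-arithmetic step, for the paper's sum-of-normals test case: each pdf is discretized into N equiprobable quantile points, the binary operation is applied to all N² pairs (each carrying probability 1/N²), and the result is re-discretized back to N quantiles. The distributions and N = 25 are illustrative choices.

```python
import numpy as np

def to_quantiles(samples, N):
    # Represent a pdf by N equiprobable quantile points (midpoint rule)
    return np.quantile(samples, (np.arange(N) + 0.5) / N)

rng = np.random.default_rng(6)
N = 25
x = to_quantiles(rng.normal(10, 2, 100_000), N)   # first operand's pdf
y = to_quantiles(rng.normal(5, 1, 100_000), N)    # second operand's pdf

# Binary operation (here +): combine all quantile pairs, prob 1/N^2 each,
# then re-discretize the result back to N quantiles for the next gate
pairs = (x[:, None] + y[None, :]).ravel()
z = to_quantiles(pairs, N)

median = z[N // 2]          # middle quantile approximates the median
```

For AND/OR gates the `+` is simply replaced by the product or the 1-(1-a)(1-b) combination, and the re-discretized table feeds the next gate up the tree.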
Stochastic Systems Uncertainty Quantification and Propagation
Grigoriu, Mircea
2012-01-01
Uncertainty is an inherent feature of both the properties of physical systems and the inputs to these systems, and it needs to be quantified for cost-effective and reliable designs. The states of these systems satisfy equations with random entries, referred to as stochastic equations, so that they are random functions of time and/or space. The solution of stochastic equations poses notable technical difficulties that are frequently circumvented by heuristic assumptions at the expense of accuracy and rigor. The main objective of Stochastic Systems is to promote the development of accurate and efficient methods for solving stochastic equations and to foster interactions between engineers, scientists, and mathematicians. To achieve these objectives Stochastic Systems presents: · A clear and brief review of essential concepts on probability theory, random functions, stochastic calculus, Monte Carlo simulation, and functional analysis · Probabilistic models for random variables an...
Investigation of Free Particle Propagator with Generalized Uncertainty Problem
International Nuclear Information System (INIS)
Hassanabadi, H.; Ghobakhloo, F.
2016-01-01
We consider the Schrödinger equation with a generalized uncertainty principle for a free particle. We then transform the problem into a second-order ordinary differential equation and thereby obtain the corresponding propagator. The result of ordinary quantum mechanics is recovered for vanishing minimal length parameter.
Assessing performance of flaw characterization methods through uncertainty propagation
Miorelli, R.; Le Bourdais, F.; Artusi, X.
2018-04-01
In this work, we assess the inversion performance, in terms of crack characterization and localization, on synthetic signals associated with ultrasonic and eddy current physics. More precisely, two different standard iterative inversion algorithms are used to minimize the discrepancy between measurements (i.e., the tested data) and simulations. Furthermore, in order to speed up the computation and remove the computational burden often associated with iterative inversion algorithms, we replace the standard forward solver by a suitable metamodel fitted to a database built offline. In a second step, we assess the inversion performance by adding uncertainties on a subset of the database parameters and then, through the metamodel, propagating these uncertainties within the inversion procedure. The fast propagation of uncertainties enables efficient evaluation of the impact of the lack of knowledge of some parameters used to describe the inspection scenarios, a situation commonly encountered in the industrial NDE context.
Use of probability tables for propagating uncertainties in neutronics
International Nuclear Information System (INIS)
Coste-Delclaux, M.; Diop, C.M.; Lahaye, S.
2017-01-01
Highlights: • The moment-based probability table formalism is described. • Representation of any uncertainty distribution by probability tables is established. • Multiband equations for two kinds of uncertainty propagation problems are solved. • Numerical examples are provided and validated against Monte Carlo simulations. - Abstract: Probability tables are a generic tool that allows one to represent any random variable whose probability density function is known. In the field of nuclear reactor physics, this tool is currently used to represent the variation of cross-sections versus energy (neutron transport codes TRIPOLI4®, MCNP, APOLLO2, APOLLO3®, ECCO/ERANOS…). In the present article we show how, thanks to a probability table representation, we can propagate uncertainties through two simple physical problems: an eigenvalue problem (neutron multiplication factor) and a depletion problem.
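The simplest moment-based probability table is two points: values m ± s with probability 1/2 each reproduce the mean and variance of any input pdf exactly. Propagation through a model is then just the law of total probability over the table. The cross-section distribution and the rational k_eff response below are toy assumptions, not the article's multiband equations.

```python
import numpy as np

# Uncertain input: a capture cross-section, here drawn from an assumed normal
rng = np.random.default_rng(7)
sigma_c = rng.normal(2.0, 0.1, 200_000)

# Two-point, moment-preserving probability table for the input pdf
m, s = sigma_c.mean(), sigma_c.std()
table = [(0.5, m - s), (0.5, m + s)]         # matches mean and variance exactly

def k_eff(sc):                               # toy multiplication-factor model
    return 2.4 / (2.4 + sc)

# Propagate through the model by total probability over the table
k_mean_tab = sum(p * k_eff(v) for p, v in table)
k_var_tab = sum(p * k_eff(v) ** 2 for p, v in table) - k_mean_tab ** 2

# Monte Carlo reference for comparison
k_mc = k_eff(sigma_c)
```

More bands (more table entries, matching higher moments) tighten the agreement for strongly nonlinear responses; two evaluations already reproduce the Monte Carlo mean and variance here to better than a percent.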
Uncertainty propagation through dynamic models of assemblies of mechanical structures
International Nuclear Information System (INIS)
Daouk, Sami
2016-01-01
When studying the behaviour of mechanical systems, mathematical models and structural parameters are usually considered deterministic. Experience shows, however, that these elements are uncertain in most cases, due to natural variability or lack of knowledge. Therefore, quantifying the quality and reliability of the numerical model of an industrial assembly remains a major question in low-frequency dynamics. The purpose of this thesis is to improve the vibratory design of bolted assemblies by setting up a dynamic connector model that accounts for different types and sources of uncertainty on stiffness parameters, in a way that is simple, efficient and exploitable in an industrial context. This work has been carried out in the framework of the SICODYN project, led by EDF R&D, which aims to characterise and quantify, numerically and experimentally, the uncertainties in the dynamic behaviour of bolted industrial assemblies. Comparative studies of several numerical methods of uncertainty propagation demonstrate the advantage of using the Lack-Of-Knowledge theory. An experimental characterisation of uncertainties in bolted structures is performed on a dynamic test rig and on an industrial assembly. The propagation of many small and large uncertainties through different dynamic models of mechanical assemblies leads to an assessment of the efficiency of the Lack-Of-Knowledge theory and of its applicability in an industrial environment. (author)
Analysis of uncertainty propagation in nuclear fuel cycle scenarios
International Nuclear Information System (INIS)
Krivtchik, Guillaume
2014-01-01
Nuclear scenario studies model the evolution of a nuclear fleet over a given period. They enable the comparison of different options for the reactor fleet evolution, and for the management of the future fuel cycle materials, from mining to disposal, based on criteria such as installed capacity per reactor technology and mass inventories and flows, in the fuel cycle and in the waste. Uncertainties associated with nuclear data and scenario parameters (fuel, reactor and facility characteristics) propagate along the isotopic chains in depletion calculations, and throughout the scenario history, which reduces the precision of the results. The aim of this work is to develop, implement and use a stochastic uncertainty propagation methodology adapted to scenario studies. The chosen method is based on the development of surrogate models of the depletion computation, which reduce the computation time of scenario studies and whose parameters include perturbations of the depletion model, and on the construction of equivalence models which take cross-section perturbations into account for the computation of fresh fuel enrichment. The uncertainty propagation methodology is then applied to different scenarios of interest, considering different options of evolution for the French PWR fleet with SFR deployment. (author) [fr]
Uncertainty propagation for statistical impact prediction of space debris
Hoogendoorn, R.; Mooij, E.; Geul, J.
2018-01-01
Predictions of the impact time and location of space debris in a decaying trajectory are highly influenced by uncertainties. The traditional Monte Carlo (MC) method can be used to perform accurate statistical impact predictions, but requires a large computational effort. A method is investigated that directly propagates a Probability Density Function (PDF) in time, which has the potential to obtain more accurate results with less computational effort. The decaying trajectory of Delta-K rocket stages was used to test the methods using a six degrees-of-freedom state model. The PDF of the state of the body was propagated in time to obtain impact-time distributions. This Direct PDF Propagation (DPP) method results in a multi-dimensional scattered dataset of the PDF of the state, which is highly challenging to process. No accurate results could be obtained, because of the structure of the DPP data and the high dimensionality. Therefore, the DPP method is less suitable for practical uncontrolled entry problems and the traditional MC method remains superior. Additionally, the MC method was used with two improved uncertainty models to obtain impact-time distributions, which were validated using observations of true impacts. For one of the two uncertainty models, statistically more valid impact-time distributions were obtained than in previous research.
Methods for the calculation of uncertainty in analytical chemistry
Energy Technology Data Exchange (ETDEWEB)
Suh, M. Y.; Sohn, S. C.; Park, Y. J.; Park, K. K.; Jee, K. Y.; Joe, K. S.; Kim, W. H
2000-07-01
This report describes the statistical rules for evaluating and expressing uncertainty in analytical chemistry. The procedures for the evaluation of uncertainty in chemical analysis are illustrated by worked examples. This report, in particular, gives guidance on how uncertainty can be estimated from various chemical analyses. This report can also be used for planning the experiments which will provide the information required to obtain an estimate of uncertainty for the method.
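The core rule such guidance builds on — combining input standard uncertainties through sensitivity coefficients — can be sketched generically (a hypothetical illustration with numerical derivatives, not one of the report's worked examples):

```python
import math

def combined_uncertainty(f, x, u, h=1e-6):
    # First-order (GUM-style) combined standard uncertainty of y = f(x)
    # for independent inputs x with standard uncertainties u:
    #   u_c^2 = sum_i (df/dx_i)^2 * u_i^2
    # with the sensitivity coefficients estimated by central differences.
    uc2 = 0.0
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        dfdx = (f(xp) - f(xm)) / (2 * h)
        uc2 += (dfdx * u[i]) ** 2
    return math.sqrt(uc2)

# concentration c = m / V: relative uncertainties add in quadrature
conc = lambda x: x[0] / x[1]
uc = combined_uncertainty(conc, [10.0, 2.0], [0.1, 0.02])
```

For this example the analytical result is u_c = 5 · sqrt(0.01² + 0.01²) ≈ 0.0707.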
Uncertainty propagation in a multiscale model of nanocrystalline plasticity
International Nuclear Information System (INIS)
Koslowski, M.; Strachan, Alejandro
2011-01-01
We characterize how uncertainties propagate across spatial and temporal scales in a physics-based model of nanocrystalline plasticity of fcc metals. Our model combines molecular dynamics (MD) simulations to characterize atomic-level processes that govern dislocation-based plastic deformation with a phase field approach to dislocation dynamics (PFDD) that describes how an ensemble of dislocations evolves and interacts to determine the mechanical response of the material. We apply this approach to a nanocrystalline Ni specimen of interest in micro-electromechanical systems (MEMS) switches. Our approach enables us to quantify how internal stresses that result from the fabrication process affect the properties of dislocations (using MD) and how these properties, in turn, affect the yield stress of the metallic membrane (using the PFDD model). Our predictions show that, for a nanocrystalline sample with small grain size (4 nm), a variation in residual stress of 20 MPa (typical in today's microfabrication techniques) would result in a variation in the critical resolved shear yield stress of approximately 15 MPa, a very small fraction of the nominal value of approximately 9 GPa. - Highlights: → Quantify how fabrication uncertainties affect yield stress in a microswitch component. → Propagate uncertainties in a multiscale model of single crystal plasticity. → Molecular dynamics quantifies how fabrication variations affect dislocations. → Dislocation dynamics relate variations in dislocation properties to yield stress.
Uncertainty propagation in probabilistic safety analysis of nuclear power plants
International Nuclear Information System (INIS)
Fleming, P.V.
1981-09-01
The propagation of uncertainties in the probabilistic safety analysis of nuclear power plants is performed. The minimal cut set methodology is implemented in the computer code SVALON, and the results for several cases are compared with corresponding results obtained with the SAMPLE code, which employs the Monte Carlo method to propagate the uncertainties. The results show that, for a relatively small number of dominant minimal cut sets (n approximately 25) and error factors (r approximately 5), the SVALON code yields results which are comparable to those obtained with SAMPLE. An analysis of the unavailability of the low pressure recirculation system of Angra 1, for both the short and long term recirculation phases, is presented. The results for the short term phase are in good agreement with the corresponding ones given in WASH-1400. (E.G.) [pt]
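The stochastic side of such a comparison can be sketched in a few lines (hypothetical cut sets and data, not the SVALON or SAMPLE implementations): basic-event probabilities with lognormal uncertainty described by an error factor r are sampled and combined through minimal cut sets into a distribution of the top event probability.

```python
import numpy as np

rng = np.random.default_rng(0)

def lognormal_samples(median, error_factor, n):
    # Error factor r = 95th percentile / median, so sigma = ln(r) / 1.645.
    sigma = np.log(error_factor) / 1.645
    return median * np.exp(sigma * rng.standard_normal(n))

def top_event_prob(cut_sets, probs):
    # Min-cut upper bound: P_top ≈ 1 - prod_k (1 - prod_{i in k} p_i),
    # evaluated elementwise over the Monte Carlo samples.
    q = 1.0
    for cs in cut_sets:
        q = q * (1.0 - np.prod([probs[i] for i in cs], axis=0))
    return 1.0 - q

# two hypothetical minimal cut sets over three basic events
cut_sets = [(0, 1), (2,)]
medians, r = [1e-3, 2e-3, 1e-4], 5.0
samples = [lognormal_samples(m, r, 10000) for m in medians]
top = top_event_prob(cut_sets, samples)
mean, std = top.mean(), top.std()
```

Matching the sampled mean and variance of `top` to a chosen distribution family is then the analytical shortcut such codes exploit.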
Propagation of radar rainfall uncertainty in urban flood simulations
Liguori, Sara; Rico-Ramirez, Miguel
2013-04-01
This work discusses the results of the implementation of a novel probabilistic system designed to improve ensemble sewer flow predictions for the drainage network of a small urban area in the North of England. The probabilistic system has been developed to model the uncertainty associated with radar rainfall estimates and propagate it through radar-based ensemble sewer flow predictions. The assessment of this system aims at outlining the benefits of addressing the uncertainty associated with radar rainfall estimates in a probabilistic framework, to be potentially implemented in the real-time management of the sewer network in the study area. Radar rainfall estimates are affected by uncertainty due to various factors [1-3], and quality control and correction techniques have been developed in order to improve their accuracy. However, the hydrological use of radar rainfall estimates and forecasts remains challenging. A significant effort has been devoted by the international research community to the assessment of uncertainty propagation through probabilistic hydro-meteorological forecast systems [4-5], and various approaches have been implemented for the purpose of characterizing the uncertainty in radar rainfall estimates and forecasts [6-11]. A radar-based ensemble stochastic approach, similar to the one implemented for use in the Southern Alps by the REAL system [6], has been developed for the purpose of this work. An ensemble generator has been calibrated on the basis of the spatial-temporal characteristics of the residual error in radar estimates, assessed with reference to rainfall records from around 200 rain gauges available for the year 2007, previously post-processed and corrected by the UK Met Office [12-13]. Each ensemble member is determined by summing a perturbation field to the unperturbed radar rainfall field. The perturbations are generated by imposing the radar error's spatial and temporal correlation structure on purely stochastic fields.
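A drastically simplified sketch of that ensemble-generation step (illustrative only — the actual system imposes the full spatio-temporal error correlation structure estimated from the rain-gauge comparison): white noise is given an AR(1) temporal correlation and added to the unperturbed series.

```python
import numpy as np

rng = np.random.default_rng(42)

def ensemble_members(radar_series, n_members, rho, sigma):
    # Each member = unperturbed radar series + a stochastic perturbation;
    # the AR(1) recursion imposes a lag-1 temporal correlation rho on
    # initially white noise while keeping unit marginal variance.
    T = radar_series.shape[0]
    members = np.empty((n_members, T))
    for m in range(n_members):
        eps = np.empty(T)
        eps[0] = rng.standard_normal()
        for t in range(1, T):
            eps[t] = rho * eps[t - 1] + np.sqrt(1 - rho**2) * rng.standard_normal()
        members[m] = radar_series + sigma * eps
    return members

radar = np.ones(48)  # hypothetical 48-step rainfall series (mm/h)
ens = ensemble_members(radar, n_members=100, rho=0.8, sigma=0.2)
```

Each member can then be fed to the sewer flow model to obtain an ensemble of flow predictions.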
Reduction of Uncertainty Propagation in the Airport Operations Network
Energy Technology Data Exchange (ETDEWEB)
Rodriguez Sanz, A.; Gomez Comendador, F.; Arnaldo Valdes, R.
2016-07-01
Airport operations are a complex system involving multiple elements (ground access, landside, airside and airspace), stakeholders (ANS providers, airlines, airport managers, policy makers and ground handling companies) and interrelated processes. To ensure appropriate and safe operation it is necessary to understand these complex relationships and how the effects of potential incidents, failures and delays (due to unexpected events or capacity constraints) may propagate throughout the different stages of the system. An incident may easily ripple through the network and affect the operation of the airport as a whole, making the entire system vulnerable. A holistic view of the processes that also takes all of the parties (and the connections between them) into account would significantly reduce the risks associated with airport operations, while at the same time improving efficiency. Therefore, this paper proposes a framework to integrate all relevant stakeholders and reduce uncertainty in delay propagation, thereby lowering the cause-effect chain probability of the airport system (which is crucial for the operation and development of air transport). Firstly, we developed a model (map) to identify the functional relationships and interdependencies between the different stakeholders and processes that make up the airport operations network. This will act as a conceptual framework. Secondly, we reviewed and characterised the main causes of delay. Finally, we extended the system map to create a probabilistic graphical model, using a Bayesian Network approach and influence diagrams, in order to predict the propagation of unexpected delays across the airport operations network. This will enable us to learn how potential incidents may spread throughout the network creating unreliable, uncertain system states. Policy makers, regulators and airport managers may use this conceptual framework (and the associated indicators) to understand how delays propagate across the airport
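The probabilistic-graphical-model step can be illustrated in miniature (hypothetical probabilities, not the paper's calibrated network): marginalising a conditional probability table over delay causes gives the prior probability of a delay, and Bayes' rule inverts it for diagnosis.

```python
# Tiny two-node Bayesian-network sketch: Cause -> Delay.
p_cause = {"weather": 0.10, "technical": 0.05, "none": 0.85}
p_delay_given = {"weather": 0.60, "technical": 0.40, "none": 0.02}

# Marginal probability of a delay: P(D) = sum_c P(D | c) P(c)
p_delay = sum(p_cause[c] * p_delay_given[c] for c in p_cause)

# Diagnosis by Bayes' rule: P(weather | delay)
p_weather_given_delay = p_cause["weather"] * p_delay_given["weather"] / p_delay
```

A full influence diagram chains many such tables, so an incident's probability mass can be traced as it ripples through the network.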
Analytic uncertainty and sensitivity analysis of models with input correlations
Zhu, Yueying; Wang, Qiuping A.; Li, Wei; Cai, Xu
2018-03-01
Probabilistic uncertainty analysis is a common means of evaluating mathematical models. In mathematical modeling, the uncertainty in input variables is specified through distribution laws. Its contribution to the uncertainty in the model response is usually analyzed by assuming that input variables are independent of each other. However, correlated parameters often occur in practical applications. In the present paper, an analytic method is developed for the uncertainty and sensitivity analysis of models in the presence of input correlations. With the method, it is straightforward to identify the importance of the independence and correlations of input variables in determining the model response. This allows one to decide whether or not the input correlations should be considered in practice. Numerical examples demonstrate the effectiveness and validity of our analytic method in the analysis of general models. The method is also applied to the uncertainty and sensitivity analysis of a deterministic HIV model.
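The effect of input correlations on the output variance can be seen in the first-order propagation formula (a generic sketch with made-up numbers, not the paper's method): with gradient g at the nominal point and input covariance Σ, Var(y) ≈ gᵀΣg, so the off-diagonal terms of Σ shift the result away from the independent-inputs value.

```python
import numpy as np

def first_order_variance(grad, cov):
    # First-order propagation with possibly correlated inputs:
    #   Var(y) ≈ g^T Sigma g
    # where g is the model gradient at the nominal point and
    # Sigma the input covariance matrix.
    g = np.asarray(grad, float)
    return float(g @ np.asarray(cov, float) @ g)

# y = x1 + 2*x2 with unit input variances and correlation rho
rho = 0.5
cov = np.array([[1.0, rho], [rho, 1.0]])
var_corr = first_order_variance([1.0, 2.0], cov)        # 1 + 4 + 2*2*rho = 7
var_indep = first_order_variance([1.0, 2.0], np.eye(2)) # 5
```

Here ignoring the correlation would underestimate the output variance by 2·g₁·g₂·ρ = 2.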
Uncertainty propagation of p-boxes using sparse polynomial chaos expansions
Energy Technology Data Exchange (ETDEWEB)
Schöbi, Roland, E-mail: schoebi@ibk.baug.ethz.ch; Sudret, Bruno, E-mail: sudret@ibk.baug.ethz.ch
2017-06-15
In modern engineering, physical processes are modelled and analysed using advanced computer simulations, such as finite element models. Furthermore, concepts of reliability analysis and robust design are becoming popular, hence, making efficient quantification and propagation of uncertainties an important aspect. In this context, a typical workflow includes the characterization of the uncertainty in the input variables. In this paper, input variables are modelled by probability-boxes (p-boxes), accounting for both aleatory and epistemic uncertainty. The propagation of p-boxes leads to p-boxes of the output of the computational model. A two-level meta-modelling approach is proposed using non-intrusive sparse polynomial chaos expansions to surrogate the exact computational model and, hence, to facilitate the uncertainty quantification analysis. The capabilities of the proposed approach are illustrated through applications using a benchmark analytical function and two realistic engineering problem settings. They show that the proposed two-level approach allows for an accurate estimation of the statistics of the response quantity of interest using a small number of evaluations of the exact computational model. This is crucial in cases where the computational costs are dominated by the runs of high-fidelity computational models.
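The two-level structure of p-box propagation (an outer epistemic loop over distribution parameters, an inner aleatory loop over samples) can be sketched without the polynomial-chaos surrogate; the plain double loop below is a hypothetical illustration in which the paper's meta-model is replaced by a cheap analytical model.

```python
import numpy as np

rng = np.random.default_rng(1)

def pbox_output_bounds(model, mu_interval, sigma, n_outer=20, n_inner=2000):
    # Input p-box: X ~ Normal(mu, sigma) with mu only known to lie in an
    # interval (epistemic). The outer loop scans the epistemic parameter,
    # the inner loop propagates the aleatory distribution; the envelope of
    # the output quantiles yields the bounding CDFs of the output p-box.
    qs = np.linspace(0.05, 0.95, 19)
    lo = np.full_like(qs, np.inf)
    hi = np.full_like(qs, -np.inf)
    for mu in np.linspace(*mu_interval, n_outer):
        y = model(mu + sigma * rng.standard_normal(n_inner))
        q = np.quantile(y, qs)
        lo, hi = np.minimum(lo, q), np.maximum(hi, q)
    return qs, lo, hi

qs, lo, hi = pbox_output_bounds(lambda x: x**2, (1.0, 2.0), 0.1)
```

The role of the sparse PCE surrogate in the paper is precisely to make the inner-loop model evaluations cheap enough for this nesting to be affordable.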
Risk classification and uncertainty propagation for virtual water distribution systems
International Nuclear Information System (INIS)
Torres, Jacob M.; Brumbelow, Kelly; Guikema, Seth D.
2009-01-01
While the secrecy of real water distribution system data is crucial, it poses difficulty for research as results cannot be publicized. This data includes topological layouts of pipe networks, pump operation schedules, and water demands. Therefore, a library of virtual water distribution systems can be an important research tool for comparative development of analytical methods. A virtual city, 'Micropolis', has been developed, including a comprehensive water distribution system, as a first entry into such a library. This virtual city of 5000 residents is fully described in both geographic information systems (GIS) and EPANet hydraulic model frameworks. A risk classification scheme and Monte Carlo analysis are employed for an attempted water supply contamination attack. Model inputs to be considered include uncertainties in: daily water demand, seasonal demand, initial storage tank levels, the time of day a contamination event is initiated, duration of contamination event, and contaminant quantity. Findings show that reasonable uncertainties in model inputs produce high variability in exposure levels. It is also shown that exposure level distributions experience noticeable sensitivities to population clusters within the contaminant spread area. High uncertainties in exposure patterns lead to greater resources needed for more effective mitigation strategies.
Quantifying the measurement uncertainty of results from environmental analytical methods.
Moser, J; Wegscheider, W; Sperka-Gottlieb, C
2001-07-01
The Eurachem-CITAC Guide Quantifying Uncertainty in Analytical Measurement was put into practice in a public laboratory devoted to environmental analytical measurements. In doing so due regard was given to the provisions of ISO 17025 and an attempt was made to base the entire estimation of measurement uncertainty on available data from the literature or from previously performed validation studies. Most environmental analytical procedures laid down in national or international standards are the result of cooperative efforts and put into effect as part of a compromise between all parties involved, public and private, that also encompasses environmental standards and statutory limits. Central to many procedures is the focus on the measurement of environmental effects rather than on individual chemical species. In this situation it is particularly important to understand the measurement process well enough to produce a realistic uncertainty statement. Environmental analytical methods will be examined as far as necessary, but reference will also be made to analytical methods in general and to physical measurement methods where appropriate. This paper describes ways and means of quantifying uncertainty for frequently practised methods of environmental analysis. It will be shown that operationally defined measurands are no obstacle to the estimation process as described in the Eurachem/CITAC Guide if it is accepted that the dominating component of uncertainty comes from the actual practice of the method as a reproducibility standard deviation.
Directory of Open Access Journals (Sweden)
Krivtchik Guillaume
2017-01-01
Scenario studies simulate the whole fuel cycle over a period of time, from extraction of natural resources to geological storage. Through the comparison of different reactor fleet evolutions and fuel management options, they constitute a decision-making support. Consequently, uncertainty propagation studies, which are necessary to assess the robustness of the studies, are strategic. Among the numerous types of physical models in scenario computation that generate uncertainty, the equivalence models, built for calculating fresh fuel enrichment (for instance, plutonium content in PWR MOX) so as to be representative of nominal fuel behavior, are very important. The equivalence condition is generally formulated in terms of end-of-cycle mean core reactivity. As this results from a physical computation, it is therefore associated with an uncertainty. A state of the art of equivalence models is presented and discussed. It is shown that the existing equivalence models implemented in scenario codes, such as COSI6, are not suited to uncertainty propagation computation, for the following reasons: (i) existing analytical models neglect irradiation, which has a strong impact on the result and its uncertainty; (ii) current black-box models are not suited to cross-section perturbation management; and (iii) models based on transport and depletion codes are too time-consuming for stochastic uncertainty propagation. A new type of equivalence model based on Artificial Neural Networks (ANN) has been developed, constructed with data calculated with neutron transport and depletion codes. The model inputs are the fresh fuel isotopy, the irradiation parameters (burnup, core fractionation, etc.), cross-section perturbations and the equivalence criterion (for instance, the target core reactivity in pcm at the end of the irradiation cycle). The model output is the fresh fuel content such that the target reactivity is reached at the end of the irradiation cycle. Those models are built and
International Nuclear Information System (INIS)
Pimentel, B.M.; Suzuki, A.T.; Tomazelli, J.L.
1992-01-01
The principle of analytic continuation can be used to derive causal distributions for covariant propagators. We apply this principle as a basis for deriving analytically continued causal distributions for algebraic non-covariant propagators. (author)
Propagation of nuclear data uncertainty: Exact or with covariances
Directory of Open Access Journals (Sweden)
van Veen D.
2010-10-01
Two distinct methods of propagating basic nuclear data uncertainties to large scale systems will be presented and compared. The "Total Monte Carlo" method uses a statistical ensemble of nuclear data libraries randomly generated by means of a Monte Carlo approach with the TALYS system. These libraries are then directly used in a large number of reactor calculations (for instance with MCNP), after which the exact probability distribution for the reactor parameter is obtained. The second method makes use of available covariance files and can be done in a single reactor calculation (by using the perturbation method). In this exercise, both methods use consistent sets of data files, which implies that the covariance files used in the second method are directly obtained from the randomly generated nuclear data libraries of the first method. This is a unique and straightforward comparison allowing one to directly apprehend the advantages and drawbacks of each method. Comparisons for different reactions and criticality-safety benchmarks from 19F to actinides will be presented. We can thus conclude whether current methods for using covariance data are good enough or not.
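On a toy linear response the two routes can be compared directly (illustrative sensitivities and covariance, not TALYS-generated data): the covariance route applies the sandwich rule in one calculation, while Total Monte Carlo samples random "libraries" and repeats the calculation many times.

```python
import numpy as np

rng = np.random.default_rng(2)

# toy "reactor response" linear in two nuclear data parameters p
sens = np.array([0.8, -0.3])    # sensitivities dR/dp
cov = np.array([[0.04, 0.01],
                [0.01, 0.09]])  # covariance of the nuclear data

# covariance (sandwich) method: var(R) = S Sigma S^T, a single calculation
var_sandwich = float(sens @ cov @ sens)

# Total Monte Carlo: sample correlated parameter sets ("random libraries"),
# rerun the (here: trivial) calculation for each, take the sample variance
L = np.linalg.cholesky(cov)
p = L @ rng.standard_normal((2, 20000))
R = sens @ p
var_tmc = float(R.var())
```

For a linear response both agree; the interest of Total Monte Carlo is that it needs no linearity assumption and returns the full output distribution.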
Spin-Stabilized Spacecrafts: Analytical Attitude Propagation Using Magnetic Torques
Directory of Open Access Journals (Sweden)
Roberta Veloso Garcia
2009-01-01
An analytical approach for the attitude propagation of spin-stabilized satellites is presented, considering the influence of the residual magnetic torque and the eddy currents torque. Two approaches are considered to examine the influence of external torques acting during the motion of the satellite, with the Earth's magnetic field described by the quadrupole model. In the first approach, only the residual magnetic torque is included in the equations of motion, with the satellite in a circular or elliptical orbit. In the second approach, only the eddy currents torque is analyzed, with the satellite in a circular orbit. The inclusion of these torques in the dynamic equations of spin-stabilized satellites yields the conditions to derive an analytical solution. The solutions show that the residual torque does not affect the spin velocity magnitude, contributing only to the precession and drift of the spacecraft's spin axis, and that the eddy currents torque causes an exponential decay of the angular velocity magnitude. Numerical simulations performed with data from the Brazilian satellites SCD1 and SCD2 show the period over which the analytical solution can be used for attitude propagation, within the dispersion range of the attitude determination system performance of the Satellite Control Center of the Brazilian National Research Institute.
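The closing qualitative result — eddy currents damp the spin rate exponentially, while the residual torque leaves its magnitude untouched — reduces to a one-line model (the values of ω₀ and the time constant τ below are generic placeholders, not the fitted SCD1/SCD2 values):

```python
import math

def spin_rate(omega0, tau, t):
    # Eddy-currents torque: exponential decay of the spin magnitude,
    #   omega(t) = omega0 * exp(-t / tau).
    # The residual magnetic torque only drives precession and drift of
    # the spin axis; it does not change |omega|.
    return omega0 * math.exp(-t / tau)

w = spin_rate(2.0, 100.0, 100.0)  # after one time constant: 2/e rad/s
```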
2017-12-18
AFRL-RV-PS-TR-2017-0177: Nonlinear Uncertainty Propagation of Satellite State Error for Tracking and Conjunction Risk Assessment (Contract FA9453-16-1-0084). The statistical approach utilizes novel methods to build better uncertainty state characterization in the context of orbit prediction and satellite conjunction analysis.
Interlaboratory analytical performance studies; a way to estimate measurement uncertainty
Directory of Open Access Journals (Sweden)
Elżbieta Łysiak-Pastuszak
2004-09-01
Comparability of data collected within collaborative programmes became the key challenge of analytical chemistry in the 1990s, including monitoring of the marine environment. To obtain relevant and reliable data, the analytical process has to proceed under a well-established Quality Assurance (QA) system with external analytical proficiency tests as an inherent component. A programme called Quality Assurance in Marine Monitoring in Europe (QUASIMEME) was established in 1993 and evolved over the years as the major provider of QA proficiency tests for nutrients, trace metals and chlorinated organic compounds in marine environment studies. The article presents an evaluation of results obtained in QUASIMEME Laboratory Performance Studies by the monitoring laboratory of the Institute of Meteorology and Water Management (Gdynia, Poland) in exercises on nutrient determination in seawater. The measurement uncertainty estimated from routine internal quality control measurements and from results of analytical performance exercises is also presented in the paper.
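The scoring used in such performance studies reduces to a one-liner (hypothetical numbers; sigma_p denotes the standard deviation for proficiency assessment set by the exercise organiser):

```python
def z_score(x, assigned, sigma_p):
    # Proficiency-test z-score: z = (x - X) / sigma_p.
    # Conventionally |z| <= 2 is "satisfactory", 2 < |z| < 3 "questionable",
    # |z| >= 3 "unsatisfactory".
    return (x - assigned) / sigma_p

z = z_score(5.35, 5.00, 0.15)  # lab result vs assigned value
```

Aggregated over many rounds, such scores (or the underlying reproducibility standard deviation) feed directly into the laboratory's measurement uncertainty estimate.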
Uncertainties in workplace external dosimetry - An analytical approach
International Nuclear Information System (INIS)
Ambrosi, P.
2006-01-01
The uncertainties associated with external dosimetry measurements at workplaces depend on the type of dosemeter used, together with its performance characteristics and the information available on the measurement conditions. Performance characteristics were determined in the course of a type test, and information about the measurement conditions can either be general, e.g. 'research' and 'medicine', or specific, e.g. 'X-ray testing equipment for aluminium wheel rims'. This paper explains an analytical approach to determining the measurement uncertainty. It is based on the Draft IEC Technical Report IEC 62461, Radiation Protection Instrumentation - Determination of Uncertainty in Measurement. Neither this paper nor the report can eliminate the fact that determining the uncertainty requires a larger effort than performing the measurement itself. As a counterbalance, the process of determining the uncertainty results not only in a numerical value of the uncertainty but also produces the best estimate of the quantity to be measured, which may differ from the indication of the instrument. Thus it also improves the result of the measurement. (authors)
Energy Technology Data Exchange (ETDEWEB)
Díez, C.J., E-mail: cj.diez@upm.es [Dpto. de Ingeníera Nuclear, Universidad Politécnica de Madrid, 28006 Madrid (Spain); Cabellos, O. [Dpto. de Ingeníera Nuclear, Universidad Politécnica de Madrid, 28006 Madrid (Spain); Instituto de Fusión Nuclear, Universidad Politécnica de Madrid, 28006 Madrid (Spain); Martínez, J.S. [Dpto. de Ingeníera Nuclear, Universidad Politécnica de Madrid, 28006 Madrid (Spain)
2015-01-15
Several approaches have been developed in recent decades to tackle nuclear data uncertainty propagation problems in burn-up calculations. One proposed approach was the Hybrid Method, in which uncertainties in nuclear data are propagated only through the depletion part of a burn-up problem. Because only depletion is addressed, only one-group cross sections are necessary, and hence their collapsed one-group uncertainties. This approach has been applied successfully in several advanced reactor systems, such as EFIT (an ADS-like reactor) and ESFR (a sodium fast reactor), to assess uncertainties on the isotopic composition. However, a comparison with multi-group energy structures was not carried out, and has to be performed in order to analyse the limitations of using one-group uncertainties.
International Nuclear Information System (INIS)
Rocco Sanseverino, Claudio M.; Ramirez-Marquez, José Emmanuel
2014-01-01
The reliability of a system, notwithstanding its intended function, can be significantly affected by the uncertainty in the reliability estimates of the components that define the system. This paper implements the Unscented Transformation (UT) to quantify the effects of the uncertainty of component reliability through two approaches. The first approach is based on the concept of uncertainty propagation, i.e. the assessment of the effect that the variability of the component reliabilities produces on the variance of the system reliability. This UT-based assessment has been considered previously in the literature, but only for systems represented through series/parallel configurations. In this paper the assessment is extended to systems whose reliability cannot be represented through analytical expressions and which require, for example, Monte Carlo simulation. The second approach consists of the evaluation of the importance of components, i.e., the identification of the components that contribute most to the variance of the system reliability. An extension of the UT is proposed to evaluate the so-called "main effects" of each component, as well as to assess higher-order component interactions. Several examples with excellent results illustrate the proposed approach. - Highlights: • Simulation based approach for computing reliability estimates. • Computation of reliability variance via 2n+1 points. • Immediate computation of component importance. • Application to network systems
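The 2n+1-point construction the highlights refer to can be sketched as follows (a generic UT implementation with hypothetical component data, not the paper's network examples): sigma points are drawn from the input mean and covariance, pushed through the system function, and reassembled into output moments.

```python
import numpy as np

def unscented_moments(f, mean, cov, kappa=0.0):
    # Unscented Transformation: propagate the mean/covariance of x through
    # y = f(x) using 2n+1 sigma points. Exact for linear f; for smooth
    # nonlinear f the mean is accurate to third order.
    mean = np.asarray(mean, float)
    n = mean.size
    S = np.linalg.cholesky((n + kappa) * np.asarray(cov, float))
    pts = [mean] + [mean + S[:, i] for i in range(n)] \
                 + [mean - S[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2.0 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    y = np.array([f(p) for p in pts])
    m = float(np.sum(w * y))
    v = float(np.sum(w * (y - m) ** 2))
    return m, v

# series system of two components: R_sys = r1 * r2 (hypothetical numbers)
m, v = unscented_moments(lambda r: r[0] * r[1],
                         [0.95, 0.90], np.diag([1e-4, 4e-4]))
```

For this bilinear system the UT mean, 0.855, and variance, 4.42e-4, match the analytical moments of the product of independent inputs.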
Propagation of nuclear data uncertainties for fusion power measurements
Directory of Open Access Journals (Sweden)
Sjöstrand Henrik
2017-01-01
Neutron measurements using neutron activation systems are an essential part of the diagnostic system at large fusion machines such as JET and ITER. Nuclear data is used to infer the neutron yield. Consequently, high-quality nuclear data is essential for the proper determination of the neutron yield and fusion power. However, uncertainties due to nuclear data are not fully taken into account in uncertainty analysis for neutron yield calibrations using activation foils. This paper investigates the neutron yield uncertainty due to nuclear data using the so-called Total Monte Carlo Method. The work is performed using a detailed MCNP model of the JET fusion machine; the uncertainties due to the cross-sections and angular distributions in JET structural materials, as well as the activation cross-sections in the activation foils, are analysed. It is found that a significant contribution to the neutron yield uncertainty can come from uncertainties in the nuclear data.
Propagation of nuclear data uncertainties in fuel cycle calculations using Monte-Carlo technique
International Nuclear Information System (INIS)
Diez, C.J.; Cabellos, O.; Martinez, J.S.
2011-01-01
Nowadays, knowledge of uncertainty propagation in depletion calculations is a critical issue for the safety and economic performance of fuel cycles. Response magnitudes such as decay heat, radiotoxicity and isotopic inventory, and their uncertainties, should be known to handle spent fuel in present fuel cycles (e.g. the high-burnup fuel programme) and furthermore in new fuel cycle designs (e.g. fast breeder reactors and ADS). To deal with this task, there are different error propagation techniques, deterministic (adjoint/forward sensitivity analysis) and stochastic (Monte Carlo technique), to evaluate the error in response magnitudes due to nuclear data uncertainties. In our previous works, cross-section uncertainties were propagated using a Monte Carlo technique to calculate the uncertainty of response magnitudes such as decay heat and neutron emission. Also, the propagation of decay data, fission yield and cross-section uncertainties was performed, but isotopic composition was the only response magnitude calculated. Following the previous technique, the nuclear data uncertainties are here taken into account and propagated to the response magnitudes decay heat and radiotoxicity. These uncertainties are assessed during cooling time. To evaluate this Monte Carlo technique, two different applications are performed. First, a fission pulse decay heat calculation is carried out to check the Monte Carlo technique, using decay data and fission yield uncertainties. Then the results are compared with experimental data and a reference calculation (JEFF Report 20). Second, we assess the impact of basic nuclear data (activation cross-section, decay data and fission yield) uncertainties on relevant fuel cycle parameters (decay heat and radiotoxicity) for a conceptual design of a modular European Facility for Industrial Transmutation (EFIT) fuel cycle. After identifying which time steps have higher uncertainties, an assessment of which uncertainties are most relevant is performed
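A minimal sketch of the stochastic propagation step: sample the uncertain decay data, recompute the response magnitude (here decay heat during cooling), and read the uncertainty off the sample spread. The two-nuclide inventory and the 3% half-life uncertainties are hypothetical, not EFIT or evaluated-library data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical two-nuclide inventory:
# decay heat H(t) = sum_i lambda_i * N0_i * E_i * exp(-lambda_i * t)
half_life = np.array([30.0, 2.1])        # years
n0 = np.array([1.0e24, 5.0e23])          # initial atoms
energy = np.array([1.0e-13, 2.5e-13])    # J released per decay

SEC_PER_YEAR = 3.156e7

def decay_heat(t_years, hl_years):
    lam = np.log(2.0) / (hl_years * SEC_PER_YEAR)      # decay constants, 1/s
    return np.sum(lam * n0 * energy * np.exp(-lam * t_years * SEC_PER_YEAR))

# Monte Carlo: sample half-lives within assumed 3% (1 sigma) uncertainties
samples = rng.normal(half_life, 0.03 * half_life, size=(5000, 2))
cooling = 10.0                                         # years of cooling
heats = np.array([decay_heat(cooling, hl) for hl in samples])

rel_unc = heats.std() / heats.mean()                   # propagated uncertainty
```

Repeating the sampling at several cooling times reproduces the paper's pattern of uncertainty assessment "during cooling time": the short-lived component dominates the uncertainty once several of its half-lives have elapsed.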
Pragmatic aspects of uncertainty propagation: A conceptual review
Thacker, W. Carlisle; Iskandarani, Mohamad; Gonçalves, Rafael C.; Srinivasan, Ashwanth; Knio, Omar
2015-01-01
When quantifying the uncertainty of the response of a computationally costly oceanographic or meteorological model stemming from the uncertainty of its inputs, practicality demands getting the most information using the fewest simulations. It is widely recognized that, by interpolating the results of a small number of simulations, results of additional simulations can be inexpensively approximated to provide a useful estimate of the variability of the response. Even so, as computing the simulations to be interpolated remains the biggest expense, the choice of these simulations deserves attention. When making this choice, two requirements should be considered: (i) the nature of the interpolation and (ii) the available information about input uncertainty. Examples comparing polynomial interpolation and Gaussian process interpolation are presented for three different views of input uncertainty.
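The interpolation idea can be illustrated by replacing a costly model with surrogates built from a handful of "simulations" and propagating input uncertainty through the surrogates instead. The 1D toy response, the sample counts, and the uniform view of input uncertainty below are all assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive_model(x):
    # stand-in for a computationally costly model response
    return np.sin(2 * x) + 0.3 * x**2

# Only a small number of "simulations" can be afforded
x_train = np.linspace(-2.0, 2.0, 11)
y_train = expensive_model(x_train)

# Surrogate 1: polynomial interpolation of the simulation results
poly = np.polynomial.Polynomial.fit(x_train, y_train, deg=10)

# Surrogate 2: simple Gaussian-process (RBF) interpolation with zero mean
def gp_predict(xq, xt, yt, ell=0.7, jitter=1e-8):
    k = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)
    K = k(xt, xt) + jitter * np.eye(len(xt))
    return k(xq, xt) @ np.linalg.solve(K, yt)

# One "view" of input uncertainty: x uniform on the sampled range
x_mc = rng.uniform(-2.0, 2.0, 50000)
std_true = expensive_model(x_mc).std()   # unaffordable in practice
std_poly = poly(x_mc).std()              # cheap surrogate estimates
std_gp = gp_predict(x_mc, x_train, y_train).std()
```

Both surrogates recover the response variability from eleven model runs; how well they do so depends on where the training runs are placed relative to the input distribution, which is exactly the choice the abstract highlights.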
Pragmatic aspects of uncertainty propagation: A conceptual review
Thacker, W.Carlisle
2015-09-11
When quantifying the uncertainty of the response of a computationally costly oceanographic or meteorological model stemming from the uncertainty of its inputs, practicality demands getting the most information using the fewest simulations. It is widely recognized that, by interpolating the results of a small number of simulations, results of additional simulations can be inexpensively approximated to provide a useful estimate of the variability of the response. Even so, as computing the simulations to be interpolated remains the biggest expense, the choice of these simulations deserves attention. When making this choice, two requirements should be considered: (i) the nature of the interpolation and (ii) the available information about input uncertainty. Examples comparing polynomial interpolation and Gaussian process interpolation are presented for three different views of input uncertainty.
Modelling and propagation of uncertainties in the German Risk Study
International Nuclear Information System (INIS)
Hofer, E.; Krzykacz, B.
1982-01-01
Risk assessments are generally subject to uncertainty considerations because of the various estimates involved. The paper points out those estimates in the so-called phase A of the German Risk Study for which uncertainties were quantified. It explains the probabilistic models applied in the assessment and their impact on the findings of the study. Finally, the resulting subjective confidence intervals of the study results are presented and their sensitivity to these probabilistic models is investigated
Preliminary Results on Uncertainty Quantification for Pattern Analytics
Energy Technology Data Exchange (ETDEWEB)
Stracuzzi, David John [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Brost, Randolph [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Chen, Maximillian Gene [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Malinas, Rebecca [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Peterson, Matthew Gregor [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Phillips, Cynthia A. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Robinson, David G. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Woodbridge, Diane [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)
2015-09-01
This report summarizes preliminary research into uncertainty quantification for pattern analytics within the context of the Pattern Analytics to Support High-Performance Exploitation and Reasoning (PANTHER) project. The primary focus of PANTHER was to make large quantities of remote sensing data searchable by analysts. The work described in this report adds nuance to both the initial data preparation steps and the search process. Search queries are transformed from "does the specified pattern exist in the data?" to "how certain is the system that the returned results match the query?" We show example results for both data processing and search, and discuss a number of possible improvements for each.
An analytical approach for the Propagation Saw Test
Benedetti, Lorenzo; Fischer, Jan-Thomas; Gaume, Johan
2016-04-01
The Propagation Saw Test (PST) [1, 2] is an experimental in-situ technique that has been introduced to assess crack propagation propensity in weak snowpack layers buried below cohesive snow slabs. This test has attracted the interest of a large number of practitioners, being relatively easy to perform and providing useful insights for the evaluation of snow instability. The PST procedure requires isolating a snow column 30 centimeters wide and at least 1 meter long in the downslope direction. Then, once the stratigraphy is known (e.g. from a manual snow profile), a saw is used to cut a weak layer which could fail, potentially leading to the release of a slab avalanche. If the length of the saw cut reaches the so-called critical crack length, the onset of crack propagation occurs. Furthermore, depending on snow properties, the crack in the weak layer can initiate the fracture and detachment of the overlying slab. Statistical studies over a large set of field data confirmed the relevance of the PST, highlighting the positive correlation between test results and the likelihood of avalanche release [3]. Recent works provided key information on the conditions for the onset of crack propagation [4] and on the evolution of slab displacement during the test [5]. In addition, experimental studies [6] and simplified models [7] focused on the qualitative description of snowpack properties leading to different failure types, namely full propagation or fracture arrest (with or without slab fracture). However, besides current numerical studies utilizing discrete element methods [8], little attention has been devoted to a detailed analytical description of the PST able to give a comprehensive mechanical framework of the sequence of processes involved in the test. Consequently, this work aims to give a quantitative tool for an exhaustive interpretation of the PST, drawing attention to the important parameters that influence the test outcomes. First, starting from a pure
An algorithm to improve sampling efficiency for uncertainty propagation using sampling based method
International Nuclear Information System (INIS)
Campolina, Daniel; Lima, Paulo Rubens I.; Pereira, Claubia; Veloso, Maria Auxiliadora F.
2015-01-01
Sample size and computational uncertainty were varied in order to investigate sample efficiency and convergence of the sampling-based method for uncertainty propagation. The transport code MCNPX was used to simulate a LWR model and allow the mapping from uncertain inputs of the benchmark experiment to uncertain outputs. Random sampling efficiency was improved through the use of an algorithm for selecting distributions. Mean range, standard deviation range and skewness were verified in order to obtain a better representation of the uncertainty figures. A standard deviation of 5 pcm in the propagated uncertainties for 10 replicates of n samples was adopted as the convergence criterion for the method. An estimate of 75 pcm uncertainty on reactor k_eff was accomplished by using a sample of size 93 and a computational uncertainty of 28 pcm to propagate the 1σ uncertainty of the burnable poison radius. For a fixed computational time, in order to reduce the variance of the propagated uncertainty, it was found, for the example under investigation, that it is preferable to double the sample size rather than to double the number of particles followed in the Monte Carlo process of the MCNPX code. (author)
Analytical probabilistic proton dose calculation and range uncertainties
Bangert, M.; Hennig, P.; Oelfke, U.
2014-03-01
We introduce the concept of analytical probabilistic modeling (APM) to calculate the mean and the standard deviation of intensity-modulated proton dose distributions under the influence of range uncertainties in closed form. For APM, range uncertainties are modeled with a multivariate Normal distribution p(z) over the radiological depths z. A pencil beam algorithm that parameterizes the proton depth dose d(z) with a weighted superposition of ten Gaussians is used. Hence, the integrals ∫ dz p(z) d(z) and ∫ dz p(z) d(z)² required for the calculation of the expected value and standard deviation of the dose remain analytically tractable and can be efficiently evaluated. The means μk, widths δk, and weights ωk of the Gaussian components parameterizing the depth dose curves are found with least squares fits for all available proton ranges. We observe less than 0.3% average deviation of the Gaussian parameterizations from the original proton depth dose curves. Consequently, APM yields high accuracy estimates for the expected value and standard deviation of intensity-modulated proton dose distributions for two-dimensional test cases. APM can accommodate arbitrary correlation models and account for the different nature of random and systematic errors in fractionated radiation therapy. Beneficial applications of APM in robust planning are feasible.
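The analytical tractability rests on the classic identity ∫ N(z; μ₁, σ₁²) N(z; μ₂, σ₂²) dz = N(μ₁; μ₂, σ₁² + σ₂²). A sketch of the expected-dose integral with a hypothetical three-component Gaussian depth-dose parameterization, checked against brute-force quadrature:

```python
import numpy as np

def normpdf(x, mu, sig):
    return np.exp(-0.5 * ((x - mu) / sig) ** 2) / (sig * np.sqrt(2.0 * np.pi))

# Hypothetical depth-dose curve: weighted superposition of Gaussian components
w = np.array([0.5, 1.2, 2.0])          # weights omega_k
mu_k = np.array([60.0, 75.0, 80.0])    # means (mm)
delta_k = np.array([15.0, 6.0, 2.5])   # widths (mm)

def depth_dose(z):
    return sum(wi * normpdf(z, m, d) for wi, m, d in zip(w, mu_k, delta_k))

# Range uncertainty: radiological depth z ~ N(mu_z, sigma_z^2)
mu_z, sigma_z = 78.0, 3.0

# Closed form: each Gaussian-times-Gaussian integral collapses to a Gaussian
# evaluated at the mean depth, with the variances added
exp_dose_apm = np.sum(w * normpdf(mu_z, mu_k, np.sqrt(sigma_z**2 + delta_k**2)))

# Numerical check of  int p(z) d(z) dz  by trapezoidal quadrature
z = np.linspace(0.0, 160.0, 200001)
f = normpdf(z, mu_z, sigma_z) * depth_dose(z)
exp_dose_quad = np.sum((f[1:] + f[:-1]) * np.diff(z)) / 2.0
```

The same trick applied to ∫ dz p(z) d(z)² (products of pairs of Gaussian components) yields the second moment and hence the dose standard deviation.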
International Nuclear Information System (INIS)
Berne, A.
2001-01-01
Quantitative determinations of many radioactive analytes in environmental samples are based on a process in which several independent measurements of different properties are taken. The final results that are calculated using the data have to be evaluated for accuracy and precision. The estimate of the standard deviation, s, also called the combined standard uncertainty (CSU), associated with the result of this combined measurement can be used to evaluate the precision of the result. The CSU can be calculated by applying the law of propagation of uncertainty, which is based on the Taylor series expansion of the equation used to calculate the analytical result. The estimate of s can also be obtained from a Monte Carlo simulation. The data used in this simulation include the values resulting from the individual measurements, the estimate of the variance of each value, including the type of distribution, and the equation used to calculate the analytical result. A comparison is made between these two methods of estimating the uncertainty of the calculated result. (author)
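The two estimation routes compared in the abstract can be sketched on a toy counting measurement (all rates, efficiencies, and uncertainties below are invented): the law of propagation of uncertainty from first-order partial derivatives versus a Monte Carlo simulation of the same equation:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical counting measurement:
#   a = (gross_rate - background_rate) / (efficiency * mass)
gross, u_gross = 250.0, 5.0    # counts/s and its standard uncertainty
bkg, u_bkg = 40.0, 2.0         # counts/s
eff, u_eff = 0.32, 0.01        # detector efficiency
mass, u_mass = 0.50, 0.005     # kg

def result(g, b, e, m):
    return (g - b) / (e * m)

a = result(gross, bkg, eff, mass)

# Law of propagation of uncertainty: first-order Taylor expansion
da_dg = 1.0 / (eff * mass)
da_db = -1.0 / (eff * mass)
da_de = -(gross - bkg) / (eff**2 * mass)
da_dm = -(gross - bkg) / (eff * mass**2)
csu_taylor = np.sqrt((da_dg * u_gross) ** 2 + (da_db * u_bkg) ** 2
                     + (da_de * u_eff) ** 2 + (da_dm * u_mass) ** 2)

# Monte Carlo estimate of the same combined standard uncertainty
n = 200000
sim = result(rng.normal(gross, u_gross, n), rng.normal(bkg, u_bkg, n),
             rng.normal(eff, u_eff, n), rng.normal(mass, u_mass, n))
csu_mc = sim.std()
```

For a nearly linear measurement equation with modest relative uncertainties, the two estimates agree closely; the Monte Carlo route becomes preferable when distributions are skewed or the equation is strongly nonlinear.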
Sonic Boom Pressure Signature Uncertainty Calculation and Propagation to Ground Noise
West, Thomas K., IV; Bretl, Katherine N.; Walker, Eric L.; Pinier, Jeremy T.
2015-01-01
The objective of this study was to outline an approach for the quantification of uncertainty in sonic boom measurements and to investigate the effect of various near-field uncertainty representation approaches on ground noise predictions. These approaches included a symmetric versus asymmetric uncertainty band representation and a dispersion technique based on a partial-sum Fourier series that allows for the inclusion of random error sources in the uncertainty. The near-field uncertainty was propagated to the ground level, along with additional uncertainty in the propagation modeling. Estimates of perceived loudness were obtained for the various types of uncertainty representation in the near field. Analyses were performed on three configurations of interest to the sonic boom community: the SEEB-ALR, the 69° Delta Wing, and the LM 1021-01. Results showed that representation of the near-field uncertainty plays a key role in ground noise predictions. Using a Fourier-series-based dispersion approach can double the amount of uncertainty in the ground noise compared to a pure bias representation. Compared to previous computational fluid dynamics results, uncertainty in ground noise predictions was greater when considering the near-field experimental uncertainty.
Directory of Open Access Journals (Sweden)
J. Schwarz
2018-05-01
Global Navigation Satellite System (GNSS) radio occultation (RO) observations are highly accurate, long-term stable data sets and are globally available as a continuous record from 2001. Essential climate variables for the thermodynamic state of the free atmosphere – such as pressure, temperature, and tropospheric water vapor profiles (involving background information) – can be derived from these records, which therefore have the potential to serve as climate benchmark data. However, to exploit this potential, atmospheric profile retrievals need to be very accurate and the remaining uncertainties quantified and traced throughout the retrieval chain from raw observations to essential climate variables. The new Reference Occultation Processing System (rOPS) at the Wegener Center aims to deliver such an accurate RO retrieval chain with integrated uncertainty propagation. Here we introduce and demonstrate the algorithms implemented in the rOPS for uncertainty propagation from excess phase to atmospheric bending angle profiles, for estimated systematic and random uncertainties, including vertical error correlations and resolution estimates. We estimated systematic uncertainty profiles with the same operators as used for the basic state profiles retrieval. The random uncertainty is traced through covariance propagation and validated using Monte Carlo ensemble methods. The algorithm performance is demonstrated using test day ensembles of simulated data as well as real RO event data from the satellite missions CHAllenging Minisatellite Payload (CHAMP); Constellation Observing System for Meteorology, Ionosphere, and Climate (COSMIC); and Meteorological Operational Satellite A (MetOp). The results of the Monte Carlo validation show that our covariance propagation delivers correct uncertainty quantification from excess phase to bending angle profiles. The results from the real RO event ensembles demonstrate that the new uncertainty estimation chain performs
Schwarz, Jakob; Kirchengast, Gottfried; Schwaerz, Marc
2018-05-01
Global Navigation Satellite System (GNSS) radio occultation (RO) observations are highly accurate, long-term stable data sets and are globally available as a continuous record from 2001. Essential climate variables for the thermodynamic state of the free atmosphere - such as pressure, temperature, and tropospheric water vapor profiles (involving background information) - can be derived from these records, which therefore have the potential to serve as climate benchmark data. However, to exploit this potential, atmospheric profile retrievals need to be very accurate and the remaining uncertainties quantified and traced throughout the retrieval chain from raw observations to essential climate variables. The new Reference Occultation Processing System (rOPS) at the Wegener Center aims to deliver such an accurate RO retrieval chain with integrated uncertainty propagation. Here we introduce and demonstrate the algorithms implemented in the rOPS for uncertainty propagation from excess phase to atmospheric bending angle profiles, for estimated systematic and random uncertainties, including vertical error correlations and resolution estimates. We estimated systematic uncertainty profiles with the same operators as used for the basic state profiles retrieval. The random uncertainty is traced through covariance propagation and validated using Monte Carlo ensemble methods. The algorithm performance is demonstrated using test day ensembles of simulated data as well as real RO event data from the satellite missions CHAllenging Minisatellite Payload (CHAMP); Constellation Observing System for Meteorology, Ionosphere, and Climate (COSMIC); and Meteorological Operational Satellite A (MetOp). The results of the Monte Carlo validation show that our covariance propagation delivers correct uncertainty quantification from excess phase to bending angle profiles. The results from the real RO event ensembles demonstrate that the new uncertainty estimation chain performs robustly. Together
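The validation pattern described here — closed-form covariance propagation cross-checked by a Monte Carlo ensemble — can be sketched for one linearized retrieval step. The operator and the exponential vertical-correlation model below are stand-ins, not rOPS internals:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy linearized retrieval step: output profile y = A @ x
# (a stand-in for one link of the excess-phase -> bending-angle chain)
n_in, n_out = 40, 30
A = rng.normal(0.0, 1.0, (n_out, n_in)) / np.sqrt(n_in)

# Input random-uncertainty covariance with exponential vertical correlations
lev = np.arange(n_in)
Sx = 0.1**2 * np.exp(-np.abs(lev[:, None] - lev[None, :]) / 5.0)

# Closed-form covariance propagation through the linear(ized) operator
Sy = A @ Sx @ A.T

# Monte Carlo ensemble validation of the propagated covariance
ens = rng.multivariate_normal(np.zeros(n_in), Sx, size=50000)
Sy_mc = np.cov((ens @ A.T).T)

max_rel_err = np.abs(Sy_mc - Sy).max() / np.abs(Sy).max()
```

The propagated matrix `Sy` carries the output vertical error correlations directly, which a per-level standard-deviation bookkeeping would lose.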
Error Estimation and Uncertainty Propagation in Computational Fluid Mechanics
Zhu, J. Z.; He, Guowei; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
Numerical simulation has now become an integral part of the engineering design process. Critical design decisions are routinely made based on simulation results and conclusions. Verification and validation of the reliability of the numerical simulation is therefore vitally important in the engineering design process. We propose to develop theories and methodologies that can automatically provide quantitative information about the reliability of the numerical simulation by estimating numerical approximation errors, computational model induced errors and the uncertainties contained in the mathematical models, so that the reliability of the numerical simulation can be verified and validated. We also propose to develop and implement methodologies and techniques that can control the error and uncertainty during the numerical simulation so that its reliability can be improved.
Propagation of uncertainties in problems of structural reliability
International Nuclear Information System (INIS)
Mazumdar, M.; Marshall, J.A.; Chay, S.C.
1978-01-01
The problem of controlling a variable Y such that the probability of its exceeding a specified design limit L is very small is treated. This variable is related to a set of random variables X_i by means of a known function Y = f(X_i). The following approximate methods are considered for estimating the propagation of error in the X_i through the function f(·): linearization; method of moments; Monte Carlo methods; numerical integration. Response surface and associated design-of-experiments problems as well as statistical inference problems are discussed. (Auth.)
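A sketch of the first method on the list: linearize Y = f(X_i) about the mean, approximate P(Y > L) from the resulting normal distribution, and compare against a Monte Carlo reference. The response function, input moments, and limit L are hypothetical:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(11)

# Hypothetical response Y = f(X1, X2) and design limit L
def f(x1, x2):
    return x1 * np.exp(0.1 * x2)

mx = np.array([10.0, 1.0])   # means of X1, X2
sx = np.array([0.5, 0.3])    # standard deviations
L = 12.5

# Linearization: first-order mean and variance of Y
y0 = f(mx[0], mx[1])
grad = np.array([np.exp(0.1 * mx[1]), 0.1 * y0])  # df/dx1, df/dx2 at the mean
var_y = np.sum((grad * sx) ** 2)

# Normal approximation to the exceedance probability P(Y > L)
p_lin = 0.5 * (1.0 - erf((L - y0) / sqrt(2.0 * var_y)))

# Monte Carlo reference
n = 500000
y = f(rng.normal(mx[0], sx[0], n), rng.normal(mx[1], sx[1], n))
p_mc = (y > L).mean()
```

The residual gap between `p_lin` and `p_mc` is the nonlinearity (skewness) that linearization cannot capture, which is what motivates the higher-order method of moments and numerical integration on the same list.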
International Nuclear Information System (INIS)
Frosio, Thomas; Bonaccorsi, Thomas; Blaise, Patrick
2016-01-01
Highlights: • Hybrid methods are developed for uncertainty propagation. • These methods take into account the flux perturbation in the coupled problem. • We show that OAT and MC methods give coherent results, except for Pearson correlations. • Local sensitivity analysis is performed. - Abstract: A novel method has been developed to calculate sensitivity coefficients in the coupled Boltzmann/Bateman problem for nuclear data (ND) uncertainty propagation on the reactivity. Different uncertainty propagation methodologies, such as One-At-a-Time (OAT) and hybrid Monte Carlo/deterministic methods, have been tested and are discussed on an actual example of an ND uncertainty problem on a Material Testing Reactor (MTR) benchmark. Those methods, unlike total Monte Carlo (MC) sampling for uncertainty propagation and quantification (UQ), allow obtaining sensitivity coefficients, as well as Bravais–Pearson correlation values between Boltzmann and Bateman, during the depletion calculation for global neutronics parameters such as the effective multiplication coefficient. The methodologies are compared to a pure MC sampling method, usually considered as the “reference” method. It is shown that these methodologies can seriously underestimate propagated variances when Bravais–Pearson correlations on ND are not taken into account in the UQ process.
Analytical and Numerical Modeling of Tsunami Wave Propagation for double layer state in Bore
Yuvaraj, V.; Rajasekaran, S.; Nagarajan, D.
2018-04-01
A tsunami wave enters the river bore in a landslide scenario. Tsunami wave propagation is described in two-layer states. The velocity and amplitude of the tsunami wave propagation are calculated using the double layer. Numerical and analytical solutions are given for the nonlinear equation of motion of wave propagation in a bore.
International Nuclear Information System (INIS)
Campolina, Daniel de A.M.; Pereira, Claubia; Veloso, Maria Auxiliadora F.
2013-01-01
For all the physical components that comprise a nuclear system there is an uncertainty. Assessing the impact of uncertainties in the simulation of fissionable material systems is essential for the best-estimate calculations that have been replacing conservative model calculations as computational power increases. The propagation of uncertainty in a simulation using a sampling-based method is recent because of the huge computational effort required. In this work a sample space of MCNP calculations was used as a black-box model to propagate the uncertainty of system parameters. The efficiency of the method was compared to a conservative method. The uncertainties considered in the reactor input parameters were non-neutronic ones, including geometry dimensions and density. The effect of the uncertainties on the effective multiplication factor of the system was analyzed with respect to the possibility of using many uncertainties in the same input. If the case includes more than 46 parameters with uncertainty in the same input, the sampling-based method proves to be more efficient than the conservative method. (author)
Epistemic uncertainty propagation in energy flows between structural vibrating systems
Xu, Menghui; Du, Xiaoping; Qiu, Zhiping; Wang, Chong
2016-03-01
A dimension-wise method for predicting fuzzy energy flows between structural vibrating systems coupled by joints with epistemic uncertainties is established. Based on its Legendre polynomial approximation at α=0, both the minimum and maximum point vectors of the energy flow of interest are calculated dimension by dimension within the space spanned by the interval parameters determined by the fuzzy ones at α=0, and the resulting interval bounds are used to assemble the concerned fuzzy energy flows. Besides the proposed method, the vertex method as well as two current methods are also applied. Comparisons among the results of the different methods are carried out on two numerical examples, and the accuracy of all methods is simultaneously verified by Monte Carlo simulation.
A simplified analysis of uncertainty propagation in inherently controlled ATWS events
International Nuclear Information System (INIS)
Wade, D.C.
1987-01-01
The quasi-static approach can be used to provide useful insight concerning the propagation of uncertainties in the inherent response to ATWS events. At issue is how uncertainties in the reactivity coefficients and in the thermal-hydraulics and materials properties propagate to yield uncertainties in the asymptotic temperatures attained upon inherent shutdown. The basic notion to be quantified is that many of the same physical phenomena contribute to both the reactivity increase of power reduction and the reactivity decrease of core temperature rise. Since these reactivities cancel by definition, a good deal of uncertainty cancellation must also occur of necessity. For example, if the Doppler coefficient is overpredicted, too large a positive reactivity insertion is predicted upon power reduction and collapse of the ΔT across the fuel pin. However, too large a negative reactivity is also predicted upon the compensating increase in the isothermal core average temperature - which includes the fuel Doppler effect
Servin, Christian
2015-01-01
On various examples ranging from geosciences to environmental sciences, this book explains how to generate an adequate description of uncertainty, how to justify semiheuristic algorithms for processing uncertainty, and how to make these algorithms more computationally efficient. It explains in what sense the existing approach to uncertainty as a combination of random and systematic components is only an approximation, presents a more adequate three-component model with an additional periodic error component, and explains how uncertainty propagation techniques can be extended to this model. The book provides a justification for a practically efficient heuristic technique (based on fuzzy decision-making). It explains how the computational complexity of uncertainty processing can be reduced. The book also shows how to take into account that in real life, the information about uncertainty is often only partially known, and, on several practical examples, explains how to extract the missing information about uncer...
Car, Nicholas; Cox, Simon; Fitch, Peter
2015-04-01
With earth-science datasets increasingly being published to enable re-use in projects disassociated from the original data acquisition or generation, there is an urgent need for associated metadata to be connected, in order to guide their application. In particular, provenance traces should support the evaluation of data quality and reliability. However, while standards for describing provenance are emerging (e.g. PROV-O), these do not include the necessary statistical descriptors and confidence assessments. UncertML has a mature conceptual model that may be used to record uncertainty metadata. However, by itself UncertML does not support the representation of uncertainty of multi-part datasets, and provides no direct way of associating the uncertainty information - metadata in relation to a dataset - with dataset objects. We present a method to address both these issues by combining UncertML with PROV-O, and delivering the resulting uncertainty-enriched provenance traces through the Linked Data API. UncertProv extends the PROV-O provenance ontology with an RDF formulation of the UncertML conceptual model elements, adds further elements to support uncertainty representation without a conceptual model, and supports the integration of UncertML through links to documents. The Linked Data API provides a systematic way of navigating from dataset objects to their UncertProv metadata and back again. The Linked Data API's 'views' capability enables access to UncertML and non-UncertML uncertainty metadata representations for a dataset. With this approach, it is possible to access and navigate the uncertainty metadata associated with a published dataset using standard semantic web tools, such as SPARQL queries. Where the uncertainty data follow the UncertML model they can be automatically interpreted and may also support automatic uncertainty propagation. Repositories wishing to enable uncertainty propagation for all datasets must ensure that all elements that are associated with uncertainty
Multi-Fidelity Uncertainty Propagation for Cardiovascular Modeling
Fleeter, Casey; Geraci, Gianluca; Schiavazzi, Daniele; Kahn, Andrew; Marsden, Alison
2017-11-01
Hemodynamic models are successfully employed in the diagnosis and treatment of cardiovascular disease with increasing frequency. However, their widespread adoption is hindered by our inability to account for uncertainty stemming from multiple sources, including boundary conditions, vessel material properties, and model geometry. In this study, we propose a stochastic framework which leverages three cardiovascular model fidelities: 3D, 1D and 0D models. 3D models are generated from patient-specific medical imaging (CT and MRI) of aortic and coronary anatomies using the SimVascular open-source platform, with fluid structure interaction simulations and Windkessel boundary conditions. 1D models consist of a simplified geometry automatically extracted from the 3D model, while 0D models are obtained from equivalent circuit representations of blood flow in deformable vessels. Multi-level and multi-fidelity estimators from Sandia's open-source DAKOTA toolkit are leveraged to reduce the variance in our estimated output quantities of interest while maintaining a reasonable computational cost. The performance of these estimators in terms of computational cost reductions is investigated for a variety of output quantities of interest, including global and local hemodynamic indicators. Sandia National Labs is a multimission laboratory managed and operated by NTESS, LLC, for the U.S. DOE under contract DE-NA0003525. Funding for this project provided by NIH-NIBIB R01 EB018302.
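The variance-reduction mechanism behind multilevel/multi-fidelity estimators can be sketched with a control-variate pairing of an "expensive" and a cheap correlated model. Both toy models and the sample budgets below are assumptions, not the SimVascular/DAKOTA setup:

```python
import numpy as np

rng = np.random.default_rng(5)

# Two model fidelities sharing the same random input (both hypothetical)
def hi_fi(x):                  # "expensive" high-fidelity model
    return np.sin(x) + 0.05 * x**2

def lo_fi(x):                  # cheap, strongly correlated low-fidelity model
    return 0.9 * np.sin(x)

mu, sig = 0.8, 0.3             # uncertain input distribution

# Plain Monte Carlo restricted to the affordable high-fidelity budget
N_hi, N_lo = 200, 20000
x_hi = rng.normal(mu, sig, N_hi)
yh, yl = hi_fi(x_hi), lo_fi(x_hi)
plain = yh.mean()

# Control-variate / multi-fidelity estimator:
#   E[hi] ~= mean(hi - a*lo over N_hi runs) + a * mean(lo over N_lo cheap runs)
a = np.cov(yh, yl)[0, 1] / yl.var(ddof=1)   # optimal control coefficient
x_lo = rng.normal(mu, sig, N_lo)
mf = (yh - a * yl).mean() + a * lo_fi(x_lo).mean()
```

Because the low-fidelity model absorbs most of the output variance, the corrected estimator attains a much smaller error than plain Monte Carlo at the same high-fidelity cost, which is the design point of the multi-fidelity framework described above.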
Real-Time Optimal Flood Control Decision Making and Risk Propagation Under Multiple Uncertainties
Zhu, Feilin; Zhong, Ping-An; Sun, Yimeng; Yeh, William W.-G.
2017-12-01
Multiple uncertainties exist in the optimal flood control decision-making process, presenting risks involving flood control decisions. This paper defines the main steps in optimal flood control decision making that constitute the Forecast-Optimization-Decision Making (FODM) chain. We propose a framework for supporting optimal flood control decision making under multiple uncertainties and evaluate risk propagation along the FODM chain from a holistic perspective. To deal with uncertainties, we employ stochastic models at each link of the FODM chain. We generate synthetic ensemble flood forecasts via the martingale model of forecast evolution. We then establish a multiobjective stochastic programming with recourse model for optimal flood control operation. The Pareto front under uncertainty is derived via the constraint method coupled with a two-step process. We propose a novel SMAA-TOPSIS model for stochastic multicriteria decision making. Then we propose the risk assessment model, the risk of decision-making errors and rank uncertainty degree to quantify the risk propagation process along the FODM chain. We conduct numerical experiments to investigate the effects of flood forecast uncertainty on optimal flood control decision making and risk propagation. We apply the proposed methodology to a flood control system in the Daduhe River basin in China. The results indicate that the proposed method can provide valuable risk information in each link of the FODM chain and enable risk-informed decisions with higher reliability.
International Nuclear Information System (INIS)
Campolina, Daniel de Almeida Magalhães
2015-01-01
There is an uncertainty for all the components that comprise the model of a nuclear system. Assessing the impact of uncertainties in the simulation of fissionable material systems is essential for a realistic calculation that has been replacing conservative model calculations as the computational power increases. The propagation of uncertainty in a simulation using a Monte Carlo code by sampling the input parameters is recent because of the huge computational effort required. By analyzing the propagated uncertainty to the effective neutron multiplication factor (k eff ), the effects of the sample size, computational uncertainty and efficiency of a random number generator to represent the distributions that characterize physical uncertainty in a light water reactor was investigated. A program entitled GB s ample was implemented to enable the application of the random sampling method, which requires an automated process and robust statistical tools. The program was based on the black box model and the MCNPX code was used in and parallel processing for the calculation of particle transport. The uncertainties considered were taken from a benchmark experiment in which the effects in k eff due to physical uncertainties is done through a conservative method. In this work a script called GB s ample was implemented to automate the sampling based method, use multiprocessing and assure the necessary robustness. It has been found the possibility of improving the efficiency of the random sampling method by selecting distributions obtained from a random number generator in order to obtain a better representation of uncertainty figures. After the convergence of the method is achieved, in order to reduce the variance of the uncertainty propagated without increase in computational time, it was found the best number o components to be sampled. It was also observed that if the sampling method is used to calculate the effect on k eff due to physical uncertainties reported by
Understanding uncertainty propagation in life cycle assessments of waste management systems
DEFF Research Database (Denmark)
Bisinella, Valentina; Conradsen, Knut; Christensen, Thomas Højlund
2015-01-01
Uncertainty analysis in Life Cycle Assessments (LCAs) of waste management systems often turns out obscure and complex, with key parameters rarely determined on a case-by-case basis. The paper shows an application of a simplified approach to uncertainty, coupled with a Global Sensitivity Analysis (GSA) perspective, on three alternative waste management systems for Danish single-family household waste. The approach provides a fast and systematic method to select the most important parameters in the LCAs and to understand their propagation and contribution to uncertainty.
Ruiz, Rafael O.; Meruane, Viviana
2017-06-01
The goal of this work is to describe a framework to propagate uncertainties in piezoelectric energy harvesters (PEHs). These uncertainties are related to the incomplete knowledge of the model parameters. The framework presented could be employed to conduct prior robust stochastic predictions. The prior analysis assumes a known probability density function for the uncertain variables and propagates the uncertainties to the output voltage. The framework is particularized to evaluate the behavior of the frequency response functions (FRFs) in PEHs, while its implementation is illustrated by the use of different unimorph and bimorph PEHs subjected to different scenarios: free of uncertainties, common uncertainties, and uncertainties as a product of imperfect clamping. The common variability associated with the PEH parameters is tabulated and reported. A global sensitivity analysis is conducted to identify the Sobol indices. Results indicate that the elastic modulus, density, and thickness of the piezoelectric layer are the parameters most relevant to the output variability. The importance of including the model parameter uncertainties in the estimation of the FRFs is revealed. In this sense, the present framework constitutes a powerful tool in the robust design and prediction of PEH performance.
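A prior propagation of parameter uncertainty to an FRF, as described above, can be sketched with plain Monte Carlo sampling. The single-degree-of-freedom receptance below is a simplified stand-in for the electromechanical harvester model, and all parameter values and variabilities are assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def frf(omega, m, c, k):
    # Receptance magnitude of a single-DOF oscillator; a simplified
    # stand-in for the electromechanical FRF of a harvester.
    return 1.0 / np.sqrt((k - m * omega**2) ** 2 + (c * omega) ** 2)

omega = np.linspace(1.0, 200.0, 400)   # rad/s, frequency grid
n = 500                                # prior Monte Carlo samples
m = rng.normal(0.01, 0.0005, n)        # kg, assumed variability
k = rng.normal(100.0, 5.0, n)          # N/m, assumed variability
c = 0.05                               # N s/m, kept deterministic

H = np.array([frf(omega, mi, c, ki) for mi, ki in zip(m, k)])
H_mean = H.mean(axis=0)                # expected FRF
H_std = H.std(axis=0, ddof=1)          # pointwise uncertainty band
```

The pointwise mean and standard deviation give the kind of uncertainty band around the FRF that the framework delivers; a Sobol analysis would then apportion `H_std` among the individual parameters.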
International Nuclear Information System (INIS)
Frosio, T.; Bonaccorsi, T.; Blaise, P.
2017-01-01
The precise estimation of Pearson correlations, also called 'representativity' coefficients, between core configurations is fundamental for properly assessing the propagation of nuclear data (ND) uncertainties on integral parameters such as k-eff, power distributions, or reactivity coefficients. In this paper, a traditional adjoint method is used to propagate ND uncertainty on reactivity and reactivity coefficients and to estimate correlations between different states of the core. We show that neglecting those correlations induces a loss of information in the final uncertainty. We also show that using approximate Pearson values does not lead to a significant error in the model. This calculation is made for reactivity at the beginning of life and can be extended to other parameters during depletion calculations. (authors)
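Adjoint-based propagation of this kind typically combines sensitivity vectors with the ND covariance matrix via the "sandwich" rule, and the representativity (Pearson) coefficient between two core states follows from the same ingredients. A minimal sketch, with assumed covariance and sensitivity values rather than real nuclear data:

```python
import numpy as np

# Covariance matrix M of two nuclear-data parameters (assumed values)
M = np.array([[4.0e-6, 1.0e-6],
              [1.0e-6, 9.0e-6]])

# Sensitivity vectors of the reactivity of two core states with respect
# to those parameters (assumed values, e.g. from adjoint calculations)
S1 = np.array([120.0, 30.0])
S2 = np.array([100.0, 45.0])

var1 = S1 @ M @ S1              # sandwich rule: variance for state 1
var2 = S2 @ M @ S2              # sandwich rule: variance for state 2
cov12 = S1 @ M @ S2             # shared nuclear-data contribution
pearson = cov12 / np.sqrt(var1 * var2)  # representativity coefficient
```

Because both states inherit their uncertainty from the same nuclear data, `pearson` is close to one here; ignoring it (treating the states as independent) would misstate the uncertainty of any difference between them, which is the loss of information the abstract refers to.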
Mangado, Nerea; Piella, Gemma; Noailly, Jérôme; Pons-Prats, Jordi; Ballester, Miguel Ángel González
2016-01-01
Computational modeling has become a powerful tool in biomedical engineering thanks to its potential to simulate coupled systems. However, real parameters are usually not accurately known, and variability is inherent in living organisms. To cope with this, probabilistic tools, statistical analysis and stochastic approaches have been used. This article aims to review the analysis of uncertainty and variability in the context of finite element modeling in biomedical engineering. Characterization techniques and propagation methods are presented, as well as examples of their applications in biomedical finite element simulations. Uncertainty propagation methods, both non-intrusive and intrusive, are described. Finally, pros and cons of the different approaches and their use in the scientific community are presented. This leads us to identify future directions for research and methodological development of uncertainty modeling in biomedical engineering.
Energy Technology Data Exchange (ETDEWEB)
Rahman, M.M.; Hossain, M.M.; Crosby, D.G.; Rahman, M.K.; Rahman, S.S. [School of Petroleum Engineering, The University of New South Wales, 2052 Sydney (Australia)
2002-08-01
This paper presents results of a comprehensive study involving analytical, numerical and experimental investigations into transverse fracture propagation from horizontal wells. The propagation of transverse hydraulic fractures from horizontal wells is simulated and investigated in the laboratory using carefully designed experimental setups. Closed-form analytical theories for Mode I (opening) stress intensity factors for idealized fracture geometries are reviewed, and a boundary element-based model is used herein to investigate non-planar propagation of fractures. Using the mixed-mode fracture propagation criterion of the model, a reasonable agreement is found between the modeled fractures and the laboratory-tested fractures with respect to fracture geometry, net fracture pressures and fracture propagation paths. These results suggest that the propagation of multiple fractures requires higher net pressures than a single fracture, the underlying reason for which is theoretically justified on the basis of the local stress distribution.
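For the idealized geometries mentioned, Mode I stress intensity factors have closed forms. As an illustration with assumed material and loading values (not the paper's data), the sketch below evaluates the classical penny-shaped-crack formula K_I = 2p·sqrt(a/π) and applies a simple opening-mode propagation check:

```python
import math

def k1_penny_crack(net_pressure, radius):
    # Closed-form Mode I stress intensity factor for an idealized
    # penny-shaped crack under uniform internal net pressure:
    #   K_I = 2 * p * sqrt(a / pi)
    return 2.0 * net_pressure * math.sqrt(radius / math.pi)

p = 2.0e6     # Pa, assumed net fracture pressure
a = 0.05      # m, assumed crack radius
K_Ic = 1.0e6  # Pa*sqrt(m), assumed fracture toughness

K_I = k1_penny_crack(p, a)
propagates = K_I >= K_Ic   # simple opening-mode propagation criterion
```

With these assumed numbers the crack does not yet propagate; raising the net pressure or crack radius until `K_I` reaches `K_Ic` mirrors the pressure-driven growth studied in the experiments.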
Analytical Lie-algebraic solution of a 3D sound propagation problem in the ocean
Energy Technology Data Exchange (ETDEWEB)
Petrov, P.S., E-mail: petrov@poi.dvo.ru [Il'ichev Pacific Oceanological Institute, 43 Baltiyskaya str., Vladivostok, 690041 (Russian Federation); Prants, S.V., E-mail: prants@poi.dvo.ru [Il'ichev Pacific Oceanological Institute, 43 Baltiyskaya str., Vladivostok, 690041 (Russian Federation); Petrova, T.N., E-mail: petrova.tn@dvfu.ru [Far Eastern Federal University, 8 Sukhanova str., 690950, Vladivostok (Russian Federation)
2017-06-21
The problem of sound propagation in a shallow sea with variable bottom slope is considered. The sound pressure field produced by a time-harmonic point source in such an inhomogeneous 3D waveguide is expressed in the form of a modal expansion. The expansion coefficients are computed using the adiabatic mode parabolic equation theory. The mode parabolic equations are solved explicitly, and analytical expressions for the modal coefficients are obtained using a Lie-algebraic technique. - Highlights: • A group-theoretical approach is applied to a problem of sound propagation in a shallow sea with variable bottom slope. • An analytical solution of this problem is obtained in the form of a modal expansion with analytical expressions for the coefficients. • Our result is the only analytical solution of the 3D sound propagation problem with no translational invariance. • This solution can be used for the validation of numerical propagation models.
International Nuclear Information System (INIS)
Mullor, R.; Sanchez, A.; Martorell, S.; Martinez-Alzamora, N.
2011-01-01
Safety-related systems performance optimization is classically based on quantifying the effects that testing and maintenance activities have on reliability and cost (R+C). However, R+C quantification is often incomplete in the sense that important uncertainties may not be considered. A considerable number of studies published in the last decade in the field of R+C-based optimization under uncertainty have demonstrated that including uncertainties in the optimization gives the decision maker insight into how uncertain the R+C results are and how much this uncertainty matters, as it can change the outcome of the decision-making process. Several methods of uncertainty propagation based on the theory of tolerance regions have been proposed in the literature, depending on the particular characteristics of the output variables and their relations. In this context, this paper focuses on the application of non-parametric and parametric methods to analyze uncertainty propagation, implemented on a multi-objective optimization problem where reliability and cost act as decision criteria and maintenance intervals act as decision variables. Finally, a comparison of the results of these applications and the conclusions obtained are presented.
Energy Technology Data Exchange (ETDEWEB)
Mullor, R. [Dpto. Estadistica e Investigacion Operativa, Universidad Alicante (Spain); Sanchez, A., E-mail: aisanche@eio.upv.e [Dpto. Estadistica e Investigacion Operativa Aplicadas y Calidad, Universidad Politecnica Valencia, Camino de Vera s/n 46022 (Spain); Martorell, S. [Dpto. Ingenieria Quimica y Nuclear, Universidad Politecnica Valencia (Spain); Martinez-Alzamora, N. [Dpto. Estadistica e Investigacion Operativa Aplicadas y Calidad, Universidad Politecnica Valencia, Camino de Vera s/n 46022 (Spain)
2011-06-15
Analytical solution for the correlator with Gribov propagators
Czech Academy of Sciences Publication Activity Database
Šauli, Vladimír
2016-01-01
Vol. 14, No. 1 (2016), pp. 570-578 E-ISSN 2391-5471 Institutional support: RVO:61389005 Keywords: confinement * Gribov propagator * Quantum Chromodynamics * dispersion relations * quantum field theory * Green's functions Subject RIV: BE - Theoretical Physics Impact factor: 0.745, year: 2016
International Nuclear Information System (INIS)
Sabouri, Pouya
2013-01-01
This thesis presents a comprehensive study of sensitivity/uncertainty analysis of reactor performance parameters (e.g. the k-effective) with respect to the base nuclear data from which they are computed. The analysis starts at the fundamental step, the Evaluated Nuclear Data File and the uncertainties inherently associated with the data it contains, available in the form of variance/covariance matrices. We show that when a methodical and consistent computation of sensitivity is performed, conventional deterministic formalisms can be sufficient to propagate nuclear data uncertainties with the level of accuracy obtained by the most advanced tools, such as state-of-the-art Monte Carlo codes. By applying our methodology to three exercises proposed by the OECD (Uncertainty Analysis for Criticality Safety Assessment Benchmarks), we provide insights into the underlying physical phenomena associated with the formalisms used. (author)
Propagation of statistical and nuclear data uncertainties in Monte Carlo burn-up calculations
International Nuclear Information System (INIS)
Garcia-Herranz, Nuria; Cabellos, Oscar; Sanz, Javier; Juan, Jesus; Kuijper, Jim C.
2008-01-01
Two methodologies to propagate the uncertainties on the nuclide inventory in combined Monte Carlo-spectrum and burn-up calculations are presented, based on sensitivity/uncertainty and random sampling techniques (uncertainty Monte Carlo method). Both enable the assessment of the impact of uncertainties in the nuclear data as well as uncertainties due to the statistical nature of the Monte Carlo neutron transport calculation. The methodologies are implemented in our MCNP-ACAB system, which combines the neutron transport code MCNP-4C and the inventory code ACAB. A high burn-up benchmark problem is used to test the MCNP-ACAB performance in inventory predictions, with no uncertainties. A good agreement is found with the results of other participants. This benchmark problem is also used to assess the impact of nuclear data uncertainties and statistical flux errors in high burn-up applications. A detailed calculation is performed to evaluate the effect of cross-section uncertainties in the inventory prediction, taking into account the temporal evolution of the neutron flux level and spectrum. Very large uncertainties are found at the unusually high burn-up of this exercise (800 MWd/kgHM). To compare the impact of the statistical errors in the calculated flux with respect to the cross-section uncertainties, a simplified problem is considered, taking a constant neutron flux level and spectrum. It is shown that, provided that the flux statistical deviations in the Monte Carlo transport calculation do not exceed a given value, the effect of the flux errors on the calculated isotopic inventory is negligible (even at very high burn-up) compared to the effect of the large cross-section uncertainties available at present in the data files.
Propagation of statistical and nuclear data uncertainties in Monte Carlo burn-up calculations
Energy Technology Data Exchange (ETDEWEB)
Garcia-Herranz, Nuria [Departamento de Ingenieria Nuclear, Universidad Politecnica de Madrid, UPM (Spain)], E-mail: nuria@din.upm.es; Cabellos, Oscar [Departamento de Ingenieria Nuclear, Universidad Politecnica de Madrid, UPM (Spain); Sanz, Javier [Departamento de Ingenieria Energetica, Universidad Nacional de Educacion a Distancia, UNED (Spain); Juan, Jesus [Laboratorio de Estadistica, Universidad Politecnica de Madrid, UPM (Spain); Kuijper, Jim C. [NRG - Fuels, Actinides and Isotopes Group, Petten (Netherlands)
2008-04-15
New strategies for quantifying and propagating nuclear data uncertainty in CUSA
International Nuclear Information System (INIS)
Zhao, Qiang; Zhang, Chunyan; Hao, Chen; Li, Fu; Wang, Dongyong; Yu, Yan
2016-01-01
Highlights: • An efficient sampling method based on LHS combined with Cholesky decomposition conversion is proposed. • A code for generating multi-group covariance matrices has been developed. • The uncertainty and sensitivity results of CUSA agree well with TSUNAMI-1D. - Abstract: The uncertainties of nuclear cross sections are propagated to the key parameters of a nuclear reactor core through the transport calculation. The statistical sampling method can be used to quantify and propagate nuclear data uncertainty in nuclear reactor physics calculations. In order to use the statistical sampling method, two key technical problems, the method of generating multi-group covariance matrices and the sampling method, should be treated reasonably and efficiently. In this paper, a method of transforming a nuclear cross-section covariance matrix in multi-group form into users' group structures based on the flat-flux approximation has been studied in depth. Most notably, an efficient sampling method has been proposed, based on Latin Hypercube Sampling (LHS) combined with Cholesky decomposition conversion. Based on these methods, two modules named T-COCCO and GUIDE have been developed and successfully added to the code for uncertainty and sensitivity analysis (CUSA). The new modules have been verified individually. Numerical results for the TMI-1 pin-cell case are presented and compared to TSUNAMI-1D. The comparison of the results further supports that the methods and the computational tool developed in this work can be used to conduct sensitivity and uncertainty analysis for nuclear cross sections.
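The LHS-plus-Cholesky idea can be sketched in a few lines: stratify the unit interval per dimension, map the strata to standard normals, and impose the target covariance through its Cholesky factor. The covariance matrix and nominal cross sections below are illustrative assumptions, not CUSA data, and a centered (midpoint) LHS variant is used for simplicity:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(1)
n, d = 100, 2                          # samples, cross-section groups

# Centered Latin Hypercube: one stratum midpoint per sample,
# shuffled independently in each dimension
strata = np.tile(np.arange(n), (d, 1))
u = (rng.permuted(strata, axis=1).T + 0.5) / n

# Map to independent standard normals, then impose the target
# correlation via the Cholesky factor of the covariance matrix
z = np.vectorize(NormalDist().inv_cdf)(u)
cov = np.array([[1.0, 0.6],
                [0.6, 1.0]])           # assumed correlation of two groups
L = np.linalg.cholesky(cov)
correlated = z @ L.T

# Relative perturbation factors applied to nominal multi-group data
nominal = np.array([2.1, 0.4])         # barns, illustrative values
samples = nominal * (1.0 + 0.05 * correlated)
```

Each row of `samples` is one perturbed cross-section set to feed into a transport calculation; the stratification gives better coverage of the input space than the same number of plain random draws.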
New strategies for quantifying and propagating nuclear data uncertainty in CUSA
Energy Technology Data Exchange (ETDEWEB)
Zhao, Qiang; Zhang, Chunyan [Fundamental Science on Nuclear Safety and Simulation Technology Laboratory, College of Nuclear Science and Technology, Harbin Engineering University, Harbin (China); Hao, Chen, E-mail: haochen.heu@163.com [Fundamental Science on Nuclear Safety and Simulation Technology Laboratory, College of Nuclear Science and Technology, Harbin Engineering University, Harbin (China); Li, Fu [Institute of Nuclear and New Energy Technology(INET), Collaborative Innovation Center of Advanced Nuclear Energy Technology, Key Laboratory of Advanced Reactor Engineering and Safety of Ministry of Education, Tsinghua University, Beijing (China); Wang, Dongyong [Fundamental Science on Nuclear Safety and Simulation Technology Laboratory, College of Nuclear Science and Technology, Harbin Engineering University, Harbin (China); School of Nuclear Science and Technology, Xi’an Jiaotong University, Xi’an (China); Yu, Yan [Fundamental Science on Nuclear Safety and Simulation Technology Laboratory, College of Nuclear Science and Technology, Harbin Engineering University, Harbin (China)
2016-10-15
International Nuclear Information System (INIS)
Amendola, A.; Astolfi, M.; Lisanti, B.
1983-01-01
The report describes how to use the codes: MUP (Monte Carlo Uncertainty Propagation) for uncertainty analysis by Monte Carlo simulation, including correlation analysis, extreme value identification and study of selected ranges of the variable space; CEC-DES (Central Composite Design) for building experimental matrices according to the requirements of Central Composite and Factorial Experimental Designs; and STRADE (Stratified Random Design) for experimental designs based on Latin Hypercube Sampling techniques. Application fields of the codes are probabilistic risk assessment, experimental design, sensitivity analysis and system identification problems
An analytical solution for tidal propagation in the Yangtze Estuary, China
Zhang, E.F.; Savenije, H.H.G.; Chen, S.L.; Mao, X.H.
2012-01-01
An analytical model for tidal dynamics has been applied to the Yangtze Estuary for the first time, to describe the tidal propagation in this large and typically branched estuary with three-order branches and four outlets to the sea. This study shows that the analytical model developed for a
Propagation of Nuclear Data Uncertainties in Integral Measurements by Monte-Carlo Calculations
Energy Technology Data Exchange (ETDEWEB)
Noguere, G.; Bernard, D.; De Saint-Jean, C. [CEA Cadarache, 13 - Saint Paul lez Durance (France)
2006-07-01
Full text of the publication follows: The generation of multi-group cross sections together with relevant uncertainties is fundamental to assessing the quality of integral data. The key pieces of information needed to propagate the microscopic experimental uncertainties to macroscopic reactor calculations are (1) the experimental covariance matrices, (2) the correlations between the parameters of the model and (3) the covariance matrices for the multi-group cross sections. The propagation of microscopic errors by the Monte-Carlo technique was applied to determine the accuracy of the integral trends provided by the OSMOSE experiment carried out in the MINERVE reactor of the CEA Cadarache. The technique consists in coupling resonance shape analysis and deterministic codes. The integral trend and its accuracy obtained for the {sup 237}Np(n,{gamma}) reaction will be presented. (author)
International Nuclear Information System (INIS)
Larson, N.M.
1984-02-01
This report describes a computer code (ALEX) developed to assist in the AnaLysis of EXperimental data at the Oak Ridge Electron Linear Accelerator (ORELA). Reduction of data from raw numbers (counts per channel) to physically meaningful quantities (such as cross sections) is in itself a complicated procedure; propagation of experimental uncertainties through that reduction procedure has in the past been viewed as even more difficult - if not impossible. The purpose of the code ALEX is to correctly propagate all experimental uncertainties through the entire reduction procedure, yielding the complete covariance matrix for the reduced data, while requiring little additional input from the experimentalist beyond that which is required for the data reduction itself. This report describes ALEX in detail, with special attention given to the case of transmission measurements (the code itself is applicable, with few changes, to any type of data). Application to the natural iron measurements of D.C. Larson et al. is described in some detail.
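First-order propagation of experimental covariances through a reduction procedure follows the sandwich rule C_out = J C_in J^T, with J the Jacobian of the reduction. The sketch below applies this to a toy transmission-to-cross-section reduction, sigma = -ln(T)/n, with assumed values throughout; it is not the ALEX algorithm itself, which handles many more uncertainty sources:

```python
import numpy as np

n_areal = 0.0859   # atoms/barn, assumed areal density of the sample

# Measured transmissions in three energy channels, with a covariance
# matrix whose off-diagonals mimic a shared normalization uncertainty
T = np.array([0.80, 0.65, 0.72])
C_T = np.array([[1.0e-4, 2.0e-5, 1.0e-5],
                [2.0e-5, 1.5e-4, 2.0e-5],
                [1.0e-5, 2.0e-5, 1.2e-4]])

sigma = -np.log(T) / n_areal            # reduced data: total cross section
J = np.diag(-1.0 / (n_areal * T))       # Jacobian d(sigma)/d(T)
C_sigma = J @ C_T @ J.T                 # full covariance of reduced data
```

The point the report makes is visible here: the reduced-data covariance `C_sigma` is not diagonal, so quoting only per-channel error bars would discard the correlations introduced by the reduction.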
Nuclear-data uncertainty propagations in burnup calculation for the PWR assembly
International Nuclear Information System (INIS)
Wan, Chenghui; Cao, Liangzhi; Wu, Hongchun; Shen, Wei
2017-01-01
Highlights: • The DRAGON 5.0 and NECP-CACTI have been implemented in UNICORN. • The effects of different neutronics methods on S&U results were quantified. • Uncertainty analysis has been applied to the burnup calculation of a PWR assembly. • The uncertainties of the eigenvalue and few-group constants have been quantified. - Abstract: In this paper, our home-developed lattice code NECP-CACTI has been implemented into our UNICORN code to perform sensitivity and uncertainty analysis for lattice calculations. The verified multigroup cross-section perturbation model and the methods of sensitivity and uncertainty analysis are established and applied to the different lattice codes in UNICORN. As DRAGON 5.0 and NECP-CACTI are now both available for lattice calculations in UNICORN, the effects of different neutronics methods (including methods for the neutron-transport and resonance self-shielding calculations) on the results of sensitivity and uncertainty analysis were studied in this paper. Based on NECP-CACTI, uncertainty analysis using the statistical sampling method has been performed for the burnup calculation of the fresh-fueled TMI-1 assembly, propagating the nuclear-data uncertainties to k_∞ and the two-group constants of the lattice calculation with depletion. As the results show, different methods for the neutron-transport calculation introduce no differences in the results of the sensitivity and uncertainty analysis, while different methods for the resonance self-shielding calculation do impact the results. With depletion of the TMI-1 assembly, the relative uncertainty of k_∞ varies between 0.45% and 0.60%; for the two-group constants, the largest variation is between 0.35% and 2.56%, for νΣ_f,2. Moreover, the most significant contributors to the uncertainty of k_∞ and the two-group constants, which vary with depletion, are determined.
Analytical Model for Fictitious Crack Propagation in Concrete Beams
DEFF Research Database (Denmark)
Ulfkjær, J. P.; Krenk, Steen; Brincker, Rune
1995-01-01
An analytical model for load-displacement curves of concrete beams is presented. The load-displacement curve is obtained by combining two simple models. The fracture is modeled by a fictitious crack in an elastic layer around the midsection of the beam. Outside the elastic layer the deformations are modeled by beam theory. The state of stress in the elastic layer is assumed to depend bilinearly on local elongation, corresponding to a linear softening relation for the fictitious crack. Results from the analytical model are compared with results from a more detailed model based on numerical methods for different beam sizes. The analytical model is shown to be in agreement with the numerical results if the thickness of the elastic layer is taken as half the beam depth. It is shown that the point on the load-displacement curve where the fictitious crack starts to develop and the point where the real crack
Analytical Model for Fictitious Crack Propagation in Concrete Beams
DEFF Research Database (Denmark)
Ulfkjær, J. P.; Krenk, S.; Brincker, Rune
An analytical model for load-displacement curves of unreinforced notched and un-notched concrete beams is presented. The load-displacement curve is obtained by combining two simple models. The fracture is modelled by a fictitious crack in an elastic layer around the mid-section of the beam. Outside the elastic layer the deformations are modelled by Timoshenko beam theory. The state of stress in the elastic layer is assumed to depend bilinearly on local elongation, corresponding to a linear softening relation for the fictitious crack. For different beam sizes, results from the analytical model are compared with results from a more accurate model based on numerical methods. The analytical model is shown to be in good agreement with the numerical results if the thickness of the elastic layer is taken as half the beam depth. Several general results are obtained. It is shown that the point on the load
Non-parametric order statistics method applied to uncertainty propagation in fuel rod calculations
International Nuclear Information System (INIS)
Arimescu, V.E.; Heins, L.
2001-01-01
Advances in modeling fuel rod behavior and the accumulation of adequate experimental data have made possible the introduction of quantitative methods to estimate the uncertainty of predictions made with best-estimate fuel rod codes. The uncertainty range of the input variables is characterized by a truncated distribution, typically a normal, lognormal, or uniform distribution. While the distribution for fabrication parameters is defined to cover the design or fabrication tolerances, the distribution of modeling parameters is inferred from the experimental database consisting of separate-effects tests and global tests. The final step of the methodology uses Monte Carlo random sampling of all relevant input variables and performs best-estimate code calculations to propagate these uncertainties, in order to evaluate the uncertainty range of outputs of interest for design analysis, such as internal rod pressure and fuel centerline temperature. The statistical method underlying this Monte Carlo sampling is non-parametric order statistics, which is perfectly suited to evaluating quantiles of populations with unknown distribution. The application of this method is straightforward in the case of one single fuel rod, when a 95/95 statement is applicable: 'with a probability of 95% and a confidence level of 95%, the values of the output of interest are below a certain value'. Therefore, the 0.95-quantile is estimated for the distribution of all possible values of one fuel rod with a statistical confidence of 95%. On the other hand, a more elaborate procedure is required if all the fuel rods in the core are being analyzed. In this case, the aim is to evaluate the following global statement: with a 95% confidence level, the expected number of fuel rods not exceeding a certain value is all the fuel rods in the core except only a few. In both cases, the thresholds determined by the analysis should be below the safety-acceptable design limit. An indirect
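The one-sided 95/95 statement rests on Wilks' formula for non-parametric order statistics: with n samples and the sample maximum taken as the bound, the confidence that the 0.95-quantile is covered is 1 - 0.95^n. A minimal sketch of the resulting sample-size rule:

```python
def wilks_sample_size(prob=0.95, conf=0.95):
    # Smallest n such that the largest of n samples bounds the
    # `prob`-quantile with confidence `conf`: 1 - prob**n >= conf
    n = 1
    while 1.0 - prob ** n < conf:
        n += 1
    return n

n_9595 = wilks_sample_size()          # classic first-order 95/95 result: 59
n_9599 = wilks_sample_size(0.95, 0.99)  # tighter confidence needs more runs
```

This is why 59 code runs is the canonical sample size for a first-order one-sided 95/95 tolerance limit; the more elaborate whole-core statement mentioned above requires a generalization of the same order-statistics argument.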
Propagation of registration uncertainty during multi-fraction cervical cancer brachytherapy
Amir-Khalili, A.; Hamarneh, G.; Zakariaee, R.; Spadinger, I.; Abugharbieh, R.
2017-10-01
Multi-fraction cervical cancer brachytherapy is a form of image-guided radiotherapy that heavily relies on 3D imaging during treatment planning, delivery, and quality control. In this context, deformable image registration can increase the accuracy of dosimetric evaluations, provided that one can account for the uncertainties associated with the registration process. To enable such capability, we propose a mathematical framework that first estimates the registration uncertainty and subsequently propagates the effects of the computed uncertainties from the registration stage through to the visualizations, organ segmentations, and dosimetric evaluations. To ensure the practicality of our proposed framework in real world image-guided radiotherapy contexts, we implemented our technique via a computationally efficient and generalizable algorithm that is compatible with existing deformable image registration software. In our clinical context of fractionated cervical cancer brachytherapy, we perform a retrospective analysis on 37 patients and present evidence that our proposed methodology for computing and propagating registration uncertainties may be beneficial during therapy planning and quality control. Specifically, we quantify and visualize the influence of registration uncertainty on dosimetric analysis during the computation of the total accumulated radiation dose on the bladder wall. We further show how registration uncertainty may be leveraged into enhanced visualizations that depict the quality of the registration and highlight potential deviations from the treatment plan prior to the delivery of radiation treatment. Finally, we show that we can improve the transfer of delineated volumetric organ segmentation labels from one fraction to the next by encoding the computed registration uncertainties into the segmentation labels.
Riva, Fabio; Milanese, Lucio; Ricci, Paolo
2017-10-01
To reduce the computational cost of uncertainty propagation analysis, which is used to study the impact of input parameter variations on the results of a simulation, a general and simple-to-apply methodology based on decomposing the solution to the model equations in terms of Chebyshev polynomials is discussed. This methodology, based on the work by Scheffel [Am. J. Comput. Math. 2, 173-193 (2012)], approximates the model equation solution with a semi-analytic expression that depends explicitly on time, spatial coordinates, and input parameters. By employing a weighted residual method, a set of nonlinear algebraic equations for the coefficients appearing in the Chebyshev decomposition is then obtained. The methodology is applied to a two-dimensional Braginskii model used to simulate plasma turbulence in basic plasma physics experiments and in the scrape-off layer of tokamaks, in order to study the impact on the simulation results of the input parameter that describes the parallel losses. The uncertainties that characterize the time-averaged density gradient lengths, time-averaged densities, and fluctuation density levels are evaluated. A reasonable estimate of the uncertainty of these distributions can be obtained with a single reduced-cost simulation.
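The Chebyshev-decomposition idea can be sketched in miniature. The snippet below is not the Braginskii application from the abstract; it builds a Chebyshev surrogate for a hypothetical one-parameter model (a stand-in for an expensive simulation run at a handful of nodes) and then propagates a uniform input uncertainty through the cheap surrogate:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Toy model standing in for an expensive simulation: the scalar output
# (e.g. a time-averaged density) as a function of one uncertain input
# parameter p (e.g. a parallel-loss coefficient). Entirely illustrative.
def model(p):
    return np.exp(-0.5 * p) * np.sin(3.0 * p) + 2.0

# Build a Chebyshev surrogate on the uncertain range p in [0.5, 1.5]
# from a handful of "simulation" runs at Chebyshev nodes.
lo, hi = 0.5, 1.5
nodes = C.chebpts1(12)                        # nodes on [-1, 1]
p_nodes = 0.5 * (hi - lo) * (nodes + 1) + lo  # mapped to [lo, hi]
coeffs = C.chebfit(nodes, model(p_nodes), deg=11)

def surrogate(p):
    x = 2.0 * (p - lo) / (hi - lo) - 1.0      # map back to [-1, 1]
    return C.chebval(x, coeffs)

# Propagate a uniform uncertainty on p through the cheap surrogate.
rng = np.random.default_rng(0)
p_samples = rng.uniform(lo, hi, 100_000)
y = surrogate(p_samples)
print(f"mean = {y.mean():.4f}, std = {y.std():.4f}")
```

The expensive model is evaluated only 12 times; the 100 000 propagation samples hit the polynomial surrogate, which is the source of the cost reduction claimed in the abstract.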
Uncertainty propagation in a 3-D thermal code for performance assessment of a nuclear waste disposal
International Nuclear Information System (INIS)
Dutfoy, A.; Ritz, J.B.
2001-01-01
Given the very large time scale involved, the performance assessment of a nuclear waste repository requires numerical modelling. Because the exact values of the input parameters are uncertain, we have to analyse the impact of these uncertainties on the outcome of the physical models. The EDF Research and Development Division has developed a reliability method to propagate these uncertainties, or variability, through models; it requires far fewer physical simulations than the usual simulation methods. We apply the reliability method MEFISTO to a base case modelling the heat transfers in a virtual disposal at the future site of the French underground research laboratory in the East of France. This study is conducted in collaboration with ANDRA, the French nuclear waste management agency. With this exercise, we want to evaluate the thermal behaviour of a concept with respect to the variation of physical parameters and their uncertainty. (author)
International Nuclear Information System (INIS)
Benoit, J.-C.
2012-01-01
This PhD study lies in the field of nuclear energy, the back end of the nuclear fuel cycle, and uncertainty calculations. The CEA must design the prototype ASTRID, a sodium-cooled fast reactor (SFR) and one of the concepts selected by the Generation IV forum, for which the value and the uncertainty of the decay heat have a significant impact. In this study, a code for propagating nuclear data uncertainties to the decay heat of an SFR is developed. The work proceeded in three stages. The first step limited the number of parameters involved in the calculation of the decay heat. To this end, a decay heat experiment on the PHENIX reactor (PUIREX 2008) was studied to validate the DARWIN package experimentally for SFRs and to quantify the source terms of the decay heat. The second step was to develop a code for the propagation of uncertainties: CyRUS (Cycle Reactor Uncertainty and Sensitivity). A deterministic propagation method was chosen because the calculations are fast and reliable. The assumptions of linearity and normality have been validated theoretically. The code has also been successfully compared with a stochastic code on the example of the thermal burst fission curve of 235 U. The last part was an application of the code to several experiments: the decay heat of a reactor, the isotopic composition of a fuel pin, and the burst fission curve of 235 U. The code demonstrated the possibility of feedback on the nuclear data that dominate the uncertainty of this problem. Two main results were highlighted. Firstly, the simplifying assumptions of deterministic codes are compatible with a precise calculation of the uncertainty of the decay heat. Secondly, the developed method is intrusive and allows feedback on nuclear data from experiments on the back end of the nuclear fuel cycle. In particular, this study showed how important it is to measure independent fission yields precisely, along with their covariance matrices, in order to improve the accuracy of the calculation of
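Deterministic propagation under linearity and normality assumptions, as used in codes of this kind, reduces to the first-order "sandwich" rule var(y) ≈ S V S^T, with S the sensitivity vector and V the input covariance matrix. The sketch below illustrates that rule on a hypothetical three-parameter function; the model, values, and covariances are invented for illustration and are not taken from the study:

```python
import numpy as np

def decay_heat(x):
    # Hypothetical response: a "decay heat" depending nonlinearly on
    # three nuclear-data parameters. Purely illustrative.
    return 10.0 * x[0] + 4.0 * x[1] ** 2 + 2.0 * x[0] * x[2]

x0 = np.array([1.0, 0.5, 2.0])           # best-estimate parameters
V = np.array([[0.01, 0.002, 0.0],        # assumed input covariance matrix
              [0.002, 0.04, 0.0],
              [0.0,   0.0,  0.09]])

# Sensitivities by central finite differences.
h = 1e-6
S = np.array([(decay_heat(x0 + h * e) - decay_heat(x0 - h * e)) / (2 * h)
              for e in np.eye(3)])

var_y = S @ V @ S                        # sandwich rule: S V S^T
print(f"y = {decay_heat(x0):.3f} +/- {np.sqrt(var_y):.3f}")
```

For a mildly nonlinear model such as this one, a Monte Carlo check agrees with the sandwich rule to within about a percent, which mirrors the abstract's point that the linearity assumption can be compatible with a precise uncertainty estimate.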
International Nuclear Information System (INIS)
Morales Prieto, M.; Ortega Saiz, P.
2011-01-01
Analysis of the analytical uncertainties of the process-simulation methodology for obtaining the final isotopic inventory of spent fuel; the ARIANE experiment explores the burnup-simulation part.
Rodríguez-Rincón, J. P.; Pedrozo-Acuña, A.; Breña-Naranjo, J. A.
2015-07-01
This investigation aims to study the propagation of meteorological uncertainty within a cascade modelling approach to flood prediction. The methodology comprised a numerical weather prediction (NWP) model, a distributed rainfall-runoff model and a 2-D hydrodynamic model. The uncertainty evaluation was carried out at the meteorological and hydrological levels of the model chain, which enabled the investigation of how errors that originated in the rainfall prediction interact at a catchment level and propagate to an estimated inundation area and depth. For this, a hindcast scenario is utilised, removing non-behavioural ensemble members at each stage based on the fit with observed data. At the hydrodynamic level, an uncertainty assessment was not incorporated; instead, the model was set up following guidelines for the best possible representation of the case study. The selected extreme event corresponds to a flood that took place in the southeast of Mexico during November 2009, for which field data (e.g. rain gauges, discharge) and satellite imagery were available. Uncertainty in the meteorological model was estimated by means of a multi-physics ensemble technique, which is designed to represent errors from our limited knowledge of the processes generating precipitation. In the hydrological model, a multi-response validation was implemented through the definition of six sets of plausible parameters from past flood events. Precipitation fields from the meteorological model were employed as input to a distributed hydrological model, and the resulting flood hydrographs were used as forcing conditions in the 2-D hydrodynamic model. The evolution of skill within the model cascade shows a complex aggregation of errors between models, suggesting that in valley-filling events hydro-meteorological uncertainty has a larger effect on inundation depths than on estimated flood inundation extents.
International Nuclear Information System (INIS)
Heo, Jaeseok; Kim, Kyung Doo
2015-01-01
Highlights: • We developed an interface between an engineering simulation code and statistical analysis software. • Multiple packages of the sensitivity analysis, uncertainty quantification, and parameter estimation algorithms are implemented in the framework. • Parallel computing algorithms are also implemented in the framework to solve multiple computational problems simultaneously. - Abstract: This paper introduces a statistical data analysis toolkit, PAPIRUS, designed to perform model calibration, uncertainty propagation, Chi-square linearity testing, and sensitivity analysis for both linear and nonlinear problems. PAPIRUS was developed by implementing multiple packages of methodologies and building an interface between an engineering simulation code and the statistical analysis algorithms. A parallel computing framework is implemented in PAPIRUS with multiple computing resources and proper communications between the server and the clients of each processor. It was shown that even though a large amount of data is considered for the engineering calculation, the distributions of the model parameters and the calculation results can be quantified accurately with significant reductions in computational effort. A general description of PAPIRUS with a graphical user interface is presented in Section 2. Sections 2.1–2.5 present the methodologies of data assimilation, uncertainty propagation, Chi-square linearity testing, and sensitivity analysis implemented in the toolkit, with some results obtained by each module of the software. Parallel computing algorithms adopted in the framework to solve multiple computational problems simultaneously are also summarized in the paper.
Fuzzy probability based fault tree analysis to propagate and quantify epistemic uncertainty
International Nuclear Information System (INIS)
Purba, Julwan Hendry; Sony Tjahyani, D.T.; Ekariansyah, Andi Sofrany; Tjahjono, Hendro
2015-01-01
Highlights: • Fuzzy probability based fault tree analysis evaluates epistemic uncertainty in fuzzy fault tree analysis. • Fuzzy probabilities represent the likelihood of occurrence of all events in a fault tree. • A fuzzy multiplication rule quantifies the epistemic uncertainty of minimal cut sets. • A fuzzy complement rule estimates the epistemic uncertainty of the top event. • The proposed FPFTA has successfully evaluated the U.S. Combustion Engineering RPS. - Abstract: A number of fuzzy fault tree analysis approaches, which integrate fuzzy concepts into the quantitative phase of conventional fault tree analysis, have been proposed to study the reliability of engineering systems. Those new approaches apply expert judgments to overcome the limitation of conventional fault tree analysis when basic events do not have probability distributions. Since expert judgments might come with epistemic uncertainty, it is important to quantify the overall uncertainties of the fuzzy fault tree analysis. Monte Carlo simulation is commonly used to quantify the overall uncertainties of conventional fault tree analysis. However, since Monte Carlo simulation is based on probability distributions, this technique is not appropriate for fuzzy fault tree analysis, which is based on fuzzy probabilities. The objective of this study is to develop a fuzzy probability based fault tree analysis to overcome this limitation. To demonstrate the applicability of the proposed approach, a case study is performed and its results are then compared to the results analyzed by a conventional fault tree analysis. The results confirm that the proposed fuzzy probability based fault tree analysis is feasible for propagating and quantifying epistemic uncertainties in fault tree analysis.
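The fuzzy complement rule named in the highlights can be sketched with triangular fuzzy probabilities and alpha-cut interval arithmetic. The basic-event values below are hypothetical, and for brevity each basic event is treated as its own minimal cut set, so the top event is a pure OR combination:

```python
import numpy as np

def alpha_cut(tri, a):
    """Interval [lo, hi] of a triangular fuzzy number (l, m, u) at level a."""
    l, m, u = tri
    return np.array([l + a * (m - l), u - a * (u - m)])

# Hypothetical fuzzy basic-event probabilities (triangular: low, mode, high).
basic = [(1e-3, 2e-3, 4e-3), (5e-4, 1e-3, 2e-3), (2e-3, 3e-3, 5e-3)]

def top_event_cut(a):
    # AND within a minimal cut set would multiply intervals (the fuzzy
    # multiplication rule); here each basic event is its own cut set,
    # so the top event uses the fuzzy complement (OR) rule:
    #   P_top = 1 - prod_i (1 - P_i)
    cuts = [alpha_cut(b, a) for b in basic]
    lo = 1.0 - np.prod([1.0 - c[0] for c in cuts])
    hi = 1.0 - np.prod([1.0 - c[1] for c in cuts])
    return lo, hi

for a in (0.0, 0.5, 1.0):
    lo, hi = top_event_cut(a)
    print(f"alpha={a:.1f}: P_top in [{lo:.2e}, {hi:.2e}]")
```

At alpha = 1 the interval collapses to the point estimate built from the modes; lower alpha levels widen the interval, which is how epistemic uncertainty from expert judgment surfaces in the top-event result.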
DEFF Research Database (Denmark)
Sin, Gürkan; Gernaey, Krist; Eliasson Lantz, Anna
2009-01-01
The uncertainty and sensitivity analysis are evaluated for their usefulness as part of the model-building within Process Analytical Technology applications. A mechanistic model describing a batch cultivation of Streptomyces coelicolor for antibiotic production was used as case study. The input...... compared to the large uncertainty observed in the antibiotic and off-gas CO2 predictions. The output uncertainty was observed to be lower during the exponential growth phase, while higher in the stationary and death phases - meaning the model describes some periods better than others. To understand which...... promising for helping to build reliable mechanistic models and to interpret the model outputs properly. These tools make part of good modeling practice, which can contribute to successful PAT applications for increased process understanding, operation and control purposes. © 2009 American Institute...
A general method dealing with correlations in uncertainty propagation in fault trees
International Nuclear Information System (INIS)
Qin Zhang
1989-01-01
This paper deals with the correlations among the failure probabilities (frequencies) not only of identical basic events but also of other basic events in a fault tree. It presents a general and simple method to include these correlations in uncertainty propagation. Two examples illustrate the method and show that neglecting these correlations results in a large underestimation of the top event failure probability (frequency). One is the failure of the primary pump in a chemical reactor cooling system; the other is an accident to a road transport truck carrying toxic waste. (author)
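The underestimation effect can be reproduced with a toy Monte Carlo (this illustrates the phenomenon, not the paper's analytical method): two basic events in one minimal cut set (an AND gate) whose uncertain probabilities share a correlation rho between their log-probabilities. With fully correlated lognormal uncertainties, the expected top-event probability is markedly larger than under the independence assumption:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two basic events whose failure probabilities are themselves uncertain
# (lognormal), with correlation rho between the log-probabilities.
# All numbers are illustrative.
mu, sigma = np.log(1e-3), 0.8

def top_event_mean(rho, n=200_000):
    cov = sigma**2 * np.array([[1.0, rho], [rho, 1.0]])
    z = rng.multivariate_normal([mu, mu], cov, size=n)
    p = np.exp(z)                        # sampled basic-event probabilities
    return (p[:, 0] * p[:, 1]).mean()    # AND gate: P_top = pA * pB

print(f"independent: {top_event_mean(0.0):.3e}")
print(f"correlated : {top_event_mean(1.0):.3e}")
```

For lognormal uncertainties the effect is exact in closed form: E[pA pB] = exp(2 mu + sigma^2 (1 + rho)), so assuming independence (rho = 0) when the events are identical (rho = 1) underestimates the mean top-event probability by a factor exp(sigma^2), about 1.9 here.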
Study of Monte Carlo approach to experimental uncertainty propagation with MSTW 2008 PDFs
Watt, G.
2012-01-01
We investigate the Monte Carlo approach to the propagation of experimental uncertainties within the context of the established 'MSTW 2008' global analysis of parton distribution functions (PDFs) of the proton at next-to-leading order in the strong coupling. We show that the Monte Carlo approach using replicas of the original data gives PDF uncertainties in good agreement with the usual Hessian approach using the standard Delta(chi^2) = 1 criterion. We then explore potential parameterisation bias by increasing the number of free parameters, concluding that any parameterisation bias is likely to be small, with the exception of the valence-quark distributions at low momentum fractions x. We motivate the need for a larger tolerance, Delta(chi^2) > 1, by making fits to restricted data sets and idealised consistent or inconsistent pseudodata. Instead of using data replicas, we alternatively produce PDF sets randomly distributed according to the covariance matrix of fit parameters including appropriate tolerance values,...
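The replica-versus-Hessian agreement can be demonstrated on a toy straight-line fit (an illustration of the general technique, not the MSTW analysis): refitting replicas of the data, each fluctuated by its errors, reproduces the Delta(chi^2) = 1 Hessian parameter errors for a linear model:

```python
import numpy as np

rng = np.random.default_rng(42)

# Pseudo-data: straight-line "theory" y = a + b*x with known point errors.
x = np.linspace(0.0, 1.0, 20)
sig = 0.1
y = 1.0 + 2.0 * x + rng.normal(0.0, sig, x.size)

# Hessian (Delta chi^2 = 1) parameter errors from the normal equations.
A = np.vstack([np.ones_like(x), x]).T
cov = np.linalg.inv(A.T @ A / sig**2)        # parameter covariance matrix
hessian_err = np.sqrt(np.diag(cov))

# Monte Carlo: refit replicas of the data, fluctuated by their errors.
reps = np.array([np.linalg.lstsq(A, y + rng.normal(0.0, sig, y.size),
                                 rcond=None)[0] for _ in range(2000)])
mc_err = reps.std(axis=0)

print("Hessian errors:", np.round(hessian_err, 4))
print("replica errors:", np.round(mc_err, 4))
```

For a model linear in its parameters the two approaches coincide in expectation; the interesting cases discussed in the abstract (parameterisation bias, tolerance criteria) arise precisely where that idealisation breaks down.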
Directory of Open Access Journals (Sweden)
Haileyesus B. Endeshaw
2017-11-01
Failure prediction of wind turbine gearboxes (WTGs) is especially important since the maintenance of these components is not only costly but also causes the longest downtime. One of the most common causes of premature WTG failure is the fatigue fracture of gear teeth due to the fluctuating and cyclic torque, resulting from stochastic wind loading, transmitted to the gearbox. Moreover, the fluctuation of the torque, as well as the inherent uncertainties of the material properties, results in uncertain life predictions for WTGs. It is therefore essential to quantify these uncertainties in the life estimation of gears. In this paper, a framework, constituted by a dynamic model of a one-stage gearbox, a finite element method, and a degradation model for the estimation of fatigue crack propagation in gears, is presented. Torque time-history data of a wind turbine rotor were scaled and used to simulate the stochastic characteristic of the loading, and the uncertainties in the material constants of the degradation model were also quantified. It was demonstrated that uncertainty quantification of the load and material constants provides a reasonable estimation of the distribution of the crack length in the gear tooth at any time step.
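A hedged sketch of this kind of framework: Paris-law crack growth (a standard degradation model, assumed here rather than taken from the paper) with sampled material constants and a fluctuating stress range standing in for stochastic wind loading. All numerical values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)

# Paris law: da/dN = C * (dK)^m, with stress intensity range
# dK = Y * dS * sqrt(pi * a) (dS in MPa, a in m, dK in MPa*sqrt(m)).
# C, m and dS are sampled to mimic uncertain material data and
# stochastic wind-driven loading. All values are illustrative.
def crack_after(n_cycles, a0=1e-3, Y=1.12, steps=200):
    C = np.exp(rng.normal(np.log(5e-12), 0.1))  # uncertain Paris constant
    m = rng.normal(3.0, 0.05)                   # uncertain Paris exponent
    a = a0
    dn = n_cycles / steps
    for _ in range(steps):
        dS = rng.normal(80.0, 8.0)              # fluctuating stress range (MPa)
        dK = Y * dS * np.sqrt(np.pi * a)
        a += C * dK**m * dn                     # explicit cycle integration
    return a

samples = np.array([crack_after(2e5) for _ in range(500)])
print(f"crack length after 2e5 cycles: mean {samples.mean()*1e3:.3f} mm, "
      f"std {samples.std()*1e3:.3f} mm")
```

Repeating the integration over sampled constants and load histories yields a distribution of crack length at any cycle count, which is the quantity the abstract reports for the gear tooth.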
Accuracy of semi-analytical finite elements for modelling wave propagation in rails
CSIR Research Space (South Africa)
Andhavarapu, EV
2010-01-01
The semi-analytical finite element method (SAFE) is a popular method for analysing guided wave propagation in elastic waveguides of complex cross-section such as rails. The convergence of these models has previously been studied for linear...
Wijnant, Ysbrand H.; Spiering, R.M.E.J.; Blijderveen, M.; de Boer, Andries
2006-01-01
Previous research has shown that viscothermal wave propagation in narrow gaps can efficiently be described by means of the low reduced frequency model. For simple geometries and boundary conditions, analytical solutions are available. For example, Beltman [4] gives the acoustic pressure in the gap
Propagation of neutron-reaction uncertainties through multi-physics models of novel LWR's
Directory of Open Access Journals (Sweden)
Hernandez-Solis Augusto
2017-01-01
The novel design of the renewable boiling water reactor (RBWR) allows a breeding ratio greater than unity and thus aims at providing a self-sustained fuel cycle. The neutron reactions that compose the different microscopic cross-sections and angular distributions are uncertain, so when they are employed in the determination of the spatial distribution of the neutron flux in a nuclear reactor, a methodology should be employed to account for these associated uncertainties. In this work, the Total Monte Carlo (TMC) method is used to propagate the covariances of the different neutron reactions (as well as of the angular distributions) that are part of the TENDL-2014 nuclear data (ND) library. The main objective is to propagate them through coupled neutronic and thermal-hydraulic models in order to assess the uncertainty of important safety parameters related to multi-physics, such as the peak cladding temperature along the axial direction of an RBWR fuel assembly. The objective of this study is to quantify the impact that the ND covariances of important nuclides such as U-235, U-238, Pu-239 and the thermal scattering of hydrogen in H2O have on the deterministic safety analysis of novel nuclear reactor designs.
Quantification of Dynamic Model Validation Metrics Using Uncertainty Propagation from Requirements
Brown, Andrew M.; Peck, Jeffrey A.; Stewart, Eric C.
2018-01-01
The Space Launch System, NASA's new large launch vehicle for long-range space exploration, is presently in the final design and construction phases, with the first launch scheduled for 2019. A dynamic model of the system has been created and is critical for calculation of interface loads and of natural frequencies and mode shapes for guidance, navigation, and control (GNC). Because of program and schedule constraints, a single modal test of the SLS will be performed while it is bolted down to the Mobile Launch Pad just before the first launch. A Monte Carlo and optimization scheme will be performed to create thousands of possible models based on given dispersions in model properties and to determine which model best fits the natural frequencies and mode shapes from the modal test. However, the question still remains as to whether this model is acceptable for the loads and GNC requirements. An uncertainty propagation and quantification (UP and UQ) technique to develop a quantitative set of validation metrics based on the flight requirements has therefore been developed and is discussed in this paper. There has been considerable research on UQ and UP and validation in the literature, but very little on propagating the uncertainties from requirements, so most validation metrics are "rules-of-thumb"; this research seeks to develop more reason-based metrics. One of the main assumptions used to achieve this task is that the uncertainty in the modeling of the fixed boundary condition is accurate, so that same uncertainty can be used in propagating from the fixed-test configuration to the free-free actual configuration. The second main technique applied here is the use of the limit-state formulation to quantify the final probabilistic parameters and to compare them with the requirements. These techniques are explored with a simple lumped spring-mass system and a simplified SLS model. When completed, it is anticipated that this requirements-based validation
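The limit-state formulation mentioned above can be illustrated with a minimal Monte Carlo: failure is the event g = capability − response < 0, and the resulting failure probability is what gets compared against the requirement. All numbers are hypothetical, not SLS values:

```python
import numpy as np

rng = np.random.default_rng(11)

# Limit-state formulation: failure when g = capability - response < 0.
# A flight requirement (allowable interface load) is compared with the
# propagated model-uncertainty distribution of the predicted load.
# Distributions and values are illustrative placeholders.
n = 1_000_000
capability = rng.normal(120.0, 5.0, n)   # allowable load (kN)
response = rng.normal(100.0, 6.0, n)     # predicted load + model uncertainty
g = capability - response                # limit-state function samples
p_fail = np.mean(g < 0.0)
print(f"P(failure) = {p_fail:.2e}")
```

With both terms Gaussian the answer is also available in closed form, P(g < 0) = Phi(−20/sqrt(5² + 6²)) ≈ 5.2e−3, which the sampled estimate reproduces; the Monte Carlo form generalises directly to the dispersed dynamic models described in the abstract.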
International Nuclear Information System (INIS)
Iooss, B.
2009-01-01
The present document constitutes my Habilitation thesis report. It reviews my scientific activity over the last twelve years, from my PhD thesis to the work completed as a research engineer at CEA Cadarache. The two main chapters of this document correspond to two different research fields, both referring to the treatment of uncertainty in engineering problems. The first chapter establishes a synthesis of my work on high-frequency wave propagation in random media. It more specifically relates to the study of the statistical fluctuations of acoustic wave travel-times in random and/or turbulent media. The new results mainly concern the introduction of the statistical anisotropy of the velocity field into the analytical expressions of the travel-time statistical moments as functions of those of the velocity field. This work was primarily driven by needs in geophysics (oil exploration and seismology). The second chapter concerns the probabilistic techniques used to study the effect of input variable uncertainties in numerical models. My main applications in this chapter relate to the nuclear engineering domain, which offers a large variety of uncertainty problems to be treated. First of all, a complete synthesis is carried out on the statistical methods of sensitivity analysis and global exploration of numerical models. The construction and the use of a meta-model (an inexpensive mathematical function replacing an expensive computer code) are then illustrated by my work on the Gaussian process model (kriging). Two additional topics are finally approached: the high-quantile estimation of a computer code output and the analysis of stochastic computer codes. We conclude this report with some perspectives on numerical simulation and the use of predictive models in industry. This context is extremely positive for future research and application developments. (author)
Uncertainty Propagation Analysis for the Monte Carlo Time-Dependent Simulations
International Nuclear Information System (INIS)
Shaukata, Nadeem; Shim, Hyung Jin
2015-01-01
In this paper, a conventional method to control the neutron population for super-critical systems is implemented. Instead of considering cycles, the simulation is divided into time intervals. At the end of each time interval, neutron population control is applied to the banked neutrons. Randomly selected neutrons are discarded until the size of the neutron population matches the initial number of neutron histories at the beginning of the time simulation. A time-dependent simulation mode has also been implemented in the development version of the SERPENT 2 Monte Carlo code; in this mode, a sequential population control mechanism has been proposed for modeling prompt super-critical systems. A Monte Carlo method has been properly used in the TART code for dynamic criticality calculations: for super-critical systems, the neutron population is allowed to grow over a period of time and is then uniformly combed to return it to the population size at the beginning of the time boundary. In this study, a conventional time-dependent Monte Carlo (TDMC) algorithm is implemented. There is an exponential growth of the neutron population in the estimation of the neutron density tally for super-critical systems, and the number of neutrons being tracked exceeds the memory of the computer. In order to control this exponential growth, at the end of each time boundary a conventional time cut-off population control strategy is included in TDMC. A scale factor is introduced to tally the desired neutron density at the end of each time boundary. The main purpose of this paper is the quantification of uncertainty propagation in the neutron densities at the end of each time boundary for super-critical systems. This uncertainty is caused by the introduction of the scale factor. The effectiveness of TDMC is examined for a one-group infinite homogeneous problem (the rod model) and a two-group infinite homogeneous problem. The desired neutron density is tallied by the introduction of
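The combing and scale-factor idea can be sketched for a zero-dimensional toy system: the neutron bank is combed back down to N0 at each time boundary, while an accumulated scale factor preserves the exponentially growing density estimate. The multiplication factor and counts below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)

# Population control for a super-critical toy system: at the end of
# each time boundary the bank is combed back to N0, and an accumulated
# scale factor keeps the tally faithful to the true growing density.
N0, k, boundaries = 10_000, 1.2, 10
weights = []            # accumulated scale factor per boundary
bank = N0
scale = 1.0
for _ in range(boundaries):
    # Each neutron produces Poisson(k) progeny during the interval.
    bank = rng.poisson(k, bank).sum()
    scale *= bank / N0  # record the growth before combing
    bank = N0           # comb: discard randomly down to N0 neutrons
                        # (identical neutrons here, so resetting suffices)
    weights.append(scale)

print(f"estimated density growth after {boundaries} steps: {weights[-1]:.2f}")
print(f"analytic k^steps: {k**boundaries:.2f}")
```

The tracked population never exceeds N0, yet the product of per-boundary scale factors recovers the exponential growth k^steps; the statistical fluctuation of that product is exactly the scale-factor-induced uncertainty the abstract sets out to quantify.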
International Nuclear Information System (INIS)
Fiorito, L.; Piedra, D.; Cabellos, O.; Diez, C.J.
2015-01-01
Highlights: • We performed burnup calculations of PWR and BWR benchmarks using ALEPH and SCALE. • We propagated nuclear data uncertainties and correlations using different procedures and codes. • Decay data uncertainties have a negligible impact on nuclide densities. • Uncorrelated fission yields play a major role in the uncertainties of fission products. • The fission yield impact is strongly reduced by the introduction of correlations. - Abstract: Two fuel assemblies, one belonging to the Takahama-3 PWR and the other to the Fukushima-Daini-2 BWR, were modelled and the fuel irradiation was simulated with the TRITON module of SCALE 6.2 and with the ALEPH-2 code. Our results were compared to the experimental measurements of four samples: SF95-4 and SF96-4 were taken from the Takahama-3 reactor, while samples SF98-6 and SF99-6 belonged to the Fukushima-Daini-2. Then, we propagated the uncertainties coming from the nuclear data to the isotopic inventory of sample SF95-4. We used the ALEPH-2 adjoint procedure to propagate the decay constant uncertainties; the impact was inappreciable. The cross-section covariance information was propagated with the SAMPLER module of the beta3 version of SCALE 6.2. This contribution mostly affected the uncertainties of the actinides. Finally, the uncertainties of the fission yields were propagated both through ALEPH-2 and TRITON with a Monte Carlo sampling approach, and they appeared to have the largest impact on the uncertainties of the fission products. However, the lack of fission yield correlations results in a serious overestimation of the response uncertainties.
International Nuclear Information System (INIS)
Heo, Jaeseok; Kim, Kyung Doo
2015-01-01
Statistical approaches to uncertainty quantification and sensitivity analysis are very important in estimating the safety margins for an engineering design application. This paper presents a system analysis and optimization toolkit developed by Korea Atomic Energy Research Institute (KAERI), which includes multiple packages of the sensitivity analysis and uncertainty quantification algorithms. In order to reduce the computing demand, multiple compute resources including multiprocessor computers and a network of workstations are simultaneously used. A Graphical User Interface (GUI) was also developed within the parallel computing framework for users to readily employ the toolkit for an engineering design and optimization problem. The goal of this work is to develop a GUI framework for engineering design and scientific analysis problems by implementing multiple packages of system analysis methods in the parallel computing toolkit. This was done by building an interface between an engineering simulation code and the system analysis software packages. The methods and strategies in the framework were designed to exploit parallel computing resources such as those found in a desktop multiprocessor workstation or a network of workstations. Available approaches in the framework include statistical and mathematical algorithms for use in science and engineering design problems. Currently the toolkit has 6 modules of the system analysis methodologies: deterministic and probabilistic approaches of data assimilation, uncertainty propagation, Chi-square linearity test, sensitivity analysis, and FFTBM
DEFF Research Database (Denmark)
Quinonero, Joaquin; Girard, Agathe; Larsen, Jan
2003-01-01
The object of Bayesian modelling is the predictive distribution, which, in a forecasting scenario, enables evaluation of forecasted values and their uncertainties. We focus on reliably estimating the predictive mean and variance of forecasted values using Bayesian kernel-based models such as the Gaussian process and the relevance vector machine. We derive novel analytic expressions for the predictive mean and variance for Gaussian kernel shapes under the assumption of a Gaussian input distribution in the static case, and of a recursive Gaussian predictive density in iterative forecasting...
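As a numerical check of the setting this abstract describes, the sketch below trains a toy Gaussian process and approximates the predictive mean and variance under a Gaussian input distribution by Monte Carlo over the uncertain input; the paper's contribution is the closed-form (analytic) version of exactly this computation. The training data, kernel length-scale, and input distribution are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf(a, b, ell=1.0):
    # Squared-exponential (Gaussian) kernel on 1-D inputs.
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

# Toy 1-D training data.
X = np.linspace(-3, 3, 25)
y = np.sin(X)
noise = 1e-4

K = rbf(X, X) + noise * np.eye(len(X))
alpha = np.linalg.solve(K, y)

def gp_predict(xs):
    """Standard GP predictive mean and variance at deterministic inputs xs."""
    Ks = rbf(xs, X)
    mu = Ks @ alpha
    v = np.linalg.solve(K, Ks.T)
    var = 1.0 - np.sum(Ks * v.T, axis=1)
    return mu, var

# Uncertain test input x* ~ N(0.5, 0.2^2): Monte Carlo stand-in for the
# closed-form expressions derived in the paper.
xs = rng.normal(0.5, 0.2, 5000)
mu_s, var_s = gp_predict(xs)
pred_mean = mu_s.mean()
# Law of total variance: E[var] + Var[mean].
pred_var = var_s.mean() + mu_s.var()
```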
Bilcke, Joke; Beutels, Philippe; Brisson, Marc; Jit, Mark
2011-01-01
Accounting for uncertainty is now a standard part of decision-analytic modeling and is recommended by many health technology agencies and published guidelines. However, the scope of such analyses is often limited, even though techniques have been developed for presenting the effects of methodological, structural, and parameter uncertainty on model results. To help bring these techniques into mainstream use, the authors present a step-by-step guide that offers an integrated approach to account for different kinds of uncertainty in the same model, along with a checklist for assessing the way in which uncertainty has been incorporated. The guide also addresses special situations such as when a source of uncertainty is difficult to parameterize, resources are limited for an ideal exploration of uncertainty, or evidence to inform the model is not available or not reliable. Methods for identifying the sources of uncertainty that most influence results are also described. Besides guiding analysts, the guide and checklist may be useful to decision makers who need to assess how well uncertainty has been accounted for in a decision-analytic model before using the results to make a decision.
Analyticity of effective coupling and propagators in massless models of quantum field theory
International Nuclear Information System (INIS)
Oehme, R.
1982-01-01
For massless models of quantum field theory, some general theorems are proved concerning the analytic continuation of the renormalization group functions as well as the effective coupling and the propagators. Starting points are analytic properties of the effective coupling and the propagators in the momentum variable k², which can be converted into analyticity of the β- and γ-functions in the coupling parameter λ. It is shown that the β-function can have branch point singularities related to stationary points of the effective coupling as a function of k². The type of these singularities of β(λ) can be determined explicitly. Examples of possible physical interest are extremal values of the effective coupling at space-like points in the momentum variable, as well as complex conjugate stationary points close to the real k²-axis. The latter may be related to the sudden transition between weak and strong coupling regimes of the system. Finally, for the effective coupling and for the propagators, the analytic continuation in both variables k² and λ is discussed. (orig.)
A Framework for Propagation of Uncertainties in the Kepler Data Analysis Pipeline
Clarke, Bruce D.; Allen, Christopher; Bryson, Stephen T.; Caldwell, Douglas A.; Chandrasekaran, Hema; Cote, Miles T.; Girouard, Forrest; Jenkins, Jon M.; Klaus, Todd C.; Li, Jie;
2010-01-01
The Kepler space telescope is designed to detect Earth-like planets around Sun-like stars using transit photometry by simultaneously observing 100,000 stellar targets nearly continuously over a three and a half year period. The 96-megapixel focal plane consists of 42 charge-coupled devices (CCDs), each containing two 1024 x 1100 pixel arrays. Cross-correlations between calibrated pixels are introduced by common calibrations performed on each CCD, requiring that downstream data products access the calibrated pixel covariance matrix in order to properly estimate uncertainties. The prohibitively large covariance matrices corresponding to the 75,000 calibrated pixels per CCD preclude calculating and storing the covariance in standard lock-step fashion. We present a novel framework used to implement standard propagation of uncertainties (POU) in the Kepler Science Operations Center (SOC) data processing pipeline. The POU framework captures the variance of the raw pixel data and the kernel of each subsequent calibration transformation, allowing the full covariance matrix of any subset of calibrated pixels to be recalled on-the-fly at any step in the calibration process. Singular value decomposition (SVD) is used to compress and low-pass filter the raw uncertainty data as well as any data-dependent kernels. The combination of the POU framework and SVD compression provides downstream consumers of the calibrated pixel data access to the full covariance matrix of any subset of the calibrated pixels, traceable to pixel-level measurement uncertainties, without having to store, retrieve and operate on prohibitively large covariance matrices. We describe the POU framework and SVD compression scheme and its implementation in the Kepler SOC pipeline.
Introducing uncertainty analysis of nucleation and crystal growth models in Process Analytical Technology (PAT) system design of crystallization processes
DEFF Research Database (Denmark)
Abdul Samad, Noor Asma Fazli Bin; Sin, Gürkan; Gernaey, Krist
2013-01-01
This paper presents the application of uncertainty and sensitivity analysis as part of a systematic model-based process monitoring and control (PAT) system design framework for crystallization processes. For the uncertainty analysis, the Monte Carlo procedure is used to propagate input uncertainty...
Reginatto, M; Neumann, S
2002-01-01
MAXED was developed to apply the maximum entropy principle to the unfolding of neutron spectrometric measurements. The approach followed in MAXED has several features that make it attractive: it permits inclusion of a priori information in a well-defined and mathematically consistent way, the algorithm used to derive the solution spectrum is not ad hoc (it can be justified on the basis of arguments that originate in information theory), and the solution spectrum is a non-negative function that can be written in closed form. This last feature permits the use of standard methods for the sensitivity analysis and propagation of uncertainties of MAXED solution spectra. We illustrate its use with unfoldings of NE 213 scintillation detector measurements of photon calibration spectra, and of multisphere neutron spectrometer measurements of cosmic-ray induced neutrons at high altitude (approx 20 km) in the atmosphere.
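The closed-form character of the MAXED solution spectrum can be illustrated on a toy unfolding problem: with exact constraints, the maximum-entropy spectrum takes the form f = f0·exp(−Rᵀλ), with λ chosen so the folded spectrum matches the data. The response matrix, default spectrum, and plain gradient-descent solver below are illustrative assumptions, not the actual MAXED code.

```python
import numpy as np

# Tiny unfolding problem: 2 detector readings, 5 spectrum bins.
R = np.array([[0.9, 0.7, 0.4, 0.2, 0.1],   # hypothetical response functions
              [0.1, 0.3, 0.5, 0.7, 0.9]])
f_true = np.array([1.0, 0.8, 0.6, 0.5, 0.4])
d = R @ f_true                              # noiseless "measurements"
f0 = np.ones(5)                             # a priori (default) spectrum

# The MaxEnt solution has the closed form f = f0 * exp(-R^T lam); find lam by
# gradient descent on the convex dual so that the constraints R f = d hold.
lam = np.zeros(2)
for _ in range(20000):
    f = f0 * np.exp(-R.T @ lam)
    grad = d - R @ f      # dual gradient; zero when constraints are met
    lam -= 0.05 * grad
```

Note the solution is non-negative by construction, which is the property that makes standard sensitivity analysis and uncertainty propagation applicable.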
Uncertainty propagation for the coulometric measurement of the plutonium concentration in MOX-PU4.
Energy Technology Data Exchange (ETDEWEB)
None, None
2017-11-07
This GUM Workbench™ propagation of uncertainty is for the coulometric measurement of the plutonium concentration in a Pu standard material (C126) supplied as individual aliquots that were prepared by mass. The C126 solution had been prepared and aliquoted as standard material. Samples are aliquoted into glass vials and heated to dryness for distribution as dried nitrate. The individual plutonium aliquots were not separated chemically or otherwise purified prior to measurement by coulometry in the F/H Laboratory. Hydrogen peroxide was used for valence adjustment. The Pu assay measurement results were corrected for the interference from trace iron in the solution measured for assay. Aliquot mass measurements were corrected for air buoyancy. The relative atomic mass (atomic weight) of the plutonium from the X126 certificate was used. The isotopic composition was determined by thermal ionization mass spectrometry (TIMS) for comparison but not used in calculations.
International Nuclear Information System (INIS)
Tripathy, Rohit; Bilionis, Ilias; Gonzalez, Marcial
2016-01-01
Uncertainty quantification (UQ) tasks, such as model calibration, uncertainty propagation, and optimization under uncertainty, typically require several thousand evaluations of the underlying computer codes. To cope with the cost of simulations, one replaces the real response surface with a cheap surrogate based, e.g., on polynomial chaos expansions, neural networks, support vector machines, or Gaussian processes (GP). However, the number of simulations required to learn a generic multivariate response grows exponentially as the input dimension increases. This curse of dimensionality can only be addressed if the response exhibits some special structure that can be discovered and exploited. A wide range of physical responses exhibit a special structure known as an active subspace (AS). An AS is a linear manifold of the stochastic space characterized by maximal response variation. The idea is that one should first identify this low-dimensional manifold, project the high-dimensional input onto it, and then link the projection to the output. If the dimensionality of the AS is low enough, then learning the link function is a much easier problem than the original problem of learning a high-dimensional function. The classic approach to discovering the AS requires gradient information, a fact that severely limits its applicability. Furthermore, and partly because of its reliance on gradients, it is not able to handle noisy observations. The latter is an essential trait if one wants to be able to propagate uncertainty through stochastic simulators, e.g., through molecular dynamics codes. In this work, we develop a probabilistic version of AS which is gradient-free and robust to observational noise. Our approach relies on a novel Gaussian process regression with built-in dimensionality reduction. In particular, the AS is represented as an orthogonal projection matrix that serves as yet another covariance function hyper-parameter to be estimated from the data...
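For contrast with the gradient-free method this abstract proposes, the classic gradient-based active-subspace construction it refers to can be sketched compactly: form C = E[∇f ∇fᵀ] by sampling and take its dominant eigenvectors. The test function with a single hidden active direction is an assumption for the demo.

```python
import numpy as np

rng = np.random.default_rng(3)

d = 10                       # nominal input dimension
w = np.ones(d) / np.sqrt(d)  # hidden 1-D active direction (assumed for the demo)

def f(x):
    return np.exp(0.7 * (w @ x))

def grad_f(x):
    # Gradient of f; the classic AS method needs exactly this information.
    return 0.7 * np.exp(0.7 * (w @ x)) * w

# Classic (gradient-based) active subspace discovery:
# C = E[grad f grad f^T]; the eigenvectors of C span the active subspace.
M = 500
X = rng.normal(size=(M, d))
C = np.zeros((d, d))
for x in X:
    g = grad_f(x)
    C += np.outer(g, g)
C /= M

eigvals, eigvecs = np.linalg.eigh(C)
top = eigvecs[:, -1]          # dominant eigenvector
alignment = abs(top @ w)      # should align with the hidden direction
```

A sharp eigenvalue gap after the first eigenvalue is the signal that a 1-D active subspace suffices.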
Birrell, Jane; Meares, Kevin; Wilkinson, Andrew; Freeston, Mark
2011-11-01
Since its emergence in the early 1990s, a narrow but concentrated body of research has developed examining the role of intolerance of uncertainty (IU) in worry, and yet we still know little about its phenomenology. In an attempt to clarify our understanding of this construct, this paper traces the way in which our understanding and definition of IU have evolved throughout the literature. This paper also aims to further our understanding of IU by exploring the latent variables measured by the Intolerance of Uncertainty Scale (IUS; Freeston, Rheaume, Letarte, Dugas & Ladouceur, 1994). A review of the literature surrounding IU confirmed that the current definitions are categorical and lack specificity. A critical review of existing factor analytic studies was carried out in order to determine the underlying factors measured by the IUS. Systematic searches yielded 9 papers for review. Two factors with 12 consistent items emerged throughout the exploratory studies, and the stability of models containing these two factors was demonstrated in subsequent confirmatory studies. It is proposed that these factors represent (i) desire for predictability and an active engagement in seeking certainty, and (ii) paralysis of cognition and action in the face of uncertainty. It is suggested that these factors may represent approach and avoidance responses to uncertainty. Further research is required to confirm the construct validity of these factors and to determine the stability of this structure within clinical samples. Copyright © 2011 Elsevier Ltd. All rights reserved.
Guo, Changning; Doub, William H; Kauffman, John F
2010-08-01
Monte Carlo simulations were applied to investigate the propagation of uncertainty in both input variables and response measurements on model prediction for nasal spray product performance design of experiment (DOE) models in the first part of this study, with an initial assumption that the models perfectly represent the relationship between input variables and the measured responses. In this article, we discard the initial assumption, and extended the Monte Carlo simulation study to examine the influence of both input variable variation and product performance measurement variation on the uncertainty in DOE model coefficients. The Monte Carlo simulations presented in this article illustrate the importance of careful error propagation during product performance modeling. Our results show that the error estimates based on Monte Carlo simulation result in smaller model coefficient standard deviations than those from regression methods. This suggests that the estimated standard deviations from regression may overestimate the uncertainties in the model coefficients. Monte Carlo simulations provide a simple software solution to understand the propagation of uncertainty in complex DOE models so that design space can be specified with statistically meaningful confidence levels. (c) 2010 Wiley-Liss, Inc. and the American Pharmacists Association
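A minimal version of the kind of Monte Carlo study described, with assumed (hypothetical) model coefficients, input variation, and measurement noise: perturb inputs and responses, refit the DOE model many times, and read coefficient uncertainty off the empirical distribution.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical DOE model: y = b0 + b1*x1 + b2*x2 (two-level factorial design,
# replicated 5 times; first column is the intercept).
b_true = np.array([1.0, 2.0, -0.5])
X_design = np.array([[1, -1, -1], [1, -1, 1], [1, 1, -1], [1, 1, 1]] * 5, float)

coefs = []
for _ in range(2000):
    # Perturb both the input settings and the measured responses.
    X_run = X_design.copy()
    X_run[:, 1:] += rng.normal(0, 0.05, X_run[:, 1:].shape)
    y = X_run @ b_true + rng.normal(0, 0.1, len(X_run))
    # Fit against the *nominal* design, as an experimenter would.
    b_hat, *_ = np.linalg.lstsq(X_design, y, rcond=None)
    coefs.append(b_hat)

coefs = np.array(coefs)
mc_mean = coefs.mean(axis=0)   # Monte Carlo estimate of coefficient means
mc_std = coefs.std(axis=0)     # Monte Carlo estimate of coefficient spread
```

Comparing `mc_std` against the standard errors reported by ordinary regression is exactly the kind of check the article performs.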
de'Michieli Vitturi, Mattia; Pardini, Federica; Spanu, Antonio; Neri, Augusto; Vittoria Salvetti, Maria
2015-04-01
Volcanic ash clouds represent a major hazard for populations living near volcanic centers, posing a risk to humans and a potential threat to crops, ground infrastructures, and aviation traffic. Lagrangian particle dispersal models are commonly used for tracking ash particles emitted from volcanic plumes and transported under the action of atmospheric wind fields. In this work, we present the results of an uncertainty propagation analysis applied to volcanic ash dispersal from weak plumes, with specific focus on the uncertainties related to the grain-size distribution of the mixture. To this aim, the Eulerian fully compressible mesoscale non-hydrostatic model WRF was used to generate the driving wind, representative of the atmospheric conditions occurring during the event of November 24, 2006 at Mt. Etna. Then, the Lagrangian particle model LPAC (de' Michieli Vitturi et al., JGR 2010) was used to simulate the transport of mass particles under the action of atmospheric conditions. The particle motion equations were derived by expressing the Lagrangian particle acceleration as the sum of the forces acting along its trajectory, with drag forces calculated as a function of particle diameter, density, shape and Reynolds number. The simulations were representative of weak plume events of Mt. Etna and aimed to quantify the effect on the dispersal process of the uncertainty in the particle sphericity and in the mean and variance of a log-normal distribution function describing the grain-size of ash particles released from the eruptive column. In order to analyze the sensitivity of particle dispersal to these uncertain parameters with a reasonable number of simulations, and therefore with affordable computational costs, response surfaces in the parameter space were built by using the generalized polynomial chaos technique. The uncertainty analysis allowed us to quantify the most probable values, as well as their pdf, of the number of particles as well as of the mean and...
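The response-surface step can be sketched with a minimal generalized polynomial chaos surrogate: fit Legendre-basis coefficients to a handful of model runs, then read statistics off the surrogate. The two-input "dispersal metric" below is a hypothetical stand-in for the LPAC simulations.

```python
import numpy as np

rng = np.random.default_rng(8)

def ash_metric(sphericity, sigma_gs):
    # Hypothetical dispersal outcome as a smooth function of two uncertain
    # inputs (particle sphericity, grain-size spread), both scaled to [-1, 1].
    return 1.0 + 0.4 * sphericity - 0.25 * sigma_gs + 0.1 * sphericity * sigma_gs

# Legendre basis up to total degree 2 on [-1, 1]^2 (uniform inputs).
def basis(x1, x2):
    P2 = lambda x: 0.5 * (3 * x ** 2 - 1)
    return np.array([np.ones_like(x1), x1, x2, x1 * x2, P2(x1), P2(x2)]).T

# A few "expensive" model runs -> least-squares gPC coefficients.
n_runs = 30
X = rng.uniform(-1, 1, (n_runs, 2))
y = ash_metric(X[:, 0], X[:, 1])
coeffs, *_ = np.linalg.lstsq(basis(X[:, 0], X[:, 1]), y, rcond=None)

# The surrogate is then cheap to evaluate; because all non-constant Legendre
# terms have zero mean under the uniform input distribution, the constant
# coefficient is directly the output mean.
gpc_mean = coeffs[0]
```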
Sampling of systematic errors to estimate likelihood weights in nuclear data uncertainty propagation
International Nuclear Information System (INIS)
Helgesson, P.; Sjöstrand, H.; Koning, A.J.; Rydén, J.; Rochman, D.; Alhassan, E.; Pomp, S.
2016-01-01
In methodologies for nuclear data (ND) uncertainty assessment and propagation based on random sampling, likelihood weights can be used to infer experimental information into the distributions for the ND. As the included number of correlated experimental points grows large, the computational time for the matrix inversion involved in obtaining the likelihood can become a practical problem. There are also other problems related to the conventional computation of the likelihood, e.g., the assumption that all experimental uncertainties are Gaussian. In this study, a way to estimate the likelihood which avoids matrix inversion is investigated; instead, the experimental correlations are included by sampling of systematic errors. It is shown that the model underlying the sampling methodology (using univariate normal distributions for random and systematic errors) implies a multivariate Gaussian for the experimental points (i.e., the conventional model). It is also shown that the likelihood estimates obtained through sampling of systematic errors approach the likelihood obtained with matrix inversion as the sample size for the systematic errors grows large. In the practical cases studied, it is seen that the estimates for the likelihood weights converge impractically slowly with the sample size, compared to matrix inversion. The computational time is estimated to be greater than for matrix inversion in cases with more experimental points, too. Hence, the sampling of systematic errors has little potential to compete with matrix inversion in cases where the latter is applicable. Nevertheless, the underlying model and the likelihood estimates can be easier to interpret intuitively than the conventional model and the likelihood function involving the inverted covariance matrix. Therefore, this work can both have pedagogical value and be used to help motivate the conventional assumption of a multivariate Gaussian for experimental data. The sampling of systematic errors could also...
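A toy version of the comparison this abstract describes, with assumed uncertainties and a single fully correlated systematic error: the likelihood computed with the full covariance (matrix inversion) against the estimate obtained by sampling the systematic error.

```python
import numpy as np

rng = np.random.default_rng(5)

# Experimental points: independent random errors sigma plus one fully
# correlated systematic error tau (a deliberately simple correlation model).
n = 5
sigma = np.full(n, 0.1)
tau = 0.2
x = rng.normal(0.0, 0.1, n)  # residuals (data minus prediction)

# Conventional likelihood: multivariate Gaussian with the full covariance,
# requiring a matrix inversion.
cov = np.diag(sigma ** 2) + tau ** 2 * np.ones((n, n))
inv = np.linalg.inv(cov)
log_det = np.linalg.slogdet(cov)[1]
L_exact = np.exp(-0.5 * (x @ inv @ x + log_det + n * np.log(2 * np.pi)))

# Alternative: sample the systematic shift and average the (then independent)
# univariate Gaussian likelihoods -- no matrix inversion needed.
S = 200000
shifts = rng.normal(0.0, tau, S)
resid = x[None, :] - shifts[:, None]
log_terms = -0.5 * np.sum(resid ** 2 / sigma ** 2, axis=1)
norm = np.prod(1.0 / (np.sqrt(2 * np.pi) * sigma))
L_sampled = norm * np.mean(np.exp(log_terms))

rel_err = abs(L_sampled - L_exact) / L_exact
```

Marginalizing the sampled shift analytically recovers exactly the multivariate Gaussian, which is the equivalence the paper proves; the slow Monte Carlo convergence it reports corresponds to how large S must be here.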
Antoshchenkova, Ekaterina; Imbert, David; Richet, Yann; Bardet, Lise; Duluc, Claire-Marie; Rebour, Vincent; Gailler, Audrey; Hébert, Hélène
2016-04-01
The aim of this study is to assess the tsunamigenic potential of the Azores-Gibraltar Fracture Zone (AGFZ). This work is part of the French project TANDEM (Tsunamis in the Atlantic and English ChaNnel: Definition of the Effects through numerical Modeling; www-tandem.cea.fr); special attention is paid to the French Atlantic coasts. Structurally, the AGFZ region is complex and not well understood. However, many of its faults produce earthquakes with significant vertical slip, of a type that can result in tsunamis. We use the major tsunami event of the AGFZ to obtain a regional estimate of the tsunamigenic potential of this zone. The major reported event for this zone is the 1755 Lisbon event. There are large uncertainties concerning the source location and focal mechanism of this earthquake. Hence, a simple deterministic approach is not sufficient to cover, on the one hand, the whole AGFZ with its geological complexity and, on the other hand, the lack of information concerning the 1755 Lisbon tsunami. A parametric modeling environment, Promethée (promethee.irsn.org/doku.php), was coupled to tsunami simulation software based on shallow water equations with the aim of propagating uncertainties. Such a statistical point of view allows us to work with multiple hypotheses simultaneously. In our work we introduce the seismic source parameters in the form of distributions, thus generating a database of thousands of tsunami scenarios and tsunami wave height distributions. Exploring our tsunami scenario database, we present preliminary results for France. Tsunami wave heights (within one standard deviation of the mean) can be about 0.5 m - 1 m for the Atlantic coast and approaching 0.3 m for the English Channel.
Gorbunov, Michael E.; Kirchengast, Gottfried
2018-01-01
A new reference occultation processing system (rOPS) will include a Global Navigation Satellite System (GNSS) radio occultation (RO) retrieval chain with integrated uncertainty propagation. In this paper, we focus on wave-optics bending angle (BA) retrieval in the lower troposphere and introduce (1) an empirically estimated boundary layer bias (BLB) model then employed to reduce the systematic uncertainty of excess phases and bending angles in about the lowest 2 km of the troposphere and (2) the estimation of (residual) systematic uncertainties and their propagation together with random uncertainties from excess phase to bending angle profiles. Our BLB model describes the estimated bias of the excess phase transferred from the estimated bias of the bending angle, for which the model is built, informed by analyzing refractivity fluctuation statistics shown to induce such biases. The model is derived from regression analysis using a large ensemble of Constellation Observing System for Meteorology, Ionosphere, and Climate (COSMIC) RO observations and concurrent European Centre for Medium-Range Weather Forecasts (ECMWF) analysis fields. It is formulated in terms of predictors and adaptive functions (powers and cross products of predictors), where we use six main predictors derived from observations: impact altitude, latitude, bending angle and its standard deviation, canonical transform (CT) amplitude, and its fluctuation index. Based on an ensemble of test days, independent of the days of data used for the regression analysis to establish the BLB model, we find the model very effective for bias reduction and capable of reducing bending angle and corresponding refractivity biases by about a factor of 5. The estimated residual systematic uncertainty, after the BLB profile subtraction, is lower bounded by the uncertainty from the (indirect) use of ECMWF analysis fields but is significantly lower than the systematic uncertainty without BLB correction. The systematic and
Directory of Open Access Journals (Sweden)
Nerea Mangado
2016-11-01
Computational modeling has become a powerful tool in biomedical engineering thanks to its potential to simulate coupled systems. However, real parameters are usually not accurately known and variability is inherent in living organisms. To cope with this, probabilistic tools, statistical analysis and stochastic approaches have been used. This article aims to review the analysis of uncertainty and variability in the context of finite element modeling in biomedical engineering. Characterization techniques and propagation methods are presented, as well as examples of their applications in biomedical finite element simulations. Uncertainty propagation methods, both non-intrusive and intrusive, are described. Finally, pros and cons of the different approaches and their use in the scientific community are presented. This leads us to identify future directions for research and methodological development of uncertainty modeling in biomedical engineering.
Energy Technology Data Exchange (ETDEWEB)
Chatterjee, Samrat; Tipireddy, Ramakrishna; Oster, Matthew R.; Halappanavar, Mahantesh
2016-09-16
Securing cyber-systems on a continual basis against a multitude of adverse events is a challenging undertaking. Game-theoretic approaches, that model actions of strategic decision-makers, are increasingly being applied to address cybersecurity resource allocation challenges. Such game-based models account for multiple player actions and represent cyber attacker payoffs mostly as point utility estimates. Since a cyber-attacker’s payoff generation mechanism is largely unknown, appropriate representation and propagation of uncertainty is a critical task. In this paper we expand on prior work and focus on operationalizing the probabilistic uncertainty quantification framework, for a notional cyber system, through: 1) representation of uncertain attacker and system-related modeling variables as probability distributions and mathematical intervals, and 2) exploration of uncertainty propagation techniques including two-phase Monte Carlo sampling and probability bounds analysis.
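The two ingredients listed above, probability distributions for some variables and mathematical intervals for others, can be combined in a small sketch: sample the aleatory variable, propagate the interval endpoints (valid here because the payoff is monotone in the interval-valued input), and report interval-valued bounds. The payoff model and its parameters are hypothetical.

```python
import random

random.seed(6)

# Hypothetical attacker payoff: the success probability p is a random variable,
# while the asset value V is known only as an interval [V_lo, V_hi].
V_lo, V_hi = 50.0, 80.0

# Aleatory part: sample p from an assumed distribution.
# Epistemic part: push the interval endpoints through the payoff function,
# giving an interval-valued (probability-bounds style) answer rather than a
# single point estimate.
N = 100000
lo_samples, hi_samples = [], []
for _ in range(N):
    p = random.betavariate(2, 5)   # assumed success-probability model
    lo_samples.append(p * V_lo)    # payoff is monotone in V, so the
    hi_samples.append(p * V_hi)    # endpoints bound the true payoff
payoff_bounds = (sum(lo_samples) / N, sum(hi_samples) / N)
```

The width of `payoff_bounds` reflects the epistemic (interval) uncertainty that a point-utility game model would hide.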
Gronewold, A. D.; Wolpert, R. L.; Reckhow, K. H.
2007-12-01
Most probable number (MPN) and colony-forming-unit (CFU) are two estimates of fecal coliform bacteria concentration commonly used as measures of water quality in United States shellfish harvesting waters. The MPN is the maximum likelihood estimate (or MLE) of the true fecal coliform concentration based on counts of non-sterile tubes in serial dilution of a sample aliquot, indicating bacterial metabolic activity. The CFU is the MLE of the true fecal coliform concentration based on the number of bacteria colonies emerging on a growth plate after inoculation from a sample aliquot. Each estimating procedure has intrinsic variability and is subject to additional uncertainty arising from minor variations in experimental protocol. Several versions of each procedure (using different sized aliquots or different numbers of tubes, for example) are in common use, each with its own levels of probabilistic and experimental error and uncertainty. It has been observed empirically that the MPN procedure is more variable than the CFU procedure, and that MPN estimates are somewhat higher on average than CFU estimates, on split samples from the same water bodies. We construct a probabilistic model that provides a clear theoretical explanation for the observed variability in, and discrepancy between, MPN and CFU measurements. We then explore how this variability and uncertainty might propagate into shellfish harvesting area management decisions through a two-phased modeling strategy. First, we apply our probabilistic model in a simulation-based analysis of future water quality standard violation frequencies under alternative land use scenarios, such as those evaluated under guidelines of the total maximum daily load (TMDL) program. Second, we apply our model to water quality data from shellfish harvesting areas which at present are closed (either conditionally or permanently) to shellfishing, to determine if alternative laboratory analysis procedures might have led to different
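The MPN construction described above is a maximum likelihood estimate and fits in a few lines: under Poisson sampling, a tube is positive unless it received zero organisms, so P(positive) = 1 − exp(−c·v), and the MPN is the concentration c maximizing the resulting likelihood. The dilution volumes and tube counts below are illustrative.

```python
import math

# Serial-dilution setup (hypothetical): 5 tubes at each of three aliquot
# volumes; counts of non-sterile ("positive") tubes per dilution.
volumes = [0.1, 0.01, 0.001]   # mL of sample per tube
tubes = 5
positives = [5, 3, 0]

def log_lik(conc):
    """Log-likelihood of the positive-tube counts for concentration conc (per mL)."""
    ll = 0.0
    for v, k in zip(volumes, positives):
        p = -math.expm1(-conc * v)    # P(tube positive) = 1 - exp(-conc*v)
        if k > 0:
            ll += k * math.log(p)
        ll += -(tubes - k) * conc * v  # log of P(negative) for the sterile tubes
    return ll

# The MPN is the maximizing concentration; a log-spaced grid search suffices here.
grid = [10 ** (i / 200.0) for i in range(-200, 1000)]
mpn = max(grid, key=log_lik)
```

For this 5-3-0 outcome the maximizer lands near the tabulated MPN index for the corresponding dilution series, which is the sense in which MPN is the MLE the abstract refers to.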
Statistical tolerance analysis using the Jacobian-Torsor model based on the uncertainty propagation method
Directory of Open Access Journals (Sweden)
W Ghie
2016-04-01
Full Text Available One risk inherent in the use of assembly components is that the behaviour of these components is discovered only at the moment an assembly is being carried out. The objective of our work is to enable designers to use known component tolerances as parameters in models that can be used to predict properties at the assembly level. In this paper we present a statistical approach to assemblability evaluation, based on tolerance and clearance propagations. This new statistical analysis method for tolerance is based on the Jacobian-Torsor model and the uncertainty measurement approach. We show how this can be accomplished by modeling the distribution of manufactured dimensions through applying a probability density function. By presenting an example we show how statistical tolerance analysis should be used in the Jacobian-Torsor model. This work is supported by previous efforts aimed at developing a new generation of computational tools for tolerance analysis and synthesis, using the Jacobian-Torsor approach. This approach is illustrated on a simple three-part assembly, demonstrating the method’s capability in handling three-dimensional geometry.
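The core idea above, assigning probability density functions to manufactured dimensions and propagating them to an assembly-level property, can be sketched with a Monte Carlo tolerance stack-up; the three-part geometry, nominals, and tolerances below are hypothetical, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical three-part stack: gap = housing - part_a - part_b.
# Each dimension ~ Normal(nominal, sigma) with sigma = tolerance / 3.
nominals = np.array([30.0, 14.0, 15.9])    # mm
tolerances = np.array([0.06, 0.03, 0.03])  # +/- mm
sigmas = tolerances / 3.0

n = 100_000
dims = rng.normal(nominals, sigmas, size=(n, 3))
gap = dims[:, 0] - dims[:, 1] - dims[:, 2]

print(gap.mean())        # nominal clearance, ~0.1 mm
print((gap < 0).mean())  # estimated probability the parts do not assemble
```

The same statistics follow analytically here (the gap is a linear combination of normals), but the sampling approach carries over to nonlinear assembly responses.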
Uncertainty propagation in modeling of plasma-assisted hydrogen production from biogas
Zaherisarabi, Shadi; Venkattraman, Ayyaswamy
2016-10-01
With the growing concern over global warming and the resulting emphasis on decreasing greenhouse gas emissions, there is an ever-increasing need to utilize energy-production strategies that can decrease the burning of fossil fuels. In this context, hydrogen remains an attractive clean-energy fuel that can be oxidized to produce water as a by-product. In spite of being an abundant species, hydrogen is seldom found in a form that is directly usable for energy production. While steam reforming of methane is one popular technique for hydrogen production, plasma-assisted conversion of biogas (carbon dioxide + methane) to hydrogen is an attractive alternative. Apart from producing hydrogen, the other advantage of using biogas as a raw material is the fact that two potent greenhouse gases are consumed. In this regard, modeling is an important tool to understand and optimize plasma-assisted conversion of biogas. The primary goal of this work is to perform a comprehensive statistical study that quantifies the influence of uncertain rate constants, thereby determining the key reaction pathways. A 0-D chemical kinetics solver in the OpenFOAM suite is used to perform a series of simulations that propagate the uncertainty in rate constants and obtain the resulting mean and standard deviation of the outcomes.
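Rate-constant uncertainty propagation of this kind can be sketched by sampling the rate constant and pushing each sample through the kinetics. The single first-order channel and the factor-of-two lognormal uncertainty below are assumptions for illustration, not the biogas mechanism:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy first-order channel CH4 -> products with uncertain rate constant k.
# Assumed: k lognormal with a factor-of-2 (1-sigma) uncertainty.
k_nominal = 1.0e-3   # 1/s, hypothetical value
sigma_ln = np.log(2.0)
t = 600.0            # s

k = rng.lognormal(np.log(k_nominal), sigma_ln, 10_000)
c = np.exp(-k * t)   # normalized CH4 remaining, c(t) = exp(-k*t)

print(c.mean(), c.std())  # propagated mean and standard deviation
```

The spread of `c` quantifies how the rate-constant uncertainty maps into the predicted conversion; a full mechanism replaces the exponential with an ODE integration per sample.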
Analytic model of electron pulse propagation in ultrafast electron diffraction experiments
International Nuclear Information System (INIS)
Michalik, A.M.; Sipe, J.E.
2006-01-01
We present a mean-field analytic model to study the propagation of electron pulses used in ultrafast electron diffraction (UED) experiments. We assume a Gaussian form to characterize the electron pulse, and derive a system of ordinary differential equations that are solved quickly and easily to give the pulse dynamics. We compare our model to an N-body numerical simulation and show excellent agreement between the two result sets. This model is a convenient alternative to time-consuming and computationally intensive N-body simulations for exploring the dynamics of UED electron pulses, and serves as a tool for refining UED experimental designs.
Analytical Study on Propagation Dynamics of Optical Beam in Parity-Time Symmetric Optical Couplers
International Nuclear Information System (INIS)
Zhou Zheng; Zhang Li-Juan; Zhu Bo
2015-01-01
We present exact analytical solutions to a parity-time (PT) symmetric optical system describing light transport in PT-symmetric optical couplers. We show that light intensity oscillates periodically between the two waveguides in the unbroken PT-symmetric phase, whereas in the broken PT-symmetric phase light always leaves the system from the waveguide experiencing gain, regardless of whether it is initially input at the waveguide experiencing gain or the one experiencing loss. These analytical results agree with the recent experimental observation reported by Rüter et al. [Nat. Phys. 6 (2010) 192]. In addition, we present a scheme for manipulating PT symmetry by applying a periodic modulation. Our results provide an efficient way to control light propagation in a periodically modulated PT-symmetric system by tuning the modulation amplitude and frequency. (paper)
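The two phases can be reproduced with the standard PT-symmetric coupled-mode equations da1/dz = g*a1 + i*k*a2, da2/dz = -g*a2 + i*k*a1 (assumed here as a generic model; the paper's exact system may differ in notation), which are unbroken for k > g and broken for k < g:

```python
import numpy as np

def pt_coupler_intensity(gain, kappa, z, a0=(1.0, 0.0)):
    """Total intensity |a1|^2 + |a2|^2 after propagation distance z in a
    two-waveguide PT coupler, via eigendecomposition of the mode matrix."""
    M = np.array([[gain, 1j * kappa], [1j * kappa, -gain]], dtype=complex)
    w, V = np.linalg.eig(M)                      # eigenvalues +/- sqrt(g^2 - k^2)
    a = V @ np.diag(np.exp(w * z)) @ np.linalg.inv(V) @ np.array(a0, complex)
    return float(abs(a[0]) ** 2 + abs(a[1]) ** 2)

# Unbroken phase (kappa > gain): bounded periodic intensity exchange.
print(pt_coupler_intensity(0.5, 1.0, 3.0))
# Broken phase (kappa < gain): exponential growth in the gain waveguide.
print(pt_coupler_intensity(1.5, 1.0, 3.0))
```

The eigenvalues are purely imaginary in the unbroken phase (oscillation) and real in the broken phase (growth/decay), which is exactly the dichotomy described in the abstract.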
An analytical solution for tidal propagation in the Yangtze Estuary, China
Directory of Open Access Journals (Sweden)
E. F. Zhang
2012-09-01
Full Text Available An analytical model for tidal dynamics has been applied to the Yangtze Estuary for the first time, to describe the tidal propagation in this large and typically branched estuary with three orders of branches and four outlets to the sea. This study shows that the analytical model developed for a single-channel estuary can also accurately describe the tidal dynamics in a branched estuary, particularly in the downstream part. Within the same estuary system, the North Branch and the South Branches have distinct tidal behaviour: the former is amplified, demonstrating a marine character, and the latter is damped, with a riverine character. The satisfactory results for the South Channel and the South Branch using both separate and combined topographies confirm that the branched estuary system functions as an entity. To further test these results, it is suggested that more accurate and denser bathymetric and tidal data be collected.
Directory of Open Access Journals (Sweden)
S.V. Bystrov
2016-05-01
Full Text Available Subject of Research. We present research results for the signal uncertainty problem that naturally arises for developers of servomechanisms, including the analytical design of serial compensators delivering the required quality indexes. Method. The problem was solved using the Besekerskiy engineering approach, formulated in 1958. This makes it possible to reduce the requirements on the input signal composition of servomechanisms to only two quantitative characteristics: maximum speed and maximum acceleration. Knowledge of the input signal's maximum speed and acceleration allows introducing an equivalent harmonic input signal with calculated amplitude and frequency. In combination with the requirement on maximum tracking error, the amplitude and frequency of the equivalent harmonic input make it possible to estimate analytically the error amplitude characteristic of the system and then convert it to the amplitude characteristic of the open-loop system transfer function. While the Besekerskiy approach was previously used mainly with the apparatus of logarithmic characteristics, we use it for the analytical synthesis of serial compensators. Main Results. The proposed technique is used to create analytical "input–output" and "error–output" polynomial dynamic models of the designed system. In turn, the desired model of the designed system in the "error–output" form is the basis for the design of a serial compensator that delivers the desired placement of state-matrix eigenvalues and, consequently, the necessary set of dynamic indexes for the designed system. The procedure of serial compensator analytical design on the basis of the Besekerskiy engineering approach under conditions of signal uncertainty is illustrated by an example. Practical Relevance. The obtained theoretical results are
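The equivalent harmonic input signal at the core of this approach follows from matching the maxima of a sinusoid: for x(t) = A*sin(w*t), the maximum speed is A*w and the maximum acceleration is A*w**2, which inverts to the sketch below:

```python
def equivalent_harmonic(v_max, a_max):
    """Amplitude and frequency of the sinusoid whose maximum speed and
    maximum acceleration equal the given input-signal bounds:
    w = a_max / v_max,  A = v_max**2 / a_max."""
    omega = a_max / v_max
    amplitude = v_max ** 2 / a_max
    return amplitude, omega

# A sinusoid with A = 2, w = 3 has v_max = 6 and a_max = 18; invert:
A, w = equivalent_harmonic(6.0, 18.0)
print(A, w)  # 2.0 3.0
```

Together with the maximum tracking error, this (A, w) pair fixes the required point on the system's error amplitude characteristic.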
Directory of Open Access Journals (Sweden)
Soheil Salahshour
2015-02-01
Full Text Available In this paper, we apply the concept of Caputo’s H-differentiability, constructed based on the generalized Hukuhara difference, to solve the fuzzy fractional differential equation (FFDE) with uncertainty. This is in contrast to conventional solutions that either require knowledge of fractional derivatives of the unknown solution at the initial point (Riemann–Liouville) or yield a solution with increasing length of its support (Hukuhara difference). Then, in order to solve the FFDE analytically, we introduce the fuzzy Laplace transform of the Caputo H-derivative. To the best of our knowledge, there is limited research devoted to analytical methods for solving the FFDE under fuzzy Caputo fractional differentiability. An analytical solution is presented to confirm the capability of the proposed method.
International Nuclear Information System (INIS)
Barrado, A. I.; Garcia, S.; Perez, R. M.
2013-01-01
This paper presents an evaluation of the uncertainty associated with the analytical measurement of eighteen polycyclic aromatic compounds (PACs) in ambient air by liquid chromatography with fluorescence detection (HPLC/FD). The study focused on analyses of the PM10, PM2.5 and gas-phase fractions. The main analytical uncertainty was estimated for eleven polycyclic aromatic hydrocarbons (PAHs), four nitro polycyclic aromatic hydrocarbons (nitro-PAHs) and two hydroxy polycyclic aromatic hydrocarbons (OH-PAHs) based on the analytical determination, reference material analysis and the extraction step. The main contributions reached 15-30% and came from the extraction process of real ambient samples, with those for nitro-PAHs being the highest (20-30%). Ranges and mean values of PAC mass concentrations measured in the gas phase and PM10/PM2.5 particle fractions during a full year are also presented. Concentrations of OH-PAHs were about 2-4 orders of magnitude lower than those of their parent PAHs and comparable to those sparsely reported in the literature. (Author)
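When the individual contributions (determination, reference-material analysis, extraction) are independent, they combine in quadrature into the total relative uncertainty. A minimal sketch; the three percentages are illustrative values in the range reported above, not the paper's budget:

```python
import math

def combined_relative_uncertainty(components_percent):
    """Root-sum-of-squares combination of independent relative uncertainties."""
    return math.sqrt(sum(u * u for u in components_percent))

# Hypothetical budget for one nitro-PAH: determination 5%,
# reference material 8%, extraction 25% (extraction dominates).
u_total = combined_relative_uncertainty([5.0, 8.0, 25.0])
print(u_total)  # ~26.7 %
```

The quadratic combination explains why the extraction step dominates the budget: its squared contribution outweighs the other terms by an order of magnitude.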
International Nuclear Information System (INIS)
Vinai, Paolo; Macian-Juan, Rafael; Chawla, Rakesh
2011-01-01
The paper describes the propagation of void fraction uncertainty, as quantified by employing a novel methodology developed at the Paul Scherrer Institut, in the RETRAN-3D simulation of the Peach Bottom turbine trip test. Since the transient considered is characterized by a strong coupling between thermal-hydraulics and neutronics, the accuracy of the void fraction model has a very important influence on the prediction of the power history and, in particular, of the maximum power reached. It has been shown that the objective measures used for the void fraction uncertainty, based on the direct comparison between experimental and predicted values extracted from a database of appropriate separate-effect tests, provide power uncertainty bands that are narrower and more realistic than those based, for example, on expert opinion. The applicability of such an approach to best-estimate nuclear power plant transient analysis has thus been demonstrated.
DEFF Research Database (Denmark)
Diky, Vladimir; Chirico, Robert D.; Muzny, Chris
ThermoData Engine (TDE, NIST Standard Reference Databases 103a and 103b) is the first product that implements the concept of Dynamic Data Evaluation in the fields of thermophysics and thermochemistry, which includes maintaining the comprehensive and up-to-date database of experimentally measured ...... uncertainties, curve deviations, and inadequacies of the models. Uncertainty analysis shows relative contributions to the total uncertainty from each component and pair of components....
Evaluation of uncertainty sources and propagation from irradiance sensors to PV yield
Mariottini, Francesco; Gottschalg, Ralph; Betts, Tom; Zhu, Jiang
2018-01-01
This work quantifies the uncertainties of a pyranometer. The sensitivity to errors arising from the adoption of different time resolutions is analysed. Estimation of the irradiance measurand and its error is extended over an annual data set. This study represents an attempt to provide a more exhaustive overview of both systematic (i.e. physical) and random uncertainties in the evaluation of pyranometer measurements. Starting from expanded uncertainty in a monitored ...
Gurdak, Jason J.; Qi, Sharon L.; Geisler, Michael L.
2009-01-01
The U.S. Geological Survey Raster Error Propagation Tool (REPTool) is a custom tool for use with the Environmental Systems Research Institute (ESRI) ArcGIS Desktop application to estimate error propagation and prediction uncertainty in raster processing operations and geospatial modeling. REPTool is designed to introduce concepts of error and uncertainty in geospatial data and modeling and provide users of ArcGIS Desktop a geoprocessing tool and methodology to consider how error affects geospatial model output. Similar to other geoprocessing tools available in ArcGIS Desktop, REPTool can be run from a dialog window, from the ArcMap command line, or from a Python script. REPTool consists of public-domain, Python-based packages that implement Latin Hypercube Sampling within a probabilistic framework to track error propagation in geospatial models and quantitatively estimate the uncertainty of the model output. Users may specify error for each input raster or model coefficient represented in the geospatial model. The error for the input rasters may be specified as either spatially invariant or spatially variable across the spatial domain. Users may specify model output as a distribution of uncertainty for each raster cell. REPTool uses the Relative Variance Contribution method to quantify the relative error contribution from the two primary components in the geospatial model: errors in the model input data and coefficients of the model variables. REPTool is appropriate for many types of geospatial processing operations, modeling applications, and related research questions, including applications that consider spatially invariant or spatially variable error in geospatial data.
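The central mechanism, Latin Hypercube Sampling of input-raster error followed by per-cell statistics of the model output, can be sketched in a few lines. The raster, error range, and linear "model" below are placeholders for illustration, not REPTool's API:

```python
import numpy as np

rng = np.random.default_rng(1)

def latin_hypercube_uniform(n, low, high):
    """n stratified samples from U(low, high): one sample per equal stratum,
    then a random permutation."""
    strata = (np.arange(n) + rng.random(n)) / n
    return low + (high - low) * rng.permutation(strata)

# 3x3 input raster with a spatially invariant +/-0.5 additive error;
# the "geospatial model" here is just a linear transform of the raster.
raster = np.arange(9, dtype=float).reshape(3, 3)
errors = latin_hypercube_uniform(1000, -0.5, 0.5)

outputs = np.stack([2.0 * (raster + e) + 1.0 for e in errors])
print(outputs.std(axis=0))  # per-cell output uncertainty
```

For this linear model the per-cell standard deviation is simply 2 times the error's standard deviation; the sampling machinery pays off once the model is nonlinear or the error is spatially variable.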
Energy Technology Data Exchange (ETDEWEB)
Kuijper, J.C.; Oppe, J.; Klein Meulekamp, R.; Koning, H. [NRG - Fuels, Actinides and Isotopes group, Petten (Netherlands)
2005-07-01
Some years ago a methodology was developed at NRG for the calculation of 'density-to-density' and 'one-group cross section-to-density' sensitivity matrices and covariance matrices for final nuclide densities for burnup schemes consisting of multiple sets of flux/spectrum and burnup calculations. The applicability of the methodology was then demonstrated by calculations of BR3 MOX pin irradiation experiments employing multi-group cross section uncertainty data from the EAF4 data library. A recent development is the extension of this methodology to enable its application in combination with the OCTOPUS-MCNP-FISPACT/ORIGEN Monte Carlo burnup scheme. This required some extensions to the sensitivity matrix calculation tool CASEMATE. The extended methodology was applied to the 'HTR Plutonium Cell Burnup Benchmark' to calculate the uncertainties (covariances) in the final densities, as far as these uncertainties are caused by uncertainties in cross sections. Up to 600 MWd/kg these uncertainties are larger than the differences between the code systems. However, it should be kept in mind that the calculated uncertainties are based on EAF4 uncertainty data. It is not clear beforehand what a proper set of associated (MCNP) cross sections and covariances would yield in terms of final uncertainties in calculated densities. This will be investigated, with the same formalism, once these data become available. It should be noted that the studies performed up to now have mainly been concerned with the influence of uncertainties in cross sections. The influence of uncertainties in the decay constants, although included in the formalism, is not considered further. Also, the influence of other uncertainties (such as geometrical modelling approximations) has been left out of consideration for the time being. (authors)
'spup' - An R package for uncertainty propagation in spatial environmental modelling
Sawicka, K.; Heuvelink, G.B.M.
2016-01-01
Computer models are crucial tools in engineering and environmental sciences for simulating the behaviour of complex systems. While many models are deterministic, the uncertainty in their predictions needs to be estimated before they are used for decision support. Advances in uncertainty analysis
Uncertainty propagation analysis of an N2O emission model at the plot and landscape scale
Nol, L.; Heuvelink, G.B.M.; Veldkamp, A.; Vries, de W.; Kros, J.
2010-01-01
Nitrous oxide (N2O) emission from agricultural land is an important component of the total annual greenhouse gas (GHG) budget. In addition, uncertainties associated with agricultural N2O emissions are large. The goals of this work were (i) to quantify the uncertainties of modelled N2O emissions
International Nuclear Information System (INIS)
Cho, Soo Yong; Park, Chan Woo
2004-01-01
Uncertainties generated from the individual measured variables have an influence on the uncertainty of the experimental result through a data reduction equation. In this study, a performance test of a single-stage axial-type turbine is conducted, and total-to-total efficiencies are measured at various off-design points in a low-pressure, cold state. Based on the experimental apparatus, a data reduction equation for turbine efficiency is formulated and six measured variables are selected. Codes are written to calculate the efficiency, the uncertainty of the efficiency, and the sensitivity of the efficiency uncertainty to each of the measured quantities. The influence of each measured variable on the experimental result is thereby determined. Results show that the largest Uncertainty Magnification Factor (UMF) value is obtained for the inlet total pressure among the six measured variables, and its value is always greater than one. The UMF values of the inlet total temperature, the torque, and the RPM are always one. The Uncertainty Percentage Contribution (UPC) of the RPM shows the lowest influence on the uncertainty of the turbine efficiency, but the UPC of the torque has the largest influence on the result among the measured variables. These results are applied to find the correct direction for meeting an uncertainty requirement of the experimental result in the planning or development phase of an experiment, and also to offer ideas for preparing a measurement system in the planning phase.
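The UMF and UPC diagnostics generalize to any data reduction equation r = f(X1, ..., Xn): UMF_i = |(X_i/r) * dr/dX_i| and UPC_i is the share of (dr/dX_i * u_i)**2 in the combined variance. A sketch with finite-difference sensitivities; the two-variable reduction equation is illustrative, not the turbine-efficiency equation:

```python
import numpy as np

def umf_upc(f, x, u, h=1e-6):
    """Uncertainty Magnification Factors and Uncertainty Percentage
    Contributions for a data-reduction equation r = f(x), with sensitivities
    from central finite differences (relative step h)."""
    x = np.asarray(x, float)
    u = np.asarray(u, float)
    r = f(x)
    theta = np.empty_like(x)
    for i in range(len(x)):
        xp, xm = x.copy(), x.copy()
        xp[i] += h * x[i]
        xm[i] -= h * x[i]
        theta[i] = (f(xp) - f(xm)) / (2 * h * x[i])
    umf = np.abs(theta * x / r)
    contrib = (theta * u) ** 2
    upc = 100.0 * contrib / contrib.sum()
    return umf, upc

# Hypothetical reduction equation r = X1**2 / X2 at X = (2, 4):
umf, upc = umf_upc(lambda x: x[0] ** 2 / x[1], [2.0, 4.0], [0.02, 0.04])
print(umf)  # [2, 1]: X1's relative error is doubled, X2's passes through
print(upc)
```

For a power-law dependence r proportional to X**a the UMF is exactly |a|, which is why the pressure term (squared in the efficiency equation's pressure-ratio factor) dominates.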
Propagation of uncertainties from basic data to key parameters of nuclear reactors
International Nuclear Information System (INIS)
Kodeli, I.
2010-01-01
The author reports the development of a computing code (SUSD3D) and of libraries of nuclear data covariance matrices for assessing the sensitivities of reactor parameters, such as reactivity coefficients or neutron and gamma-ray fluxes, to basic nuclear data, and the corresponding uncertainties, notably for radiation transport, where uncertainty has various origins. Applications to fusion and fission reactors are reported.
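The propagation step in such sensitivity/uncertainty analyses is the first-order "sandwich" rule var(R) = S C S^T, combining a sensitivity vector S with a nuclear-data covariance matrix C. A minimal sketch with illustrative numbers, not SUSD3D data:

```python
import numpy as np

# Relative sensitivities dR/R per d(sigma)/sigma for three data channels,
# and their relative covariance matrix (all values illustrative).
S = np.array([0.8, -0.3, 0.1])
C = np.diag([0.02, 0.05, 0.10]) ** 2          # 2%, 5%, 10% standard deviations
C[0, 1] = C[1, 0] = 0.5 * 0.02 * 0.05         # assumed correlation of 0.5

var = S @ C @ S                               # sandwich rule: S C S^T
print(np.sqrt(var))                           # relative uncertainty of response R
```

Off-diagonal covariance terms can either inflate or (as here, with opposite-sign sensitivities) partially cancel the diagonal contributions.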
Energy Technology Data Exchange (ETDEWEB)
Pal Verma, Mahendra [Instituto de Investigaciones Electricas, Cuernavaca, Morelos (Mexico)
2008-07-01
A procedure was developed to estimate the analytical uncertainty in each parameter of the geochemical analysis of geothermal fluid. The estimation of the uncertainty is based on the results of geochemical analyses of geothermal fluids (numbered from 0 to 14) obtained within the framework of the intercomparison program among geochemical laboratories over the last 30 years. The analytical uncertainty was also propagated into the calculation of the parameters of the geothermal fluid in the reservoir, using the uncertainty-interval method and the GUM (Guide to the Expression of Uncertainty in Measurement) approach. The application of the methods is illustrated by the pH calculation of the geothermal fluid in the reservoir, considering samples 10 and 11 as waters separated at atmospheric conditions.
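The GUM propagation referenced above combines first-order sensitivities with the input standard uncertainties, u_c**2 = sum_i (df/dx_i)**2 * u_i**2 for independent inputs. A generic sketch; the two-input function and numbers are illustrative, not the geothermal pH model:

```python
import math

def gum_combined(f, x, u, h=1e-7):
    """First-order GUM combined standard uncertainty for independent inputs,
    with sensitivities df/dx_i from central finite differences."""
    s2 = 0.0
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        s2 += ((f(xp) - f(xm)) / (2 * h) * u[i]) ** 2
    return math.sqrt(s2)

# Illustrative two-input function f = x1 * x2:
u_c = gum_combined(lambda x: x[0] * x[1], [10.0, 2.0], [0.1, 0.05])
print(u_c)  # sqrt((2*0.1)**2 + (10*0.05)**2) ~ 0.539
```

The uncertainty-interval method evaluates the same function at the interval endpoints instead; for strongly nonlinear quantities like pH the two approaches can differ, which is the point of comparing them.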
Energy Technology Data Exchange (ETDEWEB)
Le Pallec, J. C.; Crouzet, N.; Bergeaud, V.; Delavaud, C. [CEA/DEN/DM2S, CEA/Saclay, 91191 Gif sur Yvette Cedex (France)
2012-07-01
The control of uncertainties in the field of reactor physics and their propagation in best-estimate modeling are a major issue in safety analysis. In this framework, the CEA is developing a methodology to perform multi-physics simulations including uncertainty analysis. The present paper aims to present and apply this methodology to the analysis of an accidental situation such as a REA (Rod Ejection Accident). This accident is characterized by a strong interaction between the different areas of reactor physics (neutronics, fuel thermal behaviour and thermal-hydraulics). The modeling is performed with the CRONOS2 code. The uncertainty analysis was conducted with the URANIE platform developed by the CEA: for each identified response of the modeling (output) and considering a set of key parameters with their uncertainties (input), a surrogate model in the form of a neural network was produced. The set of neural networks is then used to carry out a sensitivity analysis, which consists of a global variance analysis with the determination of the Sobol indices for all responses. The sensitivity indices are obtained for the input parameters by an approach based on the use of polynomial chaos. The present exercise helped to develop a methodological flow scheme and to consolidate the use of the URANIE tool in the framework of parallel calculations. Finally, the use of polynomial chaos allowed computing high-order sensitivity indices, thus highlighting and classifying the influence of the identified uncertainties on each response of the analysis (single and interaction effects). (authors)
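The variance-based sensitivity step (first-order Sobol indices) can be sketched without the neural-network surrogate, using a direct pick-freeze Monte Carlo estimator on a toy additive model; URANIE's own polynomial-chaos machinery is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(7)

def first_order_sobol(model, n_inputs, n=200_000):
    """Pick-freeze estimator of first-order Sobol indices for independent
    U(0,1) inputs: S_i = cov(y, y_i') / var(y), where y_i' shares only
    input i with y."""
    A = rng.random((n, n_inputs))
    B = rng.random((n, n_inputs))
    y = model(A)
    var = y.var()
    S = []
    for i in range(n_inputs):
        Ci = B.copy()
        Ci[:, i] = A[:, i]          # freeze input i, resample the rest
        yi = model(Ci)
        S.append(np.mean(y * yi) - np.mean(y) * np.mean(yi))
    return np.array(S) / var

# Additive toy model y = x1 + 2*x2: analytic indices are 0.2 and 0.8.
S = first_order_sobol(lambda X: X[:, 0] + 2.0 * X[:, 1], 2)
print(S)
```

With a fitted surrogate in place of the raw model, the same estimator (or analytic post-processing of polynomial-chaos coefficients) yields the indices at negligible cost.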
Groves, Curtis E.; Ilie, Marcel; Shallhorn, Paul A.
2014-01-01
Computational Fluid Dynamics (CFD) is the standard numerical tool used by fluid dynamicists to estimate solutions to many problems in academia, government, and industry. CFD is known to have errors and uncertainties, and there is no universally adopted method to estimate such quantities. This paper describes an approach to estimate CFD uncertainties strictly numerically using inputs and the Student's t-distribution. The approach is compared to an exact analytical solution of fully developed, laminar flow between infinite, stationary plates. It is shown that treating all CFD input parameters as oscillatory uncertainty terms coupled with the Student's t-distribution can encompass the exact solution.
Wang, S.; Huang, G. H.; Huang, W.; Fan, Y. R.; Li, Z.
2015-10-01
In this study, a fractional factorial probabilistic collocation method is proposed to reveal statistical significance of hydrologic model parameters and their multi-level interactions affecting model outputs, facilitating uncertainty propagation in a reduced dimensional space. The proposed methodology is applied to the Xiangxi River watershed in China to demonstrate its validity and applicability, as well as its capability of revealing complex and dynamic parameter interactions. A set of reduced polynomial chaos expansions (PCEs) only with statistically significant terms can be obtained based on the results of factorial analysis of variance (ANOVA), achieving a reduction of uncertainty in hydrologic predictions. The predictive performance of reduced PCEs is verified by comparing against standard PCEs and the Monte Carlo with Latin hypercube sampling (MC-LHS) method in terms of reliability, sharpness, and Nash-Sutcliffe efficiency (NSE). Results reveal that the reduced PCEs are able to capture hydrologic behaviors of the Xiangxi River watershed, and they are efficient functional representations for propagating uncertainties in hydrologic predictions.
International Nuclear Information System (INIS)
Segre, S. E.
2001-01-01
The known analytic expressions for the evolution of the polarization of electromagnetic waves propagating in a plasma with uniformly sheared magnetic field are extended to the case where the shear is not constant. Exact analytic expressions are found for the case when the space variations of the medium are such that the magnetic field components and the plasma density satisfy a particular condition (eq. 13), possibly in a convenient reference frame of polarization space.
Setty, Srinivas J.; Cefola, Paul J.; Montenbruck, Oliver; Fiedler, Hauke
2016-05-01
Catalog maintenance for Space Situational Awareness (SSA) demands an accurate and computationally lean orbit propagation and orbit determination technique to cope with the ever increasing number of observed space objects. As an alternative to established numerical and analytical methods, we investigate the accuracy and computational load of the Draper Semi-analytical Satellite Theory (DSST). The standalone version of the DSST was enhanced with additional perturbation models to improve its recovery of short periodic motion. The accuracy of DSST is, for the first time, compared to a numerical propagator with fidelity force models for a comprehensive grid of low, medium, and high altitude orbits with varying eccentricity and different inclinations. Furthermore, the run-time of both propagators is compared as a function of propagation arc, output step size and gravity field order to assess its performance for a full range of relevant use cases. For use in orbit determination, a robust performance of DSST is demonstrated even in the case of sparse observations, which is most sensitive to mismodeled short periodic perturbations. Overall, DSST is shown to exhibit adequate accuracy at favorable computational speed for the full set of orbits that need to be considered in space surveillance. Along with the inherent benefits of a semi-analytical orbit representation, DSST provides an attractive alternative to the more common numerical orbit propagation techniques.
Nuclear Data Uncertainty Propagation to Reactivity Coefficients of a Sodium Fast Reactor
Herrero, J. J.; Ochoa, R.; Martínez, J. S.; Díez, C. J.; García-Herranz, N.; Cabellos, O.
2014-04-01
The assessment of the uncertainty levels in the design and safety parameters of the innovative European Sodium Fast Reactor (ESFR) is mandatory. Some of these relevant safety quantities are the Doppler and void reactivity coefficients, whose uncertainties are quantified here. In addition, the nuclear reaction data for which an improvement would most benefit the design accuracy are identified. This work has been performed with the SCALE 6.1 code suite and its multigroup cross-section library based on the ENDF/B-VII.0 evaluation.
International Nuclear Information System (INIS)
Fiorito, L.; Diez, C.J.; Cabellos, O.; Stankovskiy, A.; Van den Eynde, G.; Labeau, P.E.
2014-01-01
Highlights: • Fission yield data and uncertainty comparison between major nuclear data libraries. • Fission yield covariance generation through a Bayesian technique. • Study of the effect of fission yield correlations on decay heat calculations. • Covariance information contributes to reducing fission pulse decay heat uncertainty. - Abstract: Fission product yields are fundamental parameters in burnup/activation calculations, and the impact of their uncertainties has been widely studied in the past. Evaluations of these uncertainties were released, still without covariance data. Therefore, the nuclear community has expressed the need for full fission yield covariance matrices to be able to produce inventory calculation results that take into account the complete uncertainty data. State-of-the-art fission yield data and methodologies for fission yield covariance generation were reviewed in this work. Covariance matrices were generated and compared to the original data stored in the library. We then focused on the effect of fission yield covariance information on fission pulse decay heat results for thermal fission of 235U. Calculations were carried out using different libraries and codes (ACAB and ALEPH-2) after introducing the new covariance values. Results were compared with those obtained with the uncertainty data currently provided by the libraries. The uncertainty quantification was performed first with Monte Carlo sampling and then compared with linear perturbation. Indeed, correlations between fission yields strongly affect the uncertainty of decay heat. Finally, a sensitivity analysis of fission product yields to fission pulse decay heat was performed in order to provide a full set of the most sensitive nuclides for such a calculation.
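The effect of yield correlations on an integral quantity can be illustrated by Monte Carlo sampling from a multivariate normal built from yield uncertainties and an assumed correlation matrix; the three-nuclide decay-heat toy model below is purely illustrative, not library data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy decay-heat model: H = sum_i w_i * Y_i, with correlated yields Y.
w = np.array([1.0, 0.5, 2.0])          # illustrative heat weights
y_mean = np.array([0.060, 0.030, 0.010])
y_std = 0.05 * y_mean                  # 5% relative uncertainty
corr = np.array([[1.0, -0.6, 0.0],
                 [-0.6, 1.0, 0.0],
                 [0.0, 0.0, 1.0]])     # assumed anti-correlation of yields 1, 2
cov = np.outer(y_std, y_std) * corr

samples = rng.multivariate_normal(y_mean, cov, 100_000)
H = samples @ w
print(H.std(), np.sqrt(w @ cov @ w))   # Monte Carlo vs analytic, should agree
```

With the anti-correlated pair, the propagated uncertainty is smaller than the uncorrelated quadrature sum, which is exactly the mechanism by which covariance information reduces the decay-heat uncertainty.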
Energy Technology Data Exchange (ETDEWEB)
Morales-Arteaga, Maria [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)
2017-11-07
This GUM Workbench™ propagation of uncertainty is for the coulometric measurement of the plutonium concentration in a Pu standard material (C126) supplied as individual aliquots that were prepared by mass. The C126 solution had been prepared and aliquoted as a standard material. Samples are aliquoted into glass vials and heated to dryness for distribution as dried nitrate. The individual plutonium aliquots were not separated chemically or otherwise purified prior to measurement by coulometry in the F/H Laboratory. Hydrogen peroxide was used for valence adjustment.
Propagation of uncertainty and sensitivity analysis in an integral oil-gas plume model
Wang, Shitao
2016-05-27
Polynomial Chaos expansions are used to analyze uncertainties in an integral oil-gas plume model simulating the Deepwater Horizon oil spill. The study focuses on six uncertain input parameters—two entrainment parameters, the gas to oil ratio, two parameters associated with the droplet-size distribution, and the flow rate—that impact the model's estimates of the plume's trap and peel heights, and of its various gas fluxes. The ranges of the uncertain inputs were determined by experimental data. Ensemble calculations were performed to construct polynomial chaos-based surrogates that describe the variations in the outputs due to variations in the uncertain inputs. The surrogates were then used to estimate reliably the statistics of the model outputs, and to perform an analysis of variance. Two experiments were performed to study the impacts of high and low flow rate uncertainties. The analysis shows that in the former case the flow rate is the largest contributor to output uncertainties, whereas in the latter case, with the uncertainty range constrained by a posteriori analyses, the flow rate's contribution becomes negligible. The trap and peel heights uncertainties are then mainly due to uncertainties in the 95th percentile of the droplet size and in the entrainment parameters.
Propagation of uncertainty and sensitivity analysis in an integral oil-gas plume model
Wang, Shitao; Iskandarani, Mohamed; Srinivasan, Ashwanth; Thacker, W. Carlisle; Winokur, Justin; Knio, Omar
2016-01-01
Polynomial Chaos expansions are used to analyze uncertainties in an integral oil-gas plume model simulating the Deepwater Horizon oil spill. The study focuses on six uncertain input parameters—two entrainment parameters, the gas to oil ratio, two parameters associated with the droplet-size distribution, and the flow rate—that impact the model's estimates of the plume's trap and peel heights, and of its various gas fluxes. The ranges of the uncertain inputs were determined by experimental data. Ensemble calculations were performed to construct polynomial chaos-based surrogates that describe the variations in the outputs due to variations in the uncertain inputs. The surrogates were then used to estimate reliably the statistics of the model outputs, and to perform an analysis of variance. Two experiments were performed to study the impacts of high and low flow rate uncertainties. The analysis shows that in the former case the flow rate is the largest contributor to output uncertainties, whereas in the latter case, with the uncertainty range constrained by a posteriori analyses, the flow rate's contribution becomes negligible. The trap and peel heights uncertainties are then mainly due to uncertainties in the 95th percentile of the droplet size and in the entrainment parameters.
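The surrogate workflow this abstract describes — ensemble runs, a polynomial fit, then cheap statistics from the surrogate — can be sketched as follows. The two-input "plume" function, its coefficients, and the sample sizes are invented stand-ins, not the actual integral plume model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for the plume model: "trap height" as a function of two
# scaled uncertain inputs (e.g. flow rate and an entrainment parameter).
def model(q):
    return 300 + 80*q[..., 0] + 15*q[..., 1] + 10*q[..., 0]*q[..., 1]

# Ensemble of model runs at random points in the uncertain-input box
train = rng.uniform(-1, 1, size=(200, 2))
obs = model(train)

# Second-order polynomial surrogate fitted by least squares
def basis(q):
    q1, q2 = q[..., 0], q[..., 1]
    return np.stack([np.ones_like(q1), q1, q2, q1*q2, q1**2, q2**2],
                    axis=-1)

coef, *_ = np.linalg.lstsq(basis(train), obs, rcond=None)

# The cheap surrogate now replaces the expensive model for statistics
# (and could feed an analysis of variance)
test_pts = rng.uniform(-1, 1, size=(100_000, 2))
pred = basis(test_pts) @ coef
print(pred.mean(), pred.std())
```

Since the toy model lies exactly in the polynomial span, the fit recovers it, which is the idealized version of the "surrogate reliably estimates the statistics" claim.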
International Nuclear Information System (INIS)
Frosio, Thomas; Bonaccorsi, Thomas; Blaise, Patrick
2016-01-01
Highlights: • Nuclear data uncertainty propagation for neutronic quantities in coupled problems. • Uncertainties are detailed for local isotopic concentrations and local power maps. • Correlations are built between space areas of the core and for different burnups. - Abstract: In a previous paper, a method was investigated to calculate sensitivity coefficients in the coupled Boltzmann/Bateman problem for nuclear data (ND) uncertainty propagation on the reactivity. Different methodologies were discussed and applied to an actual example of a multigroup cross-section uncertainty problem for a 2D Material Testing Reactor (MTR) benchmark. It was shown that differences between methods arose from correlations between input parameters, insofar as a method is able to take them into account. Those methods, unlike Monte Carlo (MC) sampling for uncertainty propagation and quantification (UQ), allow obtaining sensitivity coefficients, as well as correlation values between nuclear data, during the depletion calculation for the parameters of interest. This work is here extended to local parameters such as power factors and isotopic concentrations. It also includes fission yield (FY) uncertainty propagation, on both reactivity and power factors. Furthermore, it introduces a new methodology enabling the decorrelation of direct and transmutation terms for local quantities: a Monte Carlo method using samples built from a multidimensional Gaussian law is used to extend the previous studies and propagate fission yield uncertainties from the CEA's COMAC covariance file. It is shown that, for power factors, the most impactful ND are the scattering reactions, principally coming from 27Al and (bound hydrogen in) H2O. The overall effect is a reduction of the propagated uncertainties throughout the cycle thanks to negatively correlated terms. For fission yields (FY), the results show that neither reactivity nor local power factors are strongly affected by uncertainties. However, they
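The sensitivity-coefficient route to ND uncertainty propagation mentioned above amounts to the "sandwich rule": the response covariance is S V Sᵀ for sensitivity matrix S and nuclear-data covariance V. A minimal sketch, with an invented covariance matrix and invented sensitivity rows (not COMAC data), also shows how shared nuclear data induces correlation between two responses such as reactivity and a local power factor:

```python
import numpy as np

# Hypothetical relative covariance matrix for three nuclear-data
# parameters (values invented for illustration)
V = np.array([[4.0, 1.0, 0.0],
              [1.0, 2.25, 0.5],
              [0.0, 0.5, 1.0]]) * 1e-4     # (relative std dev)^2

# Assumed sensitivity rows for two responses: reactivity and a local
# power factor (d response / d parameter, relative units)
S = np.array([[0.8, -0.3, 0.1],
              [0.2,  0.6, 0.4]])

C = S @ V @ S.T                       # response covariance (sandwich rule)
std = np.sqrt(np.diag(C))             # propagated uncertainties
corr = C[0, 1] / (std[0] * std[1])    # correlation from shared ND
print(std, corr)
```

The off-diagonal term is how correlations "between space areas of the core" arise: distinct responses share sensitivities to the same uncertain nuclear data.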
Xu, Yanlong
2015-08-01
The coupled mode theory with coupling of diffraction modes and waveguide modes is usually used for the calculation of transmission and reflection coefficients for electromagnetic waves traveling through periodic sub-wavelength structures. In this paper, I extend this method to derive analytical solutions of high-order dispersion relations for shear horizontal (SH) wave propagation in elastic plates with periodic stubs. In the long-wavelength regime, an explicit expression is obtained by this theory and derived separately by employing an effective medium. This indicates that the periodic stubs are equivalent to an effective homogeneous layer in the long-wavelength limit. Notably, in the short-wavelength regime, high-order diffraction modes in the plate and high-order waveguide modes in the stubs are considered, with mode coupling, to compute the band structures. Numerical results of the coupled mode theory agree well with those of the finite element method (FEM). In addition, the band structures' evolution with the height of the stubs and the thickness of the plate shows clearly that the method can predict well the Bragg band gaps, locally resonant band gaps and high-order symmetric and anti-symmetric thickness-twist modes for the periodically structured plates. © 2015 Elsevier B.V.
Propagation of uncertainty by Monte Carlo simulations in case of basic geodetic computations
Directory of Open Access Journals (Sweden)
Wyszkowska Patrycja
2017-12-01
The determination of the accuracy of functions of measured or adjusted values may be a problem in geodetic computations. The general law of covariance propagation or, in the case of uncorrelated observations, the propagation of variance (the Gaussian formula) is commonly used for that purpose. That approach is theoretically justified for linear functions. In the case of non-linear functions, the first-order Taylor series expansion is usually used, but that solution is affected by the expansion error. The aim of the study is to determine the applicability of the general variance propagation law to the non-linear functions used in basic geodetic computations. The paper presents the errors which result from neglecting the higher-order terms and determines the range of validity of such a simplification. The basis of the analysis is a comparison of the results obtained by the law of propagation of variance and by a probabilistic approach, namely Monte Carlo simulations. Both methods are used to determine the accuracy of the following geodetic computations: the Cartesian coordinates of an unknown point in the three-point resection problem, azimuths and distances from Cartesian coordinates, and height differences in trigonometric and geometric levelling. These simulations and the analysis of the results confirm the possibility of applying the general law of variance propagation in basic geodetic computations even if the functions are non-linear, provided only that the accuracy of the observations is not too low. Generally, this is not a problem with present geodetic instruments.
Propagation of uncertainty by Monte Carlo simulations in case of basic geodetic computations
Wyszkowska, Patrycja
2017-12-01
The determination of the accuracy of functions of measured or adjusted values may be a problem in geodetic computations. The general law of covariance propagation or, in the case of uncorrelated observations, the propagation of variance (the Gaussian formula) is commonly used for that purpose. That approach is theoretically justified for linear functions. In the case of non-linear functions, the first-order Taylor series expansion is usually used, but that solution is affected by the expansion error. The aim of the study is to determine the applicability of the general variance propagation law to the non-linear functions used in basic geodetic computations. The paper presents the errors which result from neglecting the higher-order terms and determines the range of validity of such a simplification. The basis of the analysis is a comparison of the results obtained by the law of propagation of variance and by a probabilistic approach, namely Monte Carlo simulations. Both methods are used to determine the accuracy of the following geodetic computations: the Cartesian coordinates of an unknown point in the three-point resection problem, azimuths and distances from Cartesian coordinates, and height differences in trigonometric and geometric levelling. These simulations and the analysis of the results confirm the possibility of applying the general law of variance propagation in basic geodetic computations even if the functions are non-linear, provided only that the accuracy of the observations is not too low. Generally, this is not a problem with present geodetic instruments.
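The comparison described above — first-order variance propagation against Monte Carlo for a non-linear geodetic function — can be sketched for the azimuth computed from Cartesian coordinates. The coordinates and the 10 mm standard deviation below are assumed example values, not the paper's test cases:

```python
import numpy as np

rng = np.random.default_rng(2)

# Azimuth of point B seen from A: a non-linear function of coordinates
def azimuth(xa, ya, xb, yb):
    return np.arctan2(yb - ya, xb - xa) % (2 * np.pi)

xa, ya, xb, yb = 0.0, 0.0, 400.0, 300.0     # metres (assumed example)
sigma = 0.01                                # 10 mm coordinate std dev

# First-order (Gaussian) propagation: sigma_az^2 = J Sigma J^T
dx, dy = xb - xa, yb - ya
d2 = dx**2 + dy**2
J = np.array([dy, -dx, -dy, dx]) / d2       # partials wrt xa, ya, xb, yb
sigma_az_lin = sigma * np.linalg.norm(J)    # uncorrelated, equal sigmas

# Monte Carlo propagation with the same coordinate uncertainty
n = 200_000
az = azimuth(xa + sigma*rng.standard_normal(n),
             ya + sigma*rng.standard_normal(n),
             xb + sigma*rng.standard_normal(n),
             yb + sigma*rng.standard_normal(n))
sigma_az_mc = az.std()
print(sigma_az_lin, sigma_az_mc)
```

The close agreement between the two standard deviations is the paper's conclusion in miniature: for observation accuracies typical of modern instruments, the linearisation error is negligible.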
Uncertainty propagation through an aeroelastic wind turbine model using polynomial surrogates
DEFF Research Database (Denmark)
Murcia Leon, Juan Pablo; Réthoré, Pierre-Elouan; Dimitrov, Nikolay Krasimirov
2018-01-01
Polynomial surrogates are used to characterize the energy production and lifetime equivalent fatigue loads for different components of the DTU 10 MW reference wind turbine under realistic atmospheric conditions. The variability caused by different turbulent inflow fields is captured by creating … -alignment. The methodology presented extends the deterministic power and thrust coefficient curves to uncertainty models and adds new variables like damage equivalent fatigue loads in different components of the turbine. These surrogate models can then be implemented inside other work-flows such as: estimation of the uncertainty in annual energy production due to wind resource variability and/or robust wind power plant layout optimization. It can be concluded that it is possible to capture the global behavior of a modern wind turbine and its uncertainty under realistic inflow conditions using polynomial response surfaces.
DEFF Research Database (Denmark)
Castro, Oscar; Branner, Kim; Dimitrov, Nikolay Krasimirov
2018-01-01
A probabilistic model for estimating the fatigue life of laminated composite materials considering the uncertainty in their mechanical properties is developed. The uncertainty in the material properties is determined from fatigue coupon tests. Based on this uncertainty, probabilistic constant life diagrams are developed which can efficiently estimate probabilistic ε-N curves at any load level and stress ratio. The probabilistic ε-N curve information is used in a reliability analysis for the fatigue limit state, proposed for estimating the probability of failure of composite laminates under variable amplitude loading cycles. Fatigue life predictions of unidirectional and multi-directional glass/epoxy laminates are carried out to validate the proposed model against experimental data. The probabilistic fatigue behavior of laminates is analyzed under constant amplitude loading conditions as well as under …
Calibration and Forward Uncertainty Propagation for Large-eddy Simulations of Engineering Flows
Energy Technology Data Exchange (ETDEWEB)
Templeton, Jeremy Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Blaylock, Myra L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Domino, Stefan P. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hewson, John C. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Kumar, Pritvi Raj [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Ling, Julia [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Najm, Habib N. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Ruiz, Anthony [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Safta, Cosmin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Sargsyan, Khachik [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Stewart, Alessia [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Wagner, Gregory [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2015-09-01
The objective of this work is to investigate the efficacy of using calibration strategies from Uncertainty Quantification (UQ) to determine model coefficients for LES. As the target methods are for engineering LES, uncertainty from numerical aspects of the model must also be quantified. The ultimate goal of this research thread is to generate a cost-versus-accuracy curve for LES such that the cost can be minimized for an accuracy prescribed by an engineering need. Realization of this goal would enable LES to serve as a predictive simulation tool within the engineering design process.
Chiadamrong, N.; Piyathanavong, V.
2017-12-01
Models that aim to optimize the design of supply chain networks have gained increasing interest in the supply chain literature. Mixed-integer linear programming and discrete-event simulation are widely used for such optimization problems. We present a hybrid approach to support decisions for supply chain network design using a combination of analytical and discrete-event simulation models. The proposed approach is based on iterative procedures that continue until the difference between subsequent solutions satisfies pre-determined termination criteria. The effectiveness of the proposed approach is illustrated by an example, which yields results closer to optimal with much shorter solving time than those obtained from a conventional simulation-based optimization model. The efficacy of this hybrid approach is promising, and it can be applied as a powerful tool in designing a real supply chain network. It also provides the possibility to model and solve more realistic problems, which incorporate dynamism and uncertainty.
Monte-Carlo-based uncertainty propagation with hierarchical models—a case study in dynamic torque
Klaus, Leonard; Eichstädt, Sascha
2018-04-01
For a dynamic calibration, a torque transducer is described by a mechanical model, and the corresponding model parameters are to be identified from measurement data. A measuring device for the primary calibration of dynamic torque, and a corresponding model-based calibration approach, have recently been developed at PTB. The complete mechanical model of the calibration set-up is very complex, and involves several calibration steps—making a straightforward implementation of a Monte Carlo uncertainty evaluation tedious. With this in mind, we here propose to separate the complete model into sub-models, with each sub-model being treated with individual experiments and analysis. The uncertainty evaluation for the overall model then has to combine the information from the sub-models in line with Supplement 2 of the Guide to the Expression of Uncertainty in Measurement. In this contribution, we demonstrate how to carry this out using the Monte Carlo method. The uncertainty evaluation involves various input quantities of different origin and the solution of a numerical optimisation problem.
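The sub-model strategy described here — propagating each sub-model by Monte Carlo and carrying the sample sets forward, in the spirit of GUM Supplement 2 — might look like the following sketch. The two sub-models, all numerical values, and the resonance-frequency combined model are invented for illustration, not PTB's actual transducer model:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# Sub-model 1: torsional stiffness from its own experiment
# (assumed measurement values and uncertainties)
angle = rng.normal(2.0e-3, 1.0e-5, n)       # rad, measured twist
torque = rng.normal(10.0, 0.02, n)          # N*m, reference torque
stiffness = torque / angle                  # N*m/rad, full sample set

# Sub-model 2: moment of inertia from a separate analysis
inertia = rng.normal(5.0e-3, 2.0e-5, n)     # kg*m^2

# Combined model: resonance frequency of a one-mass transducer model.
# The sub-model sample sets carry their uncertainties forward,
# as in a GUM-S2 Monte Carlo evaluation.
f0 = np.sqrt(stiffness / inertia) / (2 * np.pi)
print(f0.mean(), f0.std())
```

Keeping the full sample set per sub-model (rather than only a mean and standard deviation) preserves any non-Gaussian shape and, with common inputs, correlations between sub-model outputs.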
International Nuclear Information System (INIS)
Jin Danqing; Andrec, Michael; Montelione, Gaetano T.; Levy, Ronald M.
1998-01-01
In this paper we make use of the graphical procedure previously described [Jin, D. et al. (1997) J. Am. Chem. Soc., 119, 6923-6924] to analyze NMR relaxation data using the Lipari-Szabo model-free formalism. The graphical approach is advantageous in that it allows the direct visualization of the experimental uncertainties in the motional parameter space. Some general 'rules' describing the relationship between the precision of the relaxation measurements and the precision of the model-free parameters and how this relationship changes with the overall tumbling time (τm) are summarized. The effect of the precision in the relaxation measurements on the detection of internal motions not close to the extreme narrowing limit is analyzed. We also show that multiple timescale internal motions may be obscured by experimental uncertainty, and that the collection of relaxation data at very high field strength can improve the ability to detect such deviations from the simple Lipari-Szabo model
DEFF Research Database (Denmark)
He, Xiulan
parameters and model structures, which are the primary focuses of this PhD research. Parameter uncertainty was analyzed using an optimization tool (PEST: Parameter ESTimation) in combination with a random sampling method (LHS: Latin Hypercube Sampling). Model structure, namely geological architecture … be compensated by model parameters, e.g. when hydraulic heads are considered. However, geological structure is the primary source of uncertainty with respect to simulations of groundwater age and capture zone. Operational MPS-based software has been on stage for just around ten years; yet, issues regarding … geological structures of these three sites provided appropriate conditions for testing the methods. Our study documented that MPS is an efficient approach for simulating geological heterogeneity, especially for non-stationary systems. The high resolution of geophysical data such as SkyTEM is valuable both …
Uncertainty in Damage Detection, Dynamic Propagation and Just-in-Time Networks
2015-08-03
… the (EEG) problem of finding the optimal number and locations of sensors for source identification in a 3D unit sphere from data on its boundary. … Poag, E. Thorpe, K.B. Flores and H.T. Banks, Uncertainty quantification for a model of HIV-1 patient response to antiretroviral therapy interruptions, Raleigh, NC, November 2013; J. Inverse and Ill-posed Problems, 23 (2014), 135-171.
Bacchi, Vito; Duluc, Claire-Marie; Bertrand, Nathalie; Bardet, Lise
2017-04-01
In recent years, in the context of hydraulic risk assessment, much effort has been put into the development of sophisticated numerical model systems able to reproduce the surface flow field. These numerical models are based on a deterministic approach, and the results are presented in terms of measurable quantities (water depths, flow velocities, etc.). However, the modelling of surface flows involves numerous uncertainties, associated with the numerical structure of the model, with the knowledge of the physical parameters which force the system, and with the randomness inherent to natural phenomena. As a consequence, dealing with uncertainties can be a difficult task for both modelers and decision-makers [Ioss, 2011]. In the context of nuclear safety, IRSN assesses studies conducted by operators for different reference flood situations (local rain, small or large watershed flooding, sea levels, etc.) that are defined in the guide ASN N°13 [ASN, 2013]. The guide provides some recommendations for dealing with uncertainties, by proposing a specific conservative approach to cover hydraulic modelling uncertainties. Depending on the situation, the influencing parameter might be the Strickler coefficient, levee behavior, simplified topographic assumptions, etc. Obviously, identifying the most influencing parameter and giving it a penalizing value is challenging and usually questionable. In this context, IRSN has conducted cooperative (Compagnie Nationale du Rhone, I-CiTy laboratory of Polytech'Nice, Atomic Energy Commission, Bureau de Recherches Géologiques et Minières) research activities since 2011 in order to investigate the feasibility and benefits of Uncertainty Analysis (UA) and Global Sensitivity Analysis (GSA) when applied to hydraulic modelling. A specific methodology was tested using the computational environment Promethee, developed by IRSN, which allows carrying out uncertainty propagation studies. This methodology was applied with various numerical models and in
Hong, Jinglan
2012-06-01
Uncertainty information is essential for the proper use of life cycle assessment and environmental assessments in decision making. To investigate the uncertainties of biodiesel and determine the level of confidence in the assertion that biodiesel is more environmentally friendly than diesel, an explicit analytical approach based on the Taylor series expansion for lognormal distributions was applied in the present study. A biodiesel case study demonstrates the probability that biodiesel has a lower global warming and non-renewable energy score than diesel: 92.3% and 93.1%, respectively. The results indicate the level of confidence in the assertion that biodiesel is more environmentally friendly than diesel based on the global warming and non-renewable energy scores. Copyright © 2011 Elsevier Ltd. All rights reserved.
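For independent lognormal scores, the probability that one score is lower than the other is available in closed form, which is the kind of analytical alternative to sampling this abstract refers to. The geometric means and geometric standard deviations below are invented toy values, not the paper's inventory data:

```python
from math import erf, log, sqrt

# Assumed geometric means and geometric standard deviations of two
# lognormally distributed impact scores (toy units)
gm_bio, gsd_bio = 2.0, 1.6      # "biodiesel" global-warming score
gm_die, gsd_die = 3.2, 1.4      # "diesel" global-warming score

# ln(bio) - ln(die) is normal, so P(bio < die) has a closed form:
# Phi(-mu/s) with mu, s the mean and std dev of the log-difference
mu = log(gm_bio) - log(gm_die)
s = sqrt(log(gsd_bio)**2 + log(gsd_die)**2)
p_lower = 0.5 * (1 + erf(-mu / (s * sqrt(2))))
print(p_lower)   # probability the first score is lower
```

With these assumed inputs the probability comes out near 0.79; the paper's 92.3%/93.1% figures correspond to the same kind of computation on its actual data.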
International Nuclear Information System (INIS)
Helton, Jon Craig; Sallaberry, Cedric M.; Hansen, Clifford W.
2010-01-01
The 2008 performance assessment (PA) for the proposed repository for high-level radioactive waste at Yucca Mountain (YM), Nevada, illustrates the conceptual structure of risk assessments for complex systems. The 2008 YM PA is based on the following three conceptual entities: a probability space that characterizes aleatory uncertainty; a function that predicts consequences for individual elements of the sample space for aleatory uncertainty; and a probability space that characterizes epistemic uncertainty. These entities and their use in the characterization, propagation and analysis of aleatory and epistemic uncertainty are described and illustrated with results from the 2008 YM PA.
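The three conceptual entities listed above are commonly realised as a nested ("double-loop") Monte Carlo: an outer loop samples the epistemic probability space, and an inner loop samples the aleatory space conditional on each epistemic realisation. A toy sketch, in which the uniform rate bounds and the Poisson consequence model are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(4)
n_epistemic, n_aleatory = 200, 2000

# Outer loop: epistemic uncertainty about a parameter (state of
# knowledge), here an event rate with assumed uniform bounds
rate = rng.uniform(0.5, 2.0, n_epistemic)

# Inner loop: aleatory uncertainty (stochastic occurrence) conditional
# on each epistemic realisation, here a Poisson event count
exceed = np.empty(n_epistemic)
for i, lam in enumerate(rate):
    counts = rng.poisson(lam, n_aleatory)
    exceed[i] = (counts >= 3).mean()      # P(>= 3 events | lam)

# Each epistemic sample yields one aleatory probability; the spread of
# these values expresses the epistemic uncertainty about the risk
print(np.percentile(exceed, [5, 50, 95]))
```

The resulting family of conditional probabilities is exactly the kind of object (a distribution over risk curves) that the YM PA uses to present aleatory and epistemic uncertainty separately.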
Samad, Noor Asma Fazli Abdul; Sin, Gürkan; Gernaey, Krist V; Gani, Rafiqul
2013-11-01
This paper presents the application of uncertainty and sensitivity analysis as part of a systematic model-based process monitoring and control (PAT) system design framework for crystallization processes. For the uncertainty analysis, the Monte Carlo procedure is used to propagate input uncertainty, while for sensitivity analysis, global methods including the standardized regression coefficients (SRC) and Morris screening are used to identify the most significant parameters. The potassium dihydrogen phosphate (KDP) crystallization process is used as a case study, both in open-loop and closed-loop operation. In the uncertainty analysis, the impact on the predicted output of uncertain parameters related to the nucleation and the crystal growth model has been investigated for both a one- and two-dimensional crystal size distribution (CSD). The open-loop results show that the input uncertainties lead to significant uncertainties on the CSD, with appearance of a secondary peak due to secondary nucleation for both cases. The sensitivity analysis indicated that the most important parameters affecting the CSDs are nucleation order and growth order constants. In the proposed PAT system design (closed-loop), the target CSD variability was successfully reduced compared to the open-loop case, also when considering uncertainty in nucleation and crystal growth model parameters. The latter forms a strong indication of the robustness of the proposed PAT system design in achieving the target CSD and encourages its transfer to full-scale implementation. Copyright © 2013 Elsevier B.V. All rights reserved.
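The standardized regression coefficient (SRC) measure used in the sensitivity analysis can be sketched on a toy linear model: regress the standardized output on the standardized inputs, and each squared coefficient approximates that input's share of the output variance. The two "kinetic parameters" and their effects below are invented, not the KDP crystallization model:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 5000

# Sampled uncertain parameters of an assumed toy model
k_nuc = rng.normal(1.0, 0.2, n)      # "nucleation" constant
k_gro = rng.normal(1.0, 0.05, n)     # "growth" constant
y = 10 + 3.0*k_nuc + 1.0*k_gro + rng.normal(0, 0.1, n)  # model output

# Standardized regression coefficients: regress standardized output
# on standardized inputs; SRC^2 ~ each input's variance share
X = np.column_stack([k_nuc, k_gro])
Xs = (X - X.mean(0)) / X.std(0)
ys = (y - y.mean()) / y.std()
src, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
print(src**2)    # first parameter dominates the output variance
```

For strongly non-linear or non-monotonic models SRCs lose validity (the squared coefficients no longer sum to near one), which is why the paper pairs them with Morris screening.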
A Sparse Stochastic Collocation Technique for High-Frequency Wave Propagation with Uncertainty
Malenova, G.; Motamed, M.; Runborg, O.; Tempone, Raul
2016-01-01
We consider the wave equation with highly oscillatory initial data, where there is uncertainty in the wave speed, initial phase, and/or initial amplitude. To estimate quantities of interest related to the solution and their statistics, we combine a high-frequency method based on Gaussian beams with sparse stochastic collocation. Although the wave solution, u^ϵ, is highly oscillatory in both physical and stochastic spaces, we provide theoretical arguments for simplified problems and numerical evidence that quantities of interest based on local averages of |u^ϵ|² are smooth, with derivatives in the stochastic space uniformly bounded in ϵ, where ϵ denotes the short wavelength. This observable-related regularity makes the sparse stochastic collocation approach more efficient than Monte Carlo methods. We present numerical tests that demonstrate this advantage.
A Sparse Stochastic Collocation Technique for High-Frequency Wave Propagation with Uncertainty
Malenova, G.
2016-09-08
We consider the wave equation with highly oscillatory initial data, where there is uncertainty in the wave speed, initial phase, and/or initial amplitude. To estimate quantities of interest related to the solution and their statistics, we combine a high-frequency method based on Gaussian beams with sparse stochastic collocation. Although the wave solution, u^ϵ, is highly oscillatory in both physical and stochastic spaces, we provide theoretical arguments for simplified problems and numerical evidence that quantities of interest based on local averages of |u^ϵ|² are smooth, with derivatives in the stochastic space uniformly bounded in ϵ, where ϵ denotes the short wavelength. This observable-related regularity makes the sparse stochastic collocation approach more efficient than Monte Carlo methods. We present numerical tests that demonstrate this advantage.
Energy Technology Data Exchange (ETDEWEB)
Geffray, Clotaire Clement
2017-03-20
The work presented here constitutes an important step towards the validation of the use of coupled system thermal-hydraulics and computational fluid dynamics codes for the simulation of complex flows in liquid metal cooled pool-type facilities. First, a set of methods suited for uncertainty and sensitivity analysis and validation activities, with regard to the specific constraints of working with coupled and expensive-to-run codes, is proposed. Then, these methods are applied to the ATHLET - ANSYS CFX model of the TALL-3D facility. Several transients performed at the latter facility are investigated. The results are presented, discussed and compared to the experimental data. Finally, assessments of the validity of the selected methods and of the quality of the model are offered.
Indian Academy of Sciences (India)
To reflect this uncertainty in the climate scenarios, the use of AOGCMs that explicitly simulate the carbon cycle and the chemistry of all the substances is needed. The Hadley Centre has developed a version of the climate model that allows the effect of climate change on the carbon cycle, and its feedback into climate, to be ...
International Nuclear Information System (INIS)
Silva, T.A. da
1988-01-01
A comparison between the uncertainty methods recommended by the International Atomic Energy Agency (IAEA) and by the International Committee for Weights and Measures (CIPM) is presented, for the calibration of clinical dosimeters in the Secondary Standard Dosimetry Laboratory (SSDL). (C.G.C.) [pt
Homogeneous Minor Actinide Transmutation in SFR: Neutronic Uncertainties Propagation with Depletion
International Nuclear Information System (INIS)
Buiron, L.; Plisson-Rieunier, D.
2015-01-01
In the framework of next-generation fast reactor design, the minimisation of nuclear waste production is one of the key objectives of current R&D. Among the possibilities studied at CEA, minor actinide multi-recycling is the most promising industrial route achievable in the near term. Two main management options are considered: - Multi-recycling in a homogeneous way (minor actinides diluted in the driver fuel). While this solution can help achieve high transmutation rates, the negative impact of minor actinides on safety coefficients allows only a small fraction of the total heavy mass to be loaded in the core (∼ a few %). - Multi-recycling in a heterogeneous way by means of Minor Actinide Bearing Blankets (MABB) located at the core periphery. This solution offers more flexibility than the previous one, allowing minor actinide management to be fully decoupled from the core fuel. As the impact on feedback coefficients is small, a larger initial minor actinide mass can be loaded in this configuration. Starting from a breakeven Sodium Fast Reactor designed jointly by CEA, Areva and EdF teams, the so-called SFR V2B, transmutation performances have been studied in the frame of the French fleet for both options and various specific isotopic managements (all minor actinides, americium only, etc.). Using these results, a sensitivity study has been performed to assess neutronic uncertainties (i.e., coming from cross sections) on the mass balance for the most attractive configurations. This work is based on a new implementation of sensitivity on concentration with depletion in the ERANOS code package. Uncertainties on isotope masses at the end of irradiation, obtained using various variance-covariance data, are discussed. (authors)
2016-04-25
Keywords: electromagnetic, multipath propagation, reflection-diffraction, SAR signal processing. … protection, and traffic surveillance, etc. For these reasons, we are motivated to introduce a new approach to target detection and … coherently integrating the backscattered signal, we propose a 3D propagation model that is useful not only in explaining the mechanisms of wave
International Nuclear Information System (INIS)
Machado, Marcio Dornellas
1998-09-01
One of the most important thermal-hydraulic safety parameters is the DNBR (Departure from Nucleate Boiling Ratio). The current methodology in use at Eletronuclear to determine the DNBR is extremely conservative and may result in penalties to the reactor power due to an increased plugging level of steam generator tubes. This work uses a new methodology to evaluate the DNBR, named mini-RTDP. The standard methodology (STDP) currently in use establishes a design limit value which cannot be surpassed. This limit value is determined taking into account the uncertainties of the empirical correlation used in the COBRA IIC/MIT code, modified for Angra 1 conditions. The correlation used is Westinghouse's W-3, and the minimum DNBR (MDNBR) value cannot be less than 1.3. The new methodology reduces the excessive conservatism associated with the parameters used in the DNBR calculation, which take their most unfavorable values in the STDP methodology, by using their best-estimate values instead. The final goal is to obtain a new DNBR design limit which will provide a margin gain due to the more realistic parameter values used in the methodology. (author)
Accelerated uncertainty propagation in two-level probabilistic studies under monotony
International Nuclear Information System (INIS)
Limbourg, Philipp; Rocquigny, Etienne de; Andrianov, Guennadi
2010-01-01
Double-level probabilistic uncertainty models that separate aleatory and epistemic components enjoy significant interest in risk assessment. But the expensive computational costs associated with calculations of rare failure probabilities are still a large obstacle in practice. Computing accurately a risk lower than 10⁻³ with 95% epistemic confidence usually requires 10⁷-10⁸ runs in a brute-force double Monte Carlo. For single-level probabilistic studies, FORM (First-Order Reliability Method) is a classical recipe allowing fast approximation of failure probabilities, while MRM (Monotonous Reliability Method) recently proved an attractive robust alternative under monotony. This paper extends these methods to double-level probabilistic models through two novel algorithms designed to compute a set of failure probabilities or an aleatory risk level with an epistemic confidence quantile. The first, L2-FORM (level-2 FORM), allows a rapid approximation of the failure probabilities through a combination of FORM with new ideas exploiting similarity between computations. L2-MRM (level-2 MRM), a quadrature approach, provides 100%-guaranteed error bounds on the results. Experiments on three flood prediction problems showed that both algorithms approximate a set of 500 failure probabilities of 10⁻³-10⁻² or derived 95% epistemic quantiles with a total of only 500-1000 function evaluations, outperforming importance sampling, iterative FORM and regression spline metamodels.
DEFF Research Database (Denmark)
Van Bockstal, Pieter Jan; Mortier, Séverine Thérèse F.C.; Corver, Jos
2017-01-01
of a freeze-drying process, allowing one to quantitatively estimate and control the risk of cake collapse (i.e., the Risk of Failure (RoF)). The propagation of the error on the estimation of the thickness of the dried layer Ldried as a function of primary drying time was included in the uncertainty analysis...
International Nuclear Information System (INIS)
Whittingham, I.B.
1977-12-01
The bound electron propagator in quantum electrodynamics is reviewed and the Brown and Schaefer angular momentum representation of the propagator discussed. Regular and irregular solutions of the radial Dirac equations for both |E| < mc² and |E| ≥ mc² are required for the computation of the propagator. Analytical expressions for these solutions, and their corresponding Wronskians, are obtained for a point Coulomb potential. Some computational aspects are discussed in an appendix.
Analytical Time-Domain Solution of Plane Wave Propagation Across a Viscoelastic Rock Joint
Zou, Yang; Li, Jianchun; Laloui, Lyesse; Zhao, Jian
2017-10-01
The effects of viscoelastic filled rock joints on wave propagation are of great significance in rock engineering. Time-domain solutions for plane longitudinal (P-) and transverse (S-) wave propagation across a viscoelastic rock joint are derived based on Maxwell and Kelvin models, which are, respectively, applied to describe the viscoelastic deformational behaviour of the rock joint and incorporated into the displacement discontinuity model (DDM). The proposed solutions are verified by comparison with previous studies on harmonic waves, which are simulated by sinusoidal incident P- and S-waves. A comparison between the predicted transmitted waves and experimental data for P-wave propagation across a joint filled with clay is conducted. The Maxwell model is found to be more appropriate for describing the filled joint. The parametric studies show that wave propagation is affected by many factors, such as the stiffness and viscosity of joints, the incident angle and the duration of incident waves. Furthermore, the dependences of the transmission and reflection coefficients on the specific joint stiffness and viscosity differ between joints with Maxwell and Kelvin behaviours. The alternation of the reflected and transmitted waveforms is discussed, and the application scope of this study is demonstrated by an illustration of the effects of the joint thickness. The solutions are also extended to multiple parallel joints with the virtual wave source method and the time-domain recursive method. For an incident wave with an arbitrary waveform, it is convenient to adopt the present approach to directly calculate wave propagation across a viscoelastic rock joint without additional mathematical methods such as the Fourier and inverse Fourier transforms.
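As a point of reference for the displacement discontinuity model (DDM) named above, the classical frequency-domain transmission magnitude for a single linear elastic joint between identical half-spaces can be sketched as follows. This is the purely elastic baseline, not the paper's time-domain Maxwell/Kelvin solution; viscoelastic variants replace the real stiffness with a frequency-dependent complex one. The material values are assumed for illustration.

```python
import numpy as np

def transmission_magnitude(omega, kappa, rho=2650.0, c=3000.0):
    """|T| for a normally incident P-wave crossing a single linear elastic
    displacement-discontinuity joint between identical half-spaces:
    |T| = [1 + (Z*omega/(2*kappa))**2]**(-1/2),
    with seismic impedance Z = rho*c and specific joint stiffness kappa."""
    Z = rho * c
    return 1.0 / np.sqrt(1.0 + (Z * omega / (2.0 * kappa)) ** 2)

omega = np.linspace(0.0, 2.0e5, 5)   # angular frequencies, rad/s (assumed)
kappa = 1.0e12                       # specific joint stiffness, Pa/m (assumed)
T = transmission_magnitude(omega, kappa)
```

The sketch reproduces the low-pass behaviour of a joint: full transmission at zero frequency, decreasing monotonically as frequency grows or stiffness drops.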
Bennett, Iain; Paracha, Noman; Abrams, Keith; Ray, Joshua
2018-01-01
Rank Preserving Structural Failure Time models are one of the most commonly used statistical methods to adjust for treatment switching in oncology clinical trials. The method is often applied in a decision analytic model without appropriately accounting for additional uncertainty when determining the allocation of health care resources. The aim of the study is to describe novel approaches to adequately account for uncertainty when using a Rank Preserving Structural Failure Time model in a decision analytic model. Using two examples, we tested and compared the performance of the novel test-based method with the resampling bootstrap method and with the conventional approach of no adjustment. In the first example, we simulated life expectancy using a simple decision analytic model based on a hypothetical oncology trial with treatment switching. In the second example, we applied the adjustment method on published data when no individual patient data were available. Mean estimates of overall and incremental life expectancy were similar across methods. However, the bootstrapped and test-based estimates consistently produced greater estimates of uncertainty compared with the estimate without any adjustment applied. Similar results were observed when using the test-based approach on published data, showing that failing to adjust for uncertainty led to smaller confidence intervals. Both the bootstrapping and test-based approaches provide a solution to appropriately incorporate uncertainty, with the benefit that the latter can be implemented by researchers in the absence of individual patient data. Copyright © 2018 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
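The resampling bootstrap compared above can be sketched generically: resample the patient-level outcomes with replacement and read a percentile confidence interval off the resampled means. The survival times below are simulated placeholders, not trial data, and this omits the treatment-switching adjustment itself.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical survival times (years) from a small trial arm.
times = rng.exponential(scale=2.0, size=300)

# Nonparametric bootstrap of the mean life expectancy.
n_boot = 2000
boot_means = np.array([
    rng.choice(times, size=times.size, replace=True).mean()
    for _ in range(n_boot)
])

# Percentile 95% confidence interval.
ci_low, ci_high = np.percentile(boot_means, [2.5, 97.5])
print(f"mean = {times.mean():.2f}, 95% CI = ({ci_low:.2f}, {ci_high:.2f})")
```

The point of the comparison in the abstract is that omitting this resampling step (or a test-based equivalent) understates the width of such intervals.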
Analytical approach of laser beam propagation in the hollow polygonal light pipe.
Zhu, Guangzhi; Zhu, Xiao; Zhu, Changhong
2013-08-10
An analytical method for studying the light distribution at the output end of a hollow n-sided polygonal light pipe fed by a light source with a Gaussian distribution is developed. Mirror transformation matrices and a special algorithm for removing void virtual images are created to acquire the location and direction vector of each effective virtual image on the entrance plane. The analytical method is validated against Monte Carlo ray tracing. At the same time, four typical cases are discussed. The analytical results indicate that the uniformity of the light distribution varies with the structural and optical parameters of the hollow n-sided polygonal light pipe and of the Gaussian light source. The analytical approach will be useful for designing and choosing hollow n-sided polygonal light pipes, especially for high-power laser beam homogenization techniques.
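The core geometric step described above, generating virtual images of the source by mirror reflections in the pipe walls, can be sketched with a simple 2D reflection routine. The hexagonal cross-section, the source position, and the restriction to first-order images are illustrative assumptions; the paper additionally removes void virtual images and tracks direction vectors.

```python
import numpy as np

def reflect_point(p, a, b):
    """Mirror image (virtual image) of point p across the line through a and b."""
    d = (b - a) / np.linalg.norm(b - a)   # unit vector along the mirror
    v = p - a
    return a + 2.0 * np.dot(v, d) * d - v # reflect the component normal to the mirror

# First-order virtual images of a source inside a regular hexagonal cross-section.
n = 6
verts = np.array([[np.cos(2 * np.pi * k / n), np.sin(2 * np.pi * k / n)]
                  for k in range(n)])
source = np.array([0.1, 0.2])
images = [reflect_point(source, verts[k], verts[(k + 1) % n]) for k in range(n)]
```

Iterating the same reflection over the images themselves generates the higher-order virtual sources whose superposition gives the output-plane distribution.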
International Nuclear Information System (INIS)
Chen, J.C.; Chun, R.C.; Goudreau, G.L.; Maslenikov, O.R.; Johnson, J.J.
1984-01-01
This paper summarizes the results of the dynamic response analysis of the Zion reactor containment building using three different soil-structure interaction (SSI) analytical procedures: the substructure method, CLASSI; the equivalent linear finite element approach, ALUSH; and the nonlinear finite element procedure, DYNA3D. Uncertainties in analyzing a soil-structure system due to SSI analysis procedures were investigated. Responses at selected locations in the structure were compared in terms of peak accelerations and response spectra.
Muthu, Satish; Childress, Amy; Brant, Jonathan
2014-08-15
Membrane fouling was assessed from a fundamental standpoint within the context of the Derjaguin-Landau-Verwey-Overbeek (DLVO) model. The DLVO model requires that the properties of the membrane and foulant(s) be quantified. Membrane surface charge (zeta potential) and free energy values are characterized using streaming potential and contact angle measurements, respectively. Comparing theoretical assessments of membrane-colloid interactions between research groups requires that the variability of the measured inputs be established. The impact of such variability in input values on the outcome of interfacial models must be quantified to determine an acceptable variance in inputs. An interlaboratory study was conducted to quantify the variability in streaming potential and contact angle measurements when using standard protocols. The propagation of uncertainty from these errors was evaluated in terms of its impact on the quantitative and qualitative conclusions on extended DLVO (XDLVO) calculated interaction terms. The error introduced into XDLVO calculated values was of the same magnitude as the calculated free energy values at contact and at any given separation distance. For two independent laboratories to draw similar quantitative conclusions regarding membrane-foulant interfacial interactions, the standard error in contact angle values must be ⩽ 2.5°, while that for the zeta potential values must be ⩽ 7 mV. Copyright © 2014 Elsevier Inc. All rights reserved.
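A minimal Monte Carlo sketch of the kind of input-uncertainty propagation described above: measured contact angles and zeta potentials are perturbed by the study's acceptable standard errors (⩽ 2.5° and ⩽ 7 mV) and pushed through an interaction-energy function. The function and the nominal input values here are hypothetical stand-ins, not the actual XDLVO expressions.

```python
import numpy as np

rng = np.random.default_rng(1)

def interaction_energy(theta_deg, zeta_mV):
    """Hypothetical stand-in for an XDLVO interaction-energy calculation;
    the real expressions combine acid-base terms derived from contact
    angles with electrostatic terms derived from zeta potentials."""
    return -50.0 * np.cos(np.radians(theta_deg)) + 0.1 * zeta_mV ** 2

# Nominal measurements perturbed by the interlaboratory standard errors.
theta = rng.normal(60.0, 2.5, size=100_000)   # contact angle, deg
zeta = rng.normal(-20.0, 7.0, size=100_000)   # zeta potential, mV

energy = interaction_energy(theta, zeta)
print(f"mean = {energy.mean():.1f}, std = {energy.std():.1f}")
```

Comparing the output spread against the nominal energy value is exactly the check the study uses to decide whether two laboratories would reach the same qualitative conclusion.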
International Nuclear Information System (INIS)
Wang Ying; Yuan Chengxun; Gao Ruilin; Zhou Zhongxiang
2012-01-01
Theoretical investigations of a Gaussian laser beam propagating in relativistic plasmas have been performed with the WKB method and a complex eikonal function. We consider the relativistic nonlinearity induced by the intense laser beam, and present relativistically generalized forms of the plasma frequency and electron collision frequency in plasmas. The coupled differential equations describing the propagation variations of the laser beam are derived and numerically solved. The obtained simulation results show variation tendencies similar to experiments. By changing the plasma density, we theoretically analyze the feasibility of using a plasma slab of fixed thickness to compress the laser beam-width and acquire a focused laser intensity. The present work complements the relativistic correction of the electron collision frequency with reasonable derivations, brings the theory closer to experiments and provides effective guidance for practical laser-plasma interactions.
International Nuclear Information System (INIS)
Raupach, Rainer; Flohr, Thomas G
2011-01-01
We analyze the signal and noise propagation of differential phase-contrast computed tomography (PCT) compared with conventional attenuation-based computed tomography (CT) from a theoretical point of view. This work focuses on grating-based differential phase-contrast imaging. A mathematical framework is derived that is able to analytically predict the relative performance of both imaging techniques in the sense of the relative contrast-to-noise ratio for the contrast of any two materials. Two fundamentally different properties of PCT compared with CT are identified. First, the noise power spectra show qualitatively different characteristics, implying a resolution-dependent performance ratio. The break-even point is derived analytically as a function of system parameters such as geometry and visibility. A superior performance of PCT compared with CT can only be achieved at a sufficiently high spatial resolution. Second, because phase information is periodic and non-ambiguous only in a bounded interval, statistical phase wrapping can occur. This effect causes a collapse of information propagation for low signals, which limits the applicability of phase-contrast imaging at low dose.
Kirchengast, G.; Schwaerz, M.; Fritzer, J.; Schwarz, J.; Scherllin-Pirscher, B.; Steiner, A. K.
2013-12-01
influences) exists so far. Establishing such a trace for the first time, in the form of the Reference Occultation Processing System (rOPS), providing reference RO data for climate science and applications, is therefore a current cornerstone endeavor at the Wegener Center over 2011 to 2015, supported also by colleagues from other key groups at EUMETSAT Darmstadt, UCAR Boulder, DMI Copenhagen, ECMWF Reading, IAP Moscow, AIUB Berne, and RMIT Melbourne. With the rOPS we undertake to process the full chain from the SI-tied raw data to the atmospheric ECVs with integrated uncertainty propagation. We summarize where we currently stand in quantifying RO accuracy and long-term stability and then discuss the concept, development status and initial results of the rOPS, with emphasis on its novel capability to provide SI-tied reference data with integrated uncertainty estimation. We comment on how these data can provide ground-breaking support to challenges such as climate model evaluation, anthropogenic change detection and attribution, and calibration of complementary climate observing systems.
International Nuclear Information System (INIS)
Wang Yansong; Kulsrud, Russell; Ji, Hantao
2008-01-01
A local linear theory is proposed for a perpendicularly propagating drift instability driven by relative drifts between electrons and ions. The theory takes into account local cross-field current, pressure gradients, and modest collisions as in the Magnetic Reconnection Experiment [M. Yamada et al., Phys. Plasmas 4, 1936 (1997)]. The unstable waves have very small group velocities in the direction of the pressure gradient, but have a large phase velocity near the relative drift velocity between electrons and ions in the direction of the cross-field current. By taking into account the electron-ion collisions and applying the theory in the Harris sheet, we establish that this instability could be excited near the center of the Harris sheet and have enough e-foldings to grow to large amplitude before it propagates out of the unstable region. Comparing with the other magnetic reconnection related instabilities (lower-hybrid-drift instability, modified two-stream instability, etc.) studied previously, we believe the instability we found is a favorable candidate to produce anomalous resistivity because of its unique wave characteristics, such as electromagnetic component, large phase velocity, and small group velocity in the cross-current-layer direction.
Directory of Open Access Journals (Sweden)
M. W. Rotach
2012-08-01
D-PHASE was a Forecast Demonstration Project of the World Weather Research Programme (WWRP) related to the Mesoscale Alpine Programme (MAP). Its goal was to demonstrate the reliability and quality of operational forecasting of orographically influenced (determined) precipitation in the Alps and its consequences on the distribution of run-off characteristics. A special focus was, of course, on heavy-precipitation events.
The D-PHASE Operations Period (DOP) ran from June to November 2007, during which an end-to-end forecasting system was operated covering many individual catchments in the Alps, with their water authorities, civil protection organizations or other end users. The forecasting system's core piece was a Visualization Platform where precipitation and flood warnings from some 30 atmospheric and 7 hydrological models (both deterministic and probabilistic) and corresponding model fields were displayed in uniform and comparable formats. Also, meteograms, nowcasting information and end user communication were made available to all the forecasters, users and end users. D-PHASE information was assessed and used by some 50 different groups ranging from atmospheric forecasters to civil protection authorities or water management bodies.
In the present contribution, D-PHASE is briefly presented along with its outstanding scientific results and, in particular, the lessons learnt with respect to uncertainty propagation. A particular focus is on the transfer of ensemble prediction information into the hydrological community and its use with respect to other aspects of societal impact. Objective verification of forecast quality is contrasted to subjective quality assessments made during the project (end user workshops, questionnaires), and some general conclusions concerning forecast demonstration projects are drawn.
Uncertainty in soil-structure interaction analysis arising from differences in analytical techniques
International Nuclear Information System (INIS)
Maslenikov, O.R.; Chen, J.C.; Johnson, J.J.
1982-07-01
This study addresses uncertainties arising from variations in different modeling approaches to soil-structure interaction of massive structures at a nuclear power plant. To perform a comprehensive systems analysis, it is necessary to quantify, for each phase of the traditional analysis procedure, both the realistic seismic response and the uncertainties associated with them. In this study two linear soil-structure interaction techniques were used to analyze the Zion, Illinois nuclear power plant: a direct method using the FLUSH computer program and a substructure approach using the CLASSI family of computer programs. In-structure response from two earthquakes, one real and one synthetic, was compared. Structure configurations from relatively simple to complicated multi-structure cases were analyzed. The resulting variations help quantify uncertainty in structure response due to analysis procedures
Liubartseva, Svitlana; Coppini, Giovanni; Ciliberti, Stefania Angela; Lecci, Rita
2017-04-01
In operational oil spill modeling, MEDSLIK-II (De Dominicis et al., 2013) focuses on the reliability of the oil drift and fate predictions routinely fed by an operational oceanographic and atmospheric forecasting chain. Uncertainty calculations enhance oil spill forecast efficiency, supplying probability maps to quantify the propagation of various uncertainties. Recently, we have developed a methodology that allows users to evaluate the variability of the oil drift forecast caused by uncertain data on the initial oil spill conditions (Liubartseva et al., 2016). One of the key methodological aspects is a reasonable choice of the parameter perturbation scheme. In the case of the starting oil spill location and time, these scalars might be treated as independent random parameters. If we want to perturb the underlying ocean currents and wind, we have to deal with deterministic vector parameters. To a first approximation, we suggest rolling forecasts as a set of perturbed ocean currents and wind. This approach does not need any extra hydrodynamic calculations, and it is quick enough to be performed in web-based applications. The capabilities of the proposed methodology are explored using the Black Sea Forecasting System (BSFS) recently implemented by Ciliberti et al. (2016) for the Copernicus Marine Environment Monitoring Service (http://marine.copernicus.eu/services-portfolio/access-to-products). The BSFS horizontal resolution is 1/36° in the zonal and 1/27° in the meridional direction (ca. 3 km). The vertical domain discretization is represented by 31 unevenly spaced vertical levels. Atmospheric wind data are provided by European Centre for Medium-Range Weather Forecasts (ECMWF) forecasts, at 1/8° (ca. 12.5 km) horizontal and 6-hour temporal resolution. A great variety of probability patterns controlled by different underlying flows is represented, including the cyclonic Rim Current, flow bifurcations in anticyclonic eddies (e.g., Sevastopol and Batumi), northwestern shelf circulation, etc.
International Nuclear Information System (INIS)
Kiani, Keivan; Shodja, Hossein M.
2011-01-01
Highlights: ► Response of RC structures to macrocell corrosion of a rebar is studied analytically. ► The problem is solved prior to the onset of microcrack propagation. ► Suitable Love's potential functions are used to study the steel-rust-concrete media. ► The role of crucial factors on the time of onset of concrete cracking is examined. ► The effect of vital factors on the maximum radial stress of concrete is explored. - Abstract: Assessment of the macrocell corrosion which deteriorates reinforced concrete (RC) structures has attracted the attention of many researchers during recent years. In this type of rebar corrosion, the reduction in the cross-section of the rebar is significantly accelerated due to the large ratio of the cathode's area to the anode's area. To examine the problem, an analytical solution is proposed for predicting the response of the RC structure from the time of steel depassivation to the stage just prior to the onset of microcrack propagation. To this end, a circular cylindrical RC member under axisymmetric macrocell corrosion of the reinforcement is considered. Both cases of symmetric and asymmetric rebar corrosion along the length of the anode zone are studied. According to experimentally observed data, corrosion products are modeled as a thin layer with a nonlinear stress–strain relation. The exact expressions of the elastic fields associated with the steel and concrete media are obtained using Love's potential function. By imposing the boundary conditions, the resulting set of nonlinear equations is solved in each time step by Newton's method. The effects of the key parameters which have a dominating role in the time of the onset of concrete cracking and the maximum radial stress field of the concrete are examined.
A semi-analytical computation of the theoretical uncertainties of the solar neutrino flux
DEFF Research Database (Denmark)
Jorgensen, Andreas C. S.; Christensen-Dalsgaard, Jorgen
2017-01-01
We present a comparison between Monte Carlo simulations and a semi-analytical approach that reproduces the theoretical probability distribution functions of the solar neutrino fluxes, stemming from the pp, pep, hep, Be-7, B-8, N-13, O-15 and F-17 source reactions. We obtain good agreement between...
Valier-Brasier, Tony; Conoir, Jean-Marc; Coulouvrat, François; Thomas, Jean-Louis
2015-10-01
Sound propagation in dilute suspensions of small spheres is studied using two models: a hydrodynamic model based on the coupled phase equations and an acoustic model based on the ECAH (Epstein-Carhart-Allegra-Hawley) multiple scattering theory. The aim is to compare both models through the study of three fundamental kinds of particles: rigid particles, elastic spheres, and viscous droplets. The hydrodynamic model is based on a Rayleigh-Plesset-like equation generalized to elastic spheres and viscous droplets. The hydrodynamic forces for elastic spheres are introduced by analogy with those of droplets. The ECAH theory is also modified in order to take into account the velocity of rigid particles. Analytical calculations performed for long wavelength, low dilution, and weak absorption in the ambient fluid show that both models are strictly equivalent for the three kinds of particles studied. The analytical calculations show that dilatational and translational mechanisms are modeled in the same way by both models. The effective parameters of dilute suspensions are also calculated.
Analytic models for beam propagation and far-field patterns in slab and bow-tie x-ray lasers
International Nuclear Information System (INIS)
Chandler, E.A.
1994-06-01
Simplified analytic models for beam propagation in slab and bow-tie x-ray lasers yield convenient expressions that provide both a framework for guidance in computer modeling and useful approximations for experimenters. In unrefracted bow-tie lasers, the laser shape in conjunction with the nearly-exponential weighting of rays according to their length produces a small effective aperture for the signal. We develop an analytic expression for the aperture and the properties of the far-field signal. Similarly, we develop the view that the far-field pattern of refractive slab lasers is the result of effective apertures that are created by the interplay of refraction and exponential amplification. We present expressions for the size of this aperture as a function of laser parameters, as well as for the intensity and position of the far-field lineout. This analysis also yields conditions for the refraction limit in slab lasers and an estimate for the signal loss due to refraction.
Analytic result for the one-loop scalar pentagon integral with massless propagators
International Nuclear Information System (INIS)
Kniehl, Bernd A.; Tarasov, Oleg V.
2010-01-01
The method of dimensional recurrences proposed by one of the authors (O.V. Tarasov, 1996) is applied to the evaluation of the pentagon-type scalar integral with on-shell external legs and massless internal lines. For the first time, an analytic result valid for arbitrary space-time dimension d and five arbitrary kinematic variables is presented. An explicit expression in terms of the Appell hypergeometric function F₃ and the Gauss hypergeometric function ₂F₁, both admitting one-fold integral representations, is given. In the case when one kinematic variable vanishes, the integral reduces to a combination of Gauss hypergeometric functions ₂F₁. For the case when one scalar invariant is large compared to the others, the asymptotic values of the integral in terms of Gauss hypergeometric functions ₂F₁ are presented in d=2-2ε, 4-2ε, and 6-2ε dimensions. For multi-Regge kinematics, the asymptotic value of the integral in d=4-2ε dimensions is given in terms of the Appell function F₃ and the Gauss hypergeometric function ₂F₁. (orig.)
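The Gauss hypergeometric functions ₂F₁ appearing in such results can be evaluated numerically to arbitrary precision, e.g. with mpmath. As a quick sanity check, ₂F₁(1,1;2;z) has the elementary closed form -ln(1-z)/z:

```python
import mpmath as mp

mp.mp.dps = 30  # working precision, decimal digits

z = mp.mpf('0.5')
val = mp.hyp2f1(1, 1, 2, z)        # Gauss hypergeometric function 2F1

# Known closed form: 2F1(1,1;2;z) = -log(1-z)/z
closed = -mp.log(1 - z) / z
print(val, closed)  # both ~1.3862943611...
```

mpmath handles the analytic continuation outside |z| < 1 automatically, which matters when the kinematic invariants push the arguments beyond the unit disk.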
Dalarsson, Mariana; Tassin, Philippe
2009-04-13
We have investigated the transmission and reflection properties of structures incorporating left-handed materials with graded index of refraction. We present an exact analytical solution to Helmholtz' equation for a graded index profile changing according to a hyperbolic tangent function along the propagation direction. We derive expressions for the field intensity along the graded index structure, and we show excellent agreement between the analytical solution and the corresponding results obtained by accurate numerical simulations. Our model straightforwardly allows for arbitrary spectral dispersion.
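The hyperbolic-tangent index profile described above can be sketched directly; the endpoint indices and transition width below are assumed for illustration (the negative value models the left-handed side of the structure):

```python
import numpy as np

def graded_index(z, n_left=1.5, n_right=-1.5, d=1.0):
    """Hyperbolic-tangent index profile along the propagation direction z,
    interpolating from n_left (z -> -inf) to n_right (z -> +inf) over a
    transition width ~d."""
    return 0.5 * (n_left + n_right) + 0.5 * (n_right - n_left) * np.tanh(z / d)

z = np.linspace(-10, 10, 2001)
n = graded_index(z)
```

Profiles of this form are what make the Helmholtz equation exactly solvable in the paper: the tanh variation maps onto hypergeometric-type solutions, so the analytic fields can be checked pointwise against simulations.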
Karakoylu, E.; Franz, B.
2016-01-01
This dataset represents a first attempt at quantifying uncertainties in ocean remote sensing reflectance satellite measurements. It is based on 1000 Monte Carlo iterations applied to a SeaWiFS 4-day composite from 2003. The uncertainty is reported for remote sensing reflectance (Rrs) at 443 nm.
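A hedged sketch of the Monte Carlo procedure the record describes, with 1000 iterations as stated; the retrieval function and the input distributions are hypothetical placeholders, not the actual SeaWiFS atmospheric-correction algorithm:

```python
import numpy as np

rng = np.random.default_rng(42)

def retrieve_rrs(lt, atm):
    """Hypothetical stand-in for an atmospheric-correction retrieval of
    remote sensing reflectance at 443 nm."""
    return (lt - atm) / np.pi

n_iter = 1000                                 # iterations, as in the dataset
lt = rng.normal(0.12, 0.005, size=n_iter)     # top-of-atmosphere term (assumed)
atm = rng.normal(0.10, 0.004, size=n_iter)    # atmospheric path term (assumed)

rrs = retrieve_rrs(lt, atm)
print(f"Rrs(443) = {rrs.mean():.4f} +/- {rrs.std():.4f} sr^-1")
```

The standard deviation across iterations is what gets reported as the per-pixel Rrs uncertainty.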
International Nuclear Information System (INIS)
Conroy, Charlie; Gunn, James E.; White, Martin
2010-01-01
Models for the formation and evolution of galaxies readily predict physical properties such as star formation rates, metal-enrichment histories, and, increasingly, gas and dust content of synthetic galaxies. Such predictions are frequently compared to the spectral energy distributions of observed galaxies via the stellar population synthesis (SPS) technique. Substantial uncertainties in SPS exist, and yet their relevance to the task of comparing galaxy evolution models to observations has received little attention. In the present work, we begin to address this issue by investigating the importance of uncertainties in stellar evolution, the initial stellar mass function (IMF), and dust and interstellar medium (ISM) properties on the translation from models to observations. We demonstrate that these uncertainties translate into substantial uncertainties in the ultraviolet, optical, and near-infrared colors of synthetic galaxies. Aspects that carry significant uncertainties include the logarithmic slope of the IMF above 1 M⊙, dust attenuation law, molecular cloud disruption timescale, clumpiness of the ISM, fraction of unobscured starlight, and treatment of advanced stages of stellar evolution including blue stragglers, the horizontal branch, and the thermally pulsating asymptotic giant branch. The interpretation of the resulting uncertainties in the derived colors is highly non-trivial because many of the uncertainties are likely systematic, and possibly correlated with the physical properties of galaxies. We therefore urge caution when comparing models to observations.
Post van der Burg, Max; Cullinane Thomas, Catherine; Holcombe, Tracy R.; Nelson, Richard D.
2016-01-01
The Landscape Conservation Cooperatives (LCCs) are a network of partnerships throughout North America that are tasked with integrating science and management to support more effective delivery of conservation at a landscape scale. In order to achieve this integration, some LCCs have adopted the approach of providing their partners with better scientific information in an effort to facilitate more effective and coordinated conservation decisions. Taking this approach has led many LCCs to begin funding research to provide the information for improved decision making. To ensure that funding goes to research projects with the highest likelihood of leading to more integrated broad scale conservation, some LCCs have also developed approaches for prioritizing which information needs will be of most benefit to their partnerships. We describe two case studies in which decision analytic tools were used to quantitatively assess the relative importance of information for decisions made by partners in the Plains and Prairie Potholes LCC. The results of the case studies point toward a few valuable lessons in terms of using these tools with LCCs. Decision analytic tools tend to help shift focus away from research oriented discussions and toward discussions about how information is used in making better decisions. However, many technical experts do not have enough knowledge about decision making contexts to fully inform the latter type of discussion. When assessed in the right decision context, however, decision analyses can point out where uncertainties actually affect optimal decisions and where they do not. This helps technical experts understand that not all research is valuable in improving decision making. But perhaps most importantly, our results suggest that decision analytic tools may be more useful for LCCs as a way of developing integrated objectives for coordinating partner decisions across the landscape, rather than simply ranking research priorities.
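One standard decision analytic tool for ranking information needs of this kind is the expected value of perfect information (EVPI): it measures how much resolving an uncertainty would actually improve the optimal decision. A minimal sketch with a hypothetical two-action, two-state conservation problem (the payoffs and prior are invented for illustration):

```python
import numpy as np

# Payoff matrix: rows = management actions, columns = states of the system.
# Values are hypothetical conservation outcomes (e.g., habitat units).
payoff = np.array([
    [10.0, 2.0],   # action A
    [ 4.0, 8.0],   # action B
])
prior = np.array([0.6, 0.4])  # belief over the two states

# Expected value under current uncertainty: pick the best action on average.
ev_uncertainty = np.max(payoff @ prior)

# Expected value with perfect information: learn the state, then pick best.
ev_perfect = np.dot(prior, np.max(payoff, axis=0))

evpi = ev_perfect - ev_uncertainty
print(f"EVPI = {evpi:.2f}")  # prints EVPI = 2.40
```

When EVPI (or its partial-information variants) is near zero for some uncertainty, research reducing that uncertainty will not change the decision, which is exactly the lesson the case studies draw about research prioritization.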
International Nuclear Information System (INIS)
Pecchia, Marco; Vasiliev, Alexander; Leray, Olivier; Ferroukhi, Hakim; Pautz, Andreas
2015-01-01
A new methodology, referred to as manufacturing and technological parameters uncertainty quantification (MTUQ), is under development at Paul Scherrer Institut (PSI). Based on uncertainty and global sensitivity analysis methods, MTUQ aims at advancing the state of the art for the treatment of geometrical/material uncertainties in light water reactor computations, using the MCNPX Monte Carlo neutron transport code. The development is currently focused primarily on criticality safety evaluations (CSE). In that context, the key components are a dedicated modular interface with the MCNPX code and a user-friendly interface to model functional relationships between system variables. A unique feature is an automatic capability to parameterize variables belonging to so-called "repeated structures", so as to allow perturbations of each individual element of a given system modelled with MCNPX. Concerning the statistical analysis capabilities, these are currently implemented through an interface with the ROOT platform to handle the random sampling design. This paper presents the current status of the MTUQ methodology development and a first assessment of an ongoing Organisation for Economic Co-operation and Development/Nuclear Energy Agency (OECD/NEA) benchmark dedicated to uncertainty analyses for CSE. The presented results illustrate the overall capabilities of MTUQ and underline its relevance in predicting more realistic results compared to a methodology previously applied at PSI for this particular benchmark. (author)
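The random-sampling propagation that MTUQ automates can be sketched in miniature. A minimal sketch, assuming invented tolerance values and a toy k-eff surrogate in place of an actual MCNPX run; nothing below is PSI's real model:

```python
import numpy as np

rng = np.random.default_rng(42)
n_samples, n_pins = 10_000, 100  # hypothetical lattice of 100 repeated pins

# Sample manufacturing tolerances: each pin in the "repeated structure"
# receives its own radius perturbation, plus one global enrichment value.
pin_radius = rng.uniform(0.409, 0.411, size=(n_samples, n_pins))  # cm
enrichment = rng.normal(4.95, 0.02, size=n_samples)               # wt% U-235

def keff_surrogate(radius, enr):
    """Toy stand-in for an MCNPX criticality calculation (not real physics)."""
    return 1.0 + 0.05 * (enr - 4.95) + 2.0 * (radius.mean(axis=1) - 0.410)

# Propagate the sampled parameters and summarize the output distribution.
k = keff_surrogate(pin_radius, enrichment)
print(f"mean k-eff = {k.mean():.5f} +/- {k.std(ddof=1):.5f} (1 sigma)")
print(f"95th percentile = {np.percentile(k, 95):.5f}")
```

In a real MTUQ workflow the surrogate call would be replaced by a full MCNPX transport calculation per sample, which is what makes the dedicated modular interface necessary.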
Shan, Zhendong; Ling, Daosheng
2018-02-01
This article develops an analytical solution for the transient wave propagation of a cylindrical P-wave line source in a semi-infinite elastic solid with a fluid layer. The analytical solution is presented in a simple closed form in which each term represents a transient physical wave. The Scholte equation is derived, through which the Scholte wave velocity can be determined. The Scholte wave is the wave that propagates along the interface between the fluid and solid. To develop the analytical solution, the wave fields in the fluid and solid are defined, their analytical solutions in the Laplace domain are derived using the boundary and interface conditions, and the solutions are then decomposed into series form according to the power series expansion method. Each term of the series solution has a clear physical meaning and represents a transient wave path. Finally, by applying Cagniard's method and the convolution theorem, the analytical solutions are transformed into the time domain. Numerical examples are provided to illustrate some interesting features in the fluid layer, the interface and the semi-infinite solid. When the P-wave velocity in the fluid is higher than that in the solid, two head waves in the solid, one head wave in the fluid and a Scholte wave at the interface are observed for the cylindrical P-wave line source.
International Nuclear Information System (INIS)
Boaratti, Mario Francisco Guerra
2006-01-01
Leaks in pressurized tubes generate acoustic waves that propagate through the walls of these tubes and can be captured by accelerometers or by acoustic emission sensors. Knowing how these walls vibrate, or, put another way, how these acoustic waves propagate in the material, is fundamental to the process of detecting and localizing the leak source. In this work an analytic model was implemented, through the motion equations of a cylindrical shell, with the objective of understanding the behavior of the tube surface excited by a point source. Since the cylindrical surface is closed in the circumferential direction, waves that are beginning their trajectory meet others that have already completed a turn around the cylindrical shell, in the clockwise as well as the counter-clockwise direction, generating constructive and destructive interference. After sufficient propagation time, peaks and valleys form on the shell surface, which can be visualized through a graphic representation of the analytic solution developed here. The theoretical results were verified through measurements performed on an experimental setup composed of a steel tube terminated in a sand box, simulating an infinite-tube condition. To determine the location of the point source on the surface, an inverse-solution process was adopted: given the signals from the sensors placed on the tube surface, the theoretical model is used to determine where the source that generated these signals lies. (author)
Hosseinbor, A. Pasha; Chung, Moo K.; Wu, Yu-Chien; Alexander, Andrew L.
2012-01-01
The ensemble average propagator (EAP) describes the 3D average diffusion process of water molecules, capturing both its radial and angular contents. The EAP can thus provide richer information about complex tissue microstructure properties than the orientation distribution function (ODF), an angular feature of the EAP. Recently, several analytical EAP reconstruction schemes for multiple q-shell acquisitions have been proposed, such as diffusion propagator imaging (DPI) and spherical polar Fourier imaging (SPFI). In this study, a new analytical EAP reconstruction method is proposed, called Bessel Fourier orientation reconstruction (BFOR), whose solution is based on heat equation estimation of the diffusion signal for each shell acquisition, and is validated on both synthetic and real datasets. A significant portion of the paper is dedicated to comparing BFOR, SPFI, and DPI using hybrid, non-Cartesian sampling for multiple b-value acquisitions. Ways to mitigate the effects of Gibbs ringing on EAP reconstruction are also explored. In addition to analytical EAP reconstruction, the aforementioned modeling bases can be used to obtain rotationally invariant q-space indices of potential clinical value, an avenue which has not yet been thoroughly explored. Three such measures are computed: zero-displacement probability (Po), mean squared displacement (MSD), and generalized fractional anisotropy (GFA). PMID:22963853
Anderson, R.; Gronewold, A.; Alameddine, I.; Reckhow, K.
2008-12-01
The latest official assessment of United States (US) surface water quality indicates that pathogens are a leading cause of coastal shoreline water quality standard violations. Rainfall-runoff and hydrodynamic water quality models are commonly used to predict fecal indicator bacteria (FIB) concentrations in these waters and to subsequently identify climate change, land use, and pollutant mitigation scenarios which might improve water quality and lead to reinstatement of a designated use. While decay, settling, and other loss kinetics dominate FIB fate and transport in freshwater systems, previous authors identify tidal advection as a dominant fate and transport process in coastal estuaries. As a result, acknowledging hydrodynamic model input (e.g. watershed runoff) variability and parameter (e.g. tidal dynamics parameters) uncertainty is critical to building a robust coastal water quality model. Despite the widespread application of watershed models (and associated model calibration procedures), we find model inputs and parameters are commonly encoded as deterministic point estimates (as opposed to random variables), an approach which effectively ignores potential sources of variability and uncertainty. Here, we present an innovative approach to building, calibrating, and propagating uncertainty and variability through a coupled data-based mechanistic (DBM) rainfall-runoff and tidal prism water quality model. While we apply the model to an ungauged tributary of the Newport River Estuary (one of many currently impaired shellfish harvesting waters in Eastern North Carolina), our model can be used to evaluate water quality restoration scenarios for coastal waters with a wide range of designated uses. We begin by calibrating the DBM rainfall-runoff model, as implemented in the IHACRES software package, using a regionalized calibration approach. We then encode parameter estimates as random variables (in the rainfall-runoff component of our comprehensive model) via the
Energy Technology Data Exchange (ETDEWEB)
Holland, Michael K. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); O'Rourke, Patrick E. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)
2016-05-04
An SRNL H-Canyon Test Bed performance evaluation project was completed jointly by SRNL and LANL on a prototype monochromatic energy dispersive x-ray fluorescence instrument, the hiRX. A series of uncertainty propagations was generated based upon plutonium and uranium measurements performed using the alpha-prototype hiRX instrument. Data reduction and uncertainty modeling provided in this report were performed by the SRNL authors. Observations and lessons learned from this evaluation were also used to predict the uncertainties that should be achievable at multiple plutonium and uranium concentration levels, provided the instrument hardware and software upgrades recommended by LANL and SRNL are performed.
Mendoza Beltran, A.; Heijungs, R.; Guinée, J.; Tukker, A.
2016-01-01
Purpose: Despite efforts to treat uncertainty due to methodological choices in life cycle assessment (LCA), such as standardization, one-at-a-time (OAT) sensitivity analysis, and analytical and statistical methods, no method exists that propagates this source of uncertainty for all relevant processes.
International Nuclear Information System (INIS)
Ishida, Hitoshi; Meshii, Toshiyuki
2010-01-01
This study proposes an element size selection method named the 'Impact-Meshing (IM) method' for a finite element wave propagation analysis model, which is characterized by (1) determination of the element division of the model using the strain energy in the whole model, and (2) static analysis (dynamic analysis in a single time step) with boundary conditions that give the maximum change of displacement in the time increment and the inertial (impact) force caused by the displacement change. In this paper, an example of application of the IM method to a 3D ultrasonic wave propagation problem in an elastic solid is described. These examples showed that the analysis results with a model determined by the IM method converged, and that the computation time for determining the element subdivision was reduced to about 1/6 by the IM method, which does not require determining the element subdivision through a dynamic transient analysis with 100 time steps. (author)
Energy Technology Data Exchange (ETDEWEB)
Dunn, Floyd E. [Argonne National Lab. (ANL), Argonne, IL (United States); Hu, Lin-wen [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States). Nuclear Reactor Lab.; Wilson, Erik [Argonne National Lab. (ANL), Argonne, IL (United States)
2016-12-01
The STAT code was written to automate many of the steady-state thermal hydraulic safety calculations for the MIT research reactor, both for conversion of the reactor from high enrichment uranium fuel to low enrichment uranium fuel and for future fuel re-loads after the conversion. A Monte-Carlo statistical propagation approach is used to treat uncertainties in important parameters in the analysis. These safety calculations are ultimately intended to protect against high fuel plate temperatures due to critical heat flux or departure from nucleate boiling or onset of flow instability; but additional margin is obtained by basing the limiting safety settings on avoiding onset of nucleate boiling (ONB). STAT7 can simultaneously analyze all of the axial nodes of all of the fuel plates and all of the coolant channels for one stripe of a fuel element. The stripes run the length of the fuel, from the bottom to the top. Power splits are calculated for each axial node of each plate to determine how much of the power goes out each face of the plate. By running STAT7 multiple times, full core analysis has been performed by analyzing the margin to ONB for each axial node of each stripe of each plate of each element in the core.
Koch, Michael
Measurement uncertainty is one of the key issues in quality assurance. It has become increasingly important for analytical chemistry laboratories with accreditation to ISO/IEC 17025. The uncertainty of a measurement is the most important criterion for deciding whether a measurement result is fit for purpose. It also helps in deciding whether a specification limit is exceeded or not. Estimating measurement uncertainty is often not trivial. Several strategies have been developed for this purpose and are described briefly in this chapter. In addition, the different ways of taking uncertainty into account in compliance assessment are explained.
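The role of uncertainty in compliance assessment can be illustrated with a minimal sketch. The limit, values, and the simple accept/reject/inconclusive rule below are invented for illustration; it is only one of several decision rules such a chapter might cover:

```python
def compliance_decision(result, expanded_uncertainty, limit):
    """Classify a measurement against an upper specification limit.
    A firm decision is possible only when the whole coverage interval
    (result +/- expanded uncertainty) lies on one side of the limit."""
    if result + expanded_uncertainty < limit:
        return "compliant"
    if result - expanded_uncertainty > limit:
        return "non-compliant"
    return "inconclusive (limit inside coverage interval)"

# Hypothetical example: nitrate in drinking water, limit 50 mg/L,
# expanded uncertainty U = 3 mg/L (k = 2).
print(compliance_decision(46.0, 3.0, 50.0))  # compliant
print(compliance_decision(52.0, 3.0, 50.0))  # inconclusive
print(compliance_decision(56.0, 3.0, 50.0))  # non-compliant
```

The inconclusive band is exactly where the measured uncertainty, not the point value alone, drives the decision.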
Plósz, Benedek Gy; De Clercq, Jeriffa; Nopens, Ingmar; Benedetti, Lorenzo; Vanrolleghem, Peter A
2011-01-01
In WWTP models, the accurate assessment of solids inventory in bioreactors equipped with solid-liquid separators, mostly described using one-dimensional (1-D) secondary settling tank (SST) models, is the most fundamental requirement of any calibration procedure. Scientific knowledge on characterising particulate organics in wastewater and on bacteria growth is well-established, whereas 1-D SST models and their impact on biomass concentration predictions are still poorly understood. A rigorous assessment of two 1-D SST models is thus presented: one based on hyperbolic (the widely used Takács-model) and one based on parabolic (the more recently presented Plósz-model) partial differential equations. The former model, using numerical approximation to yield realistic behaviour, is currently the most widely used by wastewater treatment process modellers. The latter is a convection-dispersion model that is solved in a numerically sound way. First, the explicit dispersion in the convection-dispersion model and the numerical dispersion for both SST models are calculated. Second, simulation results of effluent suspended solids concentration (XTSS,Eff), sludge recirculation stream (XTSS,RAS) and sludge blanket height (SBH) are used to demonstrate the distinct behaviour of the models. A thorough scenario analysis is carried out using SST feed flow rate, solids concentration, and overflow rate as degrees of freedom, spanning a broad loading spectrum. A comparison between the measurements and the simulation results demonstrates a considerably improved 1-D model realism using the convection-dispersion model in terms of SBH, XTSS,RAS and XTSS,Eff. Third, to assess the propagation of uncertainty derived from settler model structure to the biokinetic model, the impact of the SST model as sub-model in a plant-wide model on the general model performance is evaluated. A long-term simulation of a bulking event is conducted that spans temperature evolution throughout a summer
Casse, C.; Gosset, M.; Peugeot, C.; Boone, A.; Pedinotti, V.
2013-12-01
The use of satellite-based rainfall in research or operational hydrological applications is becoming more and more frequent. This is especially true in the tropics, where ground-based gauge (or radar) networks are generally sparse and often degrading. A member of the GPM constellation, the new French-Indian satellite mission Megha-Tropiques (MT), dedicated to the water and energy budget in the tropical atmosphere, contributes to better monitoring of rainfall in the inter-tropical zone. As part of this mission, research is being developed on the use of MT rainfall products for hydrological research or operational applications such as flood monitoring. A key issue for such applications is how to account for rainfall product biases and uncertainties, and how to propagate them into the end-user models. Another important question is how to choose the best space-time resolution for the rainfall forcing, given that both model performances and rain-product uncertainties are resolution dependent. This talk will present ongoing investigations and perspectives on this subject, with examples from the Megha-Tropiques ground validation sites in West Africa. The CNRM model Surfex-ISBA/TRIP has been set up to simulate the hydrological behavior of the Niger River. This modeling setup is being used to study the predictability of Niger flood events in the city of Niamey and the performance of satellite rainfall products as forcing for such predictions. One of the interesting features of the Niger outflow in Niamey is its double peak: a first peak attributed to 'local' rainfall falling in small to medium size basins situated in the region of Niamey, and a second peak linked to the rainfall falling in the upper part of the river, slowly propagating through the river towards Niamey. The performances of rainfall products are found to differ between the wetter/upper part of the basin and the local/Sahelian areas. Both academic tests with artificially biased or 'perturbed' rain fields and actual
Energy Technology Data Exchange (ETDEWEB)
Morales Prieto, M.; Ortega Saiz, P.
2011-07-01
This work analyses the analytical uncertainties of the process-simulation methodology used to obtain the final isotopic inventory of spent fuel; the ARIANE experiment is used to explore the burnup-simulation part.
Datta, Arjun
2018-03-01
We present a suite of programs that implement decades-old algorithms for computation of seismic surface wave reflection and transmission coefficients at a welded contact between two laterally homogeneous quarter-spaces. For Love as well as Rayleigh waves, the algorithms are shown to be capable of modelling multiple mode conversions at a lateral discontinuity, which was not shown in the original publications or in the subsequent literature. Only normal incidence at a lateral boundary is considered so there is no Love-Rayleigh coupling, but incidence of any mode and coupling to any (other) mode can be handled. The code is written in Python and makes use of SciPy's Simpson's rule integrator and NumPy's linear algebra solver for its core functionality. Transmission-side results from this code are found to be in good agreement with those from finite-difference simulations. In today's research environment of extensive computing power, the coded algorithms are arguably redundant but SWRT can be used as a valuable testing tool for the ever evolving numerical solvers of seismic wave propagation. SWRT is available via GitHub (https://github.com/arjundatta23/SWRT.git).
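The two numerical primitives the abstract names, SciPy's Simpson's-rule integrator and NumPy's linear algebra solver, can be exercised in isolation. The toy integrand and matrix below are illustrative only and are not SWRT's actual mode-coupling integrals or coefficient system:

```python
import numpy as np
from scipy.integrate import simpson

# Simpson's-rule integration over a sampled function, as used for the
# coupling integrals: integral of sin(x) over [0, pi] is exactly 2.
x = np.linspace(0.0, np.pi, 201)
area = simpson(np.sin(x), x=x)

# Dense linear solve, as used to obtain reflection/transmission
# coefficients from the boundary-condition system A @ c = b.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])
coeffs = np.linalg.solve(A, b)

print(round(area, 6), coeffs)
```

Building a solver on these well-tested library routines is part of what makes such a code useful as a benchmark for newer numerical wave-propagation solvers.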
Directory of Open Access Journals (Sweden)
A. Datta
2018-03-01
Full Text Available We present a suite of programs that implement decades-old algorithms for computation of seismic surface wave reflection and transmission coefficients at a welded contact between two laterally homogeneous quarter-spaces. For Love as well as Rayleigh waves, the algorithms are shown to be capable of modelling multiple mode conversions at a lateral discontinuity, which was not shown in the original publications or in the subsequent literature. Only normal incidence at a lateral boundary is considered so there is no Love–Rayleigh coupling, but incidence of any mode and coupling to any (other) mode can be handled. The code is written in Python and makes use of SciPy's Simpson's rule integrator and NumPy's linear algebra solver for its core functionality. Transmission-side results from this code are found to be in good agreement with those from finite-difference simulations. In today's research environment of extensive computing power, the coded algorithms are arguably redundant but SWRT can be used as a valuable testing tool for the ever evolving numerical solvers of seismic wave propagation. SWRT is available via GitHub (https://github.com/arjundatta23/SWRT.git).
Ligmann-Zielinska, Arika; Kramer, Daniel B; Spence Cheruvelil, Kendra; Soranno, Patricia A
2014-01-01
Agent-based models (ABMs) have been widely used to study socioecological systems. They are useful for studying such systems because of their ability to incorporate micro-level behaviors among interacting agents, and to understand emergent phenomena due to these interactions. However, ABMs are inherently stochastic and require proper handling of uncertainty. We propose a simulation framework based on quantitative uncertainty and sensitivity analyses to build parsimonious ABMs that serve two purposes: exploration of the outcome space to simulate low-probability but high-consequence events that may have significant policy implications, and explanation of model behavior to describe the system with higher accuracy. The proposed framework is applied to the problem of modeling farmland conservation resulting in land use change. We employ output variance decomposition based on quasi-random sampling of the input space and perform three computational experiments. First, we perform uncertainty analysis to improve model legitimacy, where the distribution of results informs us about the expected value that can be validated against independent data, and provides information on the variance around this mean as well as the extreme results. In our last two computational experiments, we employ sensitivity analysis to produce two simpler versions of the ABM. First, input space is reduced only to inputs that produced the variance of the initial ABM, resulting in a model with output distribution similar to the initial model. Second, we refine the value of the most influential input, producing a model that maintains the mean of the output of initial ABM but with less spread. These simplifications can be used to 1) efficiently explore model outcomes, including outliers that may be important considerations in the design of robust policies, and 2) conduct explanatory analysis that exposes the smallest number of inputs influencing the steady state of the modeled system.
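The output variance decomposition underlying this sensitivity analysis can be sketched with the standard Saltelli pick-freeze estimator of first-order Sobol' indices. The three-input toy model below stands in for the ABM, and plain Monte Carlo replaces the paper's quasi-random design for brevity; both substitutions are assumptions of this sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000

def model(x):
    # Toy stand-in for an ABM outcome driven by three uncertain inputs:
    # dominated by x1, with a small x2 effect and an x1-x3 interaction.
    return x[:, 0] + 0.3 * x[:, 1] ** 2 + 0.1 * x[:, 0] * x[:, 2]

A = rng.uniform(-1.0, 1.0, size=(N, 3))   # two independent sample matrices
B = rng.uniform(-1.0, 1.0, size=(N, 3))
yA, yB = model(A), model(B)
total_var = np.var(np.concatenate([yA, yB]))

S = []
for i in range(3):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                   # replace ("freeze") column i
    yABi = model(ABi)
    S.append(np.mean(yB * (yABi - yA)) / total_var)  # first-order index
    print(f"S{i + 1} = {S[-1]:.3f}")
```

Inputs with a first-order index near zero are exactly the candidates for fixing at nominal values when building the simplified, parsimonious ABM the authors describe.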
Directory of Open Access Journals (Sweden)
Arika Ligmann-Zielinska
Full Text Available Agent-based models (ABMs) have been widely used to study socioecological systems. They are useful for studying such systems because of their ability to incorporate micro-level behaviors among interacting agents, and to understand emergent phenomena due to these interactions. However, ABMs are inherently stochastic and require proper handling of uncertainty. We propose a simulation framework based on quantitative uncertainty and sensitivity analyses to build parsimonious ABMs that serve two purposes: exploration of the outcome space to simulate low-probability but high-consequence events that may have significant policy implications, and explanation of model behavior to describe the system with higher accuracy. The proposed framework is applied to the problem of modeling farmland conservation resulting in land use change. We employ output variance decomposition based on quasi-random sampling of the input space and perform three computational experiments. First, we perform uncertainty analysis to improve model legitimacy, where the distribution of results informs us about the expected value that can be validated against independent data, and provides information on the variance around this mean as well as the extreme results. In our last two computational experiments, we employ sensitivity analysis to produce two simpler versions of the ABM. First, input space is reduced only to inputs that produced the variance of the initial ABM, resulting in a model with output distribution similar to the initial model. Second, we refine the value of the most influential input, producing a model that maintains the mean of the output of initial ABM but with less spread. These simplifications can be used to 1) efficiently explore model outcomes, including outliers that may be important considerations in the design of robust policies, and 2) conduct explanatory analysis that exposes the smallest number of inputs influencing the steady state of the modeled system.
Energy Technology Data Exchange (ETDEWEB)
Vršnak, B.; Žic, T.; Dumbović, M. [Hvar Observatory, Faculty of Geodesy, University of Zagreb, Kačićeva 26, HR-10000 Zagreb (Croatia); Temmer, M.; Möstl, C.; Veronig, A. M. [Kanzelhöhe Observatory—IGAM, Institute of Physics, University of Graz, Universitätsplatz 5, A-8010 Graz (Austria); Taktakishvili, A.; Mays, M. L. [NASA Goddard Space Flight Center, Greenbelt, MD 20771 (United States); Odstrčil, D., E-mail: bvrsnak@geof.hr, E-mail: tzic@geof.hr, E-mail: mdumbovic@geof.hr, E-mail: manuela.temmer@uni-graz.at, E-mail: christian.moestl@uni-graz.at, E-mail: astrid.veronig@uni-graz.at, E-mail: aleksandre.taktakishvili-1@nasa.gov, E-mail: m.leila.mays@nasa.gov, E-mail: dusan.odstrcil@nasa.gov [George Mason University, Fairfax, VA 22030 (United States)
2014-08-01
Real-time forecasting of the arrival of coronal mass ejections (CMEs) at Earth, based on remote solar observations, is one of the central issues of space-weather research. In this paper, we compare arrival-time predictions calculated applying the numerical "WSA-ENLIL+Cone model" and the analytical "drag-based model" (DBM). Both models use coronagraphic observations of CMEs as input data, thus providing an early space-weather forecast two to four days before the arrival of the disturbance at the Earth, depending on the CME speed. It is shown that both methods give very similar results if the drag parameter Γ = 0.1 is used in DBM in combination with a background solar-wind speed of w = 400 km s^-1. For this combination, the mean difference between arrival times calculated by ENLIL and DBM is 0.09 ± 9.0 hr, with an average absolute difference of 7.1 hr. Comparing the observed arrivals (O) with the calculated ones (C) for ENLIL gives O – C = –0.3 ± 16.9 hr and, analogously, O – C = +1.1 ± 19.1 hr for DBM. Applying Γ = 0.2 with w = 450 km s^-1 in DBM, one finds O – C = –1.7 ± 18.3 hr, with an average absolute difference of 14.8 hr, which is similar to that for ENLIL, 14.1 hr. Finally, we demonstrate that the prediction accuracy significantly degrades with increasing solar activity.
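The analytical DBM referenced above reduces to the drag equation dv/dt = -Γ(v - w)|v - w|, which is simple enough to integrate in a few lines. The CME parameters below are invented for illustration, and Γ = 0.1 is interpreted, as is conventional in the DBM literature, in units of 10^-7 km^-1; this is a sketch, not the operational forecast tool:

```python
# Drag-based model (DBM) sketch: dv/dt = -gamma * (v - w) * |v - w|,
# integrated with a simple Euler scheme until the CME reaches 1 AU.
AU = 1.496e8          # km
r = 20 * 6.96e5       # hypothetical start at 20 solar radii, km
v = 900.0             # hypothetical initial CME speed, km/s
w = 400.0             # background solar-wind speed, km/s
gamma = 0.1e-7        # drag parameter Gamma = 0.1 in units of 1e-7 km^-1
dt = 60.0             # time step, s
t = 0.0

while r < AU:
    v -= gamma * (v - w) * abs(v - w) * dt   # drag decelerates toward w
    r += v * dt
    t += dt

print(f"arrival after {t / 3600:.1f} h at v = {v:.0f} km/s")
```

A fast CME launched well above the solar-wind speed decelerates toward w, giving the two-to-four-day transit times the abstract mentions; forecasting accuracy then hinges on how well Γ and w represent the actual heliospheric conditions.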
Uncertainty and sensitivity assessments of GPS and GIS integrated applications for transportation.
Hong, Sungchul; Vonderohe, Alan P
2014-02-10
Uncertainty and sensitivity analysis methods are introduced, concerning the quality of spatial data as well as that of output information from Global Positioning System (GPS) and Geographic Information System (GIS) integrated applications for transportation. In the methods, an error model and an error propagation method form a basis for characterizing and propagating uncertainties. They are developed in two distinct approaches: analytical and simulation. Thus, an initial evaluation is performed to compare and examine uncertainty estimations from the analytical and simulation approaches. The evaluation results show that estimated ranges of output information from the two approaches are compatible, but the simulation approach is preferred for uncertainty and sensitivity analyses, due to its flexibility and its capability to realize positional errors in both input datasets. Therefore, in a case study, uncertainty and sensitivity analyses based upon the simulation approach are conducted on a winter maintenance application. The sensitivity analysis is used to determine optimum input data qualities, and the uncertainty analysis is then applied to estimate overall qualities of output information from the application. The analysis results show that output information from the non-distance-based computation model is not sensitive to positional uncertainties in the input data. However, for the distance-based computation model, output information has different magnitudes of uncertainty, depending on positional uncertainties in the input data.
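The simulation approach the authors prefer can be sketched for the simplest distance-based GIS computation. The coordinates and the 2.5 m positional sigma below are invented, and independent Gaussian errors per coordinate are an assumption of this sketch:

```python
import numpy as np

rng = np.random.default_rng(7)
N = 50_000

# Two hypothetical road points in a local grid (metres), and a 1-sigma
# horizontal GPS positional error applied independently per coordinate.
p1 = np.array([0.0, 0.0])
p2 = np.array([350.0, 120.0])
sigma = 2.5  # m

# Realize positional errors in BOTH input points, then recompute distance.
d = np.linalg.norm((p1 + rng.normal(0.0, sigma, (N, 2)))
                   - (p2 + rng.normal(0.0, sigma, (N, 2))), axis=1)

true_d = np.linalg.norm(p2 - p1)  # 370.0 m for these coordinates
print(f"true distance {true_d:.1f} m; "
      f"simulated mean {d.mean():.1f} m, std {d.std(ddof=1):.2f} m")
```

The spread of the simulated distances quantifies how positional uncertainty in the inputs propagates into a distance-based output, which is exactly the quantity the case study's distance-based computation model is sensitive to.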
International Nuclear Information System (INIS)
Davis, B.W.
1982-01-01
Thermal igniters proposed by the Tennessee Valley Authority for intentional ignition of hydrogen in nuclear reactor containments have been tested in mixtures of air, hydrogen, and steam. The igniters, conventional diesel engine glow plugs, were tested in a 10.6 ft³ pressure vessel with dry hydrogen concentrations from 4% to 29%, and in steam fractions of up to 50%. Dry tests indicated complete combustion consistently occurred at H₂ fractions above 8% with no combustion for concentrations below 5%. Combustion tests in the presence of steam were conducted with hydrogen volume fractions of 8%, 10%, and 12%. Steam concentrations of up to 30% consistently resulted in ignition. Most of the 40% steam fraction tests indicated a pressure rise. Circulation of the mixture improved combustion in both the dry and the steam tests, most notably at low H₂ concentrations. An analysis of the high steam fraction test data yielded evidence of the presence of small, suspended, water droplets in the combustion mixture. The suppressive influence of condensation-generated fog on combustion is evaluated. Analysis of experimental results along with results derived from analytic models have provided consistent evidence of the strong influence of mass condensation rates and fog on experimentally observed ignition and flame propagation phenomena.
The NASA Langley Multidisciplinary Uncertainty Quantification Challenge
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2014-01-01
This paper presents the formulation of an uncertainty quantification challenge problem consisting of five subproblems. These problems focus on key aspects of uncertainty characterization, sensitivity analysis, uncertainty propagation, extreme-case analysis, and robust design.
Coquelin, L.; Le Brusquet, L.; Fischer, N.; Gensdarmes, F.; Motzkus, C.; Mace, T.; Fleury, G.
2018-05-01
A scanning mobility particle sizer (SMPS) is a high resolution nanoparticle sizing system that is widely used as the standard method to measure airborne particle size distributions (PSD) in the size range 1 nm–1 μm. This paper addresses the problem of assessing the uncertainty associated with the PSD when a differential mobility analyzer (DMA) operates under scanning mode. The sources of uncertainty are described and then modeled either through experiments or knowledge extracted from the literature. Special care is taken to model the physics and to account for competing theories. Indeed, it appears that the modeling errors resulting from approximations of the physics can largely affect the final estimate of this indirect measurement, especially for quantities that are not measured during day-to-day experiments. The Monte Carlo method is used to compute the uncertainty associated with the PSD. The method is tested against real data sets consisting of monosize polystyrene latex (PSL) spheres with nominal diameters of 100 nm, 200 nm and 450 nm. The median diameters and associated standard uncertainties of the aerosol particles are estimated as 101.22 nm ± 0.18 nm, 204.39 nm ± 1.71 nm and 443.87 nm ± 1.52 nm with the new approach. Other statistical parameters, such as the mean diameter, the mode and the geometric mean and their associated standard uncertainties, are also computed. These results are then compared with those obtained by the SMPS embedded software.
International Nuclear Information System (INIS)
Depres, B.; Dossantos-Uzarralde, P.
2009-01-01
More than 150 researchers and engineers from universities and industry met to discuss new methodologies developed for assessing uncertainty. About 20 papers were presented, and the main topics were: methods to study the propagation of uncertainties, sensitivity analysis, nuclear data covariances, and multi-parameter optimisation. This report gathers the contributions of CEA researchers and engineers.
Resolving uncertainty in chemical speciation determinations
Smith, D. Scott; Adams, Nicholas W. H.; Kramer, James R.
1999-10-01
Speciation determinations involve uncertainty in system definition and experimentation. Identification of appropriate metals and ligands from basic chemical principles, analytical window considerations, types of species and checking for consistency in equilibrium calculations are considered in system definition uncertainty. A systematic approach to system definition limits uncertainty in speciation investigations. Experimental uncertainty is discussed with an example of proton interactions with Suwannee River fulvic acid (SRFA). A Monte Carlo approach was used to estimate uncertainty in experimental data, resulting from the propagation of uncertainties in electrode calibration parameters and experimental data points. Monte Carlo simulations revealed large uncertainties present at high (>9-10) and low (monoprotic ligands. Least-squares fit the data with 21 sites, whereas linear programming fit the data equally well with 9 sites. Multiresponse fitting, involving simultaneous fluorescence and pH measurements, improved model discrimination. Deconvolution of the excitation versus emission fluorescence surface for SRFA establishes a minimum of five sites. Diprotic sites are also required for the five fluorescent sites, and one non-fluorescent monoprotic site was added to accommodate the pH data. Consistent with greater complexity, the multiresponse method had broader confidence limits than the uniresponse methods, but corresponded better with the accepted total carboxylic content for SRFA. Overall there was a 40% standard deviation in total carboxylic content for the multiresponse fitting, versus 10% and 1% for least-squares and linear programming, respectively.
An introductory guide to uncertainty analysis in environmental and health risk assessment
International Nuclear Information System (INIS)
Hoffman, F.O.; Hammonds, J.S.
1992-10-01
To compensate for the potential for overly conservative estimates of risk using standard US Environmental Protection Agency methods, an uncertainty analysis should be performed as an integral part of each risk assessment. Uncertainty analyses allow one to obtain quantitative results in the form of confidence intervals that will aid in decision making and will provide guidance for the acquisition of additional data. To perform an uncertainty analysis, one must frequently rely on subjective judgment in the absence of data to estimate the range and a probability distribution describing the extent of uncertainty about a true but unknown value for each parameter of interest. This information is formulated from professional judgment based on an extensive review of literature, analysis of the data, and interviews with experts. Various analytical and numerical techniques are available to allow statistical propagation of the uncertainty in the model parameters to a statement of uncertainty in the risk to a potentially exposed individual. Although analytical methods may be straightforward for relatively simple models, they rapidly become complicated for more involved risk assessments. Because of the tedious efforts required to mathematically derive analytical approaches to propagate uncertainty in complicated risk assessments, numerical methods such as Monte Carlo simulation should be employed. The primary objective of this report is to provide an introductory guide for performing uncertainty analysis in risk assessments being performed for Superfund sites
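The Monte Carlo simulation recommended above can be sketched in a few lines. This is a minimal illustration only: the multiplicative risk model, the parameter names and the distributions are hypothetical choices, not taken from the report.

```python
import random
import statistics

def risk(intake_rate, concentration, slope_factor):
    # Simple multiplicative risk model (illustrative only).
    return intake_rate * concentration * slope_factor

random.seed(1)
N = 20000
samples = []
for _ in range(N):
    # Hypothetical parameter distributions, stand-ins for
    # judgment-based distributions elicited as described above.
    intake = random.lognormvariate(0.0, 0.3)     # e.g. m^3/day, median 1
    conc = random.uniform(0.5, 1.5)              # e.g. mg/m^3
    slope = random.triangular(0.01, 0.05, 0.02)  # risk per unit dose
    samples.append(risk(intake, conc, slope))

# The sorted samples yield the confidence interval used in decision making.
samples.sort()
lo, hi = samples[int(0.025 * N)], samples[int(0.975 * N)]
print(f"median risk: {statistics.median(samples):.4f}")
print(f"95% interval: [{lo:.4f}, {hi:.4f}]")
```

The same loop works unchanged for models far too complicated for analytical propagation, which is the point made in the abstract.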
Manning, Robert M.
2004-01-01
The extended wide-angle parabolic wave equation applied to electromagnetic wave propagation in random media is considered. A general operator equation is derived which gives the statistical moments of an electric field of a propagating wave. This expression is used to obtain the first and second order moments of the wave field and solutions are found that transcend those which incorporate the full paraxial approximation at the outset. Although these equations can be applied to any propagation scenario that satisfies the conditions of application of the extended parabolic wave equation, the example of propagation through atmospheric turbulence is used. It is shown that in the case of atmospheric wave propagation and under the Markov approximation (i.e., the delta-correlation of the fluctuations in the direction of propagation), the usual parabolic equation in the paraxial approximation is accurate even at millimeter wavelengths. The comprehensive operator solution also allows one to obtain expressions for the longitudinal (generalized) second order moment. This is also considered and the solution for the atmospheric case is obtained and discussed. The methodology developed here can be applied to any qualifying situation involving random propagation through turbid or plasma environments that can be represented by a spectral density of permittivity fluctuations.
Understanding and reducing statistical uncertainties in nebular abundance determinations
Wesson, R.; Stock, D. J.; Scicluna, P.
2012-06-01
Whenever observations are compared to theories, an estimate of the uncertainties associated with the observations is vital if the comparison is to be meaningful. However, many or even most determinations of temperatures, densities and abundances in photoionized nebulae do not quote the associated uncertainty. Those that do typically propagate the uncertainties using analytical techniques which rely on assumptions that generally do not hold. Motivated by this issue, we have developed Nebular Empirical Analysis Tool (NEAT), a new code for calculating chemical abundances in photoionized nebulae. The code carries out a standard analysis of lists of emission lines using long-established techniques to estimate the amount of interstellar extinction, calculate representative temperatures and densities, compute ionic abundances from both collisionally excited lines and recombination lines, and finally to estimate total elemental abundances using an ionization correction scheme. NEAT uses a Monte Carlo technique to robustly propagate uncertainties from line flux measurements through to the derived abundances. We show that, for typical observational data, this approach is superior to analytic estimates of uncertainties. NEAT also accounts for the effect of upward biasing on measurements of lines with low signal-to-noise ratio, allowing us to accurately quantify the effect of this bias on abundance determinations. We find not only that the effect can result in significant overestimates of heavy element abundances derived from weak lines, but also that taking it into account reduces the uncertainty of these abundance determinations. Finally, we investigate the effect of possible uncertainties in R, the ratio of selective-to-total extinction, on abundance determinations. We find that the uncertainty due to this parameter is negligible compared to the statistical uncertainties due to typical line flux measurement uncertainties.
International Nuclear Information System (INIS)
Hammonds, J.S.; Hoffman, F.O.; Bartell, S.M.
1994-12-01
This report presents guidelines for evaluating uncertainty in mathematical equations and computer models applied to assess human health and environmental risk. Uncertainty analyses involve the propagation of uncertainty in model parameters and model structure to obtain confidence statements for the estimate of risk and identify the model components of dominant importance. Uncertainty analyses are required when there is no a priori knowledge about uncertainty in the risk estimate and when there is a chance that the failure to assess uncertainty may affect the selection of wrong options for risk reduction. Uncertainty analyses are effective when they are conducted in an iterative mode. When the uncertainty in the risk estimate is intolerable for decision-making, additional data are acquired for the dominant model components that contribute most to uncertainty. This process is repeated until the level of residual uncertainty can be tolerated. Analytical and numerical methods for error propagation are presented along with methods for identifying the most important contributors to uncertainty. Monte Carlo simulation with either Simple Random Sampling (SRS) or Latin Hypercube Sampling (LHS) is proposed as the most robust method for propagating uncertainty through either simple or complex models. A distinction is made between simulating a stochastically varying assessment endpoint (i.e., the distribution of individual risks in an exposed population) and quantifying uncertainty due to lack of knowledge about a fixed but unknown quantity (e.g., a specific individual, the maximally exposed individual, or the mean, median, or 95th percentile of the distribution of exposed individuals). Emphasis is placed on the need for subjective judgement to quantify uncertainty when relevant data are absent or incomplete
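The difference between the two sampling schemes named in the abstract can be sketched in one dimension: LHS places exactly one sample in each of n equal-probability strata, whereas SRS may leave strata empty. This is an illustrative sketch only, not code from the report.

```python
import random

def srs(n):
    """Simple random sampling of U(0,1): strata may be missed."""
    return [random.random() for _ in range(n)]

def lhs(n):
    """Latin hypercube sampling of U(0,1): one point per stratum [i/n, (i+1)/n)."""
    points = [(i + random.random()) / n for i in range(n)]
    random.shuffle(points)  # shuffling decorrelates order across dimensions
    return points

random.seed(0)
n = 10
lhs_pts = lhs(n)

# Every stratum contains exactly one LHS point by construction.
strata = sorted(int(p * n) for p in lhs_pts)
print(strata)  # [0, 1, 2, ..., 9]
```

This stratification is why LHS achieves stabler estimates of output distribution tails than SRS at the same sample size.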
Energy Technology Data Exchange (ETDEWEB)
Groen, E.A., E-mail: Evelyne.Groen@gmail.com [Wageningen University, P.O. Box 338, Wageningen 6700 AH (Netherlands); Heijungs, R. [Vrije Universiteit Amsterdam, De Boelelaan 1105, Amsterdam 1081 HV (Netherlands); Leiden University, Einsteinweg 2, Leiden 2333 CC (Netherlands)
2017-01-15
Life cycle assessment (LCA) is an established tool to quantify the environmental impact of a product. A good assessment of uncertainty is important for making well-informed decisions in comparative LCA, as well as for correctly prioritising data collection efforts. Under- or overestimation of output uncertainty (e.g. output variance) will lead to incorrect decisions in such matters. The presence of correlations between input parameters can increase or decrease the output variance during uncertainty propagation. However, most LCA studies that include uncertainty analysis ignore correlations between input parameters during uncertainty propagation, which may lead to incorrect conclusions. Two approaches to include correlations between input parameters during uncertainty propagation and global sensitivity analysis were studied: an analytical approach and a sampling approach. The use of both approaches is illustrated for an artificial case study of electricity production. Results demonstrate that both approaches yield approximately the same output variance and sensitivity indices for this specific case study. Furthermore, we demonstrate that the analytical approach can be used to quantify the risk of ignoring correlations between input parameters during uncertainty propagation in LCA. We demonstrate that: (1) we can predict if including correlations among input parameters in uncertainty propagation will increase or decrease output variance; (2) we can quantify the risk of ignoring correlations on the output variance and the global sensitivity indices. Moreover, this procedure requires only limited data. - Highlights: • Ignoring correlation leads to under- or overestimation of the output variance. • We demonstrated that the risk of ignoring correlation can be quantified. • The procedure proposed is generally applicable in life cycle assessment. • In some cases, ignoring correlation has a minimal effect on decision-making tools.
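The analytical effect described above can be made concrete with first-order (Taylor) propagation, Var(y) ≈ gᵀ Σ g, where g is the gradient of the model at the mean inputs and Σ the input covariance matrix. The two-parameter model and numbers below are hypothetical, chosen only to show the sign of the correlation effect.

```python
def propagate_variance(grad, cov):
    """First-order variance propagation: Var(y) ~= g^T Sigma g."""
    n = len(grad)
    return sum(grad[i] * cov[i][j] * grad[j]
               for i in range(n) for j in range(n))

# Hypothetical additive model y = a + b, so both sensitivities are 1.
grad = [1.0, 1.0]
var_a, var_b = 4.0, 1.0

# Sweeping the correlation coefficient shows whether ignoring it
# (rho = 0) under- or overestimates the output variance.
for rho in (-0.5, 0.0, 0.5):
    cov_ab = rho * (var_a ** 0.5) * (var_b ** 0.5)
    cov = [[var_a, cov_ab], [cov_ab, var_b]]
    print(rho, propagate_variance(grad, cov))
```

With positive sensitivities a positive correlation inflates the output variance (7.0 vs 5.0 here) and a negative one deflates it (3.0), which is exactly the under-/overestimation risk the abstract quantifies.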
International Nuclear Information System (INIS)
Karanki, D.R.; Rahman, S.; Dang, V.N.; Zerkak, O.
2017-01-01
The coupling of plant simulation models and stochastic models representing failure events in Dynamic Event Trees (DET) is a framework used to model the dynamic interactions among physical processes, equipment failures, and operator responses. The integration of physical and stochastic models may additionally enhance the treatment of uncertainties. Probabilistic Safety Assessments as currently implemented propagate the (epistemic) uncertainties in failure probabilities, rates, and frequencies; while the uncertainties in the physical model (parameters) are not propagated. The coupling of deterministic (physical) and probabilistic models in integrated simulations such as DET allows both types of uncertainties to be considered. However, integrated accident simulations with epistemic uncertainties will challenge even today's high performance computing infrastructure, especially for simulations of inherently complex nuclear or chemical plants. Conversely, intentionally limiting computations for practical reasons would compromise accuracy of results. This work investigates how to tradeoff accuracy and computations to quantify risk in light of both uncertainties and accident dynamics. A simple depleting tank problem that can be solved analytically is considered to examine the adequacy of a discrete DET approach. The results show that optimal allocation of computational resources between epistemic and aleatory calculations by means of convergence studies ensures accuracy within a limited budget. - Highlights: • Accident simulations considering uncertainties require intensive computations. • Tradeoff between accuracy and accident simulations is a challenge. • Optimal allocation between epistemic & aleatory computations ensures the tradeoff. • Online convergence gives an early indication of computational requirements. • Uncertainty propagation in DDET is examined on a tank problem solved analytically.
Vector network analyzer (VNA) measurements and uncertainty assessment
Shoaib, Nosherwan
2017-01-01
This book describes vector network analyzer measurements and uncertainty assessments, particularly in waveguide test-set environments, in order to establish their compatibility with the International System of Units (SI) for accurate and reliable characterization of communication networks. It proposes a fully analytical approach to measurement uncertainty evaluation, while also highlighting the interaction and the linear propagation of different uncertainty sources to compute the final uncertainties associated with the measurements. The book subsequently discusses the dimensional characterization of waveguide standards and the quality of the vector network analyzer (VNA) calibration techniques. The book concludes with an in-depth description of the novel verification artefacts used to assess the performance of VNAs. It offers a comprehensive reference guide for readers from beginners to experts, in both academia and industry, whose work involves the field of network analysis, instrumentation and measurements.
International Nuclear Information System (INIS)
Andres, T.H.
2002-05-01
This guide applies to the estimation of uncertainty in quantities calculated by scientific, analysis and design computer programs that fall within the scope of AECL's software quality assurance (SQA) manual. The guide weaves together rational approaches from the SQA manual and three other diverse sources: (a) the CSAU (Code Scaling, Applicability, and Uncertainty) evaluation methodology; (b) the ISO Guide for the Expression of Uncertainty in Measurement; and (c) the SVA (Systems Variability Analysis) method of risk analysis. This report describes the manner by which random and systematic uncertainties in calculated quantities can be estimated and expressed. Random uncertainty in model output can be attributed to uncertainties of inputs. The propagation of these uncertainties through a computer model can be represented in a variety of ways, including exact calculations, series approximations and Monte Carlo methods. Systematic uncertainties emerge from the development of the computer model itself, through simplifications and conservatisms, for example. These must be estimated and combined with random uncertainties to determine the combined uncertainty in a model output. This report also addresses the method by which uncertainties should be employed in code validation, in order to determine whether experiments and simulations agree, and whether or not a code satisfies the required tolerance for its application. (author)
Energy Technology Data Exchange (ETDEWEB)
Barrado, A. I.; Garcia, S.; Perez, R. M.
2013-06-01
This paper presents an evaluation of the uncertainty associated with the analytical measurement of eighteen polycyclic aromatic compounds (PACs) in ambient air by liquid chromatography with fluorescence detection (HPLC/FD). The study was focused on analyses of PM10, PM2.5 and gas phase fractions. Main analytical uncertainty was estimated for eleven polycyclic aromatic hydrocarbons (PAHs), four nitro polycyclic aromatic hydrocarbons (nitro-PAHs) and two hydroxy polycyclic aromatic hydrocarbons (OH-PAHs) based on the analytical determination, reference material analysis and extraction step. Main contributions reached 15-30% and came from the extraction process of real ambient samples, with those for nitro-PAHs being the highest (20-30%). Range and mean PAC mass concentrations measured in the gas phase and PM10/PM2.5 particle fractions during a full year are also presented. Concentrations of OH-PAHs were about 2-4 orders of magnitude lower than their parent PAHs and comparable to those sparsely reported in the literature. (Author) 7 refs.
International Nuclear Information System (INIS)
Thomas, R.E.
1982-03-01
An evaluation is made of the suitability of analytical and statistical sampling methods for making uncertainty analyses. The adjoint method is found to be well-suited for obtaining sensitivity coefficients for computer programs involving large numbers of equations and input parameters. For this purpose the Latin Hypercube Sampling method is found to be inferior to conventional experimental designs. The Latin hypercube method can be used to estimate output probability density functions, but requires supplementary rank transformations followed by stepwise regression to obtain uncertainty information on individual input parameters. A simple Cork and Bottle problem is used to illustrate the efficiency of the adjoint method relative to certain statistical sampling methods. For linear models of the form Ax=b it is shown that a complete adjoint sensitivity analysis can be made without formulating and solving the adjoint problem. This can be done either by using a special type of statistical sampling or by reformulating the primal problem and using suitable linear programming software
Orbit covariance propagation via quadratic-order state transition matrix in curvilinear coordinates
Hernando-Ayuso, Javier; Bombardelli, Claudio
2017-09-01
In this paper, an analytical second-order state transition matrix (STM) for relative motion in curvilinear coordinates is presented and applied to the problem of orbit uncertainty propagation in nearly circular orbits (eccentricity smaller than 0.1). The matrix is obtained by linearization around a second-order analytical approximation of the relative motion recently proposed by one of the authors and can be seen as a second-order extension of the curvilinear Clohessy-Wiltshire (C-W) solution. The accuracy of the uncertainty propagation is assessed by comparison with numerical results based on Monte Carlo propagation of a high-fidelity model including geopotential and third-body perturbations. Results show that the proposed STM can greatly improve the accuracy of the predicted relative state: the average error is found to be at least one order of magnitude smaller compared to the curvilinear C-W solution. In addition, the effect of environmental perturbations on the uncertainty propagation is shown to be negligible up to several revolutions in the geostationary region and for a few revolutions in low Earth orbit in the worst case.
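The paper's contribution is a second-order STM; the underlying covariance propagation step it extends is the standard first-order one, P₁ = Φ P₀ Φᵀ. The sketch below shows only that first-order step, with a hypothetical 2×2 transition matrix and initial covariance (not values from the paper).

```python
def matmul(A, B):
    """Plain list-of-lists matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

# Hypothetical state transition matrix over one step and
# initial covariance (position/velocity-like 2-state example).
Phi = [[1.0, 0.1],
       [0.0, 1.0]]
P0 = [[1e-4, 0.0],
      [0.0, 1e-6]]

# Linear covariance propagation: P1 = Phi P0 Phi^T.
P1 = matmul(matmul(Phi, P0), transpose(Phi))
print(P1)
```

The quadratic-order STM of the paper adds second-order terms to this mapping, which is what improves accuracy over the curvilinear C-W solution for larger initial dispersions.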
Uncertainty, probability and information-gaps
International Nuclear Information System (INIS)
Ben-Haim, Yakov
2004-01-01
This paper discusses two main ideas. First, we focus on info-gap uncertainty, as distinct from probability. Info-gap theory is especially suited for modelling and managing uncertainty in system models: we invest all our knowledge in formulating the best possible model; this leaves the modeller with very faulty and fragmentary information about the variation of reality around that optimal model. Second, we examine the interdependence between uncertainty modelling and decision-making. Good uncertainty modelling requires contact with the end-use, namely, with the decision-making application of the uncertainty model. The most important avenue of uncertainty-propagation is from initial data- and model-uncertainties into uncertainty in the decision-domain. Two questions arise. Is the decision robust to the initial uncertainties? Is the decision prone to opportune windfall success? We apply info-gap robustness and opportunity functions to the analysis of representation and propagation of uncertainty in several of the Sandia Challenge Problems
Directory of Open Access Journals (Sweden)
Vicari Kristin J
2012-04-01
the TE model predictions. This analysis highlights the primary measurements that merit further development to reduce the uncertainty associated with their use in TE models. While we develop and apply this mathematical framework to a specific biorefinery scenario here, this analysis can be readily adapted to other types of biorefining processes and provides a general framework for propagating uncertainty due to analytical measurements through a TE model.
Quantum dynamics via a time propagator in Wigner's phase space
DEFF Research Database (Denmark)
Grønager, Michael; Henriksen, Niels Engholm
1995-01-01
We derive an expression for a short-time phase space propagator. We use it in a new propagation scheme and demonstrate that it works for a Morse potential. The propagation scheme is used to propagate classical distributions which do not obey the Heisenberg uncertainty principle. It is shown that … as a part of the sampling function. ©1995 American Institute of Physics.
Fuzzy Uncertainty Evaluation for Fault Tree Analysis
Energy Technology Data Exchange (ETDEWEB)
Kim, Ki Beom; Shim, Hyung Jin [Seoul National University, Seoul (Korea, Republic of); Jae, Moo Sung [Hanyang University, Seoul (Korea, Republic of)
2015-05-15
The traditional probabilistic approach can calculate relatively accurate results; however, it requires a long time because of the repetitive computation demanded by the MC method. In addition, when data for statistical analysis are insufficient or some events are mainly caused by human error, the probabilistic approach may not be possible, because the uncertainties of these events are difficult to express with probability distributions. To reduce the computation time and quantify the uncertainty of top events when such basic events exist, fuzzy uncertainty propagation based on fuzzy set theory can be applied. In this paper, we develop a fuzzy uncertainty propagation code and apply it to the fault tree of the core damage accident following the large loss of coolant accident (LLOCA). The code is implemented and tested on the fault tree of the radiation release accident. We then apply it to the fault tree of the core damage accident after the LLOCA in three cases and compare the results with those computed by probabilistic uncertainty propagation using the MC method. The fuzzy results can be calculated in relatively short time and cover those obtained by probabilistic uncertainty propagation.
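The abstract does not give the code's internals; a standard way to implement fuzzy propagation through a fault tree is alpha-cut interval arithmetic on triangular fuzzy numbers, sketched below. The two basic events and their fuzzy probabilities are hypothetical.

```python
def alpha_cut(tri, alpha):
    """Interval of a triangular fuzzy number (a, m, b) at membership level alpha."""
    a, m, b = tri
    return (a + alpha * (m - a), b - alpha * (b - m))

def and_gate(cuts):
    """AND gate: top interval is the product of the basic-event intervals."""
    lo, hi = 1.0, 1.0
    for l, h in cuts:
        lo *= l
        hi *= h
    return (lo, hi)

def or_gate(cuts):
    """OR gate: 1 - prod(1 - p); monotone increasing in each p."""
    lo, hi = 1.0, 1.0
    for l, h in cuts:
        lo *= 1.0 - l
        hi *= 1.0 - h
    return (1.0 - lo, 1.0 - hi)

# Two hypothetical basic events with triangular fuzzy probabilities.
events = [(0.01, 0.02, 0.04), (0.01, 0.02, 0.04)]
cuts = [alpha_cut(e, 0.5) for e in events]  # cut at membership level 0.5
top_and = and_gate(cuts)
top_or = or_gate(cuts)
print(cuts, top_and, top_or)
```

Repeating this over a grid of alpha levels reconstructs the fuzzy membership function of the top event without any Monte Carlo sampling, which is the source of the speed-up reported in the abstract.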
Directory of Open Access Journals (Sweden)
ALBERTO CARLOS DE QUEIROZ PINTO
2018-03-01
Full Text Available ABSTRACT This Chapter aims, through a review of the available literature, to gather important information on the evolution of mango propagation, covering theoretical and practical aspects from the cellular basis of sexual propagation, nursery structures and organization, substrate composition and use, and the importance of rootstock and scion selection. The preparation and transport of grafting material (stem and bud) is described, as well as the main asexual propagation methods and their uses and practices. Finally, the standards and quality of grafted mangos and their commercialization are discussed in this Chapter.
Flood modelling : Parameterisation and inflow uncertainty
Mukolwe, M.M.; Di Baldassarre, G.; Werner, M.; Solomatine, D.P.
2014-01-01
This paper presents an analysis of uncertainty in hydraulic modelling of floods, focusing on the inaccuracy caused by inflow errors and parameter uncertainty. In particular, the study develops a method to propagate the uncertainty induced by, firstly, application of a stage–discharge rating curve
Time-Varying Uncertainty in Shock and Vibration Applications Using the Impulse Response
Directory of Open Access Journals (Sweden)
J.B. Weathers
2012-01-01
Full Text Available Design of mechanical systems often necessitates the use of dynamic simulations to calculate the displacements (and their derivatives) of the bodies in a system as a function of time in response to dynamic inputs. These types of simulations are especially prevalent in the shock and vibration community, where simulations associated with models having complex inputs are routine. If the forcing functions as well as the parameters used in these simulations are subject to uncertainties, then these uncertainties will propagate through the models, resulting in uncertainties in the outputs of interest. The uncertainty analysis procedure for these kinds of time-varying problems can be challenging, and in many instances explicit data reduction equations (DREs), i.e., analytical formulas, are not available because the outputs of interest are obtained from complex simulation software, e.g. FEA programs. Moreover, uncertainty propagation in systems modeled using nonlinear differential equations can prove difficult to analyze. However, if (1) the uncertainties propagate through the models in a linear manner, obeying the principle of superposition, then the complexity of the problem can be significantly simplified. If, in addition, (2) the uncertainties in the model parameters do not change during the simulation and the manner in which the outputs of interest respond to small perturbations in the external input forces does not depend on when the perturbations are applied, then the number of calculations required can be greatly reduced. Conditions (1) and (2) characterize a Linear Time Invariant (LTI) uncertainty model. This paper seeks to explain one possible approach to obtaining the uncertainty results based on these assumptions.
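The LTI property the abstract relies on can be demonstrated directly: for an LTI system the output perturbation caused by an input perturbation du is just the convolution of the impulse response with du, so no re-simulation is needed. The impulse-response samples and signals below are hypothetical, chosen only to exhibit superposition.

```python
def convolve(h, u):
    """Discrete convolution y = h * u."""
    y = [0.0] * (len(h) + len(u) - 1)
    for i, hi in enumerate(h):
        for j, uj in enumerate(u):
            y[i + j] += hi * uj
    return y

h = [0.5, 0.3, 0.1]   # hypothetical impulse-response samples
u = [1.0, 0.0, 0.0]   # nominal input history
du = [0.1, 0.0, 0.0]  # small input perturbation (an uncertainty realization)

y_nom = convolve(h, u)
y_pert = convolve(h, [a + b for a, b in zip(u, du)])
dy = convolve(h, du)  # output perturbation computed directly from du

# Superposition for an LTI model: perturbed output = nominal + conv(h, du).
ok = all(abs((yn + d) - yp) < 1e-12 for yn, d, yp in zip(y_nom, dy, y_pert))
print(ok)
```

Because dy depends only on du and h, one impulse-response computation serves every perturbation of every input, which is what collapses the calculation count under conditions (1) and (2).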
International Nuclear Information System (INIS)
Gureghian, A.B.; Wu, Y.T.; Sagar, B.
1992-12-01
Exact analytical solutions based on the Laplace transforms are derived for describing the one-dimensional space-time-dependent, advective transport of a decaying species in a layered, saturated rock system intersected by a planar fracture of varying aperture. These solutions, which account for advection in fracture, molecular diffusion into the rock matrix, adsorption in both fracture and matrix, and radioactive decay, predict the concentrations in both fracture and rock matrix and the cumulative mass in the fracture. The solute migration domain in both fracture and rock is assumed to be semi-infinite with non-zero initial conditions. The concentration of each nuclide at the source is allowed to decay either continuously or according to some periodical fluctuations where both are subjected to either a step or band release mode. Two numerical examples related to the transport of Np-237 and Cm-245 in a five-layered system of fractured rock were used to verify these solutions with several well established evaluation methods of Laplace inversion integrals in the real and complex domain. In addition, with respect to the model parameters, a comparison of the analytically derived local sensitivities for the concentration and cumulative mass of Np-237 in the fracture with the ones obtained through a finite-difference method of approximation is also reported
Estimation of the uncertainties considered in NPP PSA level 2
International Nuclear Information System (INIS)
Kalchev, B.; Hristova, R.
2005-01-01
The main approaches of the uncertainties analysis are presented. The sources of uncertainties which should be considered in PSA level 2 for WWER reactor such as: uncertainties propagated from level 1 PSA; uncertainties in input parameters; uncertainties related to the modelling of physical phenomena during the accident progression and uncertainties related to the estimation of source terms are defined. The methods for estimation of the uncertainties are also discussed in this paper
Uncertainty analysis for geologic disposal of radioactive waste
International Nuclear Information System (INIS)
Cranwell, R.M.; Helton, J.C.
1981-01-01
The incorporation and representation of uncertainty in the analysis of the consequences and risks associated with the geologic disposal of high-level radioactive waste are discussed. Such uncertainty has three primary components: process modeling uncertainty, model input data uncertainty, and scenario uncertainty. The following topics are considered in connection with the preceding components: propagation of uncertainty in the modeling of a disposal site, sampling of input data for models, and uncertainty associated with model output
International Nuclear Information System (INIS)
De Miceli, Enrica; Monti, Giorgio; Bianco, Vincenzo; Filetici, Maria Grazia
2016-01-01
This paper presents a study aiming at assessing the seismic safety and developing the rehabilitation design of a masonry retaining wall, known as the 'Bastione Farnesiano', placed around the Palatine hill in the central archeological area of Rome, Italy. It is a singular artifact of its kind, hardly identifiable with known stereotypes or constructive models. The phase of survey, together with both the material and degradation analyses, showed the impossibility of defining with certainty some features, even geometrical, of the monument that are necessary to reach a judgment about its safety. Therefore, it was necessary to formulate the risk assessment problem by taking all uncertainties into due consideration and evaluating them in probabilistic terms. A simple mechanical model, considering different and alternative collapse modes, was developed and, after characterizing the uncertain parameters in probabilistic terms, Monte Carlo simulations were carried out. Based on the obtained results: a) the value of the current risk index has been determined, and b) a sensitivity analysis has been performed in order to identify the parameters that most affect the monument safety. This analysis has provided useful information that has made it possible to orient the seismic amelioration design strategy by acting on one of the parameters with the greatest impact on risk reduction.
Directory of Open Access Journals (Sweden)
N. Eckert
2008-10-01
Full Text Available For snow avalanches, passive defense structures are generally designed by considering high return period events. In this paper, taking inspiration from other natural hazards, an alternative method based on the maximization of the economic benefit of the defense structure is proposed. A general Bayesian framework is described first. Special attention is given to the problem of taking the poor local information into account in the decision-making process. Therefore, simplifying assumptions are made. The avalanche hazard is represented by a Peak Over Threshold (POT model. The influence of the dam is quantified in terms of runout distance reduction with a simple relation derived from small-scale experiments using granular media. The costs corresponding to dam construction and the damage to the element at risk are roughly evaluated for each dam height-hazard value pair, with damage evaluation corresponding to the maximal expected loss. Both the classical and the Bayesian risk functions can then be computed analytically. The results are illustrated with a case study from the French avalanche database. A sensitivity analysis is performed and modelling assumptions are discussed in addition to possible further developments.
Lindley, Dennis V
2013-01-01
Praise for the First Edition ""...a reference for everyone who is interested in knowing and handling uncertainty.""-Journal of Applied Statistics The critically acclaimed First Edition of Understanding Uncertainty provided a study of uncertainty addressed to scholars in all fields, showing that uncertainty could be measured by probability, and that probability obeyed three basic rules that enabled uncertainty to be handled sensibly in everyday life. These ideas were extended to embrace the scientific method and to show how decisions, containing an uncertain element, could be rationally made.
The Uncertainty of Measurement Results
Energy Technology Data Exchange (ETDEWEB)
Ambrus, A. [Hungarian Food Safety Office, Budapest (Hungary)
2009-07-15
Factors affecting the uncertainty of measurement are explained, basic statistical formulae are given, and the theoretical concept is explained in the context of pesticide formulation analysis. Practical guidance is provided on how to determine individual uncertainty components within an analytical procedure. An extended and comprehensive table containing the relevant mathematical/statistical expressions elucidates the underlying principles. Appendix I provides a practical, elaborated example of measurement uncertainty estimation, primarily utilizing experimental repeatability and reproducibility laboratory data. (author)
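The basic step of combining individual uncertainty components into a combined and an expanded uncertainty can be illustrated as follows; the component names and values are hypothetical, not taken from the guidance document:

```python
import math

# Hypothetical uncertainty budget for an assay result (values illustrative)
components = {
    "repeatability": 0.8,  # standard uncertainty from repeated analyses
    "calibration": 0.5,    # standard uncertainty of the reference standard
    "recovery": 0.6,       # standard uncertainty of the recovery correction
    "volume": 0.2,         # volumetric glassware contribution
}

# Root-sum-of-squares combination of independent components
u_combined = math.sqrt(sum(u ** 2 for u in components.values()))
U_expanded = 2 * u_combined  # coverage factor k = 2 (~95 % confidence)

print(f"combined standard uncertainty: {u_combined:.2f}")
print(f"expanded uncertainty (k=2):    {U_expanded:.2f}")
```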
Analyzing Bullwhip Effect in Supply Networks under Exogenous Uncertainty
Directory of Open Access Journals (Sweden)
Mitra Darvish
2014-05-01
Full Text Available This paper explains a model for analyzing and measuring the propagation of order amplifications (i.e., the bullwhip effect) in a single-product supply network topology, considering exogenous uncertainty and linear, time-invariant inventory management policies for the network entities. The stream of orders placed by each entity of the network is characterized under the assumption that customer demand is ergodic. In fact, we propose an exact formula for measuring the bullwhip effect in the addressed supply network topology by treating the system in a Markov-chain framework and presenting a matrix of network member relationships and the relevant order sequences. The formula is derived using frequency-domain analysis. The major contribution of this paper is the analysis of the bullwhip effect under exogenous uncertainty in supply networks, using the Fourier transform to simplify the relevant calculations. We present a number of numerical examples to assess the accuracy of the analytical results in quantifying the bullwhip effect.
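As a rough illustration of order-variance amplification (not the paper's Markov-chain/frequency-domain formula), a minimal single-echelon simulation with a moving-average forecast and an order-up-to policy already shows a bullwhip ratio well above one; all policy parameters are assumptions:

```python
import random
import statistics

random.seed(42)

# Minimal single-echelon sketch: i.i.d. demand, order-up-to policy with a
# moving-average forecast. Parameters are illustrative.
T, WINDOW, LEAD = 20000, 4, 2
demand = [random.gauss(100, 10) for _ in range(T)]

orders = []
for t in range(WINDOW + 1, T):
    ma = sum(demand[t - WINDOW:t]) / WINDOW          # current forecast
    prev_ma = sum(demand[t - 1 - WINDOW:t - 1]) / WINDOW
    # Order = last observed demand plus the change in the order-up-to level
    orders.append(demand[t - 1] + (LEAD + 1) * (ma - prev_ma))

bullwhip = statistics.variance(orders) / statistics.variance(demand)
print(f"bullwhip ratio Var(orders)/Var(demand) = {bullwhip:.2f}")
```

For this policy the ratio tends to the known analytic amplification for moving-average forecasting (here roughly 3.6), illustrating what an exact network-level formula must capture.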
Hump-shape Uncertainty, Agency Costs and Aggregate Fluctuations
Lee, Gabriel; Kevin, Salyer; Strobel, Johannes
2016-01-01
Previously measured uncertainty shocks using U.S. data show a hump-shaped time path: uncertainty rises for two years before its decline. The current literature on the effects of uncertainty on macroeconomics, including housing, has not accounted for this observation. Consequently, the literature on uncertainty and macroeconomics is divided on the effects and the propagation mechanism of uncertainty on aggregate fluctuations. This paper shows that when uncertainty rises and falls over time, th...
Stereo-particle image velocimetry uncertainty quantification
International Nuclear Information System (INIS)
Bhattacharya, Sayantan; Vlachos, Pavlos P; Charonko, John J
2017-01-01
Particle image velocimetry (PIV) measurements are subject to multiple elemental error sources, and thus estimating overall measurement uncertainty is challenging. Recent advances have led to a posteriori uncertainty estimation methods for planar two-component PIV. However, no complete methodology exists for uncertainty quantification in stereo PIV. In the current work, a comprehensive framework is presented to quantify the uncertainty stemming from stereo registration error and combine it with the underlying planar velocity uncertainties. The disparity in particle locations of the dewarped images is used to estimate the positional uncertainty of the world coordinate system, which is then propagated to the uncertainty in the calibration mapping function coefficients. Next, the calibration uncertainty is combined with the planar uncertainty fields of the individual cameras through an uncertainty propagation equation, and uncertainty estimates are obtained for all three velocity components. The methodology was tested with synthetic stereo PIV data for different light sheet thicknesses, with and without registration error, and also validated with an experimental vortex ring case from the 2014 PIV challenge. A thorough sensitivity analysis was performed to assess the relative impact of the various parameters on the overall uncertainty. The results suggest that in the absence of any disparity, the stereo PIV uncertainty prediction method is more sensitive to the planar uncertainty estimates than to the angle uncertainty, although the latter is not negligible for non-zero disparity. Overall, the presented uncertainty quantification framework showed excellent agreement between the error and uncertainty RMS values for both the synthetic and the experimental data and demonstrated reliable uncertainty prediction coverage. This stereo PIV uncertainty quantification framework provides the first comprehensive treatment of the subject and potentially lays foundations applicable to volumetric
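A minimal sketch of the kind of first-order propagation involved, for a simplified two-camera reconstruction of the out-of-plane component; the reconstruction formula, angles, and uncertainty values are illustrative assumptions, not the paper's actual mapping-function treatment:

```python
import math

def w_and_uncertainty(u1, u2, a1, a2, su1, su2, sa1, sa2):
    """First-order uncertainty of w = (u1 - u2) / (tan a1 - tan a2),
    a simplified stereo reconstruction of the out-of-plane velocity."""
    denom = math.tan(a1) - math.tan(a2)
    w = (u1 - u2) / denom
    # Partial derivatives with respect to each input
    dw_du1 = 1.0 / denom
    dw_du2 = -1.0 / denom
    dw_da1 = -(u1 - u2) / denom ** 2 / math.cos(a1) ** 2
    dw_da2 = (u1 - u2) / denom ** 2 / math.cos(a2) ** 2
    var_w = ((dw_du1 * su1) ** 2 + (dw_du2 * su2) ** 2
             + (dw_da1 * sa1) ** 2 + (dw_da2 * sa2) ** 2)
    return w, math.sqrt(var_w)

w, sw = w_and_uncertainty(
    u1=2.0, u2=-1.5,                       # projected displacements (px)
    a1=math.radians(30), a2=math.radians(-30),
    su1=0.1, su2=0.1,                      # per-camera planar uncertainties
    sa1=0.002, sa2=0.002,                  # calibration angle uncertainties
)
print(f"w = {w:.3f} px, u(w) = {sw:.3f} px")
```

With these assumed numbers the planar terms dominate the combined variance, mirroring the paper's finding that the planar uncertainties matter most when disparity is small.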
Uncertainty covariances in robotics applications
International Nuclear Information System (INIS)
Smith, D.L.
1984-01-01
The application of uncertainty covariance matrices in the analysis of robot trajectory errors is explored. First, relevant statistical concepts are reviewed briefly. Then, a simple, hypothetical robot model is considered to illustrate methods for error propagation and performance test data evaluation. The importance of including error correlations is emphasized
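The covariance propagation described above is the linearized rule Sigma_y = J Sigma_x J^T. A sketch for a hypothetical two-link planar arm (geometry, joint uncertainties, and correlation are all assumed values, not from the report):

```python
import math

# Linearized propagation of joint-angle uncertainty to end-effector
# position for a two-link planar arm (illustrative geometry).
L1, L2 = 1.0, 0.8                               # link lengths (m)
t1, t2 = math.radians(30), math.radians(45)     # nominal joint angles

# Jacobian of (x, y) with respect to (t1, t2)
J = [
    [-L1 * math.sin(t1) - L2 * math.sin(t1 + t2), -L2 * math.sin(t1 + t2)],
    [ L1 * math.cos(t1) + L2 * math.cos(t1 + t2),  L2 * math.cos(t1 + t2)],
]

# Joint covariance: 0.5 deg std on each joint, correlation 0.2 (assumed)
s = math.radians(0.5)
Sx = [[s * s, 0.2 * s * s], [0.2 * s * s, s * s]]

def matmul(A, B):
    """2x2 matrix product, written out for a dependency-free sketch."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Sigma_y = J Sx J^T
JT = [[J[0][0], J[1][0]], [J[0][1], J[1][1]]]
Sy = matmul(matmul(J, Sx), JT)
print(f"std x = {math.sqrt(Sy[0][0]) * 1000:.2f} mm, "
      f"std y = {math.sqrt(Sy[1][1]) * 1000:.2f} mm")
```

The off-diagonal term of Sy is the induced correlation between x and y position errors, which is exactly the information lost if only scalar error bars are tracked.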
Characterizing Epistemic Uncertainty for Launch Vehicle Designs
Novack, Steven D.; Rogers, Jim; Hark, Frank; Al Hassan, Mohammad
2016-01-01
NASA Probabilistic Risk Assessment (PRA) has the task of estimating the aleatory (randomness) and epistemic (lack of knowledge) uncertainty of launch vehicle loss of mission and crew risk and communicating the results. Launch vehicles are complex engineered systems designed with sophisticated subsystems that are built to work together to accomplish mission success. Some of these systems or subsystems are in the form of heritage equipment, while some have never been previously launched. For these cases, characterizing the epistemic uncertainty is of foremost importance, and it is anticipated that the epistemic uncertainty of a modified launch vehicle design versus a design of well understood heritage equipment would be greater. For reasons that will be discussed, standard uncertainty propagation methods using Monte Carlo simulation produce counterintuitive results and significantly underestimate epistemic uncertainty for launch vehicle models. Furthermore, standard PRA methods such as Uncertainty-Importance analyses used to identify components that are significant contributors to uncertainty are rendered obsolete, since changes in sensitivity to uncertainty are not reflected in propagation of uncertainty using Monte Carlo methods. This paper provides a basis for the uncertainty underestimation in complex systems and especially, due to nuances of launch vehicle logic, in launch vehicles. It then suggests several alternative methods for estimating uncertainty and provides examples of estimation results. Lastly, the paper shows how to implement an Uncertainty-Importance analysis using one alternative approach, describes the results, and suggests ways to reduce epistemic uncertainty by focusing on additional data or testing of selected components.
Dealing with exploration uncertainties
International Nuclear Information System (INIS)
Capen, E.
1992-01-01
Exploration for oil and gas should fulfill the most adventurous in their quest for excitement and surprise. This paper tries to cover that tall order. The authors will touch on the magnitude of the uncertainty (which is far greater than in most other businesses), the effects of not knowing target sizes very well, how to build uncertainty into analyses naturally, how to tie reserves and chance estimates to economics, and how to look at the portfolio effect of an exploration program. With no apologies, the authors will be using a different language for some readers - the language of uncertainty, which means probability and statistics. These tools allow one to combine largely subjective exploration information with the more analytical data from the engineering and economic side
Treatment and reporting of uncertainties for environmental radiation measurements
International Nuclear Information System (INIS)
Colle, R.
1980-01-01
Recommendations for a practical and uniform method of treating and reporting uncertainties in environmental radiation measurement data are presented. The method requires that each reported measurement result include the value, a total propagated random uncertainty expressed as the standard deviation, and a combined overall uncertainty. The uncertainty assessment should be as complete as possible and should include every conceivable or likely source of inaccuracy in the result. Guidelines are given for estimating random and systematic uncertainty components, and for propagating and combining them to form an overall uncertainty.
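One common way to combine the recommended components is shown below purely as a hedged illustration; the quadrature rule and all numbers are assumptions for the sketch, not necessarily the report's exact convention:

```python
import math

# Illustrative report for one environmental measurement result.
value = 12.4                     # measured activity concentration (Bq/kg)
u_random = 0.6                   # propagated random uncertainty (1 s.d.)
u_systematic = [0.3, 0.2, 0.15]  # estimated systematic components

# Combine systematic components, then combine with the random part
u_sys = math.sqrt(sum(c * c for c in u_systematic))
u_overall = math.sqrt(u_random ** 2 + u_sys ** 2)
print(f"{value:.1f} +/- {u_random:.1f} (random), overall {u_overall:.2f} Bq/kg")
```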
International Nuclear Information System (INIS)
Hermansson, B.R.
1989-01-01
The main part of this thesis consists of 15 published papers, in which the numerical Beam Propagating Method (BPM) is investigated, verified and used in a number of applications. In the introduction a derivation of the nonlinear Schroedinger equation is presented to connect the beginning of the soliton papers with Maxwell's equations including a nonlinear polarization. This thesis focuses on the wide use of the BPM for numerical simulations of propagating light and particle beams through different types of structures such as waveguides, fibers, tapers, Y-junctions, laser arrays and crystalline solids. We verify the BPM in the above listed problems against other numerical methods, for example the finite-element method, perturbation methods and Runge-Kutta integration. Further, the BPM is shown to be a simple and effective way to numerically set up the Green's function in matrix form for periodic structures. The Green's function matrix can then be diagonalized with matrix methods, yielding the eigensolutions of the structure. The BPM's inherent transverse periodicity can be untied, if desired, by, for example, including an absorptive refractive index at the computational window edges. The interaction of two first-order soliton pulses is strongly dependent on the phase relationship between the individual solitons. When optical phase shift keying is used in coherent one-carrier wavelength communication, the fiber attenuation will suppress or delay the nonlinear instability. (orig.)
Liu, Baoding
2015-01-01
When no samples are available to estimate a probability distribution, we have to invite some domain experts to evaluate the belief degree that each event will happen. Perhaps some people think that the belief degree should be modeled by subjective probability or fuzzy set theory. However, it is usually inappropriate because both of them may lead to counterintuitive results in this case. In order to rationally deal with belief degrees, uncertainty theory was founded in 2007 and subsequently studied by many researchers. Nowadays, uncertainty theory has become a branch of axiomatic mathematics for modeling belief degrees. This is an introductory textbook on uncertainty theory, uncertain programming, uncertain statistics, uncertain risk analysis, uncertain reliability analysis, uncertain set, uncertain logic, uncertain inference, uncertain process, uncertain calculus, and uncertain differential equation. This textbook also shows applications of uncertainty theory to scheduling, logistics, networks, data mining, c...
International Nuclear Information System (INIS)
Silva, Jose Wanderley S. da; Barros, Pedro Dionisio de; Araujo, Radier Mario S. de
2009-01-01
In safeguards, independent analysis of the uranium content and enrichment of nuclear materials to verify operators' declarations is an important tool for evaluating the accountability system applied by nuclear installations. This determination may be performed by nondestructive (NDA) methods, generally done in the field using portable radiation detection systems, or by destructive (DA) methods through chemical analysis when more accurate and precise results are necessary. Samples for DA analysis are collected by inspectors during safeguards inspections and sent to the Safeguards Laboratory (LASAL) of the Brazilian Nuclear Energy Commission (CNEN), where the analyses take place. The method used by LASAL for the determination of uranium in different physical and chemical forms is the Davies and Gray/NBL method, using an automatic potentiometric titrator, which performs the titration of uranium(IV) by a standard solution of K2Cr2O7. Uncertainty budgets have been determined based on the concepts of the ISO 'Guide to the Expression of Uncertainty in Measurement' (GUM). In order to simplify the calculation of the uncertainty, a computational tool named the Kragten spreadsheet was used. Such a spreadsheet uses the concepts established by the GUM and provides results that numerically approximate those obtained by propagation of uncertainty with analytically determined sensitivity coefficients. The main parameters (input quantities) interfering with the uncertainty were studied. In order to evaluate their contribution to the final uncertainty, the uncertainties of all steps of the analytical method were estimated and compiled. (author)
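The Kragten technique referred to above amounts to perturbing each input by its standard uncertainty and reading off the resulting change in the output as that input's contribution. A sketch on a toy titration-style measurand; the model and all values are hypothetical, not LASAL data:

```python
import math

def uranium_conc(mass_titrant, titre, sample_mass):
    """Toy measurand: U concentration from a titration (illustrative)."""
    return mass_titrant * titre / sample_mass

# (nominal value, standard uncertainty) for each input - hypothetical
inputs = {"mass_titrant": (5.021, 0.004),    # g
          "titre": (2.053, 0.002),           # mg U per g titrant
          "sample_mass": (0.5102, 0.0003)}   # g

nominal = uranium_conc(*(v for v, _ in inputs.values()))

# Kragten step: shift one input at a time by its uncertainty
contrib = {}
for name in inputs:
    shifted = {k: (v + u if k == name else v) for k, (v, u) in inputs.items()}
    contrib[name] = uranium_conc(**shifted) - nominal

u_c = math.sqrt(sum(c * c for c in contrib.values()))
print(f"result = {nominal:.4f} mg/g, u = {u_c:.4f} mg/g")
for name, c in contrib.items():
    print(f"  {name}: {c:+.5f}")
```

The per-input differences approximate sensitivity coefficient times standard uncertainty, so their root-sum-of-squares numerically approximates the analytic GUM propagation, which is exactly the spreadsheet's appeal.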
NDE errors and their propagation in sizing and growth estimates
International Nuclear Information System (INIS)
Horn, D.; Obrutsky, L.; Lakhan, R.
2009-01-01
The accuracy attributed to eddy current flaw sizing determines the amount of conservatism required in setting tube-plugging limits. Several sources of error contribute to the uncertainty of the measurements, and the way in which these errors propagate and interact affects the overall accuracy of the flaw size and flaw growth estimates. An example of this calculation is the determination of an upper limit on flaw growth over one operating period, based on the difference between two measurements. Signal-to-signal comparison involves a variety of human, instrumental, and environmental error sources; of these, some propagate additively and some multiplicatively. In a difference calculation, specific errors in the first measurement may be correlated with the corresponding errors in the second; others may be independent. Each of the error sources needs to be identified and quantified individually, as does its distribution in the field data. A mathematical framework for the propagation of the errors can then be used to assess the sensitivity of the overall uncertainty to each individual error component. This paper quantifies error sources affecting eddy current sizing estimates and presents analytical expressions developed for their effect on depth estimates. A simple case study is used to model the analysis process. For each error source, the distribution of the field data was assessed and propagated through the analytical expressions. While the sizing error obtained was consistent with earlier estimates and with deviations from ultrasonic depth measurements, the error on growth was calculated as significantly smaller than that obtained assuming uncorrelated errors. An interesting result of the sensitivity analysis in the present case study is the quantification of the error reduction available from post-measurement compensation of magnetite effects. With the absolute and difference error equations, variance-covariance matrices, and partial derivatives developed in
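The effect of correlation on a growth (difference) estimate follows directly from Var(m2 - m1) = s1^2 + s2^2 - 2*rho*s1*s2. A small numeric sketch with illustrative sizing uncertainties (not the paper's values):

```python
import math

# Growth d = m2 - m1 from two eddy-current depth measurements.
s1, s2 = 5.0, 5.0   # assumed sizing std of each measurement (% through-wall)
u_growth = {}
for rho in (0.0, 0.8):
    # Variance of a difference of correlated measurements
    u_growth[rho] = math.sqrt(s1 ** 2 + s2 ** 2 - 2 * rho * s1 * s2)
    print(f"rho = {rho}: u(growth) = {u_growth[rho]:.2f} %tw")
```

Strong positive correlation between the two sizing errors cancels in the difference, which is why the paper's correlated-error growth uncertainty comes out significantly smaller than the uncorrelated assumption would suggest.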
Pole solutions for flame front propagation
Kupervasser, Oleg
2015-01-01
This book deals with solving mathematically the unsteady flame propagation equations. New original mathematical methods for solving complex non-linear equations and investigating their properties are presented. Pole solutions for flame front propagation are developed. Premixed flames and filtration combustion have remarkable properties: the complex nonlinear integro-differential equations for these problems have exact analytical solutions described by the motion of poles in a complex plane. Instead of complex equations, a finite set of ordinary differential equations is applied. These solutions help to investigate analytically and numerically properties of the flame front propagation equations.
Justification for recommended uncertainties
International Nuclear Information System (INIS)
Pronyaev, V.G.; Badikov, S.A.; Carlson, A.D.
2007-01-01
The uncertainties obtained in an earlier standards evaluation were considered to be unrealistically low by experts of the US Cross Section Evaluation Working Group (CSEWG). Therefore, the CSEWG Standards Subcommittee replaced the covariance matrices of evaluated uncertainties by expanded percentage errors that were assigned to the data over wide energy groups. There are a number of reasons that might lead to low uncertainties of the evaluated data: underestimation of the correlations existing between the results of different measurements; the presence of unrecognized systematic uncertainties in the experimental data, which can lead to biases in the evaluated data as well as to underestimation of the resulting uncertainties; and the fact that uncertainties for correlated data cannot be characterized by percentage uncertainties or variances alone. Covariances between the evaluated value at 0.2 MeV and other points obtained in model (RAC R-matrix and PADE2 analytical expansion) and non-model (GMA) fits of the 6Li(n,t) TEST1 data and the correlation coefficients are presented, and covariances between the evaluated value at 0.045 MeV and other points (along the line or column of the matrix) as obtained in EDA and RAC R-matrix fits of the data available for reactions that pass through the formation of the 7Li system are discussed. The GMA fit with the GMA database is shown for comparison. The following diagrams are discussed: percentage uncertainties of the evaluated cross section for the 6Li(n,t) reaction and for the 235U(n,f) reaction; the estimation given by CSEWG experts; the GMA result with the full GMA database, including experimental data for the 6Li(n,t), 6Li(n,n) and 6Li(n,total) reactions; uncertainties in the GMA combined fit for the standards; and EDA and RAC R-matrix results, respectively. Uncertainties of absolute and 252Cf fission-spectrum-averaged cross section measurements, and deviations between measured and evaluated values for 235U(n,f) cross sections in the neutron energy range 1
Courtney, H; Kirkland, J; Viguerie, P
1997-01-01
At the heart of the traditional approach to strategy lies the assumption that by applying a set of powerful analytic tools, executives can predict the future of any business accurately enough to allow them to choose a clear strategic direction. But what happens when the environment is so uncertain that no amount of analysis will allow us to predict the future? What makes for a good strategy in highly uncertain business environments? The authors, consultants at McKinsey & Company, argue that uncertainty requires a new way of thinking about strategy. All too often, they say, executives take a binary view: either they underestimate uncertainty to come up with the forecasts required by their companies' planning or capital-budgeting processes, or they overestimate it, abandon all analysis, and go with their gut instinct. The authors outline a new approach that begins by making a crucial distinction among four discrete levels of uncertainty that any company might face. They then explain how a set of generic strategies--shaping the market, adapting to it, or reserving the right to play at a later time--can be used in each of the four levels. And they illustrate how these strategies can be implemented through a combination of three basic types of actions: big bets, options, and no-regrets moves. The framework can help managers determine which analytic tools can inform decision making under uncertainty--and which cannot. At a broader level, it offers executives a discipline for thinking rigorously and systematically about uncertainty and its implications for strategy.
Duerdoth, Ian
2009-01-01
The subject of uncertainties (sometimes called errors) is traditionally taught (to first-year science undergraduates) towards the end of a course on statistics that defines probability as the limit of many trials, and discusses probability distribution functions and the Gaussian distribution. We show how to introduce students to the concepts of…
DEFF Research Database (Denmark)
Heydorn, Kaj; Anglov, Thomas
2002-01-01
Methods recommended by the International Organization for Standardization and Eurachem are not satisfactory for the correct estimation of calibration uncertainty. A novel approach is introduced and tested on actual calibration data for the determination of Pb by ICP-AES. The improved calibration...
The state of the art of the impact of sampling uncertainty on measurement uncertainty
Leite, V. J.; Oliveira, E. C.
2018-03-01
Measurement uncertainty is a parameter that characterizes the reliability of a result and can be divided into two large groups: sampling and analytical variations. Analytical uncertainty arises from a controlled process performed in the laboratory. The same does not occur with sampling uncertainty which, because it faces several obstacles and there is no clarity on how to perform the procedures, has been neglected, although it is admittedly indispensable to the measurement process. This paper aims to describe the state of the art of sampling uncertainty and to assess its relevance to measurement uncertainty.
Statistically qualified neuro-analytic failure detection method and system
Vilim, Richard B.; Garcia, Humberto E.; Chen, Frederick W.
2002-03-02
An apparatus and method for monitoring a process involve development and application of a statistically qualified neuro-analytic (SQNA) model to accurately and reliably identify process change. The development of the SQNA model is accomplished in two stages: deterministic model adaptation and stochastic modification of the deterministic model adaptation. Deterministic model adaptation involves formulating an analytic model of the process representing known process characteristics, augmenting the analytic model with a neural network that captures unknown process characteristics, and training the resulting neuro-analytic model by adjusting the neural network weights according to a unique scaled equation-error minimization technique. Stochastic model modification involves qualifying any remaining uncertainty in the trained neuro-analytic model by formulating a likelihood function, given an error propagation equation, for computing the probability that the neuro-analytic model generates measured process output. Preferably, the developed SQNA model is validated using known sequential probability ratio tests and applied to the process as an on-line monitoring system. To illustrate the method and apparatus, the method is applied to a peristaltic pump system.
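The sequential probability ratio test mentioned for validation can be sketched for Gaussian model residuals; the shift size, noise level, and error rates below are illustrative assumptions, not the patent's parameters:

```python
import math
import random

random.seed(11)

def sprt(residuals, d=1.0, sigma=1.0, alpha=0.001, beta=0.001):
    """Wald's SPRT between H0: residual mean 0 and H1: mean d (Gaussian)."""
    upper = math.log((1 - beta) / alpha)   # accept H1 ("fault") above this
    lower = math.log(beta / (1 - alpha))   # accept H0 ("normal") below this
    llr = 0.0
    for n, r in enumerate(residuals, 1):
        llr += d * (r - d / 2) / sigma ** 2  # Gaussian log-likelihood ratio
        if llr >= upper:
            return "fault", n
        if llr <= lower:
            return "normal", n
    return "undecided", len(residuals)

healthy = [random.gauss(0.0, 1.0) for _ in range(200)]
faulty = [random.gauss(1.0, 1.0) for _ in range(200)]
print(sprt(healthy), sprt(faulty))
```

The second element of the returned tuple is the number of samples the test needed, which is the practical benefit of sequential monitoring over fixed-sample tests.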
Decay heat uncertainty quantification of MYRRHA
Directory of Open Access Journals (Sweden)
Fiorito Luca
2017-01-01
Full Text Available MYRRHA is a lead-bismuth cooled MOX-fueled accelerator driven system (ADS currently in the design phase at SCK·CEN in Belgium. The correct evaluation of the decay heat and of its uncertainty level is very important for the safety demonstration of the reactor. In the first part of this work we assessed the decay heat released by the MYRRHA core using the ALEPH-2 burnup code. The second part of the study focused on the nuclear data uncertainty and covariance propagation to the MYRRHA decay heat. Radioactive decay data, independent fission yield and cross section uncertainties/covariances were propagated using two nuclear data sampling codes, namely NUDUNA and SANDY. According to the results, 238U cross sections and fission yield data are the largest contributors to the MYRRHA decay heat uncertainty. The calculated uncertainty values are deemed acceptable from the safety point of view as they are well within the available regulatory limits.
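The covariance-driven sampling idea behind codes such as NUDUNA and SANDY can be sketched with a hand-rolled Cholesky factor and a toy decay-heat response; the covariance values and the model itself are illustrative assumptions, not MYRRHA data:

```python
import math
import random

random.seed(3)

cov = [[0.0004, 0.0001],
       [0.0001, 0.0009]]   # assumed relative covariance of two parameters

# Cholesky factor of the 2x2 covariance, written out by hand
l11 = math.sqrt(cov[0][0])
l21 = cov[1][0] / l11
l22 = math.sqrt(cov[1][1] - l21 * l21)

def decay_heat(p1, p2):
    """Toy decay-heat response to two normalized nuclear-data parameters."""
    return 100.0 * p1 * (0.7 + 0.3 * p2)

# Draw correlated parameter perturbations and propagate each sample
samples = []
for _ in range(20000):
    z1, z2 = random.gauss(0.0, 1.0), random.gauss(0.0, 1.0)
    samples.append(decay_heat(1.0 + l11 * z1, 1.0 + l21 * z1 + l22 * z2))

mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / (len(samples) - 1)
print(f"decay heat = {mean:.2f} +/- {math.sqrt(var):.2f}")
```

Each sample is a consistent, correlated realization of the nuclear data, so the spread of the outputs directly estimates the decay-heat uncertainty including covariance effects.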
Background and Qualification of Uncertainty Methods
International Nuclear Information System (INIS)
D'Auria, F.; Petruzzi, A.
2008-01-01
The evaluation of uncertainty constitutes the necessary supplement to Best Estimate calculations performed to understand accident scenarios in water cooled nuclear reactors. The need comes from the imperfection of computational tools on the one side and from the interest in using such tools to obtain more precise evaluations of safety margins on the other. The paper reviews the salient features of two independent approaches for estimating uncertainties associated with predictions of complex system codes: propagation of code input error and propagation of the calculation output error constitute the key words identifying the methods of current interest for industrial applications. With the developed methods, uncertainty bands (both upper and lower) can be derived for any desired quantity of the transient of interest. In the second case, the uncertainty method is coupled with the thermal-hydraulic code to obtain a code with the capability of Internal Assessment of Uncertainty, whose features are discussed in more detail.
Uncertainty analysis techniques
International Nuclear Information System (INIS)
Marivoet, J.; Saltelli, A.; Cadelli, N.
1987-01-01
The origin of the uncertainty affecting Performance Assessments, as well as its propagation to dose and risk results, is discussed. The analysis is focused essentially on the uncertainties introduced by the input parameters, the values of which may range over some orders of magnitude and may be given as probability distribution functions. The paper briefly reviews the existing sampling techniques used for Monte Carlo simulations and the methods for characterizing the output curves and determining their convergence and confidence limits. Annual doses, expectation values of the doses and risks are computed for a particular case of a possible repository in clay, in order to illustrate the significance of such output characteristics as the mean, the logarithmic mean and the median, as well as their ratios. The report concludes that provisionally, due to its better robustness, an estimate such as the 90th percentile may be substituted for the arithmetic mean for comparison of the estimated doses with acceptance criteria. In any case, the results obtained through Uncertainty Analyses must be interpreted with caution as long as input data distribution functions are not derived from experiments reasonably reproducing the situation in a well characterized repository and site
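The preference for a robust percentile over the arithmetic mean is easy to see on a skewed output distribution. A minimal Monte Carlo sketch with an assumed lognormal dose model (all numbers illustrative):

```python
import random
import statistics

random.seed(1)

N = 50000
# Assumed skewed output: a lognormal variable scaled into a toy annual dose
doses = sorted(0.001 * random.lognormvariate(0.0, 1.0) for _ in range(N))

mean = statistics.fmean(doses)
median = doses[N // 2]          # 50th percentile of the sorted samples
p90 = doses[int(0.9 * N)]       # 90th percentile
print(f"mean={mean:.3e}  median={median:.3e}  90th percentile={p90:.3e}")
```

For a heavily right-skewed output the mean sits well above the median and is dominated by the tail, whereas an upper percentile is a stable, directly interpretable summary for comparison with acceptance criteria.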
Deterministic uncertainty analysis
International Nuclear Information System (INIS)
Worley, B.A.
1987-12-01
This paper presents a deterministic uncertainty analysis (DUA) method for calculating uncertainties that has the potential to significantly reduce the number of computer runs compared to conventional statistical analysis. The method is based upon the availability of derivative and sensitivity data such as that calculated using the well known direct or adjoint sensitivity analysis techniques. Formation of response surfaces using derivative data and the propagation of input probability distributions are discussed relative to their role in the DUA method. A sample problem that models the flow of water through a borehole is used as a basis to compare the cumulative distribution function of the flow rate as calculated by the standard statistical methods and the DUA method. Propagation of uncertainties by the DUA method is compared for ten cases in which the number of reference model runs was varied from one to ten. The DUA method gives a more accurate representation of the true cumulative distribution of the flow rate based upon as few as two model executions, compared to fifty model executions using a statistical approach. 16 refs., 4 figs., 5 tabs
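The core of the DUA idea, propagating uncertainty through analytic sensitivities instead of many statistical runs, can be sketched on a toy model (not the actual borehole equations); the Monte Carlo estimate is included only as a reference:

```python
import random
import statistics

random.seed(7)

def f(a, b):
    """Toy flow-like model standing in for the borehole problem."""
    return a * a / b

a0, sa = 3.0, 0.1    # nominal value and standard uncertainty of input a
b0, sb = 2.0, 0.05   # nominal value and standard uncertainty of input b

# Analytic sensitivities at the nominal point (one "model run" worth of info)
df_da = 2 * a0 / b0            # = 3.0
df_db = -a0 * a0 / b0 ** 2     # = -2.25
u_dua = ((df_da * sa) ** 2 + (df_db * sb) ** 2) ** 0.5

# Monte Carlo reference requiring many model evaluations
samples = [f(random.gauss(a0, sa), random.gauss(b0, sb))
           for _ in range(100000)]
u_mc = statistics.stdev(samples)
print(f"first-order (DUA) u = {u_dua:.4f}, Monte Carlo u = {u_mc:.4f}")
```

For mildly nonlinear models with small input uncertainties the derivative-based estimate matches the sampled one closely, which is the premise behind replacing many statistical runs with a few sensitivity evaluations.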
Verburg, P.H.; Tabeau, A.A.; Hatna, E.
2013-01-01
Land change model outcomes are vulnerable to multiple types of uncertainty, including uncertainty in input data, structural uncertainties in the model and uncertainties in model parameters. In coupled model systems the uncertainties propagate between the models. This paper assesses uncertainty of
Model uncertainty in safety assessment
International Nuclear Information System (INIS)
Pulkkinen, U.; Huovinen, T.
1996-01-01
Uncertainty analyses are an essential part of any risk assessment. Usually the uncertainties of reliability model parameter values are described by probability distributions and the uncertainty is propagated through the whole risk model. In addition to the parameter uncertainties, the assumptions behind the risk models may be based on insufficient experimental observations, and the models themselves may not be exact descriptions of the phenomena under analysis. The description and quantification of this type of uncertainty, model uncertainty, is the topic of this report. Model uncertainty is characterized and some approaches to model and quantify it are discussed. The emphasis is on so-called mixture models, which have been applied in PSAs. Some of the possible disadvantages of the mixture model are addressed. In addition to quantitative analyses, qualitative analysis is also discussed briefly. To illustrate the models, two simple case studies on failure intensity and human error modeling are described. In both examples, the analysis is based on simple mixture models, which are observed to apply in PSA analyses. (orig.) (36 refs., 6 figs., 2 tabs.)
DEFF Research Database (Denmark)
Nguyen, Daniel Xuyen
This paper presents a model of trade that explains why firms wait to export and why many exporters fail. Firms face uncertain demands that are only realized after the firm enters the destination. The model retools the timing of uncertainty resolution found in productivity heterogeneity models. This retooling addresses several shortcomings. First, the imperfect correlation of demands reconciles the sales variation observed in and across destinations. Second, since demands for the firm's output are correlated across destinations, a firm can use previously realized demands to forecast unknown demands in untested destinations. The option to forecast demands causes firms to delay exporting in order to gather more information about foreign demand. Third, since uncertainty is resolved after entry, many firms enter a destination and then exit after learning that they cannot profit. This prediction reconciles
Optimizing production under uncertainty
DEFF Research Database (Denmark)
Rasmussen, Svend
This Working Paper derives criteria for optimal production under uncertainty based on the state-contingent approach (Chambers and Quiggin, 2000), and discusses potential problems involved in applying the state-contingent approach in a normative context. The analytical approach uses the concept...... of state-contingent production functions and a definition of inputs including both sorts of input, activity and allocation technology. It also analyses production decisions where production is combined with trading in state-contingent claims such as insurance contracts. The final part discusses...
International Nuclear Information System (INIS)
Robouch, P.; Arana, G.; Eguskiza, M.; Etxebarria, N.
2000-01-01
The concepts of the Guide to the Expression of Uncertainty in Measurement (GUM) for chemical measurements and the recommendations of the Eurachem document 'Quantifying Uncertainty in Analytical Methods' are applied to set up the uncertainty budget for k0-NAA. The 'universally applicable spreadsheet technique' described by Kragten is applied to the k0-NAA basic equations for the computation of uncertainties. The variance components - individual standard uncertainties - highlight the contribution and the importance of the different parameters to be taken into account. (author)
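Kragten's spreadsheet technique amounts to a one-at-a-time finite-difference uncertainty budget: perturb each input by its standard uncertainty, record the change in the result, and combine the changes in quadrature. A minimal sketch, using a hypothetical mass-over-volume measurement rather than the k0-NAA equations themselves:

```python
import math

def kragten_budget(f, x, u):
    """Kragten's numerical uncertainty budget for y = f(x).

    x: list of input values; u: their standard uncertainties.
    Returns (y, combined standard uncertainty, per-input contributions).
    """
    y = f(x)
    contrib = []
    for i in range(len(x)):
        xp = list(x)
        xp[i] += u[i]              # perturb one input by its uncertainty
        contrib.append(f(xp) - y)  # approximates u_i * dy/dx_i
    u_c = math.sqrt(sum(c * c for c in contrib))
    return y, u_c, contrib

# Hypothetical example: concentration c = m / V with m = 10.0 +/- 0.1,
# V = 2.0 +/- 0.02 (made-up values for illustration only)
y, u_c, parts = kragten_budget(lambda v: v[0] / v[1], [10.0, 2.0], [0.1, 0.02])
```

The per-input contributions are exactly the "variance components" the abstract refers to; inspecting them shows which parameter dominates the budget.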
Uncertainty, joint uncertainty, and the quantum uncertainty principle
International Nuclear Information System (INIS)
Narasimhachar, Varun; Poostindouz, Alireza; Gour, Gilad
2016-01-01
Historically, the element of uncertainty in quantum mechanics has been expressed through mathematical identities called uncertainty relations, a great many of which continue to be discovered. These relations use diverse measures to quantify uncertainty (and joint uncertainty). In this paper we use operational information-theoretic principles to identify the common essence of all such measures, thereby defining measure-independent notions of uncertainty and joint uncertainty. We find that most existing entropic uncertainty relations use measures of joint uncertainty that yield themselves to a small class of operational interpretations. Our notion relaxes this restriction, revealing previously unexplored joint uncertainty measures. To illustrate the utility of our formalism, we derive an uncertainty relation based on one such new measure. We also use our formalism to gain insight into the conditions under which measure-independent uncertainty relations can be found. (paper)
Bayesian MARS for uncertainty quantification in stochastic transport problems
International Nuclear Information System (INIS)
Stripling, Hayes F.; McClarren, Ryan G.
2011-01-01
We present a method for estimating solutions to partial differential equations with uncertain parameters using a modification of the Bayesian Multivariate Adaptive Regression Splines (BMARS) emulator. The BMARS algorithm uses Markov chain Monte Carlo (MCMC) to construct a basis of polynomial spline functions, for which derivatives and integrals are straightforward to compute. We use these calculations and a modification of the curve-fitting BMARS algorithm to search for a basis function (response surface) which, in combination with its derivatives/integrals, satisfies a governing differential equation and specified boundary conditions. We further show that this fit can be improved by enforcing a conservation or other physics-based constraint. Our results indicate that estimates of solutions to simple first-order partial differential equations (without uncertainty) can be efficiently computed with very little regression error. We then extend the method to estimate uncertainties in the solution to a pure absorber transport problem in a medium with an uncertain cross-section. We describe and compare two strategies for propagating the uncertain cross-section through the BMARS algorithm; the results from each method are in close agreement with analytic results. We discuss the scalability of the algorithm to parallel architectures and the applicability of the two strategies to larger problems with more degrees of uncertainty. (author)
Uncertainty Characterization of Reactor Vessel Fracture Toughness
International Nuclear Information System (INIS)
Li, Fei; Modarres, Mohammad
2002-01-01
To perform fracture mechanics analysis of a reactor vessel, fracture toughness (KIc) at various temperatures is necessary. In a best-estimate approach, KIc uncertainties resulting from both lack of sufficient knowledge and randomness in some of the variables of KIc must be characterized. Although it may be argued that there is only one type of uncertainty, which is lack of perfect knowledge about the subject under study, as a matter of practice KIc uncertainties can be divided into two types: aleatory and epistemic. Aleatory uncertainty is very difficult, if not impossible, to reduce; epistemic uncertainty, on the other hand, can be practically reduced. The distinction between aleatory and epistemic uncertainties facilitates decision-making under uncertainty and allows for proper propagation of uncertainties in the computation process. Typically, epistemic uncertainties representing, for example, the parameters of a model are sampled (to generate a 'snapshot', a single value of the parameters), while the totality of aleatory uncertainties is carried through the calculation as available. This paper describes an approach to account for these two types of uncertainties associated with KIc. (authors)
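The sampling scheme described here is often implemented as a two-loop ("nested") Monte Carlo: the outer loop draws an epistemic snapshot of the model parameters, and the inner loop carries the aleatory scatter for that snapshot. The distributions and numbers below are illustrative assumptions, not the paper's toughness model:

```python
import random
import statistics

# Two-loop sketch (illustrative): epistemic uncertainty on the mean of a
# quantity, aleatory (irreducible) Gaussian scatter around each sampled mean.
def two_loop(n_epistemic=200, n_aleatory=500, seed=2):
    random.seed(seed)
    means = []
    for _ in range(n_epistemic):
        mu = random.uniform(90.0, 110.0)  # epistemic: uncertain mean (snapshot)
        sigma = 5.0                       # aleatory scatter (assumed known)
        sample = [random.gauss(mu, sigma) for _ in range(n_aleatory)]
        means.append(statistics.fmean(sample))
    # Spread of the inner-loop means reflects mostly the epistemic component
    return statistics.fmean(means), statistics.pstdev(means)

m, s = two_loop()
```

The spread of the outer-loop results isolates the reducible (epistemic) component, which is exactly why the separation "facilitates decision-making under uncertainty".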
Zou, Xiao-Duan; Li, Jian-Yang; Clark, Beth Ellen; Golish, Dathon
2018-01-01
The OSIRIS-REx spacecraft, launched in September 2016, will study the asteroid Bennu and return a sample from its surface to Earth in 2023. Bennu is a near-Earth carbonaceous asteroid which will provide insight into the formation and evolution of the solar system. OSIRIS-REx will first approach Bennu in August 2018 and will study the asteroid for approximately two years before sampling. OSIRIS-REx will develop its photometric model of Bennu (candidates include Lommel-Seeliger, ROLO, McEwen, Minnaert and Akimov) with OCAMS and OVIRS during the Detailed Survey mission phase. The model developed during this phase will be used to photometrically correct the OCAMS and OVIRS data. Here we present the analysis of the error for the photometric corrections. Based on our test data sets, we find: (1) the model uncertainties are only correct when calculated with the covariance matrix, because the parameters are highly correlated; (2) no evidence that any parameter dominates in any model; (3) both the model error and the data error contribute comparably to the final correction error; (4) tests of the uncertainty module on synthetic and real data sets show that model performance depends on data coverage and data quality, giving a better understanding of how the different models behave in different cases; (5) the Lommel-Seeliger model is more reliable than the others, possibly because the simulated data are based on it, though the test on real data (SPDIF) also shows a slight advantage for Lommel-Seeliger; ROLO is not reliable for calculating Bond albedo; the uncertainty of the McEwen model is large in most cases; and Akimov behaves unphysically on the SOPIE 1 data; (6) Lommel-Seeliger is therefore the default choice, a conclusion based mainly on our tests on the SOPIE and IPDIF data.
EPA’s Web Analytics Program collects, analyzes, and provides reports on traffic, quality assurance, and customer satisfaction metrics for EPA’s website. The program uses a variety of analytics tools, including Google Analytics and CrazyEgg.
Uncertainty quantification applied to the radiological characterization of radioactive waste.
Zaffora, B; Magistris, M; Saporta, G; Chevalier, J-P
2017-09-01
This paper describes the process adopted at the European Organization for Nuclear Research (CERN) to quantify uncertainties affecting the characterization of very-low-level radioactive waste. Radioactive waste is a by-product of the operation of high-energy particle accelerators. Radioactive waste must be characterized to ensure its safe disposal in final repositories. Characterizing radioactive waste means establishing the list of radionuclides together with their activities. The estimated activity levels are compared to the limits given by the national authority of the waste disposal. The quantification of the uncertainty affecting the concentration of the radionuclides is therefore essential to estimate the acceptability of the waste in the final repository but also to control the sorting, volume reduction and packaging phases of the characterization process. The characterization method consists of estimating the activity of produced radionuclides either by experimental methods or statistical approaches. The uncertainties are estimated using classical statistical methods and uncertainty propagation. A mixed multivariate random vector is built to generate random input parameters for the activity calculations. The random vector is a robust tool to account for the unknown radiological history of legacy waste. This analytical technique is also particularly useful to generate random chemical compositions of materials when the trace element concentrations are not available or cannot be measured. The methodology was validated using a waste population of legacy copper activated at CERN. The methodology introduced here represents a first approach for the uncertainty quantification (UQ) of the characterization process of waste produced at particle accelerators. Copyright © 2017 Elsevier Ltd. All rights reserved.
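The idea of generating random chemical compositions when trace-element concentrations cannot be measured can be sketched with a Dirichlet draw (an assumed modelling choice here, implemented via gamma variates), which guarantees non-negative mass fractions that sum to one:

```python
import random

# Sketch (assumed model): draw a random composition of k constituents from a
# Dirichlet distribution, via normalized gamma variates. Larger alpha values
# concentrate mass on that constituent; all fractions stay in (0, 1).
def random_composition(alphas, seed=None):
    rng = random.Random(seed)
    g = [rng.gammavariate(a, 1.0) for a in alphas]
    s = sum(g)
    return [x / s for x in g]

# Hypothetical 3-constituent material
comp = random_composition([5.0, 3.0, 2.0], seed=7)
```

Each draw can then be fed into the activation/activity calculation, so the uncertainty in the unknown composition propagates into the estimated radionuclide activities.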
Radial propagation of turbulence in tokamaks
International Nuclear Information System (INIS)
Garbet, X.; Laurent, L.; Samain, A.
1993-12-01
It is shown in this paper that turbulence propagation can be due to toroidal or nonlinear mode coupling. An analytical analysis indicates that the toroidal coupling acts through convection while the nonlinear effects induce a diffusion. Numerical simulations suggest that toroidal propagation is usually the fastest process, except perhaps in some highly turbulent regimes. The consequence is the possibility of non-local effects on the fluctuation level and the associated transport. (authors). 7 figs., 19 refs
International Nuclear Information System (INIS)
Picard, R.R.
1989-01-01
Topics covered in this chapter include a discussion of exact results as related to nuclear materials management and accounting in nuclear facilities; propagation of error for a single measured value; propagation of error for several measured values; error propagation for materials balances; and an application of error propagation to an example of uranium hexafluoride conversion process
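The propagation-of-error formulas surveyed in such a chapter are typically the first-order Taylor expansion u(y)^2 = sum over i of (df/dx_i)^2 u(x_i)^2 for uncorrelated inputs. A generic numerical sketch, with derivatives taken by central differences (the materials-balance example is hypothetical):

```python
import math

def propagate(f, x, u, h=1e-6):
    """First-order (Taylor series) error propagation for y = f(x),
    assuming uncorrelated inputs; derivatives by central differences."""
    var = 0.0
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        dfdx = (f(xp) - f(xm)) / (2 * h)  # numerical partial derivative
        var += (dfdx * u[i]) ** 2
    return math.sqrt(var)

# Hypothetical materials-balance example: net amount = input - output,
# with standard uncertainties 0.5 and 0.3 on the two measured values
u_y = propagate(lambda v: v[0] - v[1], [120.0, 115.0], [0.5, 0.3])
```

For a difference of two measurements the combined uncertainty is sqrt(0.5^2 + 0.3^2), which the numerical routine reproduces.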
Physics of Earthquake Rupture Propagation
Xu, Shiqing; Fukuyama, Eiichi; Sagy, Amir; Doan, Mai-Linh
2018-05-01
A comprehensive understanding of earthquake rupture propagation requires the study of not only the sudden release of elastic strain energy during co-seismic slip, but also of other processes that operate at a variety of spatiotemporal scales. For example, the accumulation of the elastic strain energy usually takes decades to hundreds of years, and rupture propagation and termination modify the bulk properties of the surrounding medium that can influence the behavior of future earthquakes. To share recent findings in the multiscale investigation of earthquake rupture propagation, we held a session entitled "Physics of Earthquake Rupture Propagation" during the 2016 American Geophysical Union (AGU) Fall Meeting in San Francisco. The session included 46 poster and 32 oral presentations, reporting observations of natural earthquakes, numerical and experimental simulations of earthquake ruptures, and studies of earthquake fault friction. These presentations and discussions during and after the session suggested a need to document more formally the research findings, particularly new observations and views different from conventional ones, complexities in fault zone properties and loading conditions, the diversity of fault slip modes and their interactions, the evaluation of observational and model uncertainties, and comparison between empirical and physics-based models. Therefore, we organize this Special Issue (SI) of Tectonophysics under the same title as our AGU session, hoping to inspire future investigations. Eighteen articles (marked with "this issue") are included in this SI and grouped into the following six categories.
International Nuclear Information System (INIS)
Davis, C.B.
1987-08-01
The uncertainties of calculations of loss-of-feedwater transients at Davis-Besse Unit 1 were determined to address concerns of the US Nuclear Regulatory Commission relative to the effectiveness of feed and bleed cooling. Davis-Besse Unit 1 is a pressurized water reactor of the raised-loop Babcock and Wilcox design. A detailed, quality-assured RELAP5/MOD2 model of Davis-Besse was developed at the Idaho National Engineering Laboratory. The model was used to perform an analysis of the loss-of-feedwater transient that occurred at Davis-Besse on June 9, 1985. A loss-of-feedwater transient followed by feed and bleed cooling was also calculated. The evaluation of uncertainty was based on the comparisons of calculations and data, comparisons of different calculations of the same transient, sensitivity calculations, and the propagation of the estimated uncertainty in initial and boundary conditions to the final calculated results
Uncertainty quantification theory, implementation, and applications
Smith, Ralph C
2014-01-01
The field of uncertainty quantification is evolving rapidly because of increasing emphasis on models that require quantified uncertainties for large-scale applications, novel algorithm development, and new computational architectures that facilitate implementation of these algorithms. Uncertainty Quantification: Theory, Implementation, and Applications provides readers with the basic concepts, theory, and algorithms necessary to quantify input and response uncertainties for simulation models arising in a broad range of disciplines. The book begins with a detailed discussion of applications where uncertainty quantification is critical for both scientific understanding and policy. It then covers concepts from probability and statistics, parameter selection techniques, frequentist and Bayesian model calibration, propagation of uncertainties, quantification of model discrepancy, surrogate model construction, and local and global sensitivity analysis. The author maintains a complementary web page where readers ca...
NLO error propagation exercise: statistical results
International Nuclear Information System (INIS)
Pack, D.J.; Downing, D.J.
1985-09-01
Error propagation is the extrapolation and cumulation of uncertainty (variance) for total amounts of special nuclear material, for example uranium or 235U, that are present in a defined location at a given time. The uncertainty results from the inevitable inexactness of individual measurements of weight, uranium concentration, 235U enrichment, etc. The extrapolated and cumulated uncertainty leads directly to quantified limits of error on inventory differences (LEIDs) for such material. The NLO error propagation exercise was planned as a field demonstration of statistical error propagation methodology at the Feed Materials Production Center in Fernald, Ohio, from April 1 to July 1, 1983, in a single material balance area formed specially for the exercise. Major elements of the error propagation methodology were: variance approximation by Taylor series expansion; variance cumulation by uncorrelated primary error sources as suggested by Jaech; random-effects ANOVA model estimation of variance effects (systematic error); provision for inclusion of process variance in addition to measurement variance; and exclusion of static material. The methodology was applied to material balance area transactions from the indicated time period through a FORTRAN computer code developed specifically for this purpose on the NLO HP-3000 computer. This paper contains a complete description of the error propagation methodology and a full summary of the numerical results of applying the methodology in the field demonstration. The error propagation LEIDs did encompass the actual uranium and 235U inventory differences. Further, one can see that error propagation actually provides guidance for reducing inventory differences and LEIDs in future time periods
Uncertainty Quantification in Numerical Aerodynamics
Litvinenko, Alexander
2017-05-16
We consider the uncertainty quantification problem in aerodynamic simulations. We identify input uncertainties, classify them, suggest an appropriate statistical model and, finally, estimate the propagation of these uncertainties into the solution (pressure, velocity and density fields as well as the lift and drag coefficients). The deterministic problem under consideration is a compressible transonic Reynolds-averaged Navier-Stokes flow around an airfoil with random/uncertain data. Input uncertainties include: uncertain angle of attack, the Mach number, random perturbations in the airfoil geometry, mesh, shock location, turbulence model and parameters of this turbulence model. This problem requires efficient numerical/statistical methods since it is computationally expensive, especially for the uncertainties caused by random geometry variations, which involve a large number of variables. In the numerical section we compare five methods, including quasi-Monte Carlo quadrature, polynomial chaos with coefficients determined by sparse quadrature, a gradient-enhanced version of Kriging, radial basis functions and point-collocation polynomial chaos, in their efficiency in estimating statistics of aerodynamic performance upon random perturbation of the airfoil geometry [D. Liu et al. 2017]. For the modeling we used the TAU code, developed at DLR, Germany.
Uncertainty and conservatism in safety evaluations based on a BEPU approach
International Nuclear Information System (INIS)
Yamaguchi, A.; Mizokami, S.; Kudo, Y.; Hotta, A.
2009-01-01
The Atomic Energy Society of Japan has published the 'Standard Method for Safety Evaluation using Best Estimate Code Based on Uncertainty and Scaling Analyses with Statistical Approach' to be applied to accidents and AOOs in the safety evaluation of LWRs. In this method, hereafter referred to as the AESJ-SSE (Statistical Safety Evaluation) method, uncertainties are identified and quantified, and the best-estimate code is then combined with an evaluation of uncertainty propagation. Uncertainties are categorized into bias and variability. In general, bias is related to our state of knowledge about uncertainty objects (modeling, scaling, input data, etc.) while variability reflects stochastic features of these objects. Considering that many kinds of uncertainties in thermal-hydraulics models and experimental databases show variabilities that are strongly influenced by our state of knowledge, it seems reasonable that these variabilities are also related to the state of knowledge. The design basis events (DBEs) that are employed for licensing analyses form a main part of the given, or prior, conservatism. The regulatory acceptance criterion is also regarded as prior conservatism. In addition to these prior conservatisms, a certain amount of posterior conservatism is added while maintaining an intimate relationship with the state of knowledge. In the AESJ-SSE method, this posterior conservatism can be incorporated into the safety evaluation in a combination of the following three ways: (1) broadening the ranges of variability relevant to uncertainty objects, (2) employing more disadvantageous biases relevant to uncertainty objects and (3) adding an extra bias to the safety evaluation results. Knowing the implemented quantitative bases of the uncertainties and conservatism, the AESJ-SSE method provides a useful ground for rational decision-making. In order to seek 'the best estimation' as well as reasonably setting the analytical margin, a degree
Uncertainty analysis of dosimetry spectrum unfolding
International Nuclear Information System (INIS)
Perey, F.G.
1977-01-01
The propagation of uncertainties in the input data is analyzed for the usual dosimetry unfolding solution. A new formulation of the dosimetry unfolding problem is proposed in which the most likely value of the spectrum is obtained. The relationship of this solution to the usual one is discussed
International Nuclear Information System (INIS)
Kančev, Duško; Čepin, Marko
2012-01-01
Highlights: ► Application of an analytical unavailability model integrating test and maintenance, ageing, and test strategy. ► Ageing-data uncertainty propagation at system level assessed via Monte Carlo simulation. ► Uncertainty impact grows with the extension of the surveillance test interval. ► Calculated system unavailability dependence on two different sensitivity-study ageing databases. ► System unavailability sensitivity insights regarding specific groups of BEs as test intervals extend. - Abstract: The interest in operational lifetime extension of existing nuclear power plants is growing. Consequently, plant life management programs, considering safety component ageing, are being developed and employed. Ageing represents a gradual degradation of the physical properties and functional performance of different components, consequently implying their reduced availability. Analyses made in the direction of nuclear power plant lifetime extension are based upon component ageing management programs. On the other hand, the large uncertainties of the ageing parameters, as well as the uncertainties associated with most reliability data collections, are widely acknowledged. This paper addresses the uncertainty and sensitivity analyses conducted utilizing a previously developed age-dependent unavailability model, integrating the effects of test and maintenance activities, for a selected stand-by safety system in a nuclear power plant. The most important problem is the lack of data concerning the effects of ageing, as well as the relatively high uncertainty associated with these data, which would correspond to more detailed modelling of ageing. A standard Monte Carlo simulation was coded for the purpose of this paper and utilized in the process of assessing the propagation of component ageing parameter uncertainty to the system level. The obtained results from the uncertainty analysis indicate the extent to which the uncertainty of the selected
Experimental uncertainty estimation and statistics for data having interval uncertainty.
Energy Technology Data Exchange (ETDEWEB)
Kreinovich, Vladik (Applied Biomathematics, Setauket, New York); Oberkampf, William Louis (Applied Biomathematics, Setauket, New York); Ginzburg, Lev (Applied Biomathematics, Setauket, New York); Ferson, Scott (Applied Biomathematics, Setauket, New York); Hajagos, Janos (Applied Biomathematics, Setauket, New York)
2007-05-01
This report addresses the characterization of measurements that include epistemic uncertainties in the form of intervals. It reviews the application of basic descriptive statistics to data sets which contain intervals rather than exclusively point estimates. It describes algorithms to compute various means, the median and other percentiles, variance, interquartile range, moments, confidence limits, and other important statistics and summarizes the computability of these statistics as a function of sample size and characteristics of the intervals in the data (degree of overlap, size and regularity of widths, etc.). It also reviews the prospects for analyzing such data sets with the methods of inferential statistics such as outlier detection and regressions. The report explores the tradeoff between measurement precision and sample size in statistical results that are sensitive to both. It also argues that an approach based on interval statistics could be a reasonable alternative to current standard methods for evaluating, expressing and propagating measurement uncertainties.
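For data carrying interval (epistemic) uncertainty, some of the descriptive statistics reviewed in such work are easy to bound: the sample mean, for instance, is bracketed by the mean of the lower endpoints and the mean of the upper endpoints. A minimal sketch with made-up readings:

```python
def interval_mean(intervals):
    """Bounds on the sample mean when each measurement is an interval
    [lo, hi]: the true mean lies between the mean of the lower endpoints
    and the mean of the upper endpoints."""
    n = len(intervals)
    lo = sum(a for a, _ in intervals) / n
    hi = sum(b for _, b in intervals) / n
    return lo, hi

# Hypothetical data: three readings known only to within an interval
lo, hi = interval_mean([(1.0, 1.2), (0.9, 1.1), (1.05, 1.25)])
```

Other statistics are harder: tight bounds on, say, the variance of interval data can require combinatorial search over endpoint combinations, which is part of why computability as a function of sample size and interval overlap matters.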
Transformation of Bayesian posterior distribution into a basic analytical distribution
International Nuclear Information System (INIS)
Jordan Cizelj, R.; Vrbanic, I.
2002-01-01
Bayesian estimation is a well-known approach that is widely used in Probabilistic Safety Analyses for the estimation of input model reliability parameters, such as component failure rates or probabilities of failure on demand. In this approach, a prior distribution, which contains some generic knowledge about a parameter, is combined with a likelihood function, which contains plant-specific data about the parameter. Depending on the type of prior distribution, the resulting posterior distribution can be estimated numerically or analytically. In many instances only a numerical Bayesian integration can be performed. In such a case the posterior is provided in the form of a tabular discrete distribution. On the other hand, it is much more convenient for a parameter's uncertainty distribution that is to be input into a PSA model to be provided in the form of some basic analytical probability distribution, such as a lognormal, gamma or beta distribution. One reason is that this enables much more convenient propagation of parameter uncertainties through the model up to the so-called top events, such as plant system unavailability or core damage frequency. Additionally, software tools used to run PSA models often require that a parameter's uncertainty distribution be defined as one of several allowed basic types of distributions. In such a case the posterior distribution that came as a product of Bayesian estimation needs to be transformed into an appropriate basic analytical form. In this paper, some approaches to the transformation of a posterior distribution into a basic probability distribution are proposed and discussed. They are illustrated by an example from the NPP Krsko PSA model. (author)
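One simple transformation of a tabular posterior into a basic analytical form is moment matching: compute the mean m and variance v of the discrete distribution and solve for beta parameters alpha = m*k and beta = (1-m)*k with k = m(1-m)/v - 1. This is a generic sketch of that idea (the tabular posterior below is hypothetical), not necessarily the specific approach of the paper:

```python
def beta_from_tabular(xs, ps):
    """Fit a beta distribution to a tabular discrete posterior over [0, 1]
    by matching the first two moments."""
    total = sum(ps)                                        # normalize weights
    m = sum(x * p for x, p in zip(xs, ps)) / total         # posterior mean
    v = sum((x - m) ** 2 * p for x, p in zip(xs, ps)) / total  # variance
    k = m * (1 - m) / v - 1
    return m * k, (1 - m) * k                              # alpha, beta

# Hypothetical tabular posterior over a failure probability
a, b = beta_from_tabular([0.01, 0.02, 0.03], [0.25, 0.5, 0.25])
```

By construction the fitted beta reproduces the tabular mean, a/(a+b) = m, so the transformed distribution can be fed to PSA tools without shifting the point estimate.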
Laser beam propagation in non-linearly absorbing media
CSIR Research Space (South Africa)
Forbes, A
2006-08-01
Full Text Available Many analytical techniques exist to explore the propagation of certain laser beams in free space, or in a linearly absorbing medium. When the medium is nonlinearly absorbing the propagation must be described by an iterative process using the well...
Procedures for uncertainty and sensitivity analysis in repository performance assessment
International Nuclear Information System (INIS)
Poern, K.; Aakerlund, O.
1985-10-01
The objective of the project was mainly a literature study of available methods for the treatment of parameter uncertainty propagation and sensitivity aspects in complete models such as those concerning geologic disposal of radioactive waste. The study, which has run parallel with the development of a code package (PROPER) for computer-assisted analysis of functions, also aims at the choice of accurate, cost-effective methods for uncertainty and sensitivity analysis. Such a choice depends on several factors, like the number of input parameters, the capacity of the model and the computer resources required to use the model. Two basic approaches are addressed in the report. In one of these the model of interest is directly simulated by an efficient sampling technique to generate an output distribution. In the other basic method the model is replaced by an approximating analytical response surface, which is then used in the sampling phase or in moment matching to generate the output distribution. Both approaches are illustrated by simple examples in the report. (author)
Uncertainty and global climate change research
Energy Technology Data Exchange (ETDEWEB)
Tonn, B.E. [Oak Ridge National Lab., TN (United States); Weiher, R. [National Oceanic and Atmospheric Administration, Boulder, CO (United States)
1994-06-01
The Workshop on Uncertainty and Global Climate Change Research was held March 22-23, 1994, in Knoxville, Tennessee. This report summarizes the results and recommendations of the workshop. The purpose of the workshop was to examine in depth the concept of uncertainty. From an analytical point of view, uncertainty is a central feature of global climate science, economics and decision making. The magnitude and complexity of the uncertainty surrounding global climate change have made it quite difficult to answer even the most simple and important of questions: whether potentially costly action is required now to ameliorate adverse consequences of global climate change, or whether delay is warranted to gain better information to reduce uncertainties. A major conclusion of the workshop is that multidisciplinary integrated assessment using decision-analytic techniques as a foundation is key to addressing global change policy concerns. First, uncertainty must be dealt with explicitly and rigorously since it is, and will continue to be, a key feature of analysis and recommendations on policy questions for years to come. Second, key policy questions and variables need to be explicitly identified and prioritized, and their uncertainty characterized, to guide the entire scientific, modeling, and policy analysis process. Multidisciplinary integrated assessment techniques and value-of-information methodologies are best suited for this task. In terms of timeliness and relevance of developing and applying decision-analytic techniques, the global change research and policy communities are moving rapidly toward integrated approaches to research design and policy analysis.
The quark propagator in a covariant gauge
International Nuclear Information System (INIS)
Bonnet, F.D.R.; Leinweber, D.B.; Williams, A.G.; Zanotti, J.M.
2000-01-01
Full text: The quark propagator is one of the fundamental building blocks of QCD. Results strongly depend on the ansatz for the propagator. Direct simulations of QCD on a space time lattice can provide guidance and constraints on the analytic structure of the quark propagator. On the lattice the infrared and asymptotic behaviour of the quark propagator is of particular interest since it is a reflection of the accuracy of the discretised quark action. In the deep infrared region, artefacts associated with the finite size of the lattice spacing become small. This is the most interesting region as nonperturbative physics lies here. However, the ultraviolet behaviour at large momentum of the propagator will in general strongly deviate from the correct continuum behaviour. This behaviour will be action dependent. Some interesting progress has been made in improving the ultraviolet behaviour of the propagator. A method, recently developed and referred to as tree-level correction, consists of using the knowledge of the tree-level behaviour to eliminate the obvious lattice artefacts. Tree-level correction represents a crucial step in extracting meaningful results for the mass function and the renormalisation function outside of the deep infrared region. The mass function is particularly interesting as it provides insights into the constituent quark mass as a measure of the nonperturbative physics. In this poster I will present results from the analytic structure of the propagator in recent lattice studies for a variety of fermion actions in lattice QCD. I will also present the new ratio method used to tree-level correct these quark propagators
Gilli, L.
2013-01-01
This thesis presents the development and the implementation of an uncertainty propagation algorithm based on the concept of spectral expansion. The first part of the thesis is dedicated to the study of uncertainty propagation methodologies and to the analysis of spectral techniques. The concepts
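A toy illustration of sampling-free propagation of the spectral/quadrature type: for a single standard normal input, the moments of Y = f(X) can be obtained from a 3-point Gauss-Hermite rule (nodes 0 and +/-sqrt(3), weights 2/3 and 1/6), which is exact for polynomial f up to degree 5. This one-dimensional caricature is only meant to convey the flavor of spectral techniques, not the thesis's actual algorithm:

```python
import math

# 3-point Gauss-Hermite rule for X ~ N(0, 1) (probabilists' convention):
# nodes -sqrt(3), 0, +sqrt(3) with weights 1/6, 2/3, 1/6.
NODES = [(-math.sqrt(3.0), 1 / 6), (0.0, 2 / 3), (math.sqrt(3.0), 1 / 6)]

def moments(f):
    """Mean and variance of f(X) by quadrature instead of sampling."""
    mean = sum(w * f(x) for x, w in NODES)
    var = sum(w * (f(x) - mean) ** 2 for x, w in NODES)
    return mean, var

m, v = moments(lambda x: x * x)  # Y = X^2: exact mean 1, variance 2
```

No random samples are drawn at all; the cost is a fixed, small number of deterministic model evaluations, which is the appeal of spectral expansion over Monte Carlo.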
Methodology for characterizing modeling and discretization uncertainties in computational simulation
Energy Technology Data Exchange (ETDEWEB)
ALVIN,KENNETH F.; OBERKAMPF,WILLIAM L.; RUTHERFORD,BRIAN M.; DIEGERT,KATHLEEN V.
2000-03-01
This research effort focuses on methodology for quantifying the effects of model uncertainty and discretization error on computational modeling and simulation. The work is directed towards developing methodologies which treat model form assumptions within an overall framework for uncertainty quantification, for the purpose of developing estimates of total prediction uncertainty. The present effort consists of work in three areas: framework development for sources of uncertainty and error in the modeling and simulation process which impact model structure; model uncertainty assessment and propagation through Bayesian inference methods; and discretization error estimation within the context of non-deterministic analysis.
A methodology for uncertainty analysis of reference equations of state
DEFF Research Database (Denmark)
Cheung, Howard; Frutiger, Jerome; Bell, Ian H.
We present a detailed methodology for the uncertainty analysis of reference equations of state (EOS) based on Helmholtz energy. In recent years there has been an increased interest in uncertainties of property data and process models of thermal systems. In the literature there are various...... for uncertainty analysis is suggested as a tool for EOS. The uncertainties of the EOS properties are calculated from the experimental values and the EOS model structure through the parameter covariance matrix and subsequent linear error propagation. This allows reporting the uncertainty range (95% confidence...
International Nuclear Information System (INIS)
Landsberg, P.T.
1990-01-01
This paper explores how the quantum mechanics uncertainty relation can be considered to result from measurements. A distinction is drawn between the uncertainties obtained by scrutinising experiments and the standard deviation type of uncertainty definition used in quantum formalism. (UK)
An analysis of combined standard uncertainty for radiochemical measurements of environmental samples
International Nuclear Information System (INIS)
Berne, A.
1996-01-01
It is anticipated that future data acquisitions intended for use in radiological risk assessments will require the incorporation of uncertainty analysis. Often, only one aliquot of the sample is taken and a single determination is made. Under these circumstances, the total uncertainty is calculated using the "propagation of errors" approach. However, there is no agreement in the radioanalytical community as to the exact equations to use. The Quality Assurance/Metrology Division of the Environmental Measurements Laboratory has developed a systematic process to compute uncertainties in constituent components of the analytical procedure, as well as the combined standard uncertainty (CSU). The equations for computation are presented here, with examples of their use. They have also been incorporated into a code for use in the spreadsheet application QuattroPro™. Using the spreadsheet with appropriate inputs permits an analysis of the variations in the CSU as a function of several different variables. The relative importance of the "counting uncertainty" can also be ascertained
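For the common case of independent multiplicative components, the combined standard uncertainty reduces to a quadrature sum of relative uncertainties. The sketch below is a generic illustration of that rule; the component values (a 10 Bq result, tracer-yield and calibration contributions) are hypothetical, not the laboratory's actual equations.

```python
import math

def combined_standard_uncertainty(value, rel_components):
    """Combine independent relative standard uncertainties in quadrature
    (first-order error propagation for a product/quotient model) and
    scale by the measured value."""
    return value * math.sqrt(sum(u * u for u in rel_components))

# Hypothetical activity result of 10 Bq with Poisson counting statistics
# (sqrt(N)/N), plus assumed tracer-yield and calibration contributions.
counts = 400
u_counting = math.sqrt(counts) / counts          # 0.05 relative
csu = combined_standard_uncertainty(10.0, [u_counting, 0.02, 0.01])
```

Varying `counts` in such a spreadsheet-style calculation shows directly how the relative importance of the counting uncertainty changes with the other components.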
Bruce, William J; Maxwell, E A; Sneddon, I N
1963-01-01
Analytic Trigonometry details the fundamental concepts and underlying principle of analytic geometry. The title aims to address the shortcomings in the instruction of trigonometry by considering basic theories of learning and pedagogy. The text first covers the essential elements from elementary algebra, plane geometry, and analytic geometry. Next, the selection tackles the trigonometric functions of angles in general, basic identities, and solutions of equations. The text also deals with the trigonometric functions of real numbers. The fifth chapter details the inverse trigonometric functions
Federal Laboratory Consortium — The Analytical Lab specializes in Oil and Hydraulic Fluid Analysis, Identification of Unknown Materials, Engineering Investigations, Qualification Testing (to support...
Evaluating measurement uncertainty in fluid phase equilibrium calculations
van der Veen, Adriaan M. H.
2018-04-01
The evaluation of measurement uncertainty in accordance with the ‘Guide to the expression of uncertainty in measurement’ (GUM) has not yet become widespread in physical chemistry. With only the law of the propagation of uncertainty from the GUM, many of these uncertainty evaluations would be cumbersome, as models are often non-linear and require iterative calculations. The methods from GUM supplements 1 and 2 enable the propagation of uncertainties under most circumstances. Experimental data in physical chemistry are used, for example, to derive reference property data and support trade—all applications where measurement uncertainty plays an important role. This paper aims to outline how the methods for evaluating and propagating uncertainty can be applied to some specific cases with a wide impact: deriving reference data from vapour pressure data, a flash calculation, and the use of an equation-of-state to predict the properties of both phases in a vapour-liquid equilibrium. The three uncertainty evaluations demonstrate that the methods of GUM and its supplements are a versatile toolbox that enable us to evaluate the measurement uncertainty of physical chemical measurements, including the derivation of reference data, such as the equilibrium thermodynamical properties of fluids.
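The GUM Supplement 1 approach can be sketched generically: sample the inputs from their assigned PDFs, evaluate the (possibly non-linear) model, and summarize the output distribution. The Antoine-type vapour-pressure model and its coefficients below are illustrative assumptions, not data from the paper.

```python
import random
import statistics

def gum_s1(model, inputs, n=20_000, seed=1):
    """GUM Supplement 1 style propagation: draw the inputs from their
    PDFs (all Gaussian here for simplicity), evaluate the model, and
    summarize the output distribution. `inputs` maps each argument
    name to (mean, standard uncertainty)."""
    rng = random.Random(seed)
    ys = [model(**{k: rng.gauss(m, s) for k, (m, s) in inputs.items()})
          for _ in range(n)]
    return statistics.fmean(ys), statistics.stdev(ys)

# Illustrative Antoine-type vapour-pressure model with uncertain
# coefficients A, B and temperature T (names and values are assumptions).
p_mean, p_u = gum_s1(lambda A, B, T: 10.0 ** (A - B / T),
                     {"A": (5.0, 0.01), "B": (1200.0, 5.0), "T": (300.0, 0.1)})
```

Because the model is evaluated directly, no linearization is needed, which is exactly why the supplement methods suit the iterative, non-linear calculations described above.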
International Nuclear Information System (INIS)
Kozine, Igor O.; Utkin, Lev V.
2004-01-01
The paper describes an approach to representing, aggregating and propagating aleatory and epistemic uncertainty through computational models. The framework for the approach employs the theory of imprecise coherent probabilities. The approach is exemplified by a simple algebraic system, the inputs of which are uncertain. Six different uncertainty situations are considered, including mixtures of epistemic and aleatory uncertainty
Semiclassical propagation: Hilbert space vs. Wigner representation
Gottwald, Fabian; Ivanov, Sergei D.
2018-03-01
A unified viewpoint on the van Vleck and Herman-Kluk propagators in Hilbert space and their recently developed counterparts in Wigner representation is presented. Based on this viewpoint, the Wigner Herman-Kluk propagator is conceptually the most general one. Nonetheless, the respective semiclassical expressions for expectation values in terms of the density matrix and the Wigner function are mathematically proven here to coincide. The only remaining difference is a mere technical flexibility of the Wigner version in choosing the Gaussians' width for the underlying coherent states beyond minimal uncertainty. This flexibility is investigated numerically on prototypical potentials and it turns out to provide neither qualitative nor quantitative improvements. Given the aforementioned generality, utilizing the Wigner representation for semiclassical propagation thus leads to the same performance as employing the respective most-developed (Hilbert-space) methods for the density matrix.
Error propagation analysis for a sensor system
International Nuclear Information System (INIS)
Yeater, M.L.; Hockenbury, R.W.; Hawkins, J.; Wilkinson, J.
1976-01-01
As part of a program to develop reliability methods for operational use with reactor sensors and protective systems, error propagation analyses are being made for each model. An example is a sensor system computer simulation model, in which the sensor system signature is convoluted with a reactor signature to show the effect of each in revealing or obscuring information contained in the other. The error propagation analysis models the system and signature uncertainties and sensitivities, whereas the simulation models the signatures and by extensive repetitions reveals the effect of errors in various reactor input or sensor response data. In the approach for the example presented, the errors accumulated by the signature (set of ''noise'' frequencies) are successively calculated as it is propagated stepwise through a system comprised of sensor and signal processing components. Additional modeling steps include a Fourier transform calculation to produce the usual power spectral density representation of the product signature, and some form of pattern recognition algorithm
The uncertainties in estimating measurement uncertainties
International Nuclear Information System (INIS)
Clark, J.P.; Shull, A.H.
1994-01-01
All measurements include some error. Whether measurements are used for accountability, environmental programs or process support, they are of little value unless accompanied by an estimate of the measurement's uncertainty. This fact is often overlooked by the individuals who need measurements to make decisions. This paper discusses the concepts of measurement, measurement errors (accuracy or bias and precision or random error), physical and error models, measurement control programs, examples of measurement uncertainty, and uncertainty as related to measurement quality. Measurements are comparisons of unknowns to knowns, estimates of some true value plus uncertainty, and are no better than the standards to which they are compared. Direct comparisons of unknowns that match the composition of known standards will normally have small uncertainties. In the real world, measurements usually involve indirect comparisons of significantly different materials (e.g., measuring a physical property of a chemical element in a sample having a matrix that differs significantly from that of the calibration standards). Consequently, there are many sources of error involved in measurement processes that can affect the quality of a measurement and its associated uncertainty. How the uncertainty estimates are determined and what they mean is as important as the measurement itself. The process of calculating the uncertainty of a measurement itself has uncertainties that must be handled correctly. Examples of chemistry laboratory measurements are reviewed in this report and recommendations made for improving measurement uncertainties
Photon Propagation through Linearly Active Dimers
Directory of Open Access Journals (Sweden)
José Delfino Huerta Morales
2017-06-01
Full Text Available We provide an analytic propagator for non-Hermitian dimers showing linear gain or losses in the quantum regime. In particular, we focus on experimentally feasible realizations of the PT -symmetric dimer and provide their mean photon number and second order two-point correlation. We study the propagation of vacuum, single photon spatially-separable, and two-photon spatially-entangled states. We show that each configuration produces a particular signature that might signal their possible uses as photon switches, semi-classical intensity-tunable sources, or spatially entangled sources to mention a few possible applications.
Summary of existing uncertainty methods
International Nuclear Information System (INIS)
Glaeser, Horst
2013-01-01
A summary of existing and most used uncertainty methods is presented, and their main features are compared. One of these methods is the order statistics method based on Wilks' formula. It is applied in safety research as well as in licensing. This method was first proposed by GRS for use in deterministic safety analysis, and is now used by many organisations world-wide. Its advantage is that the number of potential uncertain input and output parameters is not limited to a small number. Such a limitation was necessary for the first demonstration of the Code Scaling, Applicability and Uncertainty (CSAU) method by the United States Nuclear Regulatory Commission (USNRC). They did not apply Wilks' formula in their statistical method propagating input uncertainties to obtain the uncertainty of a single output variable, like peak cladding temperature. A Phenomena Identification and Ranking Table (PIRT) was set up in order to limit the number of uncertain input parameters, and consequently, the number of calculations to be performed. Another purpose of such a PIRT process is to identify the most important physical phenomena which a computer code should be able to calculate. The validation of the code should be focused on the identified phenomena. Response surfaces are used in some applications, replacing the computer code for performing a high number of calculations. The second well-known uncertainty method is the Uncertainty Methodology Based on Accuracy Extrapolation (UMAE) and its follow-up, the 'Code with the Capability of Internal Assessment of Uncertainty' (CIAU), developed by the University of Pisa. Unlike the statistical approaches, the CIAU compares experimental data with calculation results. It does not consider uncertain input parameters. Therefore, the CIAU is highly dependent on the experimental database. The accuracy gained from the comparison between experimental data and calculated results is extrapolated to obtain the uncertainty of the system code predictions
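The run count implied by Wilks' formula can be sketched in a few lines. For a one-sided tolerance limit, the largest of N sampled code outputs bounds a fraction beta of the output distribution with confidence gamma once 1 - beta**N >= gamma. This is a generic textbook illustration, not GRS software.

```python
import math

def wilks_one_sided(beta=0.95, gamma=0.95):
    """Smallest number of code runs N such that the maximum of N random
    runs is a one-sided upper tolerance limit covering fraction `beta`
    of the output distribution with confidence `gamma` (first-order
    Wilks formula)."""
    # Solve 1 - beta**N >= gamma for the smallest integer N.
    return math.ceil(math.log(1.0 - gamma) / math.log(beta))

print(wilks_one_sided())            # 59 runs for a 95%/95% limit
print(wilks_one_sided(gamma=0.99))  # 90 runs for a 95%/99% limit
```

The key property noted in the abstract is visible here: N depends only on beta and gamma, not on how many uncertain input or output parameters the code has.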
A Study of Malware Propagation via Online Social Networking
Faghani, Mohammad Reza; Nguyen, Uyen Trang
The popularity of online social networks (OSNs) has attracted malware creators who use OSNs as a platform to propagate automated worms from one user's computer to another's. However, the topic of malware propagation in OSNs has only recently been investigated. In this chapter, we discuss recent advances on the topic of malware propagation by way of online social networking. In particular, we present three malware propagation techniques in OSNs, namely cross-site scripting (XSS), Trojan and clickjacking types, and describe their characteristics via analytical models and simulations.
Uncertainty Quantification with Applications to Engineering Problems
DEFF Research Database (Denmark)
Bigoni, Daniele
in measurements, predictions and manufacturing, and we can say that any dynamical system used in engineering is subject to some of these uncertainties. The first part of this work presents an overview of the mathematical framework used in Uncertainty Quantification (UQ) analysis and introduces the spectral tensor...... and thus the UQ analysis of the associated systems will benefit greatly from the application of methods which require few function evaluations. We first consider the propagation of the uncertainty and the sensitivity analysis of the non-linear dynamics of railway vehicles with suspension components whose......-scale problems, where efficient methods are necessary with today’s computational resources. The outcome of this work was also the creation of several freely available Python modules for Uncertainty Quantification, which are listed and described in the appendix....
International Nuclear Information System (INIS)
Verma, Surendra P.; Andaverde, Jorge; Santoyo, E.
2006-01-01
We used the error propagation theory to calculate uncertainties in static formation temperature estimates in geothermal and petroleum wells from three widely used methods (line-source or Horner method; spherical and radial heat flow method; and cylindrical heat source method). Although these methods commonly use an ordinary least-squares linear regression model considered in this study, we also evaluated two variants of a weighted least-squares linear regression model for the actual relationship between the bottom-hole temperature and the corresponding time functions. Equations based on the error propagation theory were derived for estimating uncertainties in the time function of each analytical method. These uncertainties in conjunction with those on bottom-hole temperatures were used to estimate individual weighting factors required for applying the two variants of the weighted least-squares regression model. Standard deviations and 95% confidence limits of intercept were calculated for both types of linear regressions. Applications showed that static formation temperatures computed with the spherical and radial heat flow method were generally greater (at the 95% confidence level) than those from the other two methods under study. When typical measurement errors of 0.25 h in time and 5 deg. C in bottom-hole temperature were assumed for the weighted least-squares model, the uncertainties in the estimated static formation temperatures were greater than those for the ordinary least-squares model. However, if these errors were smaller (about 1% in time and 0.5% in temperature measurements), the weighted least-squares linear regression model would generally provide smaller uncertainties for the estimated temperatures than the ordinary least-squares linear regression model. Therefore, the weighted model would be statistically correct and more appropriate for such applications. We also suggest that at least 30 precise and accurate BHT and time measurements along with
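The weighted least-squares variant described above can be sketched generically: each point gets a weight w_i = 1/sigma_i^2 obtained from the propagated uncertainties in the time function and bottom-hole temperature, and the intercept of the fitted line estimates the static formation temperature. The numbers below are hypothetical, not data from the wells studied.

```python
def weighted_linfit(x, y, w):
    """Weighted least-squares fit of y = a + b*x with weights
    w_i = 1/sigma_i^2. Returns (a, b, var_a); when x is the method's
    time function (which tends to zero at infinite shut-in time), the
    intercept a estimates the static formation temperature."""
    S = sum(w)
    Sx = sum(wi * xi for wi, xi in zip(w, x))
    Sy = sum(wi * yi for wi, yi in zip(w, y))
    Sxx = sum(wi * xi * xi for wi, xi in zip(w, x))
    Sxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
    d = S * Sxx - Sx * Sx
    b = (S * Sxy - Sx * Sy) / d
    a = (Sy - b * Sx) / S
    return a, b, Sxx / d   # Sxx/d is the variance of the intercept

# Hypothetical bottom-hole temperatures (deg C) vs. a Horner-type time
# function; equal weights reduce this to ordinary least squares.
a, b, var_a = weighted_linfit([0.1, 0.2, 0.4], [118.0, 116.1, 112.0],
                              [1.0, 1.0, 1.0])
```

With measurement-derived weights instead of equal ones, `var_a` gives the intercept variance from which the 95% confidence limits discussed above follow.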
Modelling the gluon propagator
Energy Technology Data Exchange (ETDEWEB)
Leinweber, D.B.; Parrinello, C.; Skullerud, J.I.; Williams, A.G
1999-03-01
Scaling of the Landau gauge gluon propagator calculated at β = 6.0 and at β = 6.2 is demonstrated. A variety of functional forms for the gluon propagator calculated on a large (32³ × 64) lattice at β = 6.0 are investigated.
Salomons, E.; Polinder, H.; Lohman, W.; Zhou, H.; Borst, H.
2009-01-01
A new engineering model for sound propagation in cities is presented. The model is based on numerical and experimental studies of sound propagation between street canyons. Multiple reflections in the source canyon and the receiver canyon are taken into account in an efficient way, while weak
Uncertainty in social dilemmas
Kwaadsteniet, Erik Willem de
2007-01-01
This dissertation focuses on social dilemmas, and more specifically, on environmental uncertainty in these dilemmas. Real-life social dilemma situations are often characterized by uncertainty. For example, fishermen mostly do not know the exact size of the fish population (i.e., resource size uncertainty). Several researchers have therefore asked themselves the question as to how such uncertainty influences people’s choice behavior. These researchers have repeatedly concluded that uncertainty...
Believable statements of uncertainty and believable science
International Nuclear Information System (INIS)
Lindstrom, R.M.
2017-01-01
Nearly 50 years ago, two landmark papers appeared that should have cured the problem of ambiguous uncertainty statements in published data. Eisenhart's paper in Science called for statistically meaningful numbers, and Currie's Analytical Chemistry paper revealed the wide range in common definitions of detection limit. Confusion and worse can result when uncertainties are misinterpreted or ignored. The recent stories of cold fusion, variable radioactive decay, and piezonuclear reactions provide cautionary examples in which prior probability has been neglected. We show examples from our laboratory and others to illustrate the fact that uncertainty depends on both statistical and scientific judgment. (author)
Large-uncertainty intelligent states for angular momentum and angle
International Nuclear Information System (INIS)
Goette, Joerg B; Zambrini, Roberta; Franke-Arnold, Sonja; Barnett, Stephen M
2005-01-01
The equality in the uncertainty principle for linear momentum and position is obtained for states which also minimize the uncertainty product. However, in the uncertainty relation for angular momentum and angular position both sides of the inequality are state dependent and therefore the intelligent states, which satisfy the equality, do not necessarily give a minimum for the uncertainty product. In this paper, we highlight the difference between intelligent states and minimum uncertainty states by investigating a class of intelligent states which obey the equality in the angular uncertainty relation while having an arbitrarily large uncertainty product. To develop an understanding for the uncertainties of angle and angular momentum for the large-uncertainty intelligent states we compare exact solutions with analytical approximations in two limiting cases
Uncertainty for Part Density Determination: An Update
Energy Technology Data Exchange (ETDEWEB)
Valdez, Mario Orlando [Los Alamos National Laboratory]
2016-12-14
Accurate and precise density measurements by hydrostatic weighing require the use of an analytical balance, configured with a suspension system, to measure the weight of a part both in water and in air. Additionally, the densities of these media (water and air) must be precisely known for the part density determination. To validate the accuracy and precision of these measurements, uncertainty statements are required. The work in this report is a revision of an original report written more than a decade ago, applying the principles and guidelines of the Guide to the Expression of Uncertainty in Measurement (GUM) to determine the part density uncertainty through sensitivity analysis. In this work, updated derivations are provided; an original example is revised with the updated derivations; and an appendix is provided, devoted solely to uncertainty evaluations using Monte Carlo techniques, specifically the NIST Uncertainty Machine, as a viable alternative method.
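The GUM sensitivity-analysis approach can be sketched numerically: perturb each input, estimate the sensitivity coefficient c_i = df/dx_i by central differences, and combine as u_c^2 = sum (c_i u_i)^2. The density model, weighings, and uncertainties below are illustrative assumptions, not values from the report.

```python
def part_density(m_air, m_water, rho_w=0.9982, rho_a=0.0012):
    """Hydrostatic weighing: density of a part (g/cm^3) from its weight
    in air and in water (grams); rho_w and rho_a are assumed water and
    air densities."""
    return m_air * (rho_w - rho_a) / (m_air - m_water) + rho_a

def gum_combined_u(f, x, u, h=1e-6):
    """First-order GUM propagation with numerical sensitivity
    coefficients c_i = df/dx_i, combined for independent inputs as
    u_c^2 = sum (c_i * u_i)^2."""
    total = 0.0
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        c = (f(*xp) - f(*xm)) / (2.0 * h)
        total += (c * u[i]) ** 2
    return total ** 0.5

# Hypothetical weighings: 50 g in air, 45 g in water, 1 mg standard
# uncertainty on each weighing.
u_rho = gum_combined_u(part_density, [50.0, 45.0], [0.001, 0.001])
```

A Monte Carlo evaluation of the same model, as in the report's appendix, would sample the inputs instead of linearizing, and should agree closely for small input uncertainties like these.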
Results from the Application of Uncertainty Methods in the CSNI Uncertainty Methods Study (UMS)
International Nuclear Information System (INIS)
Glaeser, H.
2008-01-01
Within licensing procedures there is the incentive to replace the conservative requirements for code application by a 'best estimate' concept supplemented by an uncertainty analysis to account for predictive uncertainties of code results. Methods have been developed to quantify these uncertainties. The Uncertainty Methods Study (UMS) Group, following a mandate from CSNI, has compared five methods for calculating the uncertainty in the predictions of advanced 'best estimate' thermal-hydraulic codes. Most of the methods identify and combine input uncertainties. The major differences between the predictions of the methods came from the choice of uncertain parameters and the quantification of the input uncertainties, i.e. the wideness of the uncertainty ranges. Therefore, suitable experimental and analytical information has to be selected to specify these uncertainty ranges or distributions. After the closure of the Uncertainty Methods Study (UMS) and after the report was issued, comparison calculations of experiment LSTF-SB-CL-18 were performed by the University of Pisa using different versions of the RELAP 5 code. It turned out that the version used by two of the participants calculated a 170 K higher peak clad temperature compared with other versions using the same input deck. This may contribute to the differences of the upper limit of the uncertainty ranges.
Burdette, A C
1971-01-01
Analytic Geometry covers several fundamental aspects of analytic geometry needed for advanced subjects, including calculus.This book is composed of 12 chapters that review the principles, concepts, and analytic proofs of geometric theorems, families of lines, the normal equation of the line, and related matters. Other chapters highlight the application of graphing, foci, directrices, eccentricity, and conic-related topics. The remaining chapters deal with the concept polar and rectangular coordinates, surfaces and curves, and planes.This book will prove useful to undergraduate trigonometric st
Directory of Open Access Journals (Sweden)
F Dabbagh Kashani
2013-12-01
Full Text Available Study of beam propagation behavior through oceanic media is a challenging subject. In this paper, based on the generalized Collins integral, the mean irradiance profile of a Gaussian laser beam propagating through the ocean is investigated. Power In Special Bucket (PIB) is calculated. Using analytical expressions and calculating seawater transmission, the effects of absorption and scattering on beam propagation are studied. Based on these formulae, propagation in the ocean and in the atmosphere is compared. The effects of some optical and environmental specifications, such as divergence angle and chlorophyll concentration in seawater, on beam propagation are studied using the mean irradiance, PIB, and the analytical formula for oceanic transmission. The calculated results are shown graphically.
Directory of Open Access Journals (Sweden)
Griffin Patrick
2017-01-01
Full Text Available A rigorous treatment of the uncertainty in the underlying nuclear data on silicon displacement damage metrics is presented. The uncertainty in the cross sections and recoil atom spectra are propagated into the energy-dependent uncertainty contribution in the silicon displacement kerma and damage energy using a Total Monte Carlo treatment. An energy-dependent covariance matrix is used to characterize the resulting uncertainty. A strong correlation between different reaction channels is observed in the high energy neutron contributions to the displacement damage metrics which supports the necessity of using a Monte Carlo based method to address the nonlinear nature of the uncertainty propagation.
Uncertainty analysis in Monte Carlo criticality computations
International Nuclear Information System (INIS)
Qi Ao
2011-01-01
Highlights: ► Two types of uncertainty methods for k_eff Monte Carlo computations are examined. ► The sampling method has the least restrictions on perturbations but demands the most computing resources. ► The analytical method is limited to small perturbations on material properties. ► Practicality relies on efficiency, multiparameter applicability and data availability. - Abstract: Uncertainty analysis is imperative for nuclear criticality risk assessments when using Monte Carlo neutron transport methods to predict the effective neutron multiplication factor (k_eff) for fissionable material systems. For the validation of Monte Carlo codes for criticality computations against benchmark experiments, code accuracy and precision are measured by both the computational bias and the uncertainty in the bias. The uncertainty in the bias accounts for known or quantified experimental, computational and model uncertainties. For the application of Monte Carlo codes for criticality analysis of fissionable material systems, an administrative margin of subcriticality must be imposed to provide additional assurance of subcriticality for any unknown or unquantified uncertainties. Because of the substantial impact of the administrative margin of subcriticality on the economics and safety of nuclear fuel cycle operations, recently increasing interest in reducing this margin makes the uncertainty analysis in criticality safety computations more risk-significant. This paper provides an overview of the two most popular k_eff uncertainty analysis methods for Monte Carlo criticality computations: (1) sampling-based methods, and (2) analytical methods. Examples are given to demonstrate their usage in the k_eff uncertainty analysis due to uncertainties in both neutronic and non-neutronic parameters of fissionable material systems.
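The sampling-based approach can be illustrated with a toy model: perturb each uncertain input within its standard deviation, re-evaluate k_eff, and take the spread of the results as the propagated uncertainty. The two-parameter multiplication model and its uncertainties below are purely illustrative stand-ins, not a real criticality calculation.

```python
import random
import statistics

def keff_sampling(model, params, n=1000, seed=0):
    """Sampling-based k_eff uncertainty: draw each uncertain parameter
    from N(mean, sigma), re-evaluate k_eff for every draw, and report
    the mean and spread of the results."""
    rng = random.Random(seed)
    ks = [model(**{name: rng.gauss(m, s) for name, (m, s) in params.items()})
          for _ in range(n)]
    return statistics.fmean(ks), statistics.stdev(ks)

# Toy multiplication model with two hypothetical uncertain parameters
# (stand-ins for neutronic data such as nu-bar and a non-leakage factor).
k_mean, k_sigma = keff_sampling(lambda nu, p: 0.5 * nu * p,
                                {"nu": (2.4, 0.02), "p": (0.8, 0.01)})
```

The trade-off named in the highlights is visible here: sampling places no restriction on the size or kind of perturbation, but every sample costs a full transport (here, model) evaluation, whereas an analytical first-order expansion is cheap but valid only for small perturbations.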
Chemical model reduction under uncertainty
Malpica Galassi, Riccardo
2017-03-06
A general strategy for analysis and reduction of uncertain chemical kinetic models is presented, and its utility is illustrated in the context of ignition of hydrocarbon fuel–air mixtures. The strategy is based on a deterministic analysis and reduction method which employs computational singular perturbation analysis to generate simplified kinetic mechanisms, starting from a detailed reference mechanism. We model uncertain quantities in the reference mechanism, namely the Arrhenius rate parameters, as random variables with prescribed uncertainty factors. We propagate this uncertainty to obtain the probability of inclusion of each reaction in the simplified mechanism. We propose probabilistic error measures to compare predictions from the uncertain reference and simplified models, based on the comparison of the uncertain dynamics of the state variables, where the mixture entropy is chosen as progress variable. We employ the construction for the simplification of an uncertain mechanism in an n-butane–air mixture homogeneous ignition case, where a 176-species, 1111-reactions detailed kinetic model for the oxidation of n-butane is used with uncertainty factors assigned to each Arrhenius rate pre-exponential coefficient. This illustration is employed to highlight the utility of the construction, and the performance of a family of simplified models produced depending on chosen thresholds on importance and marginal probabilities of the reactions.
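The probability-of-inclusion idea can be sketched as a Monte Carlo over rate-constant uncertainty factors: perturb each rate within its factor, re-evaluate a reaction's importance, and count how often it exceeds the reduction threshold. The importance function and numbers below are toy stand-ins, not the paper's CSP-based importance indices.

```python
import random

def inclusion_probability(importance, k_nominal, uf, threshold,
                          n=2000, seed=3):
    """Probability that a reaction is kept in the simplified mechanism
    when each rate constant k_i is perturbed within its uncertainty
    factor, k_i * uf_i**u with u ~ U(-1, 1) (log-uniform spread)."""
    rng = random.Random(seed)
    kept = 0
    for _ in range(n):
        ks = [k * f ** rng.uniform(-1.0, 1.0)
              for k, f in zip(k_nominal, uf)]
        kept += importance(ks) >= threshold
    return kept / n

# Toy importance index: branching fraction of the first of two competing
# reactions; uncertainty factor 2 on both rates (illustrative numbers).
p_keep = inclusion_probability(lambda ks: ks[0] / (ks[0] + ks[1]),
                               [1.0, 2.0], [2.0, 2.0], threshold=0.4)
```

A reaction whose inclusion probability sits near 0 or 1 is robust to the rate uncertainty; intermediate values flag reactions whose retention depends on the uncertain data, exactly the cases the probabilistic error measures are meant to expose.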
Decay heat uncertainty quantification of MYRRHA
Fiorito Luca; Buss Oliver; Hoefer Axel; Stankovskiy Alexey; Eynde Gert Van den
2017-01-01
MYRRHA is a lead-bismuth cooled MOX-fueled accelerator driven system (ADS) currently in the design phase at SCK·CEN in Belgium. The correct evaluation of the decay heat and of its uncertainty level is very important for the safety demonstration of the reactor. In the first part of this work we assessed the decay heat released by the MYRRHA core using the ALEPH-2 burnup code. The second part of the study focused on the nuclear data uncertainty and covariance propagation to the MYRRHA decay hea...
The Drag-based Ensemble Model (DBEM) for Coronal Mass Ejection Propagation
Dumbović, Mateja; Čalogović, Jaša; Vršnak, Bojan; Temmer, Manuela; Mays, M. Leila; Veronig, Astrid; Piantschitsch, Isabell
2018-02-01
The drag-based model for heliospheric propagation of coronal mass ejections (CMEs) is a widely used analytical model that can predict CME arrival time and speed at a given heliospheric location. It is based on the assumption that the propagation of CMEs in interplanetary space is solely under the influence of magnetohydrodynamical drag, where CME propagation is determined based on CME initial properties as well as the properties of the ambient solar wind. We present an upgraded version, the drag-based ensemble model (DBEM), that covers ensemble modeling to produce a distribution of possible ICME arrival times and speeds. Multiple runs using uncertainty ranges for the input values can be performed in almost real-time, within a few minutes. This allows us to define the most likely ICME arrival times and speeds, quantify prediction uncertainties, and determine forecast confidence. The performance of the DBEM is evaluated and compared to that of ensemble WSA-ENLIL+Cone model (ENLIL) using the same sample of events. It is found that the mean error is ME = ‑9.7 hr, mean absolute error MAE = 14.3 hr, and root mean square error RMSE = 16.7 hr, which is somewhat higher than, but comparable to ENLIL errors (ME = ‑6.1 hr, MAE = 12.8 hr and RMSE = 14.4 hr). Overall, DBEM and ENLIL show a similar performance. Furthermore, we find that in both models fast CMEs are predicted to arrive earlier than observed, most likely owing to the physical limitations of models, but possibly also related to an overestimation of the CME initial speed for fast CMEs.
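The ensemble idea can be sketched by integrating the drag equation dv/dt = -gamma*(v - w)*|v - w| for each member of a set of sampled inputs and summarizing the resulting arrival times. The initial distance, input distributions, and step size below are illustrative assumptions, not the published DBEM defaults.

```python
import random
import statistics

AU_KM = 1.496e8
R_SUN_KM = 6.957e5

def drag_based_arrival(r0_km, v0, w, gamma, dt=120.0):
    """Integrate the drag equation dv/dt = -gamma*(v - w)*|v - w| from
    heliocentric distance r0 out to 1 AU; returns (arrival time in
    hours, arrival speed in km/s). Speeds in km/s, gamma in 1/km."""
    r, v, t = r0_km, v0, 0.0
    while r < AU_KM:
        v -= gamma * (v - w) * abs(v - w) * dt
        r += v * dt
        t += dt
    return t / 3600.0, v

def dbem_ensemble(n=100, seed=7):
    """Ensemble of drag-based runs over sampled uncertain inputs; the
    spread of arrival times quantifies the prediction uncertainty."""
    rng = random.Random(seed)
    arrivals = [drag_based_arrival(20.0 * R_SUN_KM,
                                   rng.gauss(800.0, 50.0),     # CME speed
                                   rng.gauss(400.0, 30.0),     # solar wind
                                   rng.uniform(2e-8, 2e-7))[0]  # drag param
                for _ in range(n)]
    return statistics.fmean(arrivals), statistics.stdev(arrivals)

t_mean, t_sd = dbem_ensemble()
```

Because each member is a cheap one-dimensional integration, hundreds of runs complete in seconds, which is what allows the near-real-time forecasting described above.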
Energy Technology Data Exchange (ETDEWEB)
Chae, Myeong Hu; Lee, Hu Jun; Kim, Ha Seok
1989-02-15
This book gives explanations on analytical chemistry in ten chapters, which deal with the development of analytical chemistry; the theory of error, with its definition and classification; sampling and treatment; gravimetry, covering the general process of gravimetry in aqueous and non-aqueous solution; precipitation titration, covering precipitation reactions and their types; complexometry, with a summary and complex compounds; oxidation-reduction equilibrium, covering electrode potential and potentiometric titration; solvent extraction; chromatography; and experiments, with basic operations for the chemical laboratory.
A Bayesian Framework of Uncertainties Integration in 3D Geological Model
Liang, D.; Liu, X.
2017-12-01
A 3D geological model can describe complicated geological phenomena in an intuitive way, but its application may be limited by uncertain factors. Great progress has been made over the years, yet many studies decompose the uncertainties of a geological model and analyze them item by item from each source, ignoring the comprehensive impact of multi-source uncertainties. To evaluate the synthetical uncertainty, we choose probability distributions to quantify uncertainty and propose a Bayesian framework of uncertainty integration. With this framework, we integrate data errors, spatial randomness, and cognitive information into a posterior distribution to evaluate the synthetical uncertainty of the geological model. Uncertainties propagate and accumulate in the modeling process, so the gradual integration of multi-source uncertainty is a kind of simulation of uncertainty propagation. Bayesian inference accomplishes uncertainty updating during modeling. The maximum entropy principle works well for estimating the prior probability distribution, ensuring that the prior is subject to the constraints supplied by the given information with minimum prejudice. In the end, we obtain a posterior distribution that evaluates the synthetical uncertainty of the geological model and represents the synthetical impact of all uncertain factors on its spatial structure. The framework provides a solution for evaluating the synthetical impact of multi-source uncertainties on a geological model, and an approach to studying the uncertainty propagation mechanism in geological modeling.
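A one-dimensional sketch of the framework's ingredients can make the idea concrete: a maximum-entropy (here, uniform) prior over an interface depth is sequentially updated with observations carrying different data errors, and the posterior spread quantifies the combined uncertainty. All numbers below (depth range, picks, error magnitudes) are invented for illustration.

```python
import math

# Hypothetical quantity: depth z of a geological interface, in metres
z_grid = [200.0 + 0.5 * i for i in range(401)]        # 200..400 m grid

# Maximum-entropy prior given only the bounds: uniform on [200, 400]
prior = [1.0 / len(z_grid)] * len(z_grid)

def gaussian(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def bayes_update(pdf, data, sigma):
    """Multiply the current pdf by a Gaussian likelihood and renormalise."""
    post = [p * gaussian(z, data, sigma) for p, z in zip(pdf, z_grid)]
    s = sum(post)
    return [p / s for p in post]

# Gradual integration of multi-source evidence: a borehole pick (small data
# error) followed by a seismic pick (larger error)
post = bayes_update(prior, data=305.0, sigma=5.0)
post = bayes_update(post, data=320.0, sigma=15.0)

mean = sum(p * z for p, z in zip(post, z_grid))
std = math.sqrt(sum(p * (z - mean) ** 2 for p, z in zip(post, z_grid)))
```

The posterior mean sits near the more precise observation and the posterior standard deviation is smaller than either single data error, showing how integration reduces (rather than merely sums) the individual uncertainties.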
Statistical uncertainties and unrecognized relationships
International Nuclear Information System (INIS)
Rankin, J.P.
1985-01-01
Hidden relationships in specific designs directly contribute to inaccuracies in reliability assessments. Uncertainty factors at the system level may sometimes be applied in attempts to compensate for the impact of such unrecognized relationships. Often uncertainty bands are used to relegate unknowns to a miscellaneous category of low-probability occurrences. However, experience and modern analytical methods indicate that perhaps the dominant, most probable and significant events are sometimes overlooked in statistical reliability assurances. The author discusses the utility of two unique methods of identifying the otherwise often unforeseeable system interdependencies for statistical evaluations. These methods are sneak circuit analysis and a checklist form of common cause failure analysis. Unless these techniques (or a suitable equivalent) are also employed along with the more widely-known assurance tools, high reliability of complex systems may not be adequately assured. This concern is indicated by specific illustrations. 8 references, 5 figures
Uncertainty estimation of ultrasonic thickness measurement
International Nuclear Information System (INIS)
Yassir Yassen, Abdul Razak Daud; Mohammad Pauzi Ismail; Abdul Aziz Jemain
2009-01-01
The most important factor to consider when selecting an ultrasonic thickness measurement technique is its reliability. Only when the uncertainty of a measurement result is known can it be judged whether the result is adequate for the intended purpose. The objective of this study is to model the ultrasonic thickness measurement function, to identify the input uncertainty components that contribute most, and to estimate the uncertainty of ultrasonic thickness measurement results. We assumed that five error sources contribute significantly to the final error: calibration velocity, transit time, zero offset, measurement repeatability, and resolution. By applying the law of propagation of uncertainty to the model function, a combined uncertainty of the ultrasonic thickness measurement was obtained. In this study the model function of ultrasonic thickness measurement was derived; using this model, the estimate of the uncertainty of the final output result was found to be reliable. It was also found that the input uncertainty components that contribute most are calibration velocity, transit time linearity, and zero offset. (author)
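The propagation-of-uncertainty law applied to the pulse-echo model function d = v(τ − τ₀)/2 can be sketched as below. The numerical values (velocity, transit time, zero offset, and their standard uncertainties) are illustrative assumptions, not the study's data.

```python
import math

def thickness(v, tof, zero):
    """Model function for pulse-echo thickness: d = v * (tof - zero) / 2."""
    return v * (tof - zero) / 2.0

# Assumed values and standard uncertainties (illustrative only)
v, u_v = 5920.0, 30.0                 # sound velocity in steel, m/s
tof, u_tof = 4.05e-6, 5e-9            # measured transit time, s
zero, u_zero = 0.67e-6, 8e-9          # zero offset, s
u_repeat = 5e-6                       # repeatability, m (Type A)
u_resol = 0.01e-3 / math.sqrt(12)     # 0.01 mm display resolution, m

# Sensitivity coefficients (partial derivatives of the model function)
c_v = (tof - zero) / 2.0
c_tof = v / 2.0
c_zero = -v / 2.0

# Law of propagation of uncertainty, assuming uncorrelated inputs
contributions = {
    "velocity": (c_v * u_v) ** 2,
    "transit time": (c_tof * u_tof) ** 2,
    "zero offset": (c_zero * u_zero) ** 2,
    "repeatability": u_repeat ** 2,
    "resolution": u_resol ** 2,
}
u_c = math.sqrt(sum(contributions.values()))   # combined standard uncertainty, m
d = thickness(v, tof, zero)                    # measured thickness, m
```

With these assumed inputs the calibration-velocity term dominates the variance budget, consistent with the study's ranking of the contributing components.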
Heat pulse propagation studies in TFTR
Energy Technology Data Exchange (ETDEWEB)
Fredrickson, E.D.; Callen, J.D.; Colchin, R.J.; Efthimion, P.C.; Hill, K.W.; Izzo, R.; Mikkelsen, D.R.; Monticello, D.A.; McGuire, K.; Bell, J.D.
1986-02-01
The time scales for sawtooth repetition and heat pulse propagation are much longer (tens of msec) in the large tokamak TFTR than in previous, smaller tokamaks. This extended time scale, coupled with more detailed diagnostics, has led us to revisit the analysis of heat pulse propagation as a method to determine the electron heat diffusivity, χ_e, in the plasma. A combination of analytic and computer solutions of the electron heat diffusion equation is used to clarify previous work and develop new methods for determining χ_e. Direct comparison of the predicted heat pulses with soft x-ray and ECE data indicates that the space-time evolution is diffusive. However, the χ_e determined from heat pulse propagation usually exceeds that determined from background plasma power balance considerations by a factor ranging from 2 to 10. Some hypotheses for resolving this discrepancy are discussed. 11 refs., 19 figs., 1 tab.
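The basic time-to-peak inference behind heat pulse analysis can be sketched for an idealised impulsive source: for the cylindrical Green's function of the heat diffusion equation, the pulse at radius r peaks at t_p = r²/(4χ), so χ can be recovered from the measured delay. This is a simplified stand-in for the paper's analysis (no damping or source shape effects), with illustrative parameter values.

```python
import math

def pulse_amplitude(r, t, chi):
    """Cylindrical Green's function of dT/dt = chi * laplacian(T) for an
    impulsive on-axis source (overall amplitude irrelevant for the peak)."""
    return math.exp(-r * r / (4.0 * chi * t)) / t

def time_to_peak(r, chi, tmax=0.1, n=20000):
    """Numerically locate the time at which the pulse peaks at radius r."""
    best_t, best_a = None, -1.0
    for i in range(1, n):
        t = tmax * i / n
        a = pulse_amplitude(r, t, chi)
        if a > best_a:
            best_t, best_a = t, a
    return best_t

chi_true = 2.0                      # m^2/s, illustrative diffusivity
r = 0.4                             # m, measurement radius (illustrative)
t_p = time_to_peak(r, chi_true)     # simulated time-to-peak

# Heat-pulse estimate: for this idealised source t_p = r^2 / (4 chi),
# so the inferred diffusivity is
chi_hp = r * r / (4.0 * t_p)
```

Recovering the input χ from the simulated delay confirms the inversion is self-consistent; in experiments, deviations between such heat-pulse values and power-balance values are exactly the factor-of-2-to-10 discrepancy the paper discusses.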
Wave Propagation in Bimodular Geomaterials
Kuznetsova, Maria; Pasternak, Elena; Dyskin, Arcady; Pelinovsky, Efim
2016-04-01
Observations and laboratory experiments show that fragmented or layered geomaterials have a mechanical response dependent on the sign of the load. The most adequate model accounting for this effect is the theory of bimodular (bilinear) elasticity - a hyperelastic model with different elastic moduli for tension and compression. For most geo- and structural materials (cohesionless soils, rocks, concrete, etc.) the difference between elastic moduli is such that the modulus in compression is considerably higher than that in tension. This feature has a profound effect on oscillations [1]; however, its effect on wave propagation has not been comprehensively investigated. It is believed that incorporating bilinear elastic constitutive equations within the theory of wave dynamics will bring deeper insight into the mechanical behaviour of many geomaterials. The aim of this paper is to construct a mathematical model and develop analytical methods and numerical algorithms for analysing wave propagation in bimodular materials. Geophysical and exploration applications, as well as applications in structural engineering, are envisaged. FEM modelling of wave propagation in a 1D semi-infinite bimodular material has been performed with the use of the Marlow potential [2]. When the initial load is a harmonic pulse, a strong dependence on the pulse sign is observed: when tension is applied before compression, the phenomenon of disappearance of negative (compressive) strains takes place. References 1. Dyskin, A., Pasternak, E., & Pelinovsky, E. (2012). Periodic motions and resonances of impact oscillators. Journal of Sound and Vibration, 331(12), 2856-2873. 2. Marlow, R. S. (2008). A Second-Invariant Extension of the Marlow Model: Representing Tension and Compression Data Exactly. In ABAQUS Users' Conference.
Feynman propagator in curved space-time
International Nuclear Information System (INIS)
Candelas, P.; Raine, D.J.
1977-01-01
The Wick rotation is generalized in a covariant manner so as to apply to curved manifolds in a way that is independent of the analytic properties of the manifold. This enables us to show that various methods for defining a Feynman propagator to be found in the literature are equivalent where they are applicable. We are also able to discuss the relation between certain regularization methods that have been employed
Frenkel, Robert B; Farrance, Ian
2018-01-01
The "Guide to the Expression of Uncertainty in Measurement" (GUM) is the foundational document of metrology. Its recommendations apply to all areas of metrology including metrology associated with the biomedical sciences. When the output of a measurement process depends on the measurement of several inputs through a measurement equation or functional relationship, the propagation of uncertainties in the inputs to the uncertainty in the output demands a level of understanding of the differential calculus. This review is intended as an elementary guide to the differential calculus and its application to uncertainty in measurement. The review is in two parts. In Part I, Section 3, we consider the case of a single input and introduce the concepts of error and uncertainty. Next we discuss, in the following sections in Part I, such notions as derivatives and differentials, and the sensitivity of an output to errors in the input. The derivatives of functions are obtained using very elementary mathematics. The overall purpose of this review, here in Part I and subsequently in Part II, is to present the differential calculus for those in the medical sciences who wish to gain a quick but accurate understanding of the propagation of uncertainties. © 2018 Elsevier Inc. All rights reserved.
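As a minimal illustration of Part I's central idea — that a sensitivity coefficient is simply a derivative — the following sketch estimates c = ∂f/∂x by central differences for a hypothetical calibration function and propagates an input uncertainty to the output. The function and all values are invented for illustration; they are not from the review.

```python
import math

def sensitivity(f, x, h=1e-6):
    """Numerical derivative dy/dx by central difference; in GUM language
    this is the sensitivity coefficient c = (partial f / partial x)."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

# Hypothetical calibration relating a reading x to an output y = a * exp(b*x)
a, b = 0.5, 1.3
f = lambda x: a * math.exp(b * x)

x, u_x = 0.8, 0.01            # input estimate and its standard uncertainty
c = sensitivity(f, x)         # numerical sensitivity coefficient
u_y = abs(c) * u_x            # first-order (linearised) propagation

exact_c = a * b * math.exp(b * x)   # analytic derivative, for comparison
```

The numerical and analytic sensitivity coefficients agree to many digits, which is the practical point of the review: the calculus can be done exactly or approximated numerically, but the propagation rule u(y) ≈ |c|·u(x) is the same.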
A Strategy for Uncertainty Visualization Design
2009-10-01
143–156, Magdeburg, Germany. [11] Thomson, J., Hetzler, E., MacEachren, A., Gahegan, M. and Pavel, M. (2005), A Typology for Visualizing Uncertainty… and Stasko [20] to bridge analytic gaps in visualization design, when tasks in the strategy overlap (and therefore complement) design frameworks
Investment Decisions with Two-Factor Uncertainty
Compernolle, T.; Huisman, Kuno; Kort, Peter; Lavrutich, Maria; Nunes, Claudia; Thijssen, J.J.J.
2018-01-01
This paper considers investment problems in real options with non-homogeneous two-factor uncertainty. It shows that, despite claims made in the literature, the method used to derive an analytical solution in one-dimensional problems cannot be straightforwardly extended to problems with two
Groves, Curtis Edward
2014-01-01
Spacecraft thermal protection systems are at risk of being damaged by airflow produced by Environmental Control Systems. There are inherent uncertainties and errors associated with using Computational Fluid Dynamics to predict the airflow field around a spacecraft from the Environmental Control System. This paper describes an approach to quantify the uncertainty in using Computational Fluid Dynamics to predict airflow speeds around an encapsulated spacecraft without the use of test data. Quantifying the uncertainty in analytical predictions is imperative to the success of any simulation-based product. The method could provide an alternative to the traditional "validation by test only" mentality. This method could be extended to other disciplines and has the potential to provide uncertainty estimates for any numerical simulation, thus lowering the cost of performing these verifications while increasing the confidence in the predictions. Spacecraft requirements can include a maximum airflow speed to protect delicate instruments during ground processing. Computational Fluid Dynamics can be used to verify these requirements; however, the model must be validated by test data. This research includes the following three objectives and methods. Objective one is to develop, model, and perform a Computational Fluid Dynamics analysis of three (3) generic, non-proprietary environmental control systems and spacecraft configurations. Several commercially available and open-source solvers have the capability to model the turbulent, highly three-dimensional, incompressible flow regime; the proposed method uses FLUENT, STAR-CCM+, and OpenFOAM. Objective two is to perform an uncertainty analysis of the Computational Fluid Dynamics model using the methodology found in "Comprehensive Approach to Verification and Validation of Computational Fluid Dynamics Simulations". This method requires three separate grids and solutions, which quantify the error bars around Computational Fluid Dynamics predictions
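The three-grid procedure mentioned above can be sketched in the style of Richardson extrapolation and a grid convergence index (a common reading of the cited verification methodology). The refinement ratio, safety factor, and airflow speeds below are illustrative assumptions, not values from this work.

```python
import math

def grid_convergence(f_fine, f_med, f_coarse, r, safety=1.25):
    """Observed order of accuracy, Richardson-extrapolated value, and a
    grid-convergence-index-style error band from three systematically
    refined grids with constant refinement ratio r."""
    # Observed order of accuracy from the three solutions
    p = math.log(abs((f_coarse - f_med) / (f_med - f_fine))) / math.log(r)
    # Richardson extrapolation toward the grid-independent value
    f_exact = f_fine + (f_fine - f_med) / (r ** p - 1.0)
    # Relative error band on the fine-grid solution
    gci_fine = safety * abs((f_med - f_fine) / f_fine) / (r ** p - 1.0)
    return p, f_exact, gci_fine

# Hypothetical peak airflow speeds (m/s) from fine, medium, and coarse grids
p, f_exact, gci = grid_convergence(f_fine=10.12, f_med=10.44, f_coarse=11.08, r=2.0)
```

Here the three solutions imply first-order observed convergence and roughly a 4% error band around the fine-grid airflow speed — exactly the kind of "error bars around Computational Fluid Dynamics predictions" the abstract refers to.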
Database for propagation models
Kantak, Anil V.
1991-07-01
A propagation researcher or a systems engineer who intends to use the results of a propagation experiment is generally faced with various database tasks, such as selecting the computer software and hardware and writing the programs to pass the data through the models of interest. This task is repeated every time a new experiment is conducted, or the same experiment is carried out at a different location, generating different data. Thus the users of these data have to spend a considerable portion of their time learning how to implement the computer hardware and software towards the desired end. This situation could be eased considerably by creating an easily accessible propagation database containing all the accepted (standardized) propagation phenomena models approved by the propagation research community. The handling of data would also become easier for the user. Such a database can only stimulate the growth of propagation research if it is available to all researchers, so that the results of an experiment conducted by one researcher can be examined independently by another, without different hardware and software being used. The database may be made flexible so that researchers need not be confined to its contents. The database can also help researchers in that they will not have to document the software and hardware tools used in their research, since the propagation research community will already know the database. The following sections show a possible database construction, as well as properties of the database for propagation research.
Instrument uncertainty predictions
International Nuclear Information System (INIS)
Coutts, D.A.
1991-07-01
The accuracy of measurements and correlations should normally be provided for most experimental activities. The uncertainty is a measure of the accuracy of a stated value or equation. The uncertainty term reflects a combination of instrument errors, modeling limitations, and phenomena understanding deficiencies. This report provides several methodologies to estimate an instrument's uncertainty when used in experimental work. Methods are shown to predict both the pretest and post-test uncertainty
Lemos, Nivaldo A
2018-01-01
Analytical mechanics is the foundation of many areas of theoretical physics including quantum theory and statistical mechanics, and has wide-ranging applications in engineering and celestial mechanics. This introduction to the basic principles and methods of analytical mechanics covers Lagrangian and Hamiltonian dynamics, rigid bodies, small oscillations, canonical transformations and Hamilton–Jacobi theory. This fully up-to-date textbook includes detailed mathematical appendices and addresses a number of advanced topics, some of them of a geometric or topological character. These include Bertrand's theorem, proof that action is least, spontaneous symmetry breakdown, constrained Hamiltonian systems, non-integrability criteria, KAM theory, classical field theory, Lyapunov functions, geometric phases and Poisson manifolds. Providing worked examples, end-of-chapter problems, and discussion of ongoing research in the field, it is suitable for advanced undergraduate students and graduate students studying analyt...
Climate change decision-making: Model & parameter uncertainties explored
Energy Technology Data Exchange (ETDEWEB)
Dowlatabadi, H.; Kandlikar, M.; Linville, C.
1995-12-31
A critical aspect of climate change decision-making is the uncertainty in current understanding of the socioeconomic, climatic, and biogeochemical processes involved. Decision-making processes are much better informed if these uncertainties are characterized and their implications understood. Quantitative analysis of these uncertainties serves to inform decision makers about the likely outcomes of policy initiatives and helps set priorities for research so that the outcome ambiguities faced by decision-makers are reduced. A family of integrated assessment models of climate change has been developed at Carnegie Mellon. These models are distinguished from other integrated assessment efforts in that they were designed from the outset to characterize and propagate parameter, model, value, and decision-rule uncertainties. The most recent of these models is ICAM 2.1. This model includes representations of demographics, economic activity, emissions, atmospheric chemistry, climate and sea level change, impacts from these changes, and policies for emissions mitigation and adaptation. The model has over 800 objects, of which about one half are used to represent uncertainty. In this paper we show that, when considering parameter uncertainties, climatic uncertainties contribute most, followed by uncertainties in damage calculations, economic uncertainties, and direct aerosol forcing uncertainties. When considering model structure uncertainties, we find that the choice of policy is often dominated by the choice of model structure rather than by parameter uncertainties.
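A toy sketch of how such a model can rank parameter-uncertainty contributions: vary each uncertain input one at a time by Monte Carlo and compare the resulting output variances. This is a deliberate simplification of an integrated assessment model; the model equations, parameter names, and distributions below are invented purely for illustration.

```python
import random
import statistics

random.seed(0)

def toy_climate_model(sensitivity, damage_coeff, growth, aerosol):
    """Invented toy integrated assessment: warming -> damages (share of GDP)."""
    warming = sensitivity * 1.2 - aerosol          # deg C for a fixed forcing path
    gdp = 100.0 * (1.0 + growth) ** 50             # GDP index after 50 years
    return damage_coeff * warming ** 2 * gdp / 100.0

# Assumed (illustrative) parameter distributions and nominal values
draws = {
    "sensitivity": lambda: random.gauss(3.0, 1.0),    # climatic
    "damage_coeff": lambda: random.gauss(0.3, 0.1),   # damage calculation
    "growth": lambda: random.gauss(0.02, 0.005),      # economic
    "aerosol": lambda: random.gauss(0.5, 0.2),        # direct aerosol forcing
}
nominal = {"sensitivity": 3.0, "damage_coeff": 0.3, "growth": 0.02, "aerosol": 0.5}

# One-at-a-time variance contribution of each uncertain parameter
contrib = {}
for name in draws:
    vals = []
    for _ in range(2000):
        params = dict(nominal)
        params[name] = draws[name]()
        vals.append(toy_climate_model(**params))
    contrib[name] = statistics.variance(vals)

ranking = sorted(contrib, key=contrib.get, reverse=True)
```

With these invented distributions the climatic parameter dominates, followed by the damage coefficient — a pattern chosen here to mirror the qualitative finding reported in the abstract, not to reproduce it.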
Characterizing spatial uncertainty when integrating social data in conservation planning.
Lechner, A M; Raymond, C M; Adams, V M; Polyakov, M; Gordon, A; Rhodes, J R; Mills, M; Stein, A; Ives, C D; Lefroy, E C
2014-12-01
Recent conservation planning studies have presented approaches for integrating spatially referenced social (SRS) data with a view to improving the feasibility of conservation action. We reviewed the growing conservation literature on SRS data, focusing on elicited or stated preferences derived through social survey methods such as choice experiments and public participation geographic information systems. Elicited SRS data includes the spatial distribution of willingness to sell, willingness to pay, willingness to act, and assessments of social and cultural values. We developed a typology for assessing elicited SRS data uncertainty which describes how social survey uncertainty propagates when projected spatially and the importance of accounting for spatial uncertainty such as scale effects and data quality. These uncertainties will propagate when elicited SRS data is integrated with biophysical data for conservation planning and may have important consequences for assessing the feasibility of conservation actions. To explore this issue further, we conducted a systematic review of the elicited SRS data literature. We found that social survey uncertainty was commonly tested for, but that these uncertainties were ignored when projected spatially. Based on these results we developed a framework which will help researchers and practitioners estimate social survey uncertainty and use these quantitative estimates to systematically address uncertainty within an analysis. This is important when using SRS data in conservation applications because decisions need to be made irrespective of data quality and well characterized uncertainty can be incorporated into decision theoretic approaches. © 2014 Society for Conservation Biology.
Spain, Barry; Ulam, S; Stark, M
1960-01-01
Analytical Quadrics focuses on the analytical geometry of three dimensions. The book first discusses the theory of the plane, sphere, cone, cylinder, straight line, and central quadrics in their standard forms. The idea of the plane at infinity is introduced through homogeneous Cartesian coordinates and applied to the nature of the intersection of three planes and to the circular sections of quadrics. The text also focuses on the paraboloid, including polar properties, center of a section, axes of plane section, and generators of the hyperbolic paraboloid. The book also touches on homogeneous coordi
Optimization of FRAP uncertainty analysis option
International Nuclear Information System (INIS)
Peck, S.O.
1979-10-01
The automated uncertainty analysis option that has been incorporated in the FRAP codes (FRAP-T5 and FRAPCON-2) provides the user with a means of obtaining uncertainty bands on code-predicted variables at user-selected times during a fuel pin analysis. These uncertainty bands are obtained by multiple single fuel pin analyses to generate data which can then be analyzed by second-order statistical error propagation techniques. In this process, a considerable amount of data is generated and stored on tape. The user has certain choices to make regarding which independent variables are to be used in the analysis and what order of error propagation equation should be used in modeling the output response. To aid the user in these decisions, a computer program, ANALYZ, has been written and added to the uncertainty analysis option package. A variety of considerations involved in fitting response surface equations, and certain pitfalls of which the user should be aware, are discussed. An equation is derived expressing a residual as a function of a fitted model and an assumed true model. A variety of experimental design choices are discussed, including the advantages and disadvantages of each approach. Finally, a description of the subcodes which constitute program ANALYZ is provided
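The choice of response-surface order that ANALYZ supports can be illustrated with a hand-rolled least-squares polynomial fit: comparing residuals of second- and third-order surfaces against synthetic "code runs" shows the lack of fit a too-low-order model leaves behind. Everything below (the hidden true model, noise level, design points) is an invented example, not FRAP data.

```python
import math
import random

random.seed(3)

# Pretend "code runs": a fuel-response quantity versus one uncertain input x.
# The true model (hidden from the analyst) is cubic plus repeatability noise.
def code_run(x):
    return 1200.0 + 150.0 * x + 40.0 * x ** 2 + 60.0 * x ** 3 + random.gauss(0, 5)

xs = [-1.0 + 2.0 * i / 20 for i in range(21)]   # simple experimental design
ys = [code_run(x) for x in xs]

def polyfit(xs, ys, order):
    """Least-squares polynomial fit via normal equations (Gaussian elimination)."""
    n = order + 1
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    for col in range(n):                        # forward elimination
        for row in range(col + 1, n):
            f = A[row][col] / A[col][col]
            A[row] = [a - f * c for a, c in zip(A[row], A[col])]
            b[row] -= f * b[col]
    coef = [0.0] * n                            # back substitution
    for i in reversed(range(n)):
        coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, n))) / A[i][i]
    return coef

def rms_residual(coef, xs, ys):
    pred = [sum(c * x ** i for i, c in enumerate(coef)) for x in xs]
    return math.sqrt(sum((p - y) ** 2 for p, y in zip(pred, ys)) / len(ys))

res2 = rms_residual(polyfit(xs, ys, 2), xs, ys)   # second-order surface
res3 = rms_residual(polyfit(xs, ys, 3), xs, ys)   # third-order surface
```

The second-order surface leaves a residual well above the noise level because it cannot represent the cubic term, which is the kind of model-order pitfall the report warns the user about.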
Uncertainty of Doppler reactivity worth due to uncertainties of JENDL-3.2 resonance parameters
Energy Technology Data Exchange (ETDEWEB)
Zukeran, Atsushi [Hitachi Ltd., Hitachi, Ibaraki (Japan). Power and Industrial System R and D Div.; Hanaki, Hiroshi; Nakagawa, Tuneo; Shibata, Keiichi; Ishikawa, Makoto
1998-03-01
An analytical formula for the resonance self-shielding factor (f-factor) is derived from the resonance integral (J-function) based on the NR approximation, and an analytical expression for the Doppler reactivity worth (ρ) is also obtained using this result. Uncertainties of the f-factor and the Doppler reactivity worth are evaluated on the basis of sensitivity coefficients to the resonance parameters. The uncertainty of the Doppler reactivity worth at 487 K is about 4% for the PNC Large Fast Breeder Reactor. (author)
2016-04-30
Warfare, Naval Sea Systems Command Acquisition Cycle Time: Defining the Problem. David Tate, Institute for Defense Analyses. Schedule Analytics, Jennifer… research was comprised of the following high-level steps: identify and review primary data sources… research. However, detailed reviews of the OMB IT Dashboard data revealed that schedule data is highly aggregated. Program start date and program end date
Uncertainty and Cognitive Control
Directory of Open Access Journals (Sweden)
Faisal eMushtaq
2011-10-01
Full Text Available A growing body of neuroimaging, behavioural and computational research has investigated the topic of outcome uncertainty in decision-making. Although evidence to date indicates that humans are very effective in learning to adapt to uncertain situations, the nature of the specific cognitive processes involved in the adaptation to uncertainty is still a matter of debate. In this article, we review evidence suggesting that cognitive control processes are at the heart of uncertainty in decision-making contexts. Available evidence suggests that: (1) there is a strong conceptual overlap between the constructs of uncertainty and cognitive control; (2) there is a remarkable overlap between the neural networks associated with uncertainty and the brain networks subserving cognitive control; (3) the perception and estimation of uncertainty might play a key role in monitoring processes and the evaluation of the need for control; (4) potential interactions between uncertainty and cognitive control might play a significant role in several affective disorders.
International Nuclear Information System (INIS)
Shapiro, E G
2008-01-01
Simple analytic expressions are derived to approximate the bit error rate for data transmission through fibreoptic communication lines. The propagation of optical pulses is directly numerically simulated. Analytic estimates are in good agreement with numerical calculations. (fibreoptic communication)
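A minimal example of the kind of analytic BER estimate discussed, assuming Gaussian noise and a standard Q-factor model (these are textbook expressions, not necessarily the paper's own approximations):

```python
import math

def ber_exact(q):
    """Gaussian-noise bit error rate: BER = 0.5 * erfc(Q / sqrt(2))."""
    return 0.5 * math.erfc(q / math.sqrt(2.0))

def ber_asymptotic(q):
    """Simple analytic approximation, accurate for large Q:
    BER ~ exp(-Q^2 / 2) / (Q * sqrt(2 * pi))."""
    return math.exp(-0.5 * q * q) / (q * math.sqrt(2.0 * math.pi))

q = 6.0                        # the classic Q = 6 design point
exact = ber_exact(q)           # about 1e-9
approx = ber_asymptotic(q)
rel_err = abs(approx - exact) / exact
```

At Q = 6 the closed-form approximation lands within a few percent of the erfc value, which is the sense in which such analytic estimates can replace direct numerical simulation of pulse propagation for quick link budgeting.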
Accuracy in Orbital Propagation: A Comparison of Predictive Software Models
2017-06-01
[30] M. Lane and K. Cranford, "An improved analytical drag theory for the artificial satellite problem," American Institute of Aeronautics and… which have a foundation in similar theory. Since their first operational use, both propagators have incorporated updated theory and mathematical… propagators should therefore utilize the most current TLE data available to avoid accuracy errors.
Uncertainty in hydrological signatures for gauged and ungauged catchments
Westerberg, Ida K.; Wagener, Thorsten; Coxon, Gemma; McMillan, Hilary K.; Castellarin, Attilio; Montanari, Alberto; Freer, Jim
2016-03-01
Reliable information about hydrological behavior is needed for water-resource management and scientific investigations. Hydrological signatures quantify catchment behavior as index values, and can be predicted for ungauged catchments using a regionalization procedure. The prediction reliability is affected by data uncertainties for the gauged catchments used in prediction and by uncertainties in the regionalization procedure. We quantified signature uncertainty stemming from discharge data uncertainty for 43 UK catchments and propagated these uncertainties in signature regionalization, while accounting for regionalization uncertainty with a weighted-pooling-group approach. Discharge uncertainty was estimated using Monte Carlo sampling of multiple feasible rating curves. For each sampled rating curve, a discharge time series was calculated and used in deriving the gauged signature uncertainty distribution. We found that the gauged uncertainty varied with signature type, local measurement conditions and catchment behavior, with the highest uncertainties (median relative uncertainty ±30-40% across all catchments) for signatures measuring high- and low-flow magnitude and dynamics. Our regionalization method allowed assessing the role and relative magnitudes of the gauged and regionalized uncertainty sources in shaping the signature uncertainty distributions predicted for catchments treated as ungauged. We found that (1) if the gauged uncertainties were neglected there was a clear risk of overconditioning the regionalization inference, e.g., by attributing catchment differences resulting from gauged uncertainty to differences in catchment behavior, and (2) uncertainty in the regionalization results was lower for signatures measuring flow distribution (e.g., mean flow) than flow dynamics (e.g., autocorrelation), and for average flows (and then high flows) compared to low flows.
Analytic structure of ρ meson propagator at finite temperature
International Nuclear Information System (INIS)
Ghosh, Sabyasachi; Sarkar, Sourav; Mallik, S.
2010-01-01
We analyse the structure of one-loop self-energy graphs for the ρ meson in real time formulation of finite temperature field theory. We find the discontinuities of these graphs across the unitary and the Landau cuts. These contributions are identified with different sources of medium modification discussed in the literature. We also calculate numerically the imaginary and the real parts of the self-energies and construct the spectral function of the ρ meson, which are compared with an earlier determination. A significant contribution arises from the unitary cut of the πω loop, that was ignored so far in the literature. (orig.)
ANALYTICAL EVALUATION OF CRACK PROPAGATION FOR BULB HYDRAULIC TURBINES SHAFTS
Directory of Open Access Journals (Sweden)
Mircea O. POPOVICU
2011-05-01
Full Text Available Hydroelectric power plants use the renewable energy of rivers. Hydraulic bulb turbines running at low heads are excellent alternative energy sources. The shafts of these units are massive cylindrical pieces manufactured from low-alloy steels. The paper analyses the fatigue cracks occurring at some turbines in the neighbourhood of the connection zone between the shaft and the turbine runner flange. To obtain the stress state in this zone, the ANSYS and AFGROW computer programs were used. The number of running hours until the crack pierces the shaft wall is established as a useful result.
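Crack-growth tools such as AFGROW integrate a growth law of the Paris type. A minimal sketch of that calculation, converting an initial flaw into cycles to wall penetration, is given below; every value (stress range, crack sizes, material constants) is an invented illustration, not actual turbine-shaft data.

```python
import math

def cycles_to_failure(a0, ac, delta_sigma, C, m, Y=1.0, n_steps=20000):
    """Integrate the Paris law da/dN = C * (dK)^m, with stress intensity
    range dK = Y * delta_sigma * sqrt(pi * a), from initial flaw depth a0
    to critical depth ac, using a simple midpoint rule."""
    da = (ac - a0) / n_steps
    N = 0.0
    for i in range(n_steps):
        a = a0 + (i + 0.5) * da
        dK = Y * delta_sigma * math.sqrt(math.pi * a)
        N += da / (C * dK ** m)
    return N

# Illustrative values only: stress range in MPa, crack sizes in metres,
# C in (m/cycle)/(MPa*sqrt(m))^m
N = cycles_to_failure(a0=0.002, ac=0.05, delta_sigma=60.0, C=1e-12, m=3.0)

# A rotating shaft sees roughly one load cycle per revolution, so cycles
# convert directly into running hours (hypothetical 100 rpm bulb unit)
hours = N / (100.0 * 60.0)
```

The conversion from cycles to running hours is what turns such an integration into the paper's practical output: the service time until the crack pierces the shaft wall.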
Uncertainty Estimation Cheat Sheet for Probabilistic Risk Assessment
Britton, Paul T.; Al Hassan, Mohammad; Ring, Robert W.
2017-01-01
"Uncertainty analysis itself is uncertain; therefore, you cannot evaluate it exactly." (source uncertain) Quantitative results for aerospace engineering problems are influenced by many sources of uncertainty. Uncertainty analysis aims to make a technical contribution to decision-making through the quantification of uncertainties in the relevant variables as well as through the propagation of these uncertainties up to the result. Uncertainty can be thought of as a measure of the "goodness" of a result and is typically represented as statistical dispersion. This paper explains common measures of centrality and dispersion and, with examples, provides guidelines for how they may be estimated to ensure effective technical contributions to decision-making.
Durability reliability analysis for corroding concrete structures under uncertainty
Zhang, Hao
2018-02-01
This paper presents a durability reliability analysis of reinforced concrete structures subject to the action of marine chloride. The focus is to provide insight into the role of epistemic uncertainties on durability reliability. The corrosion model involves a number of variables whose probabilistic characteristics cannot be fully determined due to the limited availability of supporting data. All sources of uncertainty, both aleatory and epistemic, should be included in the reliability analysis. Two methods are available to formulate the epistemic uncertainty: the imprecise probability-based method and the purely probabilistic method in which the epistemic uncertainties are modeled as random variables. The paper illustrates how the epistemic uncertainties are modeled and propagated in the two methods, and shows how epistemic uncertainties govern the durability reliability.
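The purely probabilistic method described above, where epistemic variables are modelled as random variables, can be sketched as a Monte Carlo on Crank's error-function solution of Fick's second law, the usual chloride-ingress model. Every distribution and parameter value below is an illustrative assumption, not taken from the paper.

```python
import math, random

def chloride_at_rebar(Cs, D, cover, t):
    """Crank's solution of Fick's 2nd law: chloride concentration at depth
    `cover` (m) after time t (s), for surface concentration Cs and
    apparent diffusivity D (m^2/s)."""
    return Cs * (1.0 - math.erf(cover / (2.0 * math.sqrt(D * t))))

def prob_initiation(t_years, n=20000, seed=1):
    """Monte Carlo estimate of P(chloride at rebar > critical threshold).
    All distributions below are illustrative placeholders."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        Cs = rng.lognormvariate(math.log(3.0), 0.3)    # surface conc., % binder
        D = rng.lognormvariate(math.log(1e-12), 0.4)   # diffusivity, m^2/s
        cover = rng.gauss(0.05, 0.005)                 # concrete cover, m
        Ccr = rng.gauss(0.6, 0.1)                      # critical threshold
        if chloride_at_rebar(Cs, D, cover, t_years * 3.15e7) > Ccr:
            hits += 1
    return hits / n

p50 = prob_initiation(50.0)  # probability of corrosion initiation at 50 years
```

In the imprecise-probability alternative the paper contrasts with, the poorly known parameters would instead be swept over intervals, producing bounds on this probability rather than a single number.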
Inverse Problems and Uncertainty Quantification
Litvinenko, Alexander
2014-01-06
In a Bayesian setting, inverse problems and uncertainty quantification (UQ) - the propagation of uncertainty through a computational (forward) model - are strongly connected. In the form of conditional expectation the Bayesian update becomes computationally attractive. This is especially the case as together with a functional or spectral approach for the forward UQ there is no need for time-consuming and slowly convergent Monte Carlo sampling. The developed sampling-free non-linear Bayesian update is derived from the variational problem associated with conditional expectation. This formulation in general calls for further discretisation to make the computation possible, and we choose a polynomial approximation. After giving details on the actual computation in the framework of functional or spectral approximations, we demonstrate the workings of the algorithm on a number of examples of increasing complexity. At last, we compare the linear and quadratic Bayesian update on the small but taxing example of the chaotic Lorenz 84 model, where we experiment with the influence of different observation or measurement operators on the update.
Inverse Problems and Uncertainty Quantification
Litvinenko, Alexander; Matthies, Hermann G.
2014-01-01
In a Bayesian setting, inverse problems and uncertainty quantification (UQ) - the propagation of uncertainty through a computational (forward) model - are strongly connected. In the form of conditional expectation the Bayesian update becomes computationally attractive. This is especially the case as together with a functional or spectral approach for the forward UQ there is no need for time-consuming and slowly convergent Monte Carlo sampling. The developed sampling-free non-linear Bayesian update is derived from the variational problem associated with conditional expectation. This formulation in general calls for further discretisation to make the computation possible, and we choose a polynomial approximation. After giving details on the actual computation in the framework of functional or spectral approximations, we demonstrate the workings of the algorithm on a number of examples of increasing complexity. At last, we compare the linear and quadratic Bayesian update on the small but taxing example of the chaotic Lorenz 84 model, where we experiment with the influence of different observation or measurement operators on the update.
Inverse problems and uncertainty quantification
Litvinenko, Alexander
2013-12-18
In a Bayesian setting, inverse problems and uncertainty quantification (UQ), the propagation of uncertainty through a computational (forward) model, are strongly connected. In the form of conditional expectation the Bayesian update becomes computationally attractive. This is especially the case as together with a functional or spectral approach for the forward UQ there is no need for time-consuming and slowly convergent Monte Carlo sampling. The developed sampling-free non-linear Bayesian update is derived from the variational problem associated with conditional expectation. This formulation in general calls for further discretisation to make the computation possible, and we choose a polynomial approximation. After giving details on the actual computation in the framework of functional or spectral approximations, we demonstrate the workings of the algorithm on a number of examples of increasing complexity. At last, we compare the linear and quadratic Bayesian update on the small but taxing example of the chaotic Lorenz 84 model, where we experiment with the influence of different observation or measurement operators on the update.
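In the simplest scalar Gaussian case, the linear Bayesian update discussed in these abstracts reduces to the familiar conditional-expectation (Kalman) form; a minimal sketch with illustrative numbers:

```python
# Scalar linear Bayesian update: the conditional-expectation (Kalman) form
#   x_a = x_f + K * (y - H * x_f),   K = P*H / (H*P*H + R)
def linear_update(x_f, P_f, y, H, R):
    """Update a Gaussian prior N(x_f, P_f) with observation y = H*x + noise,
    noise variance R. Returns the posterior mean and variance."""
    K = P_f * H / (H * P_f * H + R)   # gain minimising the posterior variance
    x_a = x_f + K * (y - H * x_f)     # updated mean (conditional expectation)
    P_a = (1.0 - K * H) * P_f         # updated variance
    return x_a, P_a

# Prior N(0, 4); observation y = 1.0 with H = 1 and noise variance R = 1.
x_a, P_a = linear_update(0.0, 4.0, 1.0, 1.0, 1.0)
# x_a = 0.8, P_a = 0.8
```

The sampling-free non-linear update of the abstracts generalises this gain to higher polynomial orders in the observation, which is what the quadratic update on the Lorenz 84 example refers to.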
Propagation of Waves
David, P
2013-01-01
Propagation of Waves focuses on the wave propagation around the earth, which is influenced by its curvature, surface irregularities, and by passage through atmospheric layers that may be refracting, absorbing, or ionized. This book begins by outlining the behavior of waves in the various media and at their interfaces, which simplifies the basic phenomena, such as absorption, refraction, reflection, and interference. Applications to the case of the terrestrial sphere are also discussed as a natural generalization. Following the deliberation on the diffraction of the "ground" wave around the earth...
Energy Technology Data Exchange (ETDEWEB)
Choi, Jae Seong
1993-02-15
This book comprises nineteen chapters, covering an introduction to analytical chemistry, experimental error and statistics, chemical equilibrium and solubility, gravimetric analysis with the mechanism of precipitation, range and calculation of the result, general principles of volumetric analysis, sedimentation methods and their titration curves, acid-base balance, acid-base titration curves, complexation and firing reactions, an introduction to electroanalytical chemistry, electrodes and potentiometry, electrolysis and conductometry, voltammetry and polarography, spectrophotometry, atomic spectrometry, solvent extraction, chromatography, and experiments.
International Nuclear Information System (INIS)
Choi, Jae Seong
1993-02-01
This book comprises nineteen chapters, covering an introduction to analytical chemistry, experimental error and statistics, chemical equilibrium and solubility, gravimetric analysis with the mechanism of precipitation, range and calculation of the result, general principles of volumetric analysis, sedimentation methods and their titration curves, acid-base balance, acid-base titration curves, complexation and firing reactions, an introduction to electroanalytical chemistry, electrodes and potentiometry, electrolysis and conductometry, voltammetry and polarography, spectrophotometry, atomic spectrometry, solvent extraction, chromatography, and experiments.
International Nuclear Information System (INIS)
Anon.
1985-01-01
The Division for Analytical Chemistry continued its efforts to develop an accurate method for the separation of trace amounts from mixtures that contain various other elements. Ion exchange chromatography is of special importance in this regard. New separation techniques were tried on certain trace amounts in South African standard rock materials and special ceramics. Methods were also tested for the separation of carrier-free radioisotopes from irradiated cyclotron discs
Morse oscillator propagator in the high temperature limit I: Theory
Energy Technology Data Exchange (ETDEWEB)
Toutounji, Mohamad, E-mail: Mtoutounji@uaeu.ac.ae
2017-02-15
In an earlier work of the author, the time evolution of the Morse oscillator was studied analytically and exactly at low temperatures, whereupon optical correlation functions were calculated employing Morse oscillator coherent states. Here the Morse oscillator propagator in the high temperature limit is derived and a closed form of its corresponding canonical partition function is obtained. Both diagonal and off-diagonal forms of the Morse oscillator propagator are derived in the high temperature limit. Partition functions of diatomic molecules are calculated. - Highlights: • Derives the quantum propagator of the Morse oscillator in the high temperature limit. • Uses the resulting diagonal propagator to derive a closed form of the Morse oscillator partition function. • Provides a more sophisticated formula of the quantum propagator to test the accuracy of the results herein.
Effects of laser beam propagation in a multilevel photoionization system
International Nuclear Information System (INIS)
Izawa, Y.; Nomaru, K.; Chen, Y. W.
1995-01-01
When an intense laser pulse propagates in an atomic vapor over a long distance, the laser pulse shape, the carrier frequency and the propagation velocity are greatly modified during the propagation by the resonant and/or near-resonant interactions with atoms. We have been investigating these effects on laser beam propagation experimentally and analytically. The simulation code named CEALIS-P has been developed, which employs the coupled three-level Bloch-Maxwell equations to study the atomic excitation and laser beam propagation simultaneously. Several features of the resonant and near-resonant effects based on self-induced transparency, self-phase modulation and nonlinear group velocity dispersion are described, and the influences of such effects on the photoionization efficiency are analyzed.
Spatial Uncertainty Model for Visual Features Using a Kinect™ Sensor
Directory of Open Access Journals (Sweden)
Jae-Han Park
2012-06-01
Full Text Available This study proposes a mathematical uncertainty model for the spatial measurement of visual features using Kinect™ sensors. This model can provide qualitative and quantitative analysis for the utilization of Kinect™ sensors as 3D perception sensors. In order to achieve this objective, we derived the propagation relationship of the uncertainties between the disparity image space and the real Cartesian space with the mapping function between the two spaces. Using this propagation relationship, we obtained the mathematical model for the covariance matrix of the measurement error, which represents the uncertainty for spatial position of visual features from Kinect™ sensors. In order to derive the quantitative model of spatial uncertainty for visual features, we estimated the covariance matrix in the disparity image space using collected visual feature data. Further, we computed the spatial uncertainty information by applying the covariance matrix in the disparity image space and the calibrated sensor parameters to the proposed mathematical model. This spatial uncertainty model was verified by comparing the uncertainty ellipsoids for spatial covariance matrices and the distribution of scattered matching visual features. We expect that this spatial uncertainty model and its analyses will be useful in various Kinect™ sensor applications.
Spatial uncertainty model for visual features using a Kinect™ sensor.
Park, Jae-Han; Shin, Yong-Deuk; Bae, Ji-Hun; Baeg, Moon-Hong
2012-01-01
This study proposes a mathematical uncertainty model for the spatial measurement of visual features using Kinect™ sensors. This model can provide qualitative and quantitative analysis for the utilization of Kinect™ sensors as 3D perception sensors. In order to achieve this objective, we derived the propagation relationship of the uncertainties between the disparity image space and the real Cartesian space with the mapping function between the two spaces. Using this propagation relationship, we obtained the mathematical model for the covariance matrix of the measurement error, which represents the uncertainty for spatial position of visual features from Kinect™ sensors. In order to derive the quantitative model of spatial uncertainty for visual features, we estimated the covariance matrix in the disparity image space using collected visual feature data. Further, we computed the spatial uncertainty information by applying the covariance matrix in the disparity image space and the calibrated sensor parameters to the proposed mathematical model. This spatial uncertainty model was verified by comparing the uncertainty ellipsoids for spatial covariance matrices and the distribution of scattered matching visual features. We expect that this spatial uncertainty model and its analyses will be useful in various Kinect™ sensor applications.
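The covariance propagation described in the two Kinect™ abstracts above can be sketched to first order as Sigma_xyz = J * Sigma_uvd * J^T, where J is the Jacobian of the disparity-to-Cartesian mapping Z = f*b/d, X = Z*(u-cx)/f, Y = Z*(v-cy)/f. The intrinsic parameters below are rough, typical values assumed for illustration, not calibrated ones.

```python
def matmul(A, B):
    """Plain nested-list matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

def kinect_covariance(u, v, d, cov_uvd, f=580.0, b=0.075, cx=320.0, cy=240.0):
    """First-order propagation of disparity-space covariance into Cartesian
    space: Sigma_xyz = J Sigma_uvd J^T, J the Jacobian of the mapping."""
    Z = f * b / d                                   # depth from disparity
    J = [[Z / f, 0.0, -Z * (u - cx) / (f * d)],    # dX/d(u, v, d)
         [0.0, Z / f, -Z * (v - cy) / (f * d)],    # dY/d(u, v, d)
         [0.0, 0.0, -Z / d]]                       # dZ/d(u, v, d)
    return matmul(matmul(J, cov_uvd), transpose(J))

# Isotropic half-pixel noise in (u, v, d) at a feature 30 disparity units away.
cov_uvd = [[0.25, 0.0, 0.0], [0.0, 0.25, 0.0], [0.0, 0.0, 0.25]]
S = kinect_covariance(400.0, 300.0, 30.0, cov_uvd)
```

The -Z/d term in the last row is why the depth uncertainty, and hence the ellipsoid's major axis, grows quadratically with distance from the sensor.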
Discussion of OECD LWR Uncertainty Analysis in Modelling Benchmark
International Nuclear Information System (INIS)
Ivanov, K.; Avramova, M.; Royer, E.; Gillford, J.
2013-01-01
The demand for best estimate calculations in nuclear reactor design and safety evaluations has increased in recent years. Uncertainty quantification has been highlighted as part of the best estimate calculations. The modelling aspects of uncertainty and sensitivity analysis are to be further developed and validated on scientific grounds in support of their performance and application to multi-physics reactor simulations. The Organization for Economic Co-operation and Development (OECD) / Nuclear Energy Agency (NEA) Nuclear Science Committee (NSC) has endorsed the creation of an Expert Group on Uncertainty Analysis in Modelling (EGUAM). Within the framework of activities of EGUAM/NSC the OECD/NEA initiated the Benchmark for Uncertainty Analysis in Modelling for Design, Operation, and Safety Analysis of Light Water Reactor (OECD LWR UAM benchmark). The general objective of the benchmark is to propagate the predictive uncertainties of code results through complex coupled multi-physics and multi-scale simulations. The benchmark is divided into three phases, with Phase I highlighting the uncertainty propagation in stand-alone neutronics calculations, while Phase II and Phase III are focused on uncertainty analysis of the reactor core and the system, respectively. This paper discusses the progress made in Phase I calculations, the specifications for Phase II and the upcoming challenges in defining Phase III exercises. The challenges of applying uncertainty quantification to complex code systems, in particular to time-dependent coupled physics models, are the large computational burden and the utilization of non-linear models (expected due to the physics coupling). (authors)
Assessment of SFR Wire Wrap Simulation Uncertainties
Energy Technology Data Exchange (ETDEWEB)
Delchini, Marc-Olivier G. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Reactor and Nuclear Systems Division; Popov, Emilian L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Reactor and Nuclear Systems Division; Pointer, William David [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Reactor and Nuclear Systems Division; Swiler, Laura P. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2016-09-30
Predictive modeling and simulation of nuclear reactor performance and fuel are challenging due to the large number of coupled physical phenomena that must be addressed. Models that will be used for design or operational decisions must be analyzed for uncertainty to ascertain impacts to safety or performance. Rigorous, structured uncertainty analyses are performed by characterizing the model’s input uncertainties and then propagating the uncertainties through the model to estimate output uncertainty. This project is part of the ongoing effort to assess modeling uncertainty in Nek5000 simulations of flow configurations relevant to the advanced reactor applications of the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program. Three geometries are under investigation in these preliminary assessments: a 3-D pipe, a 3-D 7-pin bundle, and a single pin from the Thermal-Hydraulic Out-of-Reactor Safety (THORS) facility. Initial efforts have focused on gaining an understanding of Nek5000 modeling options and integrating Nek5000 with Dakota. These tasks are being accomplished by demonstrating the use of Dakota to assess parametric uncertainties in a simple pipe flow problem. This problem is used to optimize performance of the uncertainty quantification strategy and to estimate computational requirements for assessments of complex geometries. A sensitivity analysis to three turbulence models was conducted for a turbulent flow in a single wire-wrapped pin (THORS) geometry. Section 2 briefly describes the software tools used in this study and provides appropriate references. Section 3 presents the coupling interface between Dakota and a computational fluid dynamics (CFD) code (Nek5000 or STAR-CCM+), with details on the workflow, the scripts used for setting up the run, and the scripts used for post-processing the output files. In Section 4, the meshing methods used to generate the THORS and 7-pin bundle meshes are explained. Sections 5, 6 and 7 present numerical results
Spike propagation in driven chain networks with dominant global inhibition
International Nuclear Information System (INIS)
Chang Wonil; Jin, Dezhe Z.
2009-01-01
Spike propagation in chain networks is usually studied in the synfire regime, in which successive groups of neurons are synaptically activated sequentially through the unidirectional excitatory connections. Here we study the dynamics of chain networks with dominant global feedback inhibition that prevents the synfire activity. Neural activity is driven by suprathreshold external inputs. We analytically and numerically demonstrate that spike propagation along the chain is a unique dynamical attractor in a wide parameter regime. The strong inhibition permits a robust winner-take-all propagation in the case of multiple chains competing via the inhibition.
A simple three dimensional wide-angle beam propagation method
Ma, Changbao; van Keuren, Edward
2006-05-01
The development of three dimensional (3-D) waveguide structures for chip scale planar lightwave circuits (PLCs) is hampered by the lack of effective 3-D wide-angle (WA) beam propagation methods (BPMs). We present a simple 3-D wide-angle beam propagation method (WA-BPM) using Hoekstra’s scheme along with a new 3-D wave equation splitting method. The applicability, accuracy and effectiveness of our method are demonstrated by applying it to simulations of wide-angle beam propagation and comparing them with analytical solutions.
International Nuclear Information System (INIS)
Kaul, Dean C.; Egbert, Stephen D.; Woolson, William A.
2005-01-01
In order to avoid the pitfalls that so discredited DS86 and its uncertainty estimates, and to provide DS02 uncertainties that are both defensible and credible, this report not only presents the ensemble uncertainties assembled from uncertainties in individual computational elements and radiation dose components but also describes how these relate to comparisons between observed and computed quantities at critical intervals in the computational process. These comparisons include those between observed and calculated radiation free-field components, where observations include thermal- and fast-neutron activation and gamma-ray thermoluminescence, which are relevant to the estimated systematic uncertainty for DS02. The comparisons also include those between calculated and observed survivor shielding, where the observations consist of biodosimetric measurements for individual survivors, which are relevant to the estimated random uncertainty for DS02. (J.P.N.)
Analytic continuation of dual Feynman amplitudes
International Nuclear Information System (INIS)
Bleher, P.M.
1981-01-01
A notion of dual Feynman amplitude is introduced and a theorem on the existence of an analytic continuation of this amplitude from the convergence domain to the whole complex plane is proved. The case under consideration corresponds to massless power propagators, and the analytic continuation is constructed in the powers of the propagators. The poles of the analytic continuation and the singular set of external momenta are found explicitly. The proof of the theorem on the existence of the analytic continuation is based on the introduction of the α-representation for dual Feynman amplitudes. In the proof, the so-called ''trees formula'' and ''trees-with-cycles formula'' are established, which are dual in formulation to the trees and 2-trees formulae for usual Feynman amplitudes. (Auth.)
Model uncertainty and probability
International Nuclear Information System (INIS)
Parry, G.W.
1994-01-01
This paper discusses the issue of model uncertainty. The use of probability as a measure of an analyst's uncertainty as well as a means of describing random processes has caused some confusion, even though the two uses are representing different types of uncertainty with respect to modeling a system. The importance of maintaining the distinction between the two types is illustrated with a simple example
Uncertainty in artificial intelligence
Kanal, LN
1986-01-01
How to deal with uncertainty is a subject of much controversy in Artificial Intelligence. This volume brings together a wide range of perspectives on uncertainty, many of the contributors being the principal proponents in the controversy.Some of the notable issues which emerge from these papers revolve around an interval-based calculus of uncertainty, the Dempster-Shafer Theory, and probability as the best numeric model for uncertainty. There remain strong dissenting opinions not only about probability but even about the utility of any numeric method in this context.
Uncertainties in hydrogen combustion
International Nuclear Information System (INIS)
Stamps, D.W.; Wong, C.C.; Nelson, L.S.
1988-01-01
Three important areas of hydrogen combustion with uncertainties are identified: high-temperature combustion, flame acceleration and deflagration-to-detonation transition, and aerosol resuspension during hydrogen combustion. The uncertainties associated with high-temperature combustion may affect at least three different accident scenarios: the in-cavity oxidation of combustible gases produced by core-concrete interactions, the direct containment heating hydrogen problem, and the possibility of local detonations. How these uncertainties may affect the sequence of various accident scenarios is discussed and recommendations are made to reduce these uncertainties. 40 references
Uncertainty in hydrological signatures
McMillan, Hilary; Westerberg, Ida
2015-04-01
Information that summarises the hydrological behaviour or flow regime of a catchment is essential for comparing responses of different catchments to understand catchment organisation and similarity, and for many other modelling and water-management applications. Such information types derived as an index value from observed data are known as hydrological signatures, and can include descriptors of high flows (e.g. mean annual flood), low flows (e.g. mean annual low flow, recession shape), the flow variability, flow duration curve, and runoff ratio. Because the hydrological signatures are calculated from observed data such as rainfall and flow records, they are affected by uncertainty in those data. Subjective choices in the method used to calculate the signatures create a further source of uncertainty. Uncertainties in the signatures may affect our ability to compare different locations, to detect changes, or to compare future water resource management scenarios. The aim of this study was to contribute to the hydrological community's awareness and knowledge of data uncertainty in hydrological signatures, including typical sources, magnitude and methods for its assessment. We proposed a generally applicable method to calculate these uncertainties based on Monte Carlo sampling and demonstrated it for a variety of commonly used signatures. The study was made for two data rich catchments, the 50 km2 Mahurangi catchment in New Zealand and the 135 km2 Brue catchment in the UK. For rainfall data the uncertainty sources included point measurement uncertainty, the number of gauges used in calculation of the catchment spatial average, and uncertainties relating to lack of quality control. For flow data the uncertainty sources included uncertainties in stage/discharge measurement and in the approximation of the true stage-discharge relation by a rating curve. The resulting uncertainties were compared across the different signatures and catchments, to quantify uncertainty
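The Monte Carlo approach to signature uncertainty described above can be sketched for one simple signature, the runoff ratio. The record values and error magnitudes below are invented for illustration and do not come from the Mahurangi or Brue data.

```python
import random

def runoff_ratio_interval(rain, flow, n=5000, rain_err=0.08, flow_err=0.10,
                          seed=42):
    """Monte Carlo 95% interval for the runoff ratio (total flow / total
    rain) under independent multiplicative Gaussian measurement errors
    applied to each record value (illustrative error model)."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        r = sum(x * rng.gauss(1.0, rain_err) for x in rain)
        q = sum(x * rng.gauss(1.0, flow_err) for x in flow)
        samples.append(q / r)
    samples.sort()
    return samples[int(0.025 * n)], samples[int(0.975 * n)]

rain = [12.0, 0.0, 5.5, 20.1, 3.2, 8.8, 0.0, 14.6]   # mm/day, invented
flow = [4.1, 2.0, 2.5, 7.9, 3.0, 3.6, 1.8, 5.2]      # mm/day, invented
lo, hi = runoff_ratio_interval(rain, flow)
```

Signatures computed from longer derived quantities (recession shape, flow duration curve percentiles) are perturbed the same way; only the function applied to each Monte Carlo realisation changes.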
Propagation environments [Chapter 4]
Douglass F. Jacobs; Thomas D. Landis; Tara Luna
2009-01-01
An understanding of all factors influencing plant growth in a nursery environment is needed for the successful growth and production of high-quality container plants. Propagation structures modify the atmospheric conditions of temperature, light, and relative humidity. Native plant nurseries are different from typical horticultural nurseries because plants must be...
International Nuclear Information System (INIS)
Dvoeglazov, V.V.
1997-01-01
An analog of the j = 1/2 Feynman-Dyson propagator is presented in the framework of the j = 1 Weinberg's theory. The basis for this construction is the concept of the Weinberg field as a system of four field functions differing by parity and by dual transformations. (orig.)
Czech Academy of Sciences Publication Activity Database
Schejbal, V.; Bezoušek, P.; Čermák, D.; NĚMEC, Z.; Fišer, Ondřej; Hájek, M.
2006-01-01
Roč. 15, č. 1 (2006), s. 17-24 ISSN 1210-2512 R&D Projects: GA MPO(CZ) FT-TA2/030 Institutional research plan: CEZ:AV0Z30420517 Keywords : Ultra wide band * UWB antennas * UWB propagation * multipath effects Subject RIV: JB - Sensors, Measurment, Regulation
National Research Council Canada - National Science Library
Gray, William
1994-01-01
This paper discusses the question of tropical cyclone propagation or why the average tropical cyclone moves 1-2 m/s faster and usually 10-20 deg to the left of its surrounding (or 5-7 deg radius) deep layer (850-300 mb) steering current...
CSIR Research Space (South Africa)
Bidgood, Peter M
2017-01-01
Full Text Available The estimation of balance uncertainty using conventional statistical and error propagation methods has been found to be both approximate and laborious to the point of being untenable. Direct Simulation by Monte Carlo (DSMC) has been shown...
Javed, A.; Kamphues, E.; Hartuc, T.; Pecnik, R.; Van Buijtenen, J.P.
2015-01-01
The compressor impellers for mass-produced turbochargers are generally die-casted and machined to their final configuration. Manufacturing uncertainties are inherently introduced as stochastic dimensional deviations in the impeller geometry. These deviations eventually propagate into the compressor
Monte Carlo eigenfunction strategies and uncertainties
International Nuclear Information System (INIS)
Gast, R.C.; Candelore, N.R.
1974-01-01
Comparisons of convergence rates for several possible eigenfunction source strategies led to the selection of the ''straight'' analog of the analytic power method as the source strategy for Monte Carlo eigenfunction calculations. To ensure a fair game strategy, the number of histories per iteration increases with increasing iteration number. The estimate of eigenfunction uncertainty is obtained from a modification of a proposal by D. B. MacMillan and involves only estimates of the usual purely statistical component of uncertainty and a serial correlation coefficient of lag one. 14 references. (U.S.)
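Corrections of the kind mentioned above inflate the naive standard error of the mean by a factor built from the lag-one serial correlation of successive iteration estimates. A sketch under the assumption of a simple AR(1)-correlated sequence of batch estimates; this is an illustration of the idea, not the referenced implementation.

```python
import random, statistics

def corrected_std_error(batch_means):
    """Standard error of the mean of serially correlated batch estimates,
    inflated by a lag-one correlation factor sqrt((1+rho)/(1-rho))."""
    n = len(batch_means)
    mu = statistics.mean(batch_means)
    var = statistics.variance(batch_means)
    # lag-one autocovariance and correlation coefficient
    cov1 = sum((batch_means[i] - mu) * (batch_means[i + 1] - mu)
               for i in range(n - 1)) / (n - 1)
    rho = cov1 / var
    naive = (var / n) ** 0.5
    return naive * ((1.0 + rho) / (1.0 - rho)) ** 0.5

# Illustrative AR(1)-correlated k_eff batch estimates around 1.0.
rng = random.Random(0)
k, ks = 1.0, []
for _ in range(400):
    k = 1.0 + 0.6 * (k - 1.0) + rng.gauss(0.0, 0.002)
    ks.append(k)
se = corrected_std_error(ks)
```

With positive correlation (here rho near 0.6) the corrected error is roughly twice the naive one, which is exactly the underestimation the modified estimator guards against.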
Analysis of uncertainty in modeling perceived risks
International Nuclear Information System (INIS)
Melnyk, R.; Sandquist, G.M.
2005-01-01
Expanding on a mathematical model developed for quantifying and assessing perceived risks, the distribution functions, variances, and uncertainties associated with estimating the model parameters are quantified. The analytical model permits the identification and assignment of any number of quantifiable risk perception factors that can be incorporated within standard risk methodology. Those risk perception factors associated with major technical issues are modeled using lognormal probability density functions to span the potentially large uncertainty variations associated with these risk perceptions. The model quantifies the logic of public risk perception and provides an effective means for measuring and responding to perceived risks. (authors)
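The lognormal modelling of risk perception factors described above can be sketched as follows; the base risk, factor medians, and log-scale spreads are invented for illustration and are not the paper's parameters.

```python
import math, random, statistics

def perceived_risk_samples(base_risk, factors, n=20000, seed=11):
    """Perceived risk modelled as a base (technical) risk scaled by a
    product of lognormal perception factors, each given as
    (median, sigma_of_log). Illustrative model form."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        r = base_risk
        for median, sigma in factors:
            r *= rng.lognormvariate(math.log(median), sigma)
        out.append(r)
    return out

# Base risk 1e-6 per year, amplified by two perception factors: dread
# (median 5x) and unfamiliarity (median 2x), with wide log-scale spreads.
samples = perceived_risk_samples(1e-6, [(5.0, 0.5), (2.0, 0.8)])
med = statistics.median(samples)
```

Because medians of lognormals multiply, the median perceived risk lands near 1e-5 here, while the heavy upper tail captures the large uncertainty variations the abstract refers to.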
Uncertainties associated with inertial-fusion ignition
International Nuclear Information System (INIS)
McCall, G.H.
1981-01-01
An estimate is made of a worst case driving energy which is derived from analytic and computer calculations. It will be shown that the uncertainty can be reduced by a factor of 10 to 100 if certain physical effects are understood. That is not to say that the energy requirement can necessarily be reduced below that of the worst case, but it is possible to reduce the uncertainty associated with ignition energy. With laser costs in the $0.5 to 1 billion per MJ range, it can be seen that such an exercise is worthwhile
Managing uncertainty in multiple-criteria decision making related to sustainability assessment
DEFF Research Database (Denmark)
Dorini, Gianluca Fabio; Kapelan, Zoran; Azapagic, Adisa
2011-01-01
In real life, decisions are usually made by comparing different options with respect to several, often conflicting criteria. This requires subjective judgements on the importance of different criteria by DMs and increases uncertainty in decision making. This article demonstrates how uncertainty can be handled in multi-criteria decision situations using Compromise Programming, one of the Multi-criteria Decision Analysis (MCDA) techniques. Uncertainty is characterised using a probabilistic approach and propagated using a Monte Carlo simulation technique. The methodological approach is illustrated for three cases: (1) no uncertainty, (2) uncertainty in data/models and (3) uncertainty in models and decision-makers’ preferences. The results show how characterising and propagating uncertainty can help increase the effectiveness of multi-criteria decision-making processes and lead to more informed decisions.
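A Monte Carlo treatment of compromise programming of the kind this article describes can be sketched as below; the criteria weights, option scores, and error magnitudes are illustrative placeholders, not taken from the article.

```python
import random

def cp_distance(scores, weights, p=2):
    """Compromise-programming distance from the ideal point (1, ..., 1),
    with criterion scores normalised to [0, 1] and higher values better."""
    return sum((w * (1.0 - s)) ** p
               for s, w in zip(scores, weights)) ** (1.0 / p)

def prob_a_preferred(mean_a, mean_b, weights, sd=0.05, n=10000, seed=7):
    """Monte Carlo: how often option A ends up closer to the ideal than B
    when every criterion score carries Gaussian uncertainty."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(n):
        a = [rng.gauss(m, sd) for m in mean_a]
        b = [rng.gauss(m, sd) for m in mean_b]
        if cp_distance(a, weights) < cp_distance(b, weights):
            wins += 1
    return wins / n

# Illustrative: three criteria, weights reflecting DM preferences.
p_a = prob_a_preferred([0.8, 0.6, 0.7], [0.7, 0.8, 0.6], [0.5, 0.3, 0.2])
```

Instead of a single crisp ranking, the output is a preference probability; uncertainty in the weights themselves (case 3 in the abstract) is handled by sampling the weights as well.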
Analytical work on local faults in LMFBR subassembly
International Nuclear Information System (INIS)
Yoshikawa, H.; Miyaguchi, K.; Hirata, N.; Kasahara, F.
1979-01-01
Analytical codes have been developed for evaluating various severe but highly unlikely events of local faults in the LMFBR subassembly (S/A). These include: (1) local flow blockage, (2) two-phase thermohydraulics under fission gas release, and (3) inter-S/A failure propagation. A simple inter-S/A thermal failure propagation analysis code, FUMES, is described that allows an easy parametric study of propagation potential of fuel fog in a S/A. 7 refs
Uncertainty Quantification in High Throughput Screening ...
Using uncertainty quantification, we aim to improve the quality of modeling data from high throughput screening assays for use in risk assessment. ToxCast is a large-scale screening program that analyzes thousands of chemicals using over 800 assays representing hundreds of biochemical and cellular processes, including endocrine disruption, cytotoxicity, and zebrafish development. Over 2.6 million concentration response curves are fit to models to extract parameters related to potency and efficacy. Models built on ToxCast results are being used to rank and prioritize the toxicological risk of tested chemicals and to predict the toxicity of tens of thousands of chemicals not yet tested in vivo. However, the data size also presents challenges. When fitting the data, the choice of models, model selection strategy, and hit call criteria must reflect the need for computational efficiency and robustness, requiring hard and somewhat arbitrary cutoffs. When coupled with unavoidable noise in the experimental concentration response data, these hard cutoffs cause uncertainty in model parameters and the hit call itself. The uncertainty will then propagate through all of the models built on the data. Left unquantified, this uncertainty makes it difficult to fully interpret the data for risk assessment. We used bootstrap resampling methods to quantify the uncertainty in fitting models to the concentration response data. Bootstrap resampling determines confidence intervals for
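Bootstrap resampling of concentration-response data, as described above, can be sketched with a deliberately crude grid-search fit of a one-parameter saturation model (Hill slope fixed at 1). This is not the ToxCast pipeline's model set or fitting strategy, and all data values below are invented.

```python
import random

def hill(c, top, ac50):
    """Saturating response with Hill slope 1."""
    return top * c / (ac50 + c)

def fit_ac50(concs, resps, top=100.0):
    """Crude grid-search fit of AC50 minimising squared error."""
    grid = [10 ** (k / 50.0) for k in range(-150, 151)]   # 0.001 .. 1000
    return min(grid, key=lambda a: sum((r - hill(c, top, a)) ** 2
                                       for c, r in zip(concs, resps)))

def bootstrap_ac50(concs, resps, n_boot=200, seed=3):
    """Bootstrap resampling of (conc, resp) pairs to get a 95% CI on AC50."""
    rng = random.Random(seed)
    pairs = list(zip(concs, resps))
    est = []
    for _ in range(n_boot):
        sample = [rng.choice(pairs) for _ in pairs]
        est.append(fit_ac50([c for c, _ in sample], [r for _, r in sample]))
    est.sort()
    return est[int(0.025 * n_boot)], est[int(0.975 * n_boot)]

concs = [0.1, 0.3, 1.0, 3.0, 10.0, 30.0]     # invented concentrations
resps = [7.0, 25.0, 48.0, 73.0, 92.0, 95.0]  # invented noisy responses
lo, hi = bootstrap_ac50(concs, resps)
```

Repeating the fit on each resample turns hard point estimates of potency into distributions, which is what lets the uncertainty propagate into downstream hit calls and models.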
Estimating Coastal Digital Elevation Model (DEM) Uncertainty
Amante, C.; Mesick, S.
2017-12-01
Integrated bathymetric-topographic digital elevation models (DEMs) are representations of the Earth's solid surface and are fundamental to the modeling of coastal processes, including tsunami, storm surge, and sea-level rise inundation. Deviations in elevation values from the actual seabed or land surface constitute errors in DEMs, which originate from numerous sources, including: (i) the source elevation measurements (e.g., multibeam sonar, lidar), (ii) the interpolative gridding technique (e.g., spline, kriging) used to estimate elevations in areas unconstrained by source measurements, and (iii) the datum transformation used to convert bathymetric and topographic data to common vertical reference systems. The magnitude and spatial distribution of the errors from these sources are typically unknown, and the lack of knowledge regarding these errors represents the vertical uncertainty in the DEM. The National Oceanic and Atmospheric Administration (NOAA) National Centers for Environmental Information (NCEI) has developed DEMs for more than 200 coastal communities. This study presents a methodology developed at NOAA NCEI to derive accompanying uncertainty surfaces that estimate DEM errors at the individual cell-level. The development of high-resolution (1/9th arc-second), integrated bathymetric-topographic DEMs along the southwest coast of Florida serves as the case study for deriving uncertainty surfaces. The estimated uncertainty can then be propagated into the modeling of coastal processes that utilize DEMs. Incorporating the uncertainty produces more reliable modeling results, and in turn, better-informed coastal management decisions.
Uncertainty information in climate data records from Earth observation
Merchant, C. J.
2017-12-01
How to derive and present uncertainty in climate data records (CDRs) has been debated within the European Space Agency Climate Change Initiative, in search of common principles applicable across a range of essential climate variables. Various points of consensus have been reached, including the importance of improving provision of uncertainty information and the benefit of adopting international norms of metrology for language around the distinct concepts of uncertainty and error. Providing an estimate of standard uncertainty per datum (or the means to readily calculate it) emerged as baseline good practice, and should be highly relevant to users of CDRs when the uncertainty in data is variable (the usual case). Given this baseline, the role of quality flags is clarified as being complementary to and not repetitive of uncertainty information. Data with high uncertainty are not poor quality if a valid estimate of the uncertainty is available. For CDRs and their applications, the error correlation properties across spatio-temporal scales present important challenges that are not fully solved. Error effects that are negligible in the uncertainty of a single pixel may dominate uncertainty in the large-scale and long-term. A further principle is that uncertainty estimates should themselves be validated. The concepts of estimating and propagating uncertainty are generally acknowledged in geophysical sciences, but less widely practised in Earth observation and development of CDRs. Uncertainty in a CDR depends in part (and usually significantly) on the error covariance of the radiances and auxiliary data used in the retrieval. Typically, error covariance information is not available in the fundamental CDR (FCDR) (i.e., with the level-1 radiances), since provision of adequate level-1 uncertainty information is not yet standard practice. Those deriving CDRs thus cannot propagate the radiance uncertainty to their geophysical products. The FIDUCEO project (www.fiduceo.eu) is
Instability Versus Equilibrium Propagation of Laser Beam in Plasma
Lushnikov, Pavel M.; Rose, Harvey A.
2003-01-01
We obtain, for the first time, an analytic theory of the forward stimulated Brillouin scattering instability of a spatially and temporally incoherent laser beam that controls the transition between statistical equilibrium and non-equilibrium (unstable) self-focusing regimes of beam propagation. The stability boundary may be used as a comprehensive guide for inertial confinement fusion designs. Well into the stable regime, an analytic expression for the angular diffusion coefficient is obtained…
Aspects of uncertainty analysis in accident consequence modeling
International Nuclear Information System (INIS)
Travis, C.C.; Hoffman, F.O.
1981-01-01
Mathematical models are frequently used to determine probable dose to man from an accidental release of radionuclides by a nuclear facility. With increased emphasis on the accuracy of these models, the incorporation of uncertainty analysis has become one of the most crucial and sensitive components in evaluating the significance of model predictions. In the present paper, we address three aspects of uncertainty in models used to assess the radiological impact to humans: uncertainties resulting from the natural variability in human biological parameters; the propagation of parameter variability by mathematical models; and comparison of model predictions to observational data
Exploring Heterogeneous Multicore Architectures for Advanced Embedded Uncertainty Quantification.
Energy Technology Data Exchange (ETDEWEB)
Phipps, Eric T.; Edwards, Harold C.; Hu, Jonathan J.
2014-09-01
We explore rearrangements of classical uncertainty quantification methods with the aim of achieving higher aggregate performance for uncertainty quantification calculations on emerging multicore and manycore architectures. We show that a rearrangement of the stochastic Galerkin method leads to improved performance and scalability on several computational architectures, whereby uncertainty information is propagated at the lowest levels of the simulation code, improving memory access patterns, exposing new dimensions of fine-grained parallelism, and reducing communication. We also develop a general framework for implementing such rearrangements for a diverse set of uncertainty quantification algorithms as well as the computational simulation codes to which they are applied.
Some target assay uncertainties for passive neutron coincidence counting
International Nuclear Information System (INIS)
Ensslin, N.; Langner, D.G.; Menlove, H.O.; Miller, M.C.; Russo, P.A.
1990-01-01
This paper provides some target assay uncertainties for passive neutron coincidence counting of plutonium metal, oxide, mixed oxide, and scrap and waste. The target values are based in part on past user experience and in part on the estimated results from new coincidence counting techniques that are under development. The paper summarizes assay error sources and the new coincidence techniques, and recommends the technique that is likely to yield the lowest assay uncertainty for a given material type. These target assay uncertainties are intended to be useful for NDA instrument selection and assay variance propagation studies for both new and existing facilities. 14 refs., 3 tabs
Implicit knowledge of visual uncertainty guides decisions with asymmetric outcomes
DEFF Research Database (Denmark)
Whiteley, Louise Emma; Sahani, Maneesh
2008-01-01
… under conditions of uncertainty. Here we show that human observers performing a simple visual choice task under an externally imposed loss function approach the optimal strategy, as defined by Bayesian probability and decision theory (Berger, 1985; Cox, 1961). In concert with earlier work, this suggests that observers possess a model of their internal uncertainty and can utilize this model in the neural computations that underlie their behavior (Knill & Pouget, 2004). In our experiment, optimal behavior requires that observers integrate the loss function with an estimate of their internal uncertainty rather than … [such models of uncertainty] are pre-existing, widespread, and can be propagated to decision-making areas of the brain.
Uncertainty in social dilemmas
Kwaadsteniet, Erik Willem de
2007-01-01
This dissertation focuses on social dilemmas, and more specifically, on environmental uncertainty in these dilemmas. Real-life social dilemma situations are often characterized by uncertainty. For example, fishermen mostly do not know the exact size of the fish population (i.e., resource size) …
Uncertainty and Climate Change
Berliner, L. Mark
2003-01-01
Anthropogenic, or human-induced, climate change is a critical issue in science and in the affairs of humankind. Though the target of substantial research, the conclusions of climate change studies remain subject to numerous uncertainties. This article presents a very brief review of the basic arguments regarding anthropogenic climate change with particular emphasis on uncertainty.
Deterministic uncertainty analysis
International Nuclear Information System (INIS)
Worley, B.A.
1987-01-01
Uncertainties of computer results are of primary interest in applications such as high-level waste (HLW) repository performance assessment in which experimental validation is not possible or practical. This work presents an alternate deterministic approach for calculating uncertainties that has the potential to significantly reduce the number of computer runs required for conventional statistical analysis. 7 refs., 1 fig
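A minimal sketch of one deterministic approach: propagating standard uncertainties through first-order sensitivities obtained by finite differences rather than by statistical sampling. The model f and all numerical values are hypothetical; this illustrates the general idea, not the specific method of the report:

```python
import math

def f(x):
    """Hypothetical response model: y = x0 * exp(-x1) + x2**2."""
    return x[0] * math.exp(-x[1]) + x[2] ** 2

def deterministic_uncertainty(f, x, dx, h=1e-6):
    """First-order propagation: var(y) = sum_i (df/dx_i)^2 * var(x_i),
    with derivatives from central finite differences (no random sampling)."""
    var = 0.0
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        dfdx = (f(xp) - f(xm)) / (2 * h)
        var += (dfdx * dx[i]) ** 2
    return math.sqrt(var)

x = [2.0, 0.5, 1.0]    # nominal parameter values
dx = [0.1, 0.05, 0.2]  # standard uncertainties of the parameters
print(f"sigma_y = {deterministic_uncertainty(f, x, dx):.4f}")
```

Each parameter costs two model evaluations here, instead of the hundreds or thousands of runs a Monte Carlo analysis would need.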
Design Against Propagating Shear Failure in Pipelines
Leis, B. N.; Gray, J. Malcolm
Propagating shear failure can occur in gas and certain hazardous liquid transmission pipelines, potentially leading to a large long-burning fire and/or widespread pollution, depending on the transported product. Such consequences require that the design of the pipeline and the specification of the steel effectively preclude the chance of propagating shear failure. Because the phenomenology of such failures is complex, design against such occurrences has historically relied on full-scale demonstration experiments coupled with empirically calibrated analytical models. However, as economic drivers have pushed toward larger-diameter, higher-pressure pipelines made of tough higher-strength grades, the design basis to ensure arrest has been severely compromised. Accordingly, for applications where the design basis becomes less certain, as has occurred increasingly as steel grade and toughness have increased, it has become necessary to place greater reliance on the use and role of full-scale testing.
Working fluid selection for organic Rankine cycles - Impact of uncertainty of fluid properties
DEFF Research Database (Denmark)
Frutiger, Jerome; Andreasen, Jesper Graa; Liu, Wei
2016-01-01
This study presents a generic methodology to select working fluids for ORC (Organic Rankine Cycles), taking into account property uncertainties of the working fluids. A Monte Carlo procedure is described as a tool to propagate the influence of the input uncertainty of the fluid parameters on the ORC. The methodology comprises: 1) … of process models and constraints; 2) selection of property models, i.e. the Peng-Robinson equation of state; 3) screening of 1965 possible working fluid candidates, including identification of optimal process parameters based on Monte Carlo sampling; and 4) propagation of the uncertainty of fluid parameters to the ORC net power output. The net power outputs of all the feasible working fluids were ranked including their uncertainties. The method could propagate and quantify the input uncertainty of the fluid-property parameters to the ORC model, giving an additional dimension to the fluid selection process. In the given analysis …
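The Monte Carlo propagation step can be sketched as follows; the surrogate power function, the property values, and their uncertainties are invented placeholders, not results for any fluid in the study (a real workflow would call a full cycle model with an equation of state):

```python
import numpy as np

rng = np.random.default_rng(42)
N = 10_000

# Hypothetical fluid-property inputs with assumed standard uncertainties
Tc = rng.normal(565.0, 5.0, N)      # critical temperature [K]
pc = rng.normal(3.4e6, 0.1e6, N)    # critical pressure [Pa]
w = rng.normal(0.30, 0.02, N)       # acentric factor [-]

def net_power(Tc, pc, w):
    """Toy surrogate mapping property samples to net power output [kW]."""
    return 120.0 * (Tc / 565.0) ** 1.5 * (3.4e6 / pc) ** 0.3 \
        * (1.0 + 0.5 * (w - 0.30))

W = net_power(Tc, pc, w)
print(f"net power = {W.mean():.1f} +/- {W.std(ddof=1):.1f} kW")
```

Ranking candidate fluids by the resulting mean and standard deviation of net power is what adds the extra dimension to the selection process.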
Conditional uncertainty principle
Gour, Gilad; Grudka, Andrzej; Horodecki, Michał; Kłobus, Waldemar; Łodyga, Justyna; Narasimhachar, Varun
2018-04-01
We develop a general operational framework that formalizes the concept of conditional uncertainty in a measure-independent fashion. Our formalism is built upon a mathematical relation which we call conditional majorization. We define conditional majorization and, for the case of classical memory, we provide its thorough characterization in terms of monotones, i.e., functions that preserve the partial order under conditional majorization. We demonstrate the application of this framework by deriving two types of memory-assisted uncertainty relations, (1) a monotone-based conditional uncertainty relation and (2) a universal measure-independent conditional uncertainty relation, both of which set a lower bound on the minimal uncertainty that Bob has about Alice's pair of incompatible measurements, conditioned on arbitrary measurement that Bob makes on his own system. We next compare the obtained relations with their existing entropic counterparts and find that they are at least independent.
Physical Uncertainty Bounds (PUB)
Energy Technology Data Exchange (ETDEWEB)
Vaughan, Diane Elizabeth [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Preston, Dean L. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2015-03-19
This paper introduces and motivates the need for a new methodology for determining upper bounds on the uncertainties in simulations of engineered systems due to limited fidelity in the composite continuum-level physics models needed to simulate the systems. We show that traditional uncertainty quantification methods provide, at best, a lower bound on this uncertainty. We propose to obtain bounds on the simulation uncertainties by first determining bounds on the physical quantities or processes relevant to system performance. By bounding these physics processes, as opposed to carrying out statistical analyses of the parameter sets of specific physics models or simply switching out the available physics models, one can obtain upper bounds on the uncertainties in simulated quantities of interest.
Measurement uncertainty and probability
Willink, Robin
2013-01-01
A measurement result is incomplete without a statement of its 'uncertainty' or 'margin of error'. But what does this statement actually tell us? By examining the practical meaning of probability, this book discusses what is meant by a '95 percent interval of measurement uncertainty', and how such an interval can be calculated. The book argues that the concept of an unknown 'target value' is essential if probability is to be used as a tool for evaluating measurement uncertainty. It uses statistical concepts, such as a conditional confidence interval, to present 'extended' classical methods for evaluating measurement uncertainty. The use of the Monte Carlo principle for the simulation of experiments is described. Useful for researchers and graduate students, the book also discusses other philosophies relating to the evaluation of measurement uncertainty. It employs clear notation and language to avoid the confusion that exists in this controversial field of science.
Conclusions on measurement uncertainty in microbiology.
Forster, Lynne I
2009-01-01
Since its first issue in 1999, testing laboratories wishing to comply with all the requirements of ISO/IEC 17025 have been collecting data for estimating uncertainty of measurement for quantitative determinations. In the microbiological field of testing, some debate has arisen as to whether uncertainty needs to be estimated for each method performed in the laboratory for each type of sample matrix tested. Queries also arise concerning the estimation of uncertainty when plate/membrane filter colony counts are below recommended method counting range limits. A selection of water samples (with low to high contamination) was tested in replicate, with the associated uncertainty of measurement being estimated from the analytical results obtained. The analyses performed on the water samples included total coliforms, fecal coliforms, fecal streptococci by membrane filtration, and heterotrophic plate counts by the pour plate technique. For those samples where plate/membrane filter colony counts were ≥20, uncertainty estimates at a 95% confidence level were very similar for the methods, being estimated as 0.13, 0.14, 0.14, and 0.12, respectively. For those samples where plate/membrane filter colony counts were <20, estimated uncertainty values for each sample showed close agreement with published confidence limits established using a Poisson distribution approach.
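The Poisson-based view of low colony counts can be sketched as follows, using a normal approximation to the exact Poisson limits; the helper name and the printed counts are ours, chosen only to show how the counting uncertainty shrinks with increasing counts:

```python
import math

def log10_halfwidth(n):
    """Approximate 95% half-width, in log10 units, of a colony count n,
    assuming Poisson counting statistics (normal approximation)."""
    if n <= 0:
        raise ValueError("count must be positive")
    upper = n + 1.96 * math.sqrt(n)
    return math.log10(upper / n)

for n in (5, 20, 100):
    print(f"count {n:3d}: +/-{log10_halfwidth(n):.3f} log10 units")
```

Counts well inside the recommended range carry a much smaller log-scale uncertainty than counts below it, which is why separate treatment of the two regimes is discussed in the abstract.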
Decision analytic tools for resolving uncertainty in the energy debate
International Nuclear Information System (INIS)
Renn, O.
1986-01-01
Within the context of a Social Compatibility Study on Energy Supply Systems, a complex decision-making model was used to incorporate scientific expertise and public participation into the process of policy formulation and evaluation. The study was directed by the program group ''Technology and Society'' of the Nuclear Research Centre Juelich. It consisted of three parts: First, with the aid of value tree analysis, the whole spectrum of concerns and dimensions relevant to the energy issue in Germany was collected and structured in a combined value tree representing the values and criteria of nine important interest groups in the Federal Republic of Germany. Second, the revealed criteria were translated into indicators. Four different energy scenarios were evaluated with respect to each indicator, making use of physical measurement, literature review and expert surveys. Third, the weights for each indicator were elicited by interviewing randomly chosen citizens. Those citizens were informed about the scenarios and their impacts prior to the weighting process in a four-day seminar. As a result, most citizens favoured more moderate energy scenarios, assigning high priority to energy conservation. Nuclear energy was perceived as a necessary energy source in the long run, but should be restricted to meet only the demand that cannot be covered by other energy means. (orig.)
Practical estimation of the uncertainty of analytical measurement standards
Peters, R.J.B.; Elbers, I.J.W.; Klijnstra, M.D.; Stolker, A.A.M.
2011-01-01
Nowadays, a lot of time and resources are used to determine the quality of goods and services. As a consequence, the quality of measurements themselves, e.g., the metrological traceability of the measured quantity values is essential to allow a proper evaluation of the results with regard to
Propagator of stochastic electrodynamics
International Nuclear Information System (INIS)
Cavalleri, G.
1981-01-01
The ''elementary propagator'' for the position of a free charged particle subject to the zero-point electromagnetic field with Lorentz-invariant spectral density proportional to ω³ is obtained. The nonstationary process for the position is solved by the stationary process for the acceleration. The dispersion of the position elementary propagator is compared with that of quantum electrodynamics. Finally, the evolution of the probability density is obtained starting from an initial distribution confined in a small volume and with a Gaussian distribution in the velocities. The resulting probability density for the position turns out to be equal, to within radiative corrections, to ψψ*, where ψ is the Kennard wave packet. If the radiative corrections are retained, the present result is new since the corresponding expression in quantum electrodynamics has not yet been found. Besides preceding quantum electrodynamics for this problem, no renormalization is required in stochastic electrodynamics.
Preventing Unofficial Information Propagation
Le, Zhengyi; Ouyang, Yi; Xu, Yurong; Ford, James; Makedon, Fillia
Digital copies are susceptible to theft and vulnerable to leakage, copying, or manipulation. When someone (or some group), who has stolen, leaked, copied, or manipulated digital documents propagates the documents over the Internet and/or distributes those through physical distribution channels many challenges arise which document holders must overcome in order to mitigate the impact to their privacy or business. This paper focuses on the propagation problem of digital credentials, which may contain sensitive information about a credential holder. Existing work such as access control policies and the Platform for Privacy Preferences (P3P) assumes that qualified or certified credential viewers are honest and reliable. The proposed approach in this paper uses short-lived credentials based on reverse forward secure signatures to remove this assumption and mitigate the damage caused by a dishonest or honest but compromised viewer.
Analytical Mechanics
Helrich, Carl S
2017-01-01
This advanced undergraduate textbook begins with the Lagrangian formulation of Analytical Mechanics and then passes directly to the Hamiltonian formulation and the canonical equations, with constraints incorporated through Lagrange multipliers. Hamilton's Principle and the canonical equations remain the basis of the remainder of the text. Topics considered for applications include small oscillations, motion in electric and magnetic fields, and rigid body dynamics. The Hamilton-Jacobi approach is developed with special attention to the canonical transformation in order to provide a smooth and logical transition into the study of complex and chaotic systems. Finally the text has a careful treatment of relativistic mechanics and the requirement of Lorentz invariance. The text is enriched with an outline of the history of mechanics, which particularly outlines the importance of the work of Euler, Lagrange, Hamilton and Jacobi. Numerous exercises with solutions support the exceptionally clear and concise treatment...
Uncertainty and validation. Effect of model complexity on uncertainty estimates
International Nuclear Information System (INIS)
Elert, M.
1996-09-01
In the Model Complexity subgroup of BIOMOVS II, models of varying complexity have been applied to the problem of downward transport of radionuclides in soils. A scenario describing a case of surface contamination of a pasture soil was defined. Three different radionuclides with different environmental behavior and radioactive half-lives were considered: Cs-137, Sr-90 and I-129. The intention was to give a detailed specification of the parameters required by different kinds of model, together with reasonable values for the parameter uncertainty. A total of seven modelling teams participated in the study using 13 different models. Four of the modelling groups performed uncertainty calculations using nine different modelling approaches. The models used range in complexity from analytical solutions of a 2-box model using annual average data to numerical models coupling hydrology and transport using data varying on a daily basis. The complex models needed to consider all aspects of radionuclide transport in a soil with a variable hydrology are often impractical to use in safety assessments. Instead, simpler models, often box models, are preferred. The comparison of predictions made with the complex models and the simple models for this scenario shows that the predictions in many cases are very similar, e.g. in the predictions of the evolution of the root zone concentration. However, in other cases differences of many orders of magnitude can appear. One example is the prediction of the flux to the groundwater of radionuclides being transported through the soil column. Some issues that have come into focus in this study: There are large differences in the predicted soil hydrology and as a consequence also in the radionuclide transport, which suggests that there are large uncertainties in the calculation of effective precipitation and evapotranspiration. The approach used for modelling the water transport in the root zone has an impact on the predictions of the decline in root …
Farrance, Ian; Frenkel, Robert
2014-01-01
The Guide to the Expression of Uncertainty in Measurement (usually referred to as the GUM) provides the basic framework for evaluating uncertainty in measurement. The GUM however does not always provide clearly identifiable procedures suitable for medical laboratory applications, particularly when internal quality control (IQC) is used to derive most of the uncertainty estimates. The GUM modelling approach requires advanced mathematical skills for many of its procedures, but Monte Carlo simulation (MCS) can be used as an alternative for many medical laboratory applications. In particular, calculations for determining how uncertainties in the input quantities to a functional relationship propagate through to the output can be accomplished using a readily available spreadsheet such as Microsoft Excel. The MCS procedure uses algorithmically generated pseudo-random numbers which are then forced to follow a prescribed probability distribution. When IQC data provide the uncertainty estimates, the normal (Gaussian) distribution is generally considered appropriate, but MCS is by no means restricted to this particular case. With input variations simulated by random numbers, the functional relationship then provides the corresponding variations in the output in a manner which also provides its probability distribution. The MCS procedure thus provides output uncertainty estimates without the need for the differential equations associated with GUM modelling. The aim of this article is to demonstrate the ease with which Microsoft Excel (or a similar spreadsheet) can be used to provide an uncertainty estimate for measurands derived through a functional relationship. In addition, we also consider the relatively common situation where an empirically derived formula includes one or more 'constants', each of which has an empirically derived numerical value. Such empirically derived 'constants' must also have associated uncertainties which propagate through the functional relationship.
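A minimal Python analogue of the spreadsheet MCS procedure, compared against first-order GUM propagation; the measurand y = a·x1/x2, its empirically derived 'constant' a, and all numerical values are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000

# Hypothetical measurand y = a * x1 / x2, where a is an empirically derived
# "constant" that itself carries an uncertainty
a_mu, a_sd = 2.50, 0.02
x1_mu, x1_sd = 10.0, 0.10
x2_mu, x2_sd = 4.00, 0.05

# Monte Carlo simulation: sample the inputs, evaluate the relationship
a = rng.normal(a_mu, a_sd, N)
x1 = rng.normal(x1_mu, x1_sd, N)
x2 = rng.normal(x2_mu, x2_sd, N)
y = a * x1 / x2
u_mcs = y.std(ddof=1)

# First-order GUM propagation (relative uncertainties add in quadrature
# for a product/quotient), for comparison
y0 = a_mu * x1_mu / x2_mu
u_gum = y0 * np.sqrt((a_sd / a_mu) ** 2 + (x1_sd / x1_mu) ** 2
                     + (x2_sd / x2_mu) ** 2)

print(f"u(y): MCS {u_mcs:.4f} vs GUM {u_gum:.4f}")
```

For this nearly linear relationship the two estimates agree closely, which is the point of the article: the spreadsheet-style simulation reproduces the GUM result without any differentiation.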
Simulation of excitation and propagation of pico-second ultrasound
International Nuclear Information System (INIS)
Yang, Seung Yong; Kim, No Hyu
2016-01-01
This paper presents an analytic and numerical simulation of the generation and propagation of pico-second ultrasound with nano-scale wavelength, enabling the production of bulk waves in thin films. An analytic model of laser-matter interaction and elasto-dynamic wave propagation is introduced to calculate the elastic strain pulse in microstructures. The model includes the laser-pulse absorption on the material surface, heat transfer from a photon to the elastic energy of a phonon, and acoustic wave propagation to formulate the governing equations of ultra-short ultrasound. The excitation and propagation of acoustic pulses produced by ultra-short laser pulses are numerically simulated for an aluminum substrate using the finite-difference method and compared with the analytical solution. Furthermore, Fourier analysis was performed to investigate the frequency spectrum of the simulated elastic wave pulse. It is concluded that a pico-second bulk wave with a very high frequency of up to hundreds of gigahertz is successfully generated in metals using a 100-fs laser pulse and that it can be propagated in the direction of thickness for thickness less than 100 nm
Nuclear data uncertainties for local power densities in the Martin-Hoogenboom benchmark
International Nuclear Information System (INIS)
Van der Marck, S.C.; Rochman, D.A.
2013-01-01
The recently developed method of fast Total Monte Carlo to propagate nuclear data uncertainties was applied to the Martin-Hoogenboom benchmark. This benchmark prescribes that one calculate local pin powers (for a light water cooled reactor) with a statistical uncertainty lower than 1% everywhere. Here we report, for the first time, an estimate of the nuclear data uncertainties for these local pin powers. For each of the more than 6 million local power tallies, the uncertainty due to nuclear data was calculated, based on random variation of the data for U-235, U-238, Pu-239 and H in H2O thermal scattering. In the center of the core region, the nuclear data uncertainty is 0.9%. Towards the edges of the core, this uncertainty increases to roughly 3%. The nuclear data uncertainties have been shown to be larger than the statistical uncertainties that the benchmark prescribes.
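The fast Total Monte Carlo idea above can be sketched in a few lines: repeat the calculation with randomly perturbed nuclear data, then separate the spread due to the data from the per-run statistical noise. This is only a toy stand-in, not the benchmark calculation itself: the `run_transport` function, the 0.9% data spread and the history count are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

def run_transport(data_sample, n_histories=10_000):
    # Stand-in for a full Monte Carlo transport run: returns a pin-power
    # tally whose mean shifts with the sampled nuclear data and whose
    # statistical noise shrinks with the number of histories.
    true_power = 1.0 * (1.0 + data_sample)          # data-driven shift
    noise = rng.normal(0.0, 1.0 / np.sqrt(n_histories))
    return true_power + noise

# Fast TMC: repeat the calculation with random nuclear-data samples.
n_runs = 200
data_samples = rng.normal(0.0, 0.009, size=n_runs)  # ~0.9% assumed data spread
tallies = np.array([run_transport(s) for s in data_samples])

# Spread across runs mixes data and statistical variance; subtract the latter.
total_var = tallies.var(ddof=1)
stat_var = (1.0 / np.sqrt(10_000)) ** 2             # per-run statistical variance
data_unc = np.sqrt(max(total_var - stat_var, 0.0)) / tallies.mean()
print(f"nuclear-data uncertainty: {100 * data_unc:.2f}%")
```

The subtraction step is what makes the method "fast": the per-run statistics can stay relatively coarse because their known variance is removed from the observed spread.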
International Nuclear Information System (INIS)
Hofer, E.; Hoffman, F.O.
1987-02-01
The uncertainty analysis of model predictions has to discriminate between two fundamentally different types of uncertainty. The presence of stochastic variability (Type 1 uncertainty) necessitates the use of a probabilistic model instead of the much simpler deterministic one. Lack of knowledge (Type 2 uncertainty), however, applies to deterministic as well as to probabilistic model predictions and often dominates over uncertainties of Type 1. The term "probability" is interpreted differently in the probabilistic analysis of either type of uncertainty. After these distinctions have been explained, the discussion centers on the propagation of parameter uncertainties through the model, the derivation of quantitative uncertainty statements for model predictions, and the presentation and interpretation of the results of a Type 2 uncertainty analysis. Various alternative approaches are compared for a very simple deterministic model.
Dilaton cosmology and the modified uncertainty principle
International Nuclear Information System (INIS)
Majumder, Barun
2011-01-01
Very recently, Ali et al. (2009) proposed a new generalized uncertainty principle (with a linear term in the Planck length) which is consistent with doubly special relativity and string theory. The classical and quantum effects of this generalized uncertainty principle (termed the modified uncertainty principle, or MUP) are investigated on the phase space of a dilatonic cosmological model with an exponential dilaton potential in a flat Friedmann-Robertson-Walker background. Interestingly, as a consequence of the MUP, we found that it is possible to obtain a late-time acceleration for this model. For the quantum mechanical description in both the commutative and MUP frameworks, we found the analytical solutions of the Wheeler-DeWitt equation for the early universe and compared our results. We have used an approximation method in the case of the MUP.
BetaShape: A new code for improved analytical calculations of beta spectra
Directory of Open Access Journals (Sweden)
Mougeot Xavier
2017-01-01
The new code BetaShape has been developed in order to improve the nuclear data related to beta decays. An analytical model was adopted, except for the relativistic electron wave functions, to ensure fast calculations. Output quantities are mean energies, log ft values, and beta and neutrino spectra for single and multiple transitions. The uncertainties from the input parameters, read from an ENSDF file, are propagated. A database of experimental shape factors is included. A comparison over the entire ENSDF database with the standard code currently used in nuclear data evaluations shows consistent results for the vast majority of the transitions and highlights the improvements that can be expected from the use of BetaShape.
Understanding Climate Uncertainty with an Ocean Focus
Tokmakian, R. T.
2009-12-01
Uncertainty in climate simulations arises from various aspects of the end-to-end process of modeling the Earth's climate. First, there is uncertainty from the structure of the climate model components (e.g. ocean/ice/atmosphere). Even the most complex models are deficient, not only in the complexity of the processes they represent, but in which processes are included in a particular model. Next, uncertainties arise from the inherent error in the initial and boundary conditions of a simulation. Initial conditions describe the state of the weather or climate at the beginning of the simulation and typically come from observations. Finally, there is the uncertainty associated with the values of parameters in the model. These parameters may represent physical constants or effects, such as ocean mixing, or non-physical aspects of modeling and computation. The uncertainty in these input parameters propagates through the non-linear model to give uncertainty in the outputs. The models in 2020 will no doubt be better than today's models, but they will still be imperfect, and the development of uncertainty analysis technology is a critical aspect of understanding model realism and prediction capability. Smith [2002] and Cox and Stephenson [2007] discuss the need for methods to quantify the uncertainties within complicated systems so that limitations or weaknesses of the climate model can be understood. In making climate predictions, we need to have available both the most reliable model or simulation and methods to quantify the reliability of a simulation. If quantitative uncertainty questions of the internal model dynamics are to be answered with complex simulations such as AOGCMs, then the only known path forward is based on model ensembles that characterize behavior with alternative parameter settings [e.g. Rougier, 2007]. The relevance and feasibility of using "Statistical Analysis of Computer Code Output" (SACCO) methods for examining uncertainty in
Verification of uncertainty budgets
DEFF Research Database (Denmark)
Heydorn, Kaj; Madsen, B.S.
2005-01-01
…, and therefore it is essential that the applicability of the overall uncertainty budget to actual measurement results be verified on the basis of current experimental data. This should be carried out by replicate analysis of samples taken in accordance with the definition of the measurand, but representing… the full range of matrices and concentrations for which the budget is assumed to be valid. In this way the assumptions made in the uncertainty budget can be experimentally verified, both as regards sources of variability that are assumed negligible, and dominant uncertainty components. Agreement between…
Communicating spatial uncertainty to non-experts using R
Luzzi, Damiano; Sawicka, Kasia; Heuvelink, Gerard; de Bruin, Sytze
2016-04-01
Effective visualisation methods are important for the efficient use of uncertainty information by various groups of users. Uncertainty propagation analysis is often used with spatial environmental models to quantify the uncertainty within the information. A challenge arises when trying to effectively communicate the uncertainty information to non-experts (not statisticians) in a wide range of cases. Due to the growing popularity and applicability of the open source programming language R, we undertook a project to develop an R package that facilitates uncertainty propagation analysis in spatial environmental modelling. The package implements Monte Carlo algorithms for uncertainty propagation, the output of which is represented by an ensemble of model outputs (i.e. a sample from a probability distribution). Numerous visualisation methods exist that aim to present such spatial uncertainty information statically, dynamically and interactively. To provide the most universal visualisation tools for non-experts, we conducted a survey among a group of 20 university students and assessed the effectiveness of selected static and interactive methods for visualising uncertainty in spatial variables such as a DEM and land cover. The static methods included adjacent maps and glyphs for continuous variables. Both allow for displaying maps with information about the ensemble mean, variance/standard deviation and prediction intervals. Adjacent maps were also used for categorical data, displaying maps of the most probable class, as well as its associated probability. The interactive methods included a graphical user interface, which in addition to displaying the previously mentioned variables also allowed for comparison of joint uncertainties at multiple locations. The survey indicated that users could understand the basics of the uncertainty information displayed in the static maps, with the interactive interface allowing for more in-depth information. Subsequently, the R
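The summary layers described above (ensemble mean, standard deviation and prediction intervals) can be computed directly from a Monte Carlo ensemble. A minimal sketch in Python rather than R, with an invented 50x50 "DEM" and an assumed 2 m standard error:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "DEM": a 50x50 elevation grid with a uniform, assumed measurement error.
n_realisations, ny, nx = 500, 50, 50
true_dem = np.fromfunction(lambda i, j: 100 + 0.5 * i + 0.2 * j, (ny, nx))
sigma = 2.0  # assumed 2 m standard error everywhere

# Monte Carlo ensemble: each realisation is one plausible DEM.
ensemble = true_dem + rng.normal(0.0, sigma, size=(n_realisations, ny, nx))

# Summary layers typically shown to users as adjacent maps:
mean_map = ensemble.mean(axis=0)
sd_map = ensemble.std(axis=0, ddof=1)
lo_map, hi_map = np.percentile(ensemble, [5, 95], axis=0)  # 90% interval
```

Each summary array has the grid's shape and can be rendered as a separate map; the adjacent-maps method discussed in the abstract simply places these layers side by side.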
Seismic wave propagation in fractured media: A discontinuous Galerkin approach
De Basabe, Jonás D.
2011-01-01
We formulate and implement a discontinuous Galerkin method for elastic wave propagation that allows for discontinuities in the displacement field to simulate fractures or faults using the linear-slip model. We show numerical results using a 2D model with one linear-slip discontinuity and different frequencies. The results show a good agreement with analytic solutions. © 2011 Society of Exploration Geophysicists.
A depth-dependent formula for shallow water propagation
Sertlek, H.O.; Ainslie, M.A.
2014-01-01
In shallow water propagation, the sound field depends on the proximity of the receiver to the sea surface, the seabed, the source depth, and the complementary source depth. While normal mode theory can predict this depth dependence, it can be computationally intensive. In this work, an analytical
Uncertainty and sensitivity studies supporting the interpretation of the results of TVO I/II PRA
International Nuclear Information System (INIS)
Holmberg, J.
1992-01-01
A comprehensive Level 1 probabilistic risk assessment (PRA) has been performed for the TVO I/II nuclear power units. As a part of the PRA project, uncertainties of risk models and methods were systematically studied in order to describe them and to demonstrate their impact by way of results. The uncertainty study was divided into two phases: a qualitative and a quantitative study. The qualitative study contained identification of uncertainties and qualitative assessments of their importance. The PRA was introduced, and identified assumptions and uncertainties behind the models were documented. The most significant uncertainties were selected by importance measures or other judgements for further quantitative studies. The quantitative study included sensitivity studies and propagation of uncertainty ranges. In the sensitivity studies uncertain assumptions or parameters were varied in order to illustrate the sensitivity of the models. The propagation of the uncertainty ranges demonstrated the impact of the statistical uncertainties of the parameter values. The Monte Carlo method was used as a propagation method. The most significant uncertainties were those involved in modelling human interactions, dependences and common cause failures (CCFs), loss of coolant accident (LOCA) frequencies and pressure suppression. The qualitative mapping out of the uncertainty factors turned out to be useful in planning quantitative studies. It also served as internal review of the assumptions made in the PRA. The sensitivity studies were perhaps the most advantageous part of the quantitative study because they allowed individual analyses of the significance of uncertainty sources identified. The uncertainty study was found reasonable in systematically and critically assessing uncertainties in a risk analysis. The usefulness of this study depends on the decision maker (power company) since uncertainty studies are primarily carried out to support decision making when uncertainties are
Uncertainty and validation. Effect of model complexity on uncertainty estimates
Energy Technology Data Exchange (ETDEWEB)
Elert, M. [Kemakta Konsult AB, Stockholm (Sweden)] [ed.]
1996-09-01
In the Model Complexity subgroup of BIOMOVS II, models of varying complexity have been applied to the problem of downward transport of radionuclides in soils. A scenario describing a case of surface contamination of a pasture soil was defined. Three different radionuclides with different environmental behavior and radioactive half-lives were considered: Cs-137, Sr-90 and I-129. The intention was to give a detailed specification of the parameters required by different kinds of model, together with reasonable values for the parameter uncertainty. A total of seven modelling teams participated in the study using 13 different models. Four of the modelling groups performed uncertainty calculations using nine different modelling approaches. The models used range in complexity from analytical solutions of a 2-box model using annual average data to numerical models coupling hydrology and transport using data varying on a daily basis. The complex models needed to consider all aspects of radionuclide transport in a soil with a variable hydrology are often impractical to use in safety assessments. Instead simpler models, often box models, are preferred. The comparison of predictions made with the complex models and the simple models for this scenario shows that the predictions in many cases are very similar, e.g. in the predictions of the evolution of the root zone concentration. However, in other cases differences of many orders of magnitude can appear. One example is the prediction of the flux to the groundwater of radionuclides being transported through the soil column. Some issues have come into focus in this study: there are large differences in the predicted soil hydrology and, as a consequence, also in the radionuclide transport, which suggests that there are large uncertainties in the calculation of effective precipitation and evapotranspiration. The approach used for modelling the water transport in the root zone has an impact on the predictions of the decline in root
Effective propagation in a perturbed periodic structure
Maurel, Agnès; Pagneux, Vincent
2008-08-01
A recent paper [D. Torrent, A. Hakansson, F. Cervera, and J. Sánchez-Dehesa, Phys. Rev. Lett. 96, 204302 (2006)] inspected the effective parameters of a cluster containing an ensemble of scatterers with a periodic or a weakly disordered arrangement. A small amount of disorder is shown to have a small influence on the characteristics of acoustic wave propagation with respect to the periodic case. In this Brief Report, we inspect further the effect of a deviation of the scatterer distribution from the periodic distribution. The quasicrystalline approximation is shown to be an efficient tool to quantify this effect. An analytical formula for the effective wave number is obtained in a one-dimensional acoustic medium and is compared with the Berryman result in the low-frequency limit. Direct numerical calculations show a good agreement with the analytical predictions.
Effective propagation in a perturbed periodic structure
International Nuclear Information System (INIS)
Maurel, Agnes; Pagneux, Vincent
2008-01-01
A recent paper [D. Torrent, A. Hakansson, F. Cervera, and J. Sanchez-Dehesa, Phys. Rev. Lett. 96, 204302 (2006)] inspected the effective parameters of a cluster containing an ensemble of scatterers with a periodic or a weakly disordered arrangement. A small amount of disorder is shown to have a small influence on the characteristics of acoustic wave propagation with respect to the periodic case. In this Brief Report, we inspect further the effect of a deviation of the scatterer distribution from the periodic distribution. The quasicrystalline approximation is shown to be an efficient tool to quantify this effect. An analytical formula for the effective wave number is obtained in a one-dimensional acoustic medium and is compared with the Berryman result in the low-frequency limit. Direct numerical calculations show a good agreement with the analytical predictions.
Lilly, P.; Yanai, R. D.; Buckley, H. L.; Case, B. S.; Woollons, R. C.; Holdaway, R. J.; Johnson, J.
2016-12-01
Calculations of forest biomass and elemental content require many measurements and models, each contributing uncertainty to the final estimates. While sampling error is commonly reported, based on replicate plots, error due to uncertainty in the regression used to estimate biomass from tree diameter is usually not quantified. Some published estimates of uncertainty due to the regression models have used the uncertainty in the prediction of individuals, ignoring uncertainty in the mean, while others have propagated uncertainty in the mean while ignoring individual variation. Using the simple case of the calcium concentration of sugar maple leaves, we compare the variation among individuals (the standard deviation) to the uncertainty in the mean (the standard error) and illustrate the declining importance in the prediction of individual concentrations as the number of individuals increases. For allometric models, the analogous statistics are the prediction interval (or the residual variation in the model fit) and the confidence interval (describing the uncertainty in the best fit model). The effect of propagating these two sources of error is illustrated using the mass of sugar maple foliage. The uncertainty in individual tree predictions was large for plots with few trees; for plots with 30 trees or more, the uncertainty in individuals was less important than the uncertainty in the mean. Authors of previously published analyses have reanalyzed their data to show the magnitude of these two sources of uncertainty at scales ranging from experimental plots to entire countries. The most correct analysis will take both sources of uncertainty into account, but for practical purposes, country-level reports of uncertainty in carbon stocks, as required by the IPCC, can ignore the uncertainty in individuals. Ignoring the uncertainty in the mean will lead to exaggerated estimates of confidence in estimates of forest biomass and carbon and nutrient contents.
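The key point above, that individual variation fades as plot size grows while model (mean) uncertainty does not, follows from how the two error sources combine: independent individual deviations add in quadrature across trees, whereas the shared model error adds linearly. A sketch with invented per-tree uncertainty values (not the paper's data):

```python
import numpy as np

# Hypothetical per-tree values, for illustration only:
sigma_res = 0.8   # residual SD of an allometric fit, kg per tree (individual variation)
sigma_mean = 0.2  # SE of the fitted mean prediction, kg per tree (model uncertainty)

for n_trees in (1, 10, 30, 100):
    # Individual deviations are independent -> variances add in quadrature.
    var_individual = n_trees * sigma_res**2
    # Model (mean) error is shared by every tree -> SDs add linearly.
    var_mean = (n_trees * sigma_mean)**2
    total_sd = np.sqrt(var_individual + var_mean)
    frac_individual = var_individual / (var_individual + var_mean)
    print(f"n={n_trees:4d}  total SD={total_sd:7.2f} kg  "
          f"individual share={100 * frac_individual:5.1f}%")
```

The individual share of the total variance falls roughly as 1/n, which is why country-scale totals can safely neglect it while small plots cannot.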
Multi-scenario modelling of uncertainty in stochastic chemical systems
International Nuclear Information System (INIS)
Evans, R. David; Ricardez-Sandoval, Luis A.
2014-01-01
Uncertainty analysis has not been well studied at the molecular scale, despite extensive knowledge of uncertainty in macroscale systems. The ability to predict the effect of uncertainty allows for robust control of small-scale systems such as nanoreactors, surface reactions, and gene toggle switches. However, it is difficult to model uncertainty in such chemical systems as they are stochastic in nature and require a large computational cost. To address this issue, a new model of uncertainty propagation in stochastic chemical systems, based on the Chemical Master Equation, is proposed in the present study. The uncertain solution is approximated by a composite state comprised of the averaged effect of samples from the uncertain parameter distributions. This model is then used to study the effect of uncertainty on an isomerization system and a two-gene regulation network called a repressilator. The results of this model show that uncertainty in stochastic systems is dependent on both the uncertain distribution and the system under investigation. Highlights: • A method to model uncertainty in stochastic systems was developed. • The method is based on the Chemical Master Equation. • Uncertainty in an isomerization reaction and a gene regulation network was modelled. • Effects were significant and dependent on the uncertain input and reaction system. • The model was computationally more efficient than kinetic Monte Carlo.
Uncertainty analysis in the applications of nuclear probabilistic risk assessment
International Nuclear Information System (INIS)
Le Duy, T.D.
2011-01-01
The aim of this thesis is to propose an approach to model parameter and model uncertainties affecting the results of risk indicators used in the applications of nuclear Probabilistic Risk Assessment (PRA). After studying the limitations of the traditional probabilistic approach to representing uncertainty in a PRA model, a new approach based on the Dempster-Shafer theory has been proposed. The uncertainty analysis process of the proposed approach consists of five main steps. The first step aims to model input parameter uncertainties by belief and plausibility functions according to the data of the PRA model. The second step involves the propagation of parameter uncertainties through the risk model to obtain the uncertainties associated with the output risk indicators. The model uncertainty is then taken into account in the third step by considering possible alternative risk models. The fourth step is intended firstly to provide decision makers with the information needed for decision making under uncertainty (parametric and model) and secondly to identify the input parameters that have significant uncertainty contributions to the result. The final step allows the process to be continued in a loop by studying the updating of belief functions given new data. The proposed methodology was implemented on a real but simplified application of a PRA model. (author)
Including uncertainty in hazard analysis through fuzzy measures
International Nuclear Information System (INIS)
Bott, T.F.; Eisenhawer, S.W.
1997-12-01
This paper presents a method for capturing the uncertainty expressed by a Hazard Analysis (HA) expert team when estimating the frequencies and consequences of accident sequences, and provides a sound mathematical framework for propagating this uncertainty to the risk estimates for these accident sequences. The uncertainty is readily expressed as distributions that can visually aid the analyst in determining the extent and source of risk uncertainty in HA accident sequences. The results also can be expressed as single statistics of the distribution in a manner analogous to expressing a probabilistic distribution as a point-value statistic such as a mean or median. The study discussed here used data collected during the elicitation portion of an HA on a high-level waste transfer process to demonstrate the techniques for capturing uncertainty. These data came from observations of the uncertainty that HA team members expressed in assigning frequencies and consequences to accident sequences during an actual HA. This uncertainty was captured and manipulated using ideas from possibility theory. The result of this study is a practical method for displaying and assessing the uncertainty in the HA team estimates of the frequency and consequences for accident sequences. This uncertainty provides potentially valuable information about accident sequences that is typically lost in the HA process
Uncertainties in the simulation of groundwater recharge at different scales
Directory of Open Access Journals (Sweden)
H. Bogena
2005-01-01
Digital spatial data always imply some kind of uncertainty. The source of this uncertainty can be found in their compilation as well as in the conceptual design, which causes a more or less exact abstraction of the real world, depending on the scale under consideration. Within the framework of hydrological modelling, in which numerous data sets from diverse sources of uneven quality are combined, the various uncertainties accumulate. In this study, the GROWA model is taken as an example to examine the effects of different types of uncertainty on the calculated groundwater recharge. Distributed input errors are determined for the parameters slope and aspect using a Monte Carlo approach. Land cover classification uncertainties are analysed by using the conditional probabilities of a remote sensing classification procedure. The uncertainties of data ensembles at different scales and in different study areas are discussed. The present uncertainty analysis showed that the Gaussian error propagation method is a useful technique for analysing the influence of input data on the simulated groundwater recharge. The uncertainties involved in the land use classification procedure and the digital elevation model can be significant in some parts of the study area. However, for the specific model used in this study it was shown that the precipitation uncertainties have the greatest impact on the total groundwater recharge error.
Assessing Groundwater Model Uncertainty for the Central Nevada Test Area
International Nuclear Information System (INIS)
Pohll, Greg; Pohlmann, Karl; Hassan, Ahmed; Chapman, Jenny; Mihevc, Todd
2002-01-01
The purpose of this study is to quantify the flow and transport model uncertainty for the Central Nevada Test Area (CNTA). Six parameters were identified as uncertain, including the specified head boundary conditions used in the flow model, the spatial distribution of the underlying welded tuff unit, effective porosity, sorption coefficients, the matrix diffusion coefficient, and the geochemical release function which describes nuclear glass dissolution. The parameter uncertainty was described by assigning prior statistical distributions for each of these parameters. Standard Monte Carlo techniques were used to sample from the parameter distributions to determine the full prediction uncertainty. Additional analysis is performed to determine the most cost-beneficial characterization activities. The maximum radius of the tritium and strontium-90 contaminant boundary was used as the output metric for the evaluation of prediction uncertainty. The results indicate that combining all of the uncertainty in the parameters listed above propagates to a prediction uncertainty in the maximum radius of the contaminant boundary of 234 to 308 m and 234 to 302 m, for tritium and strontium-90, respectively. Although the uncertainty in the input parameters is large, the prediction uncertainty in the contaminant boundary is relatively small. The relatively small prediction uncertainty is primarily due to the small transport velocities, such that large changes in the uncertain input parameters cause small changes in the contaminant boundary. This suggests that the model is suitable in terms of predictive capability for the contaminant boundary delineation.
Wave propagation in elastic solids
Achenbach, Jan
1984-01-01
The propagation of mechanical disturbances in solids is of interest in many branches of the physical sciences and engineering. This book aims to present an account of the theory of wave propagation in elastic solids. The material is arranged to present an exposition of the basic concepts of mechanical wave propagation within a one-dimensional setting and a discussion of formal aspects of elastodynamic theory in three dimensions, followed by chapters expounding on typical wave propagation phenomena, such as radiation, reflection, refraction, propagation in waveguides, and diffraction. The treat
Wit, de A.J.W.; Bruin, de S.
2006-01-01
Previous analyses of the effects of uncertainty in precipitation fields on the output of EU Crop Growth Monitoring System (CGMS) demonstrated that the influence on simulated crop yield was limited at national scale, but considerable at local and regional scales. We aim to propagate uncertainty due
Uncertainty in biodiversity science, policy and management: a conceptual overview
Directory of Open Access Journals (Sweden)
Yrjö Haila
2014-10-01
The protection of biodiversity is a complex societal, political and ultimately practical imperative of current global society. The imperative builds upon scientific knowledge of human dependence on the life-support systems of the Earth. This paper aims at introducing the main types of uncertainty inherent in biodiversity science, policy and management, as an introduction to a companion paper summarizing practical experiences of scientists and scholars (Haila et al. 2014). Uncertainty is a cluster concept: the actual nature of uncertainty is inherently context-bound. We use semantic space as a conceptual device to identify key dimensions of uncertainty in the context of biodiversity protection; these relate to [i] data; [ii] proxies; [iii] concepts; [iv] policy and management; and [v] normative goals. Semantic space offers an analytic perspective for drawing critical distinctions between types of uncertainty, identifying fruitful resonances that help to cope with the uncertainties, and building up collaboration between different specialists to support mutual social learning.
A Web tool for calculating k0-NAA uncertainties
International Nuclear Information System (INIS)
Younes, N.; Robouch, P.
2003-01-01
The calculation of uncertainty budgets is becoming a standard step in reporting analytical results. This gives rise to the need for simple, easily accessed tools to calculate uncertainty budgets. An example of such a tool is the Excel spreadsheet approach of Robouch et al. An internet application which calculates uncertainty budgets for k0-NAA is presented. The Web application has built-in 'Literature' values for standard isotopes and accepts as inputs fixed information, such as the thermal to epithermal neutron flux ratio, as well as experiment-specific data, such as the mass of the sample. The application calculates and displays intermediate uncertainties as well as the final combined uncertainty of the element concentration in the sample. The interface only requires access to a standard browser and is thus easily accessible to researchers and laboratories. This may facilitate and standardize the calculation of k0-NAA uncertainty budgets. (author)
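The core arithmetic of such an uncertainty budget is simple: for uncorrelated multiplicative factors, relative standard uncertainties combine in quadrature (the GUM rule). A minimal sketch; the component names mirror a typical k0-NAA budget, but every value here is invented:

```python
import math

# Hypothetical relative standard uncertainties (%) for a k0-NAA result;
# the component names are illustrative, the values are invented.
components = {
    "sample mass": 0.1,
    "peak area": 1.2,
    "k0 factor": 0.8,
    "flux ratio f": 1.5,
    "detector efficiency": 2.0,
}

# Uncorrelated multiplicative factors: relative uncertainties add in quadrature.
combined = math.sqrt(sum(u**2 for u in components.values()))
for name, u in components.items():
    print(f"  {name:20s} {u:5.2f}%")
print(f"combined relative uncertainty: {combined:.2f}%")
```

Displaying each intermediate term alongside the combined value, as the Web tool does, makes it obvious which component dominates the budget (here, the largest single term).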
Evaluating prediction uncertainty
International Nuclear Information System (INIS)
McKay, M.D.
1995-03-01
The probability distribution of a model prediction is presented as a proper basis for evaluating the uncertainty in a model prediction that arises from uncertainty in input values. Determination of important model inputs and subsets of inputs is made through comparison of the prediction distribution with conditional prediction probability distributions. Replicated Latin hypercube sampling and variance ratios are used in estimation of the distributions and in construction of importance indicators. The assumption of a linear relation between model output and inputs is not necessary for the indicators to be effective. A sequential methodology which includes an independent validation step is applied in two analysis applications to select subsets of input variables which are the dominant causes of uncertainty in the model predictions. Comparison with results from methods which assume linearity shows how those methods may fail. Finally, suggestions for treating structural uncertainty for submodels are presented
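The ingredients McKay describes, Latin hypercube sampling plus variance-ratio importance indicators, can be sketched compactly. This is a simplified stand-in: the binned variance-ratio indicator below approximates the conditional-variance comparison rather than the paper's replicated-LHS estimator, and the three-input toy model is invented.

```python
import numpy as np

rng = np.random.default_rng(1)

def latin_hypercube(n_samples, n_dims):
    """One stratified sample per row: each column is a random permutation
    of the n strata, jittered uniformly within each stratum."""
    strata = np.stack([rng.permutation(n_samples) for _ in range(n_dims)], axis=1)
    return (rng.random((n_samples, n_dims)) + strata) / n_samples  # in [0,1)^d

# Toy model: the output is dominated by x0 (the "important" input).
model = lambda x: 10 * x[:, 0] + 1 * x[:, 1] + 0.1 * x[:, 2]

x = latin_hypercube(5000, 3)
y = model(x)

# Crude variance-ratio importance: 1 - E[Var(y | x_i in bin)] / Var(y).
def importance(xi, y, bins=20):
    idx = np.digitize(xi, np.linspace(0, 1, bins + 1)[1:-1])
    cond_var = np.mean([y[idx == b].var() for b in range(bins)])
    return 1 - cond_var / y.var()

scores = [importance(x[:, i], y) for i in range(3)]
```

Inputs whose score is near 1 explain most of the prediction variance; note that no linearity assumption is needed, which is the point McKay makes against regression-based indicators.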
International Nuclear Information System (INIS)
Limperopoulos, G.J.
1995-01-01
This report presents an oil project valuation under uncertainty by means of two well-known financial techniques: the Capital Asset Pricing Model (CAPM) and the Black-Scholes option pricing formula. CAPM gives a linear positive relationship between expected rate of return and risk, but does not take into consideration the aspect of flexibility, which is crucial for an irreversible investment such as an oil project. Introducing investment decision flexibility by using real options can increase the oil project value substantially. Some simple tests of the importance of stock market uncertainty for oil investments are performed. Uncertainty in stock returns is correlated with aggregate product market uncertainty according to Pindyck (1991). The results of the tests are not satisfactory due to the short data series, but introducing two additional explanatory variables, the interest rate and Gross Domestic Product, improves the situation. 36 refs., 18 figs., 6 tabs
On the cosmological propagation of high energy particles in magnetic fields
International Nuclear Information System (INIS)
Alves Batista, Rafael
2015-04-01
In the present work the connection between high energy particles and cosmic magnetic fields is explored. In particular, the focus lies on the propagation of ultra-high energy cosmic rays (UHECRs) and very-high energy gamma rays (VHEGRs) over cosmological distances, under the influence of cosmic magnetic fields. The first part of this work concerns the propagation of UHECRs in the magnetized cosmic web, which was studied both analytically and numerically. A parametrization for the suppression of the UHECR flux at energies ∼ 10¹⁸ eV due to diffusion in extragalactic magnetic fields was found, making it possible to set an upper limit on the energy at which this magnetic horizon effect sets in, which is
International Nuclear Information System (INIS)
Bosko, A.; Croft, St.; Gulbransen, E.
2009-01-01
General purpose gamma scanners are often used to assay unknown drums that differ from those used to create the default calibration. This introduces a potential source of bias into the matrix correction when the correction is based on the estimation of the mean density of the drum contents from a weigh scale measurement. In this paper we evaluate the magnitude of this bias that may be introduced by performing assay measurements with a system whose matrix correction algorithm was calibrated with a set of standard drums but applied to a population of drums whose tare weight may be different. The matrix correction factors are perturbed in such cases because the unknown difference in tare weight gets reflected as a bias in the derived matrix density. This would be the only impact if the difference in tare weight was due solely to the weight of the lid or base, say. But in reality the reason for the difference may be because the steel wall of the drum is of a different thickness. Thus, there is an opposing interplay at work which tends to compensate. The purpose of this work is to evaluate and bound the magnitude of the resulting assay uncertainty introduced by tare weight variation. We compare the results obtained using simple analytical models and the 3-D ray tracing with ISOCS software to illustrate and quantify the problem. The numerical results allow a contribution to the Total Measurement Uncertainty (TMU) to be propagated into the final assay result. (authors)
Uncertainty information in climate data records from Earth observation
Merchant, Christopher J.; Paul, Frank; Popp, Thomas; Ablain, Michael; Bontemps, Sophie; Defourny, Pierre; Hollmann, Rainer; Lavergne, Thomas; Laeng, Alexandra; de Leeuw, Gerrit; Mittaz, Jonathan; Poulsen, Caroline; Povey, Adam C.; Reuter, Max; Sathyendranath, Shubha; Sandven, Stein; Sofieva, Viktoria F.; Wagner, Wolfgang
2017-07-01
error distribution, and the propagation of the uncertainty to the geophysical variable in the CDR accounting for its error correlation properties. Uncertainty estimates can and should be validated as part of CDR validation when possible. These principles are quite general, but the approach to providing uncertainty information appropriate to different ECVs is varied, as confirmed by a brief review across different ECVs in the CCI. User requirements for uncertainty information can conflict with each other, and a variety of solutions and compromises are possible. The concept of an ensemble CDR as a simple means of communicating rigorous uncertainty information to users is discussed. Our review concludes by providing eight concrete recommendations for good practice in providing and communicating uncertainty in EO-based climate data records.
Streamer propagation velocity to anode and to cathode in He, Xe, N2 and SF6
International Nuclear Information System (INIS)
Yakovlenko, S.I.
2004-01-01
The mechanism of ionization propagation in a dense gas, associated with the propagation of background electrons in a non-uniform electric field, is studied. This mechanism does not depend on the sign of the field projection on the direction of ionization propagation. An analytical expression for the ionization front velocity is derived, which conforms well with numerical calculations. The dependence of the ionization wave front velocity on the field intensity at the streamer boundary is tabulated for He, Xe, N₂ and SF₆
Uncertainties and climatic change
International Nuclear Information System (INIS)
De Gier, A.M.; Opschoor, J.B.; Van de Donk, W.B.H.J.; Hooimeijer, P.; Jepma, J.; Lelieveld, J.; Oerlemans, J.; Petersen, A.
2008-01-01
Which processes in the climate system are misunderstood? How are scientists dealing with uncertainty about climate change? What will be done with the conclusions of the recently published synthesis report of the IPCC? These and other questions were answered during the meeting 'Uncertainties and climate change' that was held on Monday 26 November 2007 at the KNAW in Amsterdam. This report is a compilation of all the presentations and provides some conclusions resulting from the discussions during this meeting.
Lemaire, Maurice
2014-01-01
Science is a quest for certainty, but lack of certainty is the driving force behind all of its endeavors. This book, specifically, examines the uncertainty of technological and industrial science. Uncertainty and Mechanics studies the concepts of mechanical design in an uncertain setting and explains engineering techniques for inventing cost-effective products. Though it references practical applications, this is a book about ideas and potential advances in mechanical science.
Uncertainty: lotteries and risk
Ávalos, Eloy
2011-01-01
In this paper we develop the theory of uncertainty in a context where the risks assumed by the individual are measurable and manageable. We primarily use the definition of a lottery to formulate the axioms of the individual's preferences, and their representation through the von Neumann-Morgenstern utility function. We study the expected utility theorem and its properties, the paradoxes of choice under uncertainty and, finally, measures of risk aversion for monetary lotteries.
Uncertainty calculations made easier
International Nuclear Information System (INIS)
Hogenbirk, A.
1994-07-01
The results are presented of a neutron cross section sensitivity/uncertainty analysis performed in a complicated 2D model of the NET shielding blanket design inside the ITER torus design, surrounded by the cryostat/biological shield as planned for ITER. The calculations were performed with a code system developed at ECN Petten, with which sensitivity/uncertainty calculations become relatively simple. In order to check the deterministic neutron transport calculations (performed with DORT), calculations were also performed with the Monte Carlo code MCNP. Care was taken to model the 2.0 cm wide gaps between two blanket segments, as the neutron flux behind the vacuum vessel is largely determined by neutrons streaming through these gaps. The resulting neutron flux spectra are in excellent agreement up to the end of the cryostat. It is noted that at this position the attenuation of the neutron flux is about 11 orders of magnitude. The uncertainty in the energy integrated flux at the beginning of the vacuum vessel and at the beginning of the cryostat was determined in the calculations. The uncertainty appears to be strongly dependent on the exact geometry: if the gaps are filled with stainless steel, the neutron spectrum changes strongly, which results in an uncertainty of 70% in the energy integrated flux at the beginning of the cryostat in the no-gap geometry, compared to an uncertainty of only 5% in the gap geometry. Therefore, it is essential to take the exact geometry into account in sensitivity/uncertainty calculations. Furthermore, this study shows that an improvement of the covariance data is urgently needed in order to obtain reliable estimates of the uncertainties in response parameters in neutron transport calculations. (orig./GL)
Temporal scaling in information propagation
Huang, Junming; Li, Chao; Wang, Wen-Qiang; Shen, Hua-Wei; Li, Guojie; Cheng, Xue-Qi
2014-06-01
For the study of information propagation, one fundamental problem is uncovering universal laws governing the dynamics of information propagation. This problem, from the microscopic perspective, is formulated as estimating the propagation probability that a piece of information propagates from one individual to another. Such a propagation probability generally depends on two major classes of factors: the intrinsic attractiveness of information and the interactions between individuals. Despite the fact that the temporal effect of attractiveness is widely studied, temporal laws underlying individual interactions remain unclear, causing inaccurate prediction of information propagation on evolving social networks. In this report, we empirically study the dynamics of information propagation, using the dataset from a population-scale social media website. We discover a temporal scaling in information propagation: the probability a message propagates between two individuals decays with the length of time latency since their latest interaction, obeying a power-law rule. Leveraging the scaling law, we further propose a temporal model to estimate future propagation probabilities between individuals, reducing the error rate of information propagation prediction from 6.7% to 2.6% and improving viral marketing with 9.7% incremental customers.
Examples of measurement uncertainty evaluations in accordance with the revised GUM
Runje, B.; Horvatic, A.; Alar, V.; Medic, S.; Bosnjakovic, A.
2016-11-01
The paper presents examples of the evaluation of uncertainty components in accordance with the current and revised Guide to the Expression of Uncertainty in Measurement (GUM). In accordance with the proposed revision of the GUM, a Bayesian approach was adopted for both Type A and Type B evaluations. The law of propagation of uncertainty (LPU) and the law of propagation of distributions, applied through the Monte Carlo method (MCM), were used to evaluate the associated standard uncertainties, expanded uncertainties and coverage intervals. Furthermore, the influence of a non-Gaussian dominant input quantity and of an asymmetric distribution of the output quantity y on the evaluation of measurement uncertainty was analyzed. In the case where the coverage interval is not probabilistically symmetric, the coverage interval for probability P is estimated from the experimental probability density function using the Monte Carlo method. Key highlights of the proposed revision of the GUM were analyzed through a set of examples.
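A minimal sketch of the two propagation routes compared in this abstract, for an assumed additive model Y = X1 + X2 with a dominant uniform (Type B) input and a Gaussian (Type A) input; the model and all numbers are illustrative, not the paper's examples:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200_000

# Illustrative measurement model: Y = X1 + X2.
# X1: dominant Type B input, uniform with half-width a -> u = a / sqrt(3).
# X2: Gaussian Type A repeatability contribution.
a = 1.0
u1 = a / np.sqrt(3.0)
u2 = 0.1

# Law of propagation of uncertainty (both sensitivity coefficients are 1).
u_lpu = np.sqrt(u1**2 + u2**2)

# Law of propagation of distributions via the Monte Carlo method (MCM).
y = rng.uniform(-a, a, N) + rng.normal(0.0, u2, N)
u_mcm = y.std(ddof=1)

# 95 % coverage interval taken from the empirical distribution of Y.
lo, hi = np.quantile(y, [0.025, 0.975])
```

With a dominant non-Gaussian input, the standard uncertainties agree but the MCM coverage interval is noticeably shorter than the ±1.96u interval a Gaussian assumption would give, which is the kind of discrepancy the paper's examples highlight.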
Charged particle beam propagation studies at the Naval Research Laboratory
International Nuclear Information System (INIS)
Meger, R.A.; Hubbard, R.F.; Antoniades, J.A.; Fernsler, R.F.; Lampe, M.; Murphy, D.P.; Myers, M.C.; Pechacek, R.E.; Peyser, T.A.; Santos, J.; Slinker, S.P.
1993-01-01
The Plasma Physics Division of the Naval Research Laboratory has been performing research into the propagation of high current electron beams for 20 years. Recent efforts have focused on the stabilization of the resistive hose instability. Experiments have utilized the SuperIBEX e-beam generator (5-MeV, 100-kA, 40-ns pulse) and a 2-m diameter, 5-m long propagation chamber. Full density air propagation experiments have successfully demonstrated techniques to control the hose instability allowing stable 5-m transport of 1-2 cm radius, 10-20 kA total current beams. Analytic theory and particle simulations have been used to both guide and interpret the experimental results. This paper will provide background on the program and summarize the achievements of the NRL propagation program up to this point. Further details can be found in other papers presented in this conference
Uncertainties assessment for safety margins evaluation in MTR reactors core thermal-hydraulic design
International Nuclear Information System (INIS)
Gimenez, M.; Schlamp, M.; Vertullo, A.
2002-01-01
This report contains a bibliographic review and a critical analysis of different methodologies used for uncertainty evaluation of safety-related parameters in research reactor cores. The different parameters for which uncertainties are considered are also presented and discussed, as well as their intrinsic nature with regard to the way their uncertainties must be combined. Finally, a combined statistical method with direct propagation of uncertainties is proposed, together with a set of basic parameters, such as wall and DNB temperatures, CHF, PRD and their respective ratios, for which uncertainties should be considered. (author)
Plurality of Type A evaluations of uncertainty
Possolo, Antonio; Pintar, Adam L.
2017-10-01
The evaluations of measurement uncertainty involving the application of statistical methods to measurement data (Type A evaluations as specified in the Guide to the Expression of Uncertainty in Measurement, GUM) comprise the following three main steps: (i) developing a statistical model that captures the pattern of dispersion or variability in the experimental data, and that relates the data either to the measurand directly or to some intermediate quantity (input quantity) that the measurand depends on; (ii) selecting a procedure for data reduction that is consistent with this model and that is fit for the purpose that the results are intended to serve; (iii) producing estimates of the model parameters, or predictions based on the fitted model, and evaluations of uncertainty that qualify either those estimates or these predictions, and that are suitable for use in subsequent uncertainty propagation exercises. We illustrate these steps in uncertainty evaluations related to the measurement of the mass fraction of vanadium in a bituminous coal reference material, including the assessment of the homogeneity of the material, and to the calibration and measurement of the amount-of-substance fraction of a hydrochlorofluorocarbon in air, and of the age of a meteorite. Our goal is to expose the plurality of choices that can reasonably be made when taking each of the three steps outlined above, and to show that different choices typically lead to different estimates of the quantities of interest, and to different evaluations of the associated uncertainty. In all the examples, the several alternatives considered represent choices that comparably competent statisticians might make, but who differ in the assumptions that they are prepared to rely on, and in their selection of approach to statistical inference. They represent also alternative treatments that the same statistician might give to the same data when the results are intended for different purposes.
Uncertainty in Seismic Capacity of Masonry Buildings
Directory of Open Access Journals (Sweden)
Nicola Augenti
2012-07-01
Seismic assessment of masonry structures is plagued by both inherent randomness and model uncertainty. The former is referred to as aleatory uncertainty, the latter as epistemic uncertainty because it depends on the knowledge level. Pioneering studies on reinforced concrete buildings have revealed a significant influence of modeling parameters on seismic vulnerability. However, confidence in mechanical properties of existing masonry buildings is much lower than in the case of reinforcing steel and concrete. This paper is aimed at assessing whether and how uncertainty propagates from material properties to seismic capacity of an entire masonry structure. A typical two-story unreinforced masonry building is analyzed. Based on previous statistical characterization of mechanical properties of existing masonry types, the following random variables have been considered in this study: unit weight, uniaxial compressive strength, shear strength at zero confining stress, Young’s modulus, shear modulus, and available ductility in shear. Probability density functions were implemented to generate a significant number of realizations and static pushover analysis of the case-study building was performed for each vector of realizations, load combination and lateral load pattern. Analysis results show a large dispersion in displacement capacity and lower dispersion in spectral acceleration capacity. This can directly affect decision-making because both design and retrofit solutions depend on seismic capacity predictions. Therefore, engineering judgment should always be used when assessing structural safety of existing masonry constructions against design earthquakes, based on a series of seismic analyses under uncertain parameters.
Propagation speed of gamma radiation in brass
International Nuclear Information System (INIS)
Cavalcante, Jose T.P.D.; Silva, Paulo R.J.; Saitovich, Henrique
2009-01-01
The propagation speed (PS) of visible light, which represents a short frequency range in the large frame of electromagnetic radiation (ER) frequencies, was measured in air during the last century using a great many different methods, with high-precision results being achieved. Presently, a well-accepted value with very small uncertainty is c = 299,792.458 km/s (c referring to the Latin word celeritas, 'swiftness, speed'). When propagating in denser material media (MM), the value is always lower than the air value, with the density of the propagating MM playing an important role. Until now, such studies of propagation speeds, refractive indexes and dispersion were mainly related to visible light, or to ER in wavelength ranges close to it, and to transparent MM. A first incursion into this subject dealing with γ-rays was performed using an electronic coincidence counting system, when the value of the PS in air was measured as c_γ(air) = 298,300.15 km/s; the method was continued with later electronic improvements, always in air. To perform such measurements, the availability of a γ-radiation source in which two γ-rays are emitted simultaneously in opposite directions, as used and applied in the present case, is essential to the feasibility of the experiment, since no reflection techniques can be used. Such a suitable source is the positron emitter ²²Na, placed in a thin-walled metal container in which the positrons are stopped and annihilated on reacting with the electrons of the medium, thereby originating, as is very well established from momentum/energy conservation laws, two gamma-rays of energy 511 keV each, both emitted simultaneously in opposite directions. All the previous experiments used photomultiplier detectors coupled to NaI(Tl) crystal scintillators, which have good energy resolution but deficient time resolution for such purposes. Presently, as an innovative improvement, were used BaF₂
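The coincidence measurement principle is simple to state in code: with the two annihilation photons emitted simultaneously in opposite directions, the speed follows from the path-length difference between the two detectors and the measured arrival-time difference. The numbers below are illustrative, not the experiment's data:

```python
def gamma_speed(d1_m, d2_m, dt_s):
    """Propagation speed from the arrival-time difference of two
    back-to-back 511 keV photons detected at distances d1 and d2 (m)
    from the source; dt_s is the measured time difference (s)."""
    return (d2_m - d1_m) / dt_s

# Illustrative geometry: 1 m path difference, ~3.34 ns measured delay.
c_est = gamma_speed(d1_m=0.5, d2_m=1.5, dt_s=3.3356e-9)  # m/s
```

The experimental difficulty lies entirely in resolving dt_s at the sub-nanosecond level, which is why detector time resolution dominates the design.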
Propagation error simulations concerning the CLIC active prealignment
Touzé, T; Missiaen, D
2009-01-01
The CLIC components will have to be prealigned within a tolerance thirty times more demanding than for the existing CERN machines. It is a technical challenge and a key issue for the CLIC feasibility. Simulations have been undertaken concerning the error propagation due to the measurement uncertainties of the prealignment systems. The uncertainties of measurement, taken as hypotheses for the simulations, are based on data obtained at several dedicated facilities. This paper introduces the simulations and the latest results obtained, as well as the facilities.
OpenTURNS, an open source uncertainty engineering software
International Nuclear Information System (INIS)
Popelin, A.L.; Dufoy, A.
2013-01-01
The need to assess robust performances for complex systems has led to the emergence of a new industrial simulation challenge: taking uncertainties into account when dealing with complex numerical simulation frameworks. EDF has taken part in the development of an open source software platform dedicated to uncertainty propagation by probabilistic methods, named OpenTURNS, for Open source Treatment of Uncertainty, Risk and Statistics. OpenTURNS includes a large variety of qualified algorithms for managing uncertainties in industrial studies, from the uncertainty quantification step (with the possibility to model stochastic dependence thanks to copula theory and stochastic processes), to the uncertainty propagation step (with innovative simulation algorithms such as the ziggurat method for normal variables) and the sensitivity analysis step (with sensitivity indices based on the evaluation of means conditioned on the realization of a particular event). It also enables the construction of response surfaces that can include stochastic modeling (with the polynomial chaos method, for example). Generic wrappers to link OpenTURNS to modeling software are proposed. Finally, OpenTURNS is extensively documented to provide rules that help with its use and with contributions
Application of high-order uncertainty for severe accident management
International Nuclear Information System (INIS)
Yu, Donghan; Ha, Jaejoo
1998-01-01
The use of probability distributions to represent uncertainty about point-valued probabilities has been a controversial subject. Probability theorists have argued that it is inherently meaningless to be uncertain about a probability, since this appears to violate the subjectivists' assumption that an individual can develop unique and precise probability judgments. However, many others have found the concept of uncertainty about a probability to be both intuitively appealing and potentially useful. In particular, high-order uncertainty, i.e. uncertainty about the probability, can be relevant to decision-making when expert judgment is needed under very uncertain data and imprecise knowledge, and where the phenomena and events are complicated and ill-defined. This paper presents two approaches for evaluating the uncertainties inherent in accident management strategies: a fuzzy probability and an interval-valued subjective probability. First, the analysis considers accident management as a decision problem (i.e., 'applying a strategy' vs. 'doing nothing') and uses an influence diagram. The analysis then applies the two approaches to evaluate imprecise node probabilities in the influence diagram. For the propagation of subjective probabilities, the analysis uses Monte Carlo simulation; in the case of fuzzy probabilities, fuzzy logic is applied to propagate them. We believe that these approaches can help in understanding the uncertainties associated with severe accident management strategies, since they offer not only information similar to the classical approach using point-estimate values but also additional information regarding the impact of imprecise input data
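A minimal sketch of the interval-valued variant: for independent events with interval probabilities, bounds propagate through conjunctive and disjunctive nodes by interval arithmetic. The gate functions and the numbers are illustrative assumptions, not the paper's influence-diagram model:

```python
def and_gate(intervals):
    """Interval probability that all independent events occur:
    the product is monotone, so bounds multiply endpoint-wise."""
    lo = hi = 1.0
    for a, b in intervals:
        lo *= a
        hi *= b
    return lo, hi

def or_gate(intervals):
    """Interval probability that at least one independent event occurs:
    P = 1 - prod(1 - p), also monotone in each p."""
    comp_lo = 1.0  # product of (1 - upper bounds)
    comp_hi = 1.0  # product of (1 - lower bounds)
    for a, b in intervals:
        comp_lo *= 1.0 - b
        comp_hi *= 1.0 - a
    return 1.0 - comp_hi, 1.0 - comp_lo

# Two hypothetical node probabilities known only to within intervals.
events = [(0.1, 0.3), (0.2, 0.4)]
p_and = and_gate(events)
p_or = or_gate(events)
```

The output interval width then directly expresses how much of the high-order uncertainty in the inputs survives into the decision quantity.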
Statistically based uncertainty assessments in nuclear risk analysis
International Nuclear Information System (INIS)
Spencer, F.W.; Diegert, K.V.; Easterling, R.G.
1987-01-01
Over the last decade, the problems of estimation and uncertainty assessment in probabilistic risk assessments (PRAs) have been addressed in a variety of NRC and industry-sponsored projects. These problems have received attention because of a recognition that major uncertainties in risk estimation exist, which can be reduced by collecting more and better data and other information, and because of a recognition that better methods for assessing these uncertainties are needed. In particular, a clear understanding of the nature and magnitude of the various sources of uncertainty is needed to facilitate decision-making on possible plant changes and research options. Recent PRAs have employed methods of probability propagation, sometimes involving the use of Bayes' theorem, intended to formalize the use of 'engineering judgment' or 'expert opinion'. All sources, or feelings, of uncertainty are expressed probabilistically, so that uncertainty analysis becomes simply a matter of probability propagation. Alternatives to forcing a probabilistic framework at all stages of a PRA are a major concern in this paper, however
Bolt beam propagation analysis
Shokair, I. R.
BOLT (Beam on Laser Technology) is a rocket experiment to demonstrate electron beam propagation on a laser-ionized plasma channel across the geomagnetic field in the ion focused regime (IFR). The beam parameters for BOLT are: beam current I_b = 100 A, beam energy of 1-1.5 MeV (γ = 3-4), and a Gaussian beam and channel of radii r_b = r_c = 1.5 cm. The N+1 ionization scheme is used to ionize atomic oxygen in the upper atmosphere. This scheme utilizes 130 nm light plus three IR lasers to excite and then ionize atomic oxygen. The limiting factor for the channel strength is the energy of the 130 nm laser, which is assumed to be 1.6 mJ for BOLT. At a fixed laser energy and altitude (fixing the density of atomic oxygen), the range can be varied by adjusting the laser tuning, resulting in a neutralization fraction axial profile of the form f(z) = f_0 e^(-z/R), where R is the range. In this paper we consider the propagation of the BOLT beam and calculate the range of the electron beam, taking into account the fact that the erosion rates (magnetic and inductive) vary with beam length as the beam and channel dynamically respond to sausage and hose instabilities.
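Under the exponential neutralization profile quoted above, the distance over which ion-focused transport can be sustained follows directly by inverting f(z); the threshold fraction and the numbers below are illustrative, not the BOLT design values:

```python
from math import log, exp

def neutralization(z, f0, R):
    """Channel neutralization fraction profile f(z) = f0 * exp(-z / R)."""
    return f0 * exp(-z / R)

def propagation_range(f0, f_min, R):
    """Distance at which f(z) falls to the minimum fraction assumed
    necessary for IFR transport: solve f0 * exp(-z/R) = f_min for z."""
    return R * log(f0 / f_min)

# Hypothetical numbers: initial fraction 0.5, threshold 0.1, R = 10 (same
# length unit as the returned distance).
z_max = propagation_range(f0=0.5, f_min=0.1, R=10.0)
```

In the paper's fuller treatment this geometric limit is further reduced by magnetic and inductive erosion, which shorten the usable beam length.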
How uncertainty in socio-economic variables affects large-scale transport model forecasts
DEFF Research Database (Denmark)
Manzo, Stefano; Nielsen, Otto Anker; Prato, Carlo Giacomo
2015-01-01
A strategic task assigned to large-scale transport models is to forecast the demand for transport over long periods of time to assess transport projects. However, by modelling complex systems, transport models have an inherent uncertainty which increases over time. As a consequence, the longer the period forecasted, the less reliable is the forecasted model output. Describing uncertainty propagation patterns over time is therefore important in order to provide complete information to the decision makers. Among the existing literature, only few studies analyze uncertainty propagation patterns over...
Wave propagation in spatially modulated tubes
Energy Technology Data Exchange (ETDEWEB)
Ziepke, A., E-mail: ziepke@itp.tu-berlin.de; Martens, S.; Engel, H. [Institut für Theoretische Physik, Hardenbergstraße 36, EW 7-1, Technische Universität Berlin, 10623 Berlin (Germany)
2016-09-07
We investigate wave propagation in rotationally symmetric tubes with a periodic spatial modulation of cross section. Using an asymptotic perturbation analysis, the governing quasi-two-dimensional reaction-diffusion equation can be reduced to a one-dimensional reaction-diffusion-advection equation. Assuming a weak perturbation by the advection term and using a projection method, in a second step, an equation of motion for traveling waves within such tubes can be derived. Both methods properly predict the nonlinear dependence of the propagation velocity on the ratio of the modulation period of the geometry to the intrinsic width of the front, or pulse. As a main feature, we observe finite intervals of propagation failure of waves induced by the tube’s modulation and derive an analytically tractable condition for their occurrence. For the highly diffusive limit, using the Fick-Jacobs approach, we show that wave velocities within modulated tubes are governed by an effective diffusion coefficient. Furthermore, we discuss the effects of a single bottleneck on the period of pulse trains. We observe period changes by integer fractions dependent on the bottleneck width and the period of the entering pulse train.
Key uncertainties in climate change policy: Results from ICAM-2
Energy Technology Data Exchange (ETDEWEB)
Dowlatabadi, H.; Kandlikar, M.
1995-12-31
A critical aspect of climate change decision-making is the uncertainty in current understanding of the socioeconomic, climatic and biogeochemical processes involved. Decision-making processes are much better informed if these uncertainties are characterized and their implications understood. Quantitative analyses of these uncertainties serve to inform decision makers about the likely outcome of policy initiatives, and to help set priorities for research so that outcome ambiguities faced by the decision-makers are reduced. A family of integrated assessment models of climate change has been developed at Carnegie Mellon. These models are distinguished from other integrated assessment efforts in that they were designed from the outset to characterize and propagate parameter, model, value, and decision-rule uncertainties. The most recent of these models is ICAM 2.0. This model includes demographics, economic activities, emissions, atmospheric chemistry, climate change, sea level rise and other impact modules and the numerous associated feedbacks. The model has over 700 objects, of which over 1/3 are uncertain. These have been grouped into seven different classes of uncertain items. The impact of uncertainties in each of these items can be considered individually or in combination with the others. In this paper we demonstrate the relative contribution of various sources of uncertainty to different outcomes in the model. The analysis shows that climatic uncertainties are most important, followed by uncertainties in damage calculations, economic uncertainties and direct aerosol forcing uncertainties. Extreme uncertainties in indirect aerosol forcing and behavioral response to climate change (adaptation) were characterized by using bounding analyses; the results suggest that these extreme uncertainties can dominate the choice of policy outcomes.
Uncertainty Categorization, Modeling, and Management for Regional Water Supply Planning
Fletcher, S.; Strzepek, K. M.; AlSaati, A.; Alhassan, A.
2016-12-01
decision making under uncertainty methods from the state of the art. We will compare the efficiency of alternative approaches to the two case studies. Finally, we will present a hybrid decision analytic tool to address the synthesis of uncertainties.
Uncertainty in reactive transport geochemical modelling
International Nuclear Information System (INIS)
Oedegaard-Jensen, A.; Ekberg, C.
2005-01-01
Full text of publication follows: Geochemical modelling is one way of predicting the transport of e.g. radionuclides in a rock formation. In a rock formation there will be fractures in which water and dissolved species can be transported. The composition of the water and the rock can either increase or decrease the mobility of the transported entities. When doing simulations on the mobility or transport of different species, one has to know the exact water composition, the exact flow rates in the fracture and in the surrounding rock, the porosity and which minerals the rock is composed of. The problem with simulations on rocks is that the rock itself is not uniform, i.e. it has larger fractures in some areas and smaller ones in others, which can give different water flows. The rock composition can also differ from area to area. In addition to this variance in the rock, there are also problems with measuring the physical parameters used in a simulation. All measurements will perturb the rock, and this perturbation will result in more or less correct values of the parameters of interest. The analytical methods used are also encumbered with uncertainties, which in this case are added to the uncertainty from the perturbation of the analysed parameters. When doing simulations, the effect of the uncertainties must be taken into account. As computers are getting faster and faster, the complexity of simulated systems is increased, which also increases the uncertainty in the results of the simulations. In this paper we show how the uncertainties in the different parameters affect the solubility and mobility of different species. Small uncertainties in the input parameters can result in large uncertainties in the end. (authors)
Debry, E.; Malherbe, L.; Schillinger, C.; Bessagnet, B.; Rouil, L.
2009-04-01
Evaluation of human exposure to atmospheric pollution usually requires knowledge of pollutant concentrations in ambient air. In the framework of the PAISA project, which studies the influence of socio-economic status on relationships between air pollution and short-term health effects, the concentrations of gas and particle pollutants are computed over Strasbourg with the ADMS-Urban model. As for any modeling result, simulated concentrations come with uncertainties which have to be characterized and quantified. There are several sources of uncertainty: those related to input data and parameters, i.e. fields used to execute the model such as meteorological fields, boundary conditions and emissions; those related to the model formulation, because of incomplete or inaccurate treatment of dynamical and chemical processes; and those inherent to the stochastic behavior of the atmosphere and of human activities [1]. Our aim here is to assess the uncertainties of the simulated concentrations with respect to input data and model parameters. To this end, the first step consisted in identifying the input data and model parameters that contribute most to the space and time variability of predicted concentrations. Concentrations of several pollutants were simulated for two months in winter 2004 and two months in summer 2004 over five areas of Strasbourg. The sensitivity analysis shows the dominating influence of boundary conditions and emissions. Among model parameters, the roughness and Monin-Obukhov lengths appear to have non-negligible local effects. Dry deposition is also an important dynamic process. The second step of the characterization and quantification of uncertainties consists in attributing a probability distribution to each input datum and model parameter and in propagating the joint distribution of all data and parameters through the model, so as to associate a probability distribution with the modeled concentrations. Several analytical and numerical methods exist to perform an
Uncertainty Analysis of the NASA Glenn 8x6 Supersonic Wind Tunnel
Stephens, Julia; Hubbard, Erin; Walter, Joel; McElroy, Tyler
2016-01-01
This paper presents methods and results of a detailed measurement uncertainty analysis that was performed for the 8- by 6-foot Supersonic Wind Tunnel located at the NASA Glenn Research Center. The statistical methods and engineering judgments used to estimate elemental uncertainties are described. The Monte Carlo method of propagating uncertainty was selected to determine the uncertainty of calculated variables of interest. A detailed description of the Monte Carlo method as applied in this analysis is provided. Detailed results for the uncertainty in average free-stream Mach number as well as in other variables of interest are provided. All results are presented as random (variation in observed values about a true value), systematic (potential offset between observed and true value), and total (random and systematic combined) uncertainty. The largest sources contributing to uncertainty are determined and potential improvement opportunities for the facility are investigated.
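The random/systematic/total decomposition described above is conventionally combined in root-sum-square fashion, with a coverage factor applied for an expanded interval. A minimal sketch with hypothetical standard uncertainties in Mach number (illustrative numbers, not facility values):

```python
import math

# Hypothetical elemental standard uncertainties in free-stream Mach number:
random_u = 0.004      # random component (scatter of observed values about the mean)
systematic_u = 0.006  # systematic component (potential offset from the true value)

# Total combined standard uncertainty, root-sum-square of the two components:
total_u = math.sqrt(random_u**2 + systematic_u**2)

# Expanded uncertainty at roughly 95% coverage (coverage factor k = 2):
expanded_u = 2.0 * total_u
print(f"total={total_u:.4f}, expanded={expanded_u:.4f}")
```

In a full Monte Carlo treatment, each elemental uncertainty would instead be sampled and propagated through the data-reduction equations, but the combined result is reported in the same random/systematic/total form.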
Causality and analyticity in optics
International Nuclear Information System (INIS)
Nussenzveig, H.M.
In order to provide an overall picture of the broad range of optical phenomena that are directly linked with the concepts of causality and analyticity, the following topics are briefly reviewed, emphasizing recent developments: 1) Derivation of dispersion relations for the optical constants of general linear media from causality. Application to the theory of natural optical activity. 2) Derivation of sum rules for the optical constants from causality and from the short-time response function (asymptotic high-frequency behavior). Average spectral behavior of optical media. Applications. 3) Role of spectral conditions. Analytic properties of coherence functions in quantum optics. Reconstruction theorem. 4) Phase retrieval problems. 5) Inverse scattering problems. 6) Solution of nonlinear evolution equations in optics by inverse scattering methods. Application to self-induced transparency. Causality in nonlinear wave propagation. 7) Analytic continuation in frequency and angular momentum. Complex singularities. Resonances and natural-mode expansions. Regge poles. 8) Wigner's causal inequality. Time delay. Spatial displacements in total reflection. 9) Analyticity in diffraction theory. Complex angular momentum theory of Mie scattering. Diffraction as a barrier tunnelling effect. Complex trajectories in optics. (Author)
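The dispersion relations of item 1 are, in their standard form, the Kramers-Kronig relations. For a causal linear susceptibility $\chi(\omega) = \chi'(\omega) + i\chi''(\omega)$ that is analytic in the upper half-plane and vanishes at high frequency, they read (textbook form, not specific to this review):

```latex
\chi'(\omega) = \frac{1}{\pi}\,\mathcal{P}\!\int_{-\infty}^{\infty}
  \frac{\chi''(\omega')}{\omega' - \omega}\,d\omega',
\qquad
\chi''(\omega) = -\frac{1}{\pi}\,\mathcal{P}\!\int_{-\infty}^{\infty}
  \frac{\chi'(\omega')}{\omega' - \omega}\,d\omega',
```

where $\mathcal{P}$ denotes the Cauchy principal value. Causality of the time-domain response function is what forces the analyticity that makes this Hilbert-transform pair hold.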
Uncertainty and validation. Effect of user interpretation on uncertainty estimates
International Nuclear Information System (INIS)
Kirchner, G.; Peterson, R.
1996-11-01
Uncertainty in predictions of environmental transfer models arises from, among other sources, the adequacy of the conceptual model, the approximations made in coding the conceptual model, the quality of the input data, the uncertainty in parameter values, and the assumptions made by the user. In recent years efforts to quantify the confidence that can be placed in predictions have been increasing, but have concentrated on a statistical propagation of the influence of parameter uncertainties on the calculational results. The primary objective of this Working Group of BIOMOVS II was to test the user's influence on model predictions on a more systematic basis than has been done before. The main goals were as follows: to compare the differences between predictions from different people, all using the same model and the same scenario description, with the statistical uncertainties calculated by the model; to investigate the main reasons for different interpretations by users; and to create a better awareness of the potential influence of the user on the modeling results. Terrestrial food chain models driven by deposition of radionuclides from the atmosphere were used. Three codes were obtained and run with three scenarios by a maximum of 10 users. A number of conclusions can be drawn, some of which are general and independent of the type of models and processes studied, while others are restricted to the few processes that were addressed directly: for any set of predictions, the variation in best estimates was greater than one order of magnitude. Often the range increased from deposition to pasture to milk, probably due to additional transfer processes. The 95% confidence intervals about the predictions calculated from the parameter distributions prepared by the participants did not always overlap the observations; similarly, sometimes the confidence intervals on the predictions did not overlap. Often the 95% confidence intervals of individual predictions were smaller than the
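The statistical propagation referred to above, deriving 95% confidence intervals on predictions from user-supplied parameter distributions, can be sketched for a simple deposition-to-pasture-to-milk chain. All distributions and transfer values below are hypothetical placeholders, not BIOMOVS II parameters; the sketch also illustrates why the spread tends to grow along the chain as more uncertain transfer factors multiply:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 20_000

deposition = 1000.0  # Bq/m^2; fixed scenario input

# Hypothetical lognormal parameter distributions (illustrative values only):
interception = rng.lognormal(np.log(0.25), 0.4, n)   # fraction of deposit retained on pasture
feed_to_milk = rng.lognormal(np.log(0.005), 0.5, n)  # feed-to-milk transfer coefficient, d/L
intake = rng.lognormal(np.log(50.0), 0.2, n)         # cow feed intake, kg/d

pasture_yield = 0.3  # kg/m^2, assumed fixed

pasture = deposition * interception / pasture_yield  # Bq/kg on pasture
milk = pasture * intake * feed_to_milk               # Bq/L in milk

# 95% confidence intervals on the two predictions:
p_lo, p_hi = np.percentile(pasture, [2.5, 97.5])
m_lo, m_hi = np.percentile(milk, [2.5, 97.5])
print(f"pasture CI spread: {p_hi / p_lo:.1f}x, milk CI spread: {m_hi / m_lo:.1f}x")
```

Because milk concentration multiplies three independent uncertain factors while pasture concentration involves only one, the relative width of the milk confidence interval exceeds that of the pasture interval, mirroring the range increase along the chain reported above.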
One Approach to the Fire PSA Uncertainty Analysis
International Nuclear Information System (INIS)
Simic, Z.; Mikulicic, V.; Vukovic, I.
2002-01-01
Practical experience and findings from a number of fire probabilistic safety assessment (PSA) studies show that fire has high relative importance for nuclear power plant safety. Fire PSA is very challenging, and a number of issues are still in the area of research and development. This has a major impact on the conservatism of fire PSA findings. One way to reduce the level of conservatism is to conduct uncertainty analysis. At the top level, the uncertainty of a fire PSA can be separated into three segments. The first segment is related to fire initiating event frequencies. The second segment concerns the uncertainty of fire damage. Finally, there is uncertainty related to the PSA model, which propagates this fire-initiated damage to core damage or other analyzed risks. This paper discusses all three segments of uncertainty. Some recent experience with fire PSA study uncertainty analysis, the use of the fire analysis code COMPBRN IIIe, and the importance of uncertainty evaluation to the final result is presented. (author)
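The three-segment decomposition described above can be propagated by Monte Carlo sampling: draw one value from each segment's distribution, multiply, and repeat. A minimal sketch with entirely hypothetical distributions (not values from any plant study):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# Segment 1: fire initiating event frequency, per year (hypothetical lognormal):
fire_freq = rng.lognormal(np.log(1e-2), 0.8, n)

# Segment 2: conditional probability that the fire damages target equipment
# (hypothetical beta distribution):
p_damage = rng.beta(2, 8, n)

# Segment 3: conditional core damage probability given that damage,
# from the PSA model (hypothetical lognormal):
ccdp = rng.lognormal(np.log(1e-3), 0.6, n)

# Propagated fire-induced core damage frequency, per year:
cdf = fire_freq * p_damage * ccdp

median = float(np.median(cdf))
p95 = float(np.percentile(cdf, 95))
print(f"median={median:.2e}/yr, 95th percentile={p95:.2e}/yr")
```

The resulting empirical distribution of `cdf` carries all three uncertainty segments at once, so the gap between the median and the 95th percentile indicates how much conservatism a point estimate at the upper bound would embed.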