DEFF Research Database (Denmark)
Hong, Jinglan; Shaked, Shanna; Rosenbaum, Ralph K.
2010-01-01
heavy to conduct, especially for the comparison of multiple scenarios, often limiting its use to research or to inventory only. Furthermore, Monte Carlo simulations do not automatically assess the sensitivity and contribution to overall uncertainty of individual parameters. The present paper aims...... to develop and apply to both inventory and impact assessment an explicit and transparent analytical approach to uncertainty. This approach applies Taylor series expansions to the uncertainty propagation of lognormally distributed parameters. Materials and methods We first apply the Taylor series expansion...... this approach to the comparison of two or more LCA scenarios. Since in LCA it is crucial to account for both common inventory processes and common impact assessment characterization factors among the different scenarios, we further develop the approach to address this dependency. We provide a method to easily...
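For a purely multiplicative model with independent lognormal inputs, the Taylor-series propagation sketched in the abstract above reduces to adding variances in log-space, which also yields each parameter's contribution to the total uncertainty analytically. A minimal sketch under that multiplicative-model assumption (function names and GSD values are illustrative, not from the paper):

```python
import math

def propagate_lognormal_product(gsds):
    """Output GSD for y = x1 * x2 * ... * xn with independent lognormal
    inputs: the log-variances ln(GSD_i)^2 simply add."""
    log_var = sum(math.log(g) ** 2 for g in gsds)
    return math.exp(math.sqrt(log_var))

def variance_contributions(gsds):
    """Share of each input in the total log-variance: the per-parameter
    contribution to overall uncertainty that plain Monte Carlo sampling
    does not provide automatically."""
    parts = [math.log(g) ** 2 for g in gsds]
    total = sum(parts)
    return [p / total for p in parts]

# Three inventory parameters with geometric standard deviations (GSDs)
gsd_out = propagate_lognormal_product([1.1, 1.3, 2.0])
shares = variance_contributions([1.1, 1.3, 2.0])
```

The dominant contributor (here the GSD = 2.0 parameter) is identified without any sampling, which is the transparency argument made above.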
Sciacchitano, A.; Wieneke, Bernhard
2016-01-01
This paper discusses the propagation of the instantaneous uncertainty of PIV measurements to statistical and instantaneous quantities of interest derived from the velocity field. The expression of the uncertainty of vorticity, velocity divergence, mean value and Reynolds stresses is derived. It
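For the mean value, for instance, propagating independent instantaneous uncertainties u_i reduces to a root-sum-square divided by the number of samples. A sketch under that independence assumption (the function name and numbers are mine, not from the paper):

```python
import math

def mean_velocity_uncertainty(inst_uncertainties):
    """Uncertainty of the sample mean from instantaneous PIV uncertainties,
    assuming statistically independent samples:
    U_mean = sqrt(sum(u_i^2)) / N."""
    n = len(inst_uncertainties)
    return math.sqrt(sum(u * u for u in inst_uncertainties)) / n

# 100 uncorrelated samples, each with 0.1 m/s instantaneous uncertainty
u_mean = mean_velocity_uncertainty([0.1] * 100)  # -> 0.01 m/s
```

Correlated samples, and spatially derived quantities such as vorticity or divergence, additionally require the covariance terms derived in the paper.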
Analytical Propagation of Uncertainty in Life Cycle Assessment Using Matrix Formulation
DEFF Research Database (Denmark)
Imbeault-Tétreault, Hugues; Jolliet, Olivier; Deschênes, Louise
2013-01-01
of the output uncertainty. Moreover, the sensitivity analysis reveals that the uncertainty of the most sensitive input parameters was not initially considered in the case study. The uncertainty analysis of the comparison of two scenarios is a useful means of highlighting the effects of correlation...
Propagation of dynamic measurement uncertainty
Hessling, J. P.
2011-10-01
The time-dependent measurement uncertainty has been evaluated in a number of recent publications, starting from a known uncertain dynamic model. This could be defined as the 'downward' propagation of uncertainty from the model to the targeted measurement. The propagation of uncertainty 'upward' from the calibration experiment to a dynamic model traditionally belongs to system identification. The use of different representations (time, frequency, etc) is ubiquitous in dynamic measurement analyses. An expression of uncertainty in dynamic measurements is formulated for the first time in this paper independent of representation, joining upward as well as downward propagation. For applications in metrology, the high quality of the characterization may be prohibitive for any reasonably large and robust model to pass the whiteness test. This test is therefore relaxed by not directly requiring small systematic model errors in comparison to the randomness of the characterization. Instead, the systematic error of the dynamic model is propagated to the uncertainty of the measurand, analogously but differently to how stochastic contributions are propagated. The pass criterion of the model is thereby transferred from the identification to acceptance of the total accumulated uncertainty of the measurand. This increases the relevance of the test of the model as it relates to its final use rather than the quality of the calibration. The propagation of uncertainty hence includes the propagation of systematic model errors. For illustration, the 'upward' propagation of uncertainty is applied to determine if an appliance box is damaged in an earthquake experiment. In this case, relaxation of the whiteness test was required to reach a conclusive result.
Simplified propagation of standard uncertainties
International Nuclear Information System (INIS)
Shull, A.H.
1997-01-01
An essential part of any measurement control program is adequate knowledge of the uncertainties of the measurement system standards. Only with an estimate of the standards' uncertainties can one determine whether a standard is adequate for its intended use, or calculate the total uncertainty of the measurement process. Purchased standards usually have estimates of uncertainty on their certificates. However, when standards are prepared and characterized by a laboratory, variance propagation is required to estimate the uncertainty of the standard. Traditional variance propagation typically involves tedious use of partial derivatives, unfriendly software and the availability of statistical expertise. As a result, the uncertainty of prepared standards is often not determined, or determined incorrectly. For situations meeting stated assumptions, easier shortcut methods of estimation are now available which eliminate the need for partial derivatives and require only a spreadsheet or calculator. A system of simplifying the calculations by dividing them into subgroups of absolute and relative uncertainties is utilized. These methods also incorporate the International Organization for Standardization (ISO) concepts for combining systematic and random uncertainties as published in its Guide to the Expression of Uncertainty in Measurement. Details of the simplified methods and examples of their use are included in the paper.
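The shortcut alluded to above is the usual quadrature rule: for sums and differences the absolute standard uncertainties combine, and for products and quotients the relative ones do. A spreadsheet-style sketch (the numerical example is mine, not from the paper):

```python
import math

def combine_absolute(abs_uncs):
    """Sums/differences: absolute standard uncertainties add in quadrature."""
    return math.sqrt(sum(a * a for a in abs_uncs))

def combine_relative(rel_uncs):
    """Products/quotients: relative standard uncertainties add in quadrature."""
    return math.sqrt(sum(r * r for r in rel_uncs))

# A prepared standard: 0.1 % relative uncertainty from weighing,
# 0.2 % relative uncertainty from dilution
rel_total = combine_relative([0.001, 0.002])
```

Grouping the budget into an "absolute" subgroup and a "relative" subgroup, as described above, keeps each combination step inside one of these two one-line rules.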
Uncertainty Propagation in an Ecosystem Nutrient Budget.
New aspects and advancements in classical uncertainty propagation methods were used to develop a nutrient budget with associated error for a northern Gulf of Mexico coastal embayment. Uncertainty was calculated for budget terms by propagating the standard error and degrees of fr...
Stochastic and epistemic uncertainty propagation in LCA
DEFF Research Database (Denmark)
Clavreul, Julie; Guyonnet, Dominique; Tonini, Davide
2013-01-01
When performing uncertainty propagation, most LCA practitioners choose to represent uncertainties by single probability distributions and to propagate them using stochastic methods. However, the selection of single probability distributions appears often arbitrary when faced with scarce information...... manner and apply it to LCA. A case study is used to show the uncertainty propagation performed with the proposed method and compare it to propagation performed using probability and possibility theories alone.Basic knowledge on the probability theory is first recalled, followed by a detailed description...
Uncertainty and its propagation in dynamics models
International Nuclear Information System (INIS)
Devooght, J.
1994-01-01
The purpose of this paper is to bring together some characteristics of uncertainty that arise when dealing with dynamic models, and therefore with the propagation of uncertainty. The respective roles of uncertainty and inaccuracy are examined. A mathematical formalism based on the Chapman-Kolmogorov equation allows one to define a 'subdynamics' in which the evolution equation takes the uncertainty into account. The problem of choosing or combining models is examined through a loss function associated with a decision
Representation and propagation of uncertainty in seismic fragilities
International Nuclear Information System (INIS)
Phillips, D.W.
1983-01-01
Probabilistic seismic risk assessment involves the estimation of site seismic hazard for very low annual exceedance frequencies, and plant failure probabilities for beyond design basis seismic loading. Both of these estimates naturally involve uncertainties, and the way in which the uncertainties are represented can affect significantly the overall assessed seismic risk. To date, the usual representation of uncertainty in seismic fragility has been the log-normal distribution, although other analytic representations are equally consistent with the available seismic fragility information in many instances. The influence of such alternative forms of uncertainty representation is examined and, in addition, the compounding of these influences by propagation of the uncertainties through event trees or fault trees is discussed in the context of general methods of propagation
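The log-normal fragility representation mentioned above has a simple closed form: the conditional failure probability at load level a is Phi(ln(a / A_m) / beta), with median capacity A_m and logarithmic standard deviation beta. A sketch (parameter values are illustrative, not from the paper):

```python
import math

def lognormal_fragility(a, median_capacity, beta):
    """P(failure | load a) = Phi(ln(a / A_m) / beta), the standard
    log-normal seismic fragility curve."""
    z = math.log(a / median_capacity) / beta
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# By construction the failure probability is 0.5 at the median capacity
p_median = lognormal_fragility(0.5, 0.5, 0.4)  # load in g, say
```

The alternative analytic forms discussed in the paper replace the normal CDF with other distributions that fit the same fragility information, which is what makes the choice of representation consequential once propagated through event or fault trees.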
Uncertainty propagation within the UNEDF models
Haverinen, T.; Kortelainen, M.
2017-04-01
The parameters of the nuclear energy density functional have to be adjusted to experimental data. As a result they carry a certain uncertainty, which then propagates to calculated values of observables. In the present work we quantify the statistical uncertainties of binding energies, proton quadrupole moments and proton matter radius for three UNEDF Skyrme energy density functionals by taking advantage of the knowledge of the model parameter uncertainties. We find that the uncertainty of the UNEDF models increases rapidly when going towards proton- or neutron-rich nuclei. We also investigate the impact of each model parameter on the total error budget.
Quantification and propagation of disciplinary uncertainty via Bayesian statistics
Mantis, George Constantine
2002-08-01
Several needs exist in the military, commercial, and civil sectors for new hypersonic systems. These needs remain unfulfilled, due in part to the uncertainty encountered in designing these systems. This uncertainty takes a number of forms, including disciplinary uncertainty, that which is inherent in the analytical tools utilized during the design process. Yet, few efforts to date empower the designer with the means to account for this uncertainty within the disciplinary analyses. In the current state-of-the-art in design, the effects of this unquantifiable uncertainty significantly increase the risks associated with new design efforts. Typically, the risk proves too great to allow a given design to proceed beyond the conceptual stage. To that end, the research encompasses the formulation and validation of a new design method, a systematic process for probabilistically assessing the impact of disciplinary uncertainty. The method implements Bayesian Statistics theory to quantify this source of uncertainty, and propagate its effects to the vehicle system level. Comparison of analytical and physical data for existing systems, modeled a priori in the given analysis tools, leads to quantification of uncertainty in those tools' calculation of discipline-level metrics. Then, after exploration of the new vehicle's design space, the quantified uncertainty is propagated probabilistically through the design space. This ultimately results in the assessment of the impact of disciplinary uncertainty on the confidence in the design solution: the final shape and variability of the probability functions defining the vehicle's system-level metrics. Although motivated by the hypersonic regime, the proposed treatment of uncertainty applies to any class of aerospace vehicle, just as the problem itself affects the design process of any vehicle. A number of computer programs comprise the environment constructed for the implementation of this work. Application to a single
Towards a complete propagation of uncertainties in depletion calculations
Energy Technology Data Exchange (ETDEWEB)
Martinez, J.S. [Universidad Politecnica de Madrid (Spain). Dept. of Nuclear Engineering; Gesellschaft fuer Anlagen- und Reaktorsicherheit mbH (GRS), Garching (Germany); Zwermann, W.; Gallner, L.; Puente-Espel, Federico; Velkov, K.; Hannstein, V. [Gesellschaft fuer Anlagen- und Reaktorsicherheit mbH (GRS), Garching (Germany); Cabellos, O. [Universidad Politecnica de Madrid (Spain). Dept. of Nuclear Engineering
2013-07-01
Propagation of nuclear data uncertainties to calculated values is interesting for design purposes and library evaluation. XSUSA, developed at GRS, propagates cross section uncertainties to nuclear calculations. In depletion simulations, fission yields and decay data are also involved and are a possible source of uncertainty that should be taken into account. We have developed tools to generate varied fission yield and decay libraries and to propagate uncertainties through depletion in order to complete the XSUSA uncertainty assessment capabilities. A generic test to probe the methodology is defined and discussed. (orig.)
Measuring Analytical Quality: Total Analytical Error Versus Measurement Uncertainty.
Westgard, James O; Westgard, Sten A
2017-03-01
To characterize analytical quality of a laboratory test, common practice is to estimate Total Analytical Error (TAE) which includes both imprecision and trueness (bias). The metrologic approach is to determine Measurement Uncertainty (MU), which assumes bias can be eliminated, corrected, or ignored. Resolving the differences in these concepts and approaches is currently a global issue.
Quantifying uncertainty in nuclear analytical measurements
International Nuclear Information System (INIS)
2004-07-01
The lack of international consensus on the expression of uncertainty in measurements was recognised by the late 1970s and led, after the issuance of a series of rather generic recommendations, to the publication of a general publication, known as GUM, the Guide to the Expression of Uncertainty in Measurement. This publication, issued in 1993, was based on co-operation over several years by the Bureau International des Poids et Mesures, the International Electrotechnical Commission, the International Federation of Clinical Chemistry, the International Organization for Standardization (ISO), the International Union of Pure and Applied Chemistry, the International Union of Pure and Applied Physics and the Organisation internationale de metrologie legale. The purpose was to promote full information on how uncertainty statements are arrived at and to provide a basis for harmonized reporting and the international comparison of measurement results. The need to provide more specific guidance to different measurement disciplines was soon recognized and the field of analytical chemistry was addressed by EURACHEM in 1995 in the first edition of a guidance report on Quantifying Uncertainty in Analytical Measurements, produced by a group of experts from the field. That publication translated the general concepts of the GUM into specific applications for analytical laboratories and illustrated the principles with a series of selected examples as a didactic tool. Based on feedback from the actual practice, the EURACHEM publication was extensively reviewed in 1997-1999 under the auspices of the Co-operation on International Traceability in Analytical Chemistry (CITAC), and a second edition was published in 2000. Still, except for a single example on the measurement of radioactivity in GUM, the field of nuclear and radiochemical measurements was not covered. The explicit requirement of ISO standard 17025:1999, General Requirements for the Competence of Testing and Calibration
Uncertainties in Atomic Data and Their Propagation Through Spectral Models. I.
Bautista, M. A.; Fivet, V.; Quinet, P.; Dunn, J.; Gull, T. R.; Kallman, T. R.; Mendoza, C.
2013-01-01
We present a method for computing uncertainties in spectral models, i.e., level populations, line emissivities, and emission line ratios, based upon the propagation of uncertainties originating from atomic data. We provide analytic expressions, in the form of linear sets of algebraic equations, for the coupled uncertainties among all levels. These equations can be solved efficiently for any set of physical conditions and uncertainties in the atomic data. We illustrate our method applied to spectral models of O III and Fe II and discuss the impact of the uncertainties on atomic systems under different physical conditions. As to intrinsic uncertainties in theoretical atomic data, we propose that these uncertainties can be estimated from the dispersion in the results from various independent calculations. This technique provides excellent results for the uncertainties in A-values of forbidden transitions in [Fe II]. Key words: atomic data - atomic processes - line: formation - methods: data analysis - molecular data - molecular processes - techniques: spectroscopic
Manufacturing Data Uncertainties Propagation Method in Burn-Up Problems
Directory of Open Access Journals (Sweden)
Thomas Frosio
2017-01-01
A nuclear data-based uncertainty propagation methodology is extended to enable propagation of manufacturing/technological data (TD) uncertainties in a burn-up calculation problem, taking into account correlation terms between Boltzmann and Bateman terms. The methodology is applied to reactivity and power distributions in a Material Testing Reactor benchmark. Due to the inherent statistical behavior of manufacturing tolerances, a Monte Carlo sampling method is used for determining output perturbations on integral quantities. A global sensitivity analysis (GSA) is performed for each manufacturing parameter and allows the influential parameters whose tolerances need to be better controlled to be identified and ranked. We show that the overall impact of some TD uncertainties, such as uranium enrichment or fuel plate thickness, on the reactivity is negligible because the different core areas induce compensating effects on the global quantity. However, local quantities, such as power distributions, are strongly impacted by TD uncertainty propagation. For isotopic concentrations, no clear trends appear in the results.
Calibration and Propagation of Uncertainty for Independence
Energy Technology Data Exchange (ETDEWEB)
Holland, Troy Michael [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Kress, Joel David [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Bhat, Kabekode Ghanasham [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-06-30
This document reports on progress and methods for the calibration and uncertainty quantification of the Independence model developed at UT Austin. The Independence model is an advanced thermodynamic and process model framework for piperazine solutions as a high-performance CO2 capture solvent. Progress is presented in the framework of the CCSI standard basic data model inference framework. Recent work has largely focused on the thermodynamic submodels of Independence.
New challenges on uncertainty propagation assessment of flood risk analysis
Martins, Luciano; Aroca-Jiménez, Estefanía; Bodoque, José M.; Díez-Herrero, Andrés
2016-04-01
Natural hazards such as floods cause considerable damage to human life and to material and functional assets every year, around the world. Risk assessment procedures carry a set of uncertainties of mainly two types: natural, derived from the stochastic character inherent in flood process dynamics; and epistemic, associated with a lack of knowledge or with inadequate procedures employed in the study of these processes. There is abundant scientific and technical literature on uncertainty estimation in each step of flood risk analysis (e.g. rainfall estimates, hydraulic modelling variables), but very little experience with the propagation of uncertainties along the full flood risk assessment. Epistemic uncertainties are therefore the main goal of this work: in particular, understanding the extent to which uncertainties propagate throughout the process, from inundation studies to risk analysis, and how strongly a proper flood risk analysis varies as a result. Methodologies such as Polynomial Chaos Theory (PCT), the Method of Moments or Monte Carlo are used to evaluate different sources of error, such as data records (precipitation gauges, flow gauges...), hydrologic and hydraulic modelling (inundation estimation) and socio-demographic data (damage estimation), in order to evaluate the uncertainty propagation (UP) considered in design flood risk estimation, in both numerical and cartographic expression. In order to account for the total uncertainty and to understand which factors contribute most to the final uncertainty, we used Polynomial Chaos Theory (PCT). It represents an interesting way to handle the inclusion of uncertainty in the modelling and simulation process: PCT allows the development of a probabilistic model of the system in a deterministic setting, by using random variables and polynomials to handle the effects of uncertainty. Results of applying the method are more robust than those of a traditional analysis.
Quantile arithmetic methodology for uncertainty propagation in fault trees
International Nuclear Information System (INIS)
Abdelhai, M.; Ragheb, M.
1986-01-01
A methodology based on quantile arithmetic, the probabilistic analog of interval analysis, is proposed for the computation of uncertainty propagation in fault tree analysis. The basic events' continuous probability density functions (pdf's) are represented by equivalent discrete distributions by dividing them into a number of quantiles N. Quantile arithmetic is then used to perform the binary arithmetical operations corresponding to the logical gates in the Boolean expression of the top event of a given fault tree. The computational advantage of the present methodology as compared with the widely used Monte Carlo method was demonstrated for the case of summation of M normal variables through the efficiency ratio, defined as the product of the labor and error ratios. The efficiency ratio values obtained by the suggested methodology for M = 2 were 2279 for N = 5, 445 for N = 25, and 66 for N = 45 when compared with the results for 19,200 Monte Carlo samples at the 40th percentile point. Another advantage of the approach is that the exact analytical value of the median is always obtained for the top event
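The core idea, discretizing each pdf into N equally probable quantile points and re-quantizing after each binary operation, can be sketched as follows (the helper names, and the use of addition as the example operation, are mine; in the paper the fault-tree gate operations play this role):

```python
def quantile_points(samples, n):
    """Represent an empirical distribution by n equally probable points
    (midpoint quantiles of the sorted data)."""
    s = sorted(samples)
    return [s[int((i + 0.5) * len(s) / n)] for i in range(n)]

def quantile_combine(xq, yq, n, op=lambda a, b: a + b):
    """One binary arithmetic operation under quantile arithmetic: combine
    all N*N equally weighted pairs, then re-quantize back to n points."""
    combined = sorted(op(x, y) for x in xq for y in yq)
    return [combined[int((i + 0.5) * len(combined) / n)] for i in range(n)]

xq = quantile_points(list(range(100)), 5)  # -> [10, 30, 50, 70, 90]
zq = quantile_combine(xq, xq, 5)           # 5-point summary of X + X'
```

Because the discrete points carry equal probability, the middle quantile of the result is an exact median under this scheme, which mirrors the median property claimed for the top event.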
Stochastic Systems Uncertainty Quantification and Propagation
Grigoriu, Mircea
2012-01-01
Uncertainty is an inherent feature of both the properties of physical systems and the inputs to these systems, and it needs to be quantified for cost-effective and reliable designs. The states of these systems satisfy equations with random entries, referred to as stochastic equations, so that they are random functions of time and/or space. The solution of stochastic equations poses notable technical difficulties that are frequently circumvented by heuristic assumptions at the expense of accuracy and rigor. The main objective of Stochastic Systems is to promote the development of accurate and efficient methods for solving stochastic equations and to foster interactions between engineers, scientists, and mathematicians. To achieve these objectives Stochastic Systems presents: · A clear and brief review of essential concepts on probability theory, random functions, stochastic calculus, Monte Carlo simulation, and functional analysis · Probabilistic models for random variables an...
Uncertainty and Cooperation: Analytical Results and a Simulated Agent Society
Peter Andras; John Lazarus; Gilbert Roberts; Steven J Lynden
2005-01-01
Uncertainty is an important factor that influences social evolution in natural and artificial environments. Here we distinguish between three aspects of uncertainty. Environmental uncertainty is the variance of resources in the environment, perceived uncertainty is the variance of the resource distribution as perceived by the organism and effective uncertainty is the variance of resources effectively enjoyed by individuals. We show analytically that perceived uncertainty is larger than enviro...
Orbit uncertainty propagation and sensitivity analysis with separated representations
Balducci, Marc; Jones, Brandon; Doostan, Alireza
2017-09-01
Most approximations for stochastic differential equations with high-dimensional, non-Gaussian inputs suffer from a rapid (e.g., exponential) increase of computational cost, an issue known as the curse of dimensionality. In astrodynamics, this results in reduced accuracy when propagating an orbit-state probability density function. This paper considers the application of separated representations for orbit uncertainty propagation, where future states are expanded into a sum of products of univariate functions of initial states and other uncertain parameters. Accurate generation of a separated representation requires a number of state samples that is linear in the dimension of input uncertainties. The computational cost of a separated representation scales linearly with respect to the sample count, thereby improving tractability when compared to methods that suffer from the curse of dimensionality. In addition to detailed discussions on their construction and use in sensitivity analysis, this paper presents results for three test cases of an Earth orbiting satellite. The first two cases demonstrate that approximation via separated representations produces a tractable solution for propagating the Cartesian orbit-state uncertainty with up to 20 uncertain inputs. The third case, which instead uses Equinoctial elements, reexamines a scenario presented in the literature and employs the proposed method for sensitivity analysis to more thoroughly characterize the relative effects of uncertain inputs on the propagated state.
Martinez, J. S.; Zwermann, W.; Gallner, L.; Puente-Espel, F.; Cabellos, O.; Velkov, K.; Hannstein, V.
2014-04-01
Propagation of nuclear data uncertainties in reactor calculations is interesting for design purposes and library evaluation. Previous versions of the GRS XSUSA library propagated only neutron cross section uncertainties. We have extended the XSUSA uncertainty assessment capabilities by including propagation of fission yield and decay data uncertainties, due to their relevance in depletion simulations. We apply this extended methodology to the UAM6 PWR Pin-Cell Burnup Benchmark, which involves uncertainty propagation through burnup.
Uncertainty propagation through dynamic models of assemblies of mechanical structures
International Nuclear Information System (INIS)
Daouk, Sami
2016-01-01
When studying the behaviour of mechanical systems, mathematical models and structural parameters are usually considered deterministic. Experience shows, however, that these elements are uncertain in most cases, due to natural variability or lack of knowledge. Therefore, quantifying the quality and reliability of the numerical model of an industrial assembly remains a major question in low-frequency dynamics. The purpose of this thesis is to improve the vibratory design of bolted assemblies by setting up a dynamic connector model that takes different types and sources of uncertainty on stiffness parameters into account, in a way that is simple, efficient and exploitable in an industrial context. This work has been carried out in the framework of the SICODYN project, led by EDF R and D, which aims to characterise and quantify, numerically and experimentally, the uncertainties in the dynamic behaviour of bolted industrial assemblies. Comparative studies of several numerical methods of uncertainty propagation demonstrate the advantage of using the Lack-Of-Knowledge theory. An experimental characterisation of uncertainties in bolted structures is performed on a dynamic test rig and on an industrial assembly. The propagation of many small and large uncertainties through different dynamic models of mechanical assemblies leads to an assessment of the efficiency of the Lack-Of-Knowledge theory and its applicability in an industrial environment. (author)
Second-Order Analytical Uncertainty Analysis in Life Cycle Assessment.
von Pfingsten, Sarah; Broll, David Oliver; von der Assen, Niklas; Bardow, André
2017-11-21
Life cycle assessment (LCA) results are inevitably subject to uncertainties. Since the complete elimination of uncertainties is impossible, LCA results should be complemented by an uncertainty analysis. However, the approaches currently used for uncertainty analysis have some shortcomings: statistical uncertainty analysis via Monte Carlo simulation is inherently uncertain due to its statistical nature and can become computationally inefficient for large systems; analytical approaches use a linear approximation to the uncertainty by a first-order Taylor series expansion and are thus only precise for small input uncertainties. In this article, we refine the analytical uncertainty analysis by a more precise, second-order Taylor series expansion. The presented approach considers uncertainties from process data, allocation, and characterization factors. We illustrate the refined approach for hydrogen production from methane cracking. The production system contains a recycling loop leading to nonlinearities. By varying the strength of the loop, we analyze the precision of the first- and second-order analytical uncertainty approaches by comparing analytical variances to variances from statistical Monte Carlo simulations. For the case without loops, the second-order approach is practically exact. In all cases, the second-order Taylor series approach is more precise than the first-order approach, in particular for large uncertainties and for production systems with nonlinearities, for example, from loops. For analytical uncertainty analysis, we recommend using the second-order approach since it is more precise and still computationally cheap.
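For a single input x ~ N(mu, sigma^2), the first- and second-order Taylor approximations of the output variance differ by a sigma^4 term in the second derivative. The sketch below illustrates why the second order becomes exact for a quadratic model (the model y = x^2 and its derivative values are my illustrative choice, not the article's case study):

```python
def taylor_variance(f1, f2, sigma):
    """Variance of y = f(x) for x ~ N(mu, sigma^2), by Taylor expansion at mu:
    first order:  Var ~= (f')^2 sigma^2
    second order: Var ~= (f')^2 sigma^2 + 0.5 (f'')^2 sigma^4
    f1, f2 are the first and second derivatives of f at mu."""
    first_order = f1 ** 2 * sigma ** 2
    second_order = first_order + 0.5 * f2 ** 2 * sigma ** 4
    return first_order, second_order

# y = x^2 at mu = 1 (f' = 2, f'' = 2); the exact variance is
# 4 mu^2 sigma^2 + 2 sigma^4, which the second-order term recovers
v1, v2 = taylor_variance(2.0, 2.0, 0.5)  # -> (1.0, 1.125)
```

The gap between the two orders grows with sigma, matching the article's finding that the second-order refinement matters most for large input uncertainties and nonlinear systems.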
Uncertainty propagation for statistical impact prediction of space debris
Hoogendoorn, R.; Mooij, E.; Geul, J.
2018-01-01
Predictions of the impact time and location of space debris in a decaying trajectory are highly influenced by uncertainties. The traditional Monte Carlo (MC) method can be used to perform accurate statistical impact predictions, but requires a large computational effort. A method is investigated that directly propagates a Probability Density Function (PDF) in time, which has the potential to obtain more accurate results with less computational effort. The decaying trajectory of Delta-K rocket stages was used to test the methods using a six degrees-of-freedom state model. The PDF of the state of the body was propagated in time to obtain impact-time distributions. This Direct PDF Propagation (DPP) method results in a multi-dimensional scattered dataset of the PDF of the state, which is highly challenging to process. No accurate results could be obtained, because of the structure of the DPP data and the high dimensionality. Therefore, the DPP method is less suitable for practical uncontrolled entry problems and the traditional MC method remains superior. Additionally, the MC method was used with two improved uncertainty models to obtain impact-time distributions, which were validated using observations of true impacts. For one of the two uncertainty models, statistically more valid impact-time distributions were obtained than in previous research.
Comparison between two modern uncertainty expression and propagation approaches
International Nuclear Information System (INIS)
Pertile, M; Debei, S
2010-01-01
Two different approaches to uncertainty expression and propagation are presented and compared: an implementation of the well-known probabilistic approach, and a new Random-Fuzzy Variable (RFV) method based on the theory of evidence. Both approaches use an explicit time correlation of input quantities to take systematic contributions into account. Numerical results show that both the type of uncertainty contribution (random or systematic) and the available level of knowledge (a probability density function, PDF, or simply a limited interval) must be carefully evaluated in uncertainty analysis. The new RFV approach deals seamlessly with both PDFs and limited intervals, an advantage not shared by the probabilistic approach, which yields questionable results in situations of complete ignorance.
HPC Analytics Support. Requirements for Uncertainty Quantification Benchmarks
Energy Technology Data Exchange (ETDEWEB)
Paulson, Patrick R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Purohit, Sumit [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Rodriguez, Luke R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)
2015-05-01
This report outlines techniques for extending benchmark generation products so that they support uncertainty quantification by benchmarked systems. We describe how uncertainty quantification requirements can be presented to candidate analytical tools supporting SPARQL, and we describe benchmark data sets for evaluating uncertainty quantification, together with an approach for producing such data sets with our benchmark generator.
DEM Uncertainty propagation in second derivatives geomorphometrical maps
Cosmin Sandric, Ionut; Ursaru, Petre
2013-04-01
In order to model the uncertainty in a DEM, a dedicated model was created and implemented as a Python script in ArcGIS Desktop using the ArcPy SDK provided by ESRI. The model is based on Monte Carlo simulation for generating noise and on Map Algebra for adding the noise to the DEM, and it can be used as an independent script or combined with any other model. The inputs are a DEM and an estimate of the DEM accuracy, expressed as the mean and standard deviation of the errors. The mean and standard deviation may be obtained from a cross-validation/validation operation if the DEM is produced with geostatistics, or from a simple validation against ground control points otherwise. The DEM uncertainty propagation model assumes that the errors, and thus the noise, are normally distributed. The current version of the model requires the Spatial Analyst extension, but future versions may run with or without it. The main issue with adding noise to DEMs to compensate for uncertainty is that second derivatives become almost impossible to extract; this drawback was overcome by using an interpolated noisy surface in the uncertainty propagation model. For each Monte Carlo realization, the following statistics are computed on the resulting raster and saved in ESRI GRID format: mean, minimum, maximum, range and standard deviation. When the model finishes, the specialist has a picture of the uncertainties that the DEM might contain and, at the same time, a collection of DEMs that can be used to generate first- and second-order derivatives.
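The core Monte Carlo step can be sketched without ArcGIS, using NumPy arrays in place of rasters. The synthetic DEM and the error statistics below are assumptions for illustration, not values from the described model:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy 50x50 DEM (a smooth synthetic surface); in the described model this
# would be a real raster handled through ArcPy, which is not assumed here.
x, y = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
dem = 100 + 20 * np.sin(3 * x) * np.cos(2 * y)

err_mean, err_std = 0.0, 1.5    # e.g. from cross-validation (assumed values)
n_real = 100                    # number of Monte Carlo realizations

# Add normally distributed noise to the DEM in each realization and
# accumulate per-cell statistics over the realizations.
stack = np.empty((n_real,) + dem.shape)
for i in range(n_real):
    stack[i] = dem + rng.normal(err_mean, err_std, dem.shape)

cell_mean = stack.mean(axis=0)
cell_std = stack.std(axis=0)
cell_range = stack.max(axis=0) - stack.min(axis=0)
```

The per-cell standard deviation recovers the imposed error magnitude, and the stack of realizations plays the role of the DEM collection from which derivatives would be computed.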
Uncertainty propagation in a multiscale model of nanocrystalline plasticity
International Nuclear Information System (INIS)
Koslowski, M.; Strachan, Alejandro
2011-01-01
We characterize how uncertainties propagate across spatial and temporal scales in a physics-based model of nanocrystalline plasticity of fcc metals. Our model combines molecular dynamics (MD) simulations, which characterize the atomic-level processes governing dislocation-based plastic deformation, with a phase field approach to dislocation dynamics (PFDD) that describes how an ensemble of dislocations evolves and interacts to determine the mechanical response of the material. We apply this approach to a nanocrystalline Ni specimen of interest in micro-electromechanical systems (MEMS) switches. Our approach enables us to quantify how internal stresses resulting from the fabrication process affect the properties of dislocations (using MD) and how these properties, in turn, affect the yield stress of the metallic membrane (using the PFDD model). Our predictions show that, for a nanocrystalline sample with small grain size (4 nm), a variation in residual stress of 20 MPa (typical of today's microfabrication techniques) would result in a variation in the critical resolved shear yield stress of approximately 15 MPa, a very small fraction of the nominal value of approximately 9 GPa. - Highlights: → Quantify how fabrication uncertainties affect yield stress in a microswitch component. → Propagate uncertainties in a multiscale model of single crystal plasticity. → Molecular dynamics quantifies how fabrication variations affect dislocations. → Dislocation dynamics relate variations in dislocation properties to yield stress.
Uncertainty propagation in probabilistic safety analysis of nuclear power plants
International Nuclear Information System (INIS)
Fleming, P.V.
1981-09-01
Uncertainty propagation in the probabilistic safety analysis of nuclear power plants is performed. The minimal cut set methodology is implemented in the computer code SVALON, and the results for several cases are compared with corresponding results obtained with the SAMPLE code, which employs the Monte Carlo method to propagate the uncertainties. The results show that, for a relatively small number of dominant minimal cut sets (n ≈ 25) and error factors (r ≈ 5), the SVALON code yields results comparable to those obtained with SAMPLE. An analysis of the unavailability of the low pressure recirculation system of Angra 1, for both the short- and long-term recirculation phases, is presented. The results for the short-term phase are in good agreement with the corresponding ones given in WASH-1400.
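A SAMPLE-style Monte Carlo propagation over minimal cut sets can be sketched as follows. The cut sets and basic-event medians here are hypothetical, not the Angra 1 or WASH-1400 data; only the lognormal error-factor treatment (r = 5) follows the abstract:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical minimal cut sets over 5 basic events (by index): the top
# event occurs if all events of any cut set occur (rare-event approximation).
cut_sets = [(0, 1), (1, 2), (3,), (2, 4)]
medians = np.array([1e-3, 5e-3, 2e-3, 1e-4, 3e-3])  # basic-event medians
error_factor = 5.0                     # 95th/50th percentile ratio
sigma = np.log(error_factor) / 1.645   # lognormal shape parameter

# Sample basic-event unavailabilities and combine over the cut sets.
n = 50_000
q = medians * rng.lognormal(0.0, sigma, (n, len(medians)))
top = sum(np.prod(q[:, list(cs)], axis=1) for cs in cut_sets)

median_top = np.median(top)
p95_top = np.percentile(top, 95)
```

An analytical code such as SVALON replaces the sampling loop with closed-form moment propagation over the same cut-set structure, which is what makes the two sets of results directly comparable.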
Propagation of radar rainfall uncertainty in urban flood simulations
Liguori, Sara; Rico-Ramirez, Miguel
2013-04-01
This work discusses the results of the implementation of a novel probabilistic system designed to improve ensemble sewer flow predictions for the drainage network of a small urban area in the North of England. The probabilistic system has been developed to model the uncertainty associated with radar rainfall estimates and propagate it through radar-based ensemble sewer flow predictions. The assessment of this system aims at outlining the benefits of addressing the uncertainty associated with radar rainfall estimates in a probabilistic framework, to be potentially implemented in the real-time management of the sewer network in the study area. Radar rainfall estimates are affected by uncertainty due to various factors [1-3], and quality control and correction techniques have been developed to improve their accuracy. However, the hydrological use of radar rainfall estimates and forecasts remains challenging. A significant effort has been devoted by the international research community to the assessment of uncertainty propagation through probabilistic hydro-meteorological forecast systems [4-5], and various approaches have been implemented for characterizing the uncertainty in radar rainfall estimates and forecasts [6-11]. A radar-based ensemble stochastic approach, similar to the one implemented for use in the Southern Alps by the REAL system [6], has been developed for the purpose of this work. An ensemble generator has been calibrated on the basis of the spatio-temporal characteristics of the residual error in radar estimates, assessed with reference to rainfall records from around 200 rain gauges available for the year 2007, previously post-processed and corrected by the UK Met Office [12-13]. Each ensemble member is determined by summing a perturbation field to the unperturbed radar rainfall field. The perturbations are generated by imposing the spatial and temporal correlation structure of the radar error on purely stochastic fields.
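The ensemble-generation idea (perturbation fields with an imposed spatial correlation structure added to the unperturbed radar field) can be sketched minimally. Here a simple box-filter smoothing of white noise stands in for the calibrated correlation model of the REAL-style generator, and all numbers are assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)

def correlated_field(shape, corr_len, rng):
    """Spatially correlated Gaussian field: white noise smoothed by a box
    filter of width ~corr_len, then renormalized to zero mean, unit std.
    A crude stand-in for imposing the radar-error correlation structure."""
    noise = rng.standard_normal(shape)
    kernel = np.ones(corr_len) / corr_len
    for axis in (0, 1):
        noise = np.apply_along_axis(
            lambda m: np.convolve(m, kernel, mode="same"), axis, noise)
    return (noise - noise.mean()) / noise.std()

radar = np.full((64, 64), 2.0)   # unperturbed rain field, mm/h (assumed)
err_std = 0.5                    # residual-error std (assumed)
ensemble = [radar + err_std * correlated_field(radar.shape, 8, rng)
            for _ in range(20)]
```

Each member differs from the unperturbed field by a smooth perturbation of the prescribed magnitude; feeding such members through the sewer-flow model is what turns the deterministic prediction into an ensemble one.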
Propagation Delay Uncertainty in Time-of-Flight Systems
Feehrer, John Ross
1995-01-01
This dissertation presents a study of how propagation delay uncertainty affects the performance of time-of-flight synchronized digital circuits. Time-of-flight synchronization is a new timing method suitable for technologies, such as optoelectronics, with highly controllable propagation delay. No bistable memory elements are required, and synchronization is accomplished by precise adjustment of interconnect lengths. Delay is distributed over connections so that, nominally, pulses arrive at a common destination simultaneously. Clock gating and pulse stretching are used to restore the timing of pulses. Time multiplexing is used to increase computational throughput, whereby a major cycle is divided into a number of minor cycles, each representing an independent virtual machine. The amount of multiplexing that is feasible is limited by the controllability of delay. The principal focus of this research is methods for computing the minimum feasible minor cycle and the amount of stretch needed to prevent synchronization errors. Due to the unique circuit features, timing analysis differs significantly from the analysis of conventional digital circuits. Models of delay uncertainty accounting for static and dynamic effects are discussed for discrete and integrated implementations. Methods for placing a minimal set of clock gates necessary for a functional circuit are presented. The minimum feasible major cycle is computed using nominal delays. A method for computing the arrival time and pulse width uncertainty at each node in the circuit is presented: the circuit graph is traversed, and device uncertainty functions operating on worst-case input pulse parameters are applied at vertices. Using pulse timing parameters obtained from the traversal, timing constraints are generated. A constrained minimization problem to find the minimum feasible minor cycle is then presented and solved, along with two variations on this problem. Circuit structural issues that affect the accuracy of
Solar and nuclear physics uncertainties in cosmic-ray propagation
Tomassetti, Nicola
2017-11-01
Recent data released by the Alpha Magnetic Spectrometer (AMS) experiment on the primary spectra and secondary-to-primary ratios in cosmic rays (CRs) can place tight constraints on astrophysical models of CR acceleration and transport in the Galaxy, thereby providing a robust baseline of the astrophysical background for dark matter searches via antimatter. However, models of CR propagation are affected by other important sources of uncertainty, notably from solar modulation and nuclear fragmentation, that cannot be improved with the sole use of the AMS data. The present work is aimed at assessing these uncertainties and their relevance in the interpretation of the new AMS data on the boron-to-carbon (B/C) ratio. Uncertainties from solar modulation are estimated using improved models of CR transport in the heliosphere constrained against various types of measurements: monthly resolved CR data collected by balloon-borne or space missions, interstellar flux data from the Voyager-1 spacecraft, and counting rates from ground-based neutron monitor detectors. Uncertainties from nuclear fragmentation are estimated using semiempirical cross-section formulas constrained by measurements on isotopically resolved and charge-changing reactions. We found that a proper data-driven treatment of solar modulation can guarantee the desired level of precision, in comparison with the improved accuracy of the recent data on the B/C ratio. On the other hand, nuclear uncertainties represent a serious limiting factor over a wide energy range. We therefore stress the need for establishing a dedicated program of cross-section measurements at the O(100 GeV) energy scale.
Investigating the Propagation of Meteorological Model Uncertainty for Tracer Modeling
Lopez-Coto, I.; Ghosh, S.; Karion, A.; Martin, C.; Mueller, K. L.; Prasad, K.; Whetstone, J. R.
2016-12-01
The North-East Corridor project aims to use a top-down inversion method to quantify sources of Greenhouse Gas (GHG) emissions in the urban areas of Washington DC and Baltimore at approximately 1 km² resolution, and thereby to help establish reliable measurement methods for quantifying and validating GHG emissions independently of the inventory methods typically used to guide mitigation efforts. Since inversion methods depend strongly on atmospheric transport modeling, analyzing the uncertainties on the meteorological fields and their propagation through the sensitivities of observations to surface fluxes (footprints) is a fundamental step. To this end, six configurations of the Weather Research and Forecasting Model (WRF-ARW) version 3.8 were used to generate an ensemble of meteorological simulations. Specifically, we used 4 planetary boundary layer parameterizations (YSU, MYNN2, BOULAC, QNSE), 2 sources of initial and boundary conditions (NARR and HRRR) and 1 configuration including the building energy parameterization (BEP) urban canopy model. The simulations were compared with more than 150 meteorological surface stations, a wind profiler and radiosondes for one month (February 2016) to account for the uncertainties and the ensemble spread in wind speed, wind direction and mixing height. In addition, we used the Stochastic Time-Inverted Lagrangian Transport model (STILT) to derive the sensitivity of 12 hypothetical observations to surface emissions (footprints) with each WRF configuration. The footprints and integrated sensitivities were compared and the resulting uncertainties estimated.
Analytic uncertainty and sensitivity analysis of models with input correlations
Zhu, Yueying; Wang, Qiuping A.; Li, Wei; Cai, Xu
2018-03-01
Probabilistic uncertainty analysis is a common means of evaluating mathematical models. In mathematical modeling, the uncertainty in input variables is specified through distribution laws. Its contribution to the uncertainty in the model response is usually analyzed by assuming that the input variables are independent of each other. However, correlated parameters often occur in practical applications. In the present paper, an analytic method is built for the uncertainty and sensitivity analysis of models in the presence of input correlations. With the method, it is straightforward to identify the importance of the independence and correlations of input variables in determining the model response. This allows one to decide whether or not the input correlations should be considered in practice. Numerical examples demonstrate the effectiveness and validity of our analytic method in the analysis of general models. A practical application of the method to the uncertainty and sensitivity analysis of a deterministic HIV model is also presented.
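The first-order effect of input correlations on the output variance can be shown analytically with the delta method, Var[y] ≈ gᵀΣg, where g is the gradient of the model at the input means and Σ the input covariance. The toy model and numbers below are illustrative assumptions, not the paper's HIV model:

```python
import numpy as np

# Toy model response y = f(x1, x2) (an assumed example, not the paper's).
f = lambda x: 3 * x[0] + 2 * x[1] ** 2

mu = np.array([1.0, 2.0])      # input means
sd = np.array([0.1, 0.2])      # input standard deviations
rho = 0.8                      # input correlation coefficient
Sigma = np.array([[sd[0]**2,          rho * sd[0] * sd[1]],
                  [rho * sd[0] * sd[1], sd[1]**2        ]])

g = np.array([3.0, 4 * mu[1]])           # gradient of f at the means
var_corr = g @ Sigma @ g                 # variance with correlation
var_indep = g @ np.diag(sd**2) @ g       # variance assuming independence
```

Here the positive correlation inflates the output variance from 2.65 to about 3.42, quantifying exactly the kind of discrepancy that decides whether input correlations can be ignored in practice.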
Risk classification and uncertainty propagation for virtual water distribution systems
International Nuclear Information System (INIS)
Torres, Jacob M.; Brumbelow, Kelly; Guikema, Seth D.
2009-01-01
While the secrecy of real water distribution system data is crucial, it poses difficulty for research as results cannot be publicized. This data includes topological layouts of pipe networks, pump operation schedules, and water demands. Therefore, a library of virtual water distribution systems can be an important research tool for comparative development of analytical methods. A virtual city, 'Micropolis', has been developed, including a comprehensive water distribution system, as a first entry into such a library. This virtual city of 5000 residents is fully described in both geographic information systems (GIS) and EPANet hydraulic model frameworks. A risk classification scheme and Monte Carlo analysis are employed for an attempted water supply contamination attack. Model inputs to be considered include uncertainties in: daily water demand, seasonal demand, initial storage tank levels, the time of day a contamination event is initiated, duration of contamination event, and contaminant quantity. Findings show that reasonable uncertainties in model inputs produce high variability in exposure levels. It is also shown that exposure level distributions experience noticeable sensitivities to population clusters within the contaminant spread area. High uncertainties in exposure patterns lead to greater resources needed for more effective mitigation strategies.
Directory of Open Access Journals (Sweden)
Krivtchik Guillaume
2017-01-01
Scenario studies simulate the whole fuel cycle over a period of time, from the extraction of natural resources to geological storage. Through the comparison of different reactor fleet evolutions and fuel management options, they constitute a decision-making support. Consequently, uncertainty propagation studies, which are necessary to assess the robustness of such studies, are strategic. Among the numerous types of physical model in scenario computations that generate uncertainty, the equivalence models, built for calculating fresh fuel enrichment (for instance, the plutonium content in PWR MOX) so as to be representative of nominal fuel behavior, are very important. The equivalence condition is generally formulated in terms of end-of-cycle mean core reactivity. As this results from a physical computation, it is therefore associated with an uncertainty. A state of the art of equivalence models is exposed and discussed. It is shown that the existing equivalence models implemented in scenario codes, such as COSI6, are not suited to uncertainty propagation computation, for the following reasons: (i) existing analytical models neglect irradiation, which has a strong impact on the result and its uncertainty; (ii) current black-box models are not suited to cross-section perturbation management; and (iii) models based on transport and depletion codes are too time-consuming for stochastic uncertainty propagation. A new type of equivalence model based on Artificial Neural Networks (ANN) has been developed, constructed with data calculated with neutron transport and depletion codes. The model inputs are the fresh fuel isotopy, the irradiation parameters (burnup, core fractionation, etc.), cross-section perturbations and the equivalence criterion (for instance, the target core reactivity in pcm at the end of the irradiation cycle). The model output is the fresh fuel content such that the target reactivity is reached at the end of the irradiation cycle.
Uncertainty propagation for systems of conservation laws, stochastic spectral methods
International Nuclear Information System (INIS)
Poette, G.
2009-09-01
Uncertainty quantification through stochastic spectral methods has recently been applied to several kinds of stochastic PDEs. This thesis deals with stochastic systems of conservation laws. These systems are nonlinear and develop discontinuities in finite time; these difficulties can trigger the loss of hyperbolicity of the truncated system resulting from the application of sG-gPC (stochastic Galerkin generalized Polynomial Chaos). We introduce a formalism based on both kinetic theory and moment theory in order to close the truncated system in such a way that its hyperbolicity is ensured. The idea is to close the truncated system obtained by Galerkin projection via the introduction of an entropy, a strictly convex function on the definition domain of our unknowns. When this entropy is the mathematical entropy of the non-truncated system, hyperbolicity is ensured. We state several properties of this truncated system for a general non-truncated system of conservation laws. We then apply the method to the stochastic inviscid Burgers equation with random initial conditions and to the stochastic Euler system in one and two space dimensions. In the vicinity of discontinuities, the new method bounds the oscillations due to the Gibbs phenomenon to a certain range through the entropy of the system, without the use of any adaptive random space discretization. It is found to be more precise than the stochastic Galerkin method for several test problems. In a last chapter, we present two prospective outlooks: we first suggest an uncertainty propagation method based on the coupling of intrusive and non-intrusive methods, and we finally emphasize the modelling possibilities of intrusive Polynomial Chaos methods for taking into account three-dimensional perturbations of a mean one-dimensional flow. (author)
International Nuclear Information System (INIS)
Pimentel, B.M.; Suzuki, A.T.; Tomazelli, J.L.
1992-01-01
The principle of analytic continuation can be used to derive causal distributions for covariant propagators. We apply this principle as a basis for deriving analytically continued causal distributions for algebraic non-covariant propagators. (author)
Propagation of nuclear data uncertainty: Exact or with covariances
Directory of Open Access Journals (Sweden)
van Veen D.
2010-10-01
Two distinct methods for propagating basic nuclear data uncertainties to large-scale systems will be presented and compared. The "Total Monte Carlo" method uses a statistical ensemble of nuclear data libraries randomly generated by means of a Monte Carlo approach with the TALYS system. These libraries are then directly used in a large number of reactor calculations (for instance with MCNP), after which the exact probability distribution for the reactor parameter is obtained. The second method makes use of available covariance files and can be done in a single reactor calculation (by using the perturbation method). In this exercise, both methods use consistent sets of data files, which implies that the covariance files used in the second method are directly obtained from the randomly generated nuclear data libraries of the first method. This is a unique and straightforward comparison allowing one to directly apprehend the advantages and drawbacks of each method. Comparisons for different reactions and criticality-safety benchmarks, from 19F to actinides, will be presented. We can thus conclude whether or not current methods for using covariance data are good enough.
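The two methods can be contrasted on a toy problem in which a cheap analytic function stands in for the full reactor calculation and a two-parameter "covariance file" is assumed (nothing here reproduces the actual TALYS/MCNP setup):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy "reactor response" k = f(p0, p1): a cheap stand-in for a transport
# calculation; the data means and covariance below are assumptions.
f = lambda p: p[..., 1] / (p[..., 0] + p[..., 1])

mu = np.array([2.0, 3.0])
cov = np.array([[0.04, 0.01],
                [0.01, 0.09]])

# Method 1: "Total Monte Carlo" - sample random data libraries, rerun
# the (here cheap) calculation for each, read off the output spread.
libs = rng.multivariate_normal(mu, cov, 100_000)
var_tmc = f(libs).var()

# Method 2: perturbation with the covariance file: Var ~ s^T C s,
# with sensitivities s = df/dp evaluated at the nominal data.
s = np.array([-mu[1] / (mu[0] + mu[1]) ** 2,
               mu[0] / (mu[0] + mu[1]) ** 2])
var_cov = s @ cov @ s
```

For this mildly nonlinear response the two variances agree to within a few percent; the benchmark exercise in the paper asks exactly when such first-order covariance propagation stops being good enough.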
Propagation of Statistical and Nuclear Data Uncertainties in Monte-Carlo Burn-up Calculations
García Herranz, Nuria; Cabellos de Francisco, Oscar Luis; Sanz Gonzalo, Javier; Juan Ruiz, Jesús; Kuijper, Jim C.
2008-01-01
Two methodologies to propagate the uncertainties on the nuclide inventory in combined Monte Carlo-spectrum and burn-up calculations are presented, based on sensitivity/uncertainty and random sampling techniques (uncertainty Monte Carlo method). Both enable the assessment of the impact of uncertainties in the nuclear data as well as uncertainties due to the statistical nature of the Monte Carlo neutron transport calculation. The methodologies are implemented in our MCNP–ACAB system, which comb...
Uncertainties in workplace external dosimetry--an analytical approach.
Ambrosi, P
2006-01-01
The uncertainties associated with external dosimetry measurements at workplaces depend on the type of dosemeter used together with its performance characteristics and the information available on the measurement conditions. Performance characteristics were determined in the course of a type test and information about the measurement conditions can either be general, e.g. 'research' and 'medicine', or specific, e.g. 'X-ray testing equipment for aluminium wheel rims'. This paper explains an analytical approach to determine the measurement uncertainty. It is based on the Draft IEC Technical Report IEC 62461 Radiation Protection Instrumentation-Determination of Uncertainty in Measurement. Both this paper and the report cannot eliminate the fact that the determination of the uncertainty requires a larger effort than performing the measurement itself. As a counterbalance, the process of determining the uncertainty results not only in a numerical value of the uncertainty but also produces the best estimate of the quantity to be measured, which may differ from the indication of the instrument. Thus it also improves the result of the measurement.
International Nuclear Information System (INIS)
Rocco Sanseverino, Claudio M.; Ramirez-Marquez, José Emmanuel
2014-01-01
The reliability of a system, notwithstanding its intended function, can be significantly affected by the uncertainty in the reliability estimates of the components that define the system. This paper implements the Unscented Transformation (UT) to quantify the effects of component reliability uncertainty through two approaches. The first approach is based on the concept of uncertainty propagation, i.e., the assessment of the effect that the variability of the component reliabilities produces on the variance of the system reliability. This UT-based assessment has been previously considered in the literature, but only for systems represented through series/parallel configurations. In this paper the assessment is extended to systems whose reliability cannot be represented through analytical expressions and that require, for example, Monte Carlo simulation. The second approach consists of evaluating the importance of components, i.e., identifying the components that contribute most to the variance of the system reliability. An extension of the UT is proposed to evaluate the so-called "main effects" of each component, as well as to assess higher-order component interactions. Several examples with excellent results illustrate the proposed approach. - Highlights: • Simulation based approach for computing reliability estimates. • Computation of reliability variance via 2n+1 points. • Immediate computation of component importance. • Application to network systems
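The 2n+1-point mechanics of the Unscented Transformation can be sketched for a small series-parallel system. The sigma-point construction below is the standard textbook form for independent inputs; the three-component system and its reliability numbers are hypothetical, not an example from the paper:

```python
import numpy as np

def unscented_variance(func, mu, sd, alpha=1.0, beta=0.0, kappa=None):
    """Mean and variance of func(X) via the Unscented Transformation,
    using the standard 2n+1 sigma points for independent inputs."""
    n = len(mu)
    if kappa is None:
        kappa = 3 - n
    lam = alpha**2 * (n + kappa) - n
    pts = [np.array(mu, float)]
    for i in range(n):
        d = np.zeros(n)
        d[i] = np.sqrt(n + lam) * sd[i]
        pts += [mu + d, mu - d]
    w_m = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    w_m[0] = lam / (n + lam)
    w_c = w_m.copy()
    w_c[0] += 1 - alpha**2 + beta
    y = np.array([func(p) for p in pts])
    mean = w_m @ y
    var = w_c @ (y - mean) ** 2
    return mean, var

# Hypothetical system: component 0 in series with the parallel pair (1, 2).
R = lambda r: r[0] * (1 - (1 - r[1]) * (1 - r[2]))
mean, var = unscented_variance(R, np.array([0.9, 0.8, 0.85]),
                               np.array([0.02, 0.05, 0.04]))
```

Only 2n+1 = 7 evaluations of R are needed, which is what makes the UT attractive when each evaluation is itself an expensive Monte Carlo simulation.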
Estimation of the uncertainty of analyte concentration from the measurement uncertainty.
Brown, Simon; Cooke, Delwyn G; Blackwell, Leonard F
2015-09-01
Ligand-binding assays, such as immunoassays, are usually analysed using standard curves based on the four-parameter and five-parameter logistic models. An estimate of the uncertainty of an analyte concentration obtained from such curves is needed for confidence intervals or precision profiles. Using a numerical simulation approach, it is shown that the uncertainty of the analyte concentration estimate becomes significant at the extremes of the concentration range and that this is affected significantly by the steepness of the standard curve. We also provide expressions for the coefficient of variation of the analyte concentration estimate from which confidence intervals and the precision profile can be obtained. Using three examples, we show that the expressions perform well.
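The behavior described (uncertainty blowing up at the extremes of the standard curve, driven by its steepness) can be reproduced with a four-parameter logistic curve and the delta method. The parameter values and response uncertainty below are illustrative assumptions, not values from the paper's assays:

```python
import numpy as np

# Four-parameter logistic standard curve: y = d + (a - d) / (1 + (x/c)**b)
# with illustrative (assumed) parameters.
a, b, c, d = 2.0, 1.2, 50.0, 0.05

def fourpl(x):
    return d + (a - d) / (1 + (x / c) ** b)

def inverse_4pl(y):
    """Analyte concentration estimated from a measured response."""
    return c * ((a - d) / (y - d) - 1) ** (1 / b)

def concentration_cv(x, sd_y, h=1e-6):
    """Delta method: CV of the concentration estimate, from the response
    uncertainty sd_y and the numerical slope dy/dx of the curve."""
    slope = (fourpl(x * (1 + h)) - fourpl(x * (1 - h))) / (2 * x * h)
    return (sd_y / abs(slope)) / x

cv_mid = concentration_cv(50.0, 0.02)   # near the curve midpoint
cv_low = concentration_cv(2.0, 0.02)    # near the low extreme
```

With these numbers the coefficient of variation is roughly an order of magnitude larger near the low extreme than at the midpoint, which is the shape of the precision profile the abstract describes.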
International Nuclear Information System (INIS)
Berge, Leonie
2015-01-01
The prompt fission neutron spectrum (PFNS) is very important for various nuclear physics applications. Yet, except for the 252Cf spontaneous fission spectrum, which is an international standard and is used for metrology purposes, the PFNS is still poorly known for most fissioning nuclides. In particular, few measurements exist for the fast fission spectrum (induced by a neutron whose energy exceeds about 100 keV), and the international evaluations show strong discrepancies. There are also very few data on the covariances associated with the various PFNS evaluations. In this work we present three aspects of the PFNS evaluation. The first aspect is the spectrum modeling with the FIFRELIN code, developed at CEA Cadarache, which simulates the fission fragment de-excitation by successive emissions of prompt neutrons and gammas, via the Monte Carlo method. This code aims at calculating all fission observables in a single consistent calculation, starting from the fission fragment distributions (mass, kinetic energy and spin). FIFRELIN is therefore more predictive than the analytical models used to describe the spectrum. A study of the model parameters that impact the spectrum, such as the fragment level density parameter, is presented in order to better reproduce the spectrum. The second aspect of this work is the evaluation of the PFNS and its covariance matrix. We present a methodology to produce this evaluation in a rigorous way, with the CONRAD code, developed at CEA Cadarache. This implies modeling the spectrum through simple models, like the Madland-Nix model, the most commonly used in the evaluations, by adjusting the model parameters to reproduce experimental data. The covariance matrix arises from the rigorous propagation of the sources of uncertainty involved in the calculation. In particular, the systematic uncertainties arising from the experimental set-up are propagated via a marginalization technique.
'spup' - an R package for uncertainty propagation analysis in spatial environmental modelling
Sawicka, Kasia; Heuvelink, Gerard
2017-04-01
Computer models have become a crucial tool in engineering and environmental sciences for simulating the behaviour of complex static and dynamic systems. However, while many models are deterministic, the uncertainty in their predictions needs to be estimated before they are used for decision support. Currently, advances in uncertainty propagation and assessment have been paralleled by a growing number of software tools for uncertainty analysis, but none has gained recognition for universal applicability or the ability to deal with case studies involving spatial models and spatial model inputs. Due to the growing popularity and applicability of the open source R programming language, we undertook a project to develop an R package that facilitates uncertainty propagation analysis in spatial environmental modelling. In particular, the 'spup' package provides functions for examining the uncertainty propagation starting from input data and model parameters, via the environmental model, onto model predictions. The functions include uncertainty model specification, stochastic simulation and propagation of uncertainty using Monte Carlo (MC) techniques, as well as several uncertainty visualization functions. Uncertain environmental variables are represented in the package as objects whose attribute values may be uncertain and described by probability distributions. Both numerical and categorical data types are handled. Spatial auto-correlation within an attribute and cross-correlation between attributes are also accommodated. For uncertainty propagation the package implements the MC approach with efficient sampling algorithms, i.e. stratified random sampling and Latin hypercube sampling. The design includes facilitation of parallel computing to speed up MC computation. The MC realizations may be used as an input to the environmental models called from R, or externally. Selected visualization methods that are understandable by non-experts with limited background in
Measuring the Gas Constant "R": Propagation of Uncertainty and Statistics
Olsen, Robert J.; Sattar, Simeen
2013-01-01
Determining the gas constant "R" by measuring the properties of hydrogen gas collected in a gas buret is well suited for comparing two approaches to uncertainty analysis using a single data set. The brevity of the experiment permits multiple determinations, allowing for statistical evaluation of the standard uncertainty u[subscript…
Pragmatic aspects of uncertainty propagation: A conceptual review
Thacker, W.Carlisle
2015-09-11
When quantifying the uncertainty of the response of a computationally costly oceanographic or meteorological model stemming from the uncertainty of its inputs, practicality demands getting the most information using the fewest simulations. It is widely recognized that, by interpolating the results of a small number of simulations, results of additional simulations can be inexpensively approximated to provide a useful estimate of the variability of the response. Even so, as computing the simulations to be interpolated remains the biggest expense, the choice of these simulations deserves attention. When making this choice, two requirements should be considered: (i) the nature of the interpolation and (ii) the available information about input uncertainty. Examples comparing polynomial interpolation and Gaussian process interpolation are presented for three different views of input uncertainty.
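The surrogate idea in this abstract — interpolate a handful of expensive runs, then push many cheap samples through the interpolant — can be illustrated with both interpolation types it compares. A minimal Python sketch with an invented stand-in model; the kernel length scale and jitter are arbitrary choices, not taken from the paper:

```python
import numpy as np

def expensive_model(x):                  # stand-in for a costly simulation
    return np.sin(3.0 * x)

x_train = np.linspace(0.0, 1.0, 5)       # only 5 affordable runs
y_train = expensive_model(x_train)

# Surrogate 1: interpolating polynomial (degree n-1 through all 5 points)
coeffs = np.polyfit(x_train, y_train, 4)

# Surrogate 2: Gaussian-process interpolation with an RBF kernel
def rbf(a, b, ell=0.3):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

K = rbf(x_train, x_train) + 1e-10 * np.eye(5)   # jitter for conditioning
alpha = np.linalg.solve(K, y_train)

def gp_predict(x):
    return rbf(np.atleast_1d(x), x_train) @ alpha

# Propagate input uncertainty X ~ U(0,1) through the cheap surrogates
rng = np.random.default_rng(0)
xs = rng.random(20_000)
std_poly = np.std(np.polyval(coeffs, xs))
std_gp = np.std(gp_predict(xs))
```

Both surrogates reproduce the training runs exactly; where they differ is in how they behave between the chosen simulation points, which is why the abstract stresses that the choice of simulations must suit the interpolation.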
An algorithm to improve sampling efficiency for uncertainty propagation using sampling based method
International Nuclear Information System (INIS)
Campolina, Daniel; Lima, Paulo Rubens I.; Pereira, Claubia; Veloso, Maria Auxiliadora F.
2015-01-01
Sample size and computational uncertainty were varied in order to investigate sampling efficiency and convergence of the sampling-based method for uncertainty propagation. The transport code MCNPX was used to simulate an LWR model and allow the mapping from uncertain inputs of the benchmark experiment to uncertain outputs. Random sampling efficiency was improved through the use of an algorithm for selecting distributions. Mean range, standard deviation range and skewness were verified in order to obtain a better representation of uncertainty figures. A standard deviation of 5 pcm in the propagated uncertainties over 10 replicates of n samples was adopted as the convergence criterion for the method. An uncertainty of 75 pcm on the reactor k_eff was estimated by using a sample of size 93 and a computational uncertainty of 28 pcm to propagate the 1σ uncertainty of the burnable poison radius. For a fixed computational time, in order to reduce the variance of the propagated uncertainty, it was found, for the example under investigation, that it is preferable to double the sample size rather than to double the number of particles followed by the Monte Carlo process in the MCNPX code. (author)
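The replicate-based convergence criterion can be sketched: run the whole propagation several times and watch the spread of the estimated output uncertainty. A toy Python stand-in (the linear k_eff response, noise levels and sample sizes below are invented for illustration and are not MCNPX):

```python
import numpy as np

rng = np.random.default_rng(1)

def transport_run(radius, n_particles, rng):
    """Toy stand-in for a transport run: the response depends on the
    sampled input, plus statistical noise shrinking with particle count."""
    k_true = 1.0 + 0.02 * (radius - 1.0)
    return k_true + rng.normal(0.0, 0.003 * np.sqrt(1e4 / n_particles))

def propagated_sigma(n_samples, n_particles, rng):
    radii = rng.normal(1.0, 0.05, n_samples)   # 1-sigma input uncertainty
    keff = np.array([transport_run(r, n_particles, rng) for r in radii])
    return keff.std(ddof=1)                    # propagated uncertainty

# Convergence check: spread of the uncertainty estimate over 10 replicates
reps = [propagated_sigma(93, 1e4, rng) for _ in range(10)]
spread = np.std(reps, ddof=1)
```

When the replicate spread falls below the chosen threshold (5 pcm in the abstract), the sample size is considered sufficient; otherwise either the sample size or the per-run particle count is increased, which is exactly the trade-off the abstract quantifies.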
An algorithm to improve sampling efficiency for uncertainty propagation using sampling based method
Energy Technology Data Exchange (ETDEWEB)
Campolina, Daniel; Lima, Paulo Rubens I., E-mail: campolina@cdtn.br, E-mail: pauloinacio@cpejr.com.br [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG), Belo Horizonte, MG (Brazil). Servico de Tecnologia de Reatores; Pereira, Claubia; Veloso, Maria Auxiliadora F., E-mail: claubia@nuclear.ufmg.br, E-mail: dora@nuclear.ufmg.br [Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, MG (Brazil). Dept. de Engenharia Nuclear
2015-07-01
Preliminary Results on Uncertainty Quantification for Pattern Analytics
Energy Technology Data Exchange (ETDEWEB)
Stracuzzi, David John [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Brost, Randolph [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Chen, Maximillian Gene [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Malinas, Rebecca [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Peterson, Matthew Gregor [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Phillips, Cynthia A. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Robinson, David G. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Woodbridge, Diane [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)
2015-09-01
This report summarizes preliminary research into uncertainty quantification for pattern analytics within the context of the Pattern Analytics to Support High-Performance Exploitation and Reasoning (PANTHER) project. The primary focus of PANTHER was to make large quantities of remote sensing data searchable by analysts. The work described in this report adds nuance to both the initial data preparation steps and the search process. Search queries are transformed from "does the specified pattern exist in the data?" to "how certain is the system that the returned results match the query?" We show example results for both data processing and search, and discuss a number of possible improvements for each.
Campolina, Daniel de A. M.; Lima, Claubia P. B.; Veloso, Maria Auxiliadora F.
2014-06-01
For all the physical components that comprise a nuclear system there is an uncertainty. Assessing the impact of uncertainties in the simulation of fissionable material systems is essential for a best-estimate calculation, which has been replacing conservative model calculations as the computational power increases. The propagation of uncertainty in a simulation using a Monte Carlo code by sampling the input parameters is recent because of the huge computational effort required. In this work a sample space of MCNPX calculations was used to propagate the uncertainty. The sample size was optimized using the Wilks formula for a 95th percentile and a two-sided statistical tolerance interval of 95%. Uncertainties in input parameters of the reactor considered included geometry dimensions and densities. The capacity of the sampling-based method for burnup calculations was demonstrated when the sample size is optimized and many parameter uncertainties are investigated together in the same input.
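The Wilks sample size for a 95%/95% two-sided tolerance interval can be reproduced directly. A short Python check of the first-order two-sided Wilks condition (using the sample minimum and maximum as the tolerance bounds), which yields exactly 93 — the sample size quoted in the related abstract above:

```python
def wilks_two_sided(coverage=0.95, confidence=0.95):
    """Smallest n such that (min, max) of n runs forms a two-sided
    tolerance interval covering `coverage` of the output distribution
    with probability at least `confidence`."""
    n = 2
    while True:
        # First-order two-sided Wilks condition
        conf = 1.0 - coverage**n - n * (1.0 - coverage) * coverage**(n - 1)
        if conf >= confidence:
            return n
        n += 1
```

For example, `wilks_two_sided(0.95, 0.95)` returns 93; relaxing the confidence level reduces the required number of code runs.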
An analytical approach for the Propagation Saw Test
Benedetti, Lorenzo; Fischer, Jan-Thomas; Gaume, Johan
2016-04-01
The Propagation Saw Test (PST) [1, 2] is an experimental in-situ technique that has been introduced to assess crack propagation propensity in weak snowpack layers buried below cohesive snow slabs. This test has attracted the interest of a large number of practitioners, being relatively easy to perform and providing useful insights for the evaluation of snow instability. The PST procedure requires isolating a snow column 30 centimeters wide and at least 1 meter long in the downslope direction. Then, once the stratigraphy is known (e.g. from a manual snow profile), a saw is used to cut a weak layer which could fail, potentially leading to the release of a slab avalanche. If the length of the saw cut reaches the so-called critical crack length, the onset of crack propagation occurs. Furthermore, depending on snow properties, the crack in the weak layer can initiate the fracture and detachment of the overlying slab. Statistical studies over a large set of field data confirmed the relevance of the PST, highlighting the positive correlation between test results and the likelihood of avalanche release [3]. Recent works provided key information on the conditions for the onset of crack propagation [4] and on the evolution of slab displacement during the test [5]. In addition, experimental studies [6] and simplified models [7] focused on the qualitative description of snowpack properties leading to different failure types, namely full propagation or fracture arrest (with or without slab fracture). However, besides current numerical studies utilizing discrete element methods [8], little attention has been devoted to a detailed analytical description of the PST able to give a comprehensive mechanical framework of the sequence of processes involved in the test. Consequently, this work aims to give a quantitative tool for an exhaustive interpretation of the PST, drawing attention to important parameters that influence the test outcomes. First, starting from a pure
Sonic Boom Pressure Signature Uncertainty Calculation and Propagation to Ground Noise
West, Thomas K., IV; Bretl, Katherine N.; Walker, Eric L.; Pinier, Jeremy T.
2015-01-01
The objective of this study was to outline an approach for the quantification of uncertainty in sonic boom measurements and to investigate the effect of various near-field uncertainty representation approaches on ground noise predictions. These approaches included a symmetric versus asymmetric uncertainty band representation and a dispersion technique based on a partial sum Fourier series that allows for the inclusion of random error sources in the uncertainty. The near-field uncertainty was propagated to the ground level, along with additional uncertainty in the propagation modeling. Estimates of perceived loudness were obtained for the various types of uncertainty representation in the near-field. Analyses were performed on three configurations of interest to the sonic boom community: the SEEB-ALR, the 69° Delta Wing, and the LM 1021-01. Results showed that the representation of the near-field uncertainty plays a key role in ground noise predictions. Using a Fourier series based dispersion approach can double the amount of uncertainty in the ground noise compared to a pure bias representation. Compared to previous computational fluid dynamics results, uncertainty in ground noise predictions was greater when considering the near-field experimental uncertainty.
Addressing Uncertainty in Signal Propagation and Sensor Performance Predictions
2008-11-01
...guidance on how to place the sensors. Time of deployment is weeks or months from present, so weather/terrain conditions are only known in a climatological sense, with temporal behavior based on the climatological/historical conditions. (ERDC/CRREL TR-08-21, Section 3, "Describing Uncertainty") "There are known knowns; there
Error Estimation and Uncertainty Propagation in Computational Fluid Mechanics
Zhu, J. Z.; He, Guowei; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
Numerical simulation has now become an integral part of the engineering design process. Critical design decisions are routinely made based on simulation results and conclusions. Verification and validation of the reliability of numerical simulation is therefore vitally important in the engineering design process. We propose to develop theories and methodologies that can automatically provide quantitative information about the reliability of a numerical simulation, by estimating the numerical approximation error, the computational-model-induced errors and the uncertainties contained in the mathematical models, so that the reliability of the simulation can be verified and validated. We also propose to develop and implement methodologies and techniques that can control error and uncertainty during the numerical simulation so that its reliability can be improved.
Propagation of uncertainties in problems of structural reliability
International Nuclear Information System (INIS)
Mazumdar, M.; Marshall, J.A.; Chay, S.C.
1978-01-01
The problem of controlling a variable Y such that the probability of its exceeding a specified design limit L is very small is treated. This variable is related to a set of random variables X_i by means of a known function Y = f(X_i). The following approximate methods are considered for estimating the propagation of error in the X_i through the function f(·): linearization; method of moments; Monte Carlo methods; numerical integration. Response surface and associated design-of-experiments problems as well as statistical inference problems are discussed. (Auth.)
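The first method in that list, linearization (the delta method), is easy to sketch: propagate the input standard deviations through the local gradient of f. A minimal Python illustration using finite-difference gradients; the example function and sigma values are invented for the demonstration:

```python
import numpy as np

def linearized_variance(f, x0, sigmas, h=1e-6):
    """First-order (delta-method) propagation:
    Var(Y) ~ sum_i (df/dx_i)^2 * sigma_i^2, with gradients of f taken
    by central finite differences at the nominal point x0."""
    x0 = np.asarray(x0, dtype=float)
    grad = np.empty_like(x0)
    for i in range(x0.size):
        e = np.zeros_like(x0)
        e[i] = h
        grad[i] = (f(x0 + e) - f(x0 - e)) / (2.0 * h)
    return float(np.sum((grad * np.asarray(sigmas)) ** 2))

# Example: Y = X1 * X2 near (2, 3) with sigmas (0.1, 0.2)
var_y = linearized_variance(lambda x: x[0] * x[1], [2.0, 3.0], [0.1, 0.2])
# First-order analytic value: (3*0.1)^2 + (2*0.2)^2 = 0.25
```

The approximation is cheap (2 model runs per input) but only first-order accurate, which is why the abstract lists Monte Carlo and numerical integration as alternatives when f is strongly nonlinear.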
International Nuclear Information System (INIS)
Olsen, A.R.; Cunningham, M.E.
1980-01-01
With the increasing sophistication and use of computer codes in the nuclear industry, there is a growing awareness of the need to identify and quantify the uncertainties of these codes. In any effort to model physical mechanisms, the results obtained from the model are subject to some degree of uncertainty. This uncertainty has two primary sources. First, there is uncertainty in the model's representation of reality. Second, there is an uncertainty in the input data required by the model. If individual models are combined into a predictive sequence, the uncertainties from an individual model will propagate through the sequence and add to the uncertainty of results later obtained. Nuclear fuel rod stored-energy models, characterized as a combination of numerous submodels, exemplify models so affected. Each submodel depends on output from previous calculations and may involve iterative interdependent submodel calculations for the solution. The iterative nature of the model and the cost of running the model severely limit the uncertainty analysis procedures. An approach for uncertainty analysis under these conditions was designed for the particular case of stored-energy models. It is assumed that the complicated model is correct, that a simplified model based on physical considerations can be designed to approximate the complicated model, and that linear error propagation techniques can be used on the simplified model
International Nuclear Information System (INIS)
Hu Li; Cai Yangjian
2006-01-01
Based on the generalized diffraction integral formula for treating the propagation of a laser beam through a misaligned paraxial ABCD optical system in the cylindrical coordinate system, an analytical formula for a circular flattened Gaussian beam propagating through such an optical system is derived. Furthermore, an approximate analytical formula is derived for a circular flattened Gaussian beam propagating through an apertured misaligned ABCD optical system by expanding the hard-aperture function as a finite sum of complex Gaussian functions. Numerical examples are given.
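The hard-aperture expansion mentioned here is commonly written as a finite sum of complex Gaussians (a sketch in standard notation, not taken from this abstract; the complex coefficients A_k, B_k are tabulated expansion coefficients in the literature and N is typically of order ten):

```latex
H(r) \;=\; \mathrm{circ}\!\left(\frac{r}{a}\right) \;\approx\; \sum_{k=1}^{N} A_k \exp\!\left(-\frac{B_k\, r^2}{a^2}\right),
```

where a is the aperture radius. Substituting this sum into the diffraction integral lets each Gaussian term propagate through the misaligned ABCD system in closed form, which is what makes the approximate analytical formula possible.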
Servin, Christian
2015-01-01
On various examples ranging from geosciences to environmental sciences, this book explains how to generate an adequate description of uncertainty, how to justify semiheuristic algorithms for processing uncertainty, and how to make these algorithms more computationally efficient. It explains in what sense the existing approach to uncertainty as a combination of random and systematic components is only an approximation, presents a more adequate three-component model with an additional periodic error component, and explains how uncertainty propagation techniques can be extended to this model. The book provides a justification for a practically efficient heuristic technique (based on fuzzy decision-making). It explains how the computational complexity of uncertainty processing can be reduced. The book also shows how to take into account that in real life, the information about uncertainty is often only partially known, and, on several practical examples, explains how to extract the missing information about uncer...
International Nuclear Information System (INIS)
Campolina, D. de A. M.; Lima, C.P.B.; Veloso, M.A.F.
2013-01-01
For all the physical components that comprise a nuclear system there is an uncertainty. Assessing the impact of uncertainties in the simulation of fissionable material systems is essential for a best-estimate calculation, which has been replacing conservative model calculations as the computational power increases. The propagation of uncertainty in a simulation using a Monte Carlo code by sampling the input parameters is recent because of the huge computational effort required. In this work a sample space of MCNPX calculations was used to propagate the uncertainty. The sample size was optimized using the Wilks formula for a 95th percentile and a two-sided statistical tolerance interval of 95%. Uncertainties in input parameters of the reactor considered included geometry dimensions and densities. The capacity of the sampling-based method for burnup calculations was demonstrated when the sample size is optimized and many parameter uncertainties are investigated together in the same input. In particular, it was shown that during burnup the variance obtained when all parameter uncertainties are considered together is equivalent to the sum of the variances obtained when the parameter uncertainties are sampled separately
Real-Time Optimal Flood Control Decision Making and Risk Propagation Under Multiple Uncertainties
Zhu, Feilin; Zhong, Ping-An; Sun, Yimeng; Yeh, William W.-G.
2017-12-01
Multiple uncertainties exist in the optimal flood control decision-making process, presenting risks for flood control decisions. This paper defines the main steps in optimal flood control decision making that constitute the Forecast-Optimization-Decision Making (FODM) chain. We propose a framework for supporting optimal flood control decision making under multiple uncertainties and evaluate risk propagation along the FODM chain from a holistic perspective. To deal with uncertainties, we employ stochastic models at each link of the FODM chain. We generate synthetic ensemble flood forecasts via the martingale model of forecast evolution. We then establish a multiobjective stochastic programming with recourse model for optimal flood control operation. The Pareto front under uncertainty is derived via the constraint method coupled with a two-step process. We propose a novel SMAA-TOPSIS model for stochastic multicriteria decision making. We then propose a risk assessment model, based on the risk of decision-making errors and the rank uncertainty degree, to quantify the risk propagation process along the FODM chain. We conduct numerical experiments to investigate the effects of flood forecast uncertainty on optimal flood control decision making and risk propagation. We apply the proposed methodology to a flood control system in the Daduhe River basin in China. The results indicate that the proposed method can provide valuable risk information in each link of the FODM chain and enable risk-informed decisions with higher reliability.
International Nuclear Information System (INIS)
Campolina, Daniel de Almeida Magalhães
2015-01-01
There is an uncertainty for all the components that comprise the model of a nuclear system. Assessing the impact of uncertainties in the simulation of fissionable material systems is essential for a realistic calculation, which has been replacing conservative model calculations as the computational power increases. The propagation of uncertainty in a simulation using a Monte Carlo code by sampling the input parameters is recent because of the huge computational effort required. By analyzing the uncertainty propagated to the effective neutron multiplication factor (k_eff), the effects of the sample size, the computational uncertainty and the efficiency of a random number generator in representing the distributions that characterize physical uncertainty in a light water reactor were investigated. A program entitled GBsample was implemented to enable the application of the random sampling method, which requires an automated process and robust statistical tools. The program was based on the black-box model, and the MCNPX code was used with parallel processing for the particle transport calculation. The uncertainties considered were taken from a benchmark experiment in which the effect on k_eff due to physical uncertainties is assessed through a conservative method. In this work the script GBsample was implemented to automate the sampling-based method, use multiprocessing and assure the necessary robustness. It was found possible to improve the efficiency of the random sampling method by selecting distributions obtained from a random number generator in order to obtain a better representation of uncertainty figures. After convergence of the method is achieved, in order to reduce the variance of the propagated uncertainty without increasing computational time, the best number of components to be sampled was determined. It was also observed that if the sampling method is used to calculate the effect on k_eff due to physical uncertainties reported by
Multi-Fidelity Uncertainty Propagation for Cardiovascular Modeling
Fleeter, Casey; Geraci, Gianluca; Schiavazzi, Daniele; Kahn, Andrew; Marsden, Alison
2017-11-01
Hemodynamic models are successfully employed in the diagnosis and treatment of cardiovascular disease with increasing frequency. However, their widespread adoption is hindered by our inability to account for uncertainty stemming from multiple sources, including boundary conditions, vessel material properties, and model geometry. In this study, we propose a stochastic framework which leverages three cardiovascular model fidelities: 3D, 1D and 0D models. 3D models are generated from patient-specific medical imaging (CT and MRI) of aortic and coronary anatomies using the SimVascular open-source platform, with fluid structure interaction simulations and Windkessel boundary conditions. 1D models consist of a simplified geometry automatically extracted from the 3D model, while 0D models are obtained from equivalent circuit representations of blood flow in deformable vessels. Multi-level and multi-fidelity estimators from Sandia's open-source DAKOTA toolkit are leveraged to reduce the variance in our estimated output quantities of interest while maintaining a reasonable computational cost. The performance of these estimators in terms of computational cost reductions is investigated for a variety of output quantities of interest, including global and local hemodynamic indicators. Sandia National Labs is a multimission laboratory managed and operated by NTESS, LLC, for the U.S. DOE under contract DE-NA0003525. Funding for this project provided by NIH-NIBIB R01 EB018302.
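The variance-reduction idea behind multi-level/multi-fidelity estimation can be sketched with a two-fidelity toy: correct the mean of many cheap low-fidelity runs with a few paired high/low-fidelity differences. The stand-in models below are invented for illustration and are unrelated to the hemodynamic solvers or DAKOTA's estimators:

```python
import numpy as np

rng = np.random.default_rng(7)

def model_hf(x):      # stand-in for the expensive, accurate model ("3D")
    return x + 0.1 * x**2

def model_lf(x):      # stand-in for the cheap, biased but correlated model
    return x

N_lf, n_hf = 100_000, 1_000      # many cheap runs, few expensive ones
x_cheap = rng.random(N_lf)       # uncertain input X ~ U(0, 1)
x_paired = rng.random(n_hf)

# Two-level (control-variate style) estimate of E[model_hf(X)]:
# cheap mean plus a bias correction from the paired differences.
estimate = (model_lf(x_cheap).mean()
            + (model_hf(x_paired) - model_lf(x_paired)).mean())
# Analytic reference for X ~ U(0,1): 1/2 + 0.1/3
```

Because the high/low difference has much smaller variance than the high-fidelity output itself, most of the sampling effort can be spent on the cheap model, which is the cost-reduction mechanism the abstract investigates.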
Understanding uncertainty propagation in life cycle assessments of waste management systems
DEFF Research Database (Denmark)
Bisinella, Valentina; Conradsen, Knut; Christensen, Thomas Højlund
2015-01-01
Uncertainty analysis in Life Cycle Assessments (LCAs) of waste management systems often proves obscure and complex, with key parameters rarely determined on a case-by-case basis. The paper shows an application of a simplified approach to uncertainty coupled with a Global Sensitivity Analysis (GSA......) perspective on three alternative waste management systems for Danish single-family household waste. The approach provides a fast and systematic method to select the most important parameters in the LCAs, understand their propagation and contribution to uncertainty....
Mangado, Nerea; Piella, Gemma; Noailly, Jérôme; Pons-Prats, Jordi; Ballester, Miguel Ángel González
2016-01-01
Computational modeling has become a powerful tool in biomedical engineering thanks to its potential to simulate coupled systems. However, real parameters are usually not accurately known, and variability is inherent in living organisms. To cope with this, probabilistic tools, statistical analysis and stochastic approaches have been used. This article aims to review the analysis of uncertainty and variability in the context of finite element modeling in biomedical engineering. Characterization techniques and propagation methods are presented, as well as examples of their applications in biomedical finite element simulations. Uncertainty propagation methods, both non-intrusive and intrusive, are described. Finally, pros and cons of the different approaches and their use in the scientific community are presented. This leads us to identify future directions for research and methodological development of uncertainty modeling in biomedical engineering.
International Nuclear Information System (INIS)
Mullor, R.; Sanchez, A.; Martorell, S.; Martinez-Alzamora, N.
2011-01-01
Safety-related systems performance optimization is classically based on quantifying the effects that testing and maintenance activities have on reliability and cost (R+C). However, R+C quantification is often incomplete in the sense that important uncertainties may not be considered. A considerable number of studies have been published in the last decade in the field of R+C-based optimization considering uncertainties. They have demonstrated that inclusion of uncertainties in the optimization gives the decision maker insight into how uncertain the R+C results are and how this uncertainty matters, as it can result in differences in the outcome of the decision-making process. Several methods of uncertainty propagation based on the theory of tolerance regions have been proposed in the literature, depending on the particular characteristics of the variables in the output and their relations. In this context, this paper focuses on the application of non-parametric and parametric methods to analyze uncertainty propagation, implemented on a multi-objective optimization problem where reliability and cost act as decision criteria and maintenance intervals act as decision variables. Finally, a comparison of the results of these applications and the conclusions obtained are presented.
Energy Technology Data Exchange (ETDEWEB)
Mullor, R. [Dpto. Estadistica e Investigacion Operativa, Universidad Alicante (Spain); Sanchez, A., E-mail: aisanche@eio.upv.e [Dpto. Estadistica e Investigacion Operativa Aplicadas y Calidad, Universidad Politecnica Valencia, Camino de Vera s/n 46022 (Spain); Martorell, S. [Dpto. Ingenieria Quimica y Nuclear, Universidad Politecnica Valencia (Spain); Martinez-Alzamora, N. [Dpto. Estadistica e Investigacion Operativa Aplicadas y Calidad, Universidad Politecnica Valencia, Camino de Vera s/n 46022 (Spain)
2011-06-15
Analytical Lie-algebraic solution of a 3D sound propagation problem in the ocean
Energy Technology Data Exchange (ETDEWEB)
Petrov, P.S., E-mail: petrov@poi.dvo.ru [Il' ichev Pacific Oceanological Institute, 43 Baltiyskaya str., Vladivostok, 690041 (Russian Federation); Prants, S.V., E-mail: prants@poi.dvo.ru [Il' ichev Pacific Oceanological Institute, 43 Baltiyskaya str., Vladivostok, 690041 (Russian Federation); Petrova, T.N., E-mail: petrova.tn@dvfu.ru [Far Eastern Federal University, 8 Sukhanova str., 690950, Vladivostok (Russian Federation)
2017-06-21
The problem of sound propagation in a shallow sea with variable bottom slope is considered. The sound pressure field produced by a time-harmonic point source in such an inhomogeneous 3D waveguide is expressed in the form of a modal expansion. The expansion coefficients are computed using the adiabatic mode parabolic equation theory. The mode parabolic equations are solved explicitly, and the analytical expressions for the modal coefficients are obtained using a Lie-algebraic technique.
Highlights:
• A group-theoretical approach is applied to a problem of sound propagation in a shallow sea with variable bottom slope.
• An analytical solution of this problem is obtained in the form of a modal expansion with analytical expressions for the coefficients.
• Our result is the only analytical solution of the 3D sound propagation problem with no translational invariance.
• This solution can be used for the validation of numerical propagation models.
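The modal expansion referred to above has the standard adiabatic normal-mode form (a sketch in common underwater-acoustics notation; the symbols follow general usage, not necessarily the paper's own):

```latex
p(x, y, z) \;=\; \sum_{j} A_j(x, y)\,\phi_j(z;\, x, y),
```

where the \phi_j are the local vertical modes of the waveguide at each horizontal position and each amplitude A_j(x, y) satisfies a mode parabolic equation in the horizontal variables. The Lie-algebraic step in the paper is what yields closed-form expressions for these amplitudes instead of a numerical marching solution.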
International Nuclear Information System (INIS)
Sabouri, Pouya
2013-01-01
This thesis presents a comprehensive study of sensitivity/uncertainty analysis for reactor performance parameters (e.g. the k-effective) to the base nuclear data from which they are computed. The analysis starts at the fundamental step, the Evaluated Nuclear Data File and the uncertainties inherently associated with the data they contain, available in the form of variance/covariance matrices. We show that when a methodical and consistent computation of sensitivity is performed, conventional deterministic formalisms can be sufficient to propagate nuclear data uncertainties with the level of accuracy obtained by the most advanced tools, such as state-of-the-art Monte Carlo codes. By applying our developed methodology to three exercises proposed by the OECD (Uncertainty Analysis for Criticality Safety Assessment Benchmarks), we provide insights of the underlying physical phenomena associated with the used formalisms. (author)
Propagation of statistical and nuclear data uncertainties in Monte Carlo burn-up calculations
Energy Technology Data Exchange (ETDEWEB)
Garcia-Herranz, Nuria [Departamento de Ingenieria Nuclear, Universidad Politecnica de Madrid, UPM (Spain)], E-mail: nuria@din.upm.es; Cabellos, Oscar [Departamento de Ingenieria Nuclear, Universidad Politecnica de Madrid, UPM (Spain); Sanz, Javier [Departamento de Ingenieria Energetica, Universidad Nacional de Educacion a Distancia, UNED (Spain); Juan, Jesus [Laboratorio de Estadistica, Universidad Politecnica de Madrid, UPM (Spain); Kuijper, Jim C. [NRG - Fuels, Actinides and Isotopes Group, Petten (Netherlands)
2008-04-15
Two methodologies to propagate the uncertainties on the nuclide inventory in combined Monte Carlo-spectrum and burn-up calculations are presented, based on sensitivity/uncertainty and random sampling techniques (uncertainty Monte Carlo method). Both enable the assessment of the impact of uncertainties in the nuclear data as well as uncertainties due to the statistical nature of the Monte Carlo neutron transport calculation. The methodologies are implemented in our MCNP-ACAB system, which combines the neutron transport code MCNP-4C and the inventory code ACAB. A high burn-up benchmark problem is used to test the MCNP-ACAB performance in inventory predictions, with no uncertainties. A good agreement is found with the results of other participants. This benchmark problem is also used to assess the impact of nuclear data uncertainties and statistical flux errors in high burn-up applications. A detailed calculation is performed to evaluate the effect of cross-section uncertainties in the inventory prediction, taking into account the temporal evolution of the neutron flux level and spectrum. Very large uncertainties are found at the unusually high burn-up of this exercise (800 MWd/kgHM). To compare the impact of the statistical errors in the calculated flux with that of the cross-section uncertainties, a simplified problem is considered, taking a constant neutron flux level and spectrum. It is shown that, provided that the flux statistical deviations in the Monte Carlo transport calculation do not exceed a given value, the effect of the flux errors on the calculated isotopic inventory is negligible (even at very high burn-up) compared to the effect of the large cross-section uncertainties available at present in the data files.
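The random-sampling (uncertainty Monte Carlo) side of this approach can be sketched on a deliberately tiny depletion problem: perturb the cross-section once per realization, deplete, and read off the inventory spread. A toy one-nuclide Python sketch (all numbers are invented for illustration; the real system couples MCNP-4C and ACAB over full isotopic chains and covariance data):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy depletion of a single nuclide: dN/dt = -sigma * phi * N
sigma_nom, rel_unc = 2.0e-24, 0.10   # cm^2, 10% 1-sigma cross-section unc.
phi, t = 3.0e14, 3.0e7               # flux (n/cm^2/s), irradiation time (s)

# Sample the uncertain cross-section, one value per realization
samples = sigma_nom * (1.0 + rel_unc * rng.standard_normal(500))

# Analytic depletion per realization: remaining fraction N(t)/N(0)
n_ratio = np.exp(-samples * phi * t)

mean_ratio, sd_ratio = n_ratio.mean(), n_ratio.std(ddof=1)
```

The spread of `n_ratio` is the nuclear-data contribution to the inventory uncertainty; in the full MCNP-ACAB scheme the statistical flux error of the transport step adds a second, separately controllable contribution.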
Propagation of statistical and nuclear data uncertainties in Monte Carlo burn-up calculations
International Nuclear Information System (INIS)
Garcia-Herranz, Nuria; Cabellos, Oscar; Sanz, Javier; Juan, Jesus; Kuijper, Jim C.
2008-01-01
Analytical solution for the correlator with Gribov propagators
Czech Academy of Sciences Publication Activity Database
Šauli, Vladimír
2016-01-01
Vol. 14, No. 1 (2016), pp. 570-578. E-ISSN 2391-5471. Institutional support: RVO:61389005. Keywords: confinement * Gribov propagator * Quantum Chromodynamics * dispersion relations * quantum field theory * Green's functions. Subject RIV: BE - Theoretical Physics. Impact factor: 0.745, year: 2016
New strategies for quantifying and propagating nuclear data uncertainty in CUSA
International Nuclear Information System (INIS)
Zhao, Qiang; Zhang, Chunyan; Hao, Chen; Li, Fu; Wang, Dongyong; Yu, Yan
2016-01-01
Highlights: • An efficient sampling method based on LHS combined with a Cholesky decomposition conversion is proposed. • A code for generating multi-group covariance matrices has been developed. • The uncertainty and sensitivity results of CUSA agree well with TSUNAMI-1D. - Abstract: The uncertainties of nuclear cross sections propagate to the key parameters of a nuclear reactor core through the transport calculation. The statistical sampling method can be used to quantify and propagate nuclear data uncertainty in nuclear reactor physics calculations. In order to use the statistical sampling method, two key technical problems, the method of generating multi-group covariance matrices and the sampling method, should be addressed reasonably and efficiently. In this paper, a method of transforming a nuclear cross-section covariance matrix in multi-group form into users' group structures based on the flat-flux approximation has been studied in depth. Most notably, an efficient sampling method has been proposed, based on Latin Hypercube Sampling (LHS) combined with a Cholesky decomposition conversion. Based on these methods, two modules named T-COCCO and GUIDE have been developed and successfully added to the code for uncertainty and sensitivity analysis (CUSA). The new modules have been verified separately. Numerical results for the TMI-1 pin-cell case are presented and compared to TSUNAMI-1D. The comparison of the results further supports that the methods and the computational tool developed in this work can be used to conduct sensitivity and uncertainty analysis for nuclear cross sections.
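The sampling step described above (LHS combined with a Cholesky decomposition conversion) can be sketched as follows; the 2x2 covariance matrix and mean values are illustrative, not CUSA data:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(0)

cov = np.array([[0.04, 0.01],
                [0.01, 0.09]])   # illustrative covariance of two cross sections
mean = np.array([1.0, 2.0])      # illustrative mean values

n, d = 10000, 2

# Latin hypercube in (0,1): one stratified draw per equal-probability bin,
# with an independent random permutation of bins in each dimension
u = np.empty((n, d))
for j in range(d):
    u[:, j] = (rng.permutation(n) + rng.random(n)) / n

# map to independent standard normals through the normal quantile function
z = np.vectorize(NormalDist().inv_cdf)(u)

# Cholesky conversion imposes the target covariance on the samples
L = np.linalg.cholesky(cov)
x = mean + z @ L.T

print(np.cov(x.T))  # should closely approximate cov
```

The stratification of LHS makes the sample mean and covariance converge to the targets with far fewer samples than plain random sampling would need.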
UNCERTAINTY PROPAGATION ANALYSIS FOR YONGGWANG NUCLEAR UNIT 4 BY MCCARD/MASTER CORE ANALYSIS SYSTEM
Directory of Open Access Journals (Sweden)
Ho Jin Park
2014-06-01
This paper concerns estimating uncertainties of the core neutronics design parameters of power reactors by direct sampling method (DSM) calculations based on the two-step McCARD/MASTER design system, in which McCARD is used to generate the fuel assembly (FA) homogenized few-group constants (FGCs) while MASTER is used to conduct the core neutronics design computation. It presents an extended application of the uncertainty propagation analysis method originally designed for uncertainty quantification of the FA FGCs, as a way to produce the covariances between the FGCs of any pair of FAs comprising the core, or the covariance matrix of the FA FGCs, required for random sampling of the FA FGCs input sets into direct sampling core calculations by MASTER. For illustrative purposes, the uncertainties of core design parameters such as the effective multiplication factor (keff), normalized FA power densities, and power peaking factors for the beginning-of-life (BOL) core of Yonggwang nuclear unit 4 (YGN4) at hot zero power and all rods out are estimated by the McCARD/MASTER-based DSM computations. The results are compared with those from the uncertainty propagation analysis method based on the McCARD-predicted sensitivity coefficients of nuclear design parameters and the cross-section covariance data.
Assessment of sampling and analytical uncertainty of trace element contents in arable field soils.
Buczko, Uwe; Kuchenbuch, Rolf O; Ubelhör, Walter; Nätscher, Ludwig
2012-07-01
Assessment of trace element contents in soils is required in Germany (and other countries) before sewage sludge application on arable soils. The reliability of measured element contents is affected by measurement uncertainty, which consists of components due to (1) sampling, (2) laboratory repeatability (intra-lab) and (3) reproducibility (between-lab). A complete characterization of average trace element contents in field soils should encompass the uncertainty of all these components. The objectives of this study were to elucidate the magnitude and relative proportions of uncertainty components for the metals As, B, Cd, Co, Cr, Mo, Ni, Pb, Tl and Zn in three arable fields of different field-scale heterogeneity, based on a collaborative trial (CT) (standardized procedure) and two sampling proficiency tests (PT) (individual sampling procedure). To obtain reference values and estimates of field-scale heterogeneity, a detailed reference sampling was conducted. Components of uncertainty (sampling person, sampling repetition, laboratory) were estimated by variance component analysis, whereas reproducibility uncertainty was estimated using results from numerous laboratory proficiency tests. Sampling uncertainty in general increased with field-scale heterogeneity; however, total uncertainty was mostly dominated by (total) laboratory uncertainty. Reproducibility analytical uncertainty was on average by a factor of about 3 higher than repeatability uncertainty. Therefore, analysis within one single laboratory and, for heterogeneous fields, a reduction of sampling uncertainty (for instance by larger numbers of sample increments and/or a denser coverage of the field area) would be most effective to reduce total uncertainty. On the other hand, when only intra-laboratory analytical uncertainty was considered, total sampling uncertainty on average prevailed over analytical uncertainty by a factor of 2. Both sampling and laboratory repeatability uncertainty were highly variable
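The combination of uncertainty components discussed above follows the usual root-sum-of-squares of independent variance components; a minimal sketch with made-up relative standard uncertainties (chosen so that reproducibility is about three times repeatability, as observed in the study):

```python
import math

# Made-up relative standard uncertainties for the three components
u_sampling = 0.10   # field sampling / heterogeneity component
u_repeat = 0.05     # intra-laboratory (repeatability) component
u_reprod = 0.15     # between-laboratory (reproducibility) component

# independent components combine in quadrature (root-sum-of-squares)
u_lab = math.sqrt(u_repeat**2 + u_reprod**2)
u_total = math.sqrt(u_sampling**2 + u_lab**2)

print(f"lab: {u_lab:.3f}  total: {u_total:.3f}")
```

With these illustrative numbers the total is dominated by the laboratory term; restricting analysis to a single laboratory (dropping u_reprod) would instead leave sampling as the dominant component, consistent with the study's conclusion.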
International Nuclear Information System (INIS)
Amendola, A.; Astolfi, M.; Lisanti, B.
1983-01-01
The report describes how to use the codes: MUP (Monte Carlo Uncertainty Propagation) for uncertainty analysis by Monte Carlo simulation, including correlation analysis, extreme value identification and study of selected ranges of the variable space; CEC-DES (Central Composite Design) for building experimental matrices according to the requirements of Central Composite and Factorial Experimental Designs; and STRADE (Stratified Random Design) for experimental designs based on Latin Hypercube Sampling techniques. Application fields of the codes are probabilistic risk assessment, experimental design, sensitivity analysis and system identification problems
Non-parametric order statistics method applied to uncertainty propagation in fuel rod calculations
International Nuclear Information System (INIS)
Arimescu, V.E.; Heins, L.
2001-01-01
Advances in modeling fuel rod behavior and the accumulation of adequate experimental data have made possible the introduction of quantitative methods to estimate the uncertainty of predictions made with best-estimate fuel rod codes. The uncertainty range of the input variables is characterized by a truncated distribution, typically normal, lognormal, or uniform. While the distribution for fabrication parameters is defined to cover the design or fabrication tolerances, the distribution of modeling parameters is inferred from the experimental database consisting of separate-effects tests and global tests. The final step of the methodology uses a Monte Carlo type of random sampling of all relevant input variables and performs best-estimate code calculations to propagate these uncertainties, in order to evaluate the uncertainty range of outputs of interest for design analysis, such as internal rod pressure and fuel centerline temperature. The statistical method underlying this Monte Carlo sampling is non-parametric order statistics, which is perfectly suited to evaluate quantiles of populations with unknown distribution. The application of this method is straightforward in the case of one single fuel rod, when a 95/95 statement is applicable: 'with a probability of 95% and a confidence level of 95%, the values of the output of interest are below a certain value'. The 0.95-quantile is therefore estimated for the distribution of all possible values of one fuel rod with a statistical confidence of 95%. On the other hand, a more elaborate procedure is required if all the fuel rods in the core are being analyzed. In this case, the aim is to evaluate the following global statement: with a 95% confidence level, all the fuel rods in the core, except only a few, are expected not to exceed a certain value. In both cases, the thresholds determined by the analysis should be below the safety acceptable design limit. An indirect
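The 95/95 statement above rests on Wilks' non-parametric tolerance-limit formula: if the largest of n random code runs is taken as the bound, the confidence that it covers the q-quantile of the output is 1 - q^n. A short sketch of the required sample size:

```python
# Wilks' first-order formula: with n random code runs and the sample
# maximum taken as the tolerance bound, the confidence that the bound
# covers the q-quantile of the output distribution is 1 - q**n.
def wilks_n(quantile=0.95, confidence=0.95):
    """Smallest n with 1 - quantile**n >= confidence."""
    n = 1
    while 1.0 - quantile**n < confidence:
        n += 1
    return n

print(wilks_n())  # 59 runs suffice for a 95%/95% one-sided bound
```

This is why best-estimate-plus-uncertainty analyses of a single rod typically quote 59 (or 93/124 for higher-order bounds) code runs, independent of the unknown output distribution.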
Propagation of registration uncertainty during multi-fraction cervical cancer brachytherapy
Amir-Khalili, A.; Hamarneh, G.; Zakariaee, R.; Spadinger, I.; Abugharbieh, R.
2017-10-01
Multi-fraction cervical cancer brachytherapy is a form of image-guided radiotherapy that heavily relies on 3D imaging during treatment planning, delivery, and quality control. In this context, deformable image registration can increase the accuracy of dosimetric evaluations, provided that one can account for the uncertainties associated with the registration process. To enable such capability, we propose a mathematical framework that first estimates the registration uncertainty and subsequently propagates the effects of the computed uncertainties from the registration stage through to the visualizations, organ segmentations, and dosimetric evaluations. To ensure the practicality of our proposed framework in real world image-guided radiotherapy contexts, we implemented our technique via a computationally efficient and generalizable algorithm that is compatible with existing deformable image registration software. In our clinical context of fractionated cervical cancer brachytherapy, we perform a retrospective analysis on 37 patients and present evidence that our proposed methodology for computing and propagating registration uncertainties may be beneficial during therapy planning and quality control. Specifically, we quantify and visualize the influence of registration uncertainty on dosimetric analysis during the computation of the total accumulated radiation dose on the bladder wall. We further show how registration uncertainty may be leveraged into enhanced visualizations that depict the quality of the registration and highlight potential deviations from the treatment plan prior to the delivery of radiation treatment. Finally, we show that we can improve the transfer of delineated volumetric organ segmentation labels from one fraction to the next by encoding the computed registration uncertainties into the segmentation labels.
Analytical Model for Fictitious Crack Propagation in Concrete Beams
DEFF Research Database (Denmark)
Ulfkjær, J. P.; Krenk, S.; Brincker, Rune
An analytical model for load-displacement curves of unreinforced notched and un-notched concrete beams is presented. The load-displacement curve is obtained by combining two simple models. The fracture is modelled by a fictitious crack in an elastic layer around the mid-section of the beam. Outside...... the elastic layer the deformations are modelled by the Timoshenko beam theory. The state of stress in the elastic layer is assumed to depend bi-linearly on local elongation corresponding to a linear softening relation for the fictitious crack. For different beam sizes results from the analytical model...... is compared with results from a more accurate model based on numerical methods. The analytical model is shown to be in good agreement with the numerical results if the thickness of the elastic layer is taken as half the beam depth. Several general results are obtained. It is shown that the point on the load
Analytical Model for Fictitious Crack Propagation in Concrete Beams
DEFF Research Database (Denmark)
Ulfkjær, J. P.; Krenk, Steen; Brincker, Rune
1995-01-01
An analytical model for load-displacement curves of concrete beams is presented. The load-displacement curve is obtained by combining two simple models. The fracture is modeled by a fictitious crack in an elastic layer around the midsection of the beam. Outside the elastic layer the deformations...... are modeled by beam theory. The state of stress in the elastic layer is assumed to depend bilinearly on local elongation corresponding to a linear softening relation for the fictitious crack. Results from the analytical model are compared with results from a more detailed model based on numerical methods...... for different beam sizes. The analytical model is shown to be in agreement with the numerical results if the thickness of the elastic layer is taken as half the beam depth. It is shown that the point on the load-displacement curve where the fictitious crack starts to develop and the point where the real crack...
GCR Environmental Models III: GCR Model Validation and Propagated Uncertainties in Effective Dose
Slaba, Tony C.; Xu, Xiaojing; Blattnig, Steve R.; Norman, Ryan B.
2014-01-01
This is the last of three papers focused on quantifying the uncertainty associated with galactic cosmic ray (GCR) models used for space radiation shielding applications. In the first paper, it was found that GCR ions with Z>2 and boundary energy below 500 MeV/nucleon induce less than 5% of the total effective dose behind shielding. This is an important finding since GCR model development and validation have been heavily biased toward Advanced Composition Explorer/Cosmic Ray Isotope Spectrometer measurements below 500 MeV/nucleon. Weights were also developed that quantify the relative contribution of defined GCR energy and charge groups to effective dose behind shielding. In the second paper, it was shown that these weights could be used to efficiently propagate GCR model uncertainties into effective dose behind shielding. In this work, uncertainties are quantified for a few commonly used GCR models. A validation metric is developed that accounts for measurement uncertainty, and the metric is coupled to the fast uncertainty propagation method. For this work, the Badhwar-O'Neill (BON) 2010 and 2011 models and the Matthia GCR model are compared to an extensive measurement database. It is shown that BON2011 systematically overestimates heavy ion fluxes in the range 0.5-4 GeV/nucleon. The BON2010 and BON2011 models also show moderate and large errors in reproducing past solar activity near the 2000 solar maximum and 2010 solar minimum. It is found that all three models induce relative errors in effective dose in the interval [-20%, 20%] at a 68% confidence level. The BON2010 and Matthia models are found to have similar overall uncertainty estimates and are preferred for space radiation shielding applications.
International Nuclear Information System (INIS)
Benoit, J.-C.
2012-01-01
This PhD study is in the field of nuclear energy, the back end of the nuclear fuel cycle, and uncertainty calculations. The CEA must design the prototype ASTRID, a sodium-cooled fast reactor (SFR) and one of the concepts selected by the Generation IV forum, for which the calculation of the value and the uncertainty of the decay heat has a significant impact. In this study, a code for the propagation of nuclear data uncertainties on the decay heat in SFRs is developed. The process took place in three stages. The first step limited the number of parameters involved in the calculation of the decay heat. For this, a decay heat experiment on the PHENIX reactor (PUIREX 2008) was studied to validate experimentally the DARWIN package for SFRs and to quantify the source terms of the decay heat. The second step aimed to develop a code for the propagation of uncertainties: CyRUS (Cycle Reactor Uncertainty and Sensitivity). A deterministic propagation method was chosen because the calculations are fast and reliable. Assumptions of linearity and normality have been validated theoretically. The code has also been successfully compared with a stochastic code on the example of the thermal burst fission curve of 235U. The last part was an application of the code to several experiments: decay heat of a reactor, isotopic composition of a fuel pin, and the burst fission curve of 235U. The code demonstrated the possibility of feedback on the nuclear data that dominate the uncertainty of this problem. Two main results were highlighted. First, the simplifying assumptions of deterministic codes are compatible with a precise calculation of the uncertainty of the decay heat. Second, the developed method is intrusive and allows feedback on nuclear data from experiments on the back end of the nuclear fuel cycle. In particular, this study showed how important it is to measure precisely independent fission yields along with their covariance matrices in order to improve the accuracy of the calculation of
International Nuclear Information System (INIS)
Morales Prieto, M.; Ortega Saiz, P.
2011-01-01
Analysis of the analytical uncertainties of the simulation methodology used to obtain the final isotopic inventory of spent fuel; the ARIANE experiment explores the burn-up simulation part.
International Nuclear Information System (INIS)
Heo, Jaeseok; Kim, Kyung Doo
2015-01-01
Highlights: • We developed an interface between an engineering simulation code and statistical analysis software. • Multiple packages of sensitivity analysis, uncertainty quantification, and parameter estimation algorithms are implemented in the framework. • Parallel computing algorithms are also implemented in the framework to solve multiple computational problems simultaneously. - Abstract: This paper introduces a statistical data analysis toolkit, PAPIRUS, designed to perform model calibration, uncertainty propagation, Chi-square linearity testing, and sensitivity analysis for both linear and nonlinear problems. PAPIRUS was developed by implementing multiple packages of methodologies and building an interface between an engineering simulation code and the statistical analysis algorithms. A parallel computing framework is implemented in PAPIRUS with multiple computing resources and proper communications between the server and the clients of each processor. It was shown that even though a large amount of data is considered for the engineering calculation, the distributions of the model parameters and the calculation results can be quantified accurately with significant reductions in computational effort. A general description of PAPIRUS with a graphical user interface is presented in Section 2. Sections 2.1–2.5 present the methodologies of data assimilation, uncertainty propagation, Chi-square linearity testing, and sensitivity analysis implemented in the toolkit, with some results obtained by each module of the software. Parallel computing algorithms adopted in the framework to solve multiple computational problems simultaneously are also summarized in the paper.
Parametric uncertainty analysis of pulse wave propagation in a model of a human arterial network
Xiu, Dongbin; Sherwin, Spencer J.
2007-10-01
Reduced models of human arterial networks are an efficient approach to analyzing quantitative macroscopic features of human arterial flows. The justification for such models typically arises from the significantly long wavelength associated with the system in comparison to the lengths of arteries in the networks. Although these types of models have been employed extensively and many issues associated with their implementation have been widely researched, the issue of data uncertainty has received comparatively little attention. As in many biological systems, a large amount of uncertainty exists in the values of the parameters associated with the models. Clearly, reliable assessment of the system behaviour cannot be made unless the effect of such data uncertainty is quantified. In this paper we present a study of parametric data uncertainty in reduced modelling of human arterial networks, which is governed by a hyperbolic system. The uncertain parameters are modelled as random variables and the governing equations for the arterial network therefore become stochastic. This type of stochastic hyperbolic system has not previously been systematically studied, due to the difficulties introduced by the uncertainty, such as a potential change in the mathematical character of the system and the imposition of boundary conditions. We demonstrate how the application of a high-order stochastic collocation method based on the generalized polynomial chaos expansion, combined with a discontinuous Galerkin spectral/hp element discretization in physical space, can successfully simulate this type of hyperbolic system subject to uncertain inputs with bounds. Building upon a numerical study of propagation of uncertainty and sensitivity in a simplified model with a single bifurcation, a systematic parameter sensitivity analysis is conducted on the wave dynamics in a multiple bifurcating human arterial network. Using the physical understanding of the dynamics of pulse waves in these types of
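A non-intrusive stochastic collocation of the kind described above can be sketched on a scalar toy problem (not the arterial network model): the uncertain parameter enters a deterministic solve at Gauss-Hermite quadrature nodes, and the weighted outputs give the output statistics.

```python
import numpy as np

# Toy stochastic collocation: u' = -k*u with uncertain k ~ N(mu, sigma^2);
# the 'deterministic solver' here is the exact solution u(t) = exp(-k*t).
# All parameter values are illustrative.
mu, sigma, t = 1.0, 0.1, 1.0

# probabilists' Gauss-Hermite rule (weight exp(-x^2/2)); normalise the
# weights so they sum to 1 and act as probabilities
nodes, weights = np.polynomial.hermite_e.hermegauss(10)
weights = weights / np.sqrt(2.0 * np.pi)

k = mu + sigma * nodes              # collocation points in parameter space
u_end = np.exp(-k * t)              # one deterministic solve per point

mean = np.sum(weights * u_end)
var = np.sum(weights * u_end**2) - mean**2

# exact lognormal moments of exp(-k*t) for comparison
mean_exact = np.exp(-mu * t + 0.5 * (sigma * t) ** 2)
var_exact = (np.exp((sigma * t) ** 2) - 1.0) * np.exp(-2 * mu * t + (sigma * t) ** 2)
print(mean - mean_exact, var - var_exact)   # both tiny
```

Ten deterministic solves recover the output mean and variance to near machine precision here, which is the appeal of collocation over brute-force Monte Carlo when each solve is expensive.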
Fuzzy probability based fault tree analysis to propagate and quantify epistemic uncertainty
International Nuclear Information System (INIS)
Purba, Julwan Hendry; Sony Tjahyani, D.T.; Ekariansyah, Andi Sofrany; Tjahjono, Hendro
2015-01-01
Highlights: • Fuzzy probability based fault tree analysis is used to evaluate epistemic uncertainty in fuzzy fault tree analysis. • Fuzzy probabilities represent the likelihood of occurrence of all events in a fault tree. • A fuzzy multiplication rule quantifies the epistemic uncertainty of minimal cut sets. • A fuzzy complement rule estimates the epistemic uncertainty of the top event. • The proposed FPFTA has successfully evaluated the U.S. Combustion Engineering RPS. - Abstract: A number of fuzzy fault tree analysis approaches, which integrate fuzzy concepts into the quantitative phase of conventional fault tree analysis, have been proposed to study the reliability of engineering systems. These new approaches apply expert judgments to overcome the limitation of conventional fault tree analysis when basic events do not have probability distributions. Since expert judgments may come with epistemic uncertainty, it is important to quantify the overall uncertainties of the fuzzy fault tree analysis. Monte Carlo simulation is commonly used to quantify the overall uncertainties of conventional fault tree analysis. However, since Monte Carlo simulation is based on probability distributions, this technique is not appropriate for fuzzy fault tree analysis, which is based on fuzzy probabilities. The objective of this study is to develop a fuzzy probability based fault tree analysis to overcome this limitation. To demonstrate the applicability of the proposed approach, a case study is performed and its results are compared to those obtained by a conventional fault tree analysis. The results confirm that the proposed fuzzy probability based fault tree analysis is feasible for propagating and quantifying epistemic uncertainties in fault tree analysis.
Study of Monte Carlo approach to experimental uncertainty propagation with MSTW 2008 PDFs
Watt, G.
2012-01-01
We investigate the Monte Carlo approach to the propagation of experimental uncertainties within the context of the established 'MSTW 2008' global analysis of parton distribution functions (PDFs) of the proton at next-to-leading order in the strong coupling. We show that the Monte Carlo approach using replicas of the original data gives PDF uncertainties in good agreement with the usual Hessian approach using the standard Delta(chi^2) = 1 criterion. We then explore potential parameterisation bias by increasing the number of free parameters, concluding that any parameterisation bias is likely to be small, with the exception of the valence-quark distributions at low momentum fractions x. We motivate the need for a larger tolerance, Delta(chi^2) > 1, by making fits to restricted data sets and idealised consistent or inconsistent pseudodata. Instead of using data replicas, we alternatively produce PDF sets randomly distributed according to the covariance matrix of fit parameters including appropriate tolerance values,...
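The replica idea above can be illustrated on a toy linear fit, where the spread of fits to Monte Carlo replicas of the data should reproduce the analytic (Hessian-like) least-squares uncertainty; all numbers are illustrative, not MSTW data:

```python
import numpy as np

# Toy replica study: fit y = a + b*x to pseudo-data, then compare the
# analytic least-squares slope uncertainty with the spread of slopes
# obtained from Monte Carlo replicas of the data.
rng = np.random.default_rng(1)

x = np.linspace(0.0, 1.0, 20)
sigma = 0.1                                        # assumed data uncertainty
y = 2.0 + 3.0 * x + rng.normal(0.0, sigma, x.size)

A = np.vstack([np.ones_like(x), x]).T
cov_analytic = sigma**2 * np.linalg.inv(A.T @ A)   # standard LSQ covariance

slopes = []
for _ in range(5000):
    y_rep = y + rng.normal(0.0, sigma, x.size)     # one data replica
    coef, *_ = np.linalg.lstsq(A, y_rep, rcond=None)
    slopes.append(coef[1])

print(np.std(slopes), np.sqrt(cov_analytic[1, 1]))  # should be close
```

For a linear model the two estimates agree by construction; the replica approach earns its keep when the model is non-linear in its parameters or when the Delta(chi^2) = 1 criterion is replaced by a larger tolerance.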
Ruggeri, Bernardo
2009-10-01
The present work aims to obtain a methodology for scoring dangerous chemical pollutants in relation to human-risk exposure scenarios and to evaluate the uncertainty of the scoring procedure. For chronic human risk evaluation, the problem of characterizing the most dangerous situation is posed. In this paper a ranking procedure was assessed in order to score eight pollutants through a "scoring model" approach. The scoring system was organized in matrix form in order to highlight the strong connection between the properties of the substances and the exposure scenarios. Two different modelling approaches were considered as cause-effect relationships for risk evaluation: Risk Based Corrective Action (RBCA) and a "mobility and degradation matrix". The first takes into account the exposure pathways (soil, water and air) and the exposure routes (inhalation, ingestion and dermal contact), while the second considers the capacity of the chemicals to move through the environment and the rate of degradation associated with chemical-biological processes as a measure of persistence. A specific score for each chemical, along with its uncertainty, was evaluated. The uncertainty of the scoring procedure was evaluated using the law of propagation of uncertainty; it was used to estimate the global uncertainty related to each exposure pathway for the eight substances for both models. The scoring results, as well as their uncertainties, show that the ordering of chemicals is strongly dependent on the model used and on the available data. The procedure is simple and easy to use, and its implementation allows users to compare many different compounds.
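The law of propagation of uncertainty invoked above is, to first order for independent inputs, u_y^2 = sum_i (df/dx_i)^2 u_i^2. A generic sketch with central-difference sensitivities (the two-factor "score" function and its input values are hypothetical, not the paper's model):

```python
import math

# First-order (GUM-style) law of propagation of uncertainty for
# independent inputs, with sensitivities by central finite differences.
def propagate(f, x, u):
    """Return f(x) and its standard uncertainty for independent inputs."""
    y = f(x)
    var = 0.0
    for i, (xi, ui) in enumerate(zip(x, u)):
        h = 1e-6 * max(abs(xi), 1.0)
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        dfdx = (f(xp) - f(xm)) / (2.0 * h)   # sensitivity df/dx_i
        var += (dfdx * ui) ** 2
    return y, math.sqrt(var)

# hypothetical example: score = mobility * persistence
score, u_score = propagate(lambda v: v[0] * v[1], [2.0, 5.0], [0.2, 0.5])
print(score, u_score)
```

For the product above the result reduces to u = sqrt((5*0.2)^2 + (2*0.5)^2) = sqrt(2), and the same routine applies unchanged to any differentiable scoring function.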
Nikolopoulos, Efthymios I.; Polcher, Jan; Anagnostou, Emmanouil N.; Eisner, Stephanie; Fink, Gabriel; Kallos, George
2016-04-01
Precipitation is arguably one of the most important forcing variables that drive terrestrial water cycle processes. Precipitation exhibits significant variability in space and time, is associated with different water phases (liquid or solid) and depends on several other factors (aerosols, orography, etc.), which make estimation and modeling of this process a particularly challenging task. As such, precipitation information from different sensors/products is associated with uncertainty. Propagation of this uncertainty into hydrologic simulations can have a considerable impact on the accuracy of the simulated hydrologic variables. Therefore, to make hydrologic predictions more useful, it is important to investigate and assess the impact of precipitation uncertainty in hydrologic simulations in order to be able to quantify it and identify ways to minimize it. In this work we investigate the impact of precipitation uncertainty in hydrologic simulations using land surface models (e.g. ORCHIDEE) and global hydrologic models (e.g. WaterGAP3) for the simulation of several hydrologic variables (soil moisture, ET, runoff) over the Iberian Peninsula. Uncertainty in precipitation is assessed by utilizing various sources of precipitation input that include one reference precipitation dataset (SAFRAN), three widely used satellite precipitation products (TRMM 3B42v7, CMORPH, PERSIANN) and a state-of-the-art reanalysis product (WFDEI) based on the ECMWF ERA-Interim reanalysis. Comparative analysis uses the SAFRAN simulations as reference and is carried out at different spatial (0.5 deg or regional average) and temporal (daily or seasonal) scales. Furthermore, as an independent verification, simulated discharge is compared against available discharge observations for selected major rivers of the Iberian region. Results allow us to draw conclusions regarding the impact of precipitation uncertainty with respect to i) hydrologic variable of interest, ii
Directory of Open Access Journals (Sweden)
Haileyesus B. Endeshaw
2017-11-01
Failure prediction of wind turbine gearboxes (WTGs) is especially important since the maintenance of these components is not only costly but also causes the longest downtime. One of the most common causes of premature failure of WTGs is the fatigue fracture of gear teeth due to the fluctuating and cyclic torque, resulting from stochastic wind loading, transmitted to the gearbox. Moreover, the fluctuation of the torque, as well as the inherent uncertainties of the material properties, results in uncertain life predictions for WTGs. It is therefore essential to quantify these uncertainties in the life estimation of gears. In this paper, a framework constituted by a dynamic model of a one-stage gearbox, a finite element method, and a degradation model for the estimation of fatigue crack propagation in gears is presented. Torque time-history data of a wind turbine rotor were scaled and used to simulate the stochastic characteristics of the loading, and uncertainties in the material constants of the degradation model were also quantified. It was demonstrated that uncertainty quantification of the load and material constants provides a reasonable estimation of the distribution of the crack length in the gear tooth at any time step.
Terranova, Nicholas; Serot, Olivier; Archier, Pascal; De Saint Jean, Cyrille; Sumini, Marco
2017-09-01
Fission product yields (FY) are fundamental nuclear data for several applications, including decay heat, shielding, dosimetry and burn-up calculations. To be safe and sustainable, modern and future nuclear systems require accurate knowledge of reactor parameters, with reduced margins of uncertainty. Present nuclear data libraries for FY do not provide consistent and complete uncertainty information, which is limited, in many cases, to variances only. In the present work we propose a methodology to evaluate covariance matrices for thermal and fast neutron induced fission yields. The semi-empirical models adopted to evaluate the JEFF-3.1.1 FY library have been used in the Generalized Least Square Method available in CONRAD (COde for Nuclear Reaction Analysis and Data assimilation) to generate covariance matrices for several fissioning systems, such as the thermal fission of U235, Pu239 and Pu241 and the fast fission of U238, Pu239 and Pu240. The impact of such covariances on nuclear applications has been estimated using deterministic and Monte Carlo uncertainty propagation techniques. We studied the effects on decay heat and reactivity loss uncertainty estimation for simplified test-case geometries, such as PWR and SFR pin-cells. The impact on existing nuclear reactors, such as the Jules Horowitz Reactor under construction at CEA-Cadarache, has also been considered.
Uncertainty propagation on fuel cycle codes: Monte Carlo vs Sensitivity Analyses
Energy Technology Data Exchange (ETDEWEB)
García Martínez, M.; Alvarez-Velarde, F.
2015-07-01
Uncertainty propagation in fuel cycle calculations is usually limited by parametric restrictions that only allow the study of small sets of linearly correlated input and output parameters. A Monte Carlo tool has been developed to address the simultaneous impact of the uncertainties of several magnitudes on the final results, regardless of the relationship between them. The TR_EVOL code has been updated and optimized to run a significant number of perturbed samples of the same reference scenario. Both a sensitivity analysis and a Monte Carlo technique have been implemented in the code. The first aims to quantify the contribution of each input parameter to the output magnitudes, while the second is intended to provide a better estimate of the global uncertainty when non-linear relations do not allow such an approach. These two methodologies have been applied to the study of a series of scenarios developed from an OECD/NEA study, which is of particular interest for Europe. The results are presented in terms of material masses according to their total accumulated value, final value, or maximum reached value, as defined by the user. These results are given as mean values, with their uncertainties as the standard deviation of the samples. Non-linear effects appear as biases that affect the shape of the results' Gaussian curves. (Author)
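The contrast between the two implemented techniques can be illustrated on a toy nonlinear output; the function and numbers are assumptions, not the TR_EVOL model. First-order sensitivity propagation uses gradients at the nominal point, while Monte Carlo samples the full input distribution:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in for a fuel-cycle output: a nonlinear function of two
# uncertain inputs (illustrative only, not the TR_EVOL model).
def output(burnup, loss_frac):
    return 100.0 * np.exp(-loss_frac) * burnup**0.8

mu = np.array([50.0, 0.02])      # nominal input values
sigma = np.array([5.0, 0.005])   # 1-sigma input uncertainties

# Sensitivity analysis: first-order propagation via central differences
eps = 1e-6
grad = np.array([
    (output(mu[0] + eps, mu[1]) - output(mu[0] - eps, mu[1])) / (2 * eps),
    (output(mu[0], mu[1] + eps) - output(mu[0], mu[1] - eps)) / (2 * eps),
])
lin_std = np.sqrt(np.sum((grad * sigma) ** 2))

# Monte Carlo: full propagation, no linearity assumption
x = rng.normal(mu, sigma, size=(100_000, 2))
mc_std = output(x[:, 0], x[:, 1]).std()
print(lin_std, mc_std)
```

Here the two estimates nearly coincide because the model is only mildly nonlinear; a strongly nonlinear output would show the bias the abstract mentions.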
Sabouri, P.; Bidaud, A.; Dabiran, S.; Lecarpentier, D.; Ferragut, F.
2014-04-01
The development of tools for nuclear data uncertainty propagation in lattice calculations is presented. The Total Monte Carlo method and the Generalized Perturbation Theory method are used with the code DRAGON to allow propagation of nuclear data uncertainties in transport calculations. Both methods begin the propagation of uncertainties at the most elementary level of the transport calculation: the Evaluated Nuclear Data File. The developed tools are applied to provide estimates of the response uncertainties of a PWR cell as a function of burnup.
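For the Generalized Perturbation Theory route, the response uncertainty follows from the first-order "sandwich rule" sigma^2 = S^T C S; a minimal sketch with invented sensitivities and an invented relative covariance matrix (not DRAGON output):

```python
import numpy as np

# Sensitivity coefficients of a response (e.g. k-inf) to three nuclear data,
# and a relative covariance matrix for those data — numbers are illustrative.
S = np.array([0.45, -0.30, 0.12])
C = 1e-4 * np.array([[2.5, 1.0, 0.0],
                     [1.0, 4.0, 0.5],
                     [0.0, 0.5, 1.0]])

rel_var = S @ C @ S        # first-order "sandwich rule"
print(np.sqrt(rel_var))    # relative 1-sigma uncertainty of the response
```

The Total Monte Carlo route instead reruns the transport calculation once per random nuclear-data file and reads the uncertainty from the output spread.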
Propagation of neutron-reaction uncertainties through multi-physics models of novel LWR's
Directory of Open Access Journals (Sweden)
Hernandez-Solis Augusto
2017-01-01
Full Text Available The novel design of the renewable boiling water reactor (RBWR) allows a breeding ratio greater than unity, and thus it aims at providing a self-sustained fuel cycle. The neutron reactions that compose the different microscopic cross-sections and angular distributions are uncertain, so when they are employed in the determination of the spatial distribution of the neutron flux in a nuclear reactor, a methodology should be employed to account for these associated uncertainties. In this work, the Total Monte Carlo (TMC) method is used to propagate the covariances of the different neutron reactions (as well as of the angular distributions) that are part of the TENDL-2014 nuclear data (ND) library. The main objective is to propagate them through coupled neutronic and thermal-hydraulic models in order to assess the uncertainty of important multi-physics safety parameters, such as the peak cladding temperature along the axial direction of an RBWR fuel assembly. The objective of this study is to quantify the impact that the ND covariances of important nuclides, such as U-235, U-238, Pu-239, and the thermal scattering of hydrogen in H2O, have on the deterministic safety analysis of novel nuclear reactor designs.
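The TMC idea, rerunning the model once per random nuclear-data realization and reading the output spread, can be sketched as follows, with a toy stand-in for the coupled model (the response function and input uncertainties are assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)

def coupled_model(nd):
    """Toy stand-in for a coupled neutronic/thermal-hydraulic calculation:
    maps nuclear-data parameters to a 'peak cladding temperature' (made up)."""
    return 600.0 + 40.0 * nd[0] - 25.0 * nd[1] + 10.0 * nd[0] * nd[1]

# TMC: each sample of nd plays the role of one random nuclear-data file
n_files = 1000
pct = np.array([coupled_model(rng.normal(1.0, 0.05, size=2))
                for _ in range(n_files)])

print(pct.mean(), pct.std())  # spread = propagated nuclear-data uncertainty
```

In a real TMC study each evaluation is a full transport/thermal-hydraulics run, and the statistical noise of each run must be subtracted from the observed spread.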
Quantification of Dynamic Model Validation Metrics Using Uncertainty Propagation from Requirements
Brown, Andrew M.; Peck, Jeffrey A.; Stewart, Eric C.
2018-01-01
The Space Launch System, NASA's new large launch vehicle for long-range space exploration, is presently in the final design and construction phases, with the first launch scheduled for 2019. A dynamic model of the system has been created and is critical for the calculation of interface loads and of natural frequencies and mode shapes for guidance, navigation, and control (GNC). Because of program and schedule constraints, a single modal test of the SLS will be performed while it is bolted down to the Mobile Launch Pad, just before the first launch. A Monte Carlo and optimization scheme will be used to create thousands of possible models based on given dispersions in model properties and to determine which model best fits the natural frequencies and mode shapes from the modal test. However, the question remains whether this model is acceptable for the loads and GNC requirements. An uncertainty propagation and quantification (UP and UQ) technique to develop a quantitative set of validation metrics based on the flight requirements has therefore been developed and is discussed in this paper. There has been considerable research on UQ, UP, and validation in the literature, but very little on propagating the uncertainties from requirements, so most validation metrics are "rules of thumb"; this research seeks to derive more reason-based metrics. One of the main assumptions used to achieve this task is that the uncertainty in the modeling of the fixed boundary condition is accurate, so that same uncertainty can be used in propagating from the fixed-base test configuration to the actual free-free configuration. The second main technique applied here is the use of the limit-state formulation to quantify the final probabilistic parameters and to compare them with the requirements. These techniques are explored with a simple lumped spring-mass system and a simplified SLS model. When completed, it is anticipated that this requirements-based validation
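The limit-state step, comparing a propagated response distribution against a requirement and reading off a probability of non-compliance, can be sketched as follows (the frequency requirement and dispersion below are assumptions, not SLS numbers):

```python
import numpy as np

rng = np.random.default_rng(4)

# Limit state g = response - requirement: non-compliance when g < 0.
# Requirement: a natural frequency must stay above 8.0 Hz (an assumption,
# not an SLS value), with the propagated model uncertainty taken as normal.
requirement_hz = 8.0
freq = rng.normal(8.6, 0.25, size=200_000)   # propagated frequency samples
g = freq - requirement_hz
p_fail = np.mean(g < 0.0)
print(p_fail)  # compare with the allowed probability of non-compliance
```

A validation metric of this kind is then a bound on `p_fail` rather than a rule-of-thumb tolerance on frequency error.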
International Nuclear Information System (INIS)
Iooss, B.
2009-01-01
The present document constitutes my Habilitation thesis report. It recalls my scientific activity of the last twelve years, from my PhD thesis to the work completed as a research engineer at CEA Cadarache. The two main chapters of this document correspond to two different research fields, both concerning the treatment of uncertainty in engineering problems. The first chapter is a synthesis of my work on high-frequency wave propagation in random media. It relates more specifically to the study of the statistical fluctuations of acoustic wave travel times in random and/or turbulent media. The new results mainly concern the introduction of the statistical anisotropy of the velocity field into the analytical expressions of the travel-time statistical moments as functions of those of the velocity field. This work was primarily driven by needs in geophysics (oil exploration and seismology). The second chapter concerns the probabilistic techniques used to study the effect of input-variable uncertainties in numerical models. My main applications in this chapter relate to the nuclear engineering domain, which offers a large variety of uncertainty problems to be treated. First of all, a complete synthesis is carried out on the statistical methods of sensitivity analysis and global exploration of numerical models. The construction and use of a meta-model (an inexpensive mathematical function replacing an expensive computer code) are then illustrated by my work on the Gaussian process model (kriging). Two additional topics are finally approached: the estimation of high quantiles of a computer code output, and the analysis of stochastic computer codes. We conclude this memoir with some perspectives on numerical simulation and the use of predictive models in industry. This context is extremely positive for future research and application developments. (author)
Uncertainty Propagation Analysis for the Monte Carlo Time-Dependent Simulations
International Nuclear Information System (INIS)
Shaukata, Nadeem; Shim, Hyung Jin
2015-01-01
In this paper, a conventional method to control the neutron population for super-critical systems is implemented. Instead of considering cycles, the simulation is divided into time intervals. At the end of each time interval, neutron population control is applied to the banked neutrons: randomly selected neutrons are discarded until the size of the neutron population matches the number of neutron histories at the beginning of the time simulation. A time-dependent simulation mode has also been implemented in the development version of the SERPENT 2 Monte Carlo code; in this mode, a sequential population control mechanism has been proposed for modeling prompt super-critical systems. A Monte Carlo method has been used in the TART code for dynamic criticality calculations: for super-critical systems, the neutron population is allowed to grow over a period of time and is then uniformly combed to return it to the population the simulation started with at the beginning of the time boundary. In this study, a conventional time-dependent Monte Carlo (TDMC) algorithm is implemented. There is an exponential growth of the neutron population in the estimation of the neutron density tally for super-critical systems, and the number of neutrons being tracked exceeds the memory of the computer. In order to control this exponential growth, a conventional time cut-off population control strategy is included in TDMC at the end of each time boundary, and a scale factor is introduced to tally the desired neutron density at the end of each time boundary. The main purpose of this paper is the quantification of uncertainty propagation in neutron densities at the end of each time boundary for super-critical systems; this uncertainty is caused by the introduction of the scale factor. The effectiveness of TDMC is examined for a one-group infinite homogeneous problem (the rod model) and a two-group infinite homogeneous problem. The desired neutron density is tallied by the introduction of
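The population comb and scale factor at a time boundary can be sketched with toy numbers as follows:

```python
import numpy as np

rng = np.random.default_rng(5)

n0 = 10_000            # histories at the start of the time boundary
growth = 1.6           # super-critical growth over one time interval

# The population grows during the interval ...
banked = rng.normal(1.0, 0.1, size=int(n0 * growth))   # banked neutron weights

# ... and is combed back to n0 at the boundary: keep a random subset and
# record the scale factor so density tallies still estimate the true growth.
keep = rng.choice(banked.size, size=n0, replace=False)
population = banked[keep]
scale = banked.size / n0    # multiplies the neutron-density tally

print(population.size, scale)
```

The discarded neutrons are recovered statistically through `scale`, which is exactly the quantity whose statistical uncertainty the paper propagates from boundary to boundary.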
International Nuclear Information System (INIS)
Heo, Jaeseok; Kim, Kyung Doo
2015-01-01
Statistical approaches to uncertainty quantification and sensitivity analysis are very important in estimating the safety margins for an engineering design application. This paper presents a system analysis and optimization toolkit developed by the Korea Atomic Energy Research Institute (KAERI), which includes multiple packages of sensitivity analysis and uncertainty quantification algorithms. In order to reduce the computing demand, multiple compute resources, including multiprocessor computers and a network of workstations, are used simultaneously. A Graphical User Interface (GUI) was also developed within the parallel computing framework so that users can readily employ the toolkit for an engineering design and optimization problem. The goal of this work is to develop a GUI framework for engineering design and scientific analysis problems by implementing multiple packages of system analysis methods in the parallel computing toolkit. This was done by building an interface between an engineering simulation code and the system analysis software packages. The methods and strategies in the framework were designed to exploit parallel computing resources such as those found in a desktop multiprocessor workstation or a network of workstations. Available approaches in the framework include statistical and mathematical algorithms for use in science and engineering design problems. Currently the toolkit has six modules of system analysis methodologies: deterministic and probabilistic approaches to data assimilation, uncertainty propagation, the chi-square linearity test, sensitivity analysis, and FFTBM.
DEFF Research Database (Denmark)
Quinonero, Joaquin; Girard, Agathe; Larsen, Jan
2003-01-01
The object of Bayesian modelling is the predictive distribution, which, in a forecasting scenario, enables evaluation of forecasted values and their uncertainties. We focus on reliably estimating the predictive mean and variance of forecasted values using Bayesian kernel-based models such as the Gaussian process and the relevance vector machine. We derive novel analytic expressions for the predictive mean and variance for Gaussian kernel shapes under the assumption of a Gaussian input distribution in the static case, and of a recursive Gaussian predictive density in iterative forecasting...
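For reference, the standard GP predictive mean and variance with a Gaussian (RBF) kernel — the static, certain-input case that the derivations above extend — can be computed as:

```python
import numpy as np

rng = np.random.default_rng(6)

def rbf(a, b, ell=1.0, sf2=1.0):
    """Gaussian (RBF) covariance between two 1-D point sets."""
    d = a[:, None] - b[None, :]
    return sf2 * np.exp(-0.5 * (d / ell) ** 2)

# Training data: a noisy sine
x = np.linspace(-3.0, 3.0, 25)
y = np.sin(x) + rng.normal(0.0, 0.1, x.size)
noise = 0.1 ** 2

K = rbf(x, x) + noise * np.eye(x.size)
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))

# Predictive mean and variance at certain (noise-free) test inputs
xs = np.array([0.0, 1.5])
Ks = rbf(x, xs)
mean = Ks.T @ alpha
v = np.linalg.solve(L, Ks)
var = rbf(xs, xs).diagonal() + noise - np.sum(v * v, axis=0)
print(mean, var)
```

The paper's contribution is the analytic extension of these formulas to the case where the test input itself is Gaussian-distributed, as happens when predictions are fed back iteratively.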
Bilcke, Joke; Beutels, Philippe; Brisson, Marc; Jit, Mark
2011-01-01
Accounting for uncertainty is now a standard part of decision-analytic modeling and is recommended by many health technology agencies and published guidelines. However, the scope of such analyses is often limited, even though techniques have been developed for presenting the effects of methodological, structural, and parameter uncertainty on model results. To help bring these techniques into mainstream use, the authors present a step-by-step guide that offers an integrated approach to account for different kinds of uncertainty in the same model, along with a checklist for assessing the way in which uncertainty has been incorporated. The guide also addresses special situations, such as when a source of uncertainty is difficult to parameterize, when resources are limited for an ideal exploration of uncertainty, or when evidence to inform the model is not available or not reliable. Methods for identifying the sources of uncertainty that influence results most are also described. Besides guiding analysts, the guide and checklist may be useful to decision makers who need to assess how well uncertainty has been accounted for in a decision-analytic model before using the results to make a decision.
ANALYTIC APPROXIMATE SEISMOLOGY OF PROPAGATING MAGNETOHYDRODYNAMIC WAVES IN THE SOLAR CORONA
Energy Technology Data Exchange (ETDEWEB)
Goossens, M.; Soler, R. [Centre for Mathematical Plasma Astrophysics, Department of Mathematics, KU Leuven, Celestijnenlaan 200B, B-3001 Leuven (Belgium); Arregui, I. [Instituto de Astrofisica de Canarias, Via Lactea s/n, E-38205 La Laguna, Tenerife (Spain); Terradas, J., E-mail: marcel.goossens@wis.kuleuven.be [Solar Physics Group, Departament de Fisica, Universitat de les Illes Balears, E-07122 Palma de Mallorca (Spain)
2012-12-01
Observations show that propagating magnetohydrodynamic (MHD) waves are ubiquitous in the solar atmosphere. The technique of MHD seismology uses the wave observations combined with MHD wave theory to indirectly infer physical parameters of the solar atmospheric plasma and magnetic field. Here, we present an analytical seismological inversion scheme for propagating MHD waves. This scheme uses the observational information on wavelengths and damping lengths in a consistent manner, along with observed values of periods or phase velocities, and is based on approximate asymptotic expressions for the theoretical values of wavelengths and damping lengths. The applicability of the inversion scheme is discussed and an example is given.
Analyticity of effective coupling and propagators in massless models of quantum field theory
International Nuclear Information System (INIS)
Oehme, R.
1982-01-01
For massless models of quantum field theory, some general theorems are proved concerning the analytic continuation of the renormalization group functions as well as the effective coupling and the propagators. Starting points are analytic properties of the effective coupling and the propagators in the momentum variable k^2, which can be converted into analyticity of the β- and γ-functions in the coupling parameter λ. It is shown that the β-function can have branch-point singularities related to stationary points of the effective coupling as a function of k^2. The type of these singularities of β(λ) can be determined explicitly. Examples of possible physical interest are extremal values of the effective coupling at space-like points in the momentum variable, as well as complex conjugate stationary points close to the real k^2-axis. The latter may be related to the sudden transition between weak and strong coupling regimes of the system. Finally, for the effective coupling and for the propagators, the analytic continuation in both variables k^2 and λ is discussed. (orig.)
Propagation of Isotopic Bias and Uncertainty to Criticality Safety Analyses of PWR Waste Packages
Energy Technology Data Exchange (ETDEWEB)
Radulescu, Georgeta [ORNL
2010-06-01
predicted spent fuel compositions (i.e., determine the penalty in reactivity due to isotopic composition bias and uncertainty) for use in disposal criticality analysis employing burnup credit. The method used in this calculation to propagate the isotopic bias and bias-uncertainty values to k_eff is the Monte Carlo uncertainty sampling method. The development of this report is consistent with 'Test Plan for: Isotopic Validation for Postclosure Criticality of Commercial Spent Nuclear Fuel'. This calculation report has been developed in support of burnup credit activities for the proposed repository at Yucca Mountain, Nevada, and provides a methodology that can be applied to other criticality safety applications employing burnup credit.
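The Monte Carlo uncertainty sampling step can be sketched as follows, with invented isotopic biases, bias uncertainties, and k_eff sensitivities (not the report's validated values):

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative relative isotopic biases, bias uncertainties, and k_eff
# sensitivities for three burnup-credit nuclides (not validated values).
bias = np.array([-0.02, 0.01, -0.03])
bias_unc = np.array([0.015, 0.010, 0.020])
dk_dn = np.array([0.12, -0.05, 0.08])    # dk_eff per relative concentration

k_nominal = 0.92
pert = rng.normal(bias, bias_unc, size=(50_000, 3))  # sampled perturbations
k = k_nominal + pert @ dk_dn                         # propagated k_eff

# Reactivity penalty read from an upper quantile of the sampled k_eff
k95 = np.quantile(k, 0.95)
print(k.mean(), k95 - k_nominal)
```

In the actual calculation each sample would correspond to a perturbed composition fed to a criticality code rather than a linear sensitivity product.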
Uncertainty propagation for the coulometric measurement of the plutonium concentration in MOX-PU4.
Energy Technology Data Exchange (ETDEWEB)
None, None
2017-11-07
This GUM Workbench™ propagation of uncertainty is for the coulometric measurement of the plutonium concentration in a Pu standard material (C126) supplied as individual aliquots that were prepared by mass. The C126 solution had been prepared and aliquoted as a standard material. Samples are aliquoted into glass vials and heated to dryness for distribution as dried nitrate. The individual plutonium aliquots were not separated chemically or otherwise purified prior to measurement by coulometry in the F/H Laboratory. Hydrogen peroxide was used for valence adjustment. The Pu assay measurement results were corrected for interference from trace iron in the solution measured for assay. Aliquot mass measurements were corrected for air buoyancy. The relative atomic mass (atomic weight) of the plutonium from the X126 certificate was used. The isotopic composition was determined by thermal ionization mass spectrometry (TIMS) for comparison but was not used in the calculations.
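A minimal GUM-style propagation for a coulometric assay of the form w = Q·M/(nFm), with purely illustrative values and uncertainties (not the C126 measurement data):

```python
import math

# GUM-style first-order propagation for a coulometric Pu assay of the form
# w = Q * M / (n * F * m); every value and uncertainty below is illustrative.
Q, u_Q = 19.30, 0.005      # net charge, C
M, u_M = 239.05, 0.01      # Pu relative atomic mass, g/mol
m, u_m = 0.500, 0.0001     # aliquot mass (buoyancy-corrected), g
n, F = 1, 96485.332        # electrons per ion; Faraday constant, C/mol

w = Q * M / (n * F * m)    # g Pu per g sample

# For a pure product/quotient the relative uncertainties add in quadrature
rel_u = math.sqrt((u_Q / Q) ** 2 + (u_M / M) ** 2 + (u_m / m) ** 2)
u_w = w * rel_u
print(w, u_w)
```

A full GUM Workbench budget would add terms for the iron-interference and buoyancy corrections; the quadrature rule above is the same first-order mechanism.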
DEFF Research Database (Denmark)
Abdul Samad, Noor Asma Fazli Bin; Sin, Gürkan; Gernaey, Krist
2013-01-01
This paper presents the application of uncertainty and sensitivity analysis as part of a systematic model-based process monitoring and control (PAT) system design framework for crystallization processes. For the uncertainty analysis, the Monte Carlo procedure is used to propagate input uncertainty...
International Nuclear Information System (INIS)
Tripathy, Rohit; Bilionis, Ilias; Gonzalez, Marcial
2016-01-01
Uncertainty quantification (UQ) tasks, such as model calibration, uncertainty propagation, and optimization under uncertainty, typically require several thousand evaluations of the underlying computer codes. To cope with the cost of simulations, one replaces the real response surface with a cheap surrogate based, e.g., on polynomial chaos expansions, neural networks, support vector machines, or Gaussian processes (GP). However, the number of simulations required to learn a generic multivariate response grows exponentially as the input dimension increases. This curse of dimensionality can only be addressed if the response exhibits some special structure that can be discovered and exploited. A wide range of physical responses exhibit a special structure known as an active subspace (AS). An AS is a linear manifold of the stochastic space characterized by maximal response variation. The idea is that one should first identify this low dimensional manifold, project the high-dimensional input onto it, and then link the projection to the output. If the dimensionality of the AS is low enough, then learning the link function is a much easier problem than the original problem of learning a high-dimensional function. The classic approach to discovering the AS requires gradient information, a fact that severely limits its applicability. Furthermore, and partly because of its reliance on gradients, it is not able to handle noisy observations. The latter is an essential trait if one wants to be able to propagate uncertainty through stochastic simulators, e.g., through molecular dynamics codes. In this work, we develop a probabilistic version of AS which is gradient-free and robust to observational noise. Our approach relies on a novel Gaussian process regression with built-in dimensionality reduction. In particular, the AS is represented as an orthogonal projection matrix that serves as yet another covariance function hyper-parameter to be estimated from the data. To train the
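For contrast with the gradient-free approach developed here, the classic gradient-based AS discovery — eigendecomposition of the expected outer product of the gradients — can be sketched on a function with a known one-dimensional active subspace:

```python
import numpy as np

rng = np.random.default_rng(8)

# A 10-D function with a known 1-D active subspace: f(x) = sin(w1 . x)
dim = 10
w1 = np.ones(dim) / np.sqrt(dim)

def grad_f(x):
    # gradient of sin(w1 . x) is cos(w1 . x) * w1
    return np.cos(x @ w1)[:, None] * w1[None, :]

# Classic AS discovery: eigendecomposition of E[grad f grad f^T]
x = rng.normal(size=(5000, dim))
G = grad_f(x)
C = G.T @ G / x.shape[0]
eigval, eigvec = np.linalg.eigh(C)

w_est = eigvec[:, -1]        # dominant eigenvector
print(abs(w_est @ w1))       # close to 1: the active direction is recovered
```

The paper's probabilistic variant replaces this gradient average with a projection matrix learned as a GP covariance hyper-parameter, which is what makes it applicable to noisy, gradient-free simulators.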
Analytic properties of the quark propagator from an effective infrared interaction model
Windisch, Andreas
2017-04-01
In this paper, I investigate the analytic properties of the quark propagator Dyson-Schwinger equation (DSE) in the Landau gauge. In the quark self-energy, the combined gluon propagator and quark-gluon vertex is modeled by an effective interaction (the so-called Maris-Tandy interaction), where the ultraviolet term is neglected. This renders the loop integrand of the quark self-energy analytic on the cut plane; together with complex conjugation symmetry, this region fully covers the parabolic integration domain for Bethe-Salpeter equations (BSEs) for bound-state masses of up to 4.5 GeV. Employing a novel numerical technique based on highly parallel computation on graphics processing units (GPUs), I extract more than 6500 poles in this region, which arise as the bare quark mass is varied over a wide range of closely spaced values. The poles are grouped into 23 individual trajectories that capture the movement of the poles in the complex region as the bare mass is varied. The raw data of the pole locations and residues are provided as Supplemental Material and can be used to parametrize solutions of the complex quark propagator for a wide range of bare-mass values and for large bound-state masses. This study is a first step towards an extension of previous work on the analytic continuation of perturbative one-loop integrals, with the long-term goal of establishing a framework that allows for the numerical extraction of the analytic properties of the quark propagator with a truncation that extends beyond the rainbow, by making adequate adjustments in the contour of the radial integration of the quark self-energy.
Uncertainty Propagation of Spectral Matching Ratios Measured Using a Calibrated Spectroradiometer
Directory of Open Access Journals (Sweden)
Diego Pavanello
2018-01-01
Full Text Available The international standard IEC 62670-3 (International Electrotechnical Committee, "Photovoltaic Concentrators (CPV) Performance Testing—Part 3: Performance Measurements and Power Rating") sets the guidelines for power measurements of a CPV device, both in indoor and outdoor conditions. When measuring in outdoor conditions, the acquired data have to be filtered a posteriori in order to select only those points measured with ambient conditions close to the Concentrator Standard Operating Conditions (CSOC). The most stringent requirement to be met relates to the three Spectral Matching Ratios (SMR), which all have to be within the limit of 1.00 ± 0.03. SMR are usually determined from the ratios of the currents of component cells, to monitor the outdoor spectral conditions during the CPV device power measurements. Experience demonstrates that obtaining real-world data meeting these strict conditions is very difficult in practice. However, increasing the acceptable range would make the entire filtering process less appropriate from a physical point of view. Given the importance of correctly measuring the SMR, an estimation of their associated measurement uncertainties is needed to allow a proper assessment of the validity of the 3% limit. In this study a Monte Carlo simulation has been used to allow the estimation of the propagation of uncertainties in expressions having an integral form. The method consists of applying both random and wavelength-correlated errors to the measured spectra and to the measured spectral responses of the three CPV cell junctions, according to the measurement uncertainties of the European Solar Test Installation (ESTI). The experimental data used in this study were acquired during clear-sky conditions in May 2016 at ESTI's facilities in Ispra, northern Italy (45°49′ N, 8°37′ E).
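The Monte Carlo treatment of an SMR-type ratio of spectral integrals can be sketched as follows (toy spectrum and junction responses; the error magnitudes are assumptions, not ESTI's uncertainty budget). Note how a fully wavelength-correlated error cancels in the ratio while random errors do not:

```python
import numpy as np

rng = np.random.default_rng(9)

wl = np.linspace(300.0, 1200.0, 181)               # wavelength grid, nm
spectrum = np.exp(-((wl - 650.0) / 250.0) ** 2)    # toy measured spectrum
sr_top = np.clip((900.0 - wl) / 600.0, 0.0, 1.0)   # toy junction responses
sr_mid = (np.clip((wl - 300.0) / 600.0, 0.0, 1.0)
          * np.clip((1100.0 - wl) / 200.0, 0.0, 1.0))

def smr(E):
    # ratio of two junction currents (uniform grid: the step cancels)
    return np.sum(E * sr_top) / np.sum(E * sr_mid)

ref = smr(spectrum)
vals = np.empty(5000)
for k in range(vals.size):
    corr = 1.0 + rng.normal(0.0, 0.01)               # correlated error
    rand = 1.0 + rng.normal(0.0, 0.005, wl.size)     # random per-wavelength error
    vals[k] = smr(spectrum * corr * rand)

# The fully correlated component cancels in the ratio; the random one does not
print(vals.mean() / ref, vals.std() / ref)
```

The resulting spread of the sampled SMR is what must be compared against the ±0.03 filtering band.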
Directory of Open Access Journals (Sweden)
M. E. Gorbunov
2018-01-01
Full Text Available A new reference occultation processing system (rOPS) will include a Global Navigation Satellite System (GNSS) radio occultation (RO) retrieval chain with integrated uncertainty propagation. In this paper, we focus on wave-optics bending angle (BA) retrieval in the lower troposphere and introduce (1) an empirically estimated boundary layer bias (BLB) model, employed to reduce the systematic uncertainty of excess phases and bending angles in about the lowest 2 km of the troposphere, and (2) the estimation of (residual) systematic uncertainties and their propagation, together with random uncertainties, from excess phase to bending angle profiles. Our BLB model describes the estimated bias of the excess phase transferred from the estimated bias of the bending angle, for which the model is built, informed by analyzing refractivity fluctuation statistics shown to induce such biases. The model is derived from regression analysis using a large ensemble of Constellation Observing System for Meteorology, Ionosphere, and Climate (COSMIC) RO observations and concurrent European Centre for Medium-Range Weather Forecasts (ECMWF) analysis fields. It is formulated in terms of predictors and adaptive functions (powers and cross products of predictors), where we use six main predictors derived from observations: impact altitude, latitude, bending angle and its standard deviation, canonical transform (CT) amplitude, and its fluctuation index. Based on an ensemble of test days, independent of the days of data used for the regression analysis to establish the BLB model, we find the model very effective for bias reduction, capable of reducing bending angle and corresponding refractivity biases by about a factor of 5. The estimated residual systematic uncertainty, after the BLB profile subtraction, is lower bounded by the uncertainty from the (indirect) use of ECMWF analysis fields but is significantly lower than the systematic uncertainty without BLB correction. The
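The regression idea behind such a bias model — fit the observed bias on powers and cross products of predictors, then subtract the prediction — can be sketched with invented predictors and an invented bias law:

```python
import numpy as np

rng = np.random.default_rng(10)

# Invented predictors standing in for impact altitude, latitude, and CT
# amplitude, plus an invented "true" bias law and observation noise.
n = 4000
alt = rng.uniform(0.0, 2.0, n)       # impact altitude, km
lat = rng.uniform(-60.0, 60.0, n)    # latitude, deg
amp = rng.uniform(0.5, 1.5, n)       # normalized CT amplitude

true_bias = -0.4 * (2.0 - alt) ** 2 * (1.0 - 0.3 * (amp - 1.0))
obs_bias = true_bias + rng.normal(0.0, 0.05, n)

# Design matrix of predictors, their powers, and cross products
X = np.column_stack([np.ones(n), alt, alt**2, amp,
                     alt * amp, alt**2 * amp, np.abs(lat) / 60.0])
coef, *_ = np.linalg.lstsq(X, obs_bias, rcond=None)

residual = obs_bias - X @ coef
print(obs_bias.std(), residual.std())  # bias spread before/after the model fit
```

The residual spread after subtracting the fitted model plays the role of the (residual) systematic uncertainty that the paper then propagates alongside the random uncertainties.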
Antoshchenkova, Ekaterina; Imbert, David; Richet, Yann; Bardet, Lise; Duluc, Claire-Marie; Rebour, Vincent; Gailler, Audrey; Hébert, Hélène
2016-04-01
The aim of this study is to assess the tsunamigenic potential of the Azores-Gibraltar Fracture Zone (AGFZ). This work is part of the French project TANDEM (Tsunamis in the Atlantic and English ChaNnel: Definition of the Effects through numerical Modeling; www-tandem.cea.fr); special attention is paid to the French Atlantic coasts. Structurally, the AGFZ region is complex and not well understood. However, many of its faults produce earthquakes with significant vertical slip, of a type that can result in tsunamis. We use the major tsunami event of the AGFZ to obtain a regional estimate of the tsunamigenic potential of this zone. The major reported event for this zone is the 1755 Lisbon event. There are large uncertainties concerning the source location and focal mechanism of this earthquake. Hence, a simple deterministic approach is not sufficient to cover, on the one hand, the whole AGFZ with its geological complexity and, on the other hand, the lack of information concerning the 1755 Lisbon tsunami. The parametric modeling environment Promethée (promethee.irsn.org/doku.php) was coupled to tsunami simulation software based on the shallow water equations in order to propagate uncertainties. Such a statistical point of view allows us to work with multiple hypotheses simultaneously. We introduce the seismic source parameters in the form of distributions, yielding a database of thousands of tsunami scenarios and tsunami wave height distributions. Exploring this database, we present preliminary results for France. Tsunami wave heights (within one standard deviation of the mean) can be about 0.5-1 m for the Atlantic coast and approach 0.3 m for the English Channel.
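The parametric Monte Carlo workflow described above (distributions over source parameters, thousands of scenarios, wave-height statistics) can be sketched as follows. This is a minimal illustration only: the fault parameters, their distributions, and the toy height function are stand-ins, not the Promethée/TANDEM models.

```python
import random
import statistics

def simulated_wave_height(slip_m, depth_km, dip_deg):
    """Stand-in for a shallow-water tsunami simulation: a toy scaling of
    coastal wave height with slip, focal depth and dip (illustrative only)."""
    return 0.4 * slip_m * (30.0 / (depth_km + 10.0)) * abs(dip_deg) / 45.0

random.seed(1)
heights = []
for _ in range(10000):
    # Source parameters drawn from assumed (hypothetical) distributions
    slip = random.lognormvariate(0.5, 0.4)   # slip in metres
    depth = random.uniform(5.0, 40.0)        # focal depth in km
    dip = random.gauss(40.0, 10.0)           # dip angle in degrees
    heights.append(simulated_wave_height(slip, depth, dip))

mean_h = statistics.mean(heights)
std_h = statistics.stdev(heights)
print(f"wave height: {mean_h:.2f} m +/- {std_h:.2f} m")
```

The real study replaces the toy function with a shallow-water solver run per scenario; the statistical post-processing of the scenario database is the same idea.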
Directory of Open Access Journals (Sweden)
Nerea Mangado
2016-11-01
Computational modeling has become a powerful tool in biomedical engineering thanks to its potential to simulate coupled systems. However, real parameters are usually not accurately known and variability is inherent in living organisms. To cope with this, probabilistic tools, statistical analysis and stochastic approaches have been used. This article aims to review the analysis of uncertainty and variability in the context of finite element modeling in biomedical engineering. Characterization techniques and propagation methods are presented, as well as examples of their applications in biomedical finite element simulations. Uncertainty propagation methods, both non-intrusive and intrusive, are described. Finally, pros and cons of the different approaches and their use in the scientific community are presented. This leads us to identify future directions for research and methodological development of uncertainty modeling in biomedical engineering.
Statistical analysis tolerance using jacobian torsor model based on uncertainty propagation method
Directory of Open Access Journals (Sweden)
W Ghie
2016-04-01
One risk inherent in the use of assembly components is that the behaviour of these components is discovered only at the moment an assembly is being carried out. The objective of our work is to enable designers to use known component tolerances as parameters in models that can be used to predict properties at the assembly level. In this paper we present a statistical approach to assemblability evaluation, based on tolerance and clearance propagations. This new statistical analysis method for tolerance is based on the Jacobian-Torsor model and the uncertainty measurement approach. We show how this can be accomplished by modeling the distribution of manufactured dimensions through applying a probability density function. By presenting an example we show how statistical tolerance analysis should be used in the Jacobian-Torsor model. This work is supported by previous efforts aimed at developing a new generation of computational tools for tolerance analysis and synthesis, using the Jacobian-Torsor approach. This approach is illustrated on a simple three-part assembly, demonstrating the method's capability in handling three-dimensional geometry.
Analytical Study on Propagation Dynamics of Optical Beam in Parity-Time Symmetric Optical Couplers
International Nuclear Information System (INIS)
Zhou Zheng; Zhang Li-Juan; Zhu Bo
2015-01-01
We present exact analytical solutions to a parity-time (PT) symmetric optical system describing light transport in PT-symmetric optical couplers. We show that in the unbroken PT-symmetric phase the light intensity oscillates periodically between the two waveguides, whereas in the broken PT-symmetric phase light always leaves the system from the waveguide experiencing gain, regardless of whether it is initially input into the gain or the loss waveguide. These analytical results agree with the recent experimental observation reported by Rüter et al. [Nat. Phys. 6 (2010) 192]. Besides, we present a scheme for manipulating PT symmetry by applying a periodic modulation. Our results provide an efficient way to control light propagation in a periodically modulated PT-symmetric system by tuning the modulation amplitude and frequency. (paper)
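The broken-phase behaviour described above can be checked with a minimal numerical sketch of one common form of the PT-symmetric coupled-mode equations; the parameter values below are illustrative, not those of the cited experiment.

```python
def pt_coupler(a0, b0, kappa, gamma, z_max=20.0, n=20000):
    """Integrate the PT-symmetric coupled-mode equations
       da/dz = +gamma*a + 1j*kappa*b   (waveguide with gain)
       db/dz = -gamma*b + 1j*kappa*a   (waveguide with loss)
    with a simple RK4 scheme. The unbroken phase requires kappa > gamma."""
    def rhs(a, b):
        return (gamma * a + 1j * kappa * b, -gamma * b + 1j * kappa * a)
    a, b = complex(a0), complex(b0)
    h = z_max / n
    for _ in range(n):
        k1a, k1b = rhs(a, b)
        k2a, k2b = rhs(a + h / 2 * k1a, b + h / 2 * k1b)
        k3a, k3b = rhs(a + h / 2 * k2a, b + h / 2 * k2b)
        k4a, k4b = rhs(a + h * k3a, b + h * k3b)
        a += h / 6 * (k1a + 2 * k2a + 2 * k3a + k4a)
        b += h / 6 * (k1b + 2 * k2b + 2 * k3b + k4b)
    return abs(a) ** 2, abs(b) ** 2

# Broken PT phase (gamma > kappa): light exits mainly from the gain guide,
# whichever guide it was launched into.
Ia_g, Ib_g = pt_coupler(1, 0, kappa=1.0, gamma=1.5)  # input in gain guide
Ia_l, Ib_l = pt_coupler(0, 1, kappa=1.0, gamma=1.5)  # input in loss guide
print(Ia_g > Ib_g, Ia_l > Ib_l)
```

In the broken phase the growing eigenmode is dominated by the gain waveguide, which is why the output is insensitive to the input port.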
Directory of Open Access Journals (Sweden)
S.V. Bystrov
2016-05-01
Subject of Research. We present research results for the signal uncertainty problem that naturally arises for the developers of servomechanisms, including the analytical design of serial compensators delivering the required quality indexes for servomechanisms. Method. The problem was solved with the use of the Besekerskiy engineering approach, formulated in 1958. This made it possible to reduce the requirements on the input signal composition of servomechanisms to only two quantitative characteristics: maximum speed and maximum acceleration. Information about the input signal's maximum speed and acceleration allows introducing an equivalent harmonic input signal with calculated amplitude and frequency. In combination with the requirements for maximum tracking error, the amplitude and frequency of the equivalent harmonic input make it possible to estimate analytically the amplitude characteristic of the system with respect to error, and then convert it to the amplitude characteristic of the open-loop system transfer function. While the Besekerskiy approach was previously used mainly with the apparatus of logarithmic characteristics, we use it for the analytical synthesis of consecutive compensators. Main Results. The proposed technique is used to create analytical representations of the "input–output" and "error–output" polynomial dynamic models of the designed system. In turn, the desired model of the designed system in the "error–output" form of the analytical representation of transfer functions is the basis for the design of a consecutive compensator that delivers the desired placement of the state matrix eigenvalues and, consequently, the necessary set of dynamic indexes for the designed system. The given procedure of consecutive compensator analytical design on the basis of the Besekerskiy engineering approach under conditions of signal uncertainty is illustrated by an example. Practical Relevance. The obtained theoretical results are
A two-dimensional analytical model for tidal wave propagation in convergent estuaries
Cai, Huayang; Toffolon, Marco; Savenije, Hubert H. G.; Chua, Vivien P.
2015-04-01
A knowledge of tidal dynamics in large-scale semi-closed estuaries, such as the Bay of Fundy, the Gulf of California, and the Adriatic Sea, is very important since it affects the estuarine environment and the potential use of its water resources in many ways (e.g., navigation, coastal safety, ecology). To obtain insight into the physical mechanisms of tidal wave propagation in such systems, analytical models are invaluable tools. It is well known that analytical solutions for tidal dynamics in semi-closed estuaries can be obtained by Taylor's method, where a co-oscillating tide is described as a superposition of an incident Kelvin wave, a reflected Kelvin wave, and Poincaré waves. However, the method is usually limited to special conditions, e.g., a prismatic channel with uniform depth, negligible friction, etc. In this study, we extend the one-dimensional linear solution for tidal wave propagation in convergent estuaries to the two-dimensional case, explicitly accounting for both channel convergence (width and depth convergence) and friction.
Directory of Open Access Journals (Sweden)
Soheil Salahshour
2015-02-01
In this paper, we apply the concept of Caputo's H-differentiability, constructed based on the generalized Hukuhara difference, to solve the fuzzy fractional differential equation (FFDE) with uncertainty. This is in contrast to conventional approaches, which either require the fractional derivatives of the unknown solution at the initial point (Riemann–Liouville) or yield solutions whose support increases in length (Hukuhara difference). Then, in order to solve the FFDE analytically, we introduce the fuzzy Laplace transform of the Caputo H-derivative. To the best of our knowledge, there is limited research devoted to analytical methods for solving the FFDE under fuzzy Caputo fractional differentiability. An analytical solution is presented to confirm the capability of the proposed method.
International Nuclear Information System (INIS)
Raut, Narendra M.; Huang, L.-S.; Lin, K.-C.; Aggarwal, Suresh K.
2005-01-01
Determination of rare earth elements by quadrupole-based inductively coupled plasma mass spectrometry (ICP-QMS) is subject to several spectroscopic overlaps from M+, MO+ and MOH+ ions. In particular, spectroscopic interferences from the atomic and molecular species of the lighter rare earth elements, including Ba, are observed during the determination of Eu, Gd and Tb. Mathematical correction methods, which require the at.% abundances of the different interfering isotopes and the experimentally determined extent of formation of the molecular species, have been used to account for the various spectroscopic interferences. However, the uncertainty propagated through the mathematical correction limits its applicability. The propagated uncertainty increases with the contribution from the interfering species; however, for the same total contribution, the overall error decreases when there is more than one interfering species. In this work, chondrite as well as a few geological reference materials containing different proportions of the various rare earth elements were used to study the contributions of the different interfering species and the corresponding uncertainty in determining the concentrations of rare earth elements. A number of high-abundance isotopes are proposed for determining the concentrations of the various rare earth elements. The proposed isotopes were tested experimentally by determining the concentrations of the rare earth elements in two USGS reference materials, AGV-1 and G-2. The interferences on those isotopes were corrected mathematically, and the uncertainties propagated through the correction methodology were determined. The uncertainties in the determined rare earth element concentrations due to interference correction using the proposed isotopes are found to be comparable with those obtained using the commonly used isotopes.
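The uncertainty penalty of such a mathematical correction can be illustrated with a first-order propagation sketch: the corrected intensity is the measured intensity minus each estimated interfering contribution, and the variances add in quadrature. The intensities, correction factor, and uncertainties below are hypothetical numbers, not data from the paper.

```python
import math

def correct_interference(i_meas, u_meas, contributions):
    """Subtract interfering contributions (each estimated as factor * reference
    intensity) and propagate uncertainties in quadrature (first order).
    contributions: list of (factor, u_factor, i_ref, u_ref)."""
    corrected = i_meas
    var = u_meas ** 2
    for f, u_f, i_ref, u_ref in contributions:
        corrected -= f * i_ref
        var += (f * u_ref) ** 2 + (i_ref * u_f) ** 2
    return corrected, math.sqrt(var)

# Hypothetical example: one oxide-ion overlap on an analyte signal
i_corr, u_corr = correct_interference(
    10000.0, 50.0,
    [(0.02, 0.002, 40000.0, 200.0)]  # factor, u(factor), ref intensity, u(ref)
)
print(i_corr, round(u_corr, 1))
```

Note how the corrected value's uncertainty (about 94 counts here) is dominated by the correction term, not the raw counting uncertainty of 50: this is the effect that limits the method's applicability as the interfering contribution grows.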
Directory of Open Access Journals (Sweden)
Xiaosong Lin
2015-08-01
Because of the uncertainties involved in modeling, construction, and measurement systems, FE model validation must be conducted based on stochastic measurements to provide designers with confidence for further applications. In this study, based on a model updated using response surface methodology, a practical model validation methodology via uncertainty propagation is presented. Several criteria of testing/analysis correlation are introduced, and the sources of model and testing uncertainties are also discussed. After that, the Monte Carlo stochastic finite element (FE) method is employed to perform the uncertainty quantification and propagation. The proposed methodology is illustrated by examining the validity of a large-span prestressed concrete continuous rigid frame bridge monitored under operational conditions. It can be concluded that the calculated frequencies and vibration modes of the updated FE model of Xiabaishi Bridge are consistent with the measured ones. The relative errors of each frequency are all less than 3.7%; the overlap ratio indexes of each frequency are all more than 75%; and the MAC values of each calculated vibration mode are all more than 90%. The model of Xiabaishi Bridge is valid in the whole operation space, including the experimental design space, and its confidence level is higher than 95%. The validated FE model reflects the current condition of Xiabaishi Bridge, and can also be used as a basis for bridge health monitoring, damage identification and safety assessment.
International Nuclear Information System (INIS)
Barrado, A. I.; Garcia, S.; Perez, R. M.
2013-01-01
This paper presents an evaluation of the uncertainty associated with the analytical measurement of eighteen polycyclic aromatic compounds (PACs) in ambient air by liquid chromatography with fluorescence detection (HPLC/FD). The study focused on analyses of PM10, PM2.5 and gas-phase fractions. The main analytical uncertainty was estimated for eleven polycyclic aromatic hydrocarbons (PAHs), four nitro polycyclic aromatic hydrocarbons (nitro-PAHs) and two hydroxy polycyclic aromatic hydrocarbons (OH-PAHs), based on the analytical determination, reference material analysis and extraction step. The main contributions reached 15-30% and came from the extraction of real ambient samples, with those for nitro-PAHs being the highest (20-30%). The range and mean of PAC mass concentrations measured in the gas phase and in the PM10/PM2.5 particle fractions during a full year are also presented. Concentrations of OH-PAHs were about 2-4 orders of magnitude lower than those of their parent PAHs and comparable to those sparsely reported in the literature. (Author)
DEFF Research Database (Denmark)
He, Xiulan
Groundwater modeling plays an essential role in modern subsurface hydrology research. It is generally recognized that simulations and predictions by groundwater models are associated with uncertainties that originate from various sources. The two major uncertainty sources are related to model parameters and model structures, which are the primary focuses of this PhD research. Parameter uncertainty was analyzed using an optimization tool (PEST: Parameter ESTimation) in combination with a random sampling method (LHS: Latin Hypercube Sampling). Model structure, namely geological architecture...
Efficiency of analytical methodologies in uncertainty analysis of seismic core damage frequency
International Nuclear Information System (INIS)
Kawaguchi, Kenji; Uchiyama, Tomoaki; Muramatsu, Ken
2012-01-01
Fault tree and event tree analysis is almost exclusively relied upon in assessments of seismic core damage frequency (CDF). Within this approach, the Direct Quantification of Fault tree using Monte Carlo simulation (DQFM) method, or simply the Monte Carlo (MC) method, and the Binary Decision Diagram (BDD) method were introduced as alternatives to a traditional approximation method, namely the Minimal Cut Set (MCS) method. However, there is still no agreement as to which method should be used in a risk assessment of seismic CDF, especially for uncertainty analysis. The purpose of this study is to examine the efficiency of the three methods in uncertainty analysis as well as in point estimation, so that a proper method can be selected effectively. The results show that the most efficient method is the BDD method in terms of accuracy and computational time. However, the BDD method is not always applicable to PSA models, while the MC method is, at least in theory. In turn, the MC method was confirmed to agree with the exact solution obtained by the BDD method, but it required a large amount of computation time, in particular for uncertainty analysis. On the other hand, the approximation error of the MCS method may not be as severe in uncertainty analysis as it is in point estimation. Based on these results and previous works, this paper proposes a scheme to select an appropriate analytical method for a seismic PSA study. Throughout this study, the SECOM2-DQFM code was expanded to be able to utilize the BDD method and to conduct uncertainty analysis with both the MC and BDD methods. (author)
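The gap between the MCS approximation and direct Monte Carlo quantification can be seen on a toy fault tree with the high basic-event probabilities typical of seismic PSA. The tree and the probabilities below are invented for illustration; they are not from the paper.

```python
import random

p = {"A": 0.3, "B": 0.2, "C": 0.25}  # high basic-event probabilities, as in seismic PSA

def top_event(state):
    # Toy fault tree: TOP = (A AND B) OR (A AND C); cut sets share event A
    return (state["A"] and state["B"]) or (state["A"] and state["C"])

# Minimal-cut-set (rare-event) approximation: sum of cut-set probabilities
p_mcs = p["A"] * p["B"] + p["A"] * p["C"]

# Direct quantification by Monte Carlo sampling of basic-event states (DQFM-style)
random.seed(0)
n = 200000
hits = sum(top_event({e: random.random() < pe for e, pe in p.items()})
           for _ in range(n))
p_mc = hits / n

# Exact result for this small tree: P(A) * P(B or C)
p_exact = p["A"] * (p["B"] + p["C"] - p["B"] * p["C"])
print(p_mcs, p_exact)
```

With shared events and non-rare probabilities the MCS sum (0.135) over-estimates the exact top-event probability (0.12), while the sampled estimate converges to the exact value, which is the trade-off the study quantifies at PSA scale.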
Energy Technology Data Exchange (ETDEWEB)
Pal Verma, Mahendra [Instituto de Investigaciones Electricas, Cuernavaca, Morelos (Mexico)
2008-07-01
A procedure was developed to estimate the analytical uncertainty in each parameter of a geochemical analysis of geothermal fluid. The estimation of the uncertainty is based on the results of the geochemical analyses of geothermal fluids (numbered from 0 to 14) obtained within the framework of the inter-laboratory comparison program among geochemical laboratories over the last 30 years. The analytical uncertainty was then propagated into the calculation of the parameters of the geothermal fluid in the reservoir, using the uncertainty interval method and the GUM (Guide to the Expression of Uncertainty in Measurement) method. The application of the methods is illustrated by calculating the pH of the geothermal fluid in the reservoir, considering samples 10 and 11 as waters separated at atmospheric conditions.
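The GUM first-order propagation used in such calculations can be sketched generically: the combined standard uncertainty is the root sum of squares of sensitivity coefficients times input uncertainties. The ratio function and input values below are hypothetical, not the paper's reservoir pH model.

```python
def gum_uncertainty(f, x, u, h=1e-6):
    """First-order GUM combined standard uncertainty for y = f(x):
    u_c^2 = sum_i (df/dx_i)^2 * u_i^2, with sensitivity coefficients
    estimated by central differences. Assumes uncorrelated inputs."""
    y = f(x)
    var = 0.0
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        ci = (f(xp) - f(xm)) / (2 * h)   # sensitivity coefficient dy/dx_i
        var += (ci * u[i]) ** 2
    return y, var ** 0.5

# Toy geochemical parameter: the ratio of two measured concentrations
y, u_c = gum_uncertainty(lambda v: v[0] / v[1], [50.0, 20.0], [0.5, 0.4])
print(round(y, 3), round(u_c, 3))
```

The uncertainty-interval method mentioned in the abstract instead evaluates f at the endpoints of each input's interval; for near-linear functions the two approaches agree closely.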
Groves, Curtis E.; Ilie, marcel; Shallhorn, Paul A.
2014-01-01
Computational Fluid Dynamics (CFD) is the standard numerical tool used by fluid dynamicists to estimate solutions to many problems in academia, government, and industry. CFD is known to have errors and uncertainties, and there is no universally adopted method to estimate such quantities. This paper describes an approach to estimate CFD uncertainties strictly numerically using inputs and the Student-t distribution. The approach is compared to an exact analytical solution of fully developed, laminar flow between infinite, stationary plates. It is shown that treating all CFD input parameters as oscillatory uncertainty terms coupled with the Student-t distribution can encompass the exact solution.
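A sketch of the Student-t interval idea: summarize the CFD output from repeated runs with perturbed inputs as mean plus/minus t·s/√n. The run values and the 95% t-value below are illustrative, not the paper's plate-flow case.

```python
import statistics

def student_t_interval(samples, t_value):
    """Uncertainty band for a quantity estimated from a small number of runs:
    mean +/- t * s / sqrt(n). The t-value for the chosen confidence level and
    n-1 degrees of freedom comes from a t-table (hard-coded here, since the
    standard library has no t-distribution)."""
    n = len(samples)
    mean = statistics.mean(samples)
    half_width = t_value * statistics.stdev(samples) / n ** 0.5
    return mean - half_width, mean + half_width

# Five runs of a hypothetical centerline-velocity prediction (m/s)
runs = [1.02, 0.98, 1.01, 0.99, 1.00]
lo, hi = student_t_interval(runs, t_value=2.776)  # 95 %, 4 degrees of freedom
print(f"[{lo:.3f}, {hi:.3f}]")
```

The Student-t factor widens the band relative to a normal-distribution interval, which is what allows a small sample of perturbed runs to "encompass" a reference solution.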
Hobbs, J.; Turmon, M.; David, C. H.; Reager, J. T., II; Famiglietti, J. S.
2017-12-01
NASA's Western States Water Mission (WSWM) combines remote sensing of the terrestrial water cycle with hydrological models to provide high-resolution state estimates for multiple variables. The effort includes both land surface and river routing models that are subject to several sources of uncertainty, including errors in the model forcing and model structural uncertainty. Computational and storage constraints prohibit extensive ensemble simulations, so this work outlines efficient but flexible approaches for estimating and reporting uncertainty. Calibrated by remote sensing and in situ data where available, we illustrate the application of these techniques in producing state estimates with associated uncertainties at kilometer-scale resolution for key variables such as soil moisture, groundwater, and streamflow.
Energy Technology Data Exchange (ETDEWEB)
Morales-Arteaga, Maria [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)
2017-11-07
This GUM Workbench™ propagation of uncertainty is for the coulometric measurement of the plutonium concentration in a Pu standard material (C126) supplied as individual aliquots that were prepared by mass. The C126 solution had been prepared and aliquoted as standard material. Samples are aliquoted into glass vials and heated to dryness for distribution as dried nitrate. The individual plutonium aliquots were not separated chemically or otherwise purified prior to measurement by coulometry in the F/H Laboratory. Hydrogen peroxide was used for valence adjustment.
Propagation of uncertainty and sensitivity analysis in an integral oil-gas plume model
Wang, Shitao
2016-05-27
Polynomial chaos expansions are used to analyze uncertainties in an integral oil-gas plume model simulating the Deepwater Horizon oil spill. The study focuses on six uncertain input parameters (two entrainment parameters, the gas-to-oil ratio, two parameters associated with the droplet-size distribution, and the flow rate) that impact the model's estimates of the plume's trap and peel heights, and of its various gas fluxes. The ranges of the uncertain inputs were determined by experimental data. Ensemble calculations were performed to construct polynomial chaos-based surrogates that describe the variations in the outputs due to variations in the uncertain inputs. The surrogates were then used to estimate reliably the statistics of the model outputs, and to perform an analysis of variance. Two experiments were performed to study the impacts of high and low flow rate uncertainties. The analysis shows that in the former case the flow rate is the largest contributor to output uncertainties, whereas in the latter case, with the uncertainty range constrained by a posteriori analyses, the flow rate's contribution becomes negligible. The trap and peel height uncertainties are then mainly due to uncertainties in the 95th percentile of the droplet size and in the entrainment parameters.
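A minimal non-intrusive polynomial chaos sketch in one uncertain input shows how ensemble runs yield surrogate coefficients and, from them, output statistics. The stand-in model, the projection variant (rather than regression), and the sample size are illustrative; this is not the oil-gas plume model.

```python
import math
import random

def hermite_e(k, x):
    """Probabilists' Hermite polynomial He_k(x) via the recurrence
    He_{k+1} = x*He_k - k*He_{k-1}."""
    h0, h1 = 1.0, x
    if k == 0:
        return h0
    for j in range(1, k):
        h0, h1 = h1, x * h1 - j * h0
    return h1

def model(xi):
    # Stand-in for an expensive simulation: a smooth nonlinear response
    return math.exp(0.3 * xi) + 0.5 * xi

random.seed(42)
n, order = 50000, 6
xs = [random.gauss(0.0, 1.0) for _ in range(n)]  # xi ~ N(0, 1)
ys = [model(x) for x in xs]

# Non-intrusive projection: c_k = E[y * He_k(xi)] / k!  (orthogonality of He_k)
coef = [sum(y * hermite_e(k, x) for x, y in zip(xs, ys)) / (n * math.factorial(k))
        for k in range(order + 1)]

# PCE statistics: mean = c_0, variance = sum_{k>=1} c_k^2 * k!
pce_mean = coef[0]
pce_var = sum(coef[k] ** 2 * math.factorial(k) for k in range(1, order + 1))
print(round(pce_mean, 3), round(pce_var, 3))
```

Once the coefficients are in hand, moments and variance-based sensitivity indices come directly from them, with no further model evaluations, which is the efficiency the study exploits.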
2017-12-18
importance to accurately calculating the conjunction risk of satellites. Current models utilize significant but sometimes arbitrary buffers to account for the unknown true statistical distribution of satellite state solutions. Mean satellite position paths are propagated to identify collision-path satellite pairs over a predefined time horizon. At-risk
Analytical propagation of errors in dynamic SPECT: estimators, degrading factors, bias and noise
International Nuclear Information System (INIS)
Kadrmas, D.J.; Huesman, R.H.
1999-01-01
Dynamic SPECT is a relatively new technique that may potentially benefit many imaging applications. Though similar to dynamic PET, the accuracy and precision of dynamic SPECT parameter estimates are degraded by factors that differ from those encountered in PET. In this work we formulate a methodology for analytically studying the propagation of errors from dynamic projection data to kinetic parameter estimates. This methodology is used to study the relationships between reconstruction estimators, image degrading factors, bias and statistical noise for the application of dynamic cardiac imaging with 99mTc-teboroxime. Dynamic data were simulated for a torso phantom, and the effects of attenuation, detector response and scatter were successively included to produce several data sets. The data were reconstructed to obtain both weighted and unweighted least squares solutions, and the kinetic rate parameters for a two-compartment model were estimated. The expected values and standard deviations describing the statistical distribution of parameters that would be estimated from noisy data were calculated analytically. The results of this analysis present several interesting implications for dynamic SPECT. Statistically weighted estimators performed only marginally better than unweighted ones, implying that more computationally efficient unweighted estimators may be appropriate. This also suggests that it may be beneficial to focus future research efforts upon regularization methods with beneficial bias-variance trade-offs. Other aspects of the study describe the fundamental limits of the bias-variance trade-off regarding physical degrading factors and their compensation. The results characterize the effects of attenuation, detector response and scatter, and they are intended to guide future research into dynamic SPECT reconstruction and compensation methods. (author)
Propagation of uncertainty by Monte Carlo simulations in case of basic geodetic computations
Wyszkowska, Patrycja
2017-12-01
The determination of the accuracy of functions of measured or adjusted values can be a problem in geodetic computations. The general law of covariance propagation or, in the case of uncorrelated observations, the law of propagation of variance (the Gaussian formula) is commonly used for that purpose. That approach is theoretically justified for linear functions. For non-linear functions, a first-order Taylor series expansion is usually used, but that solution is affected by the expansion error. The aim of this study is to determine the applicability of the general variance propagation law to the non-linear functions used in basic geodetic computations. The paper presents the errors that result from neglecting the higher-order terms and determines the range of validity of such a simplification. The basis of the analysis is a comparison of the results obtained by the law of propagation of variance and by a probabilistic approach, namely Monte Carlo simulations. Both methods are used to determine the accuracy of the following geodetic computations: the Cartesian coordinates of an unknown point in the three-point resection problem, azimuths and distances from Cartesian coordinates, and height differences in trigonometric and geometric levelling. These simulations and the analysis of the results confirm that the general law of variance propagation can be applied in basic geodetic computations even if the functions are non-linear. The only condition is that the accuracy of the observations cannot be too low; generally, this is not a problem with present geodetic instruments.
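The comparison of first-order variance propagation against Monte Carlo can be reproduced for one simple non-linear function, the distance computed from Cartesian coordinates. The coordinates and the coordinate uncertainty below are hypothetical.

```python
import math
import random

# Accuracy of a non-linear function of measured coordinates: the horizontal
# distance d = sqrt(dx^2 + dy^2). First-order propagation of variance versus
# Monte Carlo simulation.
x1, y1, x2, y2 = 100.0, 200.0, 400.0, 600.0   # metres, hypothetical points
u = 0.010                                      # st. dev. of each coordinate (m)

dx, dy = x2 - x1, y2 - y1
d = math.hypot(dx, dy)
# Partials of d w.r.t. the four coordinates are +-dx/d and +-dy/d, so the
# propagated variance is ((dx/d)^2 + (dy/d)^2) * 2 * u^2 = 2 * u^2
u_prop = math.sqrt(((dx / d) ** 2 + (dy / d) ** 2) * 2) * u

random.seed(7)
n = 100000
samples = [math.hypot((x2 + random.gauss(0, u)) - (x1 + random.gauss(0, u)),
                      (y2 + random.gauss(0, u)) - (y1 + random.gauss(0, u)))
           for _ in range(n)]
mean_d = sum(samples) / n
u_mc = math.sqrt(sum((s - mean_d) ** 2 for s in samples) / (n - 1))
print(round(u_prop, 4), round(u_mc, 4))
```

For a 500 m distance measured with centimetre-level coordinates the two estimates agree to well below a tenth of a millimetre: the curvature of the distance function is negligible at this accuracy, which is the paper's conclusion for well-observed geodetic functions.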
Uncertainty propagation through an aeroelastic wind turbine model using polynomial surrogates
DEFF Research Database (Denmark)
Murcia Leon, Juan Pablo; Réthoré, Pierre-Elouan; Dimitrov, Nikolay Krasimirov
2018-01-01
Polynomial surrogates are used to characterize the energy production and lifetime equivalent fatigue loads for different components of the DTU 10 MW reference wind turbine under realistic atmospheric conditions. The variability caused by different turbulent inflow fields are captured by creating... The methodology presented extends the deterministic power and thrust coefficient curves to uncertainty models and adds new variables like damage equivalent fatigue loads in different components of the turbine. These surrogate models can then be implemented inside other work-flows such as: estimation of the uncertainty in annual energy production due to wind resource variability and/or robust wind power plant layout optimization. It can be concluded that it is possible to capture the global behavior of a modern wind turbine and its uncertainty under realistic inflow conditions using polynomial response surfaces...
Calibration and Forward Uncertainty Propagation for Large-eddy Simulations of Engineering Flows
Energy Technology Data Exchange (ETDEWEB)
Templeton, Jeremy Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Blaylock, Myra L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Domino, Stefan P. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hewson, John C. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Kumar, Pritvi Raj [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Ling, Julia [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Najm, Habib N. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Ruiz, Anthony [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Safta, Cosmin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Sargsyan, Khachik [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Stewart, Alessia [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Wagner, Gregory [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2015-09-01
The objective of this work is to investigate the efficacy of using calibration strategies from Uncertainty Quantification (UQ) to determine model coefficients for LES. As the target methods are for engineering LES, uncertainty from numerical aspects of the model must also be quantified. The ultimate goal of this research thread is to generate a cost-versus-accuracy curve for LES such that the cost can be minimized given an accuracy prescribed by an engineering need. Realization of this goal would enable LES to serve as a predictive simulation tool within the engineering design process.
Poznanski, R R
2010-09-01
A reaction-diffusion model is presented to encapsulate calcium-induced calcium release (CICR) as a potential mechanism for somatofugal bias of dendritic calcium movement in starburst amacrine cells. Calcium dynamics involves a simple calcium extrusion (pump) and a buffering mechanism of calcium-binding proteins homogeneously distributed over the plasma membrane of the endoplasmic reticulum within starburst amacrine cells. The system of reaction-diffusion equations in the excess buffer (or low calcium concentration) approximation is reformulated as a nonlinear Volterra integral equation, which is solved analytically via a regular perturbation series expansion in response to calcium feedback from continuously and uniformly distributed calcium sources. Calculation of luminal calcium diffusion in the absence of buffering enables a wave to travel the 120 μm from the soma to the distal tips of a starburst amacrine cell dendrite in 100 msec, yet in the presence of discretely distributed calcium-binding proteins it is unknown whether the propagating calcium wave front in the somatofugal direction is further impeded by endogenous buffers. If so, this would indicate CICR to be an unlikely mechanism of retinal direction selectivity in starburst amacrine cells.
Xu, Yanlong
2015-08-01
The coupled mode theory, with coupling of diffraction modes and waveguide modes, is usually used to calculate transmission and reflection coefficients for electromagnetic waves traveling through periodic sub-wavelength structures. In this paper, I extend this method to derive analytical solutions of high-order dispersion relations for shear horizontal (SH) wave propagation in elastic plates with periodic stubs. In the long-wavelength regime, an explicit expression is obtained by this theory and derived independently by employing an effective medium, which indicates that the periodic stubs are equivalent to an effective homogeneous layer at long wavelengths. Notably, in the short-wavelength regime, high-order diffraction modes in the plate and high-order waveguide modes in the stubs are considered, with mode coupling, to compute the band structures. Numerical results of the coupled mode theory agree well with the results of the finite element method (FEM). In addition, the evolution of the band structures with the height of the stubs and the thickness of the plate shows clearly that the method can predict the Bragg band gaps, locally resonant band gaps, and high-order symmetric and anti-symmetric thickness-twist modes for the periodically structured plates. © 2015 Elsevier B.V.
Indian Academy of Sciences (India)
The imperfect understanding of some of the processes and physics in the carbon cycle and chemistry models generates uncertainties in the conversion of emissions to concentrations. To reflect this uncertainty in the climate scenarios, the use of AOGCMs that explicitly simulate the carbon cycle and chemistry of all the ...
Chiadamrong, N.; Piyathanavong, V.
2017-12-01
Models that aim to optimize the design of supply chain networks have gained increasing interest in the supply chain literature. Mixed-integer linear programming and discrete-event simulation are widely used for such optimization problems. We present a hybrid approach to support decisions for supply chain network design using a combination of analytical and discrete-event simulation models. The proposed approach is based on iterative procedures that continue until the difference between subsequent solutions satisfies a pre-determined termination criterion. The effectiveness of the proposed approach is illustrated by an example, which yields results closer to the optimum with much shorter solving times than the conventional simulation-based optimization model. The efficacy of this hybrid approach is promising, and it can be applied as a powerful tool in designing a real supply chain network. It also provides the possibility to model and solve more realistic problems, which incorporate dynamism and uncertainty.
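The iterate-until-convergence idea described above can be sketched generically: an analytical model proposes a design, a simulation evaluates it under dynamics the analytical model ignores, and the result is fed back until the change falls below a tolerance. The functions and numbers below are hypothetical placeholders, not the paper's actual supply chain model:

```python
def analytic_design(lead_time: float) -> float:
    """Analytical model (hypothetical): order-up-to level
    proportional to lead-time demand."""
    return 10.0 * lead_time

def simulate_lead_time(order_level: float) -> float:
    """Simulation stand-in (hypothetical): larger orders congest the
    system, lengthening the effective lead time."""
    return 2.0 + 0.01 * order_level

lead_time = 2.0                                  # initial analytical assumption
for iteration in range(100):
    order_level = analytic_design(lead_time)
    new_lead_time = simulate_lead_time(order_level)
    if abs(new_lead_time - lead_time) < 1e-6:    # termination criterion
        break
    lead_time = new_lead_time
# The two models agree at the fixed point lead_time ~ 2.222, order_level ~ 22.22
```

Because the simulated feedback is a contraction here, the loop converges in a handful of iterations; real hybrid schemes must verify such convergence empirically.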
Monte-Carlo-based uncertainty propagation with hierarchical models—a case study in dynamic torque
Klaus, Leonard; Eichstädt, Sascha
2018-04-01
For a dynamic calibration, a torque transducer is described by a mechanical model, and the corresponding model parameters are to be identified from measurement data. A measuring device for the primary calibration of dynamic torque, and a corresponding model-based calibration approach, have recently been developed at PTB. The complete mechanical model of the calibration set-up is very complex, and involves several calibration steps—making a straightforward implementation of a Monte Carlo uncertainty evaluation tedious. With this in mind, we here propose to separate the complete model into sub-models, with each sub-model being treated with individual experiments and analysis. The uncertainty evaluation for the overall model then has to combine the information from the sub-models in line with Supplement 2 of the Guide to the Expression of Uncertainty in Measurement. In this contribution, we demonstrate how to carry this out using the Monte Carlo method. The uncertainty evaluation involves various input quantities of different origin and the solution of a numerical optimisation problem.
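The combination of sub-model results by Monte Carlo propagation, in the spirit of GUM Supplement 2, can be sketched as follows. The sub-models and parameter values here are hypothetical stand-ins, not the actual PTB torque-transducer model:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000

# Sub-model 1: a stiffness-like parameter identified from one calibration
# step, represented by Monte Carlo samples (hypothetical values).
k = rng.normal(loc=2.0, scale=0.02, size=N)

# Sub-model 2: a damping-like parameter from a separate experiment.
d = rng.normal(loc=0.5, scale=0.01, size=N)

# Overall model: combine the sub-model samples into the output quantity.
# The combination y = k / (1 + d) is purely illustrative.
y = k / (1.0 + d)

mean_y = y.mean()
u_y = y.std(ddof=1)                      # Monte Carlo standard uncertainty
lo, hi = np.percentile(y, [2.5, 97.5])   # 95 % coverage interval
```

Treating each sub-model's output as a sample set lets the final uncertainty evaluation reuse the sub-model experiments without re-running the full chain.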
Understanding impacts of climate change on hydrodynamic processes and ecosystem response within the Great Lakes is an important and challenging task. Variability in future climate conditions, uncertainty in rainfall-runoff model forecasts, the potential for land use change, and t...
Ultra-Scalable Algorithms for Large-Scale Uncertainty Quantification in Inverse Wave Propagation
2016-03-04
MCMC sampling methods for posteriors for Bayesian inverse wave propagation problems. We developed a so-called randomized maximum a posteriori (rMAP) ... method for generating approximate samples of posteriors in high-dimensional Bayesian inverse problems governed by large-scale forward problems, with ... equivalent for a linear parameter-to-observable map. Numerical results indicated the potential of the rMAP approach in posterior sampling of nonlinear
Bacchi, Vito; Duluc, Claire-Marie; Bertrand, Nathalie; Bardet, Lise
2017-04-01
In recent years, in the context of hydraulic risk assessment, much effort has been put into the development of sophisticated numerical model systems able to reproduce the surface flow field. These numerical models are based on a deterministic approach, and the results are presented in terms of measurable quantities (water depths, flow velocities, etc.). However, the modelling of surface flows involves numerous uncertainties associated with the numerical structure of the model, with the knowledge of the physical parameters that force the system, and with the randomness inherent to natural phenomena. As a consequence, dealing with uncertainties can be a difficult task for both modelers and decision-makers [Ioss, 2011]. In the context of nuclear safety, IRSN assesses studies conducted by operators for different reference flood situations (local rain, small or large watershed flooding, sea levels, etc.) that are defined in the guide ASN N°13 [ASN, 2013]. The guide provides some recommendations for dealing with uncertainties, proposing a specific conservative approach to cover hydraulic modelling uncertainties. Depending on the situation, the influencing parameter might be the Strickler coefficient, levee behavior, simplified topographic assumptions, etc. Obviously, identifying the most influencing parameter and giving it a penalizing value is challenging and usually questionable. In this context, IRSN has conducted cooperative research activities since 2011 (with Compagnie Nationale du Rhone, the I-CiTy laboratory of Polytech'Nice, the Atomic Energy Commission, and the Bureau de Recherches Géologiques et Minières) in order to investigate the feasibility and benefits of Uncertainty Analysis (UA) and Global Sensitivity Analysis (GSA) when applied to hydraulic modelling. A specific methodology was tested using the computational environment Promethee, developed by IRSN, which allows uncertainty propagation studies to be carried out. This methodology was applied with various numerical models and in
DEFF Research Database (Denmark)
Diky, Vladimir; Chirico, Robert D.; Muzny, Chris
ThermoData Engine (TDE, NIST Standard Reference Databases 103a and 103b) is the first product that implements the concept of Dynamic Data Evaluation in the fields of thermophysics and thermochemistry, which includes maintaining a comprehensive and up-to-date database of experimentally measured...... property values and an expert system for data analysis and generation of recommended property values at specified conditions, along with uncertainties on demand. The most recent extension of TDE covers solvent design and multi-component process stream property calculations with uncertainty analysis....... Selection is made by best efficiency (depending on the task: solubility, selectivity, distribution coefficient, etc.) and by matching other requirements specified by the user. At the user's request, efficiency criteria are evaluated based on experimental data for binary mixtures or predictive models (UNIFAC...
DEFF Research Database (Denmark)
He, Xiulan
be compensated by model parameters, e.g. when hydraulic heads are considered. However, geological structure is the primary source of uncertainty with respect to simulations of groundwater age and capture zones. Operational MPS-based software has been available for only around ten years; yet, issues regarding...... parameters and model structures, which are the primary focuses of this PhD research. Parameter uncertainty was analyzed using an optimization tool (PEST: Parameter ESTimation) in combination with a random sampling method (LHS: Latin Hypercube Sampling). Model structure, namely geological architecture...... geological structures of these three sites provided appropriate conditions for testing the methods. Our study documented that MPS is an efficient approach for simulating geological heterogeneity, especially for non-stationary systems. The high resolution of geophysical data such as SkyTEM is valuable both...
2011-10-01
representation of uncertainty involves decomposing a random variable into deterministic and stochastic components. Following the work of Norbert Wiener on Homogeneous Chaos [4], Cameron and Martin pointed out that any second-order functional of Brownian motion can be expressed as a mean-square... [4] N. Wiener, The homogeneous chaos. American Journal of Mathematics, 60 (1938), 897–936. [5] R. H. Cameron and W. T. Martin, The orthogonal development of
International Nuclear Information System (INIS)
Helton, Jon Craig; Sallaberry, Cedric M.; Hansen, Clifford W.
2010-01-01
The 2008 performance assessment (PA) for the proposed repository for high-level radioactive waste at Yucca Mountain (YM), Nevada, illustrates the conceptual structure of risk assessments for complex systems. The 2008 YM PA is based on the following three conceptual entities: a probability space that characterizes aleatory uncertainty; a function that predicts consequences for individual elements of the sample space for aleatory uncertainty; and a probability space that characterizes epistemic uncertainty. These entities and their use in the characterization, propagation and analysis of aleatory and epistemic uncertainty are described and illustrated with results from the 2008 YM PA.
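The separation into an aleatory and an epistemic probability space described above is commonly implemented as a nested ("double-loop") Monte Carlo sampling: epistemic parameters are drawn in an outer loop, and for each draw the aleatory variability is sampled in an inner loop. The consequence function and distributions below are hypothetical, not those of the YM PA:

```python
import numpy as np

rng = np.random.default_rng(0)

def consequence(event_magnitude, model_param):
    # Hypothetical consequence model: outcome grows with event magnitude,
    # scaled by an epistemically uncertain model parameter.
    return model_param * event_magnitude**2

n_epistemic, n_aleatory = 200, 1000
results = []
for _ in range(n_epistemic):                   # outer loop: epistemic uncertainty
    theta = rng.uniform(0.8, 1.2)              # uncertain model parameter
    mags = rng.exponential(scale=1.0, size=n_aleatory)  # aleatory events
    results.append(consequence(mags, theta).mean())     # mean over aleatory space

results = np.array(results)
# Distribution over epistemic uncertainty of the aleatory-mean consequence:
median = np.median(results)
q05, q95 = np.percentile(results, [5, 95])
```

Each outer iteration yields one "possible world"; the spread across iterations expresses epistemic uncertainty about the aleatory-averaged consequence.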
Samad, Noor Asma Fazli Abdul; Sin, Gürkan; Gernaey, Krist V; Gani, Rafiqul
2013-11-01
This paper presents the application of uncertainty and sensitivity analysis as part of a systematic model-based process monitoring and control (PAT) system design framework for crystallization processes. For the uncertainty analysis, the Monte Carlo procedure is used to propagate input uncertainty, while for sensitivity analysis, global methods including the standardized regression coefficients (SRC) and Morris screening are used to identify the most significant parameters. The potassium dihydrogen phosphate (KDP) crystallization process is used as a case study, both in open-loop and closed-loop operation. In the uncertainty analysis, the impact on the predicted output of uncertain parameters related to the nucleation and the crystal growth model has been investigated for both a one- and two-dimensional crystal size distribution (CSD). The open-loop results show that the input uncertainties lead to significant uncertainties on the CSD, with appearance of a secondary peak due to secondary nucleation for both cases. The sensitivity analysis indicated that the most important parameters affecting the CSDs are nucleation order and growth order constants. In the proposed PAT system design (closed-loop), the target CSD variability was successfully reduced compared to the open-loop case, also when considering uncertainty in nucleation and crystal growth model parameters. The latter forms a strong indication of the robustness of the proposed PAT system design in achieving the target CSD and encourages its transfer to full-scale implementation. Copyright © 2013 Elsevier B.V. All rights reserved.
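A minimal sketch of the SRC part of such a sensitivity analysis: regress the z-scored output on the z-scored inputs over the Monte Carlo sample, then rank parameters by coefficient magnitude. The toy model below is hypothetical, not the KDP crystallization model:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 5000

# Hypothetical two-parameter model in which b dominates the output.
a = rng.normal(1.0, 0.1, N)
b = rng.normal(2.0, 0.1, N)
y = a + 5.0 * b + rng.normal(0, 0.01, N)

def src(inputs, output):
    """Standardized regression coefficients via least squares on z-scored data."""
    X = np.column_stack([(x - x.mean()) / x.std(ddof=1) for x in inputs])
    z = (output - output.mean()) / output.std(ddof=1)
    beta, *_ = np.linalg.lstsq(X, z, rcond=None)
    return beta

beta = src([a, b], y)
# |beta| for b far exceeds that for a, so b is the influential parameter;
# a sum of squared SRCs near 1 indicates an approximately linear model.
```

When the squared SRCs do not sum close to 1, the model is too nonlinear for SRC ranking, which is when screening methods such as Morris become preferable.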
Gustafsson, Johan; Brolin, Gustav; Cox, Maurice; Ljungberg, Michael; Johansson, Lena; Sjögreen Gleisner, Katarina
2015-11-01
A computer model of a patient-specific clinical 177Lu-DOTATATE therapy dosimetry system is constructed and used to investigate the variability of renal absorbed dose and biologically effective dose (BED) estimates. As patient models, three anthropomorphic computer phantoms coupled to a pharmacokinetic model of 177Lu-DOTATATE are used. Aspects included in the dosimetry-process model are the gamma-camera calibration via measurement of the system sensitivity, selection of imaging time points, generation of mass-density maps from CT, SPECT imaging, volume-of-interest delineation, calculation of the absorbed-dose rate via a combination of local energy deposition for electrons and Monte Carlo simulations of photons, curve fitting, and integration to absorbed dose and BED. By introducing variabilities in these steps, the combined uncertainty in the output quantity is determined. The importance of different sources of uncertainty is assessed by observing the decrease in standard deviation when a particular source is removed. The obtained absorbed dose and BED standard deviations are approximately 6%, and slightly higher if the root-mean-square error is considered. The most important sources of variability are the compensation for partial-volume effects via a recovery coefficient and the gamma-camera calibration via the system sensitivity.
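The source-removal assessment described above (observing the drop in standard deviation when one variability source is fixed) can be sketched with a toy multiplicative error chain. The source names and relative uncertainties below are assumptions for illustration, not the paper's values:

```python
import numpy as np

rng = np.random.default_rng(7)
N = 200_000

# Relative standard uncertainties of three hypothetical pipeline steps:
sources = {"calibration": 0.03, "recovery": 0.05, "curve_fit": 0.02}

def output_sd(disabled=None):
    """Std. dev. of a dose-like output built as a product of unit-mean
    factors; 'disabled' fixes one source at its nominal value."""
    out = np.ones(N)
    for name, rel_u in sources.items():
        if name != disabled:
            out *= rng.normal(1.0, rel_u, N)
    return out.std(ddof=1)

total = output_sd()
for name in sources:
    print(f"removing {name}: sd {total:.4f} -> {output_sd(disabled=name):.4f}")
```

Removing the largest source ("recovery" in this toy budget) produces the biggest drop, reproducing the ranking logic of the paper's uncertainty budget.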
DEFF Research Database (Denmark)
Rasmussen, Anders Rønne; Sørensen, Mads Peter; Gaididei, Yuri Borisovich
2008-01-01
, the model equation considered here is capable of describing waves propagating in opposite directions. Owing to the Hamiltonian structure of the proposed model equation, the front solution is in agreement with the classical Rankine-Hugoniot relations. The exact front solution propagates at supersonic speed...
Indian Academy of Sciences (India)
To reflect this uncertainty in the climate scenarios, the use of AOGCMs that explicitly simulate the carbon cycle and the chemistry of all the substances is needed. The Hadley Centre has developed a version of the climate model that allows the effect of climate change on the carbon cycle, and its feedback into climate, to be ...
A Sparse Stochastic Collocation Technique for High-Frequency Wave Propagation with Uncertainty
Malenova, G.
2016-09-08
We consider the wave equation with highly oscillatory initial data, where there is uncertainty in the wave speed, initial phase, and/or initial amplitude. To estimate quantities of interest related to the solution and their statistics, we combine a high-frequency method based on Gaussian beams with sparse stochastic collocation. Although the wave solution, uϵ, is highly oscillatory in both physical and stochastic spaces, we provide theoretical arguments for simplified problems and numerical evidence that quantities of interest based on local averages of |uϵ|2 are smooth, with derivatives in the stochastic space uniformly bounded in ϵ, where ϵ denotes the short wavelength. This observable related regularity makes the sparse stochastic collocation approach more efficient than Monte Carlo methods. We present numerical tests that demonstrate this advantage.
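The efficiency argument above (smooth dependence of the quantity of interest on the stochastic variable favors collocation over Monte Carlo) can be illustrated in one stochastic dimension. The integrand below is an assumed smooth stand-in, not the Gaussian-beam observable itself:

```python
import numpy as np

def qoi(theta):
    # Smooth stand-in for a quantity of interest in the stochastic
    # variable theta (e.g. a local average of |u|^2, argued to be smooth).
    return np.exp(-theta**2)

# Stochastic collocation: Gauss-Legendre nodes for theta ~ Uniform(-1, 1).
nodes, weights = np.polynomial.legendre.leggauss(8)
E_colloc = 0.5 * np.sum(weights * qoi(nodes))

# Monte Carlo reference with a million samples.
rng = np.random.default_rng(0)
E_mc = qoi(rng.uniform(-1.0, 1.0, 1_000_000)).mean()
# Eight collocation nodes already match the brute-force MC estimate,
# because the integrand is smooth in the stochastic variable.
```

For smooth integrands, Gaussian quadrature converges spectrally in the number of nodes, whereas Monte Carlo error shrinks only like N^(-1/2); sparse grids extend this advantage to several stochastic dimensions.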
Energy Technology Data Exchange (ETDEWEB)
Geffray, Clotaire Clement
2017-03-20
The work presented here constitutes an important step towards validating the use of coupled system thermal-hydraulics and computational fluid dynamics codes for the simulation of complex flows in liquid metal cooled pool-type facilities. First, a set of methods suited to uncertainty and sensitivity analysis and to validation activities, with regard to the specific constraints of working with coupled and expensive-to-run codes, is proposed. Then, these methods are applied to the ATHLET-ANSYS CFX model of the TALL-3D facility. Several transients performed at the latter facility are investigated. The results are presented, discussed and compared to the experimental data. Finally, assessments of the validity of the selected methods and of the quality of the model are offered.
Homogeneous Minor Actinide Transmutation in SFR: Neutronic Uncertainties Propagation with Depletion
International Nuclear Information System (INIS)
Buiron, L.; Plisson-Rieunier, D.
2015-01-01
In the frame of next-generation fast reactor design, the minimisation of nuclear waste production is one of the key objectives of current R&D. Among the possibilities studied at CEA, minor actinide multi-recycling is the most promising industrial route achievable in the near term. Two main management options are considered: - Multi-recycling in a homogeneous way (minor actinides diluted in the driver fuel). While this solution can help achieve high transmutation rates, the negative impact of minor actinides on safety coefficients allows only a small fraction of the total heavy mass to be loaded in the core (∼ a few %). - Multi-recycling in a heterogeneous way by means of Minor Actinide Bearing Blankets (MABB) located at the core periphery. This solution offers more flexibility than the previous one, allowing minor actinide management to be fully decoupled from the core fuel. As the impact on feedback coefficients is small, a larger initial minor actinide mass can be loaded in this configuration. Starting from a breakeven Sodium Fast Reactor designed jointly by CEA, Areva and EdF teams, the so-called SFR V2B, transmutation performances have been studied in the frame of the French fleet for both options and various specific isotopic managements (all minor actinides, americium only, etc.). Using these results, a sensitivity study has been performed to assess neutronic uncertainties (i.e., coming from cross sections) on the mass balance of the most attractive configurations. This work is based on a new implementation of sensitivity on concentration with depletion in the ERANOS code package. Uncertainties on isotope masses at the end of irradiation, using various variance-covariance data, are discussed. (authors)
Eichner, Bernhard; Koller, Julian; Kammerlander, Johannes; Schöber, Johannes; Achleitner, Stefan
2017-04-01
As mountain streams are sources of both water and sediment, they strongly influence the whole downstream river network. Besides large flood events, the continuous transport of sediments during the year is the focus of this work. Since small mountain streams are usually not gauged, spatially distributed hydrological models are used to assess the internal discharges triggering the sediment transport. In general, model calibration is never perfect and is focused on specific criteria such as mass balance or peak flow. The remaining uncertainties influence the subsequent applications in which the simulation results are used. The presented work focuses on the question of how uncertainties in hydrological modelling impact the subsequent simulation of sediment transport. The applied auto-calibration by means of Monte Carlo simulation optimizes the model parameters for different aspects (efficiency criteria) of the runoff time series. In this case, we investigated the impacts of different hydrological criteria on a subsequent bed load transport simulation in the catchment of the Längentaler Bach, a small catchment in the Stubai Alps. The hydrologic model used, HQSim, is a physically based, semi-distributed water balance model. Different hydrologic response units (HRUs), which are characterized by elevation, orientation, vegetation, soil type and depth, drain with various delays into specified river reaches. The runoff results of the Monte Carlo simulation are evaluated against a runoff gauge where water is collected by the Tiroler Wasserkraft AG (TIWAG). Using the Nash-Sutcliffe efficiency (NSE) on events and the main runoff period (summer), the weighted root mean squared error (RMSE) on the duration curve, and a combination of different criteria, a set of best-fit parametrizations with varying runoff series was obtained as input for the bed load transport simulation. These simulations are performed with sedFlow, a tool especially developed for bed load
International Nuclear Information System (INIS)
Machado, Marcio Dornellas
1998-09-01
One of the most important thermal-hydraulic safety parameters is the DNBR (Departure from Nucleate Boiling Ratio). The current methodology in use at Eletronuclear to determine the DNBR is extremely conservative and may result in penalties to the reactor power due to an increased plugging level of steam generator tubes. This work uses a new methodology to evaluate the DNBR, named mini-RTDP. The standard methodology (STDP) currently in use establishes a design limit value which cannot be surpassed. This limit value is determined taking into account the uncertainties of the empirical correlation used in the COBRA IIIC/MIT code, modified for Angra 1 conditions. The correlation used is Westinghouse's W-3, and the minimum DNBR (MDNBR) value cannot be less than 1.3. The new methodology reduces the excessive level of conservatism associated with the parameters used in the DNBR calculation, which take their most unfavorable values in the STDP methodology, by using their best-estimate values. The final goal is to obtain a new DNBR design limit which will provide a margin gain due to the more realistic parameter values used in the methodology. (author)
2016-04-25
ElectroMagnetic, Multipath propagation, Reflection-diffraction, SAR signal processing ... reflections and diffractions. However, this model remains valid for indoor propagation. From this field, we can then detect and predict precisely the ... sensing through obstacles, i.e. walls and doors, using microwaves is becoming an effective means supporting a wide range of civilian and military
Propagation of global model uncertainties in aerosol forecasting: A field practitioner's opinion
Reid, J. S.; Benedetti, A.; Bozzo, A.; Brooks, I. M.; Brooks, M.; Colarco, P. R.; daSilva, A.; Flatau, M. K.; Kuehn, R.; Hansen, J.; Holz, R.; Kaku, K.; Lynch, P.; Remy, S.; Rubin, J. I.; Sekiyama, T. T.; Tanaka, T. Y.; Zhang, J.
2015-12-01
While aerosol forecasting has its own host of aerosol source, sink and microphysical challenges to overcome, ultimately any numerical weather prediction (NWP) based aerosol model can be no better than its underlying meteorology. However, the scorecard elements that drive NWP model development have varying relationships to the key uncertainties and biases that are of greatest concern to aerosol forecasting. Here we provide opinions from member developers of the International Cooperative for Aerosol Prediction (ICAP) on NWP deficiencies related to multi-species aerosol forecasting, as well as on the relevance of current NWP scorecard elements to aerosol forecasting. Comparisons of field mission data to simulations are used to demonstrate these opinions and to show how shortcomings in individual processes in the global models cascade into aerosol prediction. While a number of sensitivities will be outlined, as one would expect, the most important processes relate to aerosol sources, sinks and, in the context of data assimilation, aerosol hygroscopicity. Thus, the pressing needs in the global models relate to boundary layer and convective processes in the context of large-scale waves. Examples will be derived from tropical to polar field measurements, from simpler to more complex, including a) network data on dust emissions and transport from Saharan Africa; b) boundary layer development, instability, and deep convection in the United States during the Studies of Emissions and Atmospheric Composition, Clouds and Climate Coupling by Regional Surveys (SEAC4RS); and c) 7 Southeast Asian Studies (7SEAS) data on aerosol influences on maritime convection up-scaled through tropical waves. While the focus of this talk is how improved meteorological model processes are important to aerosol modeling, we conclude with recent findings of the Arctic Summer Cloud Ocean Study (ASCOS), which demonstrate how aerosol processes may be important to global model simulations of polar cloud, surface energy and subsequently
DEFF Research Database (Denmark)
Jurado-Navas, Antonio
2015-01-01
Recently, a new and generalized statistical model, called Málaga or simply M distribution, has been proposed to characterize the irradiance fluctuations of an unbounded optical wavefront (plane and spherical waves) propagating through a turbulent medium under all irradiance fluctuation conditions...
Bennett, Iain; Paracha, Noman; Abrams, Keith; Ray, Joshua
2018-01-01
Rank Preserving Structural Failure Time (RPSFT) models are among the most commonly used statistical methods to adjust for treatment switching in oncology clinical trials. The method is often applied in a decision analytic model without appropriately accounting for the additional uncertainty when determining the allocation of health care resources. The aim of this study is to describe novel approaches to adequately account for uncertainty when using a Rank Preserving Structural Failure Time model in a decision analytic model. Using two examples, we tested and compared the performance of a novel test-based method with the resampling bootstrap method and with the conventional approach of no adjustment. In the first example, we simulated life expectancy using a simple decision analytic model based on a hypothetical oncology trial with treatment switching. In the second example, we applied the adjustment method to published data when no individual patient data were available. Mean estimates of overall and incremental life expectancy were similar across methods. However, the bootstrapped and test-based estimates consistently produced greater estimates of uncertainty compared with the estimate without any adjustment. Similar results were observed when using the test-based approach on published data, showing that failing to adjust for uncertainty led to smaller confidence intervals. Both the bootstrapping and test-based approaches provide a solution to appropriately incorporate uncertainty, with the benefit that the latter can be implemented by researchers in the absence of individual patient data. Copyright © 2018 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
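The resampling bootstrap compared in the abstract above can be sketched generically: resample patients with replacement, re-run the decision model on each resample, and read the uncertainty off the resulting distribution. The data and the "decision model" below are synthetic stand-ins, not an RPSFT implementation:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic individual patient survival times (years) from a trial arm:
times = rng.exponential(scale=2.0, size=300)

def life_expectancy(sample):
    # Stand-in for the full decision model: here simply mean survival.
    return sample.mean()

point = life_expectancy(times)

# Bootstrap: resample patients with replacement and re-run the model,
# so estimation uncertainty is carried into the decision-model output.
B = 2000
boot = np.array([
    life_expectancy(rng.choice(times, size=times.size, replace=True))
    for _ in range(B)
])
ci_lo, ci_hi = np.percentile(boot, [2.5, 97.5])
# The bootstrap interval exposes uncertainty that a single point estimate
# fed into the decision model would hide.
```

In the real application each bootstrap replicate would re-estimate the RPSFT adjustment before the decision model is run, so that the switching adjustment's own uncertainty is propagated as well.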
Petrov, Pavel S; Sturm, Frédéric
2016-03-01
A problem of sound propagation in a shallow-water waveguide with a weakly sloping penetrable bottom is considered. The adiabatic mode parabolic equations are used to approximate the solution of the three-dimensional (3D) Helmholtz equation by modal decomposition of the acoustic pressure field. The mode amplitudes satisfy parabolic equations that admit analytical solutions in the special case of the 3D wedge. Using the analytical formula for modal amplitudes, an explicit and remarkably simple expression for the acoustic pressure in the wedge is obtained. The proposed solution is validated by the comparison with a solution of the 3D penetrable wedge problem obtained using a fully 3D parabolic equation that includes a leading-order cross term correction.
Analytical approach of laser beam propagation in the hollow polygonal light pipe.
Zhu, Guangzhi; Zhu, Xiao; Zhu, Changhong
2013-08-10
An analytical method is developed for studying the light distribution on the output end of a hollow n-sided polygonal light pipe illuminated by a light source with a Gaussian distribution. Mirror transformation matrices and a special algorithm for removing void virtual images are created to acquire the location and direction vector of each effective virtual image on the entrance plane. The analytical method is validated against Monte Carlo ray tracing, and four typical cases are discussed. The analytical results indicate that the uniformity of the light distribution varies with the structural and optical parameters of the hollow n-sided polygonal light pipe and of the Gaussian light source. The analytical approach will be useful for designing and choosing hollow n-sided polygonal light pipes, especially for high-power laser beam homogenization techniques.
International Nuclear Information System (INIS)
Raupach, Rainer; Flohr, Thomas G
2011-01-01
We analyze the signal and noise propagation of differential phase-contrast computed tomography (PCT) compared with conventional attenuation-based computed tomography (CT) from a theoretical point of view. This work focuses on grating-based differential phase-contrast imaging. A mathematical framework is derived that is able to analytically predict the relative performance of both imaging techniques, in the sense of the relative contrast-to-noise ratio, for the contrast of any two materials. Two fundamentally different properties of PCT compared with CT are identified. First, the noise power spectra show qualitatively different characteristics, implying a resolution-dependent performance ratio. The break-even point is derived analytically as a function of system parameters such as geometry and visibility. A superior performance of PCT compared with CT can only be achieved at a sufficiently high spatial resolution. Second, due to the periodicity of phase information, which is non-ambiguous only in a bounded interval, statistical phase wrapping can occur. This effect causes a collapse of information propagation for low signals, which limits the applicability of phase-contrast imaging at low dose.
Kirchengast, G.; Schwaerz, M.; Fritzer, J.; Schwarz, J.; Scherllin-Pirscher, B.; Steiner, A. K.
2013-12-01
influences) exists so far. Establishing such a trace for the first time, in the form of the Reference Occultation Processing System rOPS, providing reference RO data for climate science and applications, is therefore a current cornerstone endeavor at the Wegener Center over 2011 to 2015, supported also by colleagues from other key groups at EUMETSAT Darmstadt, UCAR Boulder, DMI Copenhagen, ECMWF Reading, IAP Moscow, AIUB Berne, and RMIT Melbourne. With the rOPS we undertake to process the full chain from the SI-tied raw data to the atmospheric ECVs with integrated uncertainty propagation. We summarize where we currently stand in quantifying RO accuracy and long-term stability, and then discuss the concept, development status and initial results of the rOPS, with emphasis on its novel capability to provide SI-tied reference data with integrated uncertainty estimation. We comment on how these data can provide ground-breaking support for challenges such as climate model evaluation, anthropogenic change detection and attribution, and calibration of complementary climate observing systems.
Directory of Open Access Journals (Sweden)
M. W. Rotach
2012-08-01
D-PHASE was a Forecast Demonstration Project of the World Weather Research Programme (WWRP) related to the Mesoscale Alpine Programme (MAP). Its goal was to demonstrate the reliability and quality of operational forecasting of orographically influenced (determined) precipitation in the Alps and its consequences on the distribution of run-off characteristics. A special focus was, of course, on heavy-precipitation events.
The D-PHASE Operations Period (DOP) ran from June to November 2007, during which an end-to-end forecasting system was operated covering many individual catchments in the Alps, with their water authorities, civil protection organizations or other end users. The forecasting system's core piece was a Visualization Platform where precipitation and flood warnings from some 30 atmospheric and 7 hydrological models (both deterministic and probabilistic) and corresponding model fields were displayed in uniform and comparable formats. Also, meteograms, nowcasting information and end user communication were made available to all the forecasters, users and end users. D-PHASE information was assessed and used by some 50 different groups, ranging from atmospheric forecasters to civil protection authorities or water management bodies.
In the present contribution, D-PHASE is briefly presented along with its outstanding scientific results and, in particular, the lessons learnt with respect to uncertainty propagation. A focus is thereby on the transfer of ensemble prediction information into the hydrological community and its use with respect to other aspects of societal impact. Objective verification of forecast quality is contrasted with subjective quality assessments during the project (end user workshops, questionnaires), and some general conclusions concerning forecast demonstration projects are drawn.
Liubartseva, Svitlana; Coppini, Giovanni; Ciliberti, Stefania Angela; Lecci, Rita
2017-04-01
In operational oil spill modeling, MEDSLIK-II (De Dominicis et al., 2013) focuses on the reliability of the oil drift and fate predictions, routinely fed by an operational oceanographic and atmospheric forecasting chain. Uncertainty calculations enhance oil spill forecast efficiency, supplying probability maps to quantify the propagation of various uncertainties. Recently, we have developed a methodology that allows users to evaluate the variability of the oil drift forecast caused by uncertain data on the initial oil spill conditions (Liubartseva et al., 2016). One of the key methodological aspects is a reasonable choice of the parameter perturbation scheme. In the case of the starting oil spill location and time, these scalars might be treated as independent random parameters. If we want to perturb the underlying ocean currents and wind, we have to deal with deterministic vector parameters. To a first approximation, we suggest rolling forecasts as a set of perturbed ocean currents and wind. This approach does not need any extra hydrodynamic calculations, and it is quick enough to be performed in web-based applications. The capabilities of the proposed methodology are explored using the Black Sea Forecasting System (BSFS), recently implemented by Ciliberti et al. (2016) for the Copernicus Marine Environment Monitoring Service (http://marine.copernicus.eu/services-portfolio/access-to-products). The BSFS horizontal resolution is 1/36° in the zonal and 1/27° in the meridional direction (ca. 3 km). The vertical domain discretization comprises 31 unevenly spaced vertical levels. Atmospheric wind data are provided by European Centre for Medium-Range Weather Forecasts (ECMWF) forecasts, at 1/8° (ca. 12.5 km) horizontal and 6-hour temporal resolution. A great variety of probability patterns controlled by different underlying flows is represented, including the cyclonic Rim Current, flow bifurcations in anticyclonic eddies (e.g., Sevastopol and Batumi), northwestern shelf circulation, etc
Uncertainty in soil-structure interaction analysis arising from differences in analytical techniques
International Nuclear Information System (INIS)
Maslenikov, O.R.; Chen, J.C.; Johnson, J.J.
1982-07-01
This study addresses uncertainties arising from variations in different modeling approaches to soil-structure interaction of massive structures at a nuclear power plant. To perform a comprehensive systems analysis, it is necessary to quantify, for each phase of the traditional analysis procedure, both the realistic seismic response and the uncertainties associated with it. In this study two linear soil-structure interaction techniques were used to analyze the Zion, Illinois nuclear power plant: a direct method using the FLUSH computer program and a substructure approach using the CLASSI family of computer programs. In-structure response from two earthquakes, one real and one synthetic, was compared. Structure configurations from relatively simple to complicated multi-structure cases were analyzed. The resulting variations help quantify the uncertainty in structure response due to analysis procedures.
Romanofsky, Robert R.
1989-01-01
In this report, a thorough analytical procedure is developed for evaluating the frequency-dependent loss characteristics and effective permittivity of microstrip lines. The technique is based on the measured reflection coefficient of microstrip resonator pairs. Experimental data, including quality factor Q, effective relative permittivity, and fringing for 50-Ω lines on gallium arsenide (GaAs) from 26.5 to 40.0 GHz, are presented. The effects of an imperfect open circuit, coupling losses, and loading of the resonant frequency are considered. A cosine-tapered ridge-guide test fixture is described; it was found to be well suited to device characterization.
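For readers unfamiliar with the resonator technique, the basic extraction step can be sketched as follows; the half-wavelength resonance formula and the example dimensions are generic textbook assumptions, not the corrected procedure of the report (which additionally accounts for coupling losses and loading of the resonant frequency).

```python
C0 = 299_792_458.0  # speed of light in vacuum, m/s

def effective_permittivity(f_res_hz, length_m, delta_l_m=0.0):
    """Effective relative permittivity of a half-wavelength microstrip
    resonator inferred from its measured resonant frequency:
        f_res = C0 / (2 * (L + dL) * sqrt(eps_eff)).
    delta_l_m is an end-fringing length correction (an imperfect open
    circuit makes the line look electrically longer)."""
    l_eff = length_m + delta_l_m
    return (C0 / (2.0 * l_eff * f_res_hz)) ** 2

# e.g. a hypothetical 1.5 mm resonator resonating near 33 GHz on GaAs
eps_eff = effective_permittivity(33.0e9, 1.5e-3)
```

Including a fringing correction delta_l_m lowers the extracted permittivity, which is one reason the imperfect open circuit matters in practice.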
Zhang, Bo; Chen, Tianning; Zhao, Yuyuan; Zhang, Weiyong; Zhu, Jian
2012-09-01
On the basis of the work of Wilson et al. [J. Acoust. Soc. Am. 84, 350-359 (1988)], a more exact numerical approach was constructed for predicting the nonlinear sound propagation and absorption properties of rigid porous media at high sound pressure levels. The numerical solution was validated by the experimental results for sintered fibrous porous steel samples and its predictions were compared with the numerical solution of Wilson et al. An approximate analytical solution was further put forward for the normalized surface acoustic admittance of rigid air-saturated porous materials with infinite thickness, based on the wave perturbation method developed by Lambert and McIntosh [J. Acoust. Soc. Am. 88, 1950-1959 (1990)]. Comparisons were made with the numerical results.
International Nuclear Information System (INIS)
Kiani, Keivan; Shodja, Hossein M.
2011-01-01
Highlights: ► Response of RC structures to macrocell corrosion of a rebar is studied analytically. ► The problem is solved prior to the onset of microcrack propagation. ► Suitable Love's potential functions are used to study the steel-rust-concrete media. ► The role of crucial factors on the time of onset of concrete cracking is examined. ► The effect of vital factors on the maximum radial stress of concrete is explored. - Abstract: Assessment of the macrocell corrosion which deteriorates reinforced concrete (RC) structures has attracted the attention of many researchers during recent years. In this type of rebar corrosion, the reduction in the cross-section of the rebar is significantly accelerated due to the large ratio of the cathode's area to the anode's area. In order to examine the problem, an analytical solution is proposed for predicting the response of the RC structure from the time of steel depassivation to the stage just prior to the onset of microcrack propagation. To this end, a circular cylindrical RC member under axisymmetric macrocell corrosion of the reinforcement is considered. Both cases of symmetric and asymmetric rebar corrosion along the length of the anode zone are studied. In accordance with experimentally observed data, corrosion products are modeled as a thin layer with a nonlinear stress–strain relation. The exact expressions of the elastic fields associated with the steel and concrete media are obtained using Love's potential function. By imposing the boundary conditions, the resulting set of nonlinear equations is solved in each time step by Newton's method. The effects of the key parameters that play a dominant role in the time of the onset of concrete cracking and the maximum radial stress field of the concrete are examined.
Pereira, Daniel; Haiat, Guillaume; Fernandes, Julio; Belanger, Pierre
2017-04-01
Axial transmission techniques have been extensively studied for cortical bone quality assessment. However, modeling ultrasonic guided wave propagation in such a complex medium remains challenging. The aim of this paper is to develop a semi-analytical finite element method to simulate the propagation of guided waves in an irregular, multi-layer, and heterogeneous bone cross-section modeled with anisotropic and viscoelastic material properties. The accuracy of the simulations was verified against conventional time-domain three-dimensional finite element modeling. The method was applied in the context of axial transmission in bone to investigate the feasibility of the first arrival signal (FAS) for monitoring degradation of intracortical properties at low frequency. Different physiopathological conditions for the intracortical region, varying from healthy to osteoporotic, were monitored through FAS velocity using a 10-cycle tone burst excitation centered at 32.5 kHz. The results show that the variation in FAS velocity is mainly associated with four of the eight modes supported by the waveguide, with velocity values between 550 and 700 m/s across the different scenarios. Furthermore, the FAS velocity is shown to be associated with the group velocity of the mode with the highest relative amplitude contribution in each studied scenario. However, because the mode with the highest contribution evolves across scenarios, the FAS velocity is of limited use for discriminating intracortical bone properties at low frequency.
Peddeti, Kranthi; Santhanam, Sridhar
2018-02-01
Acoustoelastic techniques have been recently used to characterize the state of prestress in structures such as plates. The velocity of guided wave modes propagating through plates is sensitive to the magnitude and orientation of the initial state of stress. Dispersion curves for phase velocities of plate guided waves can be computed using the superposition of partial bulk waves (SPBW) method. Here, a semi-analytical finite element (SAFE) method is formulated for the acoustoelastic problem of guided waves in weakly nonlinear elastic plates. The SAFE formulation is shown to provide phase velocity dispersion curve results identical with those provided by the SPBW method for the problem of a plate under a uniaxial and uniform tensile stress. Analytical phase and group velocity dispersion curves are also obtained for a plate with an initial prestress gradient through its thickness using the SAFE method. The magnitude of the prestress gradient is shown to have a significant effect on phase and group velocities of the fundamental and first order Lamb modes, only in certain frequency-thickness regimes.
Valier-Brasier, Tony; Conoir, Jean-Marc; Coulouvrat, François; Thomas, Jean-Louis
2015-10-01
Sound propagation in dilute suspensions of small spheres is studied using two models: a hydrodynamic model based on the coupled phase equations and an acoustic model based on the ECAH (ECAH: Epstein-Carhart-Allegra-Hawley) multiple scattering theory. The aim is to compare both models through the study of three fundamental kinds of particles: rigid particles, elastic spheres, and viscous droplets. The hydrodynamic model is based on a Rayleigh-Plesset-like equation generalized to elastic spheres and viscous droplets. The hydrodynamic forces for elastic spheres are introduced by analogy with those of droplets. The ECAH theory is also modified in order to take into account the velocity of rigid particles. Analytical calculations performed for long wavelength, low dilution, and weak absorption in the ambient fluid show that both models are strictly equivalent for the three kinds of particles studied. The analytical calculations show that dilatational and translational mechanisms are modeled in the same way by both models. The effective parameters of dilute suspensions are also calculated.
Directory of Open Access Journals (Sweden)
Sergey F Pravdin
We develop a numerical approach based on our recent analytical model of fiber structure in the left ventricle of the human heart. A special curvilinear coordinate system is proposed to analytically include realistic ventricular shape and myofiber directions. With this anatomical model, electrophysiological simulations can be performed on a rectangular coordinate grid. We apply our method to study the effect of fiber rotation and electrical anisotropy of cardiac tissue (i.e., the ratio of the conductivity coefficients along and across the myocardial fibers) on wave propagation using the ten Tusscher-Panfilov (2006) ionic model for human ventricular cells. We show that fiber rotation increases the speed of cardiac activation and attenuates the effects of anisotropy. Our results show that the fiber rotation in the heart is an important factor underlying cardiac excitation. We also study scroll wave dynamics in our model and show the drift of a scroll wave filament whose velocity depends non-monotonically on the fiber rotation angle; the period of scroll wave rotation decreases with an increase of the fiber rotation angle; an increase in anisotropy may cause the breakup of a scroll wave, similar to the mother rotor mechanism of ventricular fibrillation.
Pravdin, Sergey F; Dierckx, Hans; Katsnelson, Leonid B; Solovyova, Olga; Markhasin, Vladimir S; Panfilov, Alexander V
2014-01-01
We develop a numerical approach based on our recent analytical model of fiber structure in the left ventricle of the human heart. A special curvilinear coordinate system is proposed to analytically include realistic ventricular shape and myofiber directions. With this anatomical model, electrophysiological simulations can be performed on a rectangular coordinate grid. We apply our method to study the effect of fiber rotation and electrical anisotropy of cardiac tissue (i.e., the ratio of the conductivity coefficients along and across the myocardial fibers) on wave propagation using the ten Tusscher-Panfilov (2006) ionic model for human ventricular cells. We show that fiber rotation increases the speed of cardiac activation and attenuates the effects of anisotropy. Our results show that the fiber rotation in the heart is an important factor underlying cardiac excitation. We also study scroll wave dynamics in our model and show the drift of a scroll wave filament whose velocity depends non-monotonically on the fiber rotation angle; the period of scroll wave rotation decreases with an increase of the fiber rotation angle; an increase in anisotropy may cause the breakup of a scroll wave, similar to the mother rotor mechanism of ventricular fibrillation.
Analytic result for the one-loop scalar pentagon integral with massless propagators
International Nuclear Information System (INIS)
Kniehl, Bernd A.; Tarasov, Oleg V.
2010-01-01
The method of dimensional recurrences proposed by one of the authors (O.V. Tarasov, 1996) is applied to the evaluation of the pentagon-type scalar integral with on-shell external legs and massless internal lines. For the first time, an analytic result valid for arbitrary space-time dimension d and five arbitrary kinematic variables is presented. An explicit expression in terms of the Appell hypergeometric function F3 and the Gauss hypergeometric function 2F1, both admitting one-fold integral representations, is given. In the case when one kinematic variable vanishes, the integral reduces to a combination of Gauss hypergeometric functions 2F1. For the case when one scalar invariant is large compared to the others, the asymptotic values of the integral in terms of Gauss hypergeometric functions 2F1 are presented in d=2-2ε, 4-2ε, and 6-2ε dimensions. For multi-Regge kinematics, the asymptotic value of the integral in d=4-2ε dimensions is given in terms of the Appell function F3 and the Gauss hypergeometric function 2F1. (orig.)
Analytic result for the one-loop scalar pentagon integral with massless propagators
International Nuclear Information System (INIS)
Kniehl, Bernd A.; Tarasov, Oleg V.
2010-01-01
The method of dimensional recurrences proposed by Tarasov (1996, 2000) is applied to the evaluation of the pentagon-type scalar integral with on-shell external legs and massless internal lines. For the first time, an analytic result valid for arbitrary space-time dimension d and five arbitrary kinematic variables is presented. An explicit expression in terms of the Appell hypergeometric function F3 and the Gauss hypergeometric function 2F1, both admitting one-fold integral representations, is given. In the case when one kinematic variable vanishes, the integral reduces to a combination of Gauss hypergeometric functions 2F1. For the case when one scalar invariant is large compared to the others, the asymptotic values of the integral in terms of Gauss hypergeometric functions 2F1 are presented in d=2-2ε, 4-2ε, and 6-2ε dimensions. For multi-Regge kinematics, the asymptotic value of the integral in d=4-2ε dimensions is given in terms of the Appell function F3 and the Gauss hypergeometric function 2F1.
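The one-fold integral representation of the Gauss hypergeometric function 2F1 mentioned in both records is easy to check numerically; the following sketch compares the defining power series with Euler's integral representation (valid for c > b > 0 and |z| < 1), using only the standard library.

```python
import math

def hyp2f1_series(a, b, c, z, terms=200):
    """Gauss hypergeometric function 2F1 via its power series (|z| < 1)."""
    s, term = 0.0, 1.0
    for n in range(terms):
        s += term
        term *= (a + n) * (b + n) / ((c + n) * (n + 1)) * z
    return s

def hyp2f1_integral(a, b, c, z, n=20001):
    """Euler's one-fold integral representation (requires c > b > 0):
    2F1(a,b;c;z) = G(c)/(G(b)G(c-b)) * int_0^1 t^(b-1) (1-t)^(c-b-1)
                                                 (1 - z*t)^(-a) dt,
    evaluated with a midpoint rule to avoid the endpoint singularities."""
    pref = math.gamma(c) / (math.gamma(b) * math.gamma(c - b))
    h = 1.0 / n
    total = 0.0
    for k in range(n):
        t = (k + 0.5) * h
        total += t ** (b - 1) * (1 - t) ** (c - b - 1) * (1 - z * t) ** (-a)
    return pref * total * h

# 2F1(1,1;2;z) = -ln(1-z)/z, so at z = 1/2 both routes give 2 ln 2
val = hyp2f1_series(1.0, 1.0, 2.0, 0.5)
```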
Dalarsson, Mariana; Tassin, Philippe
2009-04-13
We have investigated the transmission and reflection properties of structures incorporating left-handed materials with graded index of refraction. We present an exact analytical solution to Helmholtz' equation for a graded index profile changing according to a hyperbolic tangent function along the propagation direction. We derive expressions for the field intensity along the graded index structure, and we show excellent agreement between the analytical solution and the corresponding results obtained by accurate numerical simulations. Our model straightforwardly allows for arbitrary spectral dispersion.
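A numerical cross-check of the kind mentioned above can be sketched by integrating the 1-D Helmholtz equation directly across a hyperbolic-tangent index profile; this toy RK4 marcher uses invented values (lossless media, normal incidence, arbitrary profile parameters) and is not the closed-form solution of the paper.

```python
import math

def field_through_graded_slab(n1=1.5, n2=-1.0, w=0.5e-6,
                              wavelength=1.55e-6,
                              x0=-5e-6, x1=5e-6, steps=20000):
    """Integrate u''(x) + k0^2 n(x)^2 u = 0 with an RK4 marcher across a
    hyperbolic-tangent graded index profile
        n(x) = (n1 + n2)/2 + (n2 - n1)/2 * tanh(x / w),
    going from an ordinary medium (n1) to a left-handed one (n2 < 0).
    Returns positions and the field intensity u^2 along the structure."""
    k0 = 2.0 * math.pi / wavelength

    def n(x):
        return 0.5 * (n1 + n2) + 0.5 * (n2 - n1) * math.tanh(x / w)

    def deriv(x, u, du):
        return du, -((k0 * n(x)) ** 2) * u

    h = (x1 - x0) / steps
    x, u, du = x0, 1.0, 0.0   # arbitrary field and slope at the left edge
    xs, intensity = [], []
    for _ in range(steps):
        k1u, k1d = deriv(x, u, du)
        k2u, k2d = deriv(x + h/2, u + h/2 * k1u, du + h/2 * k1d)
        k3u, k3d = deriv(x + h/2, u + h/2 * k2u, du + h/2 * k2d)
        k4u, k4d = deriv(x + h, u + h * k3u, du + h * k3d)
        u += h/6 * (k1u + 2*k2u + 2*k3u + k4u)
        du += h/6 * (k1d + 2*k2d + 2*k3d + k4d)
        x += h
        xs.append(x)
        intensity.append(u * u)
    return xs, intensity

xs, intensity = field_through_graded_slab()
```

The step size here resolves the local wavelength by three orders of magnitude, so the marcher stays stable across the index zero-crossing.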
Karakoylu, E.; Franz, B.
2016-01-01
A first attempt at quantifying uncertainties in ocean remote sensing reflectance satellite measurements, based on 1000 Monte Carlo iterations. The data source is a SeaWiFS 4-day composite from 2003, and the uncertainty is for remote sensing reflectance (Rrs) at 443 nm.
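A minimal sketch of such a Monte Carlo estimate for a ratio quantity follows; the radiance/irradiance values and error magnitudes are invented placeholders, not SeaWiFS calibration numbers.

```python
import math
import random

def rrs_mc_uncertainty(lw=0.8, ed=120.0, rel_sigma_lw=0.03,
                       rel_sigma_ed=0.02, n=1000, seed=42):
    """Monte Carlo estimate of the uncertainty of Rrs = Lw / Ed at one
    wavelength: perturb water-leaving radiance Lw and downwelling
    irradiance Ed with Gaussian relative errors and take the spread of
    the ratio. All input values and error magnitudes are invented."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        lw_i = rng.gauss(lw, rel_sigma_lw * lw)
        ed_i = rng.gauss(ed, rel_sigma_ed * ed)
        samples.append(lw_i / ed_i)
    mean = sum(samples) / n
    std = math.sqrt(sum((s - mean) ** 2 for s in samples) / (n - 1))
    return mean, std

mean_rrs, sigma_rrs = rrs_mc_uncertainty()
# first-order check: relative spread ~ sqrt(0.03^2 + 0.02^2), i.e. ~3.6 %
```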
A semi-analytical computation of the theoretical uncertainties of the solar neutrino flux
Jørgensen, Andreas C. S.; Christensen-Dalsgaard, Jørgen
2017-11-01
We present a comparison between Monte Carlo simulations and a semi-analytical approach that reproduces the theoretical probability distribution functions of the solar neutrino fluxes, stemming from the pp, pep, hep, 7Be, 8B, 13N, 15O and 17F source reactions. We obtain good agreement between the two approaches. Thus, the semi-analytical method yields confidence intervals that closely match those found based on Monte Carlo simulations, and points towards the same general symmetries of the investigated probability distribution functions. Furthermore, the negligible computational cost of this method is a clear advantage over Monte Carlo simulations, making it trivial to take new observational constraints on the input parameters into account.
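The comparison can be mimicked on a toy model: if a flux depends on independent lognormal inputs through a power law, its log-space spread follows analytically and can be checked against Monte Carlo. The exponents and uncertainties below are hypothetical, not the solar-model values.

```python
import math
import random

def flux_sigma_analytic(alphas, sigmas):
    """Log-space standard deviation of F = prod(p_i ** alpha_i) when the
    inputs p_i are independent and lognormal with log-sd sigma_i."""
    return math.sqrt(sum((a * s) ** 2 for a, s in zip(alphas, sigmas)))

def flux_sigma_mc(alphas, sigmas, n=20000, seed=1):
    """Monte Carlo counterpart: sample the log-flux directly."""
    rng = random.Random(seed)
    logs = [sum(a * rng.gauss(0.0, s) for a, s in zip(alphas, sigmas))
            for _ in range(n)]
    mean = sum(logs) / n
    return math.sqrt(sum((x - mean) ** 2 for x in logs) / (n - 1))

# hypothetical power-law exponents and input log-uncertainties
alphas = [2.7, -1.1, 0.4]
sigmas = [0.02, 0.05, 0.10]
sa = flux_sigma_analytic(alphas, sigmas)
sm = flux_sigma_mc(alphas, sigmas)
```

The analytic route costs a handful of multiplications, which is the "negligible computational cost" advantage the abstract refers to.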
A semi-analytical computation of the theoretical uncertainties of the solar neutrino flux
DEFF Research Database (Denmark)
Jorgensen, Andreas C. S.; Christensen-Dalsgaard, Jorgen
2017-01-01
We present a comparison between Monte Carlo simulations and a semi-analytical approach that reproduces the theoretical probability distribution functions of the solar neutrino fluxes, stemming from the pp, pep, hep, Be-7, B-8, N-13, O-15 and F-17 source reactions. We obtain good agreement between the two approaches. Thus, the semi-analytical method yields confidence intervals that closely match those found based on Monte Carlo simulations, and points towards the same general symmetries of the investigated probability distribution functions. Furthermore, the negligible computational cost of this method is a clear advantage over Monte Carlo simulations, making it trivial to take new observational constraints on the input parameters into account.
DEFF Research Database (Denmark)
Plósz, Benedek; De Clercq, Jeriffa; Nopens, Ingmar
2011-01-01
on characterising particulate organics in wastewater and on bacteria growth is well-established, whereas 1-D SST models and their impact on biomass concentration predictions are still poorly understood. A rigorous assessment of two 1-D SST models is thus presented: one based on hyperbolic (the widely used Takács...... results demonstrates a considerably improved 1-D model realism using the convection-dispersion model in terms of SBH, XTSS,RAS and XTSS,Eff. Third, to assess the propagation of uncertainty derived from settler model structure to the biokinetic model, the impact of the SST model as sub-model in a plant...
DEFF Research Database (Denmark)
Van Bockstal, Pieter Jan; Mortier, Séverine Thérèse F.C.; Corver, Jos
2017-01-01
Traditional pharmaceutical freeze-drying is an inefficient batch process often applied to improve the stability of biopharmaceutical drug products. The freeze-drying process is regulated by the (dynamic) settings of the adaptable process parameters shelf temperature Ts and chamber pressure Pc...... of a freeze-drying process, allowing to quantitatively estimate and control the risk of cake collapse (i.e., the Risk of Failure (RoF)). The propagation of the error on the estimation of the thickness of the dried layer Ldried as function of primary drying time was included in the uncertainty analysis...
Dołęgowska, Sabina; Gałuszka, Agnieszka; Migaszewski, Zdzisław M
2017-12-01
The main source of rare earth elements (REE) in mosses is atmospheric deposition of particles. Sample treatment operations, including shaking, rinsing or washing, which are made in a standard way on moss samples prior to chemical analysis, may remove particles adsorbed onto their tissues. This in turn causes differences in REE concentrations between treated and untreated samples. For the present study, 27 combined moss samples were collected within three wooded areas and prepared for REE determination by ICP-MS using both manual cleaning by shaking and triple rinsing with deionized water. Higher concentrations of REE were found in the manually cleaned samples. The comparison of REE signatures and shale-normalized REE concentration patterns showed that the treatment procedure did not lead to fractionation of REE. All the samples were enriched in medium rare earth elements, and the δMREE factor remained practically unchanged after rinsing. Positive anomalies of Nd, Sm, Eu, Gd, Er and Yb were observed in both manually cleaned and rinsed samples. For all the elements examined, analytical uncertainty was below 3.0%, whereas sample preparation uncertainty computed with the ANOVA, RANOVA, modified RANOVA and range statistics methods varied from 3.5 to 29.7%. In most cases the lowest s_prep values were obtained with the modified RANOVA method.
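The classical one-way ANOVA split that such studies take as a baseline can be sketched as follows; the modified RANOVA and range statistics variants of the paper are not reproduced here, and the example data are invented.

```python
import math

def anova_components(groups):
    """One-way ANOVA split of variance into an analytical (within-group)
    and a sample-preparation (between-group) component for balanced
    duplicate data. groups: list of equal-size replicate lists, one per
    prepared sample."""
    k = len(groups)                       # number of prepared samples
    n = len(groups[0])                    # replicate analyses per sample
    grand = sum(sum(g) for g in groups) / (k * n)
    means = [sum(g) / n for g in groups]
    msb = n * sum((m - grand) ** 2 for m in means) / (k - 1)
    msw = sum((x - m) ** 2
              for g, m in zip(groups, means) for x in g) / (k * (n - 1))
    s_anal = math.sqrt(msw)               # analytical uncertainty
    s_prep = math.sqrt(max((msb - msw) / n, 0.0))  # preparation uncertainty
    return s_anal, s_prep

# invented duplicate data: three samples, two replicate analyses each
s_anal, s_prep = anova_components([[10.0, 10.2], [11.0, 11.2], [12.0, 12.2]])
```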
Post van der Burg, Max; Cullinane Thomas, Catherine; Holcombe, Tracy R.; Nelson, Richard D.
2016-01-01
The Landscape Conservation Cooperatives (LCCs) are a network of partnerships throughout North America that are tasked with integrating science and management to support more effective delivery of conservation at a landscape scale. In order to achieve this integration, some LCCs have adopted the approach of providing their partners with better scientific information in an effort to facilitate more effective and coordinated conservation decisions. Taking this approach has led many LCCs to begin funding research to provide the information for improved decision making. To ensure that funding goes to research projects with the highest likelihood of leading to more integrated broad-scale conservation, some LCCs have also developed approaches for prioritizing which information needs will be of most benefit to their partnerships. We describe two case studies in which decision analytic tools were used to quantitatively assess the relative importance of information for decisions made by partners in the Plains and Prairie Potholes LCC. The results of the case studies point to a few valuable lessons about using these tools with LCCs. Decision analytic tools tend to shift the focus away from research-oriented discussions and toward discussions about how information is used in making better decisions. However, many technical experts do not have enough knowledge about decision-making contexts to fully inform the latter type of discussion. When assessed in the right decision context, however, decision analyses can point out where uncertainties actually affect optimal decisions and where they do not. This helps technical experts understand that not all research is valuable in improving decision making. But perhaps most importantly, our results suggest that decision analytic tools may be more useful to LCCs as a way of developing integrated objectives for coordinating partner decisions across the landscape, rather than simply ranking research priorities.
Shan, Zhendong; Ling, Daosheng
2018-02-01
This article develops an analytical solution for the transient wave propagation of a cylindrical P-wave line source in a semi-infinite elastic solid with a fluid layer. The analytical solution is presented in a simple closed form in which each term represents a transient physical wave. The Scholte equation is derived, through which the Scholte wave velocity can be determined. The Scholte wave is the wave that propagates along the interface between the fluid and solid. To develop the analytical solution, the wave fields in the fluid and solid are defined, their analytical solutions in the Laplace domain are derived using the boundary and interface conditions, and the solutions are then decomposed into series form according to the power series expansion method. Each item of the series solution has a clear physical meaning and represents a transient wave path. Finally, by applying Cagniard's method and the convolution theorem, the analytical solutions are transformed into the time domain. Numerical examples are provided to illustrate some interesting features in the fluid layer, the interface and the semi-infinite solid. When the P-wave velocity in the fluid is higher than that in the solid, two head waves in the solid, one head wave in the fluid and a Scholte wave at the interface are observed for the cylindrical P-wave line source.
International Nuclear Information System (INIS)
Boaratti, Mario Francisco Guerra
2006-01-01
Leaks in pressurized tubes generate acoustic waves that propagate through the tube walls and can be captured by accelerometers or acoustic emission sensors. Knowing how these walls vibrate, or equivalently how these acoustic waves propagate in the material, is fundamental to detecting and localizing the leak source. In this work an analytical model was implemented, through the equations of motion of a cylindrical shell, with the objective of understanding the behavior of the tube surface excited by a point source. Since the cylindrical surface is closed in the circumferential direction, waves beginning their trajectory meet others that have already completed a turn around the shell, in both the clockwise and counterclockwise directions, generating constructive and destructive interference. After sufficient propagation time, peaks and valleys form on the shell surface, which can be visualized through a graphic representation of the analytical solution. The theoretical results were confirmed by measurements in an experimental setup composed of a steel tube terminated in sand boxes, simulating an infinite tube. To determine the location of the point source on the surface, an inverse solution was adopted: given the signals of the sensors placed on the tube surface, the theoretical model is used to determine where the source that generated those signals lies. (author)
Hosseinbor, A. Pasha; Chung, Moo K.; Wu, Yu-Chien; Alexander, Andrew L.
2012-01-01
The ensemble average propagator (EAP) describes the 3D average diffusion process of water molecules, capturing both its radial and angular contents. The EAP can thus provide richer information about complex tissue microstructure properties than the orientation distribution function (ODF), an angular feature of the EAP. Recently, several analytical EAP reconstruction schemes for multiple q-shell acquisitions have been proposed, such as diffusion propagator imaging (DPI) and spherical polar Fourier imaging (SPFI). In this study, a new analytical EAP reconstruction method is proposed, called Bessel Fourier orientation reconstruction (BFOR), whose solution is based on heat equation estimation of the diffusion signal for each shell acquisition, and is validated on both synthetic and real datasets. A significant portion of the paper is dedicated to comparing BFOR, SPFI, and DPI using hybrid, non-Cartesian sampling for multiple b-value acquisitions. Ways to mitigate the effects of Gibbs ringing on EAP reconstruction are also explored. In addition to analytical EAP reconstruction, the aforementioned modeling bases can be used to obtain rotationally invariant q-space indices of potential clinical value, an avenue which has not yet been thoroughly explored. Three such measures are computed: zero-displacement probability (Po), mean squared displacement (MSD), and generalized fractional anisotropy (GFA). PMID:22963853
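Of the three q-space indices, GFA has the simplest closed form: Tuch's ratio of the standard deviation to the root mean square of the ODF sample values. A minimal sketch:

```python
import math

def gfa(odf_samples):
    """Generalized fractional anisotropy of an ODF sampled at n points:
        GFA = sqrt( n * sum((p - mean)^2) / ((n - 1) * sum(p^2)) ).
    A flat (isotropic) ODF gives 0; a single sharp peak approaches 1."""
    n = len(odf_samples)
    mean = sum(odf_samples) / n
    num = n * sum((p - mean) ** 2 for p in odf_samples)
    den = (n - 1) * sum(p * p for p in odf_samples)
    return math.sqrt(num / den)
```

In practice the ODF would be evaluated on a dense spherical sampling scheme; the formula itself is rotationally invariant because it depends only on the distribution of the sampled values.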
Energy Technology Data Exchange (ETDEWEB)
Holland, Michael K. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); O'Rourke, Patrick E. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)
2016-05-04
An SRNL H-Canyon Test Bed performance evaluation project was completed jointly by SRNL and LANL on a prototype monochromatic energy dispersive x-ray fluorescence instrument, the hiRX. A series of uncertainty propagations were generated based upon plutonium and uranium measurements performed using the alpha-prototype hiRX instrument. The data reduction and uncertainty modeling provided in this report were performed by the SRNL authors. Observations and lessons learned from this evaluation were also used to predict the uncertainties that should be achievable at multiple plutonium and uranium concentration levels, provided the instrument hardware and software upgrades recommended by LANL and SRNL are performed.
Gaume, Johan; van Herwijnen, Alec; Chambon, Guillaume; Schweizer, Jürg
2015-04-01
Dry-snow slab avalanches are generally caused by a sequence of fracture processes including (1) failure initiation in a weak snow layer underlying a cohesive slab, (2) crack propagation within the weak layer and (3) slab tensile failure leading to its detachment. During the past decades, theoretical and experimental work has gradually led to a better understanding of the fracture process in snow, which involves the collapse of the structure in the weak layer during fracture. This now allows us to better model failure initiation and the onset of crack propagation, i.e. to estimate the critical length required for crack propagation. However, the most complete model to date, namely the anticrack model, is based on fracture mechanics and is therefore not applicable to avalanche forecasting procedures, which assess snowpack stability in terms of stresses and strength. Furthermore, the anticrack model requires knowledge of the specific fracture energy of the weak layer, which is very difficult to evaluate in practice and very sensitive to the experimental method used. To overcome this limitation, a new and simple analytical model was developed to evaluate the critical length as a function of the mechanical properties of the slab, the strength of the weak layer and the collapse height. This model was inferred from discrete element simulations of the propagation saw test (PST), which reproduce the high porosity, and thus the collapse, of weak snow layers. The analytical model showed very good agreement with PST field data and could thus be used in practice to refine stability indices.
Mendoza Beltran, A.; Heijungs, R.; Guinée, J.; Tukker, A.
2016-01-01
Purpose: Despite efforts to treat uncertainty due to methodological choices in life cycle assessment (LCA), such as standardization, one-at-a-time (OAT) sensitivity analysis, and analytical and statistical methods, no method exists that propagates this source of uncertainty for all relevant processes
International Nuclear Information System (INIS)
Ishida, Hitoshi; Meshii, Toshiyuki
2010-01-01
This study proposes an element size selection method, named the 'Impact-Meshing (IM) method', for finite element wave propagation analysis models. It is characterized by (1) determination of the element division of the model from the strain energy in the whole model, and (2) a static analysis (dynamic analysis in a single time step) with boundary conditions that give the maximum change of displacement in the time increment and the inertial (impact) force caused by that displacement change. In this paper, an application of the IM method to a 3D ultrasonic wave propagation problem in an elastic solid is described. The example showed that the analysis results converged for a model determined by the IM method, and that the calculation time for determining the element subdivision was reduced to about 1/6, since the IM method does not require determining the element subdivision by a dynamic transient analysis with 100 time steps. (author)
Energy Technology Data Exchange (ETDEWEB)
Dunn, Floyd E. [Argonne National Lab. (ANL), Argonne, IL (United States); Hu, Lin-wen [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States). Nuclear Reactor Lab.; Wilson, Erik [Argonne National Lab. (ANL), Argonne, IL (United States)
2016-12-01
The STAT code was written to automate many of the steady-state thermal hydraulic safety calculations for the MIT research reactor, both for conversion of the reactor from high enrichment uranium fuel to low enrichment uranium fuel and for future fuel re-loads after the conversion. A Monte Carlo statistical propagation approach is used to treat uncertainties in important parameters in the analysis. These safety calculations are ultimately intended to protect against high fuel plate temperatures due to critical heat flux or departure from nucleate boiling or onset of flow instability; additional margin is obtained by basing the limiting safety settings on avoiding onset of nucleate boiling (ONB). STAT7 can simultaneously analyze all of the axial nodes of all of the fuel plates and all of the coolant channels for one stripe of a fuel element. The stripes run the length of the fuel, from the bottom to the top. Power splits are calculated for each axial node of each plate to determine how much of the power goes out each face of the plate. By running STAT7 multiple times, full core analysis has been performed by analyzing the margin to ONB for each axial node of each stripe of each plate of each element in the core.
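The statistical propagation idea can be sketched generically: sample the uncertain inputs, evaluate a margin, and read off a conservative quantile. All parameter names, values, and the wall-temperature model below are invented stand-ins for illustration, not STAT7 physics.

```python
import random

def onb_margin_quantile(n=5000, seed=3):
    """Monte Carlo treatment of parameter uncertainty in a thermal margin:
    sample uncertain inputs, evaluate the margin to a limit temperature,
    and return a conservative ~0.1 % lower quantile."""
    rng = random.Random(seed)
    margins = []
    for _ in range(n):
        power_peak = rng.gauss(1.00, 0.05)   # local power peaking factor
        flow = rng.gauss(1.00, 0.03)         # channel flow fraction
        t_onb = 125.0                        # ONB wall temperature limit, degC
        t_wall = 60.0 + 40.0 * power_peak / flow  # toy wall-temperature model
        margins.append(t_onb - t_wall)
    margins.sort()
    return margins[max(0, int(0.001 * n) - 1)]  # ~0.1 % lower quantile

q_margin = onb_margin_quantile()
```

A positive low quantile indicates that, even under unfavourable parameter draws, the computed wall temperature stays below the ONB limit.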
Non-linear Calibration Leads to Improved Correspondence between Uncertainties
DEFF Research Database (Denmark)
Andersen, Jens Enevold Thaulov
2007-01-01
the analysis, the calculation of uncertainties of calibrations must correspond to the uncertainty of unknowns that was determined by many repetitions. Thus, by introducing an average value of the law-of-propagation of errors (LPE) and maintaining the fundamentals of statistics, as manifested by the central...... that the uncertainty of the detector dominates the contributions to the uncertainty budget, and it was proposed that a full analysis of the instrument ought to be performed for every single analyte before measurement. Following this investigation, the homoscedasticity or heteroscedasticity may be identified by residuals......
Nayfeh, A. H.; Kaiser, J. E.; Marshall, R. L.; Hurst, L. J.
1978-01-01
The performance of sound suppression techniques in ducts that produce refraction effects due to axial velocity gradients was evaluated. A computer code based on the method of multiple scales was used to calculate the influence of axial variations due to slow changes in the cross-sectional area as well as transverse gradients due to the wall boundary layers. An attempt was made to verify the analytical model through direct comparison of experimental and computational results and through the analytical determination of the influence of axial gradients on optimum liner properties. However, the analytical studies were unable to examine the influence of non-parallel ducts on the optimum liner conditions. For liner properties not close to optimum, the analytical predictions and the experimental measurements were compared. The circumferential variations of pressure amplitudes and phases at several axial positions were examined in straight and variable-area ducts, in hard-wall and lined sections, with and without a mean flow. Reasonable agreement between the theoretical and experimental results was obtained.
Lanning, R. Nicholas; Xiao, Zhihao; Zhang, Mi; Novikova, Irina; Mikhailov, Eugeniy E.; Dowling, Jonathan P.
2017-07-01
We present a general, Gaussian spatial-mode propagation formalism for describing the generation of higher-order multi-spatial-mode beams generated during nonlinear interactions. Furthermore, to implement the theory, we simulate optical angular momentum transfer interactions and show how one can optimize the interaction to reduce the undesired modes. Past theoretical treatments of this problem have often been phenomenological, at best. Here we present an exact solution for the single-pass no-cavity regime, in which the nonlinear interaction is not overly strong. We apply our theory to two experiments, with very good agreement, and give examples of several more configurations, easily tested in the laboratory.
Plósz, Benedek Gy; De Clercq, Jeriffa; Nopens, Ingmar; Benedetti, Lorenzo; Vanrolleghem, Peter A
2011-01-01
In WWTP models, the accurate assessment of solids inventory in bioreactors equipped with solid-liquid separators, mostly described using one-dimensional (1-D) secondary settling tank (SST) models, is the most fundamental requirement of any calibration procedure. Scientific knowledge on characterising particulate organics in wastewater and on bacteria growth is well-established, whereas 1-D SST models and their impact on biomass concentration predictions are still poorly understood. A rigorous assessment of two 1-D SST models is thus presented: one based on hyperbolic (the widely used Takács-model) and one based on parabolic (the more recently presented Plósz-model) partial differential equations. The former model, using numerical approximation to yield realistic behaviour, is currently the most widely used by wastewater treatment process modellers. The latter is a convection-dispersion model that is solved in a numerically sound way. First, the explicit dispersion in the convection-dispersion model and the numerical dispersion for both SST models are calculated. Second, simulation results of effluent suspended solids concentration (XTSS,Eff), sludge recirculation stream (XTSS,RAS) and sludge blanket height (SBH) are used to demonstrate the distinct behaviour of the models. A thorough scenario analysis is carried out using SST feed flow rate, solids concentration, and overflow rate as degrees of freedom, spanning a broad loading spectrum. A comparison between the measurements and the simulation results demonstrates a considerably improved 1-D model realism using the convection-dispersion model in terms of SBH, XTSS,RAS and XTSS,Eff. Third, to assess the propagation of uncertainty derived from settler model structure to the biokinetic model, the impact of the SST model as sub-model in a plant-wide model on the general model performance is evaluated. A long-term simulation of a bulking event is conducted that spans temperature evolution throughout a summer
Energy Technology Data Exchange (ETDEWEB)
Morales Prieto, M.; Ortega Saiz, P.
2011-07-01
This work analyses the analytical uncertainties of the process-simulation methodology used to obtain the end-of-irradiation isotopic inventory of spent fuel; the ARIANE experiment is used to explore the burnup-simulation part.
Directory of Open Access Journals (Sweden)
A. Datta
2018-03-01
We present a suite of programs that implement decades-old algorithms for computation of seismic surface wave reflection and transmission coefficients at a welded contact between two laterally homogeneous quarter-spaces. For Love as well as Rayleigh waves, the algorithms are shown to be capable of modelling multiple mode conversions at a lateral discontinuity, which was not shown in the original publications or in the subsequent literature. Only normal incidence at a lateral boundary is considered so there is no Love–Rayleigh coupling, but incidence of any mode and coupling to any (other) mode can be handled. The code is written in Python and makes use of SciPy's Simpson's rule integrator and NumPy's linear algebra solver for its core functionality. Transmission-side results from this code are found to be in good agreement with those from finite-difference simulations. In today's research environment of extensive computing power, the coded algorithms are arguably redundant, but SWRT can be used as a valuable testing tool for the ever-evolving numerical solvers of seismic wave propagation. SWRT is available via GitHub (https://github.com/arjundatta23/SWRT.git).
Cui, Ming; Xu, Lili; Wang, Huimin; Ju, Shaoqing; Xu, Shuizhu; Jing, Rongrong
2017-12-01
Measurement uncertainty (MU) is a metrological concept, which can be used for objectively estimating the quality of test results in medical laboratories. The Nordtest guide recommends an approach that uses both internal quality control (IQC) and external quality assessment (EQA) data to evaluate the MU. Bootstrap resampling is employed to simulate the unknown distribution based on the mathematical statistics method using an existing small sample of data, where the aim is to transform the small sample into a large sample. However, there have been no reports of the utilization of this method in medical laboratories. Thus, this study applied the Nordtest guide approach based on bootstrap resampling for estimating the MU. We estimated the MU for the white blood cell (WBC) count, red blood cell (RBC) count, hemoglobin (Hb), and platelets (Plt). First, we used 6 months of IQC data and 12 months of EQA data to calculate the MU according to the Nordtest method. Second, we combined the Nordtest method and bootstrap resampling with the quality control data and calculated the MU using MATLAB software. We then compared the MU results obtained using the two approaches. The expanded uncertainty results determined for WBC, RBC, Hb, and Plt using the bootstrap resampling method were 4.39%, 2.43%, 3.04%, and 5.92%, respectively, and 4.38%, 2.42%, 3.02%, and 6.00% with the existing quality control data (U, k = 2). For WBC, RBC, Hb, and Plt, the differences between the results obtained using the two methods were lower than 1.33%. The expanded uncertainty values were all less than the target uncertainties. The bootstrap resampling method allows the statistical analysis of the MU. Combining the Nordtest method and bootstrap resampling is considered a suitable alternative method for estimating the MU.
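The combination described above, a bootstrap-derived within-lab component joined with an EQA bias component, can be sketched as follows. This is a minimal illustration with hypothetical IQC values and an assumed bias component, not the authors' MATLAB implementation:

```python
import random
import statistics

def bootstrap_cv(values, n_resamples=10000, seed=1):
    """Bootstrap estimate of the relative standard uncertainty (CV, %):
    resample the small IQC data set with replacement many times and
    average the CV over the simulated large sample of resamples."""
    rng = random.Random(seed)
    n = len(values)
    cvs = []
    for _ in range(n_resamples):
        resample = [rng.choice(values) for _ in range(n)]
        cvs.append(100.0 * statistics.stdev(resample) / statistics.fmean(resample))
    return statistics.fmean(cvs)

# Hypothetical WBC IQC results (10^9/L) and an assumed EQA bias component.
iqc = [6.1, 5.9, 6.3, 6.0, 6.2, 5.8, 6.1, 6.0]
u_rw = bootstrap_cv(iqc)                             # within-lab component, %
u_bias = 1.5                                         # assumed from EQA data, %
expanded_u = 2.0 * (u_rw ** 2 + u_bias ** 2) ** 0.5  # U (k = 2), %
print(f"u_rw = {u_rw:.2f}%, U(k=2) = {expanded_u:.2f}%")
```

The quadrature combination and the coverage factor k = 2 follow the general Nordtest pattern; the specific component names and data are placeholders.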
Uncertainty and sensitivity assessments of GPS and GIS integrated applications for transportation.
Hong, Sungchul; Vonderohe, Alan P
2014-02-10
Uncertainty and sensitivity analysis methods are introduced, concerning the quality of spatial data as well as that of output information from Global Positioning System (GPS) and Geographic Information System (GIS) integrated applications for transportation. In the methods, an error model and an error propagation method form a basis for formulating the characterization and propagation of uncertainties. They are developed in two distinct approaches: analytical and simulation. Thus, an initial evaluation is performed to compare and examine uncertainty estimations from the analytical and simulation approaches. The evaluation results show that estimated ranges of output information from the analytical and simulation approaches are compatible, but the simulation approach is preferred over the analytical approach for uncertainty and sensitivity analyses, due to its flexibility and its capability to realize positional errors in both input data sources. Therefore, in a case study, uncertainty and sensitivity analyses based upon the simulation approach are conducted on a winter maintenance application. The sensitivity analysis is used to determine optimum input data qualities, and the uncertainty analysis is then applied to estimate overall qualities of output information from the application. The analysis results show that output information from the non-distance-based computation model is not sensitive to positional uncertainties in input data. However, for the distance-based computational model, output information has a different magnitude of uncertainties, depending on positional uncertainties in input data.
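The simulation approach to propagating positional error into a distance-based computation can be sketched as follows. This is a minimal Monte Carlo illustration with assumed coordinates and error magnitude, not the authors' error model:

```python
import math
import random
import statistics

def distance_uncertainty_mc(p1, p2, sigma, n=20000, seed=3):
    """Simulation approach: add i.i.d. Gaussian positional error (standard
    deviation sigma per coordinate) to both points, recompute the distance
    each time, and summarize the resulting distribution."""
    rng = random.Random(seed)
    dists = []
    for _ in range(n):
        x1 = p1[0] + rng.gauss(0.0, sigma)
        y1 = p1[1] + rng.gauss(0.0, sigma)
        x2 = p2[0] + rng.gauss(0.0, sigma)
        y2 = p2[1] + rng.gauss(0.0, sigma)
        dists.append(math.hypot(x2 - x1, y2 - y1))
    return statistics.fmean(dists), statistics.stdev(dists)

# Hypothetical points 100 m apart with 1 m positional error per coordinate.
mean_d, u_d = distance_uncertainty_mc((0.0, 0.0), (100.0, 0.0), sigma=1.0)
print(f"distance = {mean_d:.2f} m, standard uncertainty = {u_d:.2f} m")
```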
International Nuclear Information System (INIS)
Vršnak, B.; Žic, T.; Dumbović, M.; Temmer, M.; Möstl, C.; Veronig, A. M.; Taktakishvili, A.; Mays, M. L.; Odstrčil, D.
2014-01-01
Real-time forecasting of the arrival of coronal mass ejections (CMEs) at Earth, based on remote solar observations, is one of the central issues of space-weather research. In this paper, we compare arrival-time predictions calculated applying the numerical "WSA-ENLIL+Cone model" and the analytical "drag-based model" (DBM). Both models use coronagraphic observations of CMEs as input data, thus providing an early space-weather forecast two to four days before the arrival of the disturbance at the Earth, depending on the CME speed. It is shown that both methods give very similar results if the drag parameter Γ = 0.1 is used in DBM in combination with a background solar-wind speed of w = 400 km s⁻¹. For this combination, the mean difference between arrival times calculated by ENLIL and DBM is ⟨Δ⟩ = 0.09 ± 9.0 hr, with a mean absolute difference of ⟨|Δ|⟩ = 7.1 hr. Comparing the observed arrivals (O) with the calculated ones (C) for ENLIL gives O – C = –0.3 ± 16.9 hr and, analogously, O – C = +1.1 ± 19.1 hr for DBM. Applying Γ = 0.2 with w = 450 km s⁻¹ in DBM, one finds O – C = –1.7 ± 18.3 hr, with a mean absolute difference of 14.8 hr, which is similar to that for ENLIL, 14.1 hr. Finally, we demonstrate that the prediction accuracy significantly degrades with increasing solar activity.
Matar, Gladys; Poggi, Bernard; Meley, Roland; Bon, Chantal; Chardon, Laurence; Chikh, Karim; Renard, Anne-Claude; Sotta, Catherine; Eynard, Jean-Christophe; Cartier, Regine; Cohen, Richard
2015-10-01
International organizations require from medical laboratories a quantitative statement of the uncertainty in measurement (UM) to help interpret patient results. The French accreditation body (COFRAC) recommends an approach (SH GTA 14 IQC/EQA method) using both internal quality control (IQC) and external quality assessment (EQA) data. The aim of this work was to validate an alternative way to quantify UM using only EQA results without any need for IQC data. This simple and practical method, which has already been described as the long-term evaluation of the UM (LTUM), is based on linear regression between data obtained by participants in EQA schemes and target values. We used it for 43 routine analytes covering biochemistry, immunoassay, and hemostasis fields. Data from 50 laboratories participating in ProBioQual (PBQ) EQA schemes over 25 months were used to obtain estimates of the median and 90th percentile LTUM and to compare them to the usual analytical goals. Then, the two UM estimation methods were compared using data from 20 laboratories participating in both IQC and EQA schemes. Median LTUMs ranged from 2.9% (sodium) to 16.3% (bicarbonates) for biochemistry analytes, from 12.6% (prothrombin time) to 18.4% (factor V) for hemostasis analytes when using the mean of all participants, and were around 10% for immunoassays when using the peer-group mean. Median LTUMs were, in most cases, slightly lower than those obtained with the SH GTA 14 method, whatever the concentration level. LTUM is a simple and convenient method that gives UM estimates that are reliable and comparable to those of recommended methods. Therefore, proficiency testing (PT) organizers are allowed to provide participants with an additional UM estimate using only EQA data and which could be updated at the end of each survey.
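The LTUM idea described above, regressing a laboratory's EQA results on target values and taking the relative residual scatter as a long-term uncertainty estimate, can be sketched as follows. Synthetic data are used and the exact LTUM formula in the paper may differ:

```python
import statistics

def ols(x, y):
    """Ordinary least-squares slope and intercept for y = a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

def ltum(results, targets):
    """Long-term uncertainty sketch: regress a lab's EQA results on the
    target values and report the residual standard deviation relative to
    the mean target level, as a percentage."""
    slope, intercept = ols(targets, results)
    residuals = [y - (slope * x + intercept) for x, y in zip(targets, results)]
    s_res = (sum(r * r for r in residuals) / (len(results) - 2)) ** 0.5
    return 100.0 * s_res / statistics.fmean(targets)

# Hypothetical EQA surveys: target values and one lab's reported results.
targets = [100.0, 110.0, 120.0, 130.0, 140.0]
results = [101.0, 109.0, 121.0, 129.0, 141.0]
u_pct = ltum(results, targets)
print(f"LTUM estimate: {u_pct:.2f}%")
```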
International Nuclear Information System (INIS)
Davis, B.W.
1982-01-01
Thermal igniters proposed by the Tennessee Valley Authority for intentional ignition of hydrogen in nuclear reactor containments have been tested in mixtures of air, hydrogen, and steam. The igniters, conventional diesel engine glow plugs, were tested in a 10.6 ft³ pressure vessel with dry hydrogen concentrations from 4% to 29%, and in steam fractions of up to 50%. Dry tests indicated complete combustion consistently occurred at H₂ fractions above 8% with no combustion for concentrations below 5%. Combustion tests in the presence of steam were conducted with hydrogen volume fractions of 8%, 10%, and 12%. Steam concentrations of up to 30% consistently resulted in ignition. Most of the 40% steam fraction tests indicated a pressure rise. Circulation of the mixture improved combustion in both the dry and the steam tests, most notably at low H₂ concentrations. An analysis of the high steam fraction test data yielded evidence of the presence of small, suspended, water droplets in the combustion mixture. The suppressive influence of condensation-generated fog on combustion is evaluated. Analysis of experimental results along with results derived from analytic models have provided consistent evidence of the strong influence of mass condensation rates and fog on experimentally observed ignition and flame propagation phenomena.
International Nuclear Information System (INIS)
Depres, B.; Dossantos-Uzarralde, P.
2009-01-01
More than 150 researchers and engineers from universities and the industrial world met to discuss new methodologies developed for assessing uncertainty. About 20 papers were presented, and the main topics were: methods to study the propagation of uncertainties, sensitivity analysis, nuclear data covariances, and multi-parameter optimisation. This report gathers the contributions of CEA researchers and engineers.
Coquelin, L.; Le Brusquet, L.; Fischer, N.; Gensdarmes, F.; Motzkus, C.; Mace, T.; Fleury, G.
2018-05-01
A scanning mobility particle sizer (SMPS) is a high resolution nanoparticle sizing system that is widely used as the standard method to measure airborne particle size distributions (PSD) in the size range 1 nm–1 μm. This paper addresses the problem of assessing the uncertainty associated with the PSD when a differential mobility analyzer (DMA) operates under scanning mode. The sources of uncertainty are described and then modeled, either through experiments or through knowledge extracted from the literature. Special care is taken to model the physics and to account for competing theories. Indeed, it appears that the modeling errors resulting from approximations of the physics can largely affect the final estimate of this indirect measurement, especially for quantities that are not measured during day-to-day experiments. The Monte Carlo method is used to compute the uncertainty associated with the PSD. The method is tested against real data sets of monosize polystyrene latex (PSL) spheres with nominal diameters of 100 nm, 200 nm and 450 nm. The median diameters and associated standard uncertainties of the aerosol particles are estimated as 101.22 nm ± 0.18 nm, 204.39 nm ± 1.71 nm and 443.87 nm ± 1.52 nm with the new approach. Other statistical parameters, such as the mean diameter, the mode and the geometric mean and associated standard uncertainty, are also computed. These results are then compared with the results obtained by the SMPS embedded software.
Resolving uncertainty in chemical speciation determinations
Smith, D. Scott; Adams, Nicholas W. H.; Kramer, James R.
1999-10-01
Speciation determinations involve uncertainty in system definition and experimentation. Identification of appropriate metals and ligands from basic chemical principles, analytical window considerations, types of species and checking for consistency in equilibrium calculations are considered in system definition uncertainty. A systematic approach to system definition limits uncertainty in speciation investigations. Experimental uncertainty is discussed with an example of proton interactions with Suwannee River fulvic acid (SRFA). A Monte Carlo approach was used to estimate uncertainty in experimental data, resulting from the propagation of uncertainties in electrode calibration parameters and experimental data points. Monte Carlo simulations revealed large uncertainties present at high (>9-10) and low (monoprotic ligands. Least-squares fit the data with 21 sites, whereas linear programming fit the data equally well with 9 sites. Multiresponse fitting, involving simultaneous fluorescence and pH measurements, improved model discrimination. Deconvolution of the excitation versus emission fluorescence surface for SRFA establishes a minimum of five sites. Diprotic sites are also required for the five fluorescent sites, and one non-fluorescent monoprotic site was added to accommodate the pH data. Consistent with greater complexity, the multiresponse method had broader confidence limits than the uniresponse methods, but corresponded better with the accepted total carboxylic content for SRFA. Overall there was a 40% standard deviation in total carboxylic content for the multiresponse fitting, versus 10% and 1% for least-squares and linear programming, respectively.
An introductory guide to uncertainty analysis in environmental and health risk assessment
International Nuclear Information System (INIS)
Hoffman, F.O.; Hammonds, J.S.
1992-10-01
To compensate for the potential for overly conservative estimates of risk using standard US Environmental Protection Agency methods, an uncertainty analysis should be performed as an integral part of each risk assessment. Uncertainty analyses allow one to obtain quantitative results in the form of confidence intervals that will aid in decision making and will provide guidance for the acquisition of additional data. To perform an uncertainty analysis, one must frequently rely on subjective judgment in the absence of data to estimate the range and a probability distribution describing the extent of uncertainty about a true but unknown value for each parameter of interest. This information is formulated from professional judgment based on an extensive review of literature, analysis of the data, and interviews with experts. Various analytical and numerical techniques are available to allow statistical propagation of the uncertainty in the model parameters to a statement of uncertainty in the risk to a potentially exposed individual. Although analytical methods may be straightforward for relatively simple models, they rapidly become complicated for more involved risk assessments. Because of the tedious efforts required to mathematically derive analytical approaches to propagate uncertainty in complicated risk assessments, numerical methods such as Monte Carlo simulation should be employed. The primary objective of this report is to provide an introductory guide for performing uncertainty analysis in risk assessments being performed for Superfund sites
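The numerical (Monte Carlo) propagation recommended here can be sketched as follows, for a toy risk model with assumed lognormal parameter distributions. The model and parameter values are illustrative only, not EPA values:

```python
import random

def mc_risk(n=20000, seed=42):
    """Monte Carlo propagation for a toy risk model risk = intake * slope,
    with lognormal uncertainty assumed on both parameters. Returns the
    2.5th, 50th and 97.5th percentiles of the simulated risk."""
    rng = random.Random(seed)
    risks = sorted(
        rng.lognormvariate(0.0, 0.5) * rng.lognormvariate(-7.0, 0.8)
        for _ in range(n)
    )
    return risks[int(0.025 * n)], risks[int(0.5 * n)], risks[int(0.975 * n)]

lo, med, hi = mc_risk()
print(f"risk 95% confidence interval: [{lo:.2e}, {hi:.2e}], median {med:.2e}")
```

The output interval is exactly the kind of quantitative confidence statement the report argues should accompany a point estimate of risk.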
Understanding and reducing statistical uncertainties in nebular abundance determinations
Wesson, R.; Stock, D. J.; Scicluna, P.
2012-06-01
Whenever observations are compared to theories, an estimate of the uncertainties associated with the observations is vital if the comparison is to be meaningful. However, many or even most determinations of temperatures, densities and abundances in photoionized nebulae do not quote the associated uncertainty. Those that do typically propagate the uncertainties using analytical techniques which rely on assumptions that generally do not hold. Motivated by this issue, we have developed the Nebular Empirical Analysis Tool (NEAT), a new code for calculating chemical abundances in photoionized nebulae. The code carries out a standard analysis of lists of emission lines using long-established techniques to estimate the amount of interstellar extinction, calculate representative temperatures and densities, compute ionic abundances from both collisionally excited lines and recombination lines, and finally to estimate total elemental abundances using an ionization correction scheme. NEAT uses a Monte Carlo technique to robustly propagate uncertainties from line flux measurements through to the derived abundances. We show that, for typical observational data, this approach is superior to analytic estimates of uncertainties. NEAT also accounts for the effect of upward biasing on measurements of lines with low signal-to-noise ratio, allowing us to accurately quantify the effect of this bias on abundance determinations. We find not only that the effect can result in significant overestimates of heavy element abundances derived from weak lines, but also that taking it into account reduces the uncertainty of these abundance determinations. Finally, we investigate the effect of possible uncertainties in R, the ratio of selective-to-total extinction, on abundance determinations. We find that the uncertainty due to this parameter is negligible compared to the statistical uncertainties due to typical line flux measurement uncertainties.
Directory of Open Access Journals (Sweden)
S. Bönisch
2004-02-01
The objectives of this study were to use indicator kriging to spatialize soil properties expressed by numeric attributes, to generate a representation accompanied by a spatial measure of uncertainty, and to model the propagation of uncertainty through fuzzy map-algebra procedures. The studied attributes were exchangeable potassium (K) and aluminum (Al) contents, sum of bases (SB), cation exchange capacity (CEC), base saturation (V), and total sand content (TST), extracted from 222 pedologic profiles and 219 extra samples located in Santa Catarina State, Brazil. When the attributes were expressed in fertility classes, the uncertainty of Al, SB, and V increased while that of K and CEC decreased, for confidence intervals of 95% probability. A larger number of numeric data for K, SB, and V led to larger uncertainty in the spatial inference, while a larger number of numeric data for TST and CEC decreased the degree of uncertainty. The uncertainty decreased when different numeric representations were integrated.
International Nuclear Information System (INIS)
Hammonds, J.S.; Hoffman, F.O.; Bartell, S.M.
1994-12-01
This report presents guidelines for evaluating uncertainty in mathematical equations and computer models applied to assess human health and environmental risk. Uncertainty analyses involve the propagation of uncertainty in model parameters and model structure to obtain confidence statements for the estimate of risk and to identify the model components of dominant importance. Uncertainty analyses are required when there is no a priori knowledge about uncertainty in the risk estimate and when there is a chance that the failure to assess uncertainty may affect the selection of wrong options for risk reduction. Uncertainty analyses are effective when they are conducted in an iterative mode. When the uncertainty in the risk estimate is intolerable for decision-making, additional data are acquired for the dominant model components that contribute most to uncertainty. This process is repeated until the level of residual uncertainty can be tolerated. Analytical and numerical methods for error propagation are presented, along with methods for identifying the most important contributors to uncertainty. Monte Carlo simulation with either Simple Random Sampling (SRS) or Latin Hypercube Sampling (LHS) is proposed as the most robust method for propagating uncertainty through either simple or complex models. A distinction is made between simulating a stochastically varying assessment endpoint (i.e., the distribution of individual risks in an exposed population) and quantifying uncertainty due to lack of knowledge about a fixed but unknown quantity (e.g., a specific individual, the maximally exposed individual, or the mean, median, or 95th percentile of the distribution of exposed individuals). Emphasis is placed on the need for subjective judgement to quantify uncertainty when relevant data are absent or incomplete.
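The difference between Simple Random Sampling and Latin Hypercube Sampling can be illustrated in one dimension. This is a minimal sketch; production LHS implementations also handle correlated, multi-dimensional inputs:

```python
import random

def lhs_sample(n, rng):
    """One-dimensional Latin Hypercube Sample on [0, 1): draw one point
    uniformly from each of n equal-probability strata, then shuffle so
    the values can be paired with other dimensions without bias."""
    points = [(i + rng.random()) / n for i in range(n)]
    rng.shuffle(points)
    return points

rng = random.Random(0)
srs = [rng.random() for _ in range(10)]  # Simple Random Sampling: may cluster
lhs = lhs_sample(10, rng)                # LHS: exactly one point per decile
occupied = sorted(int(10 * p) for p in lhs)
print(f"deciles hit by LHS: {occupied}")
```

Because LHS guarantees every stratum is sampled, it typically reaches a given accuracy with far fewer model runs than SRS, which is why the report favors it for expensive models.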
An analysis of rumor propagation based on propagation force
Zhao, Zhen-jun; Liu, Yong-mei; Wang, Ke-xi
2016-02-01
A propagation force is introduced into the analysis of rumor propagation to address uncertainty in the process. The propagation force is portrayed as a fuzzy variable, and a category of new parameters with fuzzy variables is defined. The classic susceptible, infected, recovered (SIR) model is modified using these parameters, a fuzzy reproductive number is introduced into the modified model, and the rationality of the fuzzy reproductive number is demonstrated through calculation and comparison. Rumor control strategies are also discussed.
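The modified-SIR idea can be sketched by integrating the classic SIR equations and sweeping the contact parameter over an interval, a crude stand-in for the paper's fuzzy propagation force. All parameter values are assumed:

```python
def sir_step(s, i, r, beta, gamma, dt):
    """One explicit-Euler step of the classic SIR model applied to rumors:
    ds/dt = -beta*s*i, di/dt = beta*s*i - gamma*i, dr/dt = gamma*i."""
    new_spread = beta * s * i * dt   # ignorant -> spreader
    stifle = gamma * i * dt          # spreader -> stifler
    return s - new_spread, i + new_spread - stifle, r + stifle

# Sweep the spreading rate over an interval; the basic reproductive
# number is beta/gamma, so these runs span R0 = 2, 3, 4.
finals = {}
for beta in (0.2, 0.3, 0.4):
    s, i, r = 0.99, 0.01, 0.0
    for _ in range(1000):                       # integrate to t = 100
        s, i, r = sir_step(s, i, r, beta, gamma=0.1, dt=0.1)
    finals[beta] = r
    print(f"beta = {beta}: final stifler fraction = {r:.3f}")
```

The spread of final sizes across the beta interval plays the role that the fuzzy reproductive number plays in the paper: a range of outcomes rather than a single prediction.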
International Nuclear Information System (INIS)
Andres, T.H.
2002-05-01
This guide applies to the estimation of uncertainty in quantities calculated by scientific analysis and design computer programs that fall within the scope of AECL's software quality assurance (SQA) manual. The guide weaves together rational approaches from the SQA manual and three other diverse sources: (a) the CSAU (Code Scaling, Applicability, and Uncertainty) evaluation methodology; (b) the ISO Guide for the Expression of Uncertainty in Measurement; and (c) the SVA (Systems Variability Analysis) method of risk analysis. This report describes the manner by which random and systematic uncertainties in calculated quantities can be estimated and expressed. Random uncertainty in model output can be attributed to uncertainties of inputs. The propagation of these uncertainties through a computer model can be represented in a variety of ways, including exact calculations, series approximations and Monte Carlo methods. Systematic uncertainties emerge from the development of the computer model itself, through simplifications and conservatisms, for example. These must be estimated and combined with random uncertainties to determine the combined uncertainty in a model output. This report also addresses the method by which uncertainties should be employed in code validation, in order to determine whether experiments and simulations agree, and whether or not a code satisfies the required tolerance for its application. (author)
Hall, Martin P. M.; Barclay, Leslie W.
The effects of the earth atmosphere on the radio-wave propagation (RWP) and their implications for telecommunication systems are discussed in reviews based on lectures presented at the Second IEE Vacation School on Radiowave Propagation, held at the University of Surrey in September 1986. A general overview of propagation phenomena is presented, and particular attention is given to the theory of EM wave propagation; radio system parameters; surface wave propagation; RWP in the ionosphere; VLF, LF, and MF applications and predictions; HF applications and predictions; clear-air aspects of the troposphere and their effects on RWP; and the nature of precipitation, clouds, and atmospheric gases and their effects on RWP. Also considered are terrestrial and earth-space propagation path predictions, the prediction of interference levels and coordination distances for frequencies above 1 GHz, propagation effects on VHF and UHF broadcasting, and propagation effects on mobile communication services.
Error Analysis and Propagation in Metabolomics Data Analysis.
Moseley, Hunter N B
2013-01-01
Error analysis plays a fundamental role in describing the uncertainty in experimental results. It has several fundamental uses in metabolomics including experimental design, quality control of experiments, the selection of appropriate statistical methods, and the determination of uncertainty in results. Furthermore, the importance of error analysis has grown with the increasing number, complexity, and heterogeneity of measurements characteristic of 'omics research. The increase in data complexity is particularly problematic for metabolomics, which has more heterogeneity than other omics technologies due to the much wider range of molecular entities detected and measured. This review introduces the fundamental concepts of error analysis as they apply to a wide range of metabolomics experimental designs and it discusses current methodologies for determining the propagation of uncertainty in appropriate metabolomics data analysis. These methodologies include analytical derivation and approximation techniques, Monte Carlo error analysis, and error analysis in metabolic inverse problems. Current limitations of each methodology with respect to metabolomics data analysis are also discussed.
Energy Technology Data Exchange (ETDEWEB)
Barrado, A. I.; Garcia, S.; Perez, R. M.
2013-06-01
This paper presents an evaluation of the uncertainty associated with the analytical measurement of eighteen polycyclic aromatic compounds (PACs) in ambient air by liquid chromatography with fluorescence detection (HPLC/FD). The study focused on analyses of PM10, PM2.5 and gas phase fractions. The main analytical uncertainty was estimated for eleven polycyclic aromatic hydrocarbons (PAHs), four nitro polycyclic aromatic hydrocarbons (nitro-PAHs) and two hydroxy polycyclic aromatic hydrocarbons (OH-PAHs) based on the analytical determination, reference material analysis and extraction step. The main contributions reached 15-30% and came from the extraction process of real ambient samples, with those for nitro-PAHs being the highest (20-30%). The range and mean of PAC mass concentrations measured in the gas phase and PM10/PM2.5 particle fractions during a full year are also presented. Concentrations of OH-PAHs were about 2-4 orders of magnitude lower than those of their parent PAHs and comparable to the few values reported in the literature. (Author) 7 refs.
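The combination of the uncertainty contributions named above (analytical determination, reference material, extraction) follows the usual GUM-style quadrature sum. The numerical values below are illustrative assumptions, chosen only to mimic the pattern reported (extraction dominant).

```python
# Sketch: combining relative standard uncertainty contributions in quadrature
# and ranking them. Values are illustrative assumptions, not the paper's data.
import math

contributions = {             # relative standard uncertainties, as fractions
    "analytical determination": 0.05,
    "reference material": 0.08,
    "extraction step": 0.25,  # dominant for real ambient samples
}

u_combined = math.sqrt(sum(u ** 2 for u in contributions.values()))
print(f"combined relative uncertainty: {u_combined:.1%}")
for name, u in sorted(contributions.items(), key=lambda kv: -kv[1]):
    print(f"  {name}: {u:.0%} ({u**2 / u_combined**2:.0%} of variance)")
```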
Directory of Open Access Journals (Sweden)
Petrović Predrag
2006-01-01
Synchronous sampling allows alternating current (AC) quantities, such as the root mean square (RMS) values of voltage and power, to be determined with very low uncertainties (on the order of a few parts in 10-6) [1]. In this paper, a new mathematical expression for estimating measurement uncertainties under non-ideal synchronization with the fundamental frequency of AC signals is presented. The results obtained were compared with those of a high-precision instrument for measuring basic AC quantities. Computer simulations demonstrating the effectiveness of the new expression are also presented.
International Nuclear Information System (INIS)
Thomas, R.E.
1982-03-01
An evaluation is made of the suitability of analytical and statistical sampling methods for making uncertainty analyses. The adjoint method is found to be well-suited for obtaining sensitivity coefficients for computer programs involving large numbers of equations and input parameters. For this purpose the Latin Hypercube Sampling method is found to be inferior to conventional experimental designs. The Latin hypercube method can be used to estimate output probability density functions, but requires supplementary rank transformations followed by stepwise regression to obtain uncertainty information on individual input parameters. A simple Cork and Bottle problem is used to illustrate the efficiency of the adjoint method relative to certain statistical sampling methods. For linear models of the form Ax=b it is shown that a complete adjoint sensitivity analysis can be made without formulating and solving the adjoint problem. This can be done either by using a special type of statistical sampling or by reformulating the primal problem and using suitable linear programming software.
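For a linear model Ax = b with a scalar response R = c.x, one adjoint-style solve (of the transposed system A^T s = c) yields the sensitivity of R to every component of b at once. The small 2x2 system below is an illustrative assumption; the check against direct perturbation shows why the adjoint route is efficient when b has many components.

```python
# Sketch: adjoint sensitivity coefficients for a linear model A x = b with
# response R = c . x. The matrix and vectors are illustrative assumptions.

def solve2(a, rhs):
    """Solve a 2x2 linear system by Cramer's rule."""
    (a11, a12), (a21, a22) = a
    det = a11 * a22 - a12 * a21
    return ((rhs[0] * a22 - a12 * rhs[1]) / det,
            (a11 * rhs[1] - rhs[0] * a21) / det)

A = ((4.0, 1.0), (2.0, 3.0))
b = (1.0, 2.0)
c = (1.0, 0.5)                     # response weights: R = x1 + 0.5 * x2

At = ((A[0][0], A[1][0]), (A[0][1], A[1][1]))   # transpose of A
s = solve2(At, c)                  # sensitivity coefficients dR/db_j

def response(bvec):
    x = solve2(A, bvec)
    return c[0] * x[0] + c[1] * x[1]

# Check one coefficient by direct perturbation of b_1
eps = 1e-6
fd = (response((b[0] + eps, b[1])) - response(b)) / eps
print(f"adjoint dR/db1 = {s[0]:.6f}, finite difference = {fd:.6f}")
```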
Verification of uncertainty budgets
DEFF Research Database (Denmark)
Heydorn, Kaj; Madsen, B.S.
2005-01-01
The quality of analytical results is expressed by their uncertainty, as it is estimated on the basis of an uncertainty budget; little effort is, however, often spent on ascertaining the quality of the uncertainty budget. The uncertainty budget is based on circumstantial or historical data...... observed and expected variability is tested by means of the T-test, which follows a chi-square distribution with a number of degrees of freedom determined by the number of replicates. Significant deviations between predicted and observed variability may be caused by a variety of effects, and examples...... will be presented; both underestimation and overestimation may occur, each leading to correcting the influence of uncertainty components according to their influence on the variability of experimental results. Some uncertainty components can be verified only with a very small number of degrees of freedom, because...
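The verification step described above, comparing observed replicate variability with the variability predicted by the budget via a statistic that follows a chi-square distribution, can be sketched numerically. The replicate data, the budget value, and the tabulated chi-square limits are illustrative assumptions.

```python
# Sketch: verifying an uncertainty budget against replicate results using
# T = (n - 1) * s^2 / u^2, chi-square distributed with n - 1 degrees of
# freedom if the budget is correct. All numbers are illustrative assumptions.
import statistics

replicates = [10.12, 9.95, 10.30, 10.08, 9.88, 10.21]   # observed results
u_budget = 0.15                      # standard uncertainty predicted by the budget

n = len(replicates)
s2 = statistics.variance(replicates)             # sample variance (n - 1 divisor)
T = (n - 1) * s2 / u_budget ** 2

# 95% acceptance interval of chi-square with 5 degrees of freedom (from tables)
chi2_lo, chi2_hi = 0.831, 12.833
verdict = "consistent" if chi2_lo <= T <= chi2_hi else "inconsistent"
print(f"T = {T:.2f}; budget is {verdict} with the observed variability")
```

A value of T below the lower limit would suggest an overestimated budget; above the upper limit, an underestimated one, matching the two failure modes mentioned in the abstract.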
Collision entropy and optimal uncertainty
Bosyk, G. M.; Portesi, M.; Plastino, A.
2011-01-01
We propose an alternative measure of quantum uncertainty for pairs of arbitrary observables in the 2-dimensional case, in terms of collision entropies. We derive the optimal lower bound for this entropic uncertainty relation, which results in an analytic function of the overlap of the corresponding eigenbases. Besides, we obtain the minimum uncertainty states. We compare our relation with other formulations of the uncertainty principle.
Propagating pulsed Bessel beams in periodic media
International Nuclear Information System (INIS)
Longhi, S; Janner, D; Laporta, P
2004-01-01
An analytical study of vectorial pulsed Bessel beam propagation in one-dimensional photonic bandgaps, based on a Wannier-function approach, is presented, and the conditions for dispersion-free and diffraction-free propagation are derived. The analysis is applied, as a particular case, to Bessel beam propagation in periodic layered structures
Directory of Open Access Journals (Sweden)
Vicari Kristin J
2012-04-01
the TE model predictions. This analysis highlights the primary measurements that merit further development to reduce the uncertainty associated with their use in TE models. While we develop and apply this mathematical framework to a specific biorefinery scenario here, this analysis can be readily adapted to other types of biorefining processes and provides a general framework for propagating uncertainty due to analytical measurements through a TE model.
Efficient Quantification of Uncertainties in Complex Computer Code Results, Phase II
National Aeronautics and Space Administration — Propagation of parameter uncertainties through large computer models can be very resource intensive. Frameworks and tools for uncertainty quantification are...
Flood modelling : Parameterisation and inflow uncertainty
Mukolwe, M.M.; Di Baldassarre, G.; Werner, M.; Solomatine, D.P.
2014-01-01
This paper presents an analysis of uncertainty in hydraulic modelling of floods, focusing on the inaccuracy caused by inflow errors and parameter uncertainty. In particular, the study develops a method to propagate the uncertainty induced by, firstly, application of a stage–discharge rating curve
Estimation of the uncertainties considered in NPP PSA level 2
International Nuclear Information System (INIS)
Kalchev, B.; Hristova, R.
2005-01-01
The main approaches of the uncertainties analysis are presented. The sources of uncertainties which should be considered in PSA level 2 for WWER reactor such as: uncertainties propagated from level 1 PSA; uncertainties in input parameters; uncertainties related to the modelling of physical phenomena during the accident progression and uncertainties related to the estimation of source terms are defined. The methods for estimation of the uncertainties are also discussed in this paper
Lindley, Dennis V
2013-01-01
Praise for the First Edition ""...a reference for everyone who is interested in knowing and handling uncertainty.""-Journal of Applied Statistics The critically acclaimed First Edition of Understanding Uncertainty provided a study of uncertainty addressed to scholars in all fields, showing that uncertainty could be measured by probability, and that probability obeyed three basic rules that enabled uncertainty to be handled sensibly in everyday life. These ideas were extended to embrace the scientific method and to show how decisions, containing an uncertain element, could be rationally made.
A vector model for error propagation
Energy Technology Data Exchange (ETDEWEB)
Smith, D.L.; Geraldo, L.P.
1989-03-01
A simple vector model for error propagation, which is entirely equivalent to the conventional statistical approach, is discussed. It offers considerable insight into the nature of error propagation while, at the same time, readily demonstrating the significance of uncertainty correlations. This model is well suited to the analysis of error for sets of neutron-induced reaction cross sections. 7 refs., 1 fig.
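The significance of uncertainty correlations emphasized above can be shown with a minimal covariance-propagation sketch, u_y^2 = g^T V g, for a difference of two quantities. The sensitivities and uncertainty values are illustrative assumptions.

```python
# Sketch: covariance form of error propagation for y = x1 - x2, showing how
# the correlation term shifts the result. Values are illustrative assumptions.
import math

g = (1.0, -1.0)          # sensitivity coefficients for y = x1 - x2
u1, u2 = 0.3, 0.4        # standard uncertainties of x1 and x2

def u_y(rho):
    """Propagated uncertainty of y for input correlation coefficient rho."""
    v11, v22 = u1 ** 2, u2 ** 2
    v12 = rho * u1 * u2
    var = g[0] ** 2 * v11 + g[1] ** 2 * v22 + 2 * g[0] * g[1] * v12
    return math.sqrt(var)

for rho in (-1.0, 0.0, 0.9):
    print(f"rho = {rho:+.1f}: u_y = {u_y(rho):.3f}")
```

For a difference, positively correlated input errors partially cancel, so ignoring correlations can badly overstate (or, for sums, understate) the combined uncertainty; the same point applies to correlated cross-section sets.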
Analyzing Bullwhip Effect in Supply Networks under Exogenous Uncertainty
Directory of Open Access Journals (Sweden)
Mitra Darvish
2014-05-01
This paper presents a model for analyzing and measuring the propagation of order amplification (i.e. the bullwhip effect) for a single-product supply network topology, considering exogenous uncertainty and linear, time-invariant inventory management policies for the network entities. The stream of orders placed by each entity of the network is characterized under the assumption that customer demand is ergodic. We propose an exact formula for measuring the bullwhip effect in the addressed supply network topology, modelling the system in a Markov chain framework and presenting a matrix of network member relationships and the relevant order sequences. The formula is derived using frequency-domain analysis. The major contribution of this paper is the analysis of the bullwhip effect under exogenous uncertainty in supply networks, using the Fourier transform to simplify the relevant calculations. We present a number of numerical examples to assess the accuracy of the analytical results in quantifying the bullwhip effect.
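The quantity being measured, the ratio of order variance to demand variance at a stage, can be illustrated by simulating a single stage with a linear, time-invariant ordering policy. The demand process, the order-up-to policy, and all parameters are illustrative assumptions, not the paper's network model.

```python
# Sketch: bullwhip ratio Var(orders) / Var(demand) for one supply-chain stage
# using exponential-smoothing forecasts and an order-up-to style policy.
# All parameters are illustrative assumptions.
import random
import statistics

random.seed(7)
alpha, lead_time = 0.4, 2          # smoothing constant, replenishment lead time
forecast = 100.0
demands, orders = [], []

for _ in range(20_000):
    d = random.gauss(100.0, 10.0)              # i.i.d. customer demand
    prev_forecast = forecast
    forecast += alpha * (d - forecast)         # exponential-smoothing forecast
    # order covers observed demand plus the lead-time-scaled forecast change
    order = d + lead_time * (forecast - prev_forecast)
    demands.append(d)
    orders.append(order)

bullwhip = statistics.variance(orders) / statistics.variance(demands)
print(f"bullwhip ratio: {bullwhip:.2f}")   # a value above 1 means amplification
```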
Impact of discharge data uncertainty on nutrient load uncertainty
Westerberg, Ida; Gustavsson, Hanna; Sonesten, Lars
2016-04-01
Uncertainty in the rating-curve model of the stage-discharge relationship leads to uncertainty in discharge time series. These uncertainties in turn affect many other analyses based on discharge data, such as nutrient load estimations. It is important to understand how large the impact of discharge data uncertainty is on such analyses, since they are often used as the basis for important environmental management decisions. In the Baltic Sea basin, nutrient load estimates from river mouths are a central information basis for managing and reducing eutrophication in the Baltic Sea. In this study we investigated rating curve uncertainty and its propagation to discharge data uncertainty and thereafter to uncertainty in the load of phosphorus and nitrogen for twelve Swedish river mouths. We estimated rating curve uncertainty using the Voting Point method, which accounts for random and epistemic errors in the stage-discharge relation and allows drawing multiple rating-curve realisations consistent with the total uncertainty. We sampled 40,000 rating curves, and for each sampled curve we calculated a discharge time series from 15-minute water level data for the period 2005-2014. Each discharge time series was then aggregated to daily scale and used to calculate the load of phosphorus and nitrogen from linearly interpolated monthly water samples, following the currently used methodology for load estimation. Finally the yearly load estimates were calculated, and we thus obtained distributions with 40,000 load realisations per year, one for each rating curve. We analysed how the rating curve uncertainty propagated to the discharge time series at different temporal resolutions, and its impact on the yearly load estimates. Two shorter periods of daily water quality sampling around the spring flood peak allowed a comparison of load uncertainty magnitudes resulting from discharge data with those resulting from the monthly water quality sampling.
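The propagation chain described above (sampled rating curves, stage to discharge, discharge to load) can be sketched in miniature. The power-law curve Q = a(h - h0)^b, the parameter uncertainties, the synthetic stage series, and the constant concentration are all illustrative assumptions; this is not the Voting Point method itself.

```python
# Sketch: Monte Carlo propagation of rating-curve uncertainty to an annual
# nutrient load estimate. All values are illustrative assumptions.
import random
import statistics

random.seed(42)
stages = [0.8 + 0.4 * abs((t % 100) - 50) / 50 for t in range(365)]  # daily stage (m)
conc = 0.05                                # nutrient concentration (kg/m3), fixed here

loads = []
for _ in range(2_000):                     # one load estimate per sampled curve
    a = random.gauss(12.0, 1.0)            # rating-curve parameters with uncertainty
    b = random.gauss(1.8, 0.1)
    h0 = 0.2
    q = [a * (h - h0) ** b for h in stages]          # discharge (m3/s)
    daily_load = [conc * qi * 86_400 for qi in q]    # kg/day
    loads.append(sum(daily_load))                    # kg/year

mean_load = statistics.mean(loads)
cv = statistics.stdev(loads) / mean_load
print(f"annual load: {mean_load:.3e} kg, relative uncertainty (CV): {cv:.1%}")
```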
Ferrarese, Giorgio
2011-01-01
Lectures: A. Jeffrey: Lectures on nonlinear wave propagation.- Y. Choquet-Bruhat: Ondes asymptotiques.- G. Boillat: Urti.- Seminars: D. Graffi: Sulla teoria dell'ottica non-lineare.- G. Grioli: Sulla propagazione del calore nei mezzi continui.- T. Manacorda: Onde nei solidi con vincoli interni.- T. Ruggeri: "Entropy principle" and main field for a non linear covariant system.- B. Straughan: Singular surfaces in dipolar materials and possible consequences for continuum mechanics
Stereo-particle image velocimetry uncertainty quantification
Bhattacharya, Sayantan; Charonko, John J.; Vlachos, Pavlos P.
2017-01-01
Particle image velocimetry (PIV) measurements are subject to multiple elemental error sources and thus estimating overall measurement uncertainty is challenging. Recent advances have led to a posteriori uncertainty estimation methods for planar two-component PIV. However, no complete methodology exists for uncertainty quantification in stereo PIV. In the current work, a comprehensive framework is presented to quantify the uncertainty stemming from stereo registration error and combine it with the underlying planar velocity uncertainties. The disparity in particle locations of the dewarped images is used to estimate the positional uncertainty of the world coordinate system, which is then propagated to the uncertainty in the calibration mapping function coefficients. Next, the calibration uncertainty is combined with the planar uncertainty fields of the individual cameras through an uncertainty propagation equation and uncertainty estimates are obtained for all three velocity components. The methodology was tested with synthetic stereo PIV data for different light sheet thicknesses, with and without registration error, and also validated with an experimental vortex ring case from 2014 PIV challenge. Thorough sensitivity analysis was performed to assess the relative impact of the various parameters to the overall uncertainty. The results suggest that in absence of any disparity, the stereo PIV uncertainty prediction method is more sensitive to the planar uncertainty estimates than to the angle uncertainty, although the latter is not negligible for non-zero disparity. Overall the presented uncertainty quantification framework showed excellent agreement between the error and uncertainty RMS values for both the synthetic and the experimental data and demonstrated reliable uncertainty prediction coverage. This stereo PIV uncertainty quantification framework provides the first comprehensive treatment on the subject and potentially lays foundations applicable to volumetric
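The idea of combining per-camera planar uncertainties with a calibration-angle uncertainty through a propagation equation can be sketched with the simplified symmetric stereo relation w = (u1 - u2) / (2 tan(theta)). The geometry, displacement values, and uncertainty magnitudes are illustrative assumptions, not the paper's full framework.

```python
# Sketch: first-order propagation of two cameras' planar displacement
# uncertainties and an angle uncertainty to the out-of-plane component.
# All values are illustrative assumptions.
import math

theta = math.radians(30)        # half-angle between the two cameras
u1, u2 = 2.10, 1.60             # measured displacements from each camera (px)
sigma1, sigma2 = 0.08, 0.08     # planar displacement uncertainties (px)
sigma_theta = math.radians(0.5) # calibration/angle uncertainty

w = (u1 - u2) / (2 * math.tan(theta))

# Partial derivatives for the propagation equation
dw_du = 1 / (2 * math.tan(theta))
dw_dtheta = -(u1 - u2) / (2 * math.sin(theta) ** 2)   # d(cot)/dtheta = -1/sin^2
sigma_w = math.sqrt((dw_du * sigma1) ** 2 + (dw_du * sigma2) ** 2
                    + (dw_dtheta * sigma_theta) ** 2)
print(f"w = {w:.3f} +/- {sigma_w:.3f} (pixel units)")
```

With these numbers the planar terms dominate and the angle term is small but nonzero, echoing the sensitivity ranking reported in the abstract.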
Characterizing Epistemic Uncertainty for Launch Vehicle Designs
Novack, Steven D.; Rogers, Jim; Hark, Frank; Al Hassan, Mohammad
2016-01-01
NASA Probabilistic Risk Assessment (PRA) has the task of estimating the aleatory (randomness) and epistemic (lack of knowledge) uncertainty of launch vehicle loss of mission and crew risk and communicating the results. Launch vehicles are complex engineered systems designed with sophisticated subsystems that are built to work together to accomplish mission success. Some of these systems or subsystems are in the form of heritage equipment, while some have never been previously launched. For these cases, characterizing the epistemic uncertainty is of foremost importance, and it is anticipated that the epistemic uncertainty of a modified launch vehicle design versus a design of well understood heritage equipment would be greater. For reasons that will be discussed, standard uncertainty propagation methods using Monte Carlo simulation produce counterintuitive results and significantly underestimate epistemic uncertainty for launch vehicle models. Furthermore, standard PRA methods such as Uncertainty-Importance analyses used to identify components that are significant contributors to uncertainty are rendered obsolete, since sensitivity to uncertainty changes is not reflected in propagation of uncertainty using Monte Carlo methods. This paper provides a basis of the uncertainty underestimation for complex systems and especially, due to nuances of launch vehicle logic, for launch vehicles. It then suggests several alternative methods for estimating uncertainty and provides examples of estimation results. Lastly, the paper shows how to implement an Uncertainty-Importance analysis using one alternative approach, describes the results, and suggests ways to reduce epistemic uncertainty by focusing on additional data or testing of selected components.
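One commonly cited mechanism for the underestimation described above is state-of-knowledge correlation: when identical components share the same epistemic uncertainty, sampling them independently narrows the top-event distribution compared with fully correlated sampling. The two-component parallel gate and the lognormal spread below are illustrative assumptions, not the paper's model.

```python
# Sketch: independent vs fully correlated epistemic sampling for two identical
# components in a parallel (AND) gate. All distributions are illustrative
# assumptions.
import random
import statistics

random.seed(3)
N = 50_000
independent, correlated = [], []

for _ in range(N):
    # epistemic draws of a failure probability (lognormal spread)
    p1 = random.lognormvariate(-6.0, 1.0)
    p2 = random.lognormvariate(-6.0, 1.0)
    independent.append(p1 * p2)       # separate draws for "identical" components
    correlated.append(p1 * p1)        # one shared draw reused for both

for name, xs in (("independent", independent), ("correlated", correlated)):
    print(f"{name}: mean={statistics.mean(xs):.2e}, "
          f"95th pct={sorted(xs)[int(0.95 * N)]:.2e}")
```

The correlated case has both a higher mean and a heavier upper tail, which is why naive independent Monte Carlo sampling understates epistemic uncertainty for systems with many similar components.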
Uncertainty covariances in robotics applications
International Nuclear Information System (INIS)
Smith, D.L.
1984-01-01
The application of uncertainty covariance matrices in the analysis of robot trajectory errors is explored. First, relevant statistical concepts are reviewed briefly. Then, a simple, hypothetical robot model is considered to illustrate methods for error propagation and performance test data evaluation. The importance of including error correlations is emphasized
Solomatine, Dimitri
2016-04-01
When speaking about model uncertainty, many authors implicitly mean data uncertainty (mainly in parameters or inputs), which is described probabilistically by distributions. Often, however, it is worth looking into the residual uncertainty as well. It is hence reasonable to classify the main approaches to uncertainty analysis with respect to the two main types of model uncertainty that can be distinguished: A. The residual uncertainty of models. In this case the model parameters and/or model inputs are considered to be fixed (deterministic), i.e. the model is considered to be optimal (calibrated) and deterministic. Model error is considered as the manifestation of uncertainty. If there is enough past data about the model errors (i.e. their uncertainty), it is possible to build a statistical or machine learning model of uncertainty trained on these data. The following methods can be mentioned: (a) the quantile regression (QR) method of Koenker and Bassett, in which linear regression is used to build predictive models for distribution quantiles [1]; (b) a more recent approach that takes into account the input variables influencing such uncertainty and uses more advanced (non-linear) machine learning methods (neural networks, model trees, etc.), the UNEEC method [2,3,7]; (c) the even more recent DUBRAUE method (Dynamic Uncertainty Model By Regression on Absolute Error), an autoregressive model of model residuals (it corrects the model residual first and then carries out the uncertainty prediction with an autoregressive statistical model) [5]. B. The data uncertainty (parametric and/or input): in this case we study the propagation of uncertainty (typically represented probabilistically) from parameters or inputs to the model outputs. For simple functions representing models, analytical approaches can be used, or approximation methods (e.g., the first-order second-moment method). However, for real complex non-linear models implemented in software there is no other choice except using
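Approach (a), quantile regression of model residuals, can be sketched by fitting a linear quantile model with subgradient descent on the pinball (quantile) loss. The synthetic residual data, whose spread grows with an input variable, and all fitting parameters are illustrative assumptions.

```python
# Sketch: linear quantile regression of "model residuals" via subgradient
# descent on the pinball loss. Data and parameters are illustrative assumptions.
import random

random.seed(0)
# synthetic residuals whose spread grows with an input variable x
data = []
for _ in range(2_000):
    x = random.uniform(0, 10)
    data.append((x, random.gauss(0, 0.2 + 0.1 * x)))

def fit_quantile(data, tau, lr=0.05, epochs=500):
    """Fit q(x) = a + b*x for quantile tau by subgradient descent."""
    a = b = 0.0
    n = len(data)
    for _ in range(epochs):
        ga = gb = 0.0
        for x, y in data:
            # subgradient of the pinball loss rho_tau(y - q) with respect to q
            g = -tau if y > a + b * x else 1 - tau
            ga += g / n
            gb += g * x / n
        a -= lr * ga
        b -= lr * gb
    return a, b

a90, b90 = fit_quantile(data, 0.9)
print(f"90% residual bound: q(x) = {a90:.2f} + {b90:.2f} * x")
```

Fitting several quantiles (e.g. 5% and 95%) in this way yields input-dependent prediction intervals around a deterministic model's output, which is the essence of the residual-uncertainty approach.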
Dealing with exploration uncertainties
International Nuclear Information System (INIS)
Capen, E.
1992-01-01
Exploration for oil and gas should fulfill the most adventurous in their quest for excitement and surprise. This paper tries to cover that tall order. The authors will touch on the magnitude of the uncertainty (which is far greater than in most other businesses), the effects of not knowing target sizes very well, how to build uncertainty into analyses naturally, how to tie reserves and chance estimates to economics, and how to look at the portfolio effect of an exploration program. With no apologies, the authors will be using a different language for some readers - the language of uncertainty, which means probability and statistics. These tools allow one to combine largely subjective exploration information with the more analytical data from the engineering and economic side
International Nuclear Information System (INIS)
Silva, Jose Wanderley S. da; Barros, Pedro Dionisio de; Araujo, Radier Mario S. de
2009-01-01
In safeguards, independent analysis of the uranium content and enrichment of nuclear materials to verify operators' declarations is an important tool for evaluating the accountability system applied by nuclear installations. This determination may be performed by nondestructive (NDA) methods, generally done in the field using portable radiation detection systems, or by destructive (DA) methods through chemical analysis when more accurate and precise results are necessary. Samples for DA analysis are collected by inspectors during safeguards inspections and sent to the Safeguards Laboratory (LASAL) of the Brazilian Nuclear Energy Commission (CNEN), where the analysis takes place. The method used by LASAL for the determination of uranium in different physical and chemical forms is the Davies and Gray/NBL method, using an automatic potentiometric titrator that performs the titration of uranium(IV) by a standard solution of K2Cr2O7. Uncertainty budgets have been determined based on the concepts of the ISO 'Guide to the Expression of Uncertainty in Measurement' (GUM). In order to simplify the calculation of the uncertainty, a computational tool named the Kragten spreadsheet was used. This spreadsheet uses the concepts established by the GUM and provides results that numerically approximate those obtained by propagation of uncertainty with analytically determined sensitivity coefficients. The main parameters (input quantities) affecting the uncertainty were studied. In order to evaluate their contribution to the final uncertainty, the uncertainties of all steps of the analytical method were estimated and compiled. (author)
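The Kragten approach mentioned above replaces analytical sensitivity coefficients with a simple numerical recipe: perturb each input by its standard uncertainty, record the change in the result, and combine the changes in quadrature. The toy titration model and all numbers below are illustrative assumptions, not LASAL's values.

```python
# Sketch: Kragten-style numerical uncertainty propagation for a toy
# titration result. Model and numbers are illustrative assumptions.
import math

def uranium_conc(inputs):
    """Toy titration model: c_U = c_titrant * V_titrant / V_sample."""
    return inputs["c_titrant"] * inputs["v_titrant"] / inputs["v_sample"]

values = {"c_titrant": 0.1000, "v_titrant": 9.85, "v_sample": 10.00}
uncerts = {"c_titrant": 0.0002, "v_titrant": 0.02, "v_sample": 0.01}

y0 = uranium_conc(values)
contribs = {}
for name in values:
    perturbed = dict(values)
    perturbed[name] += uncerts[name]          # Kragten: shift one input by u_i
    contribs[name] = uranium_conc(perturbed) - y0

u_y = math.sqrt(sum(d ** 2 for d in contribs.values()))
print(f"c_U = {y0:.5f} +/- {u_y:.5f}")
for name, d in contribs.items():
    print(f"  {name}: {d:+.2e}")
```

Because each column of the "spreadsheet" changes one input at a time, the individual contributions are visible directly, which is what makes the method convenient for identifying the dominant input quantities.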
Liu, Baoding
2015-01-01
When no samples are available to estimate a probability distribution, we have to invite some domain experts to evaluate the belief degree that each event will happen. Perhaps some people think that the belief degree should be modeled by subjective probability or fuzzy set theory. However, it is usually inappropriate because both of them may lead to counterintuitive results in this case. In order to rationally deal with belief degrees, uncertainty theory was founded in 2007 and subsequently studied by many researchers. Nowadays, uncertainty theory has become a branch of axiomatic mathematics for modeling belief degrees. This is an introductory textbook on uncertainty theory, uncertain programming, uncertain statistics, uncertain risk analysis, uncertain reliability analysis, uncertain set, uncertain logic, uncertain inference, uncertain process, uncertain calculus, and uncertain differential equation. This textbook also shows applications of uncertainty theory to scheduling, logistics, networks, data mining, c...
Pole solutions for flame front propagation
Kupervasser, Oleg
2015-01-01
This book deals with the mathematical solution of the unsteady flame propagation equations. New original mathematical methods for solving complex non-linear equations and investigating their properties are presented. Pole solutions for flame front propagation are developed. Premixed flames and filtration combustion have remarkable properties: the complex nonlinear integro-differential equations for these problems have exact analytical solutions described by the motion of poles in a complex plane. Instead of complex equations, a finite set of ordinary differential equations is applied. These solutions help to investigate, analytically and numerically, the properties of the flame front propagation equations.
International Nuclear Information System (INIS)
Conroy, Charlie; Gunn, James E.; White, Martin
2009-01-01
The stellar masses, mean ages, metallicities, and star formation histories of galaxies are now commonly estimated via stellar population synthesis (SPS) techniques. SPS relies on stellar evolution calculations from the main sequence to stellar death, stellar spectral libraries, phenomenological dust models, and stellar initial mass functions (IMFs) to translate the evolution of a multimetallicity, multi-age set of stars into a prediction for the time-evolution of the integrated light from that set of stars. Each of these necessary inputs carries significant uncertainties that have until now received little systematic attention. The present work is the first in a series that explores the impact of uncertainties in key phases of stellar evolution and the IMF on the derived physical properties of galaxies and the expected luminosity evolution for a passively evolving set of stars. A Monte Carlo Markov Chain approach is taken to fit near-UV through near-IR photometry of a representative sample of low- and high-redshift galaxies with this new SPS model. Significant results include the following. (1) Including uncertainties in stellar evolution, stellar masses at z ∼ 0 carry errors of ∼0.3 dex at 95% CL with little dependence on luminosity or color, while at z ∼ 2, the masses of bright red galaxies are uncertain at the ∼0.6 dex level. (2) Either current stellar evolution models, current observational stellar libraries, or both, do not adequately characterize the metallicity-dependence of the thermally pulsating AGB phase. (3) Conservative estimates on the uncertainty of the slope of the IMF in the solar neighborhood imply that luminosity evolution per unit redshift is uncertain at the ∼0.4 mag level in the K band, which is a substantial source of uncertainty for interpreting the evolution of galaxy populations across time. Any possible evolution in the IMF, as suggested by several independent lines of evidence, will only exacerbate this problem. (4) Assuming a
Courtney, H; Kirkland, J; Viguerie, P
1997-01-01
At the heart of the traditional approach to strategy lies the assumption that by applying a set of powerful analytic tools, executives can predict the future of any business accurately enough to allow them to choose a clear strategic direction. But what happens when the environment is so uncertain that no amount of analysis will allow us to predict the future? What makes for a good strategy in highly uncertain business environments? The authors, consultants at McKinsey & Company, argue that uncertainty requires a new way of thinking about strategy. All too often, they say, executives take a binary view: either they underestimate uncertainty to come up with the forecasts required by their companies' planning or capital-budgeting processes, or they overestimate it, abandon all analysis, and go with their gut instinct. The authors outline a new approach that begins by making a crucial distinction among four discrete levels of uncertainty that any company might face. They then explain how a set of generic strategies--shaping the market, adapting to it, or reserving the right to play at a later time--can be used in each of the four levels. And they illustrate how these strategies can be implemented through a combination of three basic types of actions: big bets, options, and no-regrets moves. The framework can help managers determine which analytic tools can inform decision making under uncertainty--and which cannot. At a broader level, it offers executives a discipline for thinking rigorously and systematically about uncertainty and its implications for strategy.
Uncertainty quantification in volumetric Particle Image Velocimetry
Bhattacharya, Sayantan; Charonko, John; Vlachos, Pavlos
2016-11-01
Particle Image Velocimetry (PIV) uncertainty quantification is challenging due to coupled sources of elemental uncertainty and complex data reduction procedures in the measurement chain. Recent developments in this field have led to uncertainty estimation methods for planar PIV. However, no framework exists for three-dimensional volumetric PIV. In volumetric PIV the measurement uncertainty is a function of reconstructed three-dimensional particle location that in turn is very sensitive to the accuracy of the calibration mapping function. Furthermore, the iterative correction to the camera mapping function using triangulated particle locations in space (volumetric self-calibration) has its own associated uncertainty due to image noise and ghost particle reconstructions. Here we first quantify the uncertainty in the triangulated particle position which is a function of particle detection and mapping function uncertainty. The location uncertainty is then combined with the three-dimensional cross-correlation uncertainty that is estimated as an extension of the 2D PIV uncertainty framework. Finally the overall measurement uncertainty is quantified using an uncertainty propagation equation. The framework is tested with both simulated and experimental cases. For the simulated cases the variation of estimated uncertainty with the elemental volumetric PIV error sources are also evaluated. The results show reasonable prediction of standard uncertainty with good coverage.
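The final step described above, combining the triangulated-position uncertainty with the cross-correlation uncertainty through a propagation equation, reduces for independent error sources to a root-sum-of-squares. A minimal sketch in Python (the numerical values are illustrative assumptions, not from the paper):

```python
import math

def combined_uncertainty(u_location, u_correlation):
    """Root-sum-of-squares combination of independent error sources:
    u_total^2 = u_location^2 + u_correlation^2."""
    return math.sqrt(u_location ** 2 + u_correlation ** 2)

# illustrative (assumed) pixel-displacement uncertainties
u_total = combined_uncertainty(0.05, 0.12)
print(u_total)  # 0.13
```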
The state of the art of the impact of sampling uncertainty on measurement uncertainty
Leite, V. J.; Oliveira, E. C.
2018-03-01
Measurement uncertainty is a parameter that characterizes the reliability of a result and can be divided into two large groups: sampling and analytical variations. Analytical uncertainty is a controlled process, performed in the laboratory. The same does not occur with sampling uncertainty, which has been neglected because it faces several obstacles and there is no clarity on how to perform the procedures, although it is admittedly indispensable to the measurement process. This paper aims at describing the state of the art of sampling uncertainty and at assessing its relevance to measurement uncertainty.
DEFF Research Database (Denmark)
Heydorn, Kaj; Anglov, Thomas
2002-01-01
Methods recommended by the International Organization for Standardization (ISO) and Eurachem are not satisfactory for the correct estimation of calibration uncertainty. A novel approach is introduced and tested on actual calibration data for the determination of Pb by ICP-AES. The improved calibration...
Background and Qualification of Uncertainty Methods
International Nuclear Information System (INIS)
D'Auria, F.; Petruzzi, A.
2008-01-01
The evaluation of uncertainty constitutes the necessary supplement of Best Estimate calculations performed to understand accident scenarios in water cooled nuclear reactors. The need arises from the imperfection of computational tools on the one hand and from the interest in using such tools to obtain more precise evaluations of safety margins on the other. The paper reviews the salient features of two independent approaches for estimating uncertainties associated with predictions of complex system codes. Namely, the propagation of code input error and the propagation of the calculation output error constitute the keywords for identifying the methods of current interest for industrial applications. Throughout the developed methods, uncertainty bands can be derived (both upper and lower) for any desired quantity of the transient of interest. For the second case, the uncertainty method is coupled with the thermal-hydraulic code to obtain a code with the capability of Internal Assessment of Uncertainty, whose features are discussed in more detail.
Decay heat uncertainty quantification of MYRRHA
Fiorito, Luca; Buss, Oliver; Hoefer, Axel; Stankovskiy, Alexey; Eynde, Gert Van den
2017-09-01
MYRRHA is a lead-bismuth cooled MOX-fueled accelerator driven system (ADS) currently in the design phase at SCK·CEN in Belgium. The correct evaluation of the decay heat and of its uncertainty level is very important for the safety demonstration of the reactor. In the first part of this work we assessed the decay heat released by the MYRRHA core using the ALEPH-2 burnup code. The second part of the study focused on the nuclear data uncertainty and covariance propagation to the MYRRHA decay heat. Radioactive decay data, independent fission yield and cross section uncertainties/covariances were propagated using two nuclear data sampling codes, namely NUDUNA and SANDY. According to the results, 238U cross sections and fission yield data are the largest contributors to the MYRRHA decay heat uncertainty. The calculated uncertainty values are deemed acceptable from the safety point of view as they are well within the available regulatory limits.
Verburg, P.H.; Tabeau, A.A.; Hatna, E.
2013-01-01
Land change model outcomes are vulnerable to multiple types of uncertainty, including uncertainty in input data, structural uncertainties in the model and uncertainties in model parameters. In coupled model systems the uncertainties propagate between the models. This paper assesses uncertainty of
Sustainable Process Design under uncertainty analysis: targeting environmental indicators
DEFF Research Database (Denmark)
L. Gargalo, Carina; Gani, Rafiqul
2015-01-01
This study focuses on uncertainty analysis of environmental indicators used to support sustainable process design efforts. To this end, the Life Cycle Assessment methodology is extended with a comprehensive uncertainty analysis to propagate the uncertainties in input LCA data to the environmental...
Model uncertainty in safety assessment
International Nuclear Information System (INIS)
Pulkkinen, U.; Huovinen, T.
1996-01-01
The uncertainty analyses are an essential part of any risk assessment. Usually the uncertainties of reliability model parameter values are described by probability distributions and the uncertainty is propagated through the whole risk model. In addition to the parameter uncertainties, the assumptions behind the risk models may be based on insufficient experimental observations and the models themselves may not be exact descriptions of the phenomena under analysis. The description and quantification of this type of uncertainty, model uncertainty, is the topic of this report. The model uncertainty is characterized and some approaches to model and quantify it are discussed. The emphasis is on so called mixture models, which have been applied in PSAs. Some of the possible disadvantages of the mixture model are addressed. In addition to quantitative analyses, also qualitative analysis is discussed shortly. To illustrate the models, two simple case studies on failure intensity and human error modeling are described. In both examples, the analysis is based on simple mixture models, which are observed to apply in PSA analyses. (orig.) (36 refs., 6 figs., 2 tabs.)
DEFF Research Database (Denmark)
Nguyen, Daniel Xuyen
This paper presents a model of trade that explains why firms wait to export and why many exporters fail. Firms face uncertain demands that are only realized after the firm enters the destination. The model retools the timing of uncertainty resolution found in productivity heterogeneity models. This retooling addresses several shortcomings. First, the imperfect correlation of demands reconciles the sales variation observed in and across destinations. Second, since demands for the firm's output are correlated across destinations, a firm can use previously realized demands to forecast unknown demands in untested destinations. The option to forecast demands causes firms to delay exporting in order to gather more information about foreign demand. Third, since uncertainty is resolved after entry, many firms enter a destination and then exit after learning that they cannot profit. This prediction reconciles......
Optimizing production under uncertainty
DEFF Research Database (Denmark)
Rasmussen, Svend
This Working Paper derives criteria for optimal production under uncertainty based on the state-contingent approach (Chambers and Quiggin, 2000), and discusses potential problems involved in applying the state-contingent approach in a normative context. The analytical approach uses the concept...... of state-contingent production functions and a definition of inputs including both sorts of input, activity and allocation technology. It also analyses production decisions where production is combined with trading in state-contingent claims such as insurance contracts. The final part discusses......
Subsidized Capacity Investment under Uncertainty
Wen, Xingang; Hagspiel, V.; Kort, Peter
2017-01-01
This paper studies how the subsidy support, e.g. price support and reimbursed investment cost support, affects the investment decision of a monopoly firm under uncertainty and analyzes the implications for social welfare. The analytical results show that the unconditional, i.e., subsidy support that
International Nuclear Information System (INIS)
Robouch, P.; Arana, G.; Eguskiza, M.; Etxebarria, N.
2000-01-01
The concepts of the Guide to the Expression of Uncertainty in Measurement (GUM) for chemical measurements and the recommendations of the Eurachem document 'Quantifying Uncertainty in Analytical Measurement' are applied to set up the uncertainty budget for k0-NAA. The 'universally applicable spreadsheet technique' described by Kragten is applied to the k0-NAA basic equations for the computation of uncertainties. The variance components - individual standard uncertainties - highlight the contribution and the importance of the different parameters to be taken into account. (author)
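Kragten's spreadsheet technique can be sketched outside a spreadsheet as well: each input is perturbed by its standard uncertainty, and the resulting output deviations are combined in quadrature. The measurement model below is a hypothetical product/quotient, not the k0-NAA equation itself:

```python
def kragten(f, x, u):
    """Kragten's numerical uncertainty propagation: perturb each
    input by its standard uncertainty and combine the resulting
    output deviations in quadrature."""
    y0 = f(x)
    deviations = []
    for i, ui in enumerate(u):
        xp = list(x)
        xp[i] += ui            # shift one input by its uncertainty
        deviations.append(f(xp) - y0)
    uc = sum(d * d for d in deviations) ** 0.5
    return y0, uc, deviations

# hypothetical product/quotient measurement model y = a * b / c
model = lambda v: v[0] * v[1] / v[2]
y, uc, parts = kragten(model, [2.0, 3.0, 4.0], [0.02, 0.03, 0.04])
```

The per-input deviations returned alongside the combined uncertainty are exactly the "variance components" the abstract refers to: they show which parameter dominates the budget.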
Uncertainty quantification applied to the radiological characterization of radioactive waste.
Zaffora, B; Magistris, M; Saporta, G; Chevalier, J-P
2017-09-01
This paper describes the process adopted at the European Organization for Nuclear Research (CERN) to quantify uncertainties affecting the characterization of very-low-level radioactive waste. Radioactive waste is a by-product of the operation of high-energy particle accelerators. Radioactive waste must be characterized to ensure its safe disposal in final repositories. Characterizing radioactive waste means establishing the list of radionuclides together with their activities. The estimated activity levels are compared to the limits given by the national authority of the waste disposal. The quantification of the uncertainty affecting the concentration of the radionuclides is therefore essential to estimate the acceptability of the waste in the final repository but also to control the sorting, volume reduction and packaging phases of the characterization process. The characterization method consists of estimating the activity of produced radionuclides either by experimental methods or statistical approaches. The uncertainties are estimated using classical statistical methods and uncertainty propagation. A mixed multivariate random vector is built to generate random input parameters for the activity calculations. The random vector is a robust tool to account for the unknown radiological history of legacy waste. This analytical technique is also particularly useful to generate random chemical compositions of materials when the trace element concentrations are not available or cannot be measured. The methodology was validated using a waste population of legacy copper activated at CERN. The methodology introduced here represents a first approach for the uncertainty quantification (UQ) of the characterization process of waste produced at particle accelerators.
Zou, Xiao-Duan; Li, Jian-Yang; Clark, Beth Ellen; Golish, Dathon
2018-01-01
The OSIRIS-REx spacecraft, launched in September 2016, will study the asteroid Bennu and return a sample from its surface to Earth in 2023. Bennu is a near-Earth carbonaceous asteroid which will provide insight into the formation and evolution of the solar system. OSIRIS-REx will first approach Bennu in August 2018 and will study the asteroid for approximately two years before sampling. OSIRIS-REx will develop its photometric model (including Lommel-Seeliger, ROLO, McEwen, Minnaert and Akimov) of Bennu with OCAM and OVIRS during the Detailed Survey mission phase. The model developed during this phase will be used to photometrically correct the OCAM and OVIRS data. Here we present an analysis of the errors in the photometric corrections. Based on our testing data sets, we find: (1) The model uncertainty is correct only when calculated with the covariance matrix, because the parameters are highly correlated. (2) There is no evidence that any single parameter dominates in any model. (3) The model error and the data error contribute comparably to the final correction error. (4) Tests of the uncertainty module on synthetic and real data sets show that model performance depends on data coverage and data quality; these tests gave us a better understanding of how the different models behave in different cases. (5) The Lommel-Seeliger (L-S) model is more reliable than the others, perhaps because the simulated data are based on the L-S model; however, the test on real data (SPDIF) also shows a slight advantage for L-S. ROLO is not reliable for calculating Bond albedo, the uncertainty of the McEwen model is large in most cases, and Akimov behaves unphysically on the SOPIE 1 data. (6) L-S is therefore the preferred default choice, a conclusion based mainly on our tests on the SOPIE and IPDIF data.
Uncertainties in land use data
Directory of Open Access Journals (Sweden)
G. Castilla
2007-11-01
This paper deals with the description and assessment of uncertainties in land use data derived from Remote Sensing observations, in the context of hydrological studies. Land use is a categorical regionalised variable reporting the main socio-economic role each location has, where the role is inferred from the pattern of occupation of land. The properties of this pattern that are relevant to hydrological processes have to be known with some accuracy in order to obtain reliable results; hence, uncertainty in land use data may lead to uncertainty in model predictions. There are two main uncertainties surrounding land use data, positional and categorical. The first one is briefly addressed and the second one is explored in more depth, including the factors that influence it. We (1) argue that the conventional method used to assess categorical uncertainty, the confusion matrix, is insufficient to propagate uncertainty through distributed hydrologic models; (2) report some alternative methods to tackle this and other insufficiencies; (3) stress the role of metadata as a more reliable means to assess the degree of distrust with which these data should be used; and (4) suggest some practical recommendations.
International Nuclear Information System (INIS)
Davis, C.B.
1987-08-01
The uncertainties of calculations of loss-of-feedwater transients at Davis-Besse Unit 1 were determined to address concerns of the US Nuclear Regulatory Commission relative to the effectiveness of feed and bleed cooling. Davis-Besse Unit 1 is a pressurized water reactor of the raised-loop Babcock and Wilcox design. A detailed, quality-assured RELAP5/MOD2 model of Davis-Besse was developed at the Idaho National Engineering Laboratory. The model was used to perform an analysis of the loss-of-feedwater transient that occurred at Davis-Besse on June 9, 1985. A loss-of-feedwater transient followed by feed and bleed cooling was also calculated. The evaluation of uncertainty was based on the comparisons of calculations and data, comparisons of different calculations of the same transient, sensitivity calculations, and the propagation of the estimated uncertainty in initial and boundary conditions to the final calculated results
International Nuclear Information System (INIS)
Picard, R.R.
1989-01-01
Topics covered in this chapter include a discussion of exact results as related to nuclear materials management and accounting in nuclear facilities; propagation of error for a single measured value; propagation of error for several measured values; error propagation for materials balances; and an application of error propagation to an example of uranium hexafluoride conversion process
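For a product-type measurement model of the kind that appears in materials-balance terms, such as mass of material = net weight x concentration x enrichment, first-order propagation for independent inputs means the relative variances add in quadrature. A sketch with assumed illustrative values (not from the chapter):

```python
import math

def product_uncertainty(values, rel_uncerts):
    """First-order (Taylor series) propagation for a product model:
    for independent inputs, relative variances add in quadrature,
    (u_y / y)^2 = sum_i (u_xi / xi)^2."""
    y = math.prod(values)
    rel_u = math.sqrt(sum(r * r for r in rel_uncerts))
    return y, abs(y) * rel_u

# assumed inputs: net weight (kg), U concentration, enrichment fraction
mass, u_mass = product_uncertainty([1000.0, 0.70, 0.045],
                                   [0.001, 0.005, 0.01])
```

Note how the largest relative uncertainty (here the enrichment, at 1%) dominates the combined result, which is why error propagation "provides guidance" on which measurement to improve first.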
EPA’s Web Analytics Program collects, analyzes, and provides reports on traffic, quality assurance, and customer satisfaction metrics for EPA’s website. The program uses a variety of analytics tools, including Google Analytics and CrazyEgg.
Uncertainty quantification theory, implementation, and applications
Smith, Ralph C
2014-01-01
The field of uncertainty quantification is evolving rapidly because of increasing emphasis on models that require quantified uncertainties for large-scale applications, novel algorithm development, and new computational architectures that facilitate implementation of these algorithms. Uncertainty Quantification: Theory, Implementation, and Applications provides readers with the basic concepts, theory, and algorithms necessary to quantify input and response uncertainties for simulation models arising in a broad range of disciplines. The book begins with a detailed discussion of applications where uncertainty quantification is critical for both scientific understanding and policy. It then covers concepts from probability and statistics, parameter selection techniques, frequentist and Bayesian model calibration, propagation of uncertainties, quantification of model discrepancy, surrogate model construction, and local and global sensitivity analysis. The author maintains a complementary web page where readers ca...
International Nuclear Information System (INIS)
Baker, Benjamin A.; Imel, George R.
2013-06-01
This paper presents continuous and discrete equations for the propagation of uncertainty applied to inverse kinetics and shows that the uncertainty of a measurement can be minimized by the proper choice of frequency from the perturbing reactivity waveform. (authors)
Influence of network structure on rumor propagation
Energy Technology Data Exchange (ETDEWEB)
Zhou Jie [Institute of Theoretical Physics and Department of Physics, East China Normal University, Shanghai 200062 (China); Liu Zonghua [Institute of Theoretical Physics and Department of Physics, East China Normal University, Shanghai 200062 (China); Department of Physics and Centre for Computational Science and Engineering, National University of Singapore, 117542-46 Singapore (Singapore)], E-mail: zonghualiu72@yahoo.com; Li Baowen [Department of Physics and Centre for Computational Science and Engineering, National University of Singapore, 117542-46 Singapore (Singapore); NUS Graduate School for Integrative Sciences and Engineering, Singapore 117597 (Singapore); Institute of Theoretical Physics and Department of Physics, East China Normal University, Shanghai 200062 (China)
2007-09-03
Rumor propagation in complex networks is studied analytically and numerically by using the SIR model. Analytically, a mean-field theory is worked out by considering the influence of network topological structure and the unequal footings of neighbors of an infected node in propagating the rumor. It is found that the final infected density of population with degree k is ρ(k) = 1 - exp(-αk), where α is a parameter related to network structure. The number of the total final infected nodes depends on the network topological structure and will decrease when the structure changes from random to scale-free network. Numerical simulations confirm the theoretical predictions.
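The mean-field result ρ(k) = 1 - exp(-αk) can be averaged over a degree distribution to get the total infected fraction. Because 1 - exp(-αk) is concave in k, spreading the same mean degree over a broader distribution lowers the average, consistent with the reported decrease from random to scale-free networks. A toy illustration (the two-point distribution is a crude stand-in for a heavy-tailed degree distribution, and α = 0.5 is an arbitrary choice):

```python
import math

def infected_fraction(degree_dist, alpha):
    """Average the mean-field final density rho(k) = 1 - exp(-alpha*k)
    over a discrete degree distribution P(k)."""
    return sum(p * (1.0 - math.exp(-alpha * k))
               for k, p in degree_dist.items())

# toy distributions with the same mean degree <k> = 4 (assumed values)
homogeneous   = {4: 1.0}            # random-graph-like, narrow
heterogeneous = {1: 0.5, 7: 0.5}    # crude stand-in for a broad tail
r_hom = infected_fraction(homogeneous, 0.5)
r_het = infected_fraction(heterogeneous, 0.5)
```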
NLO error propagation exercise: statistical results
International Nuclear Information System (INIS)
Pack, D.J.; Downing, D.J.
1985-09-01
Error propagation is the extrapolation and cumulation of uncertainty (variance) about total amounts of special nuclear material, for example, uranium or 235U, that are present in a defined location at a given time. The uncertainty results from the inevitable inexactness of individual measurements of weight, uranium concentration, 235U enrichment, etc. The extrapolated and cumulated uncertainty leads directly to quantified limits of error on inventory differences (LEIDs) for such material. The NLO error propagation exercise was planned as a field demonstration of the utilization of statistical error propagation methodology at the Feed Materials Production Center in Fernald, Ohio from April 1 to July 1, 1983 in a single material balance area formed specially for the exercise. Major elements of the error propagation methodology were: variance approximation by Taylor series expansion; variance cumulation by uncorrelated primary error sources as suggested by Jaech; random effects ANOVA model estimation of variance effects (systematic error); provision for inclusion of process variance in addition to measurement variance; and exclusion of static material. The methodology was applied to material balance area transactions from the indicated time period through a FORTRAN computer code developed specifically for this purpose on the NLO HP-3000 computer. This paper contains a complete description of the error propagation methodology and a full summary of the numerical results of applying the methodology in the field demonstration. The error propagation LEIDs did encompass the actual uranium and 235U inventory differences. Further, one can see that error propagation actually provides guidance for reducing inventory differences and LEIDs in future time periods.
Uncertainty Quantification in Numerical Aerodynamics
Litvinenko, Alexander
2017-05-16
We consider an uncertainty quantification problem in aerodynamic simulations. We identify input uncertainties, classify them, suggest an appropriate statistical model and, finally, estimate the propagation of these uncertainties into the solution (pressure, velocity and density fields as well as the lift and drag coefficients). The deterministic problem under consideration is a compressible transonic Reynolds-averaged Navier-Stokes flow around an airfoil with random/uncertain data. Input uncertainties include: uncertain angle of attack, the Mach number, random perturbations in the airfoil geometry, mesh, shock location, turbulence model and parameters of this turbulence model. This problem requires efficient numerical/statistical methods since it is computationally expensive, especially for the uncertainties caused by random geometry variations which involve a large number of variables. In the numerical section we compare five methods, including quasi-Monte Carlo quadrature, polynomial chaos with coefficients determined by sparse quadrature, a gradient-enhanced version of Kriging, radial basis functions and point collocation polynomial chaos, in their efficiency in estimating statistics of aerodynamic performance upon random perturbation to the airfoil geometry [D. Liu et al. '17]. For modeling we used the TAU code, developed at DLR, Germany.
Uncertainties in physics calculations for gas cooled reactor cores
International Nuclear Information System (INIS)
1991-04-01
The meeting was attended by 29 participants from Austria, China, France, Germany, Japan, Switzerland, the Union of Soviet Socialist Republics and the United States of America and was subdivided into four technical sessions: Analytical methods, comparison of predictions with results from existing HTGRs, uncertainty evaluations (3 papers); Analytical methods, predictions of performance of future HTGRs, uncertainty evaluations - part 1 (5 papers); Analytical methods, predictions of performance of future HTGRs, uncertainty evaluations - part 2 (6 papers); Critical experiments - planning and results, uncertainty evaluations (5 papers). The participants presented 19 papers on behalf of their countries or organizations. A separate abstract was prepared for each of these papers. Refs, figs and tabs
Analytic stochastic regularization in fermionic gauge theories
International Nuclear Information System (INIS)
Abdalla, E.; Viana, R.L.
1987-11-01
We analyse the influence of the Analytic Stochastic Regularization method on gauge symmetry, evaluating the 1-loop photon propagator correction for spinor QED. Consequences in the non-abelian case are discussed. (author) [pt
Experimental uncertainty estimation and statistics for data having interval uncertainty.
Energy Technology Data Exchange (ETDEWEB)
Kreinovich, Vladik (Applied Biomathematics, Setauket, New York); Oberkampf, William Louis (Applied Biomathematics, Setauket, New York); Ginzburg, Lev (Applied Biomathematics, Setauket, New York); Ferson, Scott (Applied Biomathematics, Setauket, New York); Hajagos, Janos (Applied Biomathematics, Setauket, New York)
2007-05-01
This report addresses the characterization of measurements that include epistemic uncertainties in the form of intervals. It reviews the application of basic descriptive statistics to data sets which contain intervals rather than exclusively point estimates. It describes algorithms to compute various means, the median and other percentiles, variance, interquartile range, moments, confidence limits, and other important statistics and summarizes the computability of these statistics as a function of sample size and characteristics of the intervals in the data (degree of overlap, size and regularity of widths, etc.). It also reviews the prospects for analyzing such data sets with the methods of inferential statistics such as outlier detection and regressions. The report explores the tradeoff between measurement precision and sample size in statistical results that are sensitive to both. It also argues that an approach based on interval statistics could be a reasonable alternative to current standard methods for evaluating, expressing and propagating measurement uncertainties.
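The descriptive statistics discussed in the report behave differently on interval data. The easiest case is the sample mean: the set of attainable means is itself an interval, computed endpoint-wise, whereas bounds on statistics such as the variance are much harder (in general NP-hard) to compute. A sketch of the easy case with hypothetical data:

```python
def interval_mean(intervals):
    """Sample mean of interval-valued data: the set of attainable
    means is itself an interval, computed endpoint-wise."""
    n = len(intervals)
    lo = sum(a for a, b in intervals) / n
    hi = sum(b for a, b in intervals) / n
    return lo, hi

# hypothetical measurements reported as [low, high] intervals;
# the third is a point estimate (zero-width interval)
lo, hi = interval_mean([(1.0, 2.0), (3.0, 5.0), (4.0, 4.0)])
```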
Transformation of Bayesian posterior distribution into a basic analytical distribution
International Nuclear Information System (INIS)
Jordan Cizelj, R.; Vrbanic, I.
2002-01-01
Bayesian estimation is a well-known approach that is widely used in Probabilistic Safety Analyses for the estimation of input model reliability parameters, such as component failure rates or probabilities of failure upon demand. In this approach, a prior distribution, which contains some generic knowledge about a parameter, is combined with a likelihood function, which contains plant-specific data about the parameter. Depending on the type of prior distribution, the resulting posterior distribution can be estimated numerically or analytically. In many instances only a numerical Bayesian integration can be performed. In such a case the posterior is provided in the form of a tabular discrete distribution. On the other hand, it is much more convenient to have the uncertainty distribution of a parameter that is to be input into a PSA model provided in the form of some basic analytical probability distribution, such as a lognormal, gamma or beta distribution. One reason is that this enables much more convenient propagation of parameter uncertainties through the model up to the so-called top events, such as plant system unavailability or core damage frequency. Additionally, software tools used to run PSA models often require that a parameter's uncertainty distribution be defined as one of several allowed basic types of distributions. In such a case the posterior distribution that came as a product of Bayesian estimation needs to be transformed into an appropriate basic analytical form. In this paper, some approaches to the transformation of a posterior distribution into a basic probability distribution are proposed and discussed. They are illustrated by an example from the NPP Krško PSA model. (author)
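One simple way to carry out the transformation described above is moment matching in log space: fit the lognormal whose log-mean and log-variance equal those of the tabular posterior. A sketch with a hypothetical discrete posterior for a failure rate (the values are illustrative, not from any PSA model):

```python
import math

def lognormal_moment_match(values, probs):
    """Fit a lognormal to a tabular (discrete) posterior by matching
    the mean and variance of log(x): ln X ~ N(mu, sigma^2)."""
    mu = sum(p * math.log(v) for v, p in zip(values, probs))
    var = sum(p * (math.log(v) - mu) ** 2 for v, p in zip(values, probs))
    return mu, math.sqrt(var)

# hypothetical tabular posterior for a failure rate (per hour)
vals  = [1e-6, 2e-6, 5e-6, 1e-5]
probs = [0.2, 0.4, 0.3, 0.1]
mu, sigma = lognormal_moment_match(vals, probs)
```

Matching log-moments is only one choice; matching the arithmetic mean and variance, or selected percentiles, gives a different lognormal, and which fit is appropriate depends on what the PSA model propagates.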
Procedures for uncertainty and sensitivity analysis in repository performance assessment
International Nuclear Information System (INIS)
Poern, K.; Aakerlund, O.
1985-10-01
The objective of the project was mainly a literature study of available methods for the treatment of parameter uncertainty propagation and sensitivity aspects in complete models such as those concerning geologic disposal of radioactive waste. The study, which has run parallel with the development of a code package (PROPER) for computer assisted analysis of function, also aims at the choice of accurate, cost-effective methods for uncertainty and sensitivity analysis. Such a choice depends on several factors like the number of input parameters, the capacity of the model and the computer resources required to use the model. Two basic approaches are addressed in the report. In one of these the model of interest is directly simulated by an efficient sampling technique to generate an output distribution. Applying the other basic method the model is replaced by an approximating analytical response surface, which is then used in the sampling phase or in moment matching to generate the output distribution. Both approaches are illustrated by simple examples in the report. (author)
Uncertainty and global climate change research
Energy Technology Data Exchange (ETDEWEB)
Tonn, B.E. [Oak Ridge National Lab., TN (United States); Weiher, R. [National Oceanic and Atmospheric Administration, Boulder, CO (United States)
1994-06-01
The Workshop on Uncertainty and Global Climate Change Research was held March 22-23, 1994, in Knoxville, Tennessee. This report summarizes the results and recommendations of the workshop. The purpose of the workshop was to examine in depth the concept of uncertainty. From an analytical point of view, uncertainty is a central feature of global climate science, economics and decision making. The magnitude and complexity of the uncertainty surrounding global climate change has made it quite difficult to answer even the most simple and important of questions: whether potentially costly action is required now to ameliorate adverse consequences of global climate change, or whether delay is warranted to gain better information to reduce uncertainties. A major conclusion of the workshop is that multidisciplinary integrated assessment using decision analytic techniques as a foundation is key to addressing global change policy concerns. First, uncertainty must be dealt with explicitly and rigorously, since it is and will continue to be a key feature of analysis and recommendations on policy questions for years to come. Second, key policy questions and variables need to be explicitly identified and prioritized, and their uncertainty characterized, to guide the entire scientific, modeling, and policy analysis process. Multidisciplinary integrated assessment techniques and value-of-information methodologies are best suited for this task. In terms of the timeliness and relevance of developing and applying decision analytic techniques, the global change research and policy communities are moving rapidly toward integrated approaches to research design and policy analysis.
A spreadsheet approach to facilitate visualization of uncertainty in information.
Streit, Alexander; Pham, Binh; Brown, Ross
2008-01-01
Information uncertainty is inherent in many problems and is often subtle and complicated to understand. Although visualization is a powerful means for exploring and understanding information, information uncertainty visualization is ad hoc and not widespread. This paper identifies two main barriers to the uptake of information uncertainty visualization: firstly, the difficulty of modeling and propagating the uncertainty information; and secondly, the difficulty of mapping uncertainty to visual elements. To overcome these barriers, we extend the spreadsheet paradigm to encapsulate uncertainty details within cells. This creates an inherent awareness of the uncertainty associated with each variable. The spreadsheet can hide the uncertainty details, enabling the user to think simply in terms of variables. Furthermore, the system can aid with automated propagation of uncertainty information, since it is intrinsically aware of the uncertainty. The system also enables mapping the encapsulated uncertainty to visual elements via the formula language and a visualization sheet. Support for such low-level visual mapping provides flexibility to explore new techniques for information uncertainty visualization.
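The cell-encapsulation idea can be sketched as a small value-with-uncertainty type whose arithmetic propagates uncertainty automatically; this is a minimal sketch of the paradigm (first-order propagation, independent non-zero inputs, our own class name and formulas), not the authors' system.

```python
import math

class UCell:
    """Spreadsheet-style cell carrying a value plus a standard uncertainty.
    Arithmetic propagates the uncertainty (first-order, independent inputs),
    so the user can think simply in terms of variables."""
    def __init__(self, value, u=0.0):
        self.value, self.u = value, u
    def __add__(self, other):
        other = other if isinstance(other, UCell) else UCell(other)
        # absolute uncertainties add in quadrature for a sum
        return UCell(self.value + other.value, math.hypot(self.u, other.u))
    def __mul__(self, other):
        other = other if isinstance(other, UCell) else UCell(other)
        v = self.value * other.value
        # relative uncertainties add in quadrature for a product
        return UCell(v, abs(v) * math.hypot(self.u / self.value,
                                            other.u / other.value))
    def __repr__(self):
        return f"{self.value:.3g} ± {self.u:.2g}"

a = UCell(10.0, 0.5)
b = UCell(4.0, 0.2)
print(a + b)   # 14 ± 0.54
print(a * b)   # 40 ± 2.8
```

A visualization sheet, as described in the abstract, would then map `value` and `u` of each cell to visual variables instead of printing them.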
Gilli, L.
2013-01-01
This thesis presents the development and the implementation of an uncertainty propagation algorithm based on the concept of spectral expansion. The first part of the thesis is dedicated to the study of uncertainty propagation methodologies and to the analysis of spectral techniques. The concepts
Methodology for characterizing modeling and discretization uncertainties in computational simulation
Energy Technology Data Exchange (ETDEWEB)
ALVIN,KENNETH F.; OBERKAMPF,WILLIAM L.; RUTHERFORD,BRIAN M.; DIEGERT,KATHLEEN V.
2000-03-01
This research effort focuses on methodology for quantifying the effects of model uncertainty and discretization error on computational modeling and simulation. The work is directed towards developing methodologies which treat model form assumptions within an overall framework for uncertainty quantification, for the purpose of developing estimates of total prediction uncertainty. The present effort consists of work in three areas: framework development for sources of uncertainty and error in the modeling and simulation process which impact model structure; model uncertainty assessment and propagation through Bayesian inference methods; and discretization error estimation within the context of non-deterministic analysis.
International Nuclear Information System (INIS)
Landsberg, P.T.
1990-01-01
This paper explores how the quantum mechanics uncertainty relation can be considered to result from measurements. A distinction is drawn between the uncertainties obtained by scrutinising experiments and the standard deviation type of uncertainty definition used in quantum formalism. (UK)
An analysis of combined standard uncertainty for radiochemical measurements of environmental samples
International Nuclear Information System (INIS)
Berne, A.
1996-01-01
It is anticipated that future data acquisitions intended for use in radiological risk assessments will require the incorporation of uncertainty analysis. Often, only one aliquot of the sample is taken and a single determination is made. Under these circumstances, the total uncertainty is calculated using the "propagation of errors" approach. However, there is no agreement in the radioanalytical community as to the exact equations to use. The Quality Assurance/Metrology Division of the Environmental Measurements Laboratory has developed a systematic process to compute uncertainties in constituent components of the analytical procedure, as well as the combined standard uncertainty (CSU). The equations for computation are presented here, with examples of their use. They have also been incorporated into a code for use in the spreadsheet application QuattroPro™. Using the spreadsheet with appropriate inputs permits an analysis of the variations in the CSU as a function of several different variables. The relative importance of the "counting uncertainty" can also be ascertained.
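A minimal sketch of such a spreadsheet-style CSU computation, assuming a measurement model that is a pure product/quotient of uncorrelated factors (the component values below are hypothetical, not the laboratory's equations):

```python
import math

# Illustrative component relative standard uncertainties (hypothetical values):
components = {
    "counting statistics": 0.030,   # e.g. sqrt(N)/N for the net count
    "detector efficiency": 0.015,
    "chemical yield":      0.020,
    "sample aliquot mass": 0.002,
}

# For a product/quotient model, the combined standard uncertainty is the
# quadrature sum of the relative components (propagation of errors with
# no correlations assumed).
u_c = math.sqrt(sum(u * u for u in components.values()))
print(f"combined relative standard uncertainty: {u_c:.4f}")

# Relative importance of the counting uncertainty alone:
share = components["counting statistics"] ** 2 / u_c ** 2
print(f"counting contribution to variance: {share:.0%}")
```

Varying one component at a time in such a sheet shows how the CSU responds, which is the analysis the abstract describes.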
Uncertainty of the variation in length of gauge blocks by mechanical comparison: a worked example
Matus, M.
2012-09-01
This paper is a study on the determination of the measurement uncertainty for a relatively simple and widespread calibration task. The measurement of a special form deviation of gauge blocks using the so-called five-point technique is discussed in detail. It is shown that the mainstream treatment of the measurement uncertainty (i.e. propagation of uncertainties) cannot be applied to this problem for principal reasons; the use of Supplement 1 of the GUM (Monte Carlo method) is mandatory. The proposed model equation is probably the simplest ‘real world’ example where the use of Supplement 1 of the GUM not only gives better results, but gives results at all. The model is simple enough to serve as a didactical example. Explicit analytical expressions for the probability density functions, expectation values, uncertainties and coverage intervals are given, which are helpful for the validation of dedicated software products. Finally, complete avoidance of the standardized form parameters in calibration certificates is proposed. The statement of ‘corner deviations’ would be much more useful, especially for the evaluation of key comparisons.
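The central point, that the law of propagation of uncertainty can fail for a non-differentiable form parameter while the GUM Supplement 1 Monte Carlo method still works, can be sketched as follows. The span-type max-minus-min parameter and all numbers here are our own illustrative assumptions, not the paper's model.

```python
import random, statistics

random.seed(1)

# Four corner length deviations, each measured with standard uncertainty u;
# nominal values all zero (hypothetical numbers).
u = 10.0  # nm

def form_parameter(d):
    # A span-type form parameter: largest minus smallest corner deviation.
    return max(d) - min(d)

# Mainstream first-order propagation of uncertainties cannot be applied:
# max(d) - min(d) is not differentiable at the nominal point (0, 0, 0, 0),
# so there are no sensitivity coefficients to propagate.

# GUM Supplement 1: propagate the input distributions themselves.
samples = [form_parameter([random.gauss(0.0, u) for _ in range(4)])
           for _ in range(50000)]
print(round(statistics.mean(samples), 1), "nm expected span")
```

The Monte Carlo estimate is clearly non-zero (the expected range of four N(0, u) draws is about 2.06 u), which is exactly the result the analytical first-order treatment cannot deliver here.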
Soft computing approaches to uncertainty propagation in environmental risk mangement
Kumar, Vikas
2008-01-01
Real-world problems, especially those involving natural systems, are complex and composed of many indeterminate components, which in many cases exhibit non-linear relationships. The conventional models based on analytical techniques that are currently used to understand and predict the behaviour of such systems can be very complicated and inflexible when facing the imprecision and complexity of a real-world system. The treatm...
Propagation of Radar-Rainfall Uncertainty in Runoff Predictions
National Research Council Canada - National Science Library
Ogden, Fred
2001-01-01
.... It consists of two papers that have been submitted for publication based on the research. The objective of this project is to quantitatively evaluate the worth of radar-rainfall estimates for physically based hydrologic modeling...
Evaluating measurement uncertainty in fluid phase equilibrium calculations
van der Veen, Adriaan M. H.
2018-04-01
The evaluation of measurement uncertainty in accordance with the ‘Guide to the expression of uncertainty in measurement’ (GUM) has not yet become widespread in physical chemistry. With only the law of the propagation of uncertainty from the GUM, many of these uncertainty evaluations would be cumbersome, as models are often non-linear and require iterative calculations. The methods from GUM supplements 1 and 2 enable the propagation of uncertainties under most circumstances. Experimental data in physical chemistry are used, for example, to derive reference property data and support trade—all applications where measurement uncertainty plays an important role. This paper aims to outline how the methods for evaluating and propagating uncertainty can be applied to some specific cases with a wide impact: deriving reference data from vapour pressure data, a flash calculation, and the use of an equation-of-state to predict the properties of both phases in a vapour-liquid equilibrium. The three uncertainty evaluations demonstrate that the methods of GUM and its supplements are a versatile toolbox that enable us to evaluate the measurement uncertainty of physical chemical measurements, including the derivation of reference data, such as the equilibrium thermodynamical properties of fluids.
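The contrast between the GUM law of propagation of uncertainty and the Supplement 1 Monte Carlo method can be illustrated on a simple non-linear vapour-pressure model. The Antoine constants below are the standard tabulated values for water (mmHg, °C); the temperature uncertainty is an assumption for illustration, not data from the paper.

```python
import math, random

# Antoine equation for water (mmHg, deg C), constants from standard tables:
A, B, C = 8.07131, 1730.63, 233.426

def p_sat(T):
    return 10 ** (A - B / (C + T))

T0, uT = 25.0, 0.1  # temperature and its standard uncertainty (assumed)

# Law of propagation of uncertainty (first-order GUM): u(p) = |dp/dT| * u(T)
h = 1e-6
dpdT = (p_sat(T0 + h) - p_sat(T0 - h)) / (2 * h)
u_lpu = abs(dpdT) * uT

# GUM Supplement 1: Monte Carlo propagation through the non-linear model
random.seed(2)
mc = [p_sat(random.gauss(T0, uT)) for _ in range(100000)]
mean = sum(mc) / len(mc)
u_mc = math.sqrt(sum((x - mean) ** 2 for x in mc) / (len(mc) - 1))
print(round(u_lpu, 3), round(u_mc, 3))  # nearly identical here
```

Over such a small temperature range the model is nearly linear and the two routes agree; for strongly non-linear or iterative models such as a flash calculation, only the Monte Carlo route remains straightforward, which is the point the paper develops.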
Resonance propagation in heavy-ion scattering
Indian Academy of Sciences (India)
Equation (8) [garbled in the source] gives a measure of the strength of resonance production in the collision (except for the free space resonance propagator). With this we get .... This exercise would be of use if the theoretical formalism describes the reaction dynamics correctly and the data do not have much uncertainty. Alternatively ...
Airyprime beams and their propagation characteristics
International Nuclear Information System (INIS)
Zhou, Guoquan; Chen, Ruipin; Ru, Guoyun
2014-01-01
A type of Airyprime beam is introduced in this letter. An analytical expression for Airyprime beams passing through a separable ABCD paraxial optical system is derived. The beam propagation factor of the Airyprime beam is proved to be 3.676. An analytical expression for the kurtosis parameter of an Airyprime beam passing through a separable ABCD paraxial optical system is also presented. The kurtosis parameter of the Airyprime beam passing through a separable ABCD paraxial optical system depends on the two ratios B/(Az_rx) and B/(Az_ry). As a numerical example, the propagation characteristics of an Airyprime beam are demonstrated in free space. In the source plane, the Airyprime beam has nine lobes, one of which is the central dominant lobe. In the far field, the Airyprime beam becomes a dark-hollow beam with four uniform lobes. The evolution of an Airyprime beam propagating in free space is well exhibited. Upon propagation, the intensity distribution of the Airyprime beam becomes flatter and the kurtosis parameter decreases from the maximum value 2.973 to a saturated value 1.302. The Airyprime beam is also compared with the second-order elegant Hermite–Gaussian beam. The novel propagation characteristics of Airyprime beams suggest potential applications such as optical trapping. (letter)
Error propagation analysis for a sensor system
International Nuclear Information System (INIS)
Yeater, M.L.; Hockenbury, R.W.; Hawkins, J.; Wilkinson, J.
1976-01-01
As part of a program to develop reliability methods for operational use with reactor sensors and protective systems, error propagation analyses are being made for each model. An example is a sensor system computer simulation model, in which the sensor system signature is convoluted with a reactor signature to show the effect of each in revealing or obscuring information contained in the other. The error propagation analysis models the system and signature uncertainties and sensitivities, whereas the simulation models the signatures and by extensive repetitions reveals the effect of errors in various reactor input or sensor response data. In the approach for the example presented, the errors accumulated by the signature (set of ''noise'' frequencies) are successively calculated as it is propagated stepwise through a system comprised of sensor and signal processing components. Additional modeling steps include a Fourier transform calculation to produce the usual power spectral density representation of the product signature, and some form of pattern recognition algorithm
Semiclassical propagation: Hilbert space vs. Wigner representation
Gottwald, Fabian; Ivanov, Sergei D.
2018-03-01
A unified viewpoint on the van Vleck and Herman-Kluk propagators in Hilbert space and their recently developed counterparts in Wigner representation is presented. Based on this viewpoint, the Wigner Herman-Kluk propagator is conceptually the most general one. Nonetheless, the respective semiclassical expressions for expectation values in terms of the density matrix and the Wigner function are mathematically proven here to coincide. The only remaining difference is a mere technical flexibility of the Wigner version in choosing the Gaussians' width for the underlying coherent states beyond minimal uncertainty. This flexibility is investigated numerically on prototypical potentials and it turns out to provide neither qualitative nor quantitative improvements. Given the aforementioned generality, utilizing the Wigner representation for semiclassical propagation thus leads to the same performance as employing the respective most-developed (Hilbert-space) methods for the density matrix.
Czech Academy of Sciences Publication Activity Database
Křivánková, Ludmila
-, č. 22 (2011), s. 718-719 ISSN 1472-3395 Institutional research plan: CEZ:AV0Z40310501 Keywords: analytical chemistry * analytical methods * nanotechnologies Subject RIV: CB - Analytical Chemistry, Separation
Bruce, William J; Maxwell, E A; Sneddon, I N
1963-01-01
Analytic Trigonometry details the fundamental concepts and underlying principles of analytic geometry. The title aims to address the shortcomings in the instruction of trigonometry by considering basic theories of learning and pedagogy. The text first covers the essential elements from elementary algebra, plane geometry, and analytic geometry. Next, the selection tackles the trigonometric functions of angles in general, basic identities, and solutions of equations. The text also deals with the trigonometric functions of real numbers. The fifth chapter details the inverse trigonometric functions
Federal Laboratory Consortium — The Analytical Lab specializes in Oil and Hydraulic Fluid Analysis, Identification of Unknown Materials, Engineering Investigations, Qualification Testing (to support...
Summary of existing uncertainty methods
International Nuclear Information System (INIS)
Glaeser, Horst
2013-01-01
A summary of existing and most used uncertainty methods is presented, and the main features are compared. One of these methods is the order statistics method based on Wilks' formula. It is applied in safety research as well as in licensing. This method was first proposed by GRS for use in deterministic safety analysis, and is now used by many organisations world-wide. Its advantage is that the number of potential uncertain input and output parameters is not limited to a small number. Such a limitation was necessary for the first demonstration of the Code Scaling Applicability Uncertainty Method (CSAU) by the United States Nuclear Regulatory Commission (USNRC). They did not apply Wilks' formula in their statistical method propagating input uncertainties to obtain the uncertainty of a single output variable, like peak cladding temperature. A Phenomena Identification and Ranking Table (PIRT) was set up in order to limit the number of uncertain input parameters and, consequently, the number of calculations to be performed. Another purpose of such a PIRT process is to identify the most important physical phenomena that a computer code should be able to calculate. The validation of the code should be focused on the identified phenomena. Response surfaces are used in some applications, replacing the computer code for performing a high number of calculations. The second well-known uncertainty method is the Uncertainty Methodology Based on Accuracy Extrapolation (UMAE) and the follow-up method 'Code with the Capability of Internal Assessment of Uncertainty' (CIAU) developed by the University of Pisa. Unlike the statistical approaches, the CIAU does compare experimental data with calculation results. It does not consider uncertain input parameters. Therefore, the CIAU is highly dependent on the experimental database. The accuracy gained from the comparison between experimental data and calculated results is extrapolated to obtain the uncertainty of the system code predictions.
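Wilks' formula itself is a one-liner; a sketch for the one-sided, first-order case, which yields the classic 59-run figure for a 95% quantile at 95% confidence:

```python
import math

def wilks_runs(coverage=0.95, confidence=0.95):
    """Minimum number of code runs N so that the largest sampled output
    bounds the `coverage` quantile with the given one-sided confidence
    (first-order Wilks formula): 1 - coverage**N >= confidence."""
    return math.ceil(math.log(1.0 - confidence) / math.log(coverage))

print(wilks_runs())            # 59: the classic 95%/95% result
print(wilks_runs(0.95, 0.99))  # 90 runs for 99% confidence
```

Because N does not depend on the number of uncertain inputs, only on the requested coverage and confidence, the method avoids the input-parameter limitation that the PIRT process had to impose for CSAU.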
International Nuclear Information System (INIS)
Verma, Surendra P.; Andaverde, Jorge; Santoyo, E.
2006-01-01
We used error propagation theory to calculate uncertainties in static formation temperature estimates in geothermal and petroleum wells from three widely used methods (the line-source or Horner method; the spherical and radial heat flow method; and the cylindrical heat source method). Although these methods commonly use the ordinary least-squares linear regression model considered in this study, we also evaluated two variants of a weighted least-squares linear regression model for the actual relationship between the bottom-hole temperature and the corresponding time functions. Equations based on error propagation theory were derived for estimating uncertainties in the time function of each analytical method. These uncertainties, in conjunction with those on bottom-hole temperatures, were used to estimate the individual weighting factors required for applying the two variants of the weighted least-squares regression model. Standard deviations and 95% confidence limits of the intercept were calculated for both types of linear regression. Applications showed that static formation temperatures computed with the spherical and radial heat flow method were generally greater (at the 95% confidence level) than those from the other two methods under study. When typical measurement errors of 0.25 h in time and 5 °C in bottom-hole temperature were assumed for the weighted least-squares model, the uncertainties in the estimated static formation temperatures were greater than those for the ordinary least-squares model. However, if these errors were smaller (about 1% in time and 0.5% in temperature measurements), the weighted least-squares linear regression model would generally provide smaller uncertainties for the estimated temperatures than the ordinary least-squares linear regression model. Therefore, the weighted model would be statistically correct and more appropriate for such applications. We also suggest that at least 30 precise and accurate BHT and time measurements along with
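A sketch of the weighted least-squares step with the textbook intercept-uncertainty formulas; the data values are hypothetical Horner-plot-style numbers, and the intercept plays the role of the static formation temperature estimate (the time function tending to zero at infinite shut-in time).

```python
def weighted_linear_fit(x, y, u_y):
    """Weighted least-squares fit y = a + b*x with weights 1/u_y**2,
    returning intercept a, slope b, and the standard deviation of a
    (textbook closed-form expressions)."""
    w = [1.0 / u ** 2 for u in u_y]
    S = sum(w)
    Sx = sum(wi * xi for wi, xi in zip(w, x))
    Sy = sum(wi * yi for wi, yi in zip(w, y))
    Sxx = sum(wi * xi * xi for wi, xi in zip(w, x))
    Sxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
    delta = S * Sxx - Sx ** 2
    a = (Sxx * Sy - Sx * Sxy) / delta   # intercept: the SFT estimate
    b = (S * Sxy - Sx * Sy) / delta
    ua = (Sxx / delta) ** 0.5           # standard deviation of the intercept
    return a, b, ua

# Hypothetical data: time function vs bottom-hole temperature (deg C)
tf = [0.6, 0.4, 0.3, 0.2, 0.1]
bht = [80.0, 84.1, 86.0, 88.1, 89.9]
a, b, ua = weighted_linear_fit(tf, bht, [0.5] * 5)
print(round(a, 1), round(ua, 2))  # extrapolated SFT and its standard deviation
```

In the paper's two weighted variants, the per-point uncertainties `u_y` would themselves come from propagating the time and temperature measurement errors through each method's time function.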
Impact of model defect and experimental uncertainties on evaluated output
International Nuclear Information System (INIS)
Neudecker, D.; Capote, R.; Leeb, H.
2013-01-01
One of the current major problems in nuclear data evaluation is the unreasonably small evaluated uncertainties often obtained. These small uncertainties are partly attributed to missing correlations of experimental uncertainties as well as to deficiencies of the model employed for the prior information. In this article, both uncertainty sources are included in an evaluation of 55Mn cross-sections for incident neutrons. Their impact on the evaluated output is studied using a prior obtained by the Full Bayesian Evaluation Technique and a prior obtained by the nuclear model program EMPIRE. It is shown analytically and by means of an evaluation that unreasonably small evaluated uncertainties can be obtained not only if correlated systematic uncertainties of the experiment are neglected but also if prior uncertainties are smaller than or about the same magnitude as the experimental ones. Furthermore, it is shown that including model defect uncertainties in the evaluation of 55Mn leads to larger evaluated uncertainties for channels where the model is deficient. It is concluded that including correlated experimental uncertainties is equally important as model defect uncertainties, if the model calculations deviate significantly from the measurements. -- Highlights: • We study possible causes of unreasonably small evaluated nuclear data uncertainties. • Two different formulations of model defect uncertainties are presented and compared. • Smaller prior than experimental uncertainties cause too small evaluated ones. • Neglected correlations of experimental uncertainties cause too small evaluated ones. • Including model defect uncertainties in the prior improves the evaluated output
Groves, Curtis E.
2013-01-01
Spacecraft thermal protection systems are at risk of being damaged by airflow produced by Environmental Control Systems. There are inherent uncertainties and errors associated with using Computational Fluid Dynamics to predict the airflow field around a spacecraft from the Environmental Control System. This proposal describes an approach to validate the uncertainty in using Computational Fluid Dynamics to predict airflow speeds around an encapsulated spacecraft. The research described here is cutting edge. Quantifying the uncertainty in analytical predictions is imperative to the success of any simulation-based product. The method could provide an alternative to the traditional "validation by test only" mentality. This method could be extended to other disciplines and has the potential to provide uncertainty for any numerical simulation, thus lowering the cost of performing these verifications while increasing the confidence in the predictions. Spacecraft requirements can include a maximum airflow speed to protect delicate instruments during ground processing. Computational Fluid Dynamics can be used to verify these requirements; however, the model must be validated by test data. The proposed research project includes the following three objectives and methods. Objective one is to develop, model, and perform a Computational Fluid Dynamics analysis of three (3) generic, non-proprietary, environmental control systems and spacecraft configurations. Several commercially available solvers have the capability to model the turbulent, highly three-dimensional, incompressible flow regime. The proposed method uses FLUENT and OpenFOAM. Objective two is to perform an uncertainty analysis of the Computational Fluid Dynamics model using the methodology found in "Comprehensive Approach to Verification and Validation of Computational Fluid Dynamics Simulations". This method requires three separate grids and solutions, which quantify the error bars around
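The three-grid step under objective two is commonly carried out with Roache's grid convergence index (GCI) via Richardson extrapolation; a sketch assuming a constant refinement ratio between the grids (the solution values below are hypothetical, not results from this proposal):

```python
import math

def gci_fine(f1, f2, f3, r=2.0, Fs=1.25):
    """Roache's grid convergence index for the fine grid, from solutions on
    three systematically refined grids (f1 finest, f3 coarsest) with a
    constant refinement ratio r; Fs = 1.25 is the customary safety factor
    for three-grid studies."""
    p = math.log(abs(f3 - f2) / abs(f2 - f1)) / math.log(r)  # observed order
    e21 = abs((f2 - f1) / f1)                                # relative error
    return Fs * e21 / (r ** p - 1.0), p

# Hypothetical airflow-speed results on fine, medium, and coarse grids:
gci, p = gci_fine(1.000, 1.008, 1.040)
print(round(p, 2), round(100 * gci, 2), "%")
```

The resulting GCI is the numerical-uncertainty "error bar" on the fine-grid solution, to be combined with input and model-form uncertainties in the overall validation comparison.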
Believable statements of uncertainty and believable science
International Nuclear Information System (INIS)
Lindstrom, R.M.
2017-01-01
Nearly 50 years ago, two landmark papers appeared that should have cured the problem of ambiguous uncertainty statements in published data. Eisenhart's paper in Science called for statistically meaningful numbers, and Currie's Analytical Chemistry paper revealed the wide range in common definitions of detection limit. Confusion and worse can result when uncertainties are misinterpreted or ignored. The recent stories of cold fusion, variable radioactive decay, and piezonuclear reactions provide cautionary examples in which prior probability has been neglected. We show examples from our laboratory and others to illustrate the fact that uncertainty depends on both statistical and scientific judgment. (author)
Large-uncertainty intelligent states for angular momentum and angle
International Nuclear Information System (INIS)
Goette, Joerg B; Zambrini, Roberta; Franke-Arnold, Sonja; Barnett, Stephen M
2005-01-01
The equality in the uncertainty principle for linear momentum and position is obtained for states which also minimize the uncertainty product. However, in the uncertainty relation for angular momentum and angular position both sides of the inequality are state dependent and therefore the intelligent states, which satisfy the equality, do not necessarily give a minimum for the uncertainty product. In this paper, we highlight the difference between intelligent states and minimum uncertainty states by investigating a class of intelligent states which obey the equality in the angular uncertainty relation while having an arbitrarily large uncertainty product. To develop an understanding for the uncertainties of angle and angular momentum for the large-uncertainty intelligent states we compare exact solutions with analytical approximations in two limiting cases
Uncertainty for Part Density Determination: An Update
Energy Technology Data Exchange (ETDEWEB)
Valdez, Mario Orlando [Los Alamos National Laboratory
2016-12-14
Accurate and precise density measurements by hydrostatic weighing require the use of an analytical balance, configured with a suspension system, to measure the weight of a part both in water and in air. Additionally, the densities of these media (water and air) must be precisely known for the part density determination. To validate the accuracy and precision of these measurements, uncertainty statements are required. The work in this report is a revision of an original report written more than a decade ago, specifically applying principles and guidelines suggested by the Guide to the Expression of Uncertainty in Measurement (GUM) for determining the part density uncertainty through sensitivity analysis. In this work, updated derivations are provided, an original example is revised with the updated derivations, and an appendix is provided, devoted solely to uncertainty evaluations using Monte Carlo techniques, specifically using the NIST Uncertainty Machine, as a viable alternative method.
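A sketch of the Monte Carlo route for the part-density uncertainty, using the standard hydrostatic-weighing relation with air buoyancy included. The balance readings and their uncertainties are hypothetical, and this plain Monte Carlo loop stands in for, but is not, the NIST Uncertainty Machine.

```python
import random, statistics

def part_density(A, B, rho_w, rho_a):
    # A: balance reading in air, B: reading in water (g);
    # V = (A - B)/(rho_w - rho_a) is the part volume, and the air-buoyancy
    # correction adds rho_a back to the apparent density A/V.
    V = (A - B) / (rho_w - rho_a)
    return A / V + rho_a

# Nominal readings and standard uncertainties (hypothetical values):
A, uA = 152.300, 0.001           # g
B, uB = 132.550, 0.002           # g
rho_w, u_w = 0.99705, 0.00002    # g/cm^3, water near 25 C
rho_a, u_a = 0.00118, 0.00001    # g/cm^3, laboratory air

random.seed(3)
rho = [part_density(random.gauss(A, uA), random.gauss(B, uB),
                    random.gauss(rho_w, u_w), random.gauss(rho_a, u_a))
       for _ in range(100000)]
print(round(statistics.mean(rho), 4), "+/-",
      round(statistics.stdev(rho), 4), "g/cm^3")
```

The same samples also give coverage intervals directly, which is the main practical advantage of the Monte Carlo alternative over the sensitivity-analysis derivations.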
Salomons, E.; Polinder, H.; Lohman, W.; Zhou, H.; Borst, H.
2009-01-01
A new engineering model for sound propagation in cities is presented. The model is based on numerical and experimental studies of sound propagation between street canyons. Multiple reflections in the source canyon and the receiver canyon are taken into account in an efficient way, while weak
Modelling the gluon propagator
Energy Technology Data Exchange (ETDEWEB)
Leinweber, D.B.; Parrinello, C.; Skullerud, J.I.; Williams, A.G
1999-03-01
Scaling of the Landau gauge gluon propagator calculated at β = 6.0 and at β = 6.2 is demonstrated. A variety of functional forms for the gluon propagator calculated on a large (32³ × 64) lattice at β = 6.0 are investigated.
Ferroukhi, H.; Leray, O.; Hursin, M.; Vasiliev, A.; Perret, G.; Pautz, A.
2014-04-01
At the Paul Scherrer Institut (PSI), a methodology for nuclear data uncertainty propagation in CASMO-5M (C5M) assembly calculations is under development. This paper presents a preliminary application of this methodology to C5M decay heat calculations. Applying a stochastic sampling method, nuclear decay data uncertainties are first propagated for the cooling phase only. Thereafter, the uncertainty propagation is enlarged to gradually account for cross-section as well as fission yield uncertainties during the depletion phase. On that basis, assembly heat load uncertainties as well as total uncertainty for the entire pool are quantified for cooling times up to one year. The relative contributions from the various types of nuclear data uncertainties are in this context also estimated.
Measurement uncertainty in pharmaceutical analysis and its application
Directory of Open Access Journals (Sweden)
Marcus Augusto Lyrio Traple
2014-02-01
The measurement uncertainty provides complete information about an analytical result. This is important because many decisions of compliance or non-compliance in the pharmaceutical industry are based on analytical results. The aim of this work was to evaluate and discuss the estimation of uncertainty in pharmaceutical analysis. Uncertainty is a useful tool in assessing the compliance or non-compliance of in-process and final pharmaceutical products, as well as in assessing pharmaceutical equivalence and in stability studies of drug products. Keywords: Measurement uncertainty, Method validation, Pharmaceutical analysis, Quality control
Directory of Open Access Journals (Sweden)
Griffin Patrick
2017-01-01
A rigorous treatment of the uncertainty in the underlying nuclear data on silicon displacement damage metrics is presented. The uncertainty in the cross sections and recoil atom spectra are propagated into the energy-dependent uncertainty contribution in the silicon displacement kerma and damage energy using a Total Monte Carlo treatment. An energy-dependent covariance matrix is used to characterize the resulting uncertainty. A strong correlation between different reaction channels is observed in the high energy neutron contributions to the displacement damage metrics which supports the necessity of using a Monte Carlo based method to address the nonlinear nature of the uncertainty propagation.
Results from the Application of Uncertainty Methods in the CSNI Uncertainty Methods Study (UMS)
International Nuclear Information System (INIS)
Glaeser, H.
2008-01-01
Within licensing procedures there is the incentive to replace the conservative requirements for code application by a 'best estimate' concept supplemented by an uncertainty analysis to account for predictive uncertainties of code results. Methods have been developed to quantify these uncertainties. The Uncertainty Methods Study (UMS) Group, following a mandate from CSNI, has compared five methods for calculating the uncertainty in the predictions of advanced 'best estimate' thermal-hydraulic codes. Most of the methods identify and combine input uncertainties. The major differences between the predictions of the methods came from the choice of uncertain parameters and the quantification of the input uncertainties, i.e. the width of the uncertainty ranges. Therefore, suitable experimental and analytical information has to be selected to specify these uncertainty ranges or distributions. After the closure of the Uncertainty Methods Study (UMS) and after the report was issued, comparison calculations of experiment LSTF-SB-CL-18 were performed by the University of Pisa using different versions of the RELAP5 code. It turned out that the version used by two of the participants calculated a 170 K higher peak clad temperature compared with other versions using the same input deck. This may contribute to the differences in the upper limit of the uncertainty ranges.
Uncertainty analysis in Monte Carlo criticality computations
International Nuclear Information System (INIS)
Qi Ao
2011-01-01
Highlights: ► Two types of uncertainty methods for k_eff Monte Carlo computations are examined. ► Sampling method has the least restrictions on perturbation but computing resources. ► Analytical method is limited to small perturbations on material properties. ► Practicality relies on efficiency, multiparameter applicability and data availability. - Abstract: Uncertainty analysis is imperative for nuclear criticality risk assessments when using Monte Carlo neutron transport methods to predict the effective neutron multiplication factor (k_eff) for fissionable material systems. For the validation of Monte Carlo codes for criticality computations against benchmark experiments, code accuracy and precision are measured by both the computational bias and the uncertainty in the bias. The uncertainty in the bias accounts for known or quantified experimental, computational and model uncertainties. For the application of Monte Carlo codes to criticality analysis of fissionable material systems, an administrative margin of subcriticality must be imposed to provide additional assurance of subcriticality for any unknown or unquantified uncertainties. Because of the substantial impact of the administrative margin of subcriticality on the economics and safety of nuclear fuel cycle operations, recent interest in reducing this margin makes uncertainty analysis in criticality safety computations more risk-significant. This paper provides an overview of the two most popular k_eff uncertainty analysis methods for Monte Carlo criticality computations: (1) sampling-based methods, and (2) analytical methods. Examples are given to demonstrate their usage in k_eff uncertainty analysis due to uncertainties in both neutronic and non-neutronic parameters of fissionable material systems.
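The sampling-based route can be sketched with a one-group infinite-medium multiplication factor: sample the uncertain parameters, recompute k each time, and compare the scatter against the analytical quadrature result. All nominal values and uncertainties below are assumed for illustration; a real analysis would sample nuclear data inputs of a transport code, not a formula.

```python
import random, statistics

# One-group infinite-medium multiplication factor: k_inf = nu*Sigma_f / Sigma_a.
# Illustrative nominal values and relative standard uncertainties (assumed):
nu, u_nu = 2.43, 0.002      # neutrons per fission, 0.2% relative
sig_f, u_f = 0.052, 0.010   # fission cross section (1/cm), 1.0% relative
sig_a, u_a = 0.101, 0.008   # absorption cross section (1/cm), 0.8% relative

# Sampling-based method: perturb all parameters jointly and recompute k.
random.seed(5)
k = [random.gauss(nu, nu * u_nu) * random.gauss(sig_f, sig_f * u_f)
     / random.gauss(sig_a, sig_a * u_a) for _ in range(100000)]
mean, std = statistics.mean(k), statistics.stdev(k)
print(f"k_inf = {mean:.4f} +/- {std:.4f}")

# Analytical method (small perturbations): for a product/quotient model
# the relative variances simply add in quadrature.
u_analytic = mean * (u_nu**2 + u_f**2 + u_a**2) ** 0.5
print(f"analytic u(k) = {u_analytic:.4f}")
```

For these small perturbations the two methods agree closely, which matches the highlights: the analytical route is cheap but limited to small perturbations, while sampling handles arbitrary perturbations at computational cost.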
Decay heat uncertainty quantification of MYRRHA
Fiorito Luca; Buss Oliver; Hoefer Axel; Stankovskiy Alexey; Eynde Gert Van den
2017-01-01
MYRRHA is a lead-bismuth cooled MOX-fueled accelerator driven system (ADS) currently in the design phase at SCK·CEN in Belgium. The correct evaluation of the decay heat and of its uncertainty level is very important for the safety demonstration of the reactor. In the first part of this work we assessed the decay heat released by the MYRRHA core using the ALEPH-2 burnup code. The second part of the study focused on the nuclear data uncertainty and covariance propagation to the MYRRHA decay hea...
Chemical model reduction under uncertainty
Malpica Galassi, Riccardo
2017-03-06
A general strategy for analysis and reduction of uncertain chemical kinetic models is presented, and its utility is illustrated in the context of ignition of hydrocarbon fuel–air mixtures. The strategy is based on a deterministic analysis and reduction method which employs computational singular perturbation analysis to generate simplified kinetic mechanisms, starting from a detailed reference mechanism. We model uncertain quantities in the reference mechanism, namely the Arrhenius rate parameters, as random variables with prescribed uncertainty factors. We propagate this uncertainty to obtain the probability of inclusion of each reaction in the simplified mechanism. We propose probabilistic error measures to compare predictions from the uncertain reference and simplified models, based on the comparison of the uncertain dynamics of the state variables, where the mixture entropy is chosen as progress variable. We employ the construction for the simplification of an uncertain mechanism in an n-butane–air mixture homogeneous ignition case, where a 176-species, 1111-reactions detailed kinetic model for the oxidation of n-butane is used with uncertainty factors assigned to each Arrhenius rate pre-exponential coefficient. This illustration is employed to highlight the utility of the construction, and the performance of a family of simplified models produced depending on chosen thresholds on importance and marginal probabilities of the reactions.
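The inclusion-probability idea can be illustrated with a toy Monte Carlo: sample each uncertain rate coefficient and count how often it passes an importance test. The lognormal sampling and the simple threshold criterion below are hypothetical stand-ins for the CSP-based importance indices used in the paper.

```python
import math
import random

def inclusion_probability(k_nominal, unc_factor, threshold,
                          n_samples=5000, seed=1):
    """Monte Carlo estimate of the probability that a reaction's sampled
    rate coefficient exceeds an importance threshold (illustrative
    stand-in for a CSP importance criterion)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        # Uncertainty factor f means k lies in [k/f, k*f] at ~2 sigma,
        # i.e. lognormal with sigma = ln(f)/2.
        sigma = math.log(unc_factor) / 2.0
        k = k_nominal * math.exp(rng.gauss(0.0, sigma))
        if k > threshold:
            hits += 1
    return hits / n_samples

p = inclusion_probability(k_nominal=1.0, unc_factor=3.0, threshold=0.5)
```

For an uncertainty factor of 3 and a threshold at half the nominal rate, the reaction is included in roughly nine out of ten sampled mechanisms.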
The Drag-based Ensemble Model (DBEM) for Coronal Mass Ejection Propagation
Dumbović, Mateja; Čalogović, Jaša; Vršnak, Bojan; Temmer, Manuela; Mays, M. Leila; Veronig, Astrid; Piantschitsch, Isabell
2018-02-01
The drag-based model for heliospheric propagation of coronal mass ejections (CMEs) is a widely used analytical model that can predict CME arrival time and speed at a given heliospheric location. It is based on the assumption that the propagation of CMEs in interplanetary space is solely under the influence of magnetohydrodynamical drag, where CME propagation is determined by the CME's initial properties as well as the properties of the ambient solar wind. We present an upgraded version, the drag-based ensemble model (DBEM), that employs ensemble modeling to produce a distribution of possible ICME arrival times and speeds. Multiple runs using uncertainty ranges for the input values can be performed in almost real time, within a few minutes. This allows us to define the most likely ICME arrival times and speeds, quantify prediction uncertainties, and determine forecast confidence. The performance of the DBEM is evaluated and compared to that of the ensemble WSA-ENLIL+Cone model (ENLIL) using the same sample of events. It is found that the mean error is ME = ‑9.7 hr, the mean absolute error MAE = 14.3 hr, and the root mean square error RMSE = 16.7 hr, which is somewhat higher than, but comparable to, the ENLIL errors (ME = ‑6.1 hr, MAE = 12.8 hr, and RMSE = 14.4 hr). Overall, DBEM and ENLIL show a similar performance. Furthermore, we find that in both models fast CMEs are predicted to arrive earlier than observed, most likely owing to the physical limitations of the models, but possibly also related to an overestimation of the CME initial speed for fast CMEs.
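The underlying drag-based equation of motion, dv/dt = -γ(v - w)|v - w|, has a closed-form solution, which is what makes near-real-time ensemble runs cheap. A minimal sketch of the ensemble idea follows; the input ranges and the 50-hour evaluation time are illustrative assumptions, not the operational DBEM settings.

```python
import random
import statistics

def dbm_speed(t, v0, w, gamma):
    """Closed-form drag-based model speed at time t for
    dv/dt = -gamma*(v - w)*|v - w|, with v0 the initial CME speed
    and w the ambient solar wind speed."""
    dv = v0 - w
    sign = 1.0 if dv >= 0 else -1.0
    return w + dv / (1.0 + sign * gamma * dv * t)

def dbem_ensemble(t, n=2000, seed=2):
    """Toy ensemble: perturb the initial speed and drag parameter within
    assumed uncertainty ranges and collect the speed distribution."""
    rng = random.Random(seed)
    speeds = []
    for _ in range(n):
        v0 = rng.uniform(900.0, 1100.0)       # km/s, assumed range
        gamma = rng.uniform(1.0e-8, 3.0e-8)   # 1/km, assumed range
        speeds.append(dbm_speed(t, v0, w=400.0, gamma=gamma))
    return statistics.mean(speeds), statistics.stdev(speeds)

mean_v, spread_v = dbem_ensemble(t=50 * 3600.0)  # ~50 hours, in seconds
```

The ensemble mean shows the expected deceleration toward the solar wind speed, and the spread provides the forecast-confidence interval the abstract refers to.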
A Bayesian Framework of Uncertainties Integration in 3D Geological Model
Liang, D.; Liu, X.
2017-12-01
A 3D geological model can describe complicated geological phenomena in an intuitive way, but its application may be limited by uncertain factors. Great progress has been made over the years, yet many studies decompose the uncertainties of a geological model and analyze them item by item from each source, ignoring the comprehensive impact of multi-source uncertainties. To evaluate this synthetical uncertainty, we choose probability distributions to quantify uncertainty and propose a Bayesian framework of uncertainty integration. With this framework, we integrate data errors, spatial randomness, and cognitive information into a posterior distribution to evaluate the synthetical uncertainty of the geological model. Uncertainties propagate and accumulate in the modeling process, so the gradual integration of multi-source uncertainty is a kind of simulation of uncertainty propagation. Bayesian inference accomplishes uncertainty updating in the modeling process. The maximum entropy principle works well for estimating the prior probability distribution, ensuring that the prior is subject to the constraints supplied by the given information with minimum prejudice. In the end, we obtain a posterior distribution that evaluates the synthetical uncertainty of the geological model; it represents the synthetical impact of all the uncertain factors on the spatial structure of the model. The framework provides a solution for evaluating the synthetical impact of multi-source uncertainties on a geological model and an approach to studying the uncertainty propagation mechanism in geological modeling.
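The gradual multi-source updating can be illustrated with a discrete Bayes step: each new data source multiplies the prior by its likelihood and renormalizes, so chaining updates mimics how uncertainties cumulate through the modeling process. The three structural hypotheses and all probability values below are hypothetical.

```python
from fractions import Fraction  # exact arithmetic keeps the example transparent

def bayes_update(prior, likelihood):
    """One step of Bayesian uncertainty updating over discrete hypotheses
    (e.g. candidate structural interpretations of a geological model)."""
    posterior = {h: prior[h] * likelihood[h] for h in prior}
    z = sum(posterior.values())
    return {h: p / z for h, p in posterior.items()}

# Maximum-entropy prior: uniform over three structural hypotheses
prior = {"A": Fraction(1, 3), "B": Fraction(1, 3), "C": Fraction(1, 3)}

# Likelihood of the observed borehole data under each hypothesis (assumed)
after_boreholes = bayes_update(prior, {"A": Fraction(8, 10),
                                       "B": Fraction(3, 10),
                                       "C": Fraction(1, 10)})

# A second, independent data source (e.g. seismic) updates again (assumed)
posterior = bayes_update(after_boreholes, {"A": Fraction(6, 10),
                                           "B": Fraction(5, 10),
                                           "C": Fraction(4, 10)})
```

The final posterior is the synthetical uncertainty: the joint effect of both data sources on the belief in each structural interpretation.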
Uncertainty estimation of ultrasonic thickness measurement
International Nuclear Information System (INIS)
Yassir Yassen, Abdul Razak Daud; Mohammad Pauzi Ismail; Abdul Aziz Jemain
2009-01-01
The most important factor to consider when selecting an ultrasonic thickness measurement technique is its reliability. Only when the uncertainty of a measurement result is known can one judge whether the result is adequate for its intended purpose. The objective of this study is to model the ultrasonic thickness measurement function, to identify the most contributing input uncertainty components, and to estimate the uncertainty of the ultrasonic thickness measurement results. We assumed that five error sources contribute significantly to the final error: calibration velocity, transit time, zero offset, measurement repeatability, and resolution. By applying the law of propagation of uncertainty to the model function, a combined uncertainty of the ultrasonic thickness measurement was obtained. In this study the modeling function of ultrasonic thickness measurement was derived. Using this model, the estimation of the uncertainty of the final output result was found to be reliable. It was also found that the most contributing input uncertainty components are calibration velocity, transit time linearity, and zero offset. (author)
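For a pulse-echo measurement the model function is d = v·t/2, and the law of propagation of uncertainty combines the components through the partial derivatives of that function. A sketch with illustrative numbers for a steel plate follows; the component magnitudes are assumptions, not the study's values, and repeatability would enter the same way as another additive term.

```python
import math

def thickness_uncertainty(v, u_v, t, u_t, u_zero, u_res):
    """Combined standard uncertainty of an ultrasonic thickness
    measurement d = v*t/2 via the law of propagation of uncertainty,
    assuming uncorrelated components: calibration velocity, transit
    time, zero offset, and resolution."""
    d = v * t / 2.0
    # Partial derivatives of d = v*t/2
    dd_dv = t / 2.0
    dd_dt = v / 2.0
    u_d = math.sqrt((dd_dv * u_v) ** 2 +
                    (dd_dt * u_t) ** 2 +
                    u_zero ** 2 +
                    u_res ** 2)
    return d, u_d

# Illustrative values: steel (v ~ 5920 m/s), ~10 mm plate
d, u_d = thickness_uncertainty(v=5920.0, u_v=15.0,
                               t=3.38e-6, u_t=2e-9,
                               u_zero=5e-6, u_res=10e-6)
print(f"d = {d * 1e3:.3f} mm, u = {u_d * 1e6:.1f} um")
```

With these assumed inputs the velocity term dominates, consistent with the abstract's finding that calibration velocity is among the most contributing components.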
Heat pulse propagation studies in TFTR
International Nuclear Information System (INIS)
Fredrickson, E.D.; Callen, J.D.; Colchin, R.J.
1986-02-01
The time scales for sawtooth repetition and heat pulse propagation are much longer (10's of msec) in the large tokamak TFTR than in previous, smaller tokamaks. This extended time scale coupled with more detailed diagnostics has led us to revisit the analysis of the heat pulse propagation as a method to determine the electron heat diffusivity, chi/sub e/, in the plasma. A combination of analytic and computer solutions of the electron heat diffusion equation are used to clarify previous work and develop new methods for determining chi/sub e/. Direct comparison of the predicted heat pulses with soft x-ray and ECE data indicates that the space-time evolution is diffusive. However, the chi/sub e/ determined from heat pulse propagation usually exceeds that determined from background plasma power balance considerations by a factor ranging from 2 to 10. Some hypotheses for resolving this discrepancy are discussed. 11 refs., 19 figs., 1 tab
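The diffusive time-to-peak analysis can be illustrated in the simplest slab geometry, where the Green's function gives t_peak = x²/(2χ) exactly; cylindrical geometry and damping, as in the actual TFTR analysis, change the numerical factor. A self-checking sketch:

```python
import math

def pulse_amplitude(x, t, chi):
    """1-D slab Green's function of the heat diffusion equation: a
    delta-function pulse launched at x=0, t=0 (the simplest analogue
    of a sawtooth heat pulse)."""
    return (math.exp(-x * x / (4.0 * chi * t)) /
            math.sqrt(4.0 * math.pi * chi * t))

def chi_from_time_to_peak(x, chi_true=2.0):
    """Recover chi from the time-to-peak of the pulse at position x.
    For this slab Green's function, t_peak = x^2 / (2*chi)."""
    ts = [0.0001 * i for i in range(1, 30001)]
    # Scan time numerically for the peak of the pulse amplitude at x
    t_peak = max(ts, key=lambda t: pulse_amplitude(x, t, chi_true))
    return x * x / (2.0 * t_peak)

chi_est = chi_from_time_to_peak(x=0.3)
```

The recovered diffusivity matches the value used to generate the pulse, which is the consistency the heat-pulse method relies on; the TFTR discrepancy arises when this kinematic estimate is compared against power-balance values.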
Heat pulse propagation studies in TFTR
Energy Technology Data Exchange (ETDEWEB)
Fredrickson, E.D.; Callen, J.D.; Colchin, R.J.; Efthimion, P.C.; Hill, K.W.; Izzo, R.; Mikkelsen, D.R.; Monticello, D.A.; McGuire, K.; Bell, J.D.
1986-02-01
The time scales for sawtooth repetition and heat pulse propagation are much longer (10's of msec) in the large tokamak TFTR than in previous, smaller tokamaks. This extended time scale coupled with more detailed diagnostics has led us to revisit the analysis of the heat pulse propagation as a method to determine the electron heat diffusivity, chi/sub e/, in the plasma. A combination of analytic and computer solutions of the electron heat diffusion equation are used to clarify previous work and develop new methods for determining chi/sub e/. Direct comparison of the predicted heat pulses with soft x-ray and ECE data indicates that the space-time evolution is diffusive. However, the chi/sub e/ determined from heat pulse propagation usually exceeds that determined from background plasma power balance considerations by a factor ranging from 2 to 10. Some hypotheses for resolving this discrepancy are discussed. 11 refs., 19 figs., 1 tab.
Propagation of solar disturbances - Theories and models
Wu, S. T.
1983-01-01
Recent theoretical developments and construction of several models for the propagation of solar disturbances from the sun and their continuation throughout heliospheric space are discussed. Emphasis centers on physical mechanisms as well as mathematical techniques (i.e., analytical and numerical methods). This outline will lead to a discussion of the state-of-the-art of theoretically based modeling efforts in this area. It is shown that the fundamental theory for the study of propagation of disturbances in heliospheric space is centered around the self-consistent analysis of wave and mass motion within the context of magnetohydrodynamics in which the small scale structures will be modified by kinetic effects. Finally, brief mention is made of some interesting problems for which attention is needed for advancement of the understanding of the physics of large scale propagation of solar disturbances in heliospheric space.
Burdette, A C
1971-01-01
Analytic Geometry covers several fundamental aspects of analytic geometry needed for advanced subjects, including calculus. This book is composed of 12 chapters that review the principles, concepts, and analytic proofs of geometric theorems, families of lines, the normal equation of the line, and related matters. Other chapters highlight the application of graphing, foci, directrices, eccentricity, and conic-related topics. The remaining chapters deal with the concepts of polar and rectangular coordinates, surfaces and curves, and planes. This book will prove useful to undergraduate trigonometric st
Importance of Nuclear Data Uncertainties in Criticality Calculations
Ceresio, C.; Cabellos, O.; Martínez, J. S.; Diez, C. J.
2012-05-01
The aim of this paper is to study the importance of nuclear data uncertainties in the prediction of the uncertainties in keff for LWR (Light Water Reactor) unit-cells. The first part of this work is focused on the comparison of different sensitivity/uncertainty propagation methodologies based on TSUNAMI and MCNP codes; this study is undertaken for a fresh-fuel at different operational conditions. The second part of this work studies the burnup effect where the indirect contribution due to the uncertainty of the isotopic evolution is also analyzed.
Uncertainty Analysis of Power Systems Using Collocation
2008-05-01
sensitivity can be inferred and to construct surrogate models with which interpolation can be used to propagate PDFs. These techniques are applied to...with time and use. There is significant environmental interaction in the form of wind and waves; a large wave can partially or fully expose the...Power System We analyze uncertainty in a Simulink model describing the operation of a large pulse load reflecting the power consumption of a rail gun [8
Feynman propagator in curved space-time
International Nuclear Information System (INIS)
Candelas, P.; Raine, D.J.
1977-01-01
The Wick rotation is generalized in a covariant manner so as to apply to curved manifolds in a way that is independent of the analytic properties of the manifold. This enables us to show that various methods for defining a Feynman propagator to be found in the literature are equivalent where they are applicable. We are also able to discuss the relation between certain regularization methods that have been employed
Wave Propagation in Bimodular Geomaterials
Kuznetsova, Maria; Pasternak, Elena; Dyskin, Arcady; Pelinovsky, Efim
2016-04-01
Observations and laboratory experiments show that fragmented or layered geomaterials have the mechanical response dependent on the sign of the load. The most adequate model accounting for this effect is the theory of bimodular (bilinear) elasticity - a hyperelastic model with different elastic moduli for tension and compression. For most of geo- and structural materials (cohesionless soils, rocks, concrete, etc.) the difference between elastic moduli is such that their modulus in compression is considerably higher than that in tension. This feature has a profound effect on oscillations [1]; however, its effect on wave propagation has not been comprehensively investigated. It is believed that incorporation of bilinear elastic constitutive equations within theory of wave dynamics will bring a deeper insight to the study of mechanical behaviour of many geomaterials. The aim of this paper is to construct a mathematical model and develop analytical methods and numerical algorithms for analysing wave propagation in bimodular materials. Geophysical and exploration applications and applications in structural engineering are envisaged. The FEM modelling of wave propagation in a 1D semi-infinite bimodular material has been performed with the use of Marlow potential [2]. In the case of the initial load expressed by a harmonic pulse loading strong dependence on the pulse sign is observed: when tension is applied before compression, the phenomenon of disappearance of negative (compressive) strains takes place. References 1. Dyskin, A., Pasternak, E., & Pelinovsky, E. (2012). Periodic motions and resonances of impact oscillators. Journal of Sound and Vibration, 331(12), 2856-2873. 2. Marlow, R. S. (2008). A Second-Invariant Extension of the Marlow Model: Representing Tension and Compression Data Exactly. In ABAQUS Users' Conference.
Propagators and dimensional reduction of hot SU(2) gauge theory
Cucchieri, A.; Karsch, F.; Petreczky, P.
2001-08-01
We investigate the large distance behavior of the electric and magnetic propagators of hot SU(2) gauge theory in different gauges using lattice simulations of the full four-dimensional (4D) theory and the effective, dimensionally reduced, 3D theory. A comparison of the 3D and 4D propagators suggests that dimensional reduction works surprisingly well down to the temperature T=2Tc. Within statistical uncertainty the electric screening mass is found to be gauge independent. The magnetic propagator, on the other hand, exhibits a complicated gauge dependent structure at low momentum.
Groves, Curtis Edward
2014-01-01
Spacecraft thermal protection systems are at risk of being damaged due to airflow produced from Environmental Control Systems. There are inherent uncertainties and errors associated with using Computational Fluid Dynamics to predict the airflow field around a spacecraft from the Environmental Control System. This paper describes an approach to quantify the uncertainty in using Computational Fluid Dynamics to predict airflow speeds around an encapsulated spacecraft without the use of test data. Quantifying the uncertainty in analytical predictions is imperative to the success of any simulation-based product. The method could provide an alternative to the traditional "validation by test only" mentality. This method could be extended to other disciplines and has potential to provide uncertainty for any numerical simulation, thus lowering the cost of performing these verifications while increasing the confidence in those predictions. Spacecraft requirements can include a maximum airflow speed to protect delicate instruments during ground processing. Computational Fluid Dynamics can be used to verify these requirements; however, the model must be validated by test data. This research includes the following three objectives and methods. Objective one is to develop, model, and perform a Computational Fluid Dynamics analysis of three (3) generic, non-proprietary, environmental control systems and spacecraft configurations. Several commercially available and open source solvers have the capability to model the turbulent, highly three-dimensional, incompressible flow regime. The proposed method uses FLUENT, STARCCM+, and OPENFOAM. Objective two is to perform an uncertainty analysis of the Computational Fluid Dynamics model using the methodology found in Comprehensive Approach to Verification and Validation of Computational Fluid Dynamics Simulations. This method requires three separate grids and solutions, which quantify the error bars around Computational Fluid Dynamics predictions
Groves, Curtis Edward
2014-01-01
Spacecraft thermal protection systems are at risk of being damaged due to airflow produced from Environmental Control Systems. There are inherent uncertainties and errors associated with using Computational Fluid Dynamics to predict the airflow field around a spacecraft from the Environmental Control System. This paper describes an approach to quantify the uncertainty in using Computational Fluid Dynamics to predict airflow speeds around an encapsulated spacecraft without the use of test data. Quantifying the uncertainty in analytical predictions is imperative to the success of any simulation-based product. The method could provide an alternative to the traditional "validation by test only" mentality. This method could be extended to other disciplines and has potential to provide uncertainty for any numerical simulation, thus lowering the cost of performing these verifications while increasing the confidence in those predictions. Spacecraft requirements can include a maximum airflow speed to protect delicate instruments during ground processing. Computational Fluid Dynamics can be used to verify these requirements; however, the model must be validated by test data. This research includes the following three objectives and methods. Objective one is to develop, model, and perform a Computational Fluid Dynamics analysis of three (3) generic, non-proprietary, environmental control systems and spacecraft configurations. Several commercially available and open source solvers have the capability to model the turbulent, highly three-dimensional, incompressible flow regime. The proposed method uses FLUENT, STARCCM+, and OPENFOAM. Objective two is to perform an uncertainty analysis of the Computational Fluid Dynamics model using the methodology found in "Comprehensive Approach to Verification and Validation of Computational Fluid Dynamics Simulations". This method requires three separate grids and solutions, which quantify the error bars around Computational Fluid Dynamics
Investment Decisions with Two-Factor Uncertainty
Compernolle, T.; Huisman, Kuno; Kort, Peter; Lavrutich, Maria; Nunes, Claudia; Thijssen, J.J.J.
2018-01-01
This paper considers investment problems in real options with non-homogeneous two-factor uncertainty. It shows that, despite claims made in the literature, the method used to derive an analytical solution in one dimensional problems cannot be straightforwardly extended to problems with two
Uncertainty quantification of phase-based motion estimation on noisy sequence of images
Sarrafi, Aral; Mao, Zhu
2017-04-01
Optical measurement and motion estimation based on the acquired sequence of images is one of the most recent sensing techniques developed in the last decade or so. As a modern non-contact sensing technique, motion estimation and optical measurements provide a full-field awareness without any mass loading or change of stiffness in structures, which is unavoidable using other conventional transducers (e.g. accelerometers, strain gauges, and LVDTs). Among several motion estimation techniques prevalent in computer vision, phase-based motion estimation is one of the most reliable and accurate methods. However, contamination of the sequence of images with numerous sources of noise is inevitable, and the performance of the phase-based motion estimation could be affected due to the lighting changes, image acquisition noise, and the camera's intrinsic sensor noise. Within this context, the uncertainty quantification (UQ) of the phase-based motion estimation (PME) has been investigated in this paper. Based on a normality assumption, a framework has been provided in order to characterize the propagation of the uncertainty from the acquired images to the estimated motion. The established analytical solution is validated via Monte-Carlo simulations using a set of simulation data. The UQ model in the paper is able to predict the order statistics of the noise influence, in which the uncertainty bounds of the estimated motion are given, after processing the contaminated sequence of images.
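The validation strategy in the abstract, comparing the analytical (normality-based) uncertainty with a Monte Carlo estimate, can be shown on a deliberately simplified one-wavenumber phase model; the noise level and wavelength below are assumptions, not the paper's settings.

```python
import math
import random
import statistics

def analytic_motion_std(sigma_phase, k):
    """Analytical standard deviation of the motion estimate
    delta = (phi2 - phi1) / k when each phase carries i.i.d. Gaussian
    noise of std sigma_phase (normality assumption)."""
    return math.sqrt(2.0) * sigma_phase / k

def monte_carlo_motion_std(sigma_phase, k, true_delta=0.3,
                           n=20_000, seed=3):
    """Monte Carlo check: contaminate the two phases with noise and
    measure the empirical spread of the estimated motion."""
    rng = random.Random(seed)
    phi1_true = 1.0
    phi2_true = phi1_true + k * true_delta
    estimates = []
    for _ in range(n):
        phi1 = phi1_true + rng.gauss(0.0, sigma_phase)
        phi2 = phi2_true + rng.gauss(0.0, sigma_phase)
        estimates.append((phi2 - phi1) / k)
    return statistics.stdev(estimates)

k = 2.0 * math.pi / 16.0   # spatial wavenumber: 16-pixel wavelength (assumed)
sa = analytic_motion_std(0.05, k)
sm = monte_carlo_motion_std(0.05, k)
```

The analytical and Monte Carlo spreads agree closely, mirroring the paper's validation of the propagation framework against simulation data.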
Climate change decision-making: Model & parameter uncertainties explored
Energy Technology Data Exchange (ETDEWEB)
Dowlatabadi, H.; Kandlikar, M.; Linville, C.
1995-12-31
A critical aspect of climate change decision-making is the uncertainty in current understanding of the socioeconomic, climatic, and biogeochemical processes involved. Decision-making processes are much better informed if these uncertainties are characterized and their implications understood. Quantitative analysis of these uncertainties serves to inform decision makers about the likely outcome of policy initiatives, and helps set priorities for research so that the outcome ambiguities faced by decision-makers are reduced. A family of integrated assessment models of climate change has been developed at Carnegie Mellon. These models are distinguished from other integrated assessment efforts in that they were designed from the outset to characterize and propagate parameter, model, value, and decision-rule uncertainties. The most recent of these models is ICAM 2.1. This model includes representations of the processes of demographics, economic activity, emissions, atmospheric chemistry, climate and sea level change, impacts from these changes, policies for emissions mitigation, and adaptation to change. The model has over 800 objects, of which about one half are used to represent uncertainty. In this paper we show that, when considering parameter uncertainties, the relative contribution of climatic uncertainties is most important, followed by uncertainties in damage calculations, economic uncertainties, and direct aerosol forcing uncertainties. When considering model structure uncertainties we find that the choice of policy is often dominated by the model structure choice rather than by parameter uncertainties.
Characterizing spatial uncertainty when integrating social data in conservation planning.
Lechner, A M; Raymond, C M; Adams, V M; Polyakov, M; Gordon, A; Rhodes, J R; Mills, M; Stein, A; Ives, C D; Lefroy, E C
2014-12-01
Recent conservation planning studies have presented approaches for integrating spatially referenced social (SRS) data with a view to improving the feasibility of conservation action. We reviewed the growing conservation literature on SRS data, focusing on elicited or stated preferences derived through social survey methods such as choice experiments and public participation geographic information systems. Elicited SRS data includes the spatial distribution of willingness to sell, willingness to pay, willingness to act, and assessments of social and cultural values. We developed a typology for assessing elicited SRS data uncertainty which describes how social survey uncertainty propagates when projected spatially and the importance of accounting for spatial uncertainty such as scale effects and data quality. These uncertainties will propagate when elicited SRS data is integrated with biophysical data for conservation planning and may have important consequences for assessing the feasibility of conservation actions. To explore this issue further, we conducted a systematic review of the elicited SRS data literature. We found that social survey uncertainty was commonly tested for, but that these uncertainties were ignored when projected spatially. Based on these results we developed a framework which will help researchers and practitioners estimate social survey uncertainty and use these quantitative estimates to systematically address uncertainty within an analysis. This is important when using SRS data in conservation applications because decisions need to be made irrespective of data quality and well characterized uncertainty can be incorporated into decision theoretic approaches. © 2014 Society for Conservation Biology.
Robustness to strategic uncertainty
Andersson, O.; Argenton, C.; Weibull, J.W.
We introduce a criterion for robustness to strategic uncertainty in games with continuum strategy sets. We model a player's uncertainty about another player's strategy as an atomless probability distribution over that player's strategy set. We call a strategy profile robust to strategic uncertainty
Fission Spectrum Related Uncertainties
Energy Technology Data Exchange (ETDEWEB)
G. Aliberti; I. Kodeli; G. Palmiotti; M. Salvatores
2007-10-01
The paper presents a preliminary uncertainty analysis related to potential uncertainties on the fission spectrum data. Consistent results are shown for a reference fast reactor design configuration and for experimental thermal configurations. However the results obtained indicate the need for further analysis, in particular in terms of fission spectrum uncertainty data assessment.
Practical application of uncertainty-based validation assessment
Energy Technology Data Exchange (ETDEWEB)
Anderson, M. C. (Mark C.); Hylok, J. E. (Jeffrey E.); Maupin, R. D. (Ryan D.); Rutherford, A. C. (Amanda C.)
2004-01-01
comparison between analytical and experimental data; (4) Selection of a comprehensive, but tenable set of parameters for uncertainty propagation; and (5) Limitations of modeling capabilities and the finite element method for approximating high frequency dynamic behavior of real systems. This paper illustrates these issues by describing the details of the validation assessment for an example system. The system considered is referred to as the 'threaded assembly'. It consists of a titanium mount to which a lower mass is attached by a tape joint, an upper mass is connected via bolted joints, and a pair of aluminum shells is attached via a complex threaded joint. The system is excited impulsively by an explosive load applied over a small area of the aluminum shells. The validation assessment of the threaded assembly is described systematically so that the reader can see the logic behind the process. The simulation model is described to provide context. The feature and parameter selection processes are discussed in detail because they determine not only a large measure of the efficacy of the process, but its cost as well. The choice of uncertainty propagation method for the simulation is covered in some detail and results are presented. Validation experiments are described and results are presented along with experimental uncertainties. Finally, simulation results are compared with experimental data, and conclusions about the validity of these results are drawn within the context of the estimated uncertainties.
Optimization of FRAP uncertainty analysis option
International Nuclear Information System (INIS)
Peck, S.O.
1979-10-01
The automated uncertainty analysis option that has been incorporated in the FRAP codes (FRAP-T5 and FRAPCON-2) provides the user with a means of obtaining uncertainty bands on code predicted variables at user-selected times during a fuel pin analysis. These uncertainty bands are obtained by multiple single fuel pin analyses to generate data which can then be analyzed by second order statistical error propagation techniques. In this process, a considerable amount of data is generated and stored on tape. The user has certain choices to make regarding which independent variables are to be used in the analysis and what order of error propagation equation should be used in modeling the output response. To aid the user in these decisions, a computer program, ANALYZ, has been written and added to the uncertainty analysis option package. A variety of considerations involved in fitting response surface equations and certain pit-falls of which the user should be aware are discussed. An equation is derived expressing a residual as a function of a fitted model and an assumed true model. A variety of experimental design choices are discussed, including the advantages and disadvantages of each approach. Finally, a description of the subcodes which constitute program ANALYZ is provided
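The response-surface workflow (fit a low-order polynomial to a handful of code runs, then propagate input uncertainty through the cheap surrogate) can be sketched as follows; the stand-in "code output" and all numbers are illustrative, not FRAP models.

```python
import numpy as np

def second_order_response_surface(model, x0, dx, n_design=7):
    """Fit a second-order (quadratic) response surface to a code output
    over a 1-D design around x0; the expensive code is replaced here by
    a cheap stand-in function."""
    xs = np.linspace(x0 - dx, x0 + dx, n_design)
    ys = np.array([model(x) for x in xs])
    coeffs = np.polyfit(xs, ys, deg=2)   # c2*x^2 + c1*x + c0
    return np.poly1d(coeffs)

def propagate_through_surface(surface, x0, u_x, n=50_000, seed=4):
    """Error propagation by sampling the fitted surface instead of
    re-running the expensive fuel-pin analysis."""
    rng = np.random.default_rng(seed)
    samples = surface(rng.normal(x0, u_x, size=n))
    return samples.mean(), samples.std()

# Illustrative "code output": a response that varies with one input
def model(h):
    return 1200.0 - 40.0 * np.log(h)

surf = second_order_response_surface(model, x0=5.0, dx=1.0)
mean_T, u_T = propagate_through_surface(surf, x0=5.0, u_x=0.3)
```

This captures the trade-off the abstract discusses: a second-order fit reproduces mild curvature in the response, while the residual between the fitted and true models is the kind of lack-of-fit error the ANALYZ program helps the user assess.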
Uncertainty of Doppler reactivity worth due to uncertainties of JENDL-3.2 resonance parameters
Energy Technology Data Exchange (ETDEWEB)
Zukeran, Atsushi [Hitachi Ltd., Hitachi, Ibaraki (Japan). Power and Industrial System R and D Div.; Hanaki, Hiroshi; Nakagawa, Tuneo; Shibata, Keiichi; Ishikawa, Makoto
1998-03-01
An analytical formula for the Resonance Self-shielding Factor (f-factor) is derived from the resonance integral (J-function) based on the NR approximation, and an analytical expression for the Doppler reactivity worth (ρ) is also obtained using this result. Uncertainties of the f-factor and the Doppler reactivity worth are evaluated on the basis of sensitivity coefficients to the resonance parameters. The uncertainty of the Doppler reactivity worth at 487 K is about 4% for the PNC Large Fast Breeder Reactor. (author)
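Sensitivity-coefficient propagation of this kind combines the relative parameter uncertainties in quadrature when the parameters are taken as uncorrelated. The sensitivity and uncertainty values below are hypothetical, not JENDL-3.2 data.

```python
import math

def doppler_worth_uncertainty(sensitivities, rel_uncertainties):
    """Relative uncertainty of the Doppler reactivity worth from
    sensitivity coefficients S_i = (p_i/rho) * (d rho / d p_i) to the
    resonance parameters and their relative (1-sigma) uncertainties,
    assuming uncorrelated parameters."""
    return math.sqrt(sum((s * u) ** 2
                         for s, u in zip(sensitivities, rel_uncertainties)))

# Hypothetical sensitivities to a few resonance parameters
S = [0.8, -0.5, 0.2]      # dimensionless sensitivity coefficients
u = [0.03, 0.04, 0.05]    # relative parameter uncertainties
u_rho = doppler_worth_uncertainty(S, u)
print(f"relative uncertainty of Doppler worth: {u_rho:.1%}")
```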
Subjective judgment on measure of data uncertainty
International Nuclear Information System (INIS)
Pronyaev, V.G.; Bytchkova, A.V.
2004-01-01
Integral parameters are considered which can be derived from the covariance matrix of the uncertainties and can serve as a general measure of uncertainty in comparisons of different fits. Using realistic examples and simple data model fits with a variable number of parameters, it is shown that the sum of all elements of the covariance matrix is the best general measure for characterizing and comparing uncertainties obtained in different model and non-model fits. Discussions also included the problem of the non-positive definiteness of the covariance matrix of the uncertainty of the cross sections obtained from the covariance matrix of the uncertainty of the parameters in cases where the number of parameters is less than the number of cross section points. Because the numerical inaccuracy of the calculations is always many orders of magnitude larger than machine zero, it was concluded that the calculated eigenvalues of semi-positive definite matrices have no machine zeros. Such covariance matrices can therefore be inverted when they are used in the error propagation equations, so the procedure for transforming semi-positive definite matrices into positive definite ones by introducing minimal changes into the matrix (changes equivalent to introducing additional non-informative parameters in the model) is generally not needed. Caution should still be observed, however, because there can be cases where the uncertainties are unphysical, e.g. integral parameters estimated with formally non-positive-definite covariance matrices
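Both points can be demonstrated on a small rank-deficient covariance matrix: the sum of all elements as a scalar uncertainty measure, and the eigenvalue-clipping repair that the abstract argues is usually unnecessary. The 3x2 sensitivity matrix and parameter variances are illustrative.

```python
import numpy as np

def total_uncertainty_measure(cov):
    """Sum of all covariance-matrix elements -- the scalar measure the
    abstract proposes; it equals the variance of the sum of all
    quantities described by the matrix."""
    return float(cov.sum())

def make_positive_definite(cov, eps=1e-9):
    """Minimal eigenvalue clipping to turn a semi-positive-definite
    covariance matrix into a positive-definite one, e.g. before
    inversion in error propagation equations. A sketch of the kind of
    repair the abstract argues is generally not needed."""
    vals, vecs = np.linalg.eigh(cov)
    vals = np.clip(vals, eps, None)
    return vecs @ np.diag(vals) @ vecs.T

# Rank-deficient example: 3 cross sections derived from 2 parameters
J = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])  # sensitivity matrix
cov_params = np.diag([0.04, 0.09])                   # parameter variances
cov_xs = J @ cov_params @ J.T      # semi-positive definite (rank 2)
fixed = make_positive_definite(cov_xs)
```

After clipping, the matrix admits a Cholesky factorization and can be inverted, while the total-uncertainty measure is essentially unchanged.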
Uncertainty and Cognitive Control
Directory of Open Access Journals (Sweden)
Faisal eMushtaq
2011-10-01
A growing body of neuroimaging, behavioural, and computational research has investigated the topic of outcome uncertainty in decision-making. Although evidence to date indicates that humans are very effective at learning to adapt to uncertain situations, the nature of the specific cognitive processes involved in the adaptation to uncertainty is still a matter of debate. In this article, we review evidence suggesting that cognitive control processes are at the heart of uncertainty in decision-making contexts. Available evidence suggests that: (1) there is a strong conceptual overlap between the constructs of uncertainty and cognitive control; (2) there is a remarkable overlap between the neural networks associated with uncertainty and the brain networks subserving cognitive control; (3) the perception and estimation of uncertainty might play a key role in monitoring processes and the evaluation of the need for control; (4) potential interactions between uncertainty and cognitive control might play a significant role in several affective disorders.
Regulating fisheries under uncertainty
DEFF Research Database (Denmark)
Hansen, Lars Gårn; Jensen, Frank
2017-01-01
Regulator uncertainty is decisive for whether price or quantity regulation maximizes welfare in fisheries. In this paper, we develop a model of fisheries regulation that includes ecological uncertainty, variable economic uncertainty as well as structural economic uncertainty. We aggregate the effects of these uncertainties into a single welfare measure for comparing tax and quota regulation. It is shown that quotas are always preferred to fees when structural economic uncertainty dominates. Since most regulators are subject to this kind of uncertainty, this result is a potentially important qualification of the pro-price regulation message dominating the fisheries economics literature. We also believe that the model of a fishery developed in this paper could be applied to the regulation of other renewable resources where regulators are subject to uncertainty either directly or with some...
Analytical Quadrics
Spain, Barry; Ulam, S; Stark, M
1960-01-01
Analytical Quadrics focuses on the analytical geometry of three dimensions. The book first discusses the theory of the plane, sphere, cone, cylinder, straight line, and central quadrics in their standard forms. The idea of the plane at infinity is introduced through the homogeneous Cartesian coordinates and applied to the nature of the intersection of three planes and to the circular sections of quadrics. The text also focuses on the paraboloid, including polar properties, center of a section, axes of plane section, and generators of the hyperbolic paraboloid. The book also touches on homogeneous coordi...
Uncertainty Estimation Cheat Sheet for Probabilistic Risk Assessment
Britton, Paul T.; Al Hassan, Mohammad; Ring, Robert W.
2017-01-01
"Uncertainty analysis itself is uncertain, therefore, you cannot evaluate it exactly." (Source Uncertain) Quantitative results for aerospace engineering problems are influenced by many sources of uncertainty. Uncertainty analysis aims to make a technical contribution to decision-making through the quantification of uncertainties in the relevant variables as well as through the propagation of these uncertainties up to the result. Uncertainty can be thought of as a measure of the 'goodness' of a result and is typically represented as statistical dispersion. This paper explains common measures of centrality and dispersion and, with examples, provides guidelines for how they may be estimated to ensure effective technical contributions to decision-making.
Comparison of evidence theory and Bayesian theory for uncertainty modeling
International Nuclear Information System (INIS)
Soundappan, Prabhu; Nikolaidis, Efstratios; Haftka, Raphael T.; Grandhi, Ramana; Canfield, Robert
2004-01-01
This paper compares Evidence Theory (ET) and Bayesian Theory (BT) for uncertainty modeling and decision under uncertainty, when the evidence about uncertainty is imprecise. The basic concepts of ET and BT are introduced, and the ways these theories model uncertainties, propagate them through systems, and assess the safety of those systems are presented. The ET and BT approaches are demonstrated and compared on challenge problems involving an algebraic function whose input variables are uncertain. The evidence about the input variables consists of intervals provided by experts. It is recommended that a decision-maker compute both the Bayesian probabilities of the outcomes of alternative actions and their plausibility and belief measures when evidence about uncertainty is imprecise, because this helps assess the importance of imprecision and the value of additional information. Finally, the paper presents and demonstrates a method for testing approaches for decision under uncertainty in terms of their effectiveness in making decisions.
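The belief and plausibility bookkeeping described here can be sketched for interval-valued expert evidence (a minimal Dempster-Shafer illustration; the focal intervals, masses and event are invented, not the paper's challenge problems):

```python
# Interval-valued evidence about an uncertain output y, as
# (focal interval, mass) pairs; masses sum to 1. All values invented.
focal = [((2.0, 5.0), 0.5),   # expert 1: y lies in [2, 5]
         ((4.0, 9.0), 0.3),   # expert 2: y lies in [4, 9]
         ((1.0, 10.0), 0.2)]  # expert 3: y lies in [1, 10]

event = (0.0, 6.0)  # event A: "y is at most 6", written as the interval [0, 6]

def belief(event, focal):
    """Sum of masses of focal intervals wholly contained in the event."""
    lo, hi = event
    return sum(m for (a, b), m in focal if lo <= a and b <= hi)

def plausibility(event, focal):
    """Sum of masses of focal intervals that intersect the event."""
    lo, hi = event
    return sum(m for (a, b), m in focal if a <= hi and b >= lo)

# Bel(A) <= P(A) <= Pl(A); the gap between the two bounds is exactly
# the "importance of imprecision" the paper recommends assessing.
print(belief(event, focal), plausibility(event, focal))
```

Here the interval [0.5, 1.0] brackets any Bayesian probability consistent with the evidence.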
Durability reliability analysis for corroding concrete structures under uncertainty
Zhang, Hao
2018-02-01
This paper presents a durability reliability analysis of reinforced concrete structures subject to the action of marine chloride. The focus is to provide insight into the role of epistemic uncertainties on durability reliability. The corrosion model involves a number of variables whose probabilistic characteristics cannot be fully determined due to the limited availability of supporting data. All sources of uncertainty, both aleatory and epistemic, should be included in the reliability analysis. Two methods are available to formulate the epistemic uncertainty: the imprecise probability-based method and the purely probabilistic method in which the epistemic uncertainties are modeled as random variables. The paper illustrates how the epistemic uncertainties are modeled and propagated in the two methods, and shows how epistemic uncertainties govern the durability reliability.
Urrutxua, H.; Sanjurjo-Rivo, M.; Peláez, J.
2013-12-01
In 2000 an in-house orbital propagator based on a set of redundant variables including Euler parameters was developed by the SDGUPM (former Grupo de Dinámica de Tethers). This propagator, called DROMO, was mainly used in numerical simulations of electrodynamic tethers. It was presented for the first time at the international meeting V Jornadas de Trabajo en Mecánica Celeste, held in Albarracín, Spain, in 2002 (see reference 1). The special perturbation method associated with DROMO can be consulted in reference 2. In 1975, André Deprit proposed in reference 3 a propagation scheme very similar to the one on which DROMO is based, using the ideal frame concept of Hansen. The different approaches used in references 3 and 2 gave rise to a small controversy. In this paper we carry out a different deduction of the DROMO propagator, underlining its close relation with the Hansen ideal frame concept, and also the similarities with and differences from the theory developed by Deprit in reference 3. Simultaneously, we introduce some improvements in the formulation that lead to a more synthetic propagator.
The uncertainty of modeled soil carbon stock change for Finland
Lehtonen, Aleksi; Heikkinen, Juha
2013-04-01
Countries should report the soil carbon stock changes of forests under the Kyoto Protocol. Under the Protocol, one can omit reporting of a carbon pool by verifying that the pool is not a source of carbon, which is especially tempting for the soil pool. However, verifying that the soils of a nation are not a source of carbon in a given year seems to be nearly impossible. The Yasso07 model was parametrized against various decomposition data using an MCMC method. Soil carbon changes in Finland between 1972 and 2011 were simulated with the Yasso07 model using litter input data derived from the National Forest Inventory (NFI) and fellings time series. The uncertainties of biomass models, litter turnover rates, NFI sampling and the Yasso07 model were propagated with Monte Carlo simulations. Due to the biomass estimation methods, the uncertainties of the various litter input sources (e.g. living trees, natural mortality and fellings) are strongly correlated with each other. We show how the original covariance matrices can be analytically combined, greatly reducing the number of simulated components. While doing the simulations we found that proper handling of correlations may be even more essential than accurate estimates of standard errors. As a preliminary result, we found that both Southern and Northern Finland were soil carbon sinks, with coefficients of variation (CV) varying between 10% and 25% when the model was driven with long-term constant weather data. When we applied annual weather data, soils were both sinks and sources of carbon and the CVs varied from 10% to 90%. This implies that the success of soil carbon sink verification depends on the weather data applied with the models. Because of this, the IPCC should provide clear guidance on the weather data to be applied with soil carbon models and on soil carbon sink verification. In UNFCCC reporting, the carbon sinks of forest biomass have typically been averaged over five years; a similar period for soil model weather data would be logical.
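The point about correlated litter inputs can be illustrated with a small Monte Carlo sketch (the means, standard errors and 0.8 correlations below are invented for illustration, not the NFI values):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical litter-input components (living trees, natural mortality,
# fellings) that share biomass-model errors, hence correlate strongly.
mean = np.array([100.0, 20.0, 30.0])   # litter inputs
sd = np.array([10.0, 4.0, 5.0])        # standard errors
corr = np.array([[1.0, 0.8, 0.8],
                 [0.8, 1.0, 0.8],
                 [0.8, 0.8, 1.0]])
cov = corr * np.outer(sd, sd)

# Analytical combination: the variance of the total litter input is the
# sum of all elements of the combined covariance matrix.
var_total = cov.sum()          # 317.0

# Monte Carlo check, sampling correlated inputs via Cholesky.
L = np.linalg.cholesky(cov)
samples = mean + rng.standard_normal((100_000, 3)) @ L.T
totals = samples.sum(axis=1)

# Ignoring the correlations would give only sum(sd**2) = 141.0,
# badly understating the combined uncertainty.
print(var_total, totals.var())
```

This is the abstract's point in miniature: with strong positive correlation, treating the components as independent roughly halves the reported variance.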
Inverse problems and uncertainty quantification
Litvinenko, Alexander
2013-12-18
In a Bayesian setting, inverse problems and uncertainty quantification (UQ), the propagation of uncertainty through a computational (forward) model, are strongly connected. In the form of conditional expectation the Bayesian update becomes computationally attractive. This is especially the case as, together with a functional or spectral approach for the forward UQ, there is no need for time-consuming and slowly convergent Monte Carlo sampling. The developed sampling-free non-linear Bayesian update is derived from the variational problem associated with conditional expectation. This formulation in general calls for further discretisation to make the computation possible, and we choose a polynomial approximation. After giving details on the actual computation in the framework of functional or spectral approximations, we demonstrate the workings of the algorithm on a number of examples of increasing complexity. At last, we compare the linear and quadratic Bayesian update on the small but taxing example of the chaotic Lorenz 84 model, where we experiment with the influence of different observation or measurement operators on the update.
MODIS land cover uncertainty in regional climate simulations
Li, Xue; Messina, Joseph P.; Moore, Nathan J.; Fan, Peilei; Shortridge, Ashton M.
2017-12-01
MODIS land cover datasets are used extensively across the climate modeling community, but inherent uncertainties and their propagating impacts are rarely discussed. This paper modeled uncertainties embedded within the annual MODIS Land Cover Type (MCD12Q1) products and propagated these uncertainties through the Regional Atmospheric Modeling System (RAMS). First, land cover uncertainties were modeled using pixel-based trajectory analyses from a time series of MCD12Q1 for Urumqi, China. Second, alternative land cover maps were produced based on these categorical uncertainties and passed into RAMS. Finally, simulations from RAMS were analyzed temporally and spatially to reveal impacts. Our study found that MCD12Q1 struggles to discriminate between grasslands and croplands or grasslands and barren land in this study area. Such categorical uncertainties have significant impacts on regional climate model outputs. All climate variables examined demonstrated impacts across the various regions, with latent heat flux affected the most, with a magnitude of 4.32 W/m2 in the domain average. Impacted areas were spatially connected to locations of greater land cover uncertainty. Both biophysical characteristics and soil moisture settings with regard to land cover types contribute to the variations among simulations. These results indicate that formal land cover uncertainty analysis should be included in MCD12Q1-fed climate modeling as a routine procedure.
Spatial Uncertainty Model for Visual Features Using a Kinect™ Sensor
Directory of Open Access Journals (Sweden)
Jae-Han Park
2012-06-01
This study proposes a mathematical uncertainty model for the spatial measurement of visual features using Kinect™ sensors. This model can provide qualitative and quantitative analysis for the utilization of Kinect™ sensors as 3D perception sensors. In order to achieve this objective, we derived the propagation relationship of the uncertainties between the disparity image space and the real Cartesian space with the mapping function between the two spaces. Using this propagation relationship, we obtained the mathematical model for the covariance matrix of the measurement error, which represents the uncertainty for the spatial position of visual features from Kinect™ sensors. In order to derive the quantitative model of spatial uncertainty for visual features, we estimated the covariance matrix in the disparity image space using collected visual feature data. Further, we computed the spatial uncertainty information by applying the covariance matrix in the disparity image space and the calibrated sensor parameters to the proposed mathematical model. This spatial uncertainty model was verified by comparing the uncertainty ellipsoids for spatial covariance matrices and the distribution of scattered matching visual features. We expect that this spatial uncertainty model and its analyses will be useful in various Kinect™ sensor applications.
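The propagation from disparity-image space to Cartesian space described here is the standard first-order rule cov_xyz = J cov_uvd J^T. A sketch with an illustrative pinhole/disparity mapping and made-up calibration values (not the paper's calibrated Kinect™ parameters):

```python
import numpy as np

# Illustrative disparity-to-Cartesian mapping (pinhole model with
# baseline b and focal length f; values are made up, not calibrated).
f, b = 580.0, 0.075  # pixels, metres

def to_cartesian(p):
    u, v, d = p
    z = f * b / d          # depth from disparity
    return np.array([u * z / f, v * z / f, z])

# Covariance of (u, v, d), as estimated in the disparity image space.
cov_uvd = np.diag([0.5**2, 0.5**2, 0.3**2])

# Numerical Jacobian of the mapping at a measured feature location.
p0 = np.array([40.0, -25.0, 30.0])
eps = 1e-6
J = np.column_stack([
    (to_cartesian(p0 + eps * e) - to_cartesian(p0 - eps * e)) / (2 * eps)
    for e in np.eye(3)
])

# First-order propagation: covariance (uncertainty ellipsoid) in
# Cartesian space; variances grow rapidly as disparity shrinks.
cov_xyz = J @ cov_uvd @ J.T
print(cov_xyz.diagonal())
```

The uncertainty ellipsoids compared in the paper are the level sets of the Gaussian with this propagated covariance.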
Assessment of SFR Wire Wrap Simulation Uncertainties
Energy Technology Data Exchange (ETDEWEB)
Delchini, Marc-Olivier G. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Reactor and Nuclear Systems Division; Popov, Emilian L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Reactor and Nuclear Systems Division; Pointer, William David [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Reactor and Nuclear Systems Division; Swiler, Laura P. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2016-09-30
Predictive modeling and simulation of nuclear reactor performance and fuel are challenging due to the large number of coupled physical phenomena that must be addressed. Models that will be used for design or operational decisions must be analyzed for uncertainty to ascertain impacts to safety or performance. Rigorous, structured uncertainty analyses are performed by characterizing the model's input uncertainties and then propagating the uncertainties through the model to estimate output uncertainty. This project is part of the ongoing effort to assess modeling uncertainty in Nek5000 simulations of flow configurations relevant to the advanced reactor applications of the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program. Three geometries are under investigation in these preliminary assessments: a 3-D pipe, a 3-D 7-pin bundle, and a single pin from the Thermal-Hydraulic Out-of-Reactor Safety (THORS) facility. Initial efforts have focused on gaining an understanding of Nek5000 modeling options and integrating Nek5000 with Dakota. These tasks are being accomplished by demonstrating the use of Dakota to assess parametric uncertainties in a simple pipe flow problem. This problem is used to optimize performance of the uncertainty quantification strategy and to estimate computational requirements for assessments of complex geometries. A sensitivity analysis of three turbulence models was conducted for turbulent flow in a single wire-wrapped pin (THORS) geometry. Section 2 briefly describes the software tools used in this study and provides appropriate references. Section 3 presents the coupling interface between Dakota and a computational fluid dynamics (CFD) code (Nek5000 or STARCCM+), with details on the workflow, the scripts used for setting up the run, and the scripts used for post-processing the output files. In Section 4, the meshing methods used to generate the THORS and 7-pin bundle meshes are explained. Sections 5, 6 and 7 present numerical results.
Spike propagation in driven chain networks with dominant global inhibition
International Nuclear Information System (INIS)
Chang Wonil; Jin, Dezhe Z.
2009-01-01
Spike propagation in chain networks is usually studied in the synfire regime, in which successive groups of neurons are synaptically activated sequentially through the unidirectional excitatory connections. Here we study the dynamics of chain networks with dominant global feedback inhibition that prevents the synfire activity. Neural activity is driven by suprathreshold external inputs. We analytically and numerically demonstrate that spike propagation along the chain is a unique dynamical attractor in a wide parameter regime. The strong inhibition permits a robust winner-take-all propagation in the case of multiple chains competing via the inhibition.
ANALYTICAL EVALUATION OF CRACK PROPAGATION FOR BULB HYDRAULIC TURBINES SHAFTS
Directory of Open Access Journals (Sweden)
Mircea O. POPOVICU
2011-05-01
Hydroelectric power plants use the renewable energy of rivers. Hydraulic bulb turbines running at low heads are excellent alternative energy sources. The shafts of these units are massive cylindrical pieces manufactured from low-alloyed steels. The paper analyses the fatigue cracks occurring at some turbines in the neighbourhood of the connection zone between the shaft and the turbine runner flange. To obtain the stress state in this zone, the ANSYS and AFGROW computing programs were used. The number of running hours until the piercing of the shaft wall is established as a useful result.
Propagation of Waves
David, P
2013-01-01
Propagation of Waves focuses on wave propagation around the earth, which is influenced by its curvature, by surface irregularities, and by passage through atmospheric layers that may be refracting, absorbing, or ionized. This book begins by outlining the behavior of waves in the various media and at their interfaces, which simplifies the basic phenomena, such as absorption, refraction, reflection, and interference. Applications to the case of the terrestrial sphere are also discussed as a natural generalization. Following the deliberation on the diffraction of the "ground" wave around the earth...
A new type of Time-Of-Propagation (TOP) Cherenkov detector for particle identification
International Nuclear Information System (INIS)
Yan, J.; Shao, M.; Li, C.
2011-01-01
A new type of particle identification (PID) detector based on measurements of 1-D time-of-propagation (TOP) and 1-D space information is described. The Geant4 toolkit is used to simulate the propagation of Cherenkov photons in a thin quartz-bar radiator. Contributions to the timing uncertainty are discussed. The π/K separability (S_o) is defined and its dependence on the particle momentum, incident angle and propagation length is studied. (author)
International Nuclear Information System (INIS)
Kaul, Dean C.; Egbert, Stephen D.; Woolson, William A.
2005-01-01
In order to avoid the pitfalls that so discredited DS86 and its uncertainty estimates, and to provide DS02 uncertainties that are both defensible and credible, this report not only presents the ensemble uncertainties assembled from uncertainties in individual computational elements and radiation dose components but also describes how these relate to comparisons between observed and computed quantities at critical intervals in the computational process. These comparisons include those between observed and calculated radiation free-field components, where observations include thermal- and fast-neutron activation and gamma-ray thermoluminescence, which are relevant to the estimated systematic uncertainty for DS02. The comparisons also include those between calculated and observed survivor shielding, where the observations consist of biodosimetric measurements for individual survivors, which are relevant to the estimated random uncertainty for DS02. (J.P.N.)
Model uncertainty and probability
International Nuclear Information System (INIS)
Parry, G.W.
1994-01-01
This paper discusses the issue of model uncertainty. The use of probability as a measure of an analyst's uncertainty as well as a means of describing random processes has caused some confusion, even though the two uses are representing different types of uncertainty with respect to modeling a system. The importance of maintaining the distinction between the two types is illustrated with a simple example
Analytic continuation of dual Feynman amplitudes
International Nuclear Information System (INIS)
Bleher, P.M.
1981-01-01
A notion of dual Feynman amplitude is introduced, and a theorem on the existence of an analytic continuation of this amplitude from the convergence domain to the whole complex plane is proved. The case under consideration corresponds to massless power propagators, and the analytic continuation is constructed in the powers of the propagators. The poles of the analytic continuation and the singular set of external momenta are found explicitly. The proof of the theorem on the existence of the analytic continuation is based on the introduction of the α-representation for dual Feynman amplitudes. In the proof, the so-called "trees formula" and "trees-with-cycles formula" are established, which are dual by formulation to the trees and 2-trees formulae for the usual Feynman amplitudes. (Auth.)
Uncertainty in artificial intelligence
Kanal, LN
1986-01-01
How to deal with uncertainty is a subject of much controversy in Artificial Intelligence. This volume brings together a wide range of perspectives on uncertainty, many of the contributors being the principal proponents in the controversy. Some of the notable issues which emerge from these papers revolve around an interval-based calculus of uncertainty, the Dempster-Shafer Theory, and probability as the best numeric model for uncertainty. There remain strong dissenting opinions not only about probability but even about the utility of any numeric method in this context.
Uncertainties in hydrogen combustion
International Nuclear Information System (INIS)
Stamps, D.W.; Wong, C.C.; Nelson, L.S.
1988-01-01
Three important areas of hydrogen combustion with uncertainties are identified: high-temperature combustion, flame acceleration and deflagration-to-detonation transition, and aerosol resuspension during hydrogen combustion. The uncertainties associated with high-temperature combustion may affect at least three different accident scenarios: the in-cavity oxidation of combustible gases produced by core-concrete interactions, the direct containment heating hydrogen problem, and the possibility of local detonations. How these uncertainties may affect the sequence of various accident scenarios is discussed and recommendations are made to reduce these uncertainties. 40 references
International Nuclear Information System (INIS)
Choi, Jae Seong
1993-02-01
This book comprises nineteen chapters, covering: an introduction to analytical chemistry; experimental error and statistics; chemical equilibrium and solubility; gravimetric analysis with the mechanism of precipitation; range and calculation of results; general principles of volumetric analysis; sedimentation methods, their types and titration curves; acid-base equilibrium; acid-base titration curves; complexation and firing reactions; an introduction to electroanalytical chemistry; electrodes and potentiometry; electrolysis and conductometry; voltammetry and polarographic spectrophotometry; atomic spectrometry; solvent extraction; and chromatography, with experiments.
International Nuclear Information System (INIS)
Anon.
1985-01-01
The Division for Analytical Chemistry continued its efforts to develop an accurate method for the separation of trace amounts from mixtures which contain various other elements. Ion exchange chromatography is of special importance in this regard. New separation techniques were tried on certain trace amounts in South African standard rock materials and special ceramics. Methods were also tested for the separation of carrier-free radioisotopes from irradiated cyclotron discs.
Monte Carlo eigenfunction strategies and uncertainties
International Nuclear Information System (INIS)
Gast, R.C.; Candelore, N.R.
1974-01-01
Comparisons of convergence rates for several possible eigenfunction source strategies led to the selection of the "straight" analog of the analytic power method as the source strategy for Monte Carlo eigenfunction calculations. To ensure a fair-game strategy, the number of histories per iteration increases with increasing iteration number. The estimate of eigenfunction uncertainty is obtained from a modification of a proposal by D. B. MacMillan and involves only estimates of the usual purely statistical component of uncertainty and a serial correlation coefficient of lag one. 14 references. (U.S.)
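The strategy described, an analog power iteration with a growing number of histories per iteration, can be sketched on a toy fission-matrix problem (the 3x3 matrix and history schedule are invented for illustration, not the report's method or data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "fission matrix": expected next-generation source in region i per
# source neutron born in region j (a stand-in for a transport code).
M = np.array([[0.9, 0.3, 0.1],
              [0.2, 0.8, 0.3],
              [0.1, 0.2, 0.7]])

source = np.full(3, 1.0 / 3.0)   # initial source-shape guess
histories = 1000
k_estimates = []
for _ in range(30):
    # Sample birth regions, then tally the noisy next generation.
    births = rng.multinomial(histories, source)
    next_gen = M @ (births / histories)
    k_estimates.append(next_gen.sum())   # generation-size ratio ~ k
    source = next_gen / next_gen.sum()   # renormalized source shape
    histories += 500                     # grow histories per iteration

# The estimates settle around the dominant eigenvalue of M, which the
# analytic power method would converge to exactly.
k_true = np.abs(np.linalg.eigvals(M)).max()
print(np.mean(k_estimates[-10:]), k_true)
```

Growing the number of histories per iteration, as in the abstract, shrinks the per-generation statistical noise as the source shape converges.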
Uncertainties associated with inertial-fusion ignition
International Nuclear Information System (INIS)
McCall, G.H.
1981-01-01
An estimate is made of a worst case driving energy which is derived from analytic and computer calculations. It will be shown that the uncertainty can be reduced by a factor of 10 to 100 if certain physical effects are understood. That is not to say that the energy requirement can necessarily be reduced below that of the worst case, but it is possible to reduce the uncertainty associated with ignition energy. With laser costs in the $0.5 to 1 billion per MJ range, it can be seen that such an exercise is worthwhile
National Research Council Canada - National Science Library
Gray, William
1994-01-01
This paper discusses the question of tropical cyclone propagation or why the average tropical cyclone moves 1-2 m/s faster and usually 10-20 deg to the left of its surrounding (or 5-7 deg radius) deep layer (850-300 mb) steering current...
Indian Academy of Sciences (India)
Flood Wave Propagation: The Saint Venant Equations. P. P. Mujumdar. General Article, Resonance – Journal of Science Education, Volume 6, Issue 5, May 2001, pp. 66-73. Permanent link: http://www.ias.ac.in/article/fulltext/reso/006/05/0066-0073
Czech Academy of Sciences Publication Activity Database
Schejbal, V.; Bezoušek, P.; Čermák, D.; NĚMEC, Z.; Fišer, Ondřej; Hájek, M.
2006-01-01
Roč. 15, č. 1 (2006), s. 17-24. ISSN 1210-2512. R&D Projects: GA MPO(CZ) FT-TA2/030. Institutional research plan: CEZ:AV0Z30420517. Keywords: ultra wide band; UWB antennas; UWB propagation; multipath effects. Subject RIV: JB - Sensors, Measurement, Regulation
Atmospheric and laser propagation
Eijk, A.M.J. van; Stein, K.
2017-01-01
This paper reviews three phenomena that affect the propagation of electro-optical radiation through the atmosphere: absorption and scattering, refraction, and turbulence. The net effect on imaging or laser systems is a reduction of the effective range, or a degradation of the information...
Indian Academy of Sciences (India)
...available for forecasting the propagation of the flood wave. Introduction: Among all natural disasters, floods are the most frequently occurring phenomena, affecting a large section of the population all over the world every year. Throughout the last century, flooding has been one of the most devastating disasters both in terms...
Photovoltaic System Modeling. Uncertainty and Sensitivity Analyses
Energy Technology Data Exchange (ETDEWEB)
Hansen, Clifford W. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Martin, Curtis E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2015-08-01
We report an uncertainty and sensitivity analysis for modeling AC energy from photovoltaic systems. Output from a PV system is predicted by a sequence of models. We quantify uncertainty in the output of each model using empirical distributions of each model's residuals. We propagate uncertainty through the sequence of models by sampling these distributions to obtain an empirical distribution of a PV system's output. We consider models that: (1) translate measured global horizontal, direct and global diffuse irradiance to plane-of-array irradiance; (2) estimate effective irradiance; (3) predict cell temperature; (4) estimate DC voltage, current and power; (5) reduce DC power for losses due to inefficient maximum power point tracking or mismatch among modules; and (6) convert DC to AC power. Our analysis considers a notional PV system comprising an array of FirstSolar FS-387 modules and a 250 kW AC inverter; we use measured irradiance and weather at Albuquerque, NM. We found the uncertainty in PV system output to be relatively small, on the order of 1% for daily energy. We found that uncertainty in the models for POA irradiance and effective irradiance were the dominant contributors to uncertainty in predicted daily energy. Our analysis indicates that efforts to reduce the uncertainty in PV system output predictions may yield the greatest improvements by focusing on the POA and effective irradiance models.
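The residual-sampling propagation described here can be sketched for a two-model chain (the toy irradiance and power models and the residual spreads below are invented for illustration, not the report's models or data):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical two-model chain: irradiance model -> power model.
# These samples stand in for the empirical residual distributions
# that would be measured for each model against data.
poa = 800.0                                    # plane-of-array irradiance, W/m^2
irr_residuals = rng.normal(0.0, 20.0, 5000)    # model-1 residuals, W/m^2
pwr_residuals = rng.normal(0.0, 2.0, 5000)     # model-2 residuals, kW

def power_model(irradiance):
    # Toy linear PV model: 250 kW at 1000 W/m^2.
    return 0.25 * irradiance

# Propagate by resampling each model's residuals through the chain,
# yielding an empirical distribution of system output.
n = 20_000
irr = poa + rng.choice(irr_residuals, n)
power = power_model(irr) + rng.choice(pwr_residuals, n)

print(power.mean(), power.std())
```

With this linear toy chain the empirical spread matches the analytic root-sum-of-squares, sqrt((0.25*20)^2 + 2^2) ≈ 5.4 kW; the empirical approach, however, also works when the chained models are nonlinear.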
Estimating Coastal Digital Elevation Model (DEM) Uncertainty
Amante, C.; Mesick, S.
2017-12-01
Integrated bathymetric-topographic digital elevation models (DEMs) are representations of the Earth's solid surface and are fundamental to the modeling of coastal processes, including tsunami, storm surge, and sea-level rise inundation. Deviations in elevation values from the actual seabed or land surface constitute errors in DEMs, which originate from numerous sources, including: (i) the source elevation measurements (e.g., multibeam sonar, lidar), (ii) the interpolative gridding technique (e.g., spline, kriging) used to estimate elevations in areas unconstrained by source measurements, and (iii) the datum transformation used to convert bathymetric and topographic data to common vertical reference systems. The magnitude and spatial distribution of the errors from these sources are typically unknown, and the lack of knowledge regarding these errors represents the vertical uncertainty in the DEM. The National Oceanic and Atmospheric Administration (NOAA) National Centers for Environmental Information (NCEI) has developed DEMs for more than 200 coastal communities. This study presents a methodology developed at NOAA NCEI to derive accompanying uncertainty surfaces that estimate DEM errors at the individual cell-level. The development of high-resolution (1/9th arc-second), integrated bathymetric-topographic DEMs along the southwest coast of Florida serves as the case study for deriving uncertainty surfaces. The estimated uncertainty can then be propagated into the modeling of coastal processes that utilize DEMs. Incorporating the uncertainty produces more reliable modeling results, and in turn, better-informed coastal management decisions.
Solitary wave propagation in solar flux tubes
International Nuclear Information System (INIS)
Erdelyi, Robert; Fedun, Viktor
2006-01-01
The aim of the present work is to investigate the excitation, time-dependent dynamic evolution, and interaction of nonlinear propagating (i.e., solitary) waves on vertical cylindrical magnetic flux tubes in compressible solar atmospheric plasma. The axisymmetric flux tube has a field strength of 1000 G at its footpoint, which is typical for photospheric regions. Nonlinear waves that develop into solitary waves are excited by a footpoint driver. The propagation of the nonlinear signal is investigated by solving numerically a set of fully nonlinear 2.0D magnetohydrodynamic (MHD) equations in cylindrical coordinates. For the initial conditions, axisymmetric solutions of the linear dispersion relation for wave modes in a magnetic flux tube are applied. In the present case, we focus on the sausage mode only. The dispersion relation is solved numerically for a range of plasma parameters. The equilibrium state is perturbed by a Gaussian at the flux tube footpoint. Two solitary solutions are found by solving the full nonlinear MHD equations. First, the nonlinear wave propagating with the external sound speed is investigated. Next, the solitary wave propagating close to the tube speed, also found in the numerical solution, is studied. In contrast to previous analytical and numerical works, here no approximations were made to find the solitary solutions. A natural application of the present study may be spicule formation in the low chromosphere. Future possible improvements in modeling and the relevance of the photospheric-chromospheric transition region coupling by spicules are suggested.
Uncertainty evaluation on the determination of uranium ores by volumetry of ammonium vanadate
International Nuclear Information System (INIS)
Ma Likui; Wang Yao; Luo Yuanyuan; Li Jinbiao; Zhu Lejie
2012-01-01
Uncertainty evaluation on the criteria of 'ferrous sulfate deoxidization/ammonium vanadate oxidation titrimetry to measure uranium' (EJ 267.2-84) issued by Ministry of Nuclear Industry was analyzed. The uncertainty brought by the method itself was obtained through the identification of uncertainty sources, quantification of uncertainty components and determination of uranium content by titration with calibrated ammonium vanadate solution. With the analytical study to uncertainty sources of entire process and components, and the application of statistical treatment based on scientific data, the combined standard uncertainty and expanded uncertainty about different levels of uranium content was reported. (authors)
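The combination of quantified uncertainty components into a combined and then expanded uncertainty, as in the evaluation described above, follows the standard root-sum-of-squares rule for independent components. A minimal sketch with purely illustrative component names and values (not taken from EJ 267.2-84):

```python
import math

# Hypothetical uncertainty components for a titrimetric uranium determination,
# expressed as relative standard uncertainties (values are illustrative only).
components = {
    "titrant_concentration": 0.0015,
    "burette_volume":        0.0010,
    "sample_mass":           0.0005,
    "endpoint_detection":    0.0020,
}

# Combined standard uncertainty: root sum of squares of independent components.
u_c = math.sqrt(sum(u**2 for u in components.values()))

# Expanded uncertainty with coverage factor k = 2 (~95 % coverage).
k = 2
U = k * u_c

uranium_content = 45.2  # % U, illustrative measured value
print(f"u_c = {100*u_c:.2f} %, U = {100*U:.2f} % (k={k})")
print(f"result: {uranium_content:.1f} % U ± {uranium_content*U:.2f} % U")
```

The same pattern applies at each uranium content level; only the component values change, which is why the criteria report different expanded uncertainties for different levels.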
Uncertainty information in climate data records from Earth observation
Merchant, C. J.
2017-12-01
How to derive and present uncertainty in climate data records (CDRs) has been debated within the European Space Agency Climate Change Initiative, in search of common principles applicable across a range of essential climate variables. Various points of consensus have been reached, including the importance of improving provision of uncertainty information and the benefit of adopting international norms of metrology for language around the distinct concepts of uncertainty and error. Providing an estimate of standard uncertainty per datum (or the means to readily calculate it) emerged as baseline good practice, and should be highly relevant to users of CDRs when the uncertainty in data is variable (the usual case). Given this baseline, the role of quality flags is clarified as being complementary to and not repetitive of uncertainty information. Data with high uncertainty are not poor quality if a valid estimate of the uncertainty is available. For CDRs and their applications, the error correlation properties across spatio-temporal scales present important challenges that are not fully solved. Error effects that are negligible in the uncertainty of a single pixel may dominate uncertainty in the large-scale and long-term. A further principle is that uncertainty estimates should themselves be validated. The concepts of estimating and propagating uncertainty are generally acknowledged in geophysical sciences, but less widely practised in Earth observation and development of CDRs. Uncertainty in a CDR depends in part (and usually significantly) on the error covariance of the radiances and auxiliary data used in the retrieval. Typically, error covariance information is not available in the fundamental CDR (FCDR) (i.e., with the level-1 radiances), since provision of adequate level-1 uncertainty information is not yet standard practice. Those deriving CDRs thus cannot propagate the radiance uncertainty to their geophysical products. The FIDUCEO project (www.fiduceo.eu) is
Aspects of uncertainty analysis in accident consequence modeling
International Nuclear Information System (INIS)
Travis, C.C.; Hoffman, F.O.
1981-01-01
Mathematical models are frequently used to determine probable dose to man from an accidental release of radionuclides by a nuclear facility. With increased emphasis on the accuracy of these models, the incorporation of uncertainty analysis has become one of the most crucial and sensitive components in evaluating the significance of model predictions. In the present paper, we address three aspects of uncertainty in models used to assess the radiological impact to humans: uncertainties resulting from the natural variability in human biological parameters; the propagation of parameter variability by mathematical models; and comparison of model predictions to observational data
Some target assay uncertainties for passive neutron coincidence counting
International Nuclear Information System (INIS)
Ensslin, N.; Langner, D.G.; Menlove, H.O.; Miller, M.C.; Russo, P.A.
1990-01-01
This paper provides some target assay uncertainties for passive neutron coincidence counting of plutonium metal, oxide, mixed oxide, and scrap and waste. The target values are based in part on past user experience and in part on the estimated results from new coincidence counting techniques that are under development. The paper summarizes assay error sources and the new coincidence techniques, and recommends the technique that is likely to yield the lowest assay uncertainty for a given material type. These target assay uncertainties are intended to be useful for NDA instrument selection and assay variance propagation studies for both new and existing facilities. 14 refs., 3 tabs
Exploring Heterogeneous Multicore Architectures for Advanced Embedded Uncertainty Quantification.
Energy Technology Data Exchange (ETDEWEB)
Phipps, Eric T.; Edwards, Harold C.; Hu, Jonathan J.
2014-09-01
We explore rearrangements of classical uncertainty quantification methods with the aim of achieving higher aggregate performance for uncertainty quantification calculations on emerging multicore and manycore architectures. We show that a rearrangement of the stochastic Galerkin method leads to improved performance and scalability on several computational architectures, whereby uncertainty information is propagated at the lowest levels of the simulation code, improving memory access patterns, exposing new dimensions of fine-grained parallelism, and reducing communication. We also develop a general framework for implementing such rearrangements for a diverse set of uncertainty quantification algorithms as well as the computational simulation codes to which they are applied.
Working fluid selection for organic Rankine cycles - Impact of uncertainty of fluid properties
DEFF Research Database (Denmark)
Frutiger, Jerome; Andreasen, Jesper Graa; Liu, Wei
2016-01-01
of process models and constraints 2) selection of property models, i.e. the Peng-Robinson equation of state 3) screening of 1965 possible working fluid candidates including identification of optimal process parameters based on Monte Carlo sampling 4) propagating uncertainty of fluid parameters to the ORC net power output......This study presents a generic methodology to select working fluids for ORC (Organic Rankine Cycles) taking into account property uncertainties of the working fluids. A Monte Carlo procedure is described as a tool to propagate the influence of the input uncertainty of the fluid parameters on the ORC....... The net power outputs of all the feasible working fluids were ranked including their uncertainties. The method could propagate and quantify the input property uncertainty of the fluid property parameters to the ORC model, giving an additional dimension to the fluid selection process. In the given analysis...
Instability Versus Equilibrium Propagation of Laser Beam in Plasma
Lushnikov, Pavel M.; Rose, Harvey A.
2003-01-01
We obtain, for the first time, an analytic theory of the forward stimulated Brillouin scattering instability of a spatially and temporally incoherent laser beam, which controls the transition between statistical equilibrium and non-equilibrium (unstable) self-focusing regimes of beam propagation. The stability boundary may be used as a comprehensive guide for inertial confinement fusion designs. Well into the stable regime, an analytic expression for the angular diffusion coefficient is obtained...
Love wave propagation in piezoelectric layered structure with dissipation.
Du, Jianke; Xian, Kai; Wang, Ji; Yong, Yook-Kong
2009-02-01
We investigate analytically the effect of the viscous dissipation of piezoelectric material on the dispersive and attenuated characteristics of Love wave propagation in a layered structure, which involves a thin piezoelectric layer bonded perfectly to an unbounded elastic substrate. The effects of the viscous coefficient on the phase velocity of Love waves and attenuation are presented and discussed in detail. The analytical method and the results can be useful for the design of the resonators and sensors.
Robust stability of fractional order polynomials with complicated uncertainty structure.
Matušů, Radek; Şenol, Bilal; Pekař, Libor
2017-01-01
The main aim of this article is to present a graphical approach to robust stability analysis for families of fractional order (quasi-)polynomials with complicated uncertainty structure. More specifically, the work emphasizes the multilinear, polynomial and general structures of uncertainty and, moreover, the retarded quasi-polynomials with parametric uncertainty are studied. Since the families with these complex uncertainty structures suffer from the lack of analytical tools, their robust stability is investigated by numerical calculation and depiction of the value sets and subsequent application of the zero exclusion condition.
Van Nooyen, R.R.P.; Hrachowitz, M.; Kolechkina, A.G.
2014-01-01
Even without uncertainty about the model structure or parameters, the output of a hydrological model run still contains several sources of uncertainty. These are: measurement errors affecting the input, the transition from continuous time and space to discrete time and space, which causes loss of
Schrodinger's Uncertainty Principle?
Indian Academy of Sciences (India)
correlation between x and p. The virtue of Schrodinger's version (5) is that it accounts for this correlation. In spe- cial cases like the free particle and the harmonic oscillator, the 'Schrodinger uncertainty product' even remains constant with time, whereas Heisenberg's does not. The glory of giving the uncertainty principle to ...
Positive Surge Propagation in Sloping Channels
Directory of Open Access Journals (Sweden)
Daniele Pietro Viero
2017-07-01
A simplified model for the upstream propagation of a positive surge in a sloping, rectangular channel is presented. The model is based on the assumptions of a flat water surface and negligible energy dissipation downstream of the surge, which is generated by the instantaneous closure of a downstream gate. Under these hypotheses, a set of equations that depends only on time accurately describes the surge wave propagation. When the Froude number of the incoming flow is relatively small, an approximate analytical solution is also proposed. The predictive ability of the model is validated by comparing the model results with the results of an experimental investigation and with the results of a numerical model that solves the full shallow water equations.
Physical Uncertainty Bounds (PUB)
Energy Technology Data Exchange (ETDEWEB)
Vaughan, Diane Elizabeth [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Preston, Dean L. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2015-03-19
This paper introduces and motivates the need for a new methodology for determining upper bounds on the uncertainties in simulations of engineered systems due to limited fidelity in the composite continuum-level physics models needed to simulate the systems. We show that traditional uncertainty quantification methods provide, at best, a lower bound on this uncertainty. We propose to obtain bounds on the simulation uncertainties by first determining bounds on the physical quantities or processes relevant to system performance. By bounding these physics processes, as opposed to carrying out statistical analyses of the parameter sets of specific physics models or simply switching out the available physics models, one can obtain upper bounds on the uncertainties in simulated quantities of interest.
Measurement uncertainty and probability
Willink, Robin
2013-01-01
A measurement result is incomplete without a statement of its 'uncertainty' or 'margin of error'. But what does this statement actually tell us? By examining the practical meaning of probability, this book discusses what is meant by a '95 percent interval of measurement uncertainty', and how such an interval can be calculated. The book argues that the concept of an unknown 'target value' is essential if probability is to be used as a tool for evaluating measurement uncertainty. It uses statistical concepts, such as a conditional confidence interval, to present 'extended' classical methods for evaluating measurement uncertainty. The use of the Monte Carlo principle for the simulation of experiments is described. Useful for researchers and graduate students, the book also discusses other philosophies relating to the evaluation of measurement uncertainty. It employs clear notation and language to avoid the confusion that exists in this controversial field of science.
Assessment of errors and uncertainty patterns in GIA modeling
DEFF Research Database (Denmark)
Barletta, Valentina Roberta; Spada, G.
, such as time-evolving shorelines and paleo-coastlines. In this study we quantify these uncertainties and their propagation in GIA response using a Monte Carlo approach to obtain spatio-temporal patterns of GIA errors. A direct application is the error estimates in ice mass balance in Antarctica and Greenland...
DEFF Research Database (Denmark)
Nasrollahi, Kamal; Distante, Cosimo; Hua, Gang
2017-01-01
This book collects the papers presented at two workshops during the 23rd International Conference on Pattern Recognition (ICPR): the Third Workshop on Video Analytics for Audience Measurement (VAAM) and the Second International Workshop on Face and Facial Expression Recognition (FFER) from Real...... World Videos. The workshops were run on December 4, 2016, in Cancun in Mexico. The two workshops together received 13 papers. Each paper was then reviewed by at least two expert reviewers in the field. In all, 11 papers were accepted to be presented at the workshops. The topics covered in the papers...
International Nuclear Information System (INIS)
Hofer, E.; Hoffman, F.O.
1987-02-01
The uncertainty analysis of model predictions has to discriminate between two fundamentally different types of uncertainty. The presence of stochastic variability (Type 1 uncertainty) necessitates the use of a probabilistic model instead of the much simpler deterministic one. Lack of knowledge (Type 2 uncertainty), however, applies to deterministic as well as to probabilistic model predictions and often dominates over uncertainties of Type 1. The term ''probability'' is interpreted differently in the probabilistic analysis of either type of uncertainty. After these distinctions have been explained, the discussion centers on the propagation of parameter uncertainties through the model, the derivation of quantitative uncertainty statements for model predictions, and the presentation and interpretation of the results of a Type 2 uncertainty analysis. Various alternative approaches are compared for a very simple deterministic model
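The separation of Type 1 (stochastic variability) and Type 2 (lack of knowledge) uncertainty is commonly realized as a nested Monte Carlo loop: an outer loop samples the poorly known parameters, an inner loop samples inter-individual variability. A toy sketch — all distributions, parameter names, and values are hypothetical, not from this report:

```python
import numpy as np

rng = np.random.default_rng(1)

# Outer loop: Type 2 uncertainty (lack of knowledge about a parameter).
# Inner loop: Type 1 uncertainty (stochastic variability between individuals).
n_outer, n_inner = 200, 1000

mean_doses = []
for _ in range(n_outer):
    # Type 2: an uncertain transfer coefficient, known only roughly
    transfer = rng.lognormal(mean=0.0, sigma=0.3)
    # Type 1: inter-individual variability in intake
    intake = rng.lognormal(mean=0.0, sigma=0.5, size=n_inner)
    mean_doses.append((transfer * intake).mean())

mean_doses = np.array(mean_doses)
# The spread of the outer-loop means reflects Type 2 uncertainty alone;
# Type 1 variability is averaged out inside each inner loop.
lo, hi = np.percentile(mean_doses, [5, 95])
print(f"90% Type 2 interval for the mean dose: [{lo:.2f}, {hi:.2f}]")
```

Reporting the two loops separately is what allows a statement like "the mean dose is uncertain by a factor X" to be distinguished from "individual doses vary by a factor Y".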
The best estimate plus uncertainty approach in licensing of pressurized water reactors using trace
Energy Technology Data Exchange (ETDEWEB)
Sporn, Michael [Westinghouse Electric Germany GmbH, Mannheim (Germany); Technische Univ. Dresden (Germany); Tietsch, Wolfgang; Freis, Daniel [Westinghouse Electric Germany GmbH, Mannheim (Germany); Hurtado, Antonio M. [Technische Univ. Dresden (Germany)
2013-07-01
In this paper, a concept for a new BEPU (best estimate plus uncertainty) method was presented, which may be used in future licensing processes of nuclear power plants. In addition to the established uncertainty methods for the variation of the input parameters, the new BEPU approach could be used to treat the correlation uncertainties in TRACE. Generally we want to use the uncertainty methods based upon propagation of input uncertainties to handle the correlation uncertainties. Furthermore, to perform the uncertainty analysis, statistical methods are used, similar to the treatment of input uncertainties. The outlook for this work is to combine input and correlation uncertainties, but to consider, in a systematic way, only those parameters which have a significant effect on the calculation result. Finally, the calculated results are based on the 95 % percentile (probability) and 95 % confidence level. (orig.)
Farrance, Ian; Frenkel, Robert
2014-01-01
The Guide to the Expression of Uncertainty in Measurement (usually referred to as the GUM) provides the basic framework for evaluating uncertainty in measurement. The GUM however does not always provide clearly identifiable procedures suitable for medical laboratory applications, particularly when internal quality control (IQC) is used to derive most of the uncertainty estimates. The GUM modelling approach requires advanced mathematical skills for many of its procedures, but Monte Carlo simulation (MCS) can be used as an alternative for many medical laboratory applications. In particular, calculations for determining how uncertainties in the input quantities to a functional relationship propagate through to the output can be accomplished using a readily available spreadsheet such as Microsoft Excel. The MCS procedure uses algorithmically generated pseudo-random numbers which are then forced to follow a prescribed probability distribution. When IQC data provide the uncertainty estimates the normal (Gaussian) distribution is generally considered appropriate, but MCS is by no means restricted to this particular case. With input variations simulated by random numbers, the functional relationship then provides the corresponding variations in the output in a manner which also provides its probability distribution. The MCS procedure thus provides output uncertainty estimates without the need for the differential equations associated with GUM modelling. The aim of this article is to demonstrate the ease with which Microsoft Excel (or a similar spreadsheet) can be used to provide an uncertainty estimate for measurands derived through a functional relationship. In addition, we also consider the relatively common situation where an empirically derived formula includes one or more 'constants', each of which has an empirically derived numerical value. Such empirically derived 'constants' must also have associated uncertainties which propagate through the functional relationship.
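The spreadsheet-based MCS procedure described here has a direct equivalent in any environment with a pseudo-random number generator. A minimal sketch in Python, with an illustrative functional relationship and an empirically derived 'constant' that carries its own uncertainty (all values are invented for the example):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Illustrative functional relationship y = k * x1 / x2 with an empirically
# derived 'constant' k that itself carries uncertainty.
x1 = rng.normal(50.0, 1.5, n)   # input quantity 1 (e.g. IQC-derived SD)
x2 = rng.normal(10.0, 0.2, n)   # input quantity 2
k  = rng.normal(2.00, 0.05, n)  # empirical constant with its own uncertainty

# Each simulated 'measurement' applies the functional relationship to one
# random draw of every input, yielding the output distribution directly.
y = k * x1 / x2

print(f"y = {y.mean():.2f} ± {y.std():.2f} (Monte Carlo estimate)")
```

No partial derivatives are needed: the standard deviation of `y` plays the role of the combined standard uncertainty that GUM modelling would obtain from the sensitivity coefficients.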
Gauge engineering and propagators
Directory of Open Access Journals (Sweden)
Maas Axel
2017-01-01
The dependence of the propagators on the choice of these complete gauge-fixings will then be investigated using lattice gauge theory for Yang-Mills theory. It is found that the implications for the infrared, and to some extent mid-momentum behavior, can be substantial. In going beyond the Yang-Mills case it turns out that the influence of matter can generally not be neglected. This will be briefly discussed for various types of matter.
2013-09-30
ice terminates or regenerates along the propagation direction. (2) New capabilities for elastic and poro-elastic sediments • Range-dependent...standard Euler-Bernoulli beam theory can be applied in the x-z plane. The top right panel illustrates a side view of the subunit. A shearing force F...bottom panel is a table in which the second column has representative values for these three quantities, for the most common types of clay minerals in
Practical estimation of the uncertainty of analytical measurement standards
Peters, R.J.B.; Elbers, I.J.W.; Klijnstra, M.D.; Stolker, A.A.M.
2011-01-01
Nowadays, a lot of time and resources are used to determine the quality of goods and services. As a consequence, the quality of measurements themselves, e.g., the metrological traceability of the measured quantity values is essential to allow a proper evaluation of the results with regard to
Decision analytic tools for resolving uncertainty in the energy debate
International Nuclear Information System (INIS)
Renn, O.
1986-01-01
Within the context of a Social Compatibility Study on Energy Supply Systems, a complex decision-making model was used to incorporate scientific expertise and public participation into the process of policy formulation and evaluation. The study was directed by the program group ''Technology and Society'' of the Nuclear Research Centre Juelich. It consisted of three parts: First, with the aid of value tree analysis, the whole spectrum of concerns and dimensions relevant to the energy issue in Germany was collected and structured in a combined value tree representing the values and criteria of nine important interest groups in the Federal Republic of Germany. Second, the revealed criteria were translated into indicators. Four different energy scenarios were evaluated with respect to each indicator, making use of physical measurement, literature review and expert surveys. Third, the weights for each indicator were elicited by interviewing randomly chosen citizens. Those citizens were informed about the scenarios and their impacts prior to the weighting process in a four-day seminar. As a result, most citizens favoured more moderate energy scenarios, assigning high priority to energy conservation. Nuclear energy was perceived as a necessary energy source in the long run, but should be restricted to meet only the demand that cannot be covered by other energy means. (orig.)
Dilaton cosmology and the modified uncertainty principle
International Nuclear Information System (INIS)
Majumder, Barun
2011-01-01
Very recently Ali et al. (2009) proposed a new generalized uncertainty principle (with a linear term in the Planck length) which is consistent with doubly special relativity and string theory. The classical and quantum effects of this generalized uncertainty principle (termed the modified uncertainty principle, or MUP) are investigated on the phase space of a dilatonic cosmological model with an exponential dilaton potential in a flat Friedmann-Robertson-Walker background. Interestingly, as a consequence of MUP, we found that it is possible to get a late-time acceleration for this model. For the quantum mechanical description in both the commutative and MUP frameworks, we found the analytical solutions of the Wheeler-DeWitt equation for the early universe and compared our results. We have used an approximation method in the case of MUP.
Propagating distributions up directed acyclic graphs.
Baum, E B; Smith, W D
1999-01-01
In a previous article, we considered game trees as graphical models. Adopting an evaluation function that returned a probability distribution over values likely to be taken at a given position, we described how to build a model of uncertainty and use it for utility-directed growth of the search tree and for deciding on a move after search was completed. In some games, such as chess and Othello, the same position can occur more than once, collapsing the game tree to a directed acyclic graph (DAG). This induces correlations among the distributions at sibling nodes. This article discusses some issues that arise in extending our algorithms to a DAG. We give a simply described algorithm for correctly propagating distributions up a game DAG, taking account of dependencies induced by the DAG structure. This algorithm is exponential time in the worst case. We prove that it is #P-complete to propagate distributions up a game DAG correctly. We suggest how our exact propagation algorithm can yield a fast but inexact heuristic.
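For a game tree (no shared positions), propagating value distributions through a max node is straightforward under the independence assumption the article starts from: the CDF of the maximum of independent children is the product of the child CDFs. A minimal sketch of that tree case (the DAG case, where independence fails, is what the article shows to be #P-complete):

```python
import numpy as np

values = np.array([0, 1, 2])  # possible position values (illustrative)

def max_node(child_dists):
    """Distribution of the max of independent child value distributions."""
    # CDF of the max is the product of the child CDFs
    cdfs = [np.cumsum(d) for d in child_dists]
    max_cdf = np.prod(cdfs, axis=0)
    # Convert the CDF back to a probability mass function
    return np.diff(np.concatenate(([0.0], max_cdf)))

# Two leaf children with known distributions over `values`
child_a = np.array([0.5, 0.3, 0.2])
child_b = np.array([0.2, 0.5, 0.3])

parent = max_node([child_a, child_b])
print(parent)  # distribution over values at the max node
```

In a DAG the same leaf can feed two siblings, so their distributions are correlated and the product-of-CDFs step is no longer valid; this is exactly the dependency the article's exact algorithm must account for.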
Error propagation analysis for a sensor system
Energy Technology Data Exchange (ETDEWEB)
Yeater, M.L.; Hockenbury, R.W.; Hawkins, J.; Wilkinson, J.
1976-01-01
As part of a program to develop reliability methods for operational use with reactor sensors and protective systems, error propagation analyses are being made for each model. An example is a sensor system computer simulation model, in which the sensor system signature is convoluted with a reactor signature to show the effect of each in revealing or obscuring information contained in the other. The error propagation analysis models the system and signature uncertainties and sensitivities, whereas the simulation models the signatures and by extensive repetitions reveals the effect of errors in various reactor input or sensor response data. In the approach for the example presented, the errors accumulated by the signature (set of ''noise'' frequencies) are successively calculated as it is propagated stepwise through a system comprised of sensor and signal processing components. Additional modeling steps include a Fourier transform calculation to produce the usual power spectral density representation of the product signature, and some form of pattern recognition algorithm.
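The stepwise accumulation of errors through a chain of sensor and signal-processing components can be illustrated with a simple variance-propagation loop; the component names and uncertainty values below are illustrative, not taken from the study:

```python
import math

# Stepwise propagation of a signal's relative uncertainty through a chain of
# components (values are illustrative).
chain = [
    ("sensor",        0.010),  # 1.0 % relative uncertainty contribution
    ("preamplifier",  0.005),
    ("filter",        0.008),
    ("ADC",           0.002),
]

rel_u = 0.0
for name, u in chain:
    # Independent multiplicative stages: relative variances add in quadrature
    rel_u = math.sqrt(rel_u**2 + u**2)
    print(f"after {name:14s}: {100*rel_u:.2f} %")
```

Printing the running total after each stage mirrors the stepwise propagation described in the abstract and makes it easy to see which component dominates the accumulated error.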
Communicating spatial uncertainty to non-experts using R
Luzzi, Damiano; Sawicka, Kasia; Heuvelink, Gerard; de Bruin, Sytze
2016-04-01
Effective visualisation methods are important for the efficient use of uncertainty information for various groups of users. Uncertainty propagation analysis is often used with spatial environmental models to quantify the uncertainty within the information. A challenge arises when trying to effectively communicate the uncertainty information to non-experts (not statisticians) in a wide range of cases. Due to the growing popularity and applicability of the open source programming language R, we undertook a project to develop an R package that facilitates uncertainty propagation analysis in spatial environmental modelling. The package has implemented Monte Carlo algorithms for uncertainty propagation, the output of which is represented by an ensemble of model outputs (i.e. a sample from a probability distribution). Numerous visualisation methods exist that aim to present such spatial uncertainty information both statically, dynamically and interactively. To provide the most universal visualisation tools for non-experts, we conducted a survey on a group of 20 university students and assessed the effectiveness of selected static and interactive methods for visualising uncertainty in spatial variables such as DEM and land cover. The static methods included adjacent maps and glyphs for continuous variables. Both allow for displaying maps with information about the ensemble mean, variance/standard deviation and prediction intervals. Adjacent maps were also used for categorical data, displaying maps of the most probable class, as well as its associated probability. The interactive methods included a graphical user interface, which in addition to displaying the previously mentioned variables also allowed for comparison of joint uncertainties at multiple locations. The survey indicated that users could understand the basics of the uncertainty information displayed in the static maps, with the interactive interface allowing for more in-depth information. Subsequently, the R
A depth-dependent formula for shallow water propagation
Sertlek, H.O.; Ainslie, M.A.
2014-01-01
In shallow water propagation, the sound field depends on the proximity of the receiver to the sea surface, the seabed, the source depth, and the complementary source depth. While normal mode theory can predict this depth dependence, it can be computationally intensive. In this work, an analytical
Error Propagation in Equations for Geochemical Modeling of ...
Indian Academy of Sciences (India)
This paper presents error propagation equations for modeling of radiogenic isotopes during mixing of two components or end-members. These equations can be used to estimate errors on an isotopic ratio in the mixture of two components, as a function of the analytical errors or the total errors of geological field sampling ...
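For linear two-component mixing, C_m = f·C1 + (1−f)·C2, first-order error propagation gives the error on the mixture from the analytical errors of the end-members and the mixing proportion. A sketch with illustrative numbers (not taken from the paper):

```python
import math

# Two-component mixing: C_m = f*C1 + (1-f)*C2, with analytical errors on the
# end-member concentrations and the mixing proportion (values illustrative).
C1, sC1 = 100.0, 2.0   # end-member 1 concentration and its 1-sigma error
C2, sC2 = 20.0, 1.0    # end-member 2
f,  sf  = 0.40, 0.02   # mixing proportion of end-member 1

C_m = f * C1 + (1 - f) * C2

# First-order (Gaussian) error propagation: s^2 = sum (dC_m/dx_i * s_i)^2
s_Cm = math.sqrt((f * sC1)**2 + ((1 - f) * sC2)**2 + ((C1 - C2) * sf)**2)
print(f"C_m = {C_m:.1f} ± {s_Cm:.2f}")
```

Note that the (C1 − C2)·sf term grows with the contrast between the end-members, so the error on the mixing proportion often dominates when the components are very different.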
Seismic wave propagation in fractured media: A discontinuous Galerkin approach
De Basabe, Jonás D.
2011-01-01
We formulate and implement a discontinuous Galekin method for elastic wave propagation that allows for discontinuities in the displacement field to simulate fractures or faults using the linear- slip model. We show numerical results using a 2D model with one linear- slip discontinuity and different frequencies. The results show a good agreement with analytic solutions. © 2011 Society of Exploration Geophysicists.
Glueball masses from the refined Gribov propagator
Energy Technology Data Exchange (ETDEWEB)
Capri, M.A.L. [Universidade Federal Rural do Rio de Janeiro (UFRRJ), Seropedica, RJ (Brazil); Dudal, D.; Vandersickel, N. [Ghent University (Belgium); Gomez, A.J.; Guimaraes, M.S.; Lemes, V.E.R.; Sorella, S.P.; Tedesco, D.G. [Universidade do Estado do Rio de Janeiro (UERJ), RJ (Brazil)
2011-07-01
Full text: The quantum description of non-abelian Yang-Mills theory at low energy scales is a long-standing unsolved problem. This is a confining theory, meaning that the physical degrees of freedom are not described by the fundamental fields appearing in its defining action functional. The physical states are thus created by composite field operators. In pure Yang-Mills theory the fundamental, non-physical, excitations are the gluons, and the physical states, called glueballs, are created by gauge-invariant operators. Recent results coming from lattice simulations of Yang-Mills theory point to a gluon propagator which violates positivity and tends to a constant non-zero value at zero momentum. The best fit for this behavior is provided by a propagator whose analytic form describes the propagation of complex-mass excitations, the so-called i-particles. These are supposed to be the low-energy, confined, non-physical gluons. This propagator coincides with the one derived from the Refined Gribov-Zwanziger (RGZ) theory, which takes into account nonperturbative physics related to gauge copies and dimension-two condensates. In this work we construct local, gauge-invariant, composite operators with the quantum numbers of the lightest glueball states J^PC = 0^++, 0^-+, 2^++. The correlation functions of these operators are evaluated by employing a lattice input for the mass scales of the low-energy RGZ gluon propagator. We obtain for the glueball masses, in the lowest-order approximation, the values m(0^++) ≈ 1.96 GeV, m(0^-+) ≈ 2.19 GeV, m(2^++) ≈ 2.04 GeV in the SU(3) case, all within 20% of the corresponding lattice values. We also recover the mass hierarchy m(0^++) < m(2^++) < m(0^-+). (author)
Kadis, Rouvim
2017-05-26
The rational strategy in the evaluation of analytical measurement uncertainty is to combine the "whole method" performance data, such as precision and recovery, with the uncertainty contributions from sources not adequately covered by those data. This paper highlights some common mistakes in evaluating the uncertainty when pursuing that strategy, as revealed in current chromatographic literature. The list of the uncertainty components usually taken into account is discussed first, and fallacies involving the LOD and recovery uncertainties are noted. Close attention is paid to the uncertainty arising from the linear calibration normally used. It is demonstrated that following a well-known formula for the standard deviation of an analytical result obtained from a straight-line calibration leads to double counting of the precision contribution in the uncertainty budget. Furthermore, the precision component itself is often estimated improperly, based on the number of replicates taken from the precision assessment experiment. As a result, the relative uncertainty from linear calibration is overestimated in the budget and may become the largest contribution to the combined uncertainty, which is clearly shown with an example calculation based on the literature data. Copyright © 2017 Elsevier B.V. All rights reserved.
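The straight-line-calibration formula at issue is presumably the standard expression for s(x0); a sketch with made-up calibration data shows where the replication term enters:

```python
import numpy as np

# Made-up calibration data: concentration of standards vs instrument response
x = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])
y = np.array([0.02, 0.41, 0.80, 1.22, 1.58, 2.01])

n = len(x)
b, a = np.polyfit(x, y, 1)                  # slope b, intercept a
resid = y - (a + b * x)
s_yx = np.sqrt(np.sum(resid**2) / (n - 2))  # residual standard deviation
sxx = np.sum((x - x.mean())**2)

m = 3          # number of replicate measurements of the sample
y0 = 1.10      # mean response of the sample
x0 = (y0 - a) / b

# Standard deviation of x0 from straight-line calibration. The 1/m term
# already carries the measurement-precision contribution, so adding a
# separate precision component on top of it double-counts that source.
s_x0 = (s_yx / b) * np.sqrt(1/m + 1/n + (y0 - y.mean())**2 / (b**2 * sxx))
print(f"x0 = {x0:.3f}, s_x0 = {s_x0:.4f}")
```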
Unidirectional wave propagation in media with complex principal axes
Horsley, S. A. R.
2018-02-01
In an anisotropic medium, the refractive index depends on the direction of propagation. Zero index in a fixed direction implies a stretching of the wave to uniformity along that axis, reducing the effective number of dimensions by 1. Here we investigate two-dimensional gyrotropic media where the refractive index is 0 in a complex-valued direction, finding that the wave becomes an analytic function of a single complex variable z. For simply connected media this analyticity implies unidirectional propagation of electromagnetic waves, similar to the edge states that occur in photonic "topological insulators." For a medium containing holes the propagation is no longer unidirectional. We illustrate the sensitivity of the field to the topology of the space using an exactly solvable example. To conclude we provide a generalization of transformation optics where a complex coordinate transformation can be used to relate ordinary anisotropic media to the recently highlighted gyrotropic ones supporting one-way edge states.
Propagation of S-waves in a non-homogeneous anisotropic ...
African Journals Online (AJOL)
homogeneous anisotropic incompressible and initially stressed medium. Analytical analysis reveals that the velocities of the shear waves depend upon the direction of propagation, the anisotropy, the non-homogeneity of the medium and the initial ...
Helrich, Carl S
2017-01-01
This advanced undergraduate textbook begins with the Lagrangian formulation of Analytical Mechanics and then passes directly to the Hamiltonian formulation and the canonical equations, with constraints incorporated through Lagrange multipliers. Hamilton's Principle and the canonical equations remain the basis of the remainder of the text. Topics considered for applications include small oscillations, motion in electric and magnetic fields, and rigid body dynamics. The Hamilton-Jacobi approach is developed with special attention to the canonical transformation in order to provide a smooth and logical transition into the study of complex and chaotic systems. Finally the text has a careful treatment of relativistic mechanics and the requirement of Lorentz invariance. The text is enriched with an outline of the history of mechanics, highlighting in particular the importance of the work of Euler, Lagrange, Hamilton and Jacobi. Numerous exercises with solutions support the exceptionally clear and concise treatment...
Uncertainties in the simulation of groundwater recharge at different scales
Directory of Open Access Journals (Sweden)
H. Bogena
2005-01-01
Full Text Available Digital spatial data always imply some kind of uncertainty. The source of this uncertainty can be found in their compilation as well as the conceptual design that causes a more or less exact abstraction of the real world, depending on the scale under consideration. Within the framework of hydrological modelling, in which numerous data sets from diverse sources of uneven quality are combined, the various uncertainties accumulate. In this study, the GROWA model is taken as an example to examine the effects of different types of uncertainties on the calculated groundwater recharge. Distributed input errors are determined for the parameters slope and aspect using a Monte Carlo approach. Land-cover classification uncertainties are analysed by using the conditional probabilities of a remote sensing classification procedure. The uncertainties of data ensembles at different scales and study areas are discussed. The present uncertainty analysis showed that the Gaussian error propagation method is a useful technique for analysing the influence of input data on the simulated groundwater recharge. The uncertainties involved in the land use classification procedure and the digital elevation model can be significant in some parts of the study area. However, for the specific model used in this study it was shown that the precipitation uncertainties have the greatest impact on the total groundwater recharge error.
Uncertainty Analysis of the Estimated Risk in Formal Safety Assessment
Directory of Open Access Journals (Sweden)
Molin Sun
2018-01-01
Full Text Available An uncertainty analysis is required to be carried out in formal safety assessment (FSA) by the International Maritime Organization. The purpose of this article is to introduce the uncertainty analysis technique into the FSA process. Based on the uncertainty identification of input parameters, probability and possibility distributions are used to model the aleatory and epistemic uncertainties, respectively. An approach which combines Monte Carlo random sampling of probability distribution functions with α-cuts for fuzzy calculus is proposed to propagate the uncertainties. One output of the FSA process is societal risk (SR), which can be evaluated in the two-dimensional frequency–fatality (FN) diagram. Thus, the confidence-level-based SR is presented to represent the uncertainty of SR in two dimensions. In addition, a method for time window selection is proposed to estimate the magnitude of uncertainties, which is an important aspect of modeling uncertainties. Finally, a case study is carried out on an FSA study on cruise ships. The results show that the uncertainty analysis of SR generates a two-dimensional area for a certain degree of confidence in the FN diagram rather than a single FN curve, which provides more information to authorities to produce effective risk control measures.
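The hybrid Monte Carlo/α-cut propagation can be sketched as follows; the risk model, distributions and fuzzy number below are hypothetical stand-ins for the FSA inputs:

```python
import numpy as np

rng = np.random.default_rng(0)

def societal_risk(freq, severity):
    # Hypothetical monotone risk model: expected fatalities per year
    return freq * severity

# Aleatory input: accident frequency, modelled by a probability distribution
freqs = rng.lognormal(mean=np.log(1e-3), sigma=0.5, size=2000)

# Epistemic input: severity as a triangular fuzzy number (min, mode, max)
s_min, s_mode, s_max = 5.0, 10.0, 20.0

def alpha_cut(alpha):
    # Interval of the triangular fuzzy number at membership level alpha
    return s_min + alpha * (s_mode - s_min), s_max - alpha * (s_max - s_mode)

# Hybrid propagation: Monte Carlo over the probabilistic input, interval
# arithmetic over each alpha-cut of the fuzzy input (risk is increasing
# in severity, so the interval endpoints map directly to output bounds)
for alpha in (0.0, 0.5, 1.0):
    s_lo, s_hi = alpha_cut(alpha)
    lower = np.percentile(societal_risk(freqs, s_lo), 95)
    upper = np.percentile(societal_risk(freqs, s_hi), 95)
    print(f"alpha={alpha:.1f}: 95th-percentile risk in [{lower:.4f}, {upper:.4f}]")
```

At α = 1 the fuzzy interval collapses to the mode and the output interval collapses with it; lower α levels widen the band, which is the two-dimensional representation of uncertainty the abstract refers to.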
Multi-scenario modelling of uncertainty in stochastic chemical systems
International Nuclear Information System (INIS)
Evans, R. David; Ricardez-Sandoval, Luis A.
2014-01-01
Uncertainty analysis has not been well studied at the molecular scale, despite extensive knowledge of uncertainty in macroscale systems. The ability to predict the effect of uncertainty allows for robust control of small scale systems such as nanoreactors, surface reactions, and gene toggle switches. However, it is difficult to model uncertainty in such chemical systems, as they are stochastic in nature and carry a large computational cost. To address this issue, a new model of uncertainty propagation in stochastic chemical systems, based on the Chemical Master Equation, is proposed in the present study. The uncertain solution is approximated by a composite state comprising the averaged effect of samples from the uncertain parameter distributions. This model is then used to study the effect of uncertainty on an isomerization system and a two-gene regulation network called a repressilator. The results of this model show that uncertainty in stochastic systems depends on both the uncertain distribution and the system under investigation. -- Highlights:
• A method to model uncertainty in stochastic systems was developed.
• The method is based on the Chemical Master Equation.
• Uncertainty in an isomerization reaction and a gene regulation network was modelled.
• Effects were significant and dependent on the uncertain input and reaction system.
• The model was computationally more efficient than kinetic Monte Carlo.
Assessing Groundwater Model Uncertainty for the Central Nevada Test Area
International Nuclear Information System (INIS)
Pohll, Greg; Pohlmann, Karl; Hassan, Ahmed; Chapman, Jenny; Mihevc, Todd
2002-01-01
The purpose of this study is to quantify the flow and transport model uncertainty for the Central Nevada Test Area (CNTA). Six parameters were identified as uncertain, including the specified head boundary conditions used in the flow model, the spatial distribution of the underlying welded tuff unit, effective porosity, sorption coefficients, matrix diffusion coefficient, and the geochemical release function which describes nuclear glass dissolution. The parameter uncertainty was described by assigning prior statistical distributions for each of these parameters. Standard Monte Carlo techniques were used to sample from the parameter distributions to determine the full prediction uncertainty. Additional analysis is performed to determine the most cost-beneficial characterization activities. The maximum radius of the tritium and strontium-90 contaminant boundary was used as the output metric for evaluation of prediction uncertainty. The results indicate that combining all of the uncertainty in the parameters listed above propagates to a prediction uncertainty in the maximum radius of the contaminant boundary of 234 to 308 m and 234 to 302 m, for tritium and strontium-90, respectively. Although the uncertainty in the input parameters is large, the prediction uncertainty in the contaminant boundary is relatively small. The relatively small prediction uncertainty is primarily due to the small transport velocities, such that large changes in the uncertain input parameters cause small changes in the contaminant boundary. This suggests that the model is suitable in terms of predictive capability for the contaminant boundary delineation.
Uncertainty analysis in the applications of nuclear probabilistic risk assessment
International Nuclear Information System (INIS)
Le Duy, T.D.
2011-01-01
The aim of this thesis is to propose an approach to model parameter and model uncertainties affecting the results of risk indicators used in the applications of nuclear Probabilistic Risk Assessment (PRA). After studying the limitations of the traditional probabilistic approach to representing uncertainty in PRA models, a new approach based on the Dempster-Shafer theory has been proposed. The uncertainty analysis process of the proposed approach consists of five main steps. The first step aims to model input parameter uncertainties by belief and plausibility functions according to the data available in the PRA model. The second step involves the propagation of parameter uncertainties through the risk model to obtain the uncertainties associated with the output risk indicators. The model uncertainty is then taken into account in the third step by considering possible alternative risk models. The fourth step is intended firstly to provide decision makers with the information needed for decision making under uncertainty (parametric and model) and secondly to identify the input parameters that make significant uncertainty contributions to the result. The final step allows the process to be continued in a loop by studying the updating of belief functions given new data. The proposed methodology was implemented on a real but simplified application of a PRA model. (author)
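The belief and plausibility functions used in the first step can be illustrated with a minimal Dempster-Shafer sketch (the frame and mass assignments are hypothetical, not the thesis's PRA data):

```python
# Hypothetical body of evidence over the frame {A, B, C}: basic mass
# assignments on focal sets; mass on the whole frame expresses ignorance
masses = {
    frozenset({"A"}): 0.4,
    frozenset({"A", "B"}): 0.3,
    frozenset({"A", "B", "C"}): 0.3,
}

def belief(event, masses):
    # Bel(E): total mass of focal sets entirely contained in E
    return sum(m for s, m in masses.items() if s <= event)

def plausibility(event, masses):
    # Pl(E): total mass of focal sets that intersect E
    return sum(m for s, m in masses.items() if s & event)

e = frozenset({"A"})
print(f"Bel = {belief(e, masses):.2f}, Pl = {plausibility(e, masses):.2f}")
```

The gap between Bel and Pl is precisely the epistemic uncertainty that a single probability measure cannot express, which is the motivation given in the abstract.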
DEFF Research Database (Denmark)
Christensen, Hanne Bjerre; Poulsen, Mette Erecius; Pedersen, Mikael
2003-01-01
The estimation of uncertainty of an analytical result has become important in analytical chemistry. It is especially difficult to determine uncertainties for multiresidue methods, e.g. for pesticides in fruit and vegetables, as the varieties of pesticide/commodity combinations are many...... In the present study, recommendations from the International Organisation for Standardisation's (ISO) Guide to the Expression of Uncertainty and the EURACHEM/CITAC guide Quantifying Uncertainty in Analytical Measurements were followed to estimate the expanded uncertainties for 153 pesticides in fruit......
Non-linear Calibration Leads to Improved Correspondence between Uncertainties
DEFF Research Database (Denmark)
Andersen, Jens Enevold Thaulov
2007-01-01
Although calibrations are routine procedures of instrumental analysis and quality assurance, the working curve is rarely applied in the determination of the uncertainty budget, most likely owing to the difficulties associated with the calculation of uncertainties. The present work provides...... limit theorem, an excellent correspondence was obtained between predicted uncertainties and measured uncertainties. In order to validate the method, experiments of flame atomic absorption spectrometry (FAAS) for the analysis of Co and Pt were performed, together with experiments of electrothermal atomic absorption...... that the uncertainty of the detector dominates the contributions to the uncertainty budget, and it was proposed that a full analysis of the instrument ought to be performed for every single analyte before measurement. Following this investigation, homoscedasticity or heteroscedasticity may be identified from the residuals...
Uncertainty in biodiversity science, policy and management: a conceptual overview
Directory of Open Access Journals (Sweden)
Yrjö Haila
2014-10-01
Full Text Available The protection of biodiversity is a complex societal, political and ultimately practical imperative of current global society. The imperative builds upon scientific knowledge of human dependence on the life-support systems of the Earth. This paper aims at introducing the main types of uncertainty inherent in biodiversity science, policy and management, as an introduction to a companion paper summarizing the practical experiences of scientists and scholars (Haila et al. 2014). Uncertainty is a cluster concept: the actual nature of uncertainty is inherently context-bound. We use semantic space as a conceptual device to identify key dimensions of uncertainty in the context of biodiversity protection; these relate to [i] data; [ii] proxies; [iii] concepts; [iv] policy and management; and [v] normative goals. Semantic space offers an analytic perspective for drawing critical distinctions between types of uncertainty, identifying fruitful resonances that help to cope with the uncertainties, and building up collaboration between different specialists to support mutual social learning.
A Web tool for calculating k0-NAA uncertainties
International Nuclear Information System (INIS)
Younes, N.; Robouch, P.
2003-01-01
The calculation of uncertainty budgets is becoming a standard step in reporting analytical results. This gives rise to the need for simple, easily accessed tools to calculate uncertainty budgets. An example of such a tool is the Excel spreadsheet approach of Robouch et al. An internet application which calculates uncertainty budgets for k0-NAA is presented. The Web application has built-in 'literature' values for standard isotopes and accepts as inputs fixed information, such as the thermal-to-epithermal neutron flux ratio, as well as experiment-specific data, such as the mass of the sample. The application calculates and displays intermediate uncertainties as well as the final combined uncertainty of the element concentration in the sample. The interface only requires access to a standard browser and is thus easily accessible to researchers and laboratories. This may facilitate and standardize the calculation of k0-NAA uncertainty budgets. (author)
Uncertainty information in climate data records from Earth observation
Directory of Open Access Journals (Sweden)
C. J. Merchant
2017-07-01
shape of the error distribution, and the propagation of the uncertainty to the geophysical variable in the CDR accounting for its error correlation properties. Uncertainty estimates can and should be validated as part of CDR validation when possible. These principles are quite general, but the approach to providing uncertainty information appropriate to different ECVs is varied, as confirmed by a brief review across different ECVs in the CCI. User requirements for uncertainty information can conflict with each other, and a variety of solutions and compromises are possible. The concept of an ensemble CDR as a simple means of communicating rigorous uncertainty information to users is discussed. Our review concludes by providing eight concrete recommendations for good practice in providing and communicating uncertainty in EO-based climate data records.
Uncertainty information in climate data records from Earth observation
Merchant, Christopher J.; Paul, Frank; Popp, Thomas; Ablain, Michael; Bontemps, Sophie; Defourny, Pierre; Hollmann, Rainer; Lavergne, Thomas; Laeng, Alexandra; de Leeuw, Gerrit; Mittaz, Jonathan; Poulsen, Caroline; Povey, Adam C.; Reuter, Max; Sathyendranath, Shubha; Sandven, Stein; Sofieva, Viktoria F.; Wagner, Wolfgang
2017-07-01
error distribution, and the propagation of the uncertainty to the geophysical variable in the CDR accounting for its error correlation properties. Uncertainty estimates can and should be validated as part of CDR validation when possible. These principles are quite general, but the approach to providing uncertainty information appropriate to different ECVs is varied, as confirmed by a brief review across different ECVs in the CCI. User requirements for uncertainty information can conflict with each other, and a variety of solutions and compromises are possible. The concept of an ensemble CDR as a simple means of communicating rigorous uncertainty information to users is discussed. Our review concludes by providing eight concrete recommendations for good practice in providing and communicating uncertainty in EO-based climate data records.
On the cosmological propagation of high energy particles in magnetic fields
International Nuclear Information System (INIS)
Alves Batista, Rafael
2015-04-01
In the present work the connection between high energy particles and cosmic magnetic fields is explored. In particular, the focus lies on the propagation of ultra-high energy cosmic rays (UHECRs) and very-high energy gamma rays (VHEGRs) over cosmological distances, under the influence of cosmic magnetic fields. The first part of this work concerns the propagation of UHECRs in the magnetized cosmic web, which was studied both analytically and numerically. A parametrization for the suppression of the UHECR flux at energies around 10^18 eV due to diffusion in extragalactic magnetic fields was found, making it possible to set an upper limit on the energy at which this magnetic horizon effect sets in, which is
Effective propagation in a perturbed periodic structure
Maurel, Agnès; Pagneux, Vincent
2008-08-01
A recent paper [D. Torrent, A. Hakansson, F. Cervera, and J. Sánchez-Dehesa, Phys. Rev. Lett. 96, 204302 (2006)] inspected the effective parameters of a cluster containing an ensemble of scatterers with a periodic or a weakly disordered arrangement. A small amount of disorder is shown to have a small influence on the characteristics of acoustic wave propagation with respect to the periodic case. In this Brief Report, we further inspect the effect of a deviation of the scatterer distribution from the periodic one. The quasicrystalline approximation is shown to be an efficient tool to quantify this effect. An analytical formula for the effective wave number is obtained in a one-dimensional acoustic medium and is compared with the Berryman result in the low-frequency limit. Direct numerical calculations show good agreement with the analytical predictions.
Effective propagation in a perturbed periodic structure
International Nuclear Information System (INIS)
Maurel, Agnes; Pagneux, Vincent
2008-01-01
A recent paper [D. Torrent, A. Hakansson, F. Cervera, and J. Sánchez-Dehesa, Phys. Rev. Lett. 96, 204302 (2006)] inspected the effective parameters of a cluster containing an ensemble of scatterers with a periodic or a weakly disordered arrangement. A small amount of disorder is shown to have a small influence on the characteristics of acoustic wave propagation with respect to the periodic case. In this Brief Report, we further inspect the effect of a deviation of the scatterer distribution from the periodic one. The quasicrystalline approximation is shown to be an efficient tool to quantify this effect. An analytical formula for the effective wave number is obtained in a one-dimensional acoustic medium and is compared with the Berryman result in the low-frequency limit. Direct numerical calculations show good agreement with the analytical predictions.
Propagation of various dark hollow beams through an apertured paraxial ABCD optical system
International Nuclear Information System (INIS)
Cai Yangjian; Ge Di
2006-01-01
Propagation of a dark hollow beam (DHB) of circular, elliptical or rectangular symmetry through an apertured paraxial ABCD optical system is investigated. Approximate analytical formulas for various DHBs propagating through an apertured paraxial optical system are derived by expanding the hard-aperture function into a finite sum of complex Gaussian functions in terms of a tensor method. Some numerical results are given. Our formulas provide a convenient way for studying the propagation of various DHBs through an apertured paraxial optical system
International Nuclear Information System (INIS)
Limperopoulos, G.J.
1995-01-01
This report presents an oil project valuation under uncertainty by means of two well-known financial techniques: the Capital Asset Pricing Model (CAPM) and the Black-Scholes option pricing formula. CAPM gives a linear positive relationship between expected rate of return and risk, but does not take into consideration the aspect of flexibility, which is crucial for an irreversible investment such as an oil project. Introduction of investment decision flexibility by using real options can increase the oil project value substantially. Some simple tests for the importance of stock market uncertainty for oil investments are performed. Uncertainty in stock returns is correlated with aggregate product market uncertainty according to Pindyck (1991). The results of the tests are not satisfactory due to the short data series, but introducing two additional explanatory variables, the interest rate and Gross Domestic Product, improves the situation. 36 refs., 18 figs., 6 tabs
Evaluating prediction uncertainty
International Nuclear Information System (INIS)
McKay, M.D.
1995-03-01
The probability distribution of a model prediction is presented as a proper basis for evaluating the uncertainty in a model prediction that arises from uncertainty in input values. Determination of important model inputs and subsets of inputs is made through comparison of the prediction distribution with conditional prediction probability distributions. Replicated Latin hypercube sampling and variance ratios are used in estimation of the distributions and in construction of importance indicators. The assumption of a linear relation between model output and inputs is not necessary for the indicators to be effective. A sequential methodology which includes an independent validation step is applied in two analysis applications to select subsets of input variables which are the dominant causes of uncertainty in the model predictions. Comparison with results from methods which assume linearity shows how those methods may fail. Finally, suggestions for treating structural uncertainty for submodels are presented
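A Latin hypercube sample combined with a crude variance-ratio importance indicator can be sketched as below; the model and the binning scheme are hypothetical simplifications of the replicated-sampling methodology the abstract describes:

```python
import numpy as np

rng = np.random.default_rng(1)

def lhs(n, dims):
    """Basic Latin hypercube sample on [0, 1)^dims: each column takes one
    value from each of n equal strata, paired in independent random order."""
    cols = []
    for _ in range(dims):
        strata = (np.arange(n) + rng.random(n)) / n  # one point per stratum
        cols.append(rng.permutation(strata))
    return np.column_stack(cols)

def model(x):
    # Hypothetical model in which the first input dominates the output
    return 5.0 * x[:, 0] + 2.0 * x[:, 1] + 0.5 * x[:, 2]

n = 2000
x = lhs(n, 3)
y = model(x)

def variance_ratio(xi, y, bins=20):
    # Variance of binned conditional means over total variance: a crude
    # estimate of this input's first-order contribution to output variance
    idx = np.minimum((xi * bins).astype(int), bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(bins)])
    return cond_means.var() / y.var()

ratios = [variance_ratio(x[:, j], y) for j in range(3)]
for j, r in enumerate(ratios):
    print(f"input {j}: variance ratio {r:.2f}")
```

Note that this first-order indicator needs no linearity assumption about the model, which is the point the abstract emphasizes.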
Introduction to uncertainty quantification
Sullivan, T J
2015-01-01
Uncertainty quantification is a topic of increasing practical importance at the intersection of applied mathematics, statistics, computation, and numerous application areas in science and engineering. This text provides a framework in which the main objectives of the field of uncertainty quantification are defined, and an overview of the range of mathematical methods by which they can be achieved. Complete with exercises throughout, the book will equip readers with both theoretical understanding and practical experience of the key mathematical and algorithmic tools underlying the treatment of uncertainty in modern applied mathematics. Students and readers alike are encouraged to apply the mathematical methods discussed in this book to their own favourite problems to understand their strengths and weaknesses, also making the text suitable for self-study. This text is designed as an introduction to uncertainty quantification for senior undergraduate and graduate students with a mathematical or statistical back...
Wave propagation in elastic solids
Achenbach, Jan
1984-01-01
The propagation of mechanical disturbances in solids is of interest in many branches of the physical sciences and engineering. This book aims to present an account of the theory of wave propagation in elastic solids. The material is arranged to present an exposition of the basic concepts of mechanical wave propagation within a one-dimensional setting and a discussion of formal aspects of elastodynamic theory in three dimensions, followed by chapters expounding on typical wave propagation phenomena, such as radiation, reflection, refraction, propagation in waveguides, and diffraction. The treat
Stochastic model in microwave propagation
International Nuclear Information System (INIS)
Ranfagni, A.; Mugnai, D.
2011-01-01
Further experimental results of delay time in microwave propagation are reported in the presence of a lossy medium (wood). The measurements show that the presence of a lossy medium makes the propagation slightly superluminal. The results are interpreted on the basis of a stochastic (or path integral) model, showing how this model is able to describe each kind of physical system in which multi-path trajectories are present. -- Highlights: ► We present new experimental results on electromagnetic “anomalous” propagation. ► We apply a path integral theoretical model to wave propagation. ► Stochastic processes and multi-path trajectories in propagation are considered.
Propagation of nonlinear waves in bi-inductance nonlinear transmission lines
Kengne, Emmanuel; Lakhssassi, Ahmed
2014-10-01
We consider a one-dimensional modified complex Ginzburg-Landau equation, which governs the dynamics of matter waves propagating in a discrete bi-inductance nonlinear transmission line containing a finite number of cells. Employing an extended Jacobi elliptic functions expansion method, we present new exact analytical solutions which describe the propagation of periodic and solitary waves in the considered network.
Uncertainty calculations made easier
International Nuclear Information System (INIS)
Hogenbirk, A.
1994-07-01
The results are presented of a neutron cross section sensitivity/uncertainty analysis performed in a complicated 2D model of the NET shielding blanket design inside the ITER torus design, surrounded by the cryostat/biological shield as planned for ITER. The calculations were performed with a code system developed at ECN Petten, with which sensitivity/uncertainty calculations become relatively simple. In order to check the deterministic neutron transport calculations (performed with DORT), calculations were also performed with the Monte Carlo code MCNP. Care was taken to model the 2.0 cm wide gaps between two blanket segments, as the neutron flux behind the vacuum vessel is largely determined by neutrons streaming through these gaps. The resulting neutron flux spectra are in excellent agreement up to the end of the cryostat. It is noted that at this position the attenuation of the neutron flux is about 11 orders of magnitude. The uncertainty in the energy integrated flux at the beginning of the vacuum vessel and at the beginning of the cryostat was determined in the calculations. The uncertainty appears to be strongly dependent on the exact geometry: if the gaps are filled with stainless steel, the neutron spectrum changes strongly, which results in an uncertainty of 70% in the energy integrated flux at the beginning of the cryostat in the no-gap-geometry, compared to an uncertainty of only 5% in the gap-geometry. Therefore, it is essential to take into account the exact geometry in sensitivity/uncertainty calculations. Furthermore, this study shows that an improvement of the covariance data is urgently needed in order to obtain reliable estimates of the uncertainties in response parameters in neutron transport calculations. (orig./GL)
Uncertainty: lotteries and risk
Ávalos, Eloy
2011-01-01
In this paper we develop the theory of uncertainty in a context where the risks assumed by the individual are measurable and manageable. We primarily use the definition of a lottery to formulate the axioms of the individual's preferences, and their representation through the von Neumann-Morgenstern utility function. We study the expected utility theorem and its properties, the paradoxes of choice under uncertainty and finally the measures of risk aversion with monetary lotteries.
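A von Neumann-Morgenstern expected utility calculation, together with the certainty equivalent and risk premium that quantify risk aversion, can be sketched as follows (lottery and utility function are hypothetical):

```python
import math

def expected_utility(lottery, u):
    """Lottery as a list of (probability, outcome) pairs; u is the
    von Neumann-Morgenstern utility function."""
    return sum(p * u(x) for p, x in lottery)

def certainty_equivalent(lottery, u, lo=0.0, hi=1e6, tol=1e-9):
    # Wealth level whose utility equals the lottery's expected utility
    # (bisection; assumes u is increasing on [lo, hi])
    target = expected_utility(lottery, u)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if u(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical lottery: 50/50 chance of 100 or 900 (expected value 500)
lottery = [(0.5, 100.0), (0.5, 900.0)]

u = math.sqrt  # concave utility: the agent is risk averse
ce = certainty_equivalent(lottery, u)
risk_premium = 500.0 - ce
print(f"CE = {ce:.2f}, risk premium = {risk_premium:.2f}")
```

With the concave square-root utility the certainty equivalent (400) falls below the expected value (500); the gap is the risk premium the agent would pay to avoid the gamble.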
Sources of Judgmental Uncertainty
1977-09-01
sometimes at the end. To avoid primacy or recency effects, which were not part of this first study, for half of the subjects the orders of information items... To summarize, 72 subjects were randomly assigned to two conditions of control and exposed to three conditions of orderliness. Order effects and primacy/recency... Keywords: judgmental uncertainty; primacy/recency; environmental uncertainty.
Decision making under uncertainty
International Nuclear Information System (INIS)
Wu, J.S.; Apostolakis, G.E.; Okrent, D.
1989-01-01
The theory of evidence and the theory of possibility are considered by some analysts as potential models for uncertainty. This paper discusses two issues: how formal probability theory has been relaxed to develop these uncertainty models, and the degree to which these models can be applied to risk assessment. The scope of the second issue is limited to an investigation of their compatibility for combining various pieces of evidence, which is an important problem in PRA.
Instability of Walker propagating domain wall in magnetic nanowires.
Hu, B; Wang, X R
2013-07-12
The stability of the well-known Walker propagating domain wall (DW) solution of the Landau-Lifshitz-Gilbert equation is analytically investigated. Surprisingly, a propagating DW is always dressed with spin waves, so that the Walker rigid-body propagating DW mode does not occur in reality. In the low-field region only stern spin waves are emitted, while both stern and bow waves are generated under high fields. In a high enough field, but below the Walker breakdown field, the Walker solution can be convectively or absolutely unstable if the transverse magnetic anisotropy is larger than a critical value, corresponding to a significant modification of the DW profile and DW propagation speed.
Charged particle beam propagation studies at the Naval Research Laboratory
International Nuclear Information System (INIS)
Meger, R.A.; Hubbard, R.F.; Antoniades, J.A.; Fernsler, R.F.; Lampe, M.; Murphy, D.P.; Myers, M.C.; Pechacek, R.E.; Peyser, T.A.; Santos, J.; Slinker, S.P.
1993-01-01
The Plasma Physics Division of the Naval Research Laboratory has been performing research into the propagation of high-current electron beams for 20 years. Recent efforts have focused on the stabilization of the resistive hose instability. Experiments have utilized the SuperIBEX e-beam generator (5-MeV, 100-kA, 40-ns pulse) and a 2-m diameter, 5-m long propagation chamber. Full-density air propagation experiments have successfully demonstrated techniques to control the hose instability, allowing stable 5-m transport of 1-2 cm radius, 10-20 kA total current beams. Analytic theory and particle simulations have been used to both guide and interpret the experimental results. This paper will provide background on the program and summarize the achievements of the NRL propagation program up to this point. Further details can be found in other papers presented at this conference.
Measurement uncertainty in Total Reflection X-ray Fluorescence
Energy Technology Data Exchange (ETDEWEB)
Floor, G.H., E-mail: geerke.floor@gfz-potsdam.de [GFZ German Research Centre for Geosciences Section 3.4. Earth Surface Geochemistry, Telegrafenberg, 14473 Postdam (Germany); Queralt, I. [Institute of Earth Sciences Jaume Almera ICTJA-CSIC, Solé Sabaris s/n, 08028 Barcelona (Spain); Hidalgo, M.; Marguí, E. [Department of Chemistry, University of Girona, Campus Montilivi s/n, 17071 Girona (Spain)
2015-09-01
Total Reflection X-ray Fluorescence (TXRF) spectrometry is a multi-elemental technique using micro-volumes of sample. This work assessed the components contributing to the combined uncertainty budget associated with TXRF measurements, using Cu and Fe concentrations in different spiked and natural water samples as an example. The results showed that an uncertainty estimation based solely on the count statistics of the analyte is not a realistic estimation of the overall uncertainty, since the depositional repeatability and the relative sensitivity between the analyte and the internal standard are important contributions to the uncertainty budget. The uncertainty on the instrumental repeatability and sensitivity factor could be estimated and, as such, potentially implemented relatively straightforwardly in the TXRF instrument software. However, the depositional repeatability varied significantly from sample to sample and between elemental ratios, and the controlling factors are not well understood. In the absence of a theoretical prediction of the depositional repeatability, the uncertainty budget can be based on repeat measurements using different reflectors. A simple approach to estimate the uncertainty was presented. The measurement procedure implemented and the uncertainty estimation processes developed were validated by the agreement with results obtained by inductively coupled plasma optical emission spectrometry (ICP-OES) and/or reference/calculated values. - Highlights: • The uncertainty of TXRF cannot be realistically described by the counting statistics. • The depositional repeatability is an important contribution to the uncertainty. • Total combined uncertainties for Fe and Cu in waste/mine water samples were 4-8%. • Obtained concentrations agree within uncertainty with reference values.
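The budget structure described above, counting statistics plus depositional repeatability plus relative sensitivity, combines in quadrature. The component values in this sketch are assumed for illustration, not taken from the paper.

```python
import math

# Relative standard uncertainties in percent (assumed illustrative values).
u_counting = 1.5      # counting statistics of the analyte line
u_deposition = 3.5    # depositional repeatability (reflector to reflector)
u_sensitivity = 1.0   # relative sensitivity analyte / internal standard

# Combined standard uncertainty by quadrature, then expanded with k = 2.
u_combined = math.sqrt(u_counting**2 + u_deposition**2 + u_sensitivity**2)
U_expanded = 2 * u_combined   # coverage factor k = 2, roughly a 95% level
```

With these numbers the deposition term dominates the budget, mirroring the paper's finding that counting statistics alone understate the overall uncertainty.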
Uncertainty and error in computational simulations
Energy Technology Data Exchange (ETDEWEB)
Oberkampf, W.L.; Diegert, K.V.; Alvin, K.F.; Rutherford, B.M.
1997-10-01
The present paper addresses the question: "What are the general classes of uncertainty and error sources in complex, computational simulations?" This is the first step of a two-step process to develop a general methodology for quantitatively estimating the global modeling and simulation uncertainty in computational modeling and simulation. The second step is to develop a general mathematical procedure for representing, combining and propagating all of the individual sources through the simulation. The authors develop a comprehensive view of the general phases of modeling and simulation. The phases proposed are: conceptual modeling of the physical system, mathematical modeling of the system, discretization of the mathematical model, computer programming of the discrete model, numerical solution of the model, and interpretation of the results. This new view is built upon combining phases recognized in the disciplines of operations research and numerical solution methods for partial differential equations. The characteristics and activities of each of these phases are discussed in general, with examples given for the fields of computational fluid dynamics and heat transfer. They argue that a clear distinction should be made between uncertainty and error that can arise in each of these phases. The present definitions for uncertainty and error are inadequate and, therefore, they propose comprehensive definitions for these terms. Specific classes of uncertainty and error sources are then defined that can occur in each phase of modeling and simulation. The numerical sources of error considered apply regardless of whether the discretization procedure is based on finite elements, finite volumes, or finite differences. To better explain the broad types of sources of uncertainty and error, and the utility of their categorization, they discuss a coupled-physics example simulation.
Temporal scaling in information propagation
Huang, Junming; Li, Chao; Wang, Wen-Qiang; Shen, Hua-Wei; Li, Guojie; Cheng, Xue-Qi
2014-06-01
For the study of information propagation, one fundamental problem is uncovering universal laws governing the dynamics of information propagation. From the microscopic perspective, this problem is formulated as estimating the propagation probability that a piece of information propagates from one individual to another. Such a propagation probability generally depends on two major classes of factors: the intrinsic attractiveness of the information and the interactions between individuals. Despite the fact that the temporal effect of attractiveness is widely studied, the temporal laws underlying individual interactions remain unclear, causing inaccurate prediction of information propagation on evolving social networks. In this report, we empirically study the dynamics of information propagation, using a dataset from a population-scale social media website. We discover a temporal scaling in information propagation: the probability that a message propagates between two individuals decays with the length of time elapsed since their latest interaction, obeying a power-law rule. Leveraging this scaling law, we further propose a temporal model to estimate future propagation probabilities between individuals, reducing the error rate of information propagation prediction from 6.7% to 2.6% and improving viral marketing by 9.7% in incremental customers.
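The reported scaling, a propagation probability decaying as a power law of the latency since the latest interaction, can be written as a one-line model. The parameter values here are assumed placeholders, not fitted values from the study.

```python
def propagation_probability(tau, p0=0.1, alpha=1.0, tau0=1.0):
    """Per-pair probability that a message propagates, decaying as a
    power law of the latency tau since the latest interaction.
    p0, alpha and tau0 are assumed illustrative parameters."""
    return p0 * (max(tau, tau0) / tau0) ** (-alpha)

recent = propagation_probability(1.0)     # pair that just interacted
stale = propagation_probability(100.0)    # pair silent for 100 time units
```

Such a kernel plugs directly into cascade prediction: recently interacting pairs are weighted far more heavily than stale ones, which is where the reported error-rate reduction comes from.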
Uncertainty in Seismic Capacity of Masonry Buildings
Directory of Open Access Journals (Sweden)
Nicola Augenti
2012-07-01
Full Text Available Seismic assessment of masonry structures is plagued by both inherent randomness and model uncertainty. The former is referred to as aleatory uncertainty, the latter as epistemic uncertainty because it depends on the knowledge level. Pioneering studies on reinforced concrete buildings have revealed a significant influence of modeling parameters on seismic vulnerability. However, confidence in mechanical properties of existing masonry buildings is much lower than in the case of reinforcing steel and concrete. This paper is aimed at assessing whether and how uncertainty propagates from material properties to seismic capacity of an entire masonry structure. A typical two-story unreinforced masonry building is analyzed. Based on previous statistical characterization of mechanical properties of existing masonry types, the following random variables have been considered in this study: unit weight, uniaxial compressive strength, shear strength at zero confining stress, Young’s modulus, shear modulus, and available ductility in shear. Probability density functions were implemented to generate a significant number of realizations and static pushover analysis of the case-study building was performed for each vector of realizations, load combination and lateral load pattern. Analysis results show a large dispersion in displacement capacity and lower dispersion in spectral acceleration capacity. This can directly affect decision-making because both design and retrofit solutions depend on seismic capacity predictions. Therefore, engineering judgment should always be used when assessing structural safety of existing masonry constructions against design earthquakes, based on a series of seismic analyses under uncertain parameters.
Relational uncertainty in service dyads
DEFF Research Database (Denmark)
Kreye, Melanie
2017-01-01
Purpose: Relational uncertainty determines how relationships develop because it enables the building of trust and commitment. However, relational uncertainty has not been explored in an inter-organisational setting. This paper investigates how organisations experience relational uncertainty in se...
Indian Academy of Sciences (India)
Newton's second law of motion) for the one-dimensional unsteady open channel flow (see Box ... The momentum equation is derived from Newton's second law, viz., rate of change of momentum = net force. As the derivation is .... consideration the effects of unsteadiness through the continuity equation. Analytical solution ...
Wave propagation scattering theory
Birman, M Sh
1993-01-01
The papers in this collection were written primarily by members of the St. Petersburg seminar in mathematical physics. The seminar, now run by O. A. Ladyzhenskaya, was initiated in 1947 by V. I. Smirnov, to whose memory this volume is dedicated. The papers in the collection are devoted mainly to wave propagation processes, scattering theory, integrability of nonlinear equations, and related problems of spectral theory of differential and integral operators. The book is of interest to mathematicians working in mathematical physics and differential equations, as well as to physicists studying va
Rockower, Edward B.
1985-01-01
A number of laser propagation codes have been assessed as to their suitability for modeling Army High Energy Laser (HEL) weapons used in an anti-sensor mode. We identify a number of areas in which systems analysis HEL codes are deficient. Most notably, available HEL scaling law codes model the laser aperture as circular, possibly with a fixed (e.g. 10%) obscuration. However, most HELs have rectangular apertures with up to 30% obscuration. We present a beam-quality/aperture shape scaling rela...
Application of high-order uncertainty for severe accident management
International Nuclear Information System (INIS)
Yu, Donghan; Ha, Jaejoo
1998-01-01
The use of a probability distribution to represent uncertainty about point-valued probabilities has been a controversial subject. Probability theorists have argued that it is inherently meaningless to be uncertain about a probability, since this appears to violate the subjectivists' assumption that an individual can develop unique and precise probability judgments. However, many others have found the concept of uncertainty about a probability to be both intuitively appealing and potentially useful. In particular, high-order uncertainty, i.e., the uncertainty about the probability, can be relevant to decision-making when expert judgment is needed under very uncertain data and imprecise knowledge, and where the phenomena and events are frequently complicated and ill-defined. This paper presents two approaches for evaluating the uncertainties inherent in accident management strategies: 'a fuzzy probability' and 'an interval-valued subjective probability'. First, the analysis considers accident management as a decision problem (i.e., 'applying a strategy' vs. 'do nothing') and uses an influence diagram. Then, the analysis applies the two approaches above to evaluate imprecise node probabilities in the influence diagram. For the propagation of subjective probabilities, the analysis uses Monte Carlo simulation. In the case of fuzzy probabilities, fuzzy logic is applied to propagate them. We believe that these approaches can help us understand the uncertainties associated with severe accident management strategies, since they offer not only information similar to the classical approach using point-estimate values but also additional information regarding the impact of imprecise input data.
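For the interval-valued subjective probability route, propagation through a simple two-node fragment of an influence diagram can be sketched with interval arithmetic. The events, bounds, and the independence assumption below are hypothetical, not taken from the paper.

```python
def interval_mul(a, b):
    """Product of two interval-valued probabilities (low, high),
    assuming the underlying events are independent."""
    products = [x * y for x in a for y in b]
    return (min(products), max(products))

p_strategy_effective = (0.6, 0.9)    # expert's imprecise judgment (assumed)
p_equipment_available = (0.7, 0.95)  # second uncertain node (assumed)
p_success = interval_mul(p_strategy_effective, p_equipment_available)
```

The width of `p_success` makes the compounded imprecision explicit, which a point-valued analysis would hide.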
Propagation error simulations concerning the CLIC active prealignment
Touzé, T; Missiaen, D
2009-01-01
The CLIC1 components will have to be prealigned within a thirty times more demanding tolerance than for the existing CERN machines. It is a technical challenge and a key issue for CLIC feasibility. Simulations have been undertaken concerning the error propagation due to the measurement uncertainties of the prealignment systems. The uncertainties of measurement, taken as hypotheses for the simulations, are based on the data obtained at several dedicated facilities. This paper introduces the simulations and the latest results obtained, as well as the facilities.
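The kind of propagation-error simulation described, independent measurement uncertainties accumulating along a chain of prealignment links, can be sketched as a Monte Carlo random walk. The link count and the per-link sigma are assumed values, not CLIC figures.

```python
import random
import statistics

def final_offset_std(n_links=100, sigma=0.01, trials=2000, seed=1):
    """Std of the accumulated transverse offset after n_links chained
    measurements, each carrying an independent Gaussian error of std
    sigma (mm, assumed)."""
    rng = random.Random(seed)
    finals = []
    for _ in range(trials):
        offset = 0.0
        for _ in range(n_links):
            offset += rng.gauss(0.0, sigma)
        finals.append(offset)
    return statistics.pstdev(finals)

spread = final_offset_std()   # ~ sigma * sqrt(n_links) = 0.1 mm
```

The sqrt(n) growth of the accumulated offset is why measurement uncertainty, not just component tolerance, drives the feasibility of such tight prealignment budgets.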
Propagation speed of gamma radiation in brass
International Nuclear Information System (INIS)
Cavalcante, Jose T.P.D.; Silva, Paulo R.J.; Saitovich, Henrique
2009-01-01
The propagation speed (PS) of visible light - represented by a short frequency range in the large frame of electromagnetic radiation (ER) frequencies - in air was measured during the last century using a great variety of methods, with high-precision results being achieved. Presently, a well accepted value with very small uncertainty is c = 299,792.458 km/s (c referring to the Latin word celeritas: 'swiftness'). When propagating in denser material media (MM), the value is always lower than the air value, with the density of the propagating MM playing an important role. Until now, such studies of propagation speeds, refractive indexes and dispersion were mainly related to visible light, or to ER at wavelengths close to it, and to transparent MM. A first incursion into this subject dealing with γ-rays was performed using an electronic coincidence counting system, when the value of its PS in air was measured as c_γ(air) = 298,300.15 km/s; a method that continued with later electronic improvements, always in air. To perform such measurements, the availability of a γ-radiation source in which two γ-rays are emitted simultaneously in opposite directions - as already used, and applied in the present case - turns out to be essential to the feasibility of the experiment, as no reflection techniques could be used. Such a suitable source was the positron emitter ²²Na, placed in a thin-wall metal container in which the positrons are stopped and annihilated when reacting with the medium's electrons, in this way originating - as is very well established from momentum/energy conservation laws - two gamma rays of energy 511 keV each, both emitted simultaneously in opposite directions. In all the previous experiments photomultiplier detectors coupled to NaI(Tl) crystal scintillators were used, which have a good energy resolution but a deficient time resolution for such purposes. Presently, as an innovative improvement, BaF₂ detectors were used
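The coincidence method rests on a simple relation: with simultaneous back-to-back gammas, the arrival-time difference between two detectors equals the path-length difference over the propagation speed, so the relative uncertainty of the speed follows by quadrature. All numbers below are assumed for illustration.

```python
import math

c = 299_792_458.0   # m/s, reference propagation speed
dL = 1.5            # path-length difference between the detectors, m (assumed)
dt = dL / c         # expected arrival-time difference, about 5 ns

u_dL = 0.002        # 2 mm geometry uncertainty (assumed)
u_dt = 50e-12       # 50 ps timing resolution (assumed, fast-scintillator class)

# Relative uncertainty of the derived speed v = dL / dt, by quadrature.
rel_u_speed = math.sqrt((u_dL / dL) ** 2 + (u_dt / dt) ** 2)
```

Even with millimetre-level geometry, the 50 ps timing term dominates here (about 1%), which is why fast timing scintillators such as BaF₂ matter more for this measurement than the good energy resolution of NaI(Tl).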
Key uncertainties in climate change policy: Results from ICAM-2
Energy Technology Data Exchange (ETDEWEB)
Dowlatabadi, H.; Kandlikar, M.
1995-12-31
A critical aspect of climate change decision-making is the uncertainty in current understanding of the socioeconomic, climatic and biogeochemical processes involved. Decision-making processes are much better informed if these uncertainties are characterized and their implications understood. Quantitative analysis of these uncertainties serves to: inform decision makers about the likely outcome of policy initiatives; and help set priorities for research so that outcome ambiguities faced by the decision-makers are reduced. A family of integrated assessment models of climate change has been developed at Carnegie Mellon. These models are distinguished from other integrated assessment efforts in that they were designed from the outset to characterize and propagate parameter, model, value, and decision-rule uncertainties. The most recent of these models is ICAM 2.0. This model includes demographics, economic activity, emissions, atmospheric chemistry, climate change, sea level rise and other impact modules, and the numerous associated feedbacks. The model has over 700 objects, of which over 1/3 are uncertain. These have been grouped into seven different classes of uncertain items. The impact of uncertainties in each of these items can be considered individually or in combination with the others. In this paper we demonstrate the relative contribution of various sources of uncertainty to different outcomes in the model. The analysis shows that climatic uncertainties are most important, followed by uncertainties in damage calculations, economic uncertainties and direct aerosol forcing uncertainties. Extreme uncertainties in indirect aerosol forcing and behavioral response to climate change (adaptation) were characterized by bounding analyses; the results suggest that these extreme uncertainties can dominate the choice of policy outcomes.
Strategies for Application of Isotopic Uncertainties in Burnup Credit
Energy Technology Data Exchange (ETDEWEB)
Gauld, I.C.
2002-12-23
Uncertainties in the predicted isotopic concentrations in spent nuclear fuel represent one of the largest sources of overall uncertainty in criticality calculations that use burnup credit. The methods used to propagate the uncertainties in the calculated nuclide concentrations to the uncertainty in the predicted neutron multiplication factor (k_eff) of the system can have a significant effect on the uncertainty in the safety margin in criticality calculations and ultimately affect the potential capacity of spent fuel transport and storage casks employing burnup credit. Methods that can provide a more accurate and realistic estimate of the uncertainty may enable increased spent fuel cask capacity and fewer casks needing to be transported, thereby reducing the regulatory burden on licensees while maintaining safety for transporting spent fuel. This report surveys several different best-estimate strategies for considering the effects of nuclide uncertainties in burnup-credit analyses. The potential benefits of these strategies are illustrated for a prototypical burnup-credit cask design. The subcritical margin estimated using best-estimate methods is discussed in comparison to the margin estimated using conventional bounding methods of uncertainty propagation. To quantify the comparison, each of the strategies for estimating uncertainty has been performed using a common database of spent fuel isotopic assay measurements for pressurized-light-water reactor fuels and predicted nuclide concentrations obtained using the current version of the SCALE code system. The experimental database applied in this study has been significantly expanded to include new high-enrichment and high-burnup spent fuel assay data recently published for a wide range of important burnup-credit actinides and fission products. Expanded rare earth fission-product measurements performed at the Khlopin Radium Institute in Russia contain the only known publicly available measurement for 103
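First-order ("sandwich") propagation of nuclide-concentration uncertainties to k_eff takes the form sigma_k^2 = S C S^T, with S the sensitivities of k_eff to each concentration and C their covariance matrix. The sensitivities and covariances below are invented for illustration only, not values from the report.

```python
# Sensitivities dk_eff / d(relative concentration, %) for three nuclides
# (assumed illustrative values).
S = [0.0015, -0.0030, -0.0010]

# Relative covariance matrix of the nuclide concentrations, (%)^2, with one
# correlated pair (assumed illustrative values).
C = [[4.00, 0.00, 0.00],
     [0.00, 9.00, 1.50],
     [0.00, 1.50, 6.25]]

# Sandwich rule: sigma_k^2 = sum_ij S_i C_ij S_j.
var_k = sum(S[i] * C[i][j] * S[j] for i in range(3) for j in range(3))
sigma_k = var_k ** 0.5   # about 0.0103 in k_eff
```

A bounding approach instead stacks worst-case biases; the best-estimate sandwich form usually yields a smaller and more realistic margin, which is the trade-off the report surveys.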
DEFF Research Database (Denmark)
Nielsen, Marie Katrine Klose; Johansen, Sys Stybe; Linnet, Kristian
2014-01-01
the dominant component in the total variation (29-70%). The present study demonstrated the importance of including the pre-analytical variation in the assessment of the total uncertainty budget and in the setting of the 95%-uncertainty interval (±2CVT). Excluding the pre-analytical sampling variation could...
Uncertainty in reactive transport geochemical modelling
International Nuclear Information System (INIS)
Oedegaard-Jensen, A.; Ekberg, C.
2005-01-01
Full text of publication follows: Geochemical modelling is one way of predicting the transport of, e.g., radionuclides in a rock formation. In a rock formation there will be fractures in which water and dissolved species can be transported. The composition of the water and the rock can either increase or decrease the mobility of the transported entities. When doing simulations on the mobility or transport of different species, one has to know the exact water composition, the exact flow rates in the fracture and in the surrounding rock, the porosity, and which minerals the rock is composed of. The problem with simulations on rocks is that the rock itself is not uniform, i.e., larger fractures in some areas and smaller ones in others, which can give different water flows. The rock composition can differ between areas. In addition to this variance in the rock, there are also problems with measuring the physical parameters used in a simulation. All measurements will perturb the rock, and this perturbation will result in more or less correct values of the parameters of interest. The analytical methods used are also encumbered with uncertainties, which in this case add to the uncertainty from the perturbation of the analysed parameters. When doing simulations, the effect of the uncertainties must be taken into account. As computers get faster and faster, the complexity of simulated systems increases, which also increases the uncertainty in the results from the simulations. In this paper we show how the uncertainty in the different parameters affects the solubility and mobility of different species. Small uncertainties in the input parameters can result in large uncertainties in the end. (authors)
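The closing point, small input uncertainties producing large output uncertainties, is easy to demonstrate with a toy nonlinear model. The exponential solubility law and the 0.1 pH-unit uncertainty below are assumed for illustration, not taken from the paper.

```python
import random
import statistics

def solubility(ph):
    """Toy solubility law, mol/L, varying exponentially with pH (assumed)."""
    return 10.0 ** (8.0 - 2.0 * ph)

# Sample the uncertain input and push it through the nonlinear model.
rng = random.Random(0)
samples = [solubility(rng.gauss(7.0, 0.1)) for _ in range(5000)]
rel_spread = statistics.pstdev(samples) / statistics.mean(samples)
# A ~1.4% relative uncertainty in pH becomes a ~50% spread in solubility.
```

The amplification comes from the exponential dependence: each 0.1 pH unit shifts the solubility by a factor of 10^0.2, so modest analytical uncertainty dominates the end result.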
Uncertainty Analysis of the NASA Glenn 8x6 Supersonic Wind Tunnel
Stephens, Julia; Hubbard, Erin; Walter, Joel; McElroy, Tyler
2016-01-01
This paper presents methods and results of a detailed measurement uncertainty analysis that was performed for the 8- by 6-foot Supersonic Wind Tunnel located at the NASA Glenn Research Center. The statistical methods and engineering judgments used to estimate elemental uncertainties are described. The Monte Carlo method of propagating uncertainty was selected to determine the uncertainty of calculated variables of interest. A detailed description of the Monte Carlo method as applied for this analysis is provided. Detailed uncertainty results for the uncertainty in average free stream Mach number as well as other variables of interest are provided. All results are presented as random (variation in observed values about a true value), systematic (potential offset between observed and true value), and total (random and systematic combined) uncertainty. The largest sources contributing to uncertainty are determined and potential improvement opportunities for the facility are investigated.
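The Monte Carlo approach the paper applies can be sketched for one derived quantity: Mach number from total and static pressure via the isentropic relation (gamma = 1.4). The pressures and the 0.1% gauge uncertainties below are assumed, not facility values.

```python
import math
import random
import statistics

def mach_from_pressures(p0, p):
    """Isentropic Mach number from total (p0) and static (p) pressure,
    gamma = 1.4: M = sqrt(5 * ((p0/p)**(2/7) - 1))."""
    return math.sqrt(5.0 * ((p0 / p) ** (2.0 / 7.0) - 1.0))

rng = random.Random(42)
p0_true, p_true = 101325.0, 19399.0      # assumed ~Mach 1.74 condition, Pa
samples = [
    mach_from_pressures(rng.gauss(p0_true, 0.001 * p0_true),
                        rng.gauss(p_true, 0.001 * p_true))
    for _ in range(10_000)
]
mach_mean = statistics.mean(samples)
mach_random_u = statistics.pstdev(samples)
```

A full facility analysis would track random and systematic terms separately, as the paper describes; this sketch shows only the random part of the propagation.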
Wang, Wei; Zhang, Xin; Meng, Qingyu; Zheng, Yuetao
2017-10-16
Phase-induced amplitude apodization (PIAA) is a promising technique in high contrast coronagraphs due to its high efficiency and small inner working angle. In this letter, we present a new method for calculating diffraction effects in PIAA coronagraphs based on boundary-wave diffraction theory. We propose a numerical propagator in an azimuthal boundary-integral form, and then derive its analytical propagator using the stationary phase approximation. This propagator has a straightforward physical meaning and an obvious advantage in calculation efficiency, compared with former methods based on numerical integration or the angular spectrum propagation method. Using this propagator, we can give a more direct explanation of the significant impact of the pre-apodizer. This propagator can also be used to calculate the aberration propagation properties of PIAA optics. The calculation is also simplified, since the decomposition procedure is not needed regardless of the form of the aberration.
Network planning under uncertainties
Ho, Kwok Shing; Cheung, Kwok Wai
2008-11-01
One of the main focuses of network planning is the optimization of the network resources required to build a network under a certain traffic demand projection. Traditionally, the inputs to this type of network planning problem are treated as deterministic. In reality, varying traffic requirements and fluctuations in network resources can cause uncertainties in the decision models. Failure to include these uncertainties in the network design process can severely affect the feasibility and economics of the network. Therefore, it is essential to find a solution that can be insensitive to the uncertain conditions during the network planning process. As early as the 1960s, a network planning problem with varying traffic requirements over time had been studied. Up to now, this kind of network planning problem is still being actively researched, especially for VPN network design. Another kind of network planning problem under uncertainty, studied actively in the past decade, addresses the fluctuations in network resources. One such hotly pursued research topic is survivable network planning. It considers the design of a network under the uncertainties brought by fluctuations in topology, to meet the requirement that the network remain intact up to a certain number of faults occurring anywhere in the network. Recently, the authors proposed a new planning methodology called Generalized Survivable Networks that tackles the network design problem under both varying traffic requirements and fluctuations of topology. Although all the above network planning problems handle various kinds of uncertainties, it is hard to find a generic framework, under more general uncertainty conditions, that allows a more systematic way of solving the problems. With a unified framework, the seemingly diverse models and algorithms can be intimately related, and possibly more insights and improvements can be brought out for solving the problem. This motivates us to seek a
Jones, P. W.; Strelitz, R. A.
2012-12-01
The output of a simulation is best comprehended through the agency and methods of visualization, but a vital component of good science is knowledge of uncertainty. While great strides have been made in the quantification of uncertainty, especially in simulation, there is still a notable gap: there is no widely accepted means of simultaneously viewing the data and the associated uncertainty in one pane. Visualization saturates the screen, using the full range of color, shadow, opacity and tricks of perspective to display even a single variable; there is no room left in the visualization expert's repertoire for uncertainty. We present a method of visualizing uncertainty, without sacrificing the clarity and power of the underlying visualization, that works as well in 3-D and time-varying visualizations as it does in 2-D. At its heart, it relies on a principal tenet of continuum mechanics, replacing the notion of value at a point with a more diffuse notion of density as a measure of content in a region. First, the uncertainties calculated or tabulated at each point are transformed into a piecewise continuous field of uncertainty density. We next compute a weighted Voronoi tessellation of a user-specified number N of convex polygonal/polyhedral cells such that each cell contains the same amount of uncertainty as defined by this density field; the problem thus devolves into a minimization. Computation of such a spatial decomposition is O(N*N), and it can be computed iteratively, making it possible to update easily over time, and faster. The polygonal mesh does not interfere with the visualization of the data and can easily be toggled on or off. In this representation, a small cell implies a great concentration of uncertainty, and conversely. The content-weighted polygons are identical to the cartogram familiar to the information visualization community in the depiction of, for example, voting results per state. Furthermore, one can dispense with the mesh or edges entirely, replacing them with symbols or glyphs
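A 1-D sketch of the core construction, partitioning a sampled uncertainty-density field into cells of equal uncertainty content, is below. The greedy split and the density values are illustrative assumptions; the paper's actual construction is a weighted Voronoi tessellation in 2-D/3-D.

```python
def equal_content_cells(density, n_cells):
    """Greedy 1-D partition into n_cells contiguous index ranges, each
    holding roughly the same total uncertainty content (illustrative)."""
    target = sum(density) / n_cells
    cells, acc, start = [], 0.0, 0
    for i, d in enumerate(density):
        acc += d
        if acc >= target and len(cells) < n_cells - 1:
            cells.append((start, i))
            start, acc = i + 1, 0.0
    cells.append((start, len(density) - 1))
    return cells

density = [1, 1, 1, 1, 8, 8, 1, 1, 1, 1]   # uncertainty concentrated mid-domain
cells = equal_content_cells(density, 3)
# The middle cell comes out smallest: small cells mark concentrated uncertainty.
```

This reproduces the key reading rule of the method: cell size is inversely related to local uncertainty, exactly as in a cartogram.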
Uncertainty and validation. Effect of user interpretation on uncertainty estimates
Energy Technology Data Exchange (ETDEWEB)
Kirchner, G. [Univ. of Bremen (Germany)]; Peterson, R. [AECL, Chalk River, ON (Canada)] [and others]
1996-11-01
Uncertainty in predictions of environmental transfer models arises from, among other sources, the adequacy of the conceptual model, the approximations made in coding the conceptual model, the quality of the input data, the uncertainty in parameter values, and the assumptions made by the user. In recent years efforts to quantify the confidence that can be placed in predictions have been increasing, but have concentrated on a statistical propagation of the influence of parameter uncertainties on the calculational results. The primary objective of this Working Group of BIOMOVS II was to test the user's influence on model predictions on a more systematic basis than has been done before. The main goals were as follows: to compare differences between predictions from different people all using the same model and the same scenario description with the statistical uncertainties calculated by the model; to investigate the main reasons for different interpretations by users; and to create a better awareness of the potential influence of the user on the modeling results. Terrestrial food chain models driven by deposition of radionuclides from the atmosphere were used. Three codes were obtained and run with three scenarios by a maximum of 10 users. A number of conclusions can be drawn, some of which are general and independent of the type of models and processes studied, while others are restricted to the few processes that were addressed directly: For any set of predictions, the variation in best estimates was greater than one order of magnitude. Often the range increased from deposition to pasture to milk, probably due to additional transfer processes. The 95% confidence intervals about the predictions calculated from the parameter distributions prepared by the participants did not always overlap the observations; similarly, sometimes the confidence intervals on the predictions did not overlap. Often the 95% confidence intervals of individual predictions were smaller than the
Zhang, Jiaxin; Shields, Michael D.
2018-01-01
This paper addresses the problem of uncertainty quantification and propagation when the data available for characterizing probability distributions are scarce. We propose a methodology wherein the full uncertainty associated with probability model form and parameter estimation is retained and efficiently propagated. This is achieved by applying the information-theoretic multimodel inference method to identify plausible candidate probability densities, along with the probability that each is the best model in the Kullback-Leibler sense. The joint parameter densities for each plausible model are then estimated using Bayes' rule. We then propagate this full set of probability models by estimating an optimal importance sampling density that is representative of all plausible models, propagating this density, and reweighting the samples according to each of the candidate probability models. This is in contrast with conventional methods that try to identify a single probability model that encapsulates the full uncertainty caused by lack of data and consequently underestimate uncertainty. The result is a complete probabilistic description of both aleatory and epistemic uncertainty, achieved with a reduction in computational cost of several orders of magnitude. It is shown how the model can be updated to adaptively accommodate added data and added candidate probability models. The method is applied to uncertainty analysis of plate buckling strength, where it is demonstrated how dataset size affects the confidence (or lack thereof) we can place in statistical estimates of response when data are lacking.
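The reweighting step described above can be sketched as follows (not the authors' code; the candidate models and the response g(x) = x^2 are illustrative): draw one set of samples from a shared importance density q, propagate them through the model once, then reuse them under each plausible probability model via importance weights.

```python
# Importance-sampling reweighting across candidate probability models:
# samples drawn once from q are reused for every model p via weights
# w_i = p(x_i) / q(x_i), avoiding repeated propagation runs.
import math
import random

random.seed(1)

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# two plausible candidate models (e.g. identified by multimodel inference)
models = [(0.0, 1.0), (0.3, 1.4)]

# shared importance density: wide enough to cover all candidates
q_mu, q_sigma = 0.15, 1.8
samples = [random.gauss(q_mu, q_sigma) for _ in range(20000)]
g = [x ** 2 for x in samples]          # propagate through the model once

ests = {}
for mu, sigma in models:
    w = [normal_pdf(x, mu, sigma) / normal_pdf(x, q_mu, q_sigma) for x in samples]
    ests[(mu, sigma)] = sum(wi * gi for wi, gi in zip(w, g)) / sum(w)
    print(f"E[g] under N({mu}, {sigma}^2) ~ {ests[(mu, sigma)]:.3f}")
```

The spread of the per-model estimates is what a single "best" model would hide: it reflects the epistemic uncertainty due to scarce data.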
Commonplaces and social uncertainty
DEFF Research Database (Denmark)
Lassen, Inger
2008-01-01
This article explores the concept of uncertainty in four focus group discussions about genetically modified food. In the discussions, members of the general public interact with food biotechnology scientists while negotiating their attitudes towards genetic engineering. Their discussions offer an example of risk discourse in which the use of commonplaces seems to be a central feature (Myers 2004: 81). My analyses support earlier findings that commonplaces serve important interactional purposes (Barton 1999) and that they are used for mitigating disagreement, for closing topics and for facilitating risk discourse (Myers 2005; 2007). In addition, however, I argue that commonplaces are used to mitigate feelings of insecurity caused by uncertainty and to negotiate new codes of moral conduct. Keywords: uncertainty, commonplaces, risk discourse, focus groups, appraisal
Propagation into an unstable state
International Nuclear Information System (INIS)
Dee, G.
1985-01-01
We describe propagating front solutions of the equations of motion of pattern-forming systems. We make a number of conjectures concerning the properties of such fronts in connection with pattern selection in these systems. We describe a calculation which can be used to calculate the velocity and state selected by certain types of propagating fronts. We investigate the propagating front solutions of the amplitude equation which provides a valid dynamical description of many pattern-forming systems near onset
Broadband unidirectional ultrasound propagation
Sinha, Dipen N.; Pantea, Cristian
2017-12-12
A passive, linear arrangement of a sonic crystal-based apparatus and method including a 1D sonic crystal, a nonlinear medium, and an acoustic low-pass filter, for permitting unidirectional broadband ultrasound propagation as a collimated beam for underwater, air or other fluid communication, are described. The signal to be transmitted is first used to modulate a high-frequency ultrasonic carrier wave which is directed into the sonic crystal side of the apparatus. The apparatus processes the modulated signal, whereby the original low-frequency signal exits the apparatus as a collimated beam on the side of the apparatus opposite the sonic crystal. The sonic crystal provides a bandpass acoustic filter through which the modulated high-frequency ultrasonic signal passes, and the nonlinear medium demodulates the modulated signal and recovers the low-frequency sound beam. The low-pass filter removes remaining high-frequency components, and contributes to the unidirectional property of the apparatus.
Precursors in Front Propagation
International Nuclear Information System (INIS)
Kessler, D.A
1998-01-01
We investigate the dynamical construction of the leading edge of propagating fronts. Whereas the steady-state front is typically an exponential, far ahead of the front the falloff is much faster, in a fashion determined by the Green's function of the problem. We show that there is a universal transition from the steady-state exponential front to a Gaussian falloff. The transition region is of width t^(1/2) and moves out ahead of the front at a constant velocity greater than the steady-state front speed. This Gaussian front is in general modified even further ahead of the front to match onto the expected Green's function behavior. We demonstrate this in the case of the Ginzburg-Landau and Korteweg-de Vries equations. We also discuss the relevance of this mechanism for velocity selection in the Fisher equation
Curvilinear crack layer propagation
Chudnovsky, Alexander; Chaoui, Kamel; Moet, Abdelsamie
1987-01-01
An account is given of an experiment designed to allow observation of the effect of damage orientation on the direction of crack growth in the case of crack layer propagation, using polystyrene as the model material. The direction of crack advance under a given loading condition is noted to be determined by a competition between the tendency of the crack to maintain its current direction and the tendency to follow the orientation of the crazes at its tip. The orientation of the crazes is, on the other hand, determined by the stress field due to the interaction of the crack, the crazes, and the hole. The changes in craze rotation relative to the crack define the active zone rotation.
Atomistics of crack propagation
International Nuclear Information System (INIS)
Sieradzki, K.; Dienes, G.J.; Paskin, A.; Massoumzadeh, B.
1988-01-01
The molecular dynamics technique is used to investigate static and dynamic aspects of crack extension. The material chosen for this study was the 2D triangular solid with atoms interacting via the Johnson potential. The 2D Johnson solid was chosen for this study since a sharp crack in this material remains stable against dislocation emission up to the critical Griffith load. This behavior allows for a meaningful comparison between the simulation results and continuum energy theorems for crack extension by appropriately defining an effective modulus which accounts for sample size effects and the non-linear elastic behavior of the Johnson solid. Simulation results are presented for the stress fields of moving cracks and these dynamic results are discussed in terms of the dynamic crack propagation theories of Mott, Eshelby, and Freund
Sensitivity and uncertainty analysis
Cacuci, Dan G; Navon, Ionel Michael
2005-01-01
As computer-assisted modeling and analysis of physical processes have continued to grow and diversify, sensitivity and uncertainty analyses have become indispensable scientific tools. Sensitivity and Uncertainty Analysis. Volume I: Theory focused on the mathematical underpinnings of two important methods for such analyses: the Adjoint Sensitivity Analysis Procedure and the Global Adjoint Sensitivity Analysis Procedure. This volume concentrates on the practical aspects of performing these analyses for large-scale systems. The applications addressed include two-phase flow problems, a radiative c
Uncertainty in artificial intelligence
Levitt, TS; Lemmer, JF; Shachter, RD
1990-01-01
Clearly illustrated in this volume is the current relationship between Uncertainty and AI. It has been said that research in AI revolves around five basic questions asked relative to some particular domain: What knowledge is required? How can this knowledge be acquired? How can it be represented in a system? How should this knowledge be manipulated in order to provide intelligent behavior? How can the behavior be explained? In this volume, all of these questions are addressed. From the perspective of the relationship of uncertainty to the basic questions of AI, the book divides naturally i
Uncertainty of spatial straightness in 3D measurement
International Nuclear Information System (INIS)
Wang Jinxing; Jiang Xiangqian; Ma Limin; Xu Zhengao; Li Zhu
2005-01-01
The least-squares method is commonly employed to verify spatial straightness in the actual three-dimensional measurement process, but the uncertainty of the verification result is usually not given by coordinate measuring machines. Based on the basic principle of spatial straightness least-squares verification and the uncertainty propagation formula given by ISO/TS 14253-2, a calculation method for the uncertainty of spatial straightness least-squares verification is proposed in this paper. In this method, the coefficients of the line equation are regarded as a statistical vector, so that the line equation, the result of the spatial straightness verification and the uncertainty of the result can be obtained once the expected value and covariance matrix of the vector are determined. The method not only assures the integrity of the verification result, but also accords with the requirements of the new generation of GPS standards, which can improve the veracity of verification
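The propagation step can be illustrated with a simplified 2D analogue (a sketch under assumed data, not the paper's 3D spatial-straightness method): treat the least-squares line coefficients as a statistical vector, form their covariance matrix, and propagate it to the uncertainty of the fitted value at a chosen point, in the GUM-style manner of ISO/TS 14253-2.

```python
# Fit y = a + b*x by least squares; the coefficient vector (a, b) gets
# covariance s^2 * (X^T X)^{-1}, which is then propagated to u(y(x0)).
import math

def line_fit_with_uncertainty(xs, ys, x0):
    n = len(xs)
    sx, sxx = sum(xs), sum(x * x for x in xs)
    sy, sxy = sum(ys), sum(x * y for x, y in zip(xs, ys))
    det = n * sxx - sx * sx
    a = (sy * sxx - sx * sxy) / det
    b = (n * sxy - sx * sy) / det
    # residual variance estimate (n minus 2 fitted parameters)
    s2 = sum((y - a - b * x) ** 2 for x, y in zip(xs, ys)) / (n - 2)
    # covariance matrix of (a, b) = s2 * (X^T X)^{-1}
    var_a, var_b, cov_ab = s2 * sxx / det, s2 * n / det, -s2 * sx / det
    # propagate: u^2(y(x0)) = var_a + 2*x0*cov_ab + x0^2*var_b
    u = math.sqrt(var_a + 2 * x0 * cov_ab + x0 ** 2 * var_b)
    return a, b, u

xs = [0, 1, 2, 3, 4, 5]
ys = [0.02, 1.01, 1.97, 3.05, 3.99, 5.03]
a, b, u = line_fit_with_uncertainty(xs, ys, 2.5)
print(f"a={a:.3f} b={b:.3f} u(y(2.5))={u:.4f}")
```

The same covariance-of-the-coefficient-vector idea extends to the two-line parameterization of a spatial line, which is what the paper works with.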
Uncertainty and sensitivity methods in support of PSA level 2
International Nuclear Information System (INIS)
Devictor, N.; Bolado Lavin, R.
2007-01-01
Dealing with uncertainties in PSA level 2 requires a set of statistical techniques to assess input uncertainty, to propagate uncertainties in an efficient way, to characterize output uncertainty appropriately, and to extract information from computer code runs through intelligent use of sensitivity analysis techniques. The purpose of this paper is to give an overview of statistical and probabilistic methods and tools that address these topics, and to provide some guidance on their suitability and limitations for use in a PSA level 2. We also state our position on their implementation in L2 PSA software; many of these methods are very time-consuming, and seem more suitable for the analysis of submodels or for focusing on specific questions. (authors)
Sensitivity and uncertainty analysis of the PATHWAY radionuclide transport model
International Nuclear Information System (INIS)
Otis, M.D.
1983-01-01
Procedures were developed for the uncertainty and sensitivity analysis of a dynamic model of radionuclide transport through human food chains. Uncertainty in model predictions was estimated by propagation of parameter uncertainties using a Monte Carlo simulation technique. Sensitivity of model predictions to individual parameters was investigated using the partial correlation coefficient of each parameter with model output. Random values produced for the uncertainty analysis were used in the correlation analysis for sensitivity. These procedures were applied to the PATHWAY model which predicts concentrations of radionuclides in foods grown in Nevada and Utah and exposed to fallout during the period of atmospheric nuclear weapons testing in Nevada. Concentrations and time-integrated concentrations of iodine-131, cesium-136, and cesium-137 in milk and other foods were investigated. 9 figs., 13 tabs
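The two procedures, Monte Carlo propagation of parameter uncertainty followed by a correlation-based sensitivity ranking, can be sketched on a toy transfer model. The parameters and the model are illustrative, not PATHWAY's, and plain Pearson correlation stands in for the partial correlation coefficient used in the study.

```python
# Monte Carlo propagation: sample uncertain parameters, run the model,
# summarize the output distribution; then reuse the same random draws to
# rank parameters by their correlation with the model output.
import math
import random

random.seed(7)

def model(transfer, decay_t):
    # toy food-chain step: deposition -> concentration in milk
    deposition = 100.0
    return deposition * transfer * math.exp(-decay_t)

n = 5000
transfer = [random.lognormvariate(math.log(0.01), 0.4) for _ in range(n)]
decay_t = [random.uniform(0.5, 1.5) for _ in range(n)]
out = [model(a, b) for a, b in zip(transfer, decay_t)]

def pearson(u, v):
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = math.sqrt(sum((a - mu) ** 2 for a in u))
    sv = math.sqrt(sum((b - mv) ** 2 for b in v))
    return cov / (su * sv)

mean = sum(out) / n
srt = sorted(out)
p5, p95 = srt[n // 20], srt[n - n // 20 - 1]
print(f"mean={mean:.3f}, 90% interval=({p5:.3f}, {p95:.3f})")
for name, par in [("transfer", transfer), ("decay_t", decay_t)]:
    print(name, "correlation with output:", round(pearson(par, out), 2))
```

Reusing the same random draws for both the uncertainty and the sensitivity analysis is the key economy noted in the abstract: one set of model runs serves both purposes.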
New features and improved uncertainty analysis in the NEA nuclear data sensitivity tool (NDaST)
Dyrda, J.; Soppera, N.; Hill, I.; Bossant, M.; Gulliford, J.
2017-09-01
Following the release and initial testing period of the NEA's Nuclear Data Sensitivity Tool [1], new features have been designed and implemented in order to expand its uncertainty analysis capabilities. The aim is to provide a free online tool for integral benchmark testing, that is both efficient and comprehensive, meeting the needs of the nuclear data and benchmark testing communities. New features include access to P1 sensitivities for neutron scattering angular distribution [2] and constrained Chi sensitivities for the prompt fission neutron energy sampling. Both of these are compatible with covariance data accessed via the JANIS nuclear data software, enabling propagation of the resultant uncertainties in keff to a large series of integral experiment benchmarks. These capabilities are available using a number of different covariance libraries e.g., ENDF/B, JEFF, JENDL and TENDL, allowing comparison of the broad range of results it is possible to obtain. The IRPhE database of reactor physics measurements is now also accessible within the tool in addition to the criticality benchmarks from ICSBEP. Other improvements include the ability to determine and visualise the energy dependence of a given calculated result in order to better identify specific regions of importance or high uncertainty contribution. Sorting and statistical analysis of the selected benchmark suite is now also provided. Examples of the plots generated by the software are included to illustrate such capabilities. Finally, a number of analytical expressions, for example Maxwellian and Watt fission spectra will be included. This will allow the analyst to determine the impact of varying such distributions within the data evaluation, either through adjustment of parameters within the expressions, or by comparison to a more general probability distribution fitted to measured data. The impact of such changes is verified through calculations which are compared to a `direct' measurement found by
Feizizadeh, Bakhtiar; Blaschke, Thomas
2014-03-04
GIS-based multicriteria decision analysis (MCDA) methods are increasingly being used in landslide susceptibility mapping. However, the uncertainties that are associated with MCDA techniques may significantly impact the results. This may sometimes lead to inaccurate outcomes and undesirable consequences. This article introduces a new GIS-based MCDA approach. We illustrate the consequences of applying different MCDA methods within a decision-making process through uncertainty analysis. Three GIS-MCDA methods in conjunction with Monte Carlo simulation (MCS) and Dempster-Shafer theory are analyzed for landslide susceptibility mapping (LSM) in the Urmia lake basin in Iran, which is highly susceptible to landslide hazards. The methodology comprises three stages. First, the LSM criteria are ranked and a sensitivity analysis is implemented to simulate error propagation based on the MCS. The resulting weights are expressed through probability density functions. Accordingly, within the second stage, three MCDA methods, namely analytical hierarchy process (AHP), weighted linear combination (WLC) and ordered weighted average (OWA), are used to produce the landslide susceptibility maps. In the third stage, accuracy assessments are carried out and the uncertainties of the different results are measured. We compare the accuracies of the three MCDA methods based on (1) the Dempster-Shafer theory and (2) a validation of the results using an inventory of known landslides and their respective coverage based on object-based image analysis of IRS-ID satellite images. The results of this study reveal that through the integration of GIS and MCDA models, it is possible to identify strategies for choosing an appropriate method for LSM. Furthermore, our findings indicate that the integration of MCDA and MCS can significantly improve the accuracy of the results. In LSM, the AHP method performed best, while the OWA reveals better performance in the reliability assessment. The WLC operation
Causality and analyticity in optics
International Nuclear Information System (INIS)
Nussenzveig, H.M.
In order to provide an overall picture of the broad range of optical phenomena that are directly linked with the concepts of causality and analyticity, the following topics are briefly reviewed, emphasizing recent developments: 1) Derivation of dispersion relations for the optical constants of general linear media from causality. Application to the theory of natural optical activity. 2) Derivation of sum rules for the optical constants from causality and from the short-time response function (asymptotic high-frequency behavior). Average spectral behavior of optical media. Applications. 3) Role of spectral conditions. Analytic properties of coherence functions in quantum optics. Reconstruction theorem. 4) Phase retrieval problems. 5) Inverse scattering problems. 6) Solution of nonlinear evolution equations in optics by inverse scattering methods. Application to self-induced transparency. Causality in nonlinear wave propagation. 7) Analytic continuation in frequency and angular momentum. Complex singularities. Resonances and natural-mode expansions. Regge poles. 8) Wigner's causal inequality. Time delay. Spatial displacements in total reflection. 9) Analyticity in diffraction theory. Complex angular momentum theory of Mie scattering. Diffraction as a barrier tunnelling effect. Complex trajectories in optics. (Author) [pt
International Nuclear Information System (INIS)
Wilson, G.E.
1992-01-01
The Analytic Hierarchy Process (AHP) has been used to help determine the importance of components and phenomena in thermal-hydraulic safety analyses of nuclear reactors. The AHP results are based, in part, on expert opinion. Therefore, it is prudent to evaluate the uncertainty of the AHP ranks of importance. Prior applications have addressed uncertainty with experimental data comparisons and bounding sensitivity calculations. These methods work well when a sufficient experimental data base exists to justify the comparisons. However, in the case of limited or no experimental data the size of the uncertainty is normally made conservatively large. Accordingly, the author has taken another approach, that of performing a statistically based uncertainty analysis. The new work is based on prior evaluations of the importance of components and phenomena in the thermal-hydraulic safety analysis of the Advanced Neutron Source Reactor (ANSR), a new facility now in the design phase. The uncertainty during large break loss-of-coolant and decay heat removal scenarios is estimated by assigning a probability distribution function (pdf) to the potential error in the initial expert estimates of pair-wise importance between the components. Using a Monte Carlo sampling technique, the error pdfs are propagated through the AHP software solutions to determine a pdf of uncertainty in the system-wide importance of each component. To enhance the generality of the results, a study of one other problem having a different number of elements is reported, as are the effects of a larger assumed pdf error in the expert ranks. Validation of the Monte Carlo sample size and repeatability is also documented
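The Monte Carlo scheme described above can be sketched as follows. The judgement matrix and error magnitude are illustrative, not the ANSR study's, and row geometric means stand in for the principal-eigenvector solution computed by AHP software (they coincide for consistent matrices and are a standard approximation otherwise).

```python
# Perturb each pairwise-importance judgement with a lognormal error,
# recompute the AHP weights, and build up a distribution of each
# component's system-wide importance.
import math
import random

random.seed(42)

base = [[1.0, 3.0, 5.0],     # expert's pairwise importance judgements
        [1 / 3.0, 1.0, 2.0],
        [1 / 5.0, 1 / 2.0, 1.0]]

def ahp_weights(m):
    gm = [math.prod(row) ** (1.0 / len(row)) for row in m]
    s = sum(gm)
    return [g / s for g in gm]

draws = []
for _ in range(2000):
    m = [row[:] for row in base]
    for i in range(3):
        for j in range(i + 1, 3):
            err = random.lognormvariate(0.0, 0.2)   # assumed judgement error
            m[i][j] = base[i][j] * err
            m[j][i] = 1.0 / m[i][j]                 # keep reciprocity
    draws.append(ahp_weights(m))

for k in range(3):
    w = sorted(d[k] for d in draws)
    print(f"component {k}: median={w[1000]:.3f}, "
          f"90% interval=({w[100]:.3f}, {w[1900]:.3f})")
```

The resulting per-component intervals play the role of the pdf of system-wide importance described in the abstract.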
Uncertainty Analyses and Strategy
International Nuclear Information System (INIS)
Kevin Coppersmith
2001-01-01
The DOE identified a variety of uncertainties, arising from different sources, during its assessment of the performance of a potential geologic repository at the Yucca Mountain site. In general, the number and detail of process models developed for the Yucca Mountain site, and the complex coupling among those models, make the direct incorporation of all uncertainties difficult. The DOE has addressed these issues in a number of ways using an approach to uncertainties that is focused on producing a defensible evaluation of the performance of a potential repository. The treatment of uncertainties oriented toward defensible assessments has led to analyses and models with so-called "conservative" assumptions and parameter bounds, where conservative implies lower performance than might be demonstrated with a more realistic representation. The varying maturity of the analyses and models, and uneven level of data availability, result in total system level analyses with a mix of realistic and conservative estimates (for both probabilistic representations and single values). That is, some inputs have realistically represented uncertainties, and others are conservatively estimated or bounded. However, this approach is consistent with the "reasonable assurance" approach to compliance demonstration, which was called for in the U.S. Nuclear Regulatory Commission's (NRC) proposed 10 CFR Part 63 regulation (64 FR 8640 [DIRS 101680]). A risk analysis that includes conservatism in the inputs will result in conservative risk estimates. Therefore, the approach taken for the Total System Performance Assessment for the Site Recommendation (TSPA-SR) provides a reasonable representation of processes and conservatism for purposes of site recommendation. However, mixing unknown degrees of conservatism in models and parameter representations reduces the transparency of the analysis and makes the development of coherent and consistent probability statements about projected repository
Kudo, Ryoji; Yoshida, Takeo; Masumoto, Takao
2017-05-01
The impact of climate change on snow water equivalent (SWE) and its uncertainty were investigated in snowy areas of subarctic and temperate climate zones in Japan by using a snow process model and climate projections derived from general circulation models (GCMs). In particular, we examined how the uncertainty due to GCMs propagated through the snow model, which contained nonlinear processes defined by thresholds, as an example of the uncertainty caused by interactions among multiple sources of uncertainty. An assessment based on the climate projections in Coupled Model Intercomparison Project Phase 5 indicated that heavy-snowfall areas in the temperate zone (especially in low-elevation areas) were markedly vulnerable to temperature change, showing a large SWE reduction even under slight changes in winter temperature. The uncertainty analysis demonstrated that the uncertainty associated with snow processes (1) can be accounted for mainly by the interactions between GCM uncertainty (in particular, the differences of projected temperature changes between GCMs) and the nonlinear responses of the snow model and (2) depends on the balance between the magnitude of projected temperature changes and present climates dominated largely by climate zones and elevation. Specifically, when the peaks of the distributions of daily mean temperature projected by GCMs cross the key thresholds set in the model, the GCM uncertainty, even if tiny, can be amplified by the nonlinear propagation through the snow process model. This amplification results in large uncertainty in projections of CC impact on snow processes.
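The amplification mechanism, a tiny temperature difference straddling a model threshold producing a large difference in snow outcomes, can be shown with a minimal degree-day sketch (assumed threshold and melt rate, not the paper's snow model):

```python
# A rain/snow threshold at 0 C turns a 0.5 C difference between two GCM
# temperature projections into a large difference in peak snow water
# equivalent (SWE): the nonlinear snow process amplifies GCM uncertainty.
def peak_swe(daily_temp_c, daily_precip_mm, melt_rate=3.0):
    swe, peak = 0.0, 0.0
    for t, p in zip(daily_temp_c, daily_precip_mm):
        if t <= 0.0:
            swe += p                                 # precipitation falls as snow
        else:
            swe = max(0.0, swe - melt_rate * t)      # degree-day melt, rain not stored
        peak = max(peak, swe)
    return peak

# winter temperatures hovering near the threshold; 5 mm precipitation/day
days = 90
temps_gcm_a = [-0.3] * days    # slightly below freezing
temps_gcm_b = [+0.2] * days    # only 0.5 C warmer
precip = [5.0] * days

print("GCM A peak SWE (mm):", peak_swe(temps_gcm_a, precip))
print("GCM B peak SWE (mm):", peak_swe(temps_gcm_b, precip))
```

Because the two projected temperature series sit on opposite sides of the threshold, the first scenario accumulates the full winter snowpack while the second accumulates none, which is precisely the kind of nonlinear propagation of GCM uncertainty the abstract describes.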
The use of error and uncertainty methods in the medical laboratory.
Oosterhuis, Wytze P; Bayat, Hassan; Armbruster, David; Coskun, Abdurrahman; Freeman, Kathleen P; Kallner, Anders; Koch, David; Mackenzie, Finlay; Migliarino, Gabriel; Orth, Matthias; Sandberg, Sverre; Sylte, Marit S; Westgard, Sten; Theodorsson, Elvar
2018-01-26
Error methods - compared with uncertainty methods - offer simpler, more intuitive and practical procedures for calculating measurement uncertainty and conducting quality assurance in laboratory medicine. However, uncertainty methods are preferred in other fields of science as reflected by the guide to the expression of uncertainty in measurement. When laboratory results are used for supporting medical diagnoses, the total uncertainty consists only partially of analytical variation. Biological variation, pre- and postanalytical variation all need to be included. Furthermore, all components of the measuring procedure need to be taken into account. Performance specifications for diagnostic tests should include the diagnostic uncertainty of the entire testing process. Uncertainty methods may be particularly useful for this purpose but have yet to show their strength in laboratory medicine. The purpose of this paper is to elucidate the pros and cons of error and uncertainty methods as groundwork for future consensus on their use in practical performance specifications. Error and uncertainty methods are complementary when evaluating measurement data.
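The point that analytical variation is only one component of the total uncertainty can be illustrated with the standard quadrature combination of independent uncertainty components (the CV values below are illustrative, not figures from the paper):

```python
# GUM-style combination of independent relative uncertainty components:
# coefficients of variation add in quadrature, so the largest component
# (here, biological variation) dominates the total.
import math

cv_analytical = 2.0     # % analytical imprecision
cv_biological = 5.0     # % within-subject biological variation
cv_preanalytical = 1.5  # % sampling and handling

cv_total = math.sqrt(cv_analytical ** 2 + cv_biological ** 2 + cv_preanalytical ** 2)
print(f"total CV = {cv_total:.2f}%")
```

Halving the analytical CV here would reduce the total only marginally, which is why performance specifications need to consider the entire testing process rather than the analytical step alone.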
Effects of Relative Platform and Target Motion on Propagation of High Energy Lasers
2016-06-01
HEL Optical pRopagation (ANCHOR). This code uses well-known analytical scaling laws and a scriptable user interface to allow the quick exploration of multi-dimensional ...
[Figure 1. Atmospheric transmittance measured over ...]
Morse oscillator propagator in the high temperature limit II: Quantum dynamics and spectroscopy
Toutounji, Mohamad
2018-04-01
This paper is a continuation of Paper I (Toutounji, 2017), whose motivation was to test the applicability of the Morse oscillator propagator whose analytical form was derived by Duru (1983). The Morse oscillator propagator was reported (Duru, 1983) as a triple integral of a functional of the modified Bessel function of the first kind, which considerably limits its applicability. For this reason, I was prompted to find a regime under which the Morse oscillator propagator may be simplified and hence expressed in closed form. This was accomplished in Paper I. Because the Morse oscillator is of central importance and widely used in modelling vibrations, the applicability of its propagator is extended here to applications in quantum dynamics and spectroscopy, using the off-diagonal propagator of the Morse oscillator whose analytical form is derived.
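For context, the Morse oscillator referred to above is the anharmonic model with the standard potential and bound-state spectrum (standard background, not expressions taken from this paper):

```latex
V(r) = D_e \left( 1 - e^{-a (r - r_e)} \right)^2 ,
\qquad
E_n = \hbar \omega_0 \left( n + \tfrac12 \right)
      - \frac{\left[ \hbar \omega_0 \left( n + \tfrac12 \right) \right]^2}{4 D_e} ,
\qquad
\omega_0 = a \sqrt{2 D_e / m} ,
```

with $D_e$ the dissociation energy, $a$ the range parameter and $r_e$ the equilibrium position; the finite number of bound states and this exactly known anharmonic spectrum are what make the Morse oscillator a standard model for molecular vibrations.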
Uncertainties in repository modeling
Energy Technology Data Exchange (ETDEWEB)
Wilson, J.R.
1996-12-31
The distant future is very difficult to predict. Unfortunately, our regulators are being encouraged to extend their regulatory period from the standard 10,000 years to 1 million years. Such overconfidence is not justified due to uncertainties in dating, calibration, and modeling.
Uncertainties in repository modeling
International Nuclear Information System (INIS)
Wilson, J.R.
1996-01-01
The distant future is very difficult to predict. Unfortunately, our regulators are being encouraged to extend their regulatory period from the standard 10,000 years to 1 million years. Such overconfidence is not justified due to uncertainties in dating, calibration, and modeling.
Risk, Uncertainty, and Entrepreneurship
DEFF Research Database (Denmark)
Koudstaal, Martin; Sloof, Randolph; Van Praag, Mirjam
2016-01-01
Theory predicts that entrepreneurs have distinct attitudes toward risk and uncertainty, but empirical evidence is mixed. To better understand these mixed results, we perform a large “lab-in-the-field” experiment comparing entrepreneurs to managers (a suitable comparison group) and employees (n = ...
Schrodinger's Uncertainty Principle?
Indian Academy of Sciences (India)
Schrödinger's Uncertainty Principle? - Lilies can be Painted. Rajaram Nityananda. General Article, Resonance – Journal of Science Education, Volume 4, Issue 2, February 1999, pp. 24-26.
International Nuclear Information System (INIS)
Haefele, W.; Renn, O.; Erdmann, G.
1990-01-01
The notion of 'risk' is discussed in its social and technological contexts, leading to an investigation of the terms factuality, hypotheticality, uncertainty, and vagueness, and to the problems of acceptance and acceptability especially in the context of political decision finding. (DG) [de
Uncertainty Aware Structural Topology Optimization Via a Stochastic Reduced Order Model Approach
Aguilo, Miguel A.; Warner, James E.
2017-01-01
This work presents a stochastic reduced order modeling strategy for the quantification and propagation of uncertainties in topology optimization. Uncertainty aware optimization problems can be computationally complex due to the substantial number of model evaluations that are necessary to accurately quantify and propagate uncertainties. This computational complexity is greatly magnified if a high-fidelity, physics-based numerical model is used for the topology optimization calculations. Stochastic reduced order model (SROM) methods are applied here to effectively 1) alleviate the prohibitive computational cost associated with an uncertainty aware topology optimization problem; and 2) quantify and propagate the inherent uncertainties due to design imperfections. A generic SROM framework that transforms the uncertainty aware, stochastic topology optimization problem into a deterministic optimization problem that relies only on independent calls to a deterministic numerical model is presented. This approach facilitates the use of existing optimization and modeling tools to accurately solve the uncertainty aware topology optimization problems in a fraction of the computational demand required by Monte Carlo methods. Finally, an example in structural topology optimization is presented to demonstrate the effectiveness of the proposed uncertainty aware structural topology optimization approach.
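The central SROM property described above, that the stochastic optimization problem reduces to independent calls to a deterministic model, can be sketched minimally as follows; the sample/weight pairs and the toy compliance-style model are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def deterministic_model(design, stiffness):
    # Stand-in for a deterministic physics solver (e.g. compliance ~ 1/E).
    return design / stiffness

# Toy SROM for an uncertain stiffness: a few samples with probability weights
# chosen to represent the input distribution (values are assumptions).
srom_samples = np.array([0.8, 1.0, 1.2])
srom_probs = np.array([0.25, 0.5, 0.25])

def expected_objective(design):
    # The stochastic objective becomes a weighted sum of independent
    # deterministic model evaluations -- the key SROM reduction.
    return float(np.dot(srom_probs,
                        [deterministic_model(design, e) for e in srom_samples]))
```

Because `expected_objective` only ever calls the deterministic model, any existing deterministic optimizer can be wrapped around it unchanged, at a small fixed number of evaluations per design rather than the thousands a Monte Carlo estimate would need.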
Wood, Alexander
2004-01-01
deterministic methods for Hg TMDL decision support, one that is fully compatible with an adaptive management approach. This alternative approach uses empirical data and informed judgment to provide a scientific and technical basis for helping National Pollutant Discharge Elimination System (NPDES) permit holders make management decisions. An Hg-offset system would be an option if a wastewater-treatment plant could not achieve NPDES permit requirements for HgT reduction. We develop a probabilistic decision-analytical model consisting of three submodels for HgT loading, MeHg, and cost mitigation within a Bayesian network that integrates information of varying rigor and detail into a simple model of a complex system. Hg processes are identified and quantified by using a combination of historical data, statistical models, and expert judgment. Such an integrated approach to uncertainty analysis allows easy updating of prediction and inference when observations of model variables are made. We demonstrate our approach with data from the Cache Creek watershed (a subbasin of the Sacramento River watershed). The empirical models used to generate the needed probability distributions are based on the same empirical models currently being used by the Central Valley Regional Water Quality Control Cache Creek Hg TMDL working group. The significant difference is that input uncertainty and error are explicitly included in the model and propagated throughout its algorithms. This work demonstrates how to integrate uncertainty into the complex and highly uncertain Hg TMDL decisionmaking process. The various sources of uncertainty are propagated as decision risk that allows decisionmakers to simultaneously consider uncertainties in remediation/implementation costs while attempting to meet environmental/ecologic targets. We must note that this research is on going. As more data are collected, the HgT and cost-mitigation submodels are updated and the uncer
Dynamical Models for Computer Viruses Propagation
Directory of Open Access Journals (Sweden)
José R. C. Piqueira
2008-01-01
Full Text Available Nowadays, digital computer systems and networks are the main engineering tools, being used in planning, design, operation, and control of all sizes of building, transportation, machinery, business, and life maintaining devices. Consequently, computer viruses became one of the most important sources of uncertainty, contributing to decrease the reliability of vital activities. A lot of antivirus programs have been developed, but they are limited to detecting and removing infections, based on previous knowledge of the virus code. In spite of having good adaptation capability, these programs work just as vaccines against diseases and are not able to prevent new infections based on the network state. Here, a trial on modeling computer viruses propagation dynamics relates it to other notable events occurring in the network permitting to establish preventive policies in the network management. Data from three different viruses are collected in the Internet and two different identification techniques, autoregressive and Fourier analyses, are applied showing that it is possible to forecast the dynamics of a new virus propagation by using the data collected from other viruses that formerly infected the network.
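The autoregressive identification mentioned in the abstract can be sketched in a minimal least-squares form; the infection counts and AR order below are illustrative assumptions, not the paper's Internet data.

```python
import numpy as np

def fit_ar(series, order=2):
    """Least-squares fit of an AR(p) model: y_t = a_1*y_{t-1} + ... + a_p*y_{t-p}."""
    y = np.asarray(series, dtype=float)
    n = len(y)
    # Column k holds the lag-k values aligned with the targets y[order:].
    X = np.column_stack([y[order - k : n - k] for k in range(1, order + 1)])
    coeffs, *_ = np.linalg.lstsq(X, y[order:], rcond=None)
    return coeffs

def forecast(series, coeffs, steps):
    """Iterate the fitted AR model forward from the observed history."""
    hist = list(series)
    for _ in range(steps):
        hist.append(sum(c * hist[-i - 1] for i, c in enumerate(coeffs)))
    return hist[len(series):]

# Hypothetical daily infection counts during an early outbreak phase:
infections = [1, 2, 4, 8, 16, 32]
coeffs = fit_ar(infections, order=1)
prediction = forecast(infections, coeffs, steps=2)
```

The same fit applied to counts from a previously observed virus yields coefficients that can be reused to forecast a new outbreak, which is the transfer idea the abstract describes.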
Energy and Uncertainty in General Relativity
Cooperstock, F. I.; Dupre, M. J.
2018-03-01
The issue of energy and its potential localizability in general relativity has challenged physicists for more than a century. Many non-invariant measures were proposed over the years but an invariant measure was never found. We discovered the invariant localized energy measure by expanding the domain of investigation from space to spacetime. We note from relativity that the finiteness of the velocity of propagation of interactions necessarily induces indefiniteness in measurements. This is because the elements of actual physical systems being measured as well as their detectors are characterized by entire four-velocity fields, which necessarily leads to information from a measured system being processed by the detector in a spread of time. General relativity adds additional indefiniteness because of the variation in proper time between elements. The uncertainty is encapsulated in a generalized uncertainty principle, in parallel with that of Heisenberg, which incorporates the localized contribution of gravity to energy. This naturally leads to a generalized uncertainty principle for momentum as well. These generalized forms and the gravitational contribution to localized energy would be expected to be of particular importance in the regimes of ultra-strong gravitational fields. We contrast our invariant spacetime energy measure with the standard 3-space energy measure which is familiar from special relativity, appreciating why general relativity demands a measure in spacetime as opposed to 3-space. We illustrate the misconceptions by certain authors of our approach.
Evidential Model Validation under Epistemic Uncertainty
Directory of Open Access Journals (Sweden)
Wei Deng
2018-01-01
Full Text Available This paper proposes evidence theory based methods to both quantify epistemic uncertainty and validate computational models. Three types of epistemic uncertainty concerning input model data, that is, sparse points, intervals, and probability distributions with uncertain parameters, are considered. Through the proposed methods, the given data are described as corresponding probability distributions for uncertainty propagation in the computational model and thus for model validation. The proposed evidential model validation method is inspired by the idea of Bayesian hypothesis testing and the Bayes factor, which compares the model predictions with the observed experimental data so as to assess the predictive capability of the model and support the decision on model acceptance. Following the idea of the Bayes factor, the frame of discernment of Dempster-Shafer evidence theory is constituted and the basic probability assignment (BPA) is determined. Because the proposed validation method is evidence based, the robustness of the result can be guaranteed, and the most evidence-supported hypothesis about the model testing will be favored by the BPA. The validity of the proposed methods is illustrated through a numerical example.
Propagation of Ion Acoustic Perturbations
DEFF Research Database (Denmark)
Pécseli, Hans
1975-01-01
Equations describing the propagation of ion acoustic perturbations are considered, using the assumption that the electrons are Boltzmann distributed and isothermal at all times. Quasi-neutrality is also considered.
Uncertainty analysis methods for estimation of reliability of passive system of VHTR
International Nuclear Information System (INIS)
Han, S.J.
2012-01-01
An estimation of the reliability of passive systems for the probabilistic safety assessment (PSA) of a very high temperature reactor (VHTR) is under development in Korea. The essential approach of this estimation is to measure the uncertainty of the system performance under a specific accident condition. The uncertainty propagation approach based on simulation of phenomenological models (computer codes) is adopted as a typical method to estimate the uncertainty for this purpose. This presentation introduced uncertainty propagation and discussed the related issues, focusing on the propagation object and its surrogates. To achieve a sufficient level of depth in the uncertainty results, the applicability of the propagation should be carefully reviewed. As an example study, the Latin hypercube sampling (LHS) method was tested as a direct propagation technique for a specific accident sequence of the VHTR. The reactor cavity cooling system (RCCS) developed by KAERI was considered for this example study. This is an air-cooled passive system that has no active components for its operation. The accident sequence is a low pressure conduction cooling (LPCC) accident that is considered a design basis accident for the safety design of the VHTR. This sequence is due to a large failure of the pressure boundary of the reactor system, such as a guillotine break of coolant pipe lines. The presentation discussed the insights obtained (benefits and weaknesses) in applying an estimation of the reliability of a passive system.
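The Latin hypercube sampling used for direct propagation in the study above can be sketched as follows; this is a generic stratified-sampling routine on the unit hypercube, not KAERI's code, and the sample counts are arbitrary.

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, seed=None):
    """Stratified uniform samples on [0, 1): one point per equal-probability
    bin in each dimension, with bins paired at random across dimensions."""
    rng = np.random.default_rng(seed)
    # One draw per stratum i: a point in [i/n, (i+1)/n) for every dimension.
    u = (rng.random((n_samples, n_dims)) + np.arange(n_samples)[:, None]) / n_samples
    # Independently permute each column so strata are randomly paired.
    for j in range(n_dims):
        u[:, j] = u[rng.permutation(n_samples), j]
    return u
```

Each column covers all strata exactly once, so far fewer model runs are needed than with plain Monte Carlo to cover the input ranges; the unit-cube samples would then be mapped through the inverse CDFs of the uncertain code inputs before each deterministic simulation.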
Environmental adversity and uncertainty favour cooperation.
Andras, Peter; Lazarus, John; Roberts, Gilbert
2007-11-30
A major cornerstone of evolutionary biology theory is the explanation of the emergence of cooperation in communities of selfish individuals. There is an unexplained tendency in the plant and animal world - with examples from alpine plants, worms, fish, mole-rats, monkeys and humans - for cooperation to flourish where the environment is more adverse (harsher) or more unpredictable. Using mathematical arguments and computer simulations we show that in more adverse environments individuals perceive their resources to be more unpredictable, and that this unpredictability favours cooperation. First we show analytically that in a more adverse environment the individual experiences greater perceived uncertainty. Second we show through a simulation study that more perceived uncertainty implies higher level of cooperation in communities of selfish individuals. This study captures the essential features of the natural examples: the positive impact of resource adversity or uncertainty on cooperation. These newly discovered connections between environmental adversity, uncertainty and cooperation help to explain the emergence and evolution of cooperation in animal and human societies.
Environmental adversity and uncertainty favour cooperation
Directory of Open Access Journals (Sweden)
Lazarus John
2007-11-01
Full Text Available Abstract Background A major cornerstone of evolutionary biology theory is the explanation of the emergence of cooperation in communities of selfish individuals. There is an unexplained tendency in the plant and animal world – with examples from alpine plants, worms, fish, mole-rats, monkeys and humans – for cooperation to flourish where the environment is more adverse (harsher) or more unpredictable. Results Using mathematical arguments and computer simulations we show that in more adverse environments individuals perceive their resources to be more unpredictable, and that this unpredictability favours cooperation. First we show analytically that in a more adverse environment the individual experiences greater perceived uncertainty. Second we show through a simulation study that more perceived uncertainty implies higher level of cooperation in communities of selfish individuals. Conclusion This study captures the essential features of the natural examples: the positive impact of resource adversity or uncertainty on cooperation. These newly discovered connections between environmental adversity, uncertainty and cooperation help to explain the emergence and evolution of cooperation in animal and human societies.
Arnold, Dan; Demyanov, Vasily; Christie, Mike; Bakay, Alexander; Gopa, Konstantin
2016-10-01
Assessing the change in uncertainty in reservoir production forecasts over field lifetime is rarely undertaken because of the complexity of joining together the individual workflows. This becomes particularly important in complex fields such as naturally fractured reservoirs. The impact of this problem has been identified in previous studies, and many solutions have been proposed but never implemented on complex reservoir problems due to the computational cost of quantifying uncertainty and optimising the reservoir development, specifically knowing how many and what kind of simulations to run. This paper demonstrates a workflow that propagates uncertainty throughout field lifetime, and into the decision making process by a combination of a metric-based approach, multi-objective optimisation and Bayesian estimation of uncertainty. The workflow propagates uncertainty estimates from appraisal into initial development optimisation, then updates uncertainty through history matching and finally propagates it into late-life optimisation. The combination of techniques applied, namely the metric approach and multi-objective optimisation, helps evaluate development options under uncertainty. This was achieved with a significantly reduced number of flow simulations, such that the combined workflow is computationally feasible to run for a real-field problem. This workflow is applied to two synthetic naturally fractured reservoir (NFR) case studies in appraisal, field development, history matching and mid-life EOR stages. The first is a simple sector model, while the second is a more complex full field example based on a real life analogue. This study infers geological uncertainty from an ensemble of models that are based on the carbonate Brazilian outcrop which are propagated through the field lifetime, before and after the start of production, with the inclusion of production data significantly collapsing the spread of P10-P90 in reservoir forecasts. The workflow links uncertainty
Propagation Engineering in Wireless Communications
Ghasemi, Abdollah; Ghasemi, Farshid
2012-01-01
Wireless communications has seen explosive growth in recent decades, in a realm that is both broad and rapidly expanding to include satellite services, navigational aids, remote sensing, telemetering, audio and video broadcasting, high-speed data communications, mobile radio systems and much more. Propagation Engineering in Wireless Communications deals with the basic principles of radiowaves propagation for frequency bands used in radio-communications, offering descriptions of new achievements and newly developed propagation models. The book bridges the gap between theoretical calculations and approaches, and applied procedures needed for advanced radio links design. The primary objective of this two-volume set is to demonstrate the fundamentals, and to introduce propagation phenomena and mechanisms that engineers are likely to encounter in the design and evaluation of radio links of a given type and operating frequency. Volume one covers basic principles, along with tropospheric and ionospheric propagation,...
Dressing the nucleon propagator
International Nuclear Information System (INIS)
Fishman, S.; Gersten, A.
1976-01-01
The nucleon propagator in the ''nested bubbles'' approximation is analyzed. The approximation is built from the minimal set of diagrams which is needed to maintain the unitarity condition under two-pion production threshold in the two-nucleon Bethe--Salpeter equation. Recursive formulas for subsets of ''nested bubbles'' diagrams calculated in the framework of the pseudoscalar interaction are obtained by the use of dispersion relations. We prove that the sum of all the ''nested bubbles'' diverges. Moreover, the successive iterations are plagued with ghost poles. We prove that the first approximation--which is the so-called chain approximation--has ghost poles for any nonvanishing coupling constant. In an earlier paper we have shown that ghost poles lead to ghost cuts. These cuts are present in the ''nested bubbles.'' Ghost elimination procedures are discussed. Modifications of the ''nested bubbles'' approximation are introduced in order to obtain convergence and in order to eliminate the ghost poles and ghost cuts. In a similar way as in the Lee model, cutoff functions are introduced in order to eliminate the ghost poles. The necessary and sufficient conditions for the absence of ghost poles are formulated and analyzed. The spectral functions of the modified ''nested bubbles'' are analyzed and computed. Finally, we present a theorem, similar in its form to Levinson's theorem in scattering theory, which enables one to compute in a simple way the number of ghost poles
Transionospheric propagation predictions
Klobucher, J. A.; Basu, S.; Basu, S.; Bernhardt, P. A.; Davies, K.; Donatelli, D. E.; Fremouw, E. J.; Goodman, J. M.; Hartmann, G. K.; Leitinger, R.
1979-01-01
The current status and future prospects of the capability to make transionospheric propagation predictions are addressed, highlighting the effects of the ionized media, which dominate for frequencies below 1 to 3 GHz, depending upon the state of the ionosphere and the elevation angle through the Earth-space path. The primary concerns are the predictions of time delay of signal modulation (group path delay) and of radio wave scintillation. Progress in these areas is strongly tied to knowledge of variable structures in the ionosphere ranging from the large scale (thousands of kilometers in horizontal extent) to the fine scale (kilometer size). Ionospheric variability and the relative importance of various mechanisms responsible for the time histories observed in total electron content (TEC), proportional to signal group delay, and in irregularity formation are discussed in terms of capability to make both short and long term predictions. The data base upon which predictions are made is examined for its adequacy, and the prospects for prediction improvements by more theoretical studies as well as by increasing the available statistical data base are examined.
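The proportionality between total electron content (TEC) and signal group delay that the abstract relies on can be illustrated with the standard first-order ionospheric delay formula, delay = 40.3 * TEC / (c * f^2); the TEC value and carrier frequency below are example inputs, not data from the paper.

```python
C = 299_792_458.0  # speed of light, m/s

def group_delay_seconds(tec_el_per_m2, freq_hz):
    """First-order ionospheric group delay for a signal of frequency f:
    delay = 40.3 * TEC / (c * f^2), with TEC in electrons/m^2."""
    return 40.3 * tec_el_per_m2 / (C * freq_hz ** 2)

# Example: 50 TECU (5e17 el/m^2) at the GPS L1 frequency, 1.57542 GHz.
delay = group_delay_seconds(5e17, 1.57542e9)
excess_path_m = delay * C  # equivalent excess range, ~8 m here
```

The inverse-square frequency dependence is why the ionized media dominate below a few GHz, as the abstract notes, and why dual-frequency measurements can estimate TEC directly.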
Cost uncertainty for different levels of technology maturity
International Nuclear Information System (INIS)
DeMuth, S.F.; Franklin, A.L.
1996-01-01
It is difficult at best to apply a single methodology for estimating cost uncertainties related to technologies of differing maturity. While highly mature technologies may have significant performance and manufacturing cost data available, less well developed technologies may be defined in only conceptual terms. Regardless of the degree of technical maturity, often a cost estimate relating to application of the technology may be required to justify continued funding for development. Yet, a cost estimate without its associated uncertainty lacks the information required to assess the economic risk. For this reason, it is important for the developer to provide some type of uncertainty along with a cost estimate. This study demonstrates how different methodologies for estimating uncertainties can be applied to cost estimates for technologies of different maturities. For a less well developed technology an uncertainty analysis of the cost estimate can be based on a sensitivity analysis; whereas, an uncertainty analysis of the cost estimate for a well developed technology can be based on an error propagation technique from classical statistics. It was decided to demonstrate these uncertainty estimation techniques with (1) an investigation of the additional cost of remediation due to beyond baseline, nearly complete, waste heel retrieval from underground storage tanks (USTs) at Hanford; and (2) the cost related to the use of crystalline silico-titanate (CST) rather than the baseline CS100 ion exchange resin for cesium separation from UST waste at Hanford
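The classical error propagation technique mentioned above for mature technologies can be sketched for the simplest case, a cost formed as a product of two uncertain quantities; the numbers are hypothetical, not the Hanford estimates.

```python
import math

def propagate_product(a, ua, b, ub):
    """First-order (Taylor) error propagation for c = a * b with independent
    inputs: (uc/c)^2 = (ua/a)^2 + (ub/b)^2."""
    c = a * b
    uc = abs(c) * math.sqrt((ua / a) ** 2 + (ub / b) ** 2)
    return c, uc

# Hypothetical mature-technology estimate: quantity * unit cost,
# each with its own standard uncertainty.
cost, cost_unc = propagate_product(10.0, 1.0, 4.0, 0.4)
```

For a less mature technology, where input uncertainties themselves are poorly known, the sensitivity-analysis alternative described in the abstract would vary each input over a plausible range instead of combining standard uncertainties this way.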
A Framework for Understanding Uncertainty in Seismic Risk Assessment.
Foulser-Piggott, Roxane; Bowman, Gary; Hughes, Martin
2017-10-11
A better understanding of the uncertainty that exists in models used for seismic risk assessment is critical to improving risk-based decisions pertaining to earthquake safety. Current models estimating the probability of collapse of a building do not consider comprehensively the nature and impact of uncertainty. This article presents a model framework to enhance seismic risk assessment and thus gives decisionmakers a fuller understanding of the nature and limitations of the estimates. This can help ensure that risks are not over- or underestimated and the value of acquiring accurate data is appreciated fully. The methodology presented provides a novel treatment of uncertainties in input variables, their propagation through the model, and their effect on the results. The study presents ranges of possible annual collapse probabilities for different case studies on buildings in different parts of the world, exposed to different levels of seismicity, and with different vulnerabilities. A global sensitivity analysis was conducted to determine the significance of uncertain variables. Two key outcomes are (1) that the uncertainty in ground-motion conversion equations has the largest effect on the uncertainty in the calculation of annual collapse probability; and (2) the vulnerability of a building appears to have an effect on the range of annual collapse probabilities produced, i.e., the level of uncertainty in the estimate of annual collapse probability, with less vulnerable buildings having a smaller uncertainty. © 2017 Society for Risk Analysis.
Assessing predictive uncertainty in comparative toxicity potentials of triazoles.
Golsteijn, Laura; Iqbal, M Sarfraz; Cassani, Stefano; Hendriks, Harrie W M; Kovarich, Simona; Papa, Ester; Rorije, Emiel; Sahlin, Ullrika; Huijbregts, Mark A J
2014-02-01
Comparative toxicity potentials (CTPs) quantify the potential ecotoxicological impacts of chemicals per unit of emission. They are the product of a substance's environmental fate, exposure, and hazardous concentration. When empirical data are lacking, substance properties can be predicted. The goal of the present study was to assess the influence of predictive uncertainty in substance property predictions on the CTPs of triazoles. Physicochemical and toxic properties were predicted with quantitative structure-activity relationships (QSARs), and uncertainty in the predictions was quantified with use of the data underlying the QSARs. Degradation half-lives were based on a probability distribution representing experimental half-lives of triazoles. Uncertainty related to the species' sample size that was present in the prediction of the hazardous aquatic concentration was also included. All parameter uncertainties were treated as probability distributions, and propagated by Monte Carlo simulations. The 90% confidence interval of the CTPs typically spanned nearly 4 orders of magnitude. The CTP uncertainty was mainly determined by uncertainty in soil sorption and soil degradation rates, together with the small number of species sampled. In contrast, uncertainty in species-specific toxicity predictions contributed relatively little. The findings imply that the reliability of CTP predictions for the chemicals studied can be improved particularly by including experimental data for soil sorption and soil degradation, and by developing toxicity QSARs for more species. © 2013 SETAC.
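A minimal Monte Carlo sketch of the parameter-uncertainty propagation described above is given below; all distribution parameters are hypothetical placeholders, not the triazole data, and the CTP is formed as the product of fate, exposure and effect factors as in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
# Hypothetical lognormal inputs (medians and geometric SDs are assumptions):
fate = rng.lognormal(mean=np.log(100.0), sigma=np.log(3.0), size=n)    # persistence
exposure = rng.lognormal(mean=np.log(1e-3), sigma=np.log(2.0), size=n)
effect = rng.lognormal(mean=np.log(50.0), sigma=np.log(4.0), size=n)   # hazard term

ctp = fate * exposure * effect           # CTP = fate * exposure * effect
lo5, hi95 = np.percentile(ctp, [5, 95])  # 90% confidence interval
span_orders = float(np.log10(hi95 / lo5))
```

With geometric SDs of this size the 90% interval already spans several orders of magnitude, mirroring the roughly four orders reported in the study, and the component with the largest geometric SD dominates the spread, which is the logic behind prioritizing experimental data for soil sorption and degradation.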