Analytical Propagation of Uncertainty in Life Cycle Assessment Using Matrix Formulation
DEFF Research Database (Denmark)
Imbeault-Tétreault, Hugues; Jolliet, Olivier; Deschênes, Louise;
2013-01-01
Inventory data and characterization factors in life cycle assessment (LCA) contain considerable uncertainty. The most common method of parameter uncertainty propagation to the impact scores is Monte Carlo simulation, which remains a resource-intensive option—probably one of the reasons why...... uncertainty assessment is not a regular step in LCA. An analytical approach based on Taylor series expansion constitutes an effective means to overcome the drawbacks of the Monte Carlo method. This project aimed to test the approach on a real case study, and the resulting analytical uncertainty was compared...... of the output uncertainty. Moreover, the sensitivity analysis reveals that the uncertainty of the most sensitive input parameters was not initially considered in the case study. The uncertainty analysis of the comparison of two scenarios is a useful means of highlighting the effects of correlation...
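For a pure product/quotient model with lognormal parameters, the first-order Taylor series propagation described above reduces to summing squared log-GSDs. The sketch below illustrates this and cross-checks it against Monte Carlo; the inventory structure and GSD values are invented for illustration and are not taken from the case study.

```python
import math, random

# Hypothetical inventory: impact = emission * factor / efficiency, each
# lognormal with a geometric standard deviation (GSD); values are invented.
gsd = {"emission": 1.3, "factor": 2.0, "efficiency": 1.1}

# Analytical propagation (first-order Taylor on the log scale, exact here):
# ln(GSD_out)^2 = sum_i ln(GSD_i)^2 for a pure product/quotient model.
ln_var = sum(math.log(g) ** 2 for g in gsd.values())
gsd_analytical = math.exp(math.sqrt(ln_var))

# Monte Carlo cross-check
random.seed(0)
n = 20000
logs = []
for _ in range(n):
    e = random.lognormvariate(0.0, math.log(gsd["emission"]))
    f = random.lognormvariate(0.0, math.log(gsd["factor"]))
    eff = random.lognormvariate(0.0, math.log(gsd["efficiency"]))
    logs.append(math.log(e * f / eff))
m = sum(logs) / n
sd = math.sqrt(sum((x - m) ** 2 for x in logs) / (n - 1))
gsd_mc = math.exp(sd)
```

The analytical result is obtained in closed form at negligible cost, which is precisely the advantage over resource-intensive Monte Carlo simulation noted in the abstract.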
DEFF Research Database (Denmark)
Hong, Jinglan; Shaked, Shanna; Rosenbaum, Ralph K.
2010-01-01
to develop and apply to both inventory and impact assessment an explicit and transparent analytical approach to uncertainty. This approach applies Taylor series expansions to the uncertainty propagation of lognormally distributed parameters. Materials and methods We first apply the Taylor series expansion...... determine a range and a best estimate of a) the squared geometric standard deviation on the ratio of the two scenario scores, "A/B", and b) the degree of confidence in the prediction that the impact of scenario A is lower than B (i.e., the probability that A/B < 1 is greater than 75%). For the aluminum panel, the electricity......, we obtained good concordance between the Monte Carlo and the Taylor series expansion methods regarding the probability that one scenario is better than the other. Discussion The Taylor series expansion method addresses the crucial need of accounting for dependencies in LCA, both for common LCI...

Sciacchitano, Andrea; Wieneke, Bernhard
2016-08-01
This paper discusses the propagation of the instantaneous uncertainty of PIV measurements to statistical and instantaneous quantities of interest derived from the velocity field. The expression of the uncertainty of vorticity, velocity divergence, mean value and Reynolds stresses is derived. It is shown that the uncertainty of vorticity and velocity divergence requires the knowledge of the spatial correlation between the error of the x and y particle image displacement, which depends upon the measurement spatial resolution. The uncertainty of statistical quantities is often dominated by the random uncertainty due to the finite sample size and decreases with the square root of the effective number of independent samples. Monte Carlo simulations are conducted to assess the accuracy of the uncertainty propagation formulae. Furthermore, three experimental assessments are carried out. In the first experiment, a turntable is used to simulate a rigid rotation flow field. The estimated uncertainty of the vorticity is compared with the actual vorticity error root-mean-square, with differences between the two quantities within 5-10% for different interrogation window sizes and overlap factors. A turbulent jet flow is investigated in the second experimental assessment. The reference velocity, which is used to compute the reference value of the instantaneous flow properties of interest, is obtained with an auxiliary PIV system, which features a higher dynamic range than the measurement system. Finally, the uncertainty quantification of statistical quantities is assessed via PIV measurements in a cavity flow. The comparison between estimated uncertainty and actual error demonstrates the accuracy of the proposed uncertainty propagation methodology.
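The vorticity uncertainty discussed above depends on the spatial correlation of the displacement errors entering the central-difference stencil. The sketch below checks the resulting first-order formula against sampled correlated errors; the values of sigma, delta and rho are illustrative, not taken from the paper.

```python
import numpy as np

# Central-difference vorticity: omega = dv/dx - du/dy. With displacement
# uncertainty sigma and error correlation rho between the two stencil points
# (separation 2*delta), first-order propagation gives
#   var(dv/dx) = sigma^2 * (1 - rho) / (2 * delta^2)
# All numbers below are illustrative, not taken from the paper.
rng = np.random.default_rng(1)
sigma, delta, rho = 0.1, 1.0, 0.3

var_dvdx = sigma**2 * (1 - rho) / (2 * delta**2)
var_omega = 2 * var_dvdx          # u and v errors assumed independent

# Monte Carlo check with correlated error pairs at the stencil points
cov = sigma**2 * np.array([[1.0, rho], [rho, 1.0]])
n = 200000
ev = rng.multivariate_normal([0.0, 0.0], cov, size=n)   # v errors at x +/- delta
eu = rng.multivariate_normal([0.0, 0.0], cov, size=n)   # u errors at y +/- delta
omega_err = (ev[:, 0] - ev[:, 1] - eu[:, 0] + eu[:, 1]) / (2 * delta)
var_mc = omega_err.var()
```

Note how positive correlation (rho > 0) reduces the derivative uncertainty, since correlated error components cancel in the difference.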
Visualizing Flow of Uncertainty through Analytical Processes.
Wu, Yingcai; Yuan, Guo-Xun; Ma, Kwan-Liu
2012-12-01
Uncertainty can arise in any stage of a visual analytics process, especially in data-intensive applications with a sequence of data transformations. Additionally, throughout the process of multidimensional, multivariate data analysis, uncertainty due to data transformation and integration may split, merge, increase, or decrease. This dynamic characteristic along with other features of uncertainty pose a great challenge to effective uncertainty-aware visualization. This paper presents a new framework for modeling uncertainty and characterizing the evolution of the uncertainty information through analytical processes. Based on the framework, we have designed a visual metaphor called uncertainty flow to visually and intuitively summarize how uncertainty information propagates over the whole analysis pipeline. Our system allows analysts to interact with and analyze the uncertainty information at different levels of detail. Three experiments were conducted to demonstrate the effectiveness and intuitiveness of our design.
Verma, Mahendra P.
2012-11-01
Quartz solubility geothermometry for calculating geothermal reservoir temperature and vapor fraction, with multivariate analytical uncertainty propagation, is programmed as two classes, SiO2TD and QrtzGeotherm, in Visual Basic in Visual Studio 2010 (VB.NET). The class SiO2TD calculates the total discharge concentration SiO2TD and its uncertainty SiO2TDErr from the analytical silica concentration SiO2msd and its uncertainty SiO2msdErr of separated water, sampled after N separations of vapor and liquid. The class QrtzGeotherm uses the following properties as input parameters: (i) HRes - reservoir enthalpy (kJ/kg), (ii) HResErr - uncertainty in the reservoir enthalpy (kJ/kg), (iii) SiO2TD - total discharge silica concentration (ppm), (iv) SiO2TDErr - uncertainty in the total discharge silica concentration (ppm), (v) GeoEq - number of the quartz solubility regression equation, (vi) TempGuess - a guess value of the reservoir temperature (°C). The properties corresponding to the output parameters are (i) TempRes - reservoir temperature (K), (ii) TempResErr - uncertainty in the reservoir temperature (K), (iii) VaporRes - reservoir vapor fraction and (iv) VaporResErr - uncertainty in the reservoir vapor fraction. Similarly, it has a method, SiO2Eqn(EqNo, Temp), that provides the silica solubility as a function of temperature for the corresponding regression equation. Four quartz solubility equations along the liquid-vapor saturation curve are programmed: (i) a quadratic equation of 1/T and pressure, (ii) a linear equation relating log SiO2 to the inverse of absolute temperature (T), (iii) a polynomial of T including logarithmic terms and (iv) temperature as a polynomial of SiO2 including logarithmic terms. A demonstration program, QGeotherm, is written in VB.NET. Similarly, the applicability of the classes SiO2TD and QrtzGeotherm in MS-Excel is illustrated using the Los Azufres geothermal field as an example.
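A minimal Python sketch of the kind of calculation these classes perform is shown below. It uses a classic Fournier-type quartz solubility equation and first-order propagation of the silica uncertainty; the equation and all input values are assumptions for illustration, not one of the four regression equations programmed in the paper.

```python
import math

def quartz_temp(sio2_ppm):
    """Classic Fournier-type quartz geothermometer (no steam loss), deg C.
    Illustrative only; not one of the paper's four regression equations."""
    return 1309.0 / (5.19 - math.log10(sio2_ppm)) - 273.15

def quartz_temp_err(sio2_ppm, sio2_err):
    """First-order propagation: sigma_T = |dT/dSiO2| * sigma_SiO2."""
    h = 1e-4 * sio2_ppm                       # step for numerical derivative
    dTdS = (quartz_temp(sio2_ppm + h) - quartz_temp(sio2_ppm - h)) / (2 * h)
    return abs(dTdS) * sio2_err

T = quartz_temp(500.0)                  # assumed total-discharge SiO2 (ppm)
T_err = quartz_temp_err(500.0, 25.0)    # assumed +/- 25 ppm analytical error
```

For 500 ppm silica this yields a reservoir temperature near 250 °C with an uncertainty of a few degrees, illustrating how an analytical concentration error maps into a temperature error.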
Uncertainty propagation with functionally correlated quantities
Giordano, Mosè
2016-01-01
Many uncertainty propagation software packages exist, written in different programming languages, but not all of them can handle functional correlation between quantities. In this paper we review one strategy to deal with uncertainty propagation of quantities that are functionally correlated, and introduce a new software package offering this feature: the Julia package Measurements.jl. It supports real and complex numbers with uncertainty, arbitrary-precision calculations, and mathematical and linear algebra operations with matrices and arrays.
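The benefit of handling functional correlation can be sketched in a few lines: if each quantity carries its derivatives with respect to the underlying independent inputs, an expression such as x - x comes out as exactly 0 ± 0, whereas treating the two operands as independent inflates the error. This is a toy analogue of the idea behind Measurements.jl, not that package's actual API.

```python
import math

# Toy linear error propagation that tracks each value's derivatives with
# respect to the underlying independent inputs, so functionally correlated
# quantities cancel exactly. Mimics the idea behind Measurements.jl only.
class U:
    _ids = 0
    def __init__(self, val, err=0.0, deriv=None):
        self.val = val
        if deriv is None:
            U._ids += 1
            deriv = {U._ids: err}   # sigma-scaled derivative w.r.t. this input
        self.deriv = deriv
    @property
    def err(self):
        return math.sqrt(sum(d * d for d in self.deriv.values()))
    def _lift(self, o):
        return o if isinstance(o, U) else U(o)
    def _combine(self, o, val, da, db):
        d = {k: da * self.deriv.get(k, 0.0) + db * o.deriv.get(k, 0.0)
             for k in set(self.deriv) | set(o.deriv)}
        return U(val, deriv=d)
    def __add__(self, o):
        o = self._lift(o); return self._combine(o, self.val + o.val, 1.0, 1.0)
    def __sub__(self, o):
        o = self._lift(o); return self._combine(o, self.val - o.val, 1.0, -1.0)
    def __mul__(self, o):
        o = self._lift(o); return self._combine(o, self.val * o.val, o.val, self.val)

x = U(3.0, 0.2)
zero = x - x                          # functionally correlated: 0 +/- 0
indep = U(3.0, 0.2) - U(3.0, 0.2)     # independent inputs: 0 +/- 0.2*sqrt(2)
```

A package that ignores functional correlation would report 0 ± 0.28 for both expressions, which is wrong for the first one.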
Stochastic and epistemic uncertainty propagation in LCA
DEFF Research Database (Denmark)
Clavreul, Julie; Guyonnet, Dominique; Tonini, Davide
2013-01-01
When performing uncertainty propagation, most LCA practitioners choose to represent uncertainties by single probability distributions and to propagate them using stochastic methods. However, the selection of single probability distributions appears often arbitrary when faced with scarce information...... or expert judgement (epistemic uncertainty). The possibility theory has been developed over the last decades to address this problem. The objective of this study is to present a methodology that combines probability and possibility theories to represent stochastic and epistemic uncertainties in a consistent...... of epistemic uncertainty representation using fuzzy intervals. The propagation methods used are the Monte Carlo analysis for probability distribution and an optimisation on alpha-cuts for fuzzy intervals. The proposed method (noted as Independent Random Set, IRS) generalizes the process of random sampling...
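The possibilistic side of such a combined method can be sketched with alpha-cuts of a fuzzy interval: for each membership level alpha, the model is minimised and maximised over the corresponding input interval. The triangular fuzzy number and the model below are placeholders, not values from the study.

```python
import numpy as np

# Epistemic input represented as a triangular fuzzy number (min, mode, max);
# the numbers and the model are placeholders, not values from the study.
lo, mode, hi = 2.0, 5.0, 9.0

def alpha_cut(alpha):
    """Interval of input values whose membership is >= alpha."""
    return lo + alpha * (mode - lo), hi - alpha * (hi - mode)

def model(x):
    return x ** 2 + 1.0               # stand-in for the LCA model

# Optimisation on alpha-cuts: min/max of the model over each input interval
cuts = []
for alpha in np.linspace(0.0, 1.0, 11):
    xlo, xhi = alpha_cut(alpha)
    y = model(np.linspace(xlo, xhi, 201))   # brute-force interval optimisation
    cuts.append((alpha, y.min(), y.max()))
# cuts[i] is the fuzzy interval of the output at membership level alpha_i
```

The output intervals are nested (wider at low alpha, collapsing to the core at alpha = 1), which is the characteristic shape of a propagated fuzzy number.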
Uncertainty propagation within the UNEDF models
Haverinen, T
2016-01-01
The parameters of the nuclear energy density functional have to be adjusted to experimental data. As a result they carry a certain uncertainty which then propagates to calculated values of observables. In the present work we quantify the statistical uncertainties on binding energies for three UNEDF Skyrme energy density functionals by taking advantage of the knowledge of the model parameter uncertainties. We find that the uncertainty of the UNEDF models increases rapidly when going towards proton- or neutron-rich nuclei. We also investigate the impact of each model parameter on the total error budget.
Uncertainty propagation within the UNEDF models
Haverinen, T.; Kortelainen, M.
2017-04-01
The parameters of the nuclear energy density functional have to be adjusted to experimental data. As a result they carry a certain uncertainty which then propagates to calculated values of observables. In the present work we quantify the statistical uncertainties of binding energies, proton quadrupole moments and proton matter radii for three UNEDF Skyrme energy density functionals by taking advantage of the knowledge of the model parameter uncertainties. We find that the uncertainty of the UNEDF models increases rapidly when going towards proton- or neutron-rich nuclei. We also investigate the impact of each model parameter on the total error budget.
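The statistical propagation described in these two abstracts is, to first order, var(O) = gᵀ C g, with g the vector of parameter sensitivities of the observable and C the parameter covariance matrix; the per-parameter products g_i (C g)_i give the error budget. The toy model and covariance below are illustrative, not actual UNEDF values.

```python
import numpy as np

# Toy 3-parameter "observable" standing in for a binding energy; the model,
# sensitivities and covariance are illustrative, not actual UNEDF values.
def observable(p):
    return 100.0 * p[0] - 3.0 * p[1] + 0.5 * p[0] * p[2]

p0 = np.array([1.0, 2.0, 4.0])
C = np.array([[0.04, 0.01, 0.00],      # model parameter covariance matrix
              [0.01, 0.09, 0.02],
              [0.00, 0.02, 0.25]])

# Sensitivity vector g_i = dO/dp_i by central finite differences
g = np.zeros(3)
for i in range(3):
    dp = np.zeros(3)
    dp[i] = 1e-6
    g[i] = (observable(p0 + dp) - observable(p0 - dp)) / 2e-6

var_obs = g @ C @ g            # first-order propagated variance of O
contrib = g * (C @ g)          # per-parameter share of the error budget
```

The entries of `contrib` sum to `var_obs`, so ranking them identifies which parameter dominates the total error budget, as done in the papers.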
Uncertainty Propagation for Terrestrial Mobile Laser Scanner
Mezian, c.; Vallet, Bruno; Soheilian, Bahman; Paparoditis, Nicolas
2016-06-01
Laser scanners are used more and more in mobile mapping systems. They provide 3D point clouds that are used for object reconstruction and for registration of the system. For both of these applications, uncertainty analysis of the 3D points is of great interest but is rarely investigated in the literature. In this paper we present a complete pipeline that takes into account all the sources of uncertainty and allows computing a covariance matrix per 3D point. The sources of uncertainty are the laser scanner itself, the calibration of the scanner with respect to the vehicle, and the direct georeferencing system. We assume that all uncertainties are Gaussian. The variances of the laser scanner measurements (two angles and one distance) are usually provided by the manufacturers. This is also the case for integrated direct georeferencing devices. Residuals of the calibration process were used to estimate the covariance matrix of the 6D transformation between the laser scanner and the vehicle frame. Knowing the variances of all sources of uncertainty, we applied the uncertainty propagation technique to compute the variance-covariance matrix of every obtained 3D point. Such an uncertainty analysis makes it possible to estimate the impact of different laser scanners and georeferencing devices on the quality of the obtained 3D points. The obtained uncertainty values are illustrated using error ellipsoids on different datasets.
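For a single laser return parameterised by one range and two angles, the propagation step amounts to Sigma_xyz = J Sigma_meas Jᵀ, with J the Jacobian of the spherical-to-Cartesian conversion. The sketch below shows this for one point with assumed variances; the full pipeline would additionally compose the calibration and georeferencing covariances.

```python
import numpy as np

# One laser return measured as (range r, angles theta, phi), independent
# Gaussian errors; the numbers are illustrative. The full pipeline would
# also compose the calibration and georeferencing covariances.
m0 = np.array([30.0, 0.1, 0.8])                   # r (m), theta, phi (rad)
sig = np.diag([0.02**2, 1e-4**2, 1e-4**2])        # variances of (r, theta, phi)

def to_xyz(m):
    r, t, p = m
    return np.array([r * np.cos(t) * np.cos(p),
                     r * np.cos(t) * np.sin(p),
                     r * np.sin(t)])

# Jacobian of the spherical-to-Cartesian conversion by central differences
J = np.zeros((3, 3))
for i in range(3):
    h = np.zeros(3)
    h[i] = 1e-6
    J[:, i] = (to_xyz(m0 + h) - to_xyz(m0 - h)) / 2e-6

cov_xyz = J @ sig @ J.T        # 3x3 covariance of the Cartesian point
```

The eigenvectors and eigenvalues of `cov_xyz` give exactly the error ellipsoid mentioned in the abstract; note how the angular variances enter scaled by the squared range, so distant points are much more uncertain laterally.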
Quantification and Propagation of Nuclear Data Uncertainties
Rising, Michael E.
The use of several uncertainty quantification and propagation methodologies is investigated in the context of the prompt fission neutron spectrum (PFNS) uncertainties and their impact on critical reactor assemblies. First, the first-order, linear Kalman filter is used as a nuclear data evaluation and uncertainty quantification tool combining available PFNS experimental data and a modified version of the Los Alamos (LA) model. The experimental covariance matrices, not generally given in the EXFOR database, are computed using the GMA methodology used by the IAEA to establish more appropriate correlations within each experiment. Then, using systematics relating the LA model parameters across a suite of isotopes, the PFNS for both the uranium and plutonium actinides are evaluated, leading to a new evaluation including cross-isotope correlations. Next, an alternative evaluation approach, the unified Monte Carlo (UMC) method, is studied for the evaluation of the PFNS for the n(0.5 MeV)+Pu-239 fission reaction and compared to the Kalman filter. The UMC approach to nuclear data evaluation is implemented in a variety of ways to test convergence toward the Kalman filter results and to determine the nonlinearities present in the LA model. Ultimately, the UMC approach is shown to be comparable to the Kalman filter for a realistic data evaluation of the PFNS and is capable of capturing the nonlinearities present in the LA model. Next, the impact that the PFNS uncertainties have on important critical assemblies is investigated. Using the PFNS covariance matrices in the ENDF/B-VII.1 nuclear data library, the uncertainties of the effective multiplication factor, leakage, and spectral indices of the Lady Godiva and Jezebel critical assemblies are quantified. Principal component analysis of the PFNS covariance matrices shows that only 2-3 principal components are needed to retain the PFNS uncertainties. Then, using the polynomial chaos expansion (PCE) on the uncertain output
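The principal component step can be sketched as an eigendecomposition of the covariance matrix, keeping the leading components that retain a chosen fraction of the total variance. The matrix below is synthetic (rank 3 plus a small floor), not the ENDF/B-VII.1 PFNS covariance.

```python
import numpy as np

# Synthetic, strongly correlated covariance matrix (rank 3 plus a small
# floor) standing in for a PFNS covariance; not the ENDF/B-VII.1 data.
rng = np.random.default_rng(7)
n = 50
modes = rng.normal(size=(3, n))                     # 3 dominant modes
cov = modes.T @ np.diag([9.0, 4.0, 1.0]) @ modes + 1e-6 * np.eye(n)

eigvals = np.linalg.eigvalsh(cov)[::-1]             # descending eigenvalues
frac = np.cumsum(eigvals) / eigvals.sum()
n_comp = int(np.searchsorted(frac, 0.99) + 1)       # components for 99% variance
```

Because the synthetic matrix is dominated by three correlated modes, only a handful of components survive, mirroring the 2-3 components reported for the PFNS covariances.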
A review of uncertainty propagation in orbital mechanics
Luo, Ya-zhong; Yang, Zhen
2017-02-01
Orbital uncertainty propagation plays an important role in space situational awareness related missions such as tracking and data association, conjunction assessment, sensor resource management and anomaly detection. Linear models and Monte Carlo simulation were primarily used to propagate uncertainties. However, due to the nonlinear nature of orbital dynamics, problems such as low precision and intensive computation have greatly hampered the application of these methods. Aiming at solving these problems, many nonlinear uncertainty propagators have been proposed in the past two decades. To motivate this research area and facilitate the development of orbital uncertainty propagation, this paper summarizes the existing linear and nonlinear uncertainty propagators and their associated applications in the field of orbital mechanics. Frameworks of methods for orbital uncertainty propagation, the advantages and drawbacks of different methods, as well as potential directions for future efforts are also discussed.
Dynamic system uncertainty propagation using polynomial chaos
Institute of Scientific and Technical Information of China (English)
Xiong Fenfen; Chen Shishi; Xiong Ying
2014-01-01
The classic polynomial chaos method (PCM), characterized as an intrusive methodology, has been applied to uncertainty propagation (UP) in many dynamic systems. However, the intrusive polynomial chaos method (IPCM) requires tedious modification of the governing equations, which might introduce errors and can be impractical. As an alternative to IPCM, the non-intrusive polynomial chaos method (NIPCM), which avoids such modifications, has been developed. In spite of the frequent application to dynamic problems, almost all existing works on NIPCM for dynamic UP fail to elaborate the implementation process in a straightforward way, which is important to readers who are unfamiliar with the mathematics of polynomial chaos theory. Meanwhile, very few works have compared NIPCM to IPCM in terms of their merits and applicability. Therefore, the mathematical procedures of dynamic UP via both methods, considering parametric and initial condition uncertainties, are comparatively discussed and studied in the present paper. A comparison of accuracy and efficiency in statistical moment estimation is made by applying the two methods to several dynamic UP problems. The relative merits of both approaches are discussed and summarized. The detailed description and the insights gained with the two methods through this work are expected to be helpful to engineering designers in solving dynamic UP problems.
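A minimal non-intrusive PCE in the spirit described above: the unmodified solver is evaluated at Gauss quadrature nodes of the random parameter and projected onto Hermite polynomials, from which moments follow. The decaying linear system and its parameter distribution are assumptions for illustration, not taken from the paper.

```python
import math
import numpy as np

# Non-intrusive PCE for dx/dt = -k*x, x(0) = 1, with k = mu + sig*xi and
# xi ~ N(0,1); the output of interest is x(T). The system and parameter
# distribution are assumptions for illustration.
mu, sig, T = 1.0, 0.1, 1.0

def solve(k):
    return np.exp(-k * T)      # exact solution standing in for an ODE solver

# Evaluate the unmodified solver at probabilists' Gauss-Hermite nodes
nodes, weights = np.polynomial.hermite_e.hermegauss(8)
weights = weights / weights.sum()          # normalise to a probability measure
samples = solve(mu + sig * nodes)

# Spectral projection onto He_n, using <He_n^2> = n!
order = 4
coeffs = [float(np.sum(weights * samples *
                       np.polynomial.hermite_e.HermiteE.basis(k)(nodes)))
          / math.factorial(k)
          for k in range(order + 1)]

pce_mean = coeffs[0]
pce_var = sum(c * c * math.factorial(k) for k, c in enumerate(coeffs) if k > 0)
```

Since x(T) is lognormal here, the PCE moments can be checked against the closed-form lognormal mean and variance; the intrusive variant would instead expand the state inside the governing equation itself.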
Dynamic system uncertainty propagation using polynomial chaos
Directory of Open Access Journals (Sweden)
Xiong Fenfen
2014-10-01
The classic polynomial chaos method (PCM), characterized as an intrusive methodology, has been applied to uncertainty propagation (UP) in many dynamic systems. However, the intrusive polynomial chaos method (IPCM) requires tedious modification of the governing equations, which might introduce errors and can be impractical. As an alternative to IPCM, the non-intrusive polynomial chaos method (NIPCM), which avoids such modifications, has been developed. In spite of the frequent application to dynamic problems, almost all existing works on NIPCM for dynamic UP fail to elaborate the implementation process in a straightforward way, which is important to readers who are unfamiliar with the mathematics of polynomial chaos theory. Meanwhile, very few works have compared NIPCM to IPCM in terms of their merits and applicability. Therefore, the mathematical procedures of dynamic UP via both methods, considering parametric and initial condition uncertainties, are comparatively discussed and studied in the present paper. A comparison of accuracy and efficiency in statistical moment estimation is made by applying the two methods to several dynamic UP problems. The relative merits of both approaches are discussed and summarized. The detailed description and the insights gained with the two methods through this work are expected to be helpful to engineering designers in solving dynamic UP problems.
Towards a complete propagation of uncertainties in depletion calculations
Energy Technology Data Exchange (ETDEWEB)
Martinez, J.S. [Universidad Politecnica de Madrid (Spain). Dept. of Nuclear Engineering; Gesellschaft fuer Anlagen- und Reaktorsicherheit mbH (GRS), Garching (Germany); Zwermann, W.; Gallner, L.; Puente-Espel, Federico; Velkov, K.; Hannstein, V. [Gesellschaft fuer Anlagen- und Reaktorsicherheit mbH (GRS), Garching (Germany); Cabellos, O. [Universidad Politecnica de Madrid (Spain). Dept. of Nuclear Engineering
2013-07-01
Propagation of nuclear data uncertainties to calculated values is of interest for design purposes and for library evaluation. XSUSA, developed at GRS, propagates cross section uncertainties to nuclear calculations. In depletion simulations, fission yields and decay data are also involved and are a possible source of uncertainty that should be taken into account. We have developed tools to generate varied fission yield and decay libraries and to propagate uncertainties through depletion in order to complete the XSUSA uncertainty assessment capabilities. A generic test to probe the methodology is defined and discussed. (orig.)
Gluon Propagator in Fractional Analytic Perturbation Theory
Allendes, Pedro; Cvetič, Gorazd
2014-01-01
We consider the gluon propagator in the Landau gauge at low spacelike momenta and with the dressing function $Z(Q^2)$ at the two-loop order. We incorporate the nonperturbative effects by making the (noninteger) powers of the QCD coupling in the dressing function $Z(Q^2)$ analytic (holomorphic) via the Fractional Analytic Perturbation Theory (FAPT) model, and simultaneously introducing the gluon dynamical mass in the propagator as motivated by the previous analyses of the Dyson-Schwinger equations. The obtained propagator has behavior compatible with the unquenched lattice data ($N_f=2+1$) at low spacelike momenta $0.4 \\ {\\rm GeV} < Q \\lesssim 10$ GeV. We conclude that the removal of the unphysical Landau singularities of the powers of the coupling via the (F)APT prescription, in conjunction with the introduction of the dynamical mass $M \\approx 0.62$ GeV of the gluon, leads to an acceptable behavior of the propagator in the infrared regime.
Uncertainty propagation in urban hydrology water quality modelling
Torres Matallana, Arturo; Leopold, U.; Heuvelink, G.B.M.
2016-01-01
Uncertainty is often ignored in urban hydrology modelling. Engineering practice typically ignores uncertainties and uncertainty propagation. This can have large impacts, such as the wrong dimensioning of urban drainage systems and the inaccurate estimation of pollution in the environment caused by c
Uncertainties in Atomic Data and Their Propagation Through Spectral Models. I.
Bautista, M. A.; Fivet, V.; Quinet, P.; Dunn, J.; Gull, T. R.; Kallman, T. R.; Mendoza, C.
2013-01-01
We present a method for computing uncertainties in spectral models, i.e., level populations, line emissivities, and emission line ratios, based upon the propagation of uncertainties originating from atomic data. We provide analytic expressions, in the form of linear sets of algebraic equations, for the coupled uncertainties among all levels. These equations can be solved efficiently for any set of physical conditions and uncertainties in the atomic data. We illustrate our method applied to spectral models of O III and Fe II and discuss the impact of the uncertainties on atomic systems under different physical conditions. As to intrinsic uncertainties in theoretical atomic data, we propose that these uncertainties can be estimated from the dispersion in the results from various independent calculations. This technique provides excellent results for the uncertainties in A-values of forbidden transitions in [Fe II]. Key words: atomic data - atomic processes - line: formation - methods: data analysis - molecular data - molecular processes - techniques: spectroscopic
Uncertainties in Atomic Data and Their Propagation Through Spectral Models. I
Bautista, Manuel A.; Quinet, Pascal; Dunn, Jay; Kallman, Theodore R.; Gull, Timothy R.; Mendoza, Claudio
2013-01-01
We present a method for computing uncertainties in spectral models, i.e. level populations, line emissivities, and emission line ratios, based upon the propagation of uncertainties originating from atomic data. We provide analytic expressions, in the form of linear sets of algebraic equations, for the coupled uncertainties among all levels. These equations can be solved efficiently for any set of physical conditions and uncertainties in the atomic data. We illustrate our method applied to spectral models of O III and Fe II and discuss the impact of the uncertainties on atomic systems under different physical conditions. As to intrinsic uncertainties in theoretical atomic data, we propose that these uncertainties can be estimated from the dispersion in the results from various independent calculations. This technique provides excellent results for the uncertainties in A-values of forbidden transitions in [Fe II].
Manufacturing Data Uncertainties Propagation Method in Burn-Up Problems
Directory of Open Access Journals (Sweden)
Thomas Frosio
2017-01-01
A nuclear data-based uncertainty propagation methodology is extended to enable propagation of manufacturing/technological data (TD) uncertainties in a burn-up calculation problem, taking into account correlation terms between Boltzmann and Bateman terms. The methodology is applied to reactivity and power distributions in a Material Testing Reactor benchmark. Due to the inherent statistical behavior of manufacturing tolerances, a Monte Carlo sampling method is used for determining output perturbations on integral quantities. A global sensitivity analysis (GSA) is performed for each manufacturing parameter and allows identifying and ranking the influential parameters whose tolerances need to be better controlled. We show that the overall impact of some TD uncertainties, such as uranium enrichment or fuel plate thickness, on the reactivity is negligible because the different core areas induce compensating effects on the global quantity. However, local quantities, such as power distributions, are strongly impacted by TD uncertainty propagation. For isotopic concentrations, no clear trends appear in the results.
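The sampling-plus-ranking workflow can be sketched as follows: draw each manufacturing parameter from its tolerance distribution, evaluate the output, and rank parameters by their squared correlation with it. The reactivity surrogate, tolerances and sensitivities below are hypothetical, not those of the benchmark.

```python
import numpy as np

# Toy surrogate for the burn-up model: reactivity as a linear function of
# three manufacturing parameters; coefficients, nominals and tolerances
# are hypothetical, not those of the benchmark.
def reactivity(enrichment, thickness, density):
    return 1500.0 * enrichment - 80.0 * thickness + 12.0 * density

rng = np.random.default_rng(3)
n = 5000
inputs = {
    "enrichment": rng.normal(0.1995, 0.0005, n),   # U-235 mass fraction
    "thickness":  rng.normal(1.27, 0.02, n),       # fuel plate thickness (mm)
    "density":    rng.normal(10.3, 0.05, n),       # fuel density (g/cm3)
}
out = reactivity(inputs["enrichment"], inputs["thickness"], inputs["density"])

# Global sensitivity ranking by squared correlation with the output
r2 = {name: np.corrcoef(x, out)[0, 1] ** 2 for name, x in inputs.items()}
ranked = sorted(r2, key=r2.get, reverse=True)
```

For this additive surrogate the squared correlations recover the variance shares exactly in the large-sample limit, which is the ranking a GSA would use to decide whose tolerances need tighter control.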
New challenges on uncertainty propagation assessment of flood risk analysis
Martins, Luciano; Aroca-Jiménez, Estefanía; Bodoque, José M.; Díez-Herrero, Andrés
2016-04-01
Natural hazards such as floods cause considerable damage to human life and to material and functional assets every year, around the world. Risk assessment procedures carry a set of uncertainties of mainly two types: natural, derived from the stochastic character inherent in flood process dynamics; and epistemic, associated with a lack of knowledge or with inadequate procedures employed in the study of these processes. There is abundant scientific and technical literature on uncertainty estimation in each step of flood risk analysis (e.g. rainfall estimates, hydraulic modelling variables), but very little experience on the propagation of uncertainties along the flood risk assessment. Epistemic uncertainties are therefore the main goal of this work; in particular, we seek to understand the extent of the propagation of uncertainties throughout the process, from inundability studies to risk analysis, and how much they affect a proper analysis of flood risk. Methodologies such as Polynomial Chaos Theory (PCT), the Method of Moments or Monte Carlo are used to evaluate different sources of error, such as data records (precipitation gauges, flow gauges...), hydrologic and hydraulic modelling (inundation estimation) and socio-demographic data (damage estimation), in order to evaluate the uncertainty propagation (UP) in design flood risk estimation, in both numerical and cartographic expression. In order to account for the total uncertainty and understand which factors contribute most to the final uncertainty, we used the method of Polynomial Chaos Theory (PCT). It represents an interesting way to handle the inclusion of uncertainty in the modelling and simulation process. PCT allows for the development of a probabilistic model of the system in a deterministic setting, using random variables and polynomials to handle the effects of uncertainty. Method application results have better robustness than traditional analysis
Stochastic Systems Uncertainty Quantification and Propagation
Grigoriu, Mircea
2012-01-01
Uncertainty is an inherent feature of both the properties of physical systems and the inputs to these systems, and it needs to be quantified for cost-effective and reliable designs. The states of these systems satisfy equations with random entries, referred to as stochastic equations, so that they are random functions of time and/or space. The solution of stochastic equations poses notable technical difficulties that are frequently circumvented by heuristic assumptions at the expense of accuracy and rigor. The main objective of Stochastic Systems is to promote the development of accurate and efficient methods for solving stochastic equations and to foster interactions between engineers, scientists, and mathematicians. To achieve these objectives, Stochastic Systems presents: · A clear and brief review of essential concepts on probability theory, random functions, stochastic calculus, Monte Carlo simulation, and functional analysis · Probabilistic models for random variables an...
HPC Analytics Support. Requirements for Uncertainty Quantification Benchmarks
Energy Technology Data Exchange (ETDEWEB)
Paulson, Patrick R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Purohit, Sumit [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Rodriguez, Luke R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)
2015-05-01
This report outlines techniques for extending benchmark generation products so they support uncertainty quantification by benchmarked systems. We describe how uncertainty quantification requirements can be presented to candidate analytical tools supporting SPARQL. We describe benchmark data sets for evaluating uncertainty quantification, as well as an approach for using our benchmark generator to produce such data sets.
Uncertainty propagation in nerve impulses through the action potential mechanism
Torres Valderrama, A.; Witteveen, J.A.S.; Navarro Jimenez, M.I.; Blom, J.G.
2015-01-01
We investigate the propagation of probabilistic uncertainty through the action potential mechanism in nerve cells. Using the Hodgkin-Huxley (H-H) model and Stochastic Collocation on Sparse Grids, we obtain an accurate probabilistic interpretation of the deterministic dynamics of the transmembrane po
Investigation of Free Particle Propagator with Generalized Uncertainty Problem
Ghobakhloo, F
2016-01-01
We consider the Schrodinger equation with a generalized uncertainty principle for a free particle. We then transform the problem into a second-order ordinary differential equation and thereby obtain the corresponding propagator. The result of ordinary quantum mechanics is recovered in the limit of a vanishing minimal-length parameter.
On the propagation of uncertainties in radiation belt simulations
Camporeale, Enrico; Shprits, Yuri; Chandorkar, Mandar; Drozdov, Alexander; Wing, Simon
2016-11-01
We present the first study of the uncertainties associated with radiation belt simulations, performed in the standard quasi-linear diffusion framework. In particular, we estimate how uncertainties in some input parameters propagate through the nonlinear simulation, producing a distribution of outputs that can be quite broad. Here we restrict our focus to two-dimensional simulations (in energy and pitch angle space) of parallel-propagating chorus waves only, and we study as stochastic input parameters the geomagnetic index Kp (which characterizes the time dependency of an idealized storm), the latitudinal extent of the waves, and the average electron density. We employ a collocation method, thus performing an ensemble of simulations. The results of this work point to the necessity of shifting to a probabilistic interpretation of radiation belt simulation results and suggest that an accurate specification of a time-dependent density model is crucial for modeling the radiation environment.
On analytic formulas of Feynman propagators in position space
Institute of Scientific and Technical Information of China (English)
ZHANG Hong-Hao; FENG Kai-Xi; QIU Si-Wei; ZHAO An; LI Xue-Song
2010-01-01
We correct an inaccurate result of previous work on the Feynman propagator in position space of a free Dirac field in (3+1)-dimensional spacetime; we derive the generalized analytic formulas of both the scalar Feynman propagator and the spinor Feynman propagator in position space in arbitrary (D+1)-dimensional spacetime; and we further find a recurrence relation among the spinor Feynman propagator in (D+1)-dimensional spacetime and the scalar Feynman propagators in (D+1)-, (D-1)- and (D+3)-dimensional spacetimes.
Stolarski, R. S.; Butler, D. M.; Rundel, R. D.
1977-01-01
A concise stratospheric model was used in a Monte Carlo analysis of the propagation of reaction rate uncertainties through the calculation of an ozone perturbation due to the addition of chlorine. Two thousand Monte Carlo cases were run with 55 reaction rates being varied. Excellent convergence was obtained in the output distributions because the model is sensitive to the uncertainties in only about 10 reactions. For a 1 ppbv chlorine perturbation added to a 1.5 ppbv chlorine background, the resultant 1 sigma uncertainty on the ozone perturbation is a factor of 1.69 on the high side and 1.80 on the low side. The corresponding 2 sigma factors are 2.86 and 3.23. Results are also given for the uncertainties, due to reaction rates, in the ambient concentrations of stratospheric species.
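The multiplicative (lognormal) treatment of rate uncertainties can be sketched as follows: each rate is sampled with a 1-sigma uncertainty factor, and the spread of the log of the model output yields the output's own 1-sigma factor. The four rate factors and the toy response are invented for illustration; the actual model varied 55 reaction rates.

```python
import math, random

# Toy surrogate for the ozone-perturbation model: the response is a
# product/quotient of a few rate constants, each with a lognormal 1-sigma
# uncertainty factor. Factors and model are invented; the real study
# varied 55 reaction rates.
random.seed(42)
factors = [1.25, 1.4, 1.15, 2.0]     # hypothetical 1-sigma uncertainty factors

def model(rates):
    return rates[0] * rates[1] / (rates[2] * rates[3])

log_out = []
for _ in range(2000):
    rates = [random.lognormvariate(0.0, math.log(f)) for f in factors]
    log_out.append(math.log(model(rates)))

m = sum(log_out) / len(log_out)
sd = math.sqrt(sum((x - m) ** 2 for x in log_out) / (len(log_out) - 1))
one_sigma_factor = math.exp(sd)      # multiplicative 1-sigma output factor
```

Reporting the output spread as a multiplicative factor (e.g. "a factor of 1.69 on the high side") follows naturally from this lognormal treatment, and the 2000-case sample size matches the convergence behaviour noted in the abstract.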
Uncertainty Quantification and Propagation in Nuclear Density Functional Theory
Energy Technology Data Exchange (ETDEWEB)
Schunck, N; McDonnell, J D; Higdon, D; Sarich, J; Wild, S M
2015-03-17
Nuclear density functional theory (DFT) is one of the main theoretical tools used to study the properties of heavy and superheavy elements, or to describe the structure of nuclei far from stability. While on-going efforts seek to better root nuclear DFT in the theory of nuclear forces, energy functionals remain semi-phenomenological constructions that depend on a set of parameters adjusted to experimental data in finite nuclei. In this paper, we review recent efforts to quantify the related uncertainties, and propagate them to model predictions. In particular, we cover the topics of parameter estimation for inverse problems, statistical analysis of model uncertainties and Bayesian inference methods. Illustrative examples are taken from the literature.
Uncertainty propagation for nonlinear vibrations: A non-intrusive approach
Panunzio, A. M.; Salles, Loic; Schwingshackl, C. W.
2017-02-01
The propagation of uncertain input parameters in a linear dynamic analysis is reasonably well established today, but with the focus of dynamic analysis shifting towards nonlinear systems, new approaches are required to compute the uncertain nonlinear responses. A combination of stochastic methods (Polynomial Chaos Expansion, PCE) with an Asymptotic Numerical Method (ANM) for the solution of the nonlinear dynamic systems is presented to predict the propagation of random input uncertainties and assess their influence on the nonlinear vibrational behaviour of a system. The proposed method allows the computation of stochastic resonance frequencies and peak amplitudes based on multiple input uncertainties, leading to a series of uncertain nonlinear dynamic responses. One of the main challenges when using the PCE is the Gibbs phenomenon, which can heavily impact the resulting stochastic nonlinear response by introducing spurious oscillations. A novel technique to avoid the Gibbs phenomenon is presented in this paper, leading to high-quality frequency response predictions. A comparison of the proposed stochastic nonlinear analysis technique to traditional Monte Carlo simulations demonstrates comparable accuracy at a significantly reduced computational cost, thereby validating the proposed approach.
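A minimal non-intrusive PCE, in the spirit of the method above, regresses model outputs on Hermite polynomials of a standard-normal germ and reads the output statistics off the coefficients. This sketch uses a deliberately simple quadratic response with known moments; it is an illustration of the expansion machinery, not the authors' ANM-coupled solver.

```python
import math
import numpy as np

rng = np.random.default_rng(1)

def pce_moments(f, order=6, n_samp=2000):
    """Non-intrusive PCE: regress model outputs on probabilists' Hermite
    polynomials of a standard-normal germ xi, then read the output mean
    and variance off the coefficients (orthogonality: E[He_k^2] = k!)."""
    xi = rng.standard_normal(n_samp)
    Phi = np.polynomial.hermite_e.hermevander(xi, order)   # He_0 .. He_order
    coef, *_ = np.linalg.lstsq(Phi, f(xi), rcond=None)
    mean = coef[0]
    var = sum(coef[k] ** 2 * math.factorial(k) for k in range(1, order + 1))
    return mean, var

# A response that is exactly He_0 + He_2, so mean = 1 and variance = 2
mean, var = pce_moments(lambda xi: xi ** 2)
```

For responses with sharp features (e.g. resonance peaks), a plain global expansion like this exhibits the Gibbs oscillations the abstract refers to, which motivates the authors' dedicated remedy.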
Sin, Gürkan; Gernaey, Krist V; Lantz, Anna Eliasson
2009-01-01
Uncertainty and sensitivity analysis are evaluated for their usefulness as part of model-building within Process Analytical Technology (PAT) applications. A mechanistic model describing a batch cultivation of Streptomyces coelicolor for antibiotic production was used as a case study. The input uncertainty resulting from assumptions of the model was propagated using the Monte Carlo procedure to estimate the output uncertainty. The results showed that significant uncertainty exists in the model outputs. Moreover, the uncertainty in the biomass, glucose, ammonium and base-consumption predictions was found to be low compared to the large uncertainty observed in the antibiotic and off-gas CO(2) predictions. The output uncertainty was lower during the exponential growth phase and higher in the stationary and death phases, meaning the model describes some periods better than others. To understand which input parameters are responsible for the output uncertainty, three sensitivity methods (Standardized Regression Coefficients, Morris and differential analysis) were evaluated and compared. The results from these methods were mostly in agreement with each other and revealed that only a few parameters (about 10) out of a total of 56 were mainly responsible for the output uncertainty. Among these significant parameters are parameters related to fermentation characteristics such as biomass metabolism, chemical equilibria and mass transfer. Overall, uncertainty and sensitivity analysis are found promising for helping to build reliable mechanistic models and to interpret the model outputs properly. These tools are part of good modeling practice, which can contribute to successful PAT applications for increased process understanding, operation and control purposes.
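The Standardized Regression Coefficients (SRC) method named above can be sketched as follows: regress the standardized Monte Carlo output on the standardized inputs; the absolute coefficients rank parameter importance, and their squared sum approximates the model's linearity (R²). The three-parameter "model" is a hypothetical toy, not the fermentation model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Monte Carlo sample of three hypothetical uncertain model parameters
n = 1000
X = rng.normal([1.0, 0.5, 2.0], [0.1, 0.2, 0.05], size=(n, 3))
# Toy "model output" (e.g. antibiotic titre), dominated by parameter 2
y = 3.0 * X[:, 0] + 10.0 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(0, 0.1, n)

# SRC: regress the scaled output on the scaled inputs
Xs = (X - X.mean(0)) / X.std(0)
ys = (y - y.mean()) / y.std()
beta, *_ = np.linalg.lstsq(np.column_stack([np.ones(n), Xs]), ys, rcond=None)
src = beta[1:]                  # drop intercept; |src_k| ranks importance
r2 = float(np.sum(src ** 2))    # close to 1 when the model is near-linear
```

When r2 drops well below 1, the SRC ranking becomes unreliable and screening methods such as Morris (also used in the study) are the usual fallback.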
MONTE-CARLO BURNUP CALCULATION UNCERTAINTY QUANTIFICATION AND PROPAGATION DETERMINATION
Energy Technology Data Exchange (ETDEWEB)
Nichols, T.; Sternat, M.; Charlton, W.
2011-05-08
MONTEBURNS is a Monte-Carlo depletion routine utilizing MCNP and ORIGEN 2.2. Uncertainties exist in the MCNP transport calculation, but this information is not passed to the depletion calculation in ORIGEN or saved. To quantify this transport uncertainty and determine how it propagates between burnup steps, a statistical analysis of multiple repeated depletion runs is performed. The reactor model chosen is the Oak Ridge Research Reactor (ORR) in a single-assembly, infinite-lattice configuration. This model was burned for a 25.5-day cycle broken down into three steps. The output isotopics as well as the effective multiplication factor (k-effective) were tabulated, and histograms were created at each burnup step using the Scott method to determine the bin width. It was expected that the gram-quantity and k-effective histograms would produce normally distributed results since they were produced from a Monte-Carlo routine, but some of the results do not. The standard deviation at each burnup step was consistent between fission product isotopes as expected, while the uranium isotopes produced some unique results. The variation in the quantity of uranium was small enough that, from the reaction rate MCNP tally, round-off error occurred, producing a set of repeated results with slight variation. Statistical analyses were performed using the χ² test against a normal distribution for several isotopes and the k-effective results. While the isotopes failed to reject the null hypothesis of being normally distributed, the χ² statistic grew through the steps in the k-effective test, and the null hypothesis was rejected in the later steps. These results suggest that, for a high-accuracy solution, MCNP cell material quantities less than 100 grams and greater kcode parameters are needed to minimize uncertainty propagation and round-off effects.
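The χ² goodness-of-fit test used above can be sketched with equiprobable bins: fit a normal to the repeated-run sample, bin it at the fitted quantiles, and compare observed to expected counts. The samples below are synthetic stand-ins for the depletion outputs.

```python
import statistics
import numpy as np

rng = np.random.default_rng(3)

def chi2_normality_stat(x, n_bins=10):
    """Chi-squared statistic of a sample against a fitted normal,
    using equiprobable bins (expected count len(x)/n_bins per bin)."""
    dist = statistics.NormalDist(x.mean(), x.std(ddof=1))
    edges = [dist.inv_cdf(p) for p in np.linspace(0, 1, n_bins + 1)[1:-1]]
    observed = np.bincount(np.searchsorted(edges, x), minlength=n_bins)
    expected = len(x) / n_bins
    return float(((observed - expected) ** 2 / expected).sum())

# A normally distributed "isotope mass" sample gives a small statistic...
stat_norm = chi2_normality_stat(rng.normal(10.0, 0.2, 500))
# ...while a skewed sample is flagged when compared to chi-squared
# critical values (df = n_bins - 3 after fitting mean and sd)
stat_skew = chi2_normality_stat(rng.exponential(1.0, 500))
```

Comparing the statistic across burnup steps, as the study does for k-effective, shows whether the output distribution drifts away from normality as uncertainty propagates.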
Assessing and propagating uncertainty in model inputs in corsim
Energy Technology Data Exchange (ETDEWEB)
Molina, G.; Bayarri, M. J.; Berger, J. O.
2001-07-01
CORSIM is a large simulator for vehicular traffic, and is being studied with respect to its ability to successfully model and predict the behavior of traffic in a 36-block section of Chicago. Inputs to the simulator include information about street configuration, driver behavior, traffic light timing, turning probabilities at each corner and distributions of traffic ingress into the system. This work is described in more detail in the article 'Fast Simulators for Assessment and Propagation of Model Uncertainty', also in these proceedings. The focus of this conference poster is on the computational aspects of this problem. In particular, we address the description of the full conditional distributions needed for implementation of the MCMC algorithm, including how the constraints can be incorporated; details concerning the run time and convergence of the MCMC algorithm; and utilisation of the MCMC output for prediction and uncertainty analysis concerning the CORSIM computer model. As this last is the ultimate goal, it is worth emphasizing that the incorporation of all uncertainty concerning inputs can significantly affect the model predictions. (Author)
Data Assimilation and Propagation of Uncertainty in Multiscale Cardiovascular Simulation
Schiavazzi, Daniele; Marsden, Alison
2015-11-01
Cardiovascular modeling is the application of computational tools to predict hemodynamics. State-of-the-art techniques couple a 3D incompressible Navier-Stokes solver with a boundary circulation model and can predict local and peripheral hemodynamics, analyze the post-operative performance of surgical designs, and complement clinical data collection while minimizing invasive and risky measurement practices. The ability of these tools to make useful predictions is directly related to their accuracy in representing measured physiologies. Tuning of model parameters is therefore a topic of paramount importance and should include clinical data uncertainty, revealing how this uncertainty will affect the predictions. We propose a fully Bayesian, multi-level approach to data assimilation of uncertain clinical data in multiscale circulation models. To reduce the computational cost, we use a stable, condensed approximation of the 3D model built by linear sparse regression of the pressure/flow rate relationship at the outlets. Finally, we consider the problem of non-invasively propagating the uncertainty in model parameters to the resulting hemodynamics and compare Monte Carlo simulation with Stochastic Collocation approaches based on Polynomial or Multi-resolution Chaos expansions.
Risk classification and uncertainty propagation for virtual water distribution systems
Energy Technology Data Exchange (ETDEWEB)
Torres, Jacob M. [Department of Geography and Environmental Engineering, Johns Hopkins University, Baltimore, MD 21218 (United States)], E-mail: jato@jhu.edu; Brumbelow, Kelly [Zachry Department of Civil Engineering, Texas A and M University, College Station, TX 77843 (United States); Guikema, Seth D. [Department of Geography and Environmental Engineering, Johns Hopkins University, Baltimore, MD 21218 (United States)
2009-08-15
While the secrecy of real water distribution system data is crucial, it poses difficulty for research as results cannot be publicized. This data includes topological layouts of pipe networks, pump operation schedules, and water demands. Therefore, a library of virtual water distribution systems can be an important research tool for comparative development of analytical methods. A virtual city, 'Micropolis', has been developed, including a comprehensive water distribution system, as a first entry into such a library. This virtual city of 5000 residents is fully described in both geographic information systems (GIS) and EPANet hydraulic model frameworks. A risk classification scheme and Monte Carlo analysis are employed for an attempted water supply contamination attack. Model inputs to be considered include uncertainties in: daily water demand, seasonal demand, initial storage tank levels, the time of day a contamination event is initiated, duration of contamination event, and contaminant quantity. Findings show that reasonable uncertainties in model inputs produce high variability in exposure levels. It is also shown that exposure level distributions experience noticeable sensitivities to population clusters within the contaminant spread area. High uncertainties in exposure patterns lead to greater resources needed for more effective mitigation strategies.
Ultrashort Optical Pulse Propagation in terms of Analytic Signal
Directory of Open Access Journals (Sweden)
Sh. Amiranashvili
2011-01-01
We demonstrate that ultrashort optical pulses propagating in a nonlinear dispersive medium are naturally described through incorporation of the analytic signal for the electric field. To this end a second-order nonlinear wave equation is first simplified using a unidirectional approximation. Then the analytic signal is introduced, and all nonresonant nonlinear terms are eliminated. The derived propagation equation accounts for arbitrary dispersion, resonant four-wave mixing processes, weak absorption, and arbitrary pulse duration. The model applies to the complex electric field and is independent of the slowly varying envelope approximation. Still, the derived propagation equation possesses the universal structure of the generalized nonlinear Schrödinger equation (NSE). In particular, it can be solved numerically with only small changes to the standard split-step solver or more complicated spectral algorithms for the NSE. We present exemplary numerical solutions describing supercontinuum generation with an ultrashort optical pulse.
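The analytic signal of a real field is standard to construct numerically: zero out the negative frequencies and double the positive ones (an FFT-based Hilbert transform). A minimal sketch:

```python
import numpy as np

def analytic_signal(e):
    """Analytic signal of a real field: keep DC and Nyquist, double
    positive frequencies, zero negative ones (FFT-based Hilbert transform)."""
    n = len(e)
    E = np.fft.fft(e)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0          # Nyquist bin for even-length signals
    return np.fft.ifft(E * h)

# A real carrier: its analytic signal is the complex exponential
t = np.linspace(0.0, 1.0, 256, endpoint=False)
e = np.cos(2 * np.pi * 10 * t)
z = analytic_signal(e)
# Re(z) recovers the field; |z| is the (here constant) envelope
```

The real part reproduces the original field exactly, and the modulus gives the envelope without invoking a slowly varying envelope approximation, which is the point of the formulation above.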
Analytic structure of QCD propagators in Minkowski space
Siringo, Fabio
2016-01-01
Analytical functions for the propagators of QCD, including a set of chiral quarks, are derived by a one-loop massive expansion in the Landau gauge, deep in the infrared. By analytic continuation, the spectral functions are studied in Minkowski space, yielding a direct proof of positivity violation and confinement from first principles. The dynamical breaking of chiral symmetry is described on the same footing as gluon mass generation, providing a unified picture. While dealing with the exact Lagrangian, the expansion is based on massive free-particle propagators, is safe in the infrared and is equivalent to the standard perturbation theory in the UV. By dimensional regularization, all diverging mass terms cancel exactly without including mass counterterms that would spoil the gauge and chiral symmetry of the Lagrangian. Universal scaling properties are predicted for the inverse dressing functions and shown to be satisfied by the lattice data. Complex conjugated poles are found for the gluon propagator, in agreement with the i-particle scenario.
Uncertainty propagation in orbital mechanics via tensor decomposition
Sun, Yifei; Kumar, Mrinal
2016-03-01
Uncertainty forecasting in orbital mechanics is an essential but difficult task, primarily because the underlying Fokker-Planck equation (FPE) is defined on a relatively high dimensional (6-D) state-space and is driven by the nonlinear perturbed Keplerian dynamics. In addition, an enormously large solution domain is required for numerical solution of this FPE (e.g. encompassing the entire orbit in the x-y-z subspace), of which the state probability density function (pdf) occupies a tiny fraction at any given time. This coupling of large size, high dimensionality and nonlinearity makes for a formidable computational task, and has caused the FPE for orbital uncertainty propagation to remain an unsolved problem. To the best of the authors' knowledge, this paper presents the first successful direct solution of the FPE for perturbed Keplerian mechanics. To tackle the dimensionality issue, the time-varying state pdf is approximated in the CANDECOMP/PARAFAC decomposition tensor form where all the six spatial dimensions as well as the time dimension are separated from one another. The pdf approximation for all times is obtained simultaneously via the alternating least squares algorithm. Chebyshev spectral differentiation is employed for discretization on account of its spectral ("super-fast") convergence rate. To facilitate the tensor decomposition and control the solution domain size, system dynamics is expressed using spherical coordinates in a noninertial reference frame. Numerical results obtained on a regular personal computer are compared with Monte Carlo simulations.
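The CANDECOMP/PARAFAC-with-ALS idea can be illustrated at rank 1 for a 3-way tensor: with the other factors held fixed, each factor update is a closed-form least-squares solve. This toy recovers the separable structure of a product-form "pdf" on a grid; it is a sketch of the decomposition machinery, not the paper's 7-D solver.

```python
import numpy as np

rng = np.random.default_rng(4)

def cp_als_rank1(T, n_iter=50):
    """Rank-1 CANDECOMP/PARAFAC fit of a 3-way tensor via alternating
    least squares: T is approximated by the outer product a x b x c."""
    b = rng.standard_normal(T.shape[1])
    c = rng.standard_normal(T.shape[2])
    for _ in range(n_iter):
        # Each update is the exact least-squares solution, others fixed
        a = np.einsum('ijk,j,k->i', T, b, c) / ((b @ b) * (c @ c))
        b = np.einsum('ijk,i,k->j', T, a, c) / ((a @ a) * (c @ c))
        c = np.einsum('ijk,i,j->k', T, a, b) / ((a @ a) * (b @ b))
    return a, b, c

# Separable "pdf" on a 3-D grid: the CP factors recover the marginals
a0, b0, c0 = rng.random(6) + 0.1, rng.random(5) + 0.1, rng.random(4) + 0.1
T = np.einsum('i,j,k->ijk', a0, b0, c0)
a, b, c = cp_als_rank1(T)
approx = np.einsum('i,j,k->ijk', a, b, c)
```

Storage for the factors grows linearly with dimension and grid size, instead of exponentially for the full tensor, which is what makes the 6-D (plus time) FPE solution feasible.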
Propagating Uncertainties from Source Model Estimations to Coulomb Stress Changes
Baumann, C.; Jonsson, S.; Woessner, J.
2009-12-01
Multiple studies have shown that static stress changes due to permanent fault displacement trigger earthquakes on the causative and on nearby faults. Calculations of static stress changes in previous studies have been based on fault parameters without considering any source model uncertainties, or with crude assumptions about fault model errors based on the available source models. In this study, we investigate the influence of fault model parameter uncertainties on Coulomb Failure Stress change (ΔCFS) calculations by propagating the uncertainties from the fault estimation process to the Coulomb failure stress changes. We use 2500 sets of correlated model parameters determined for the June 2000 Mw = 5.8 Kleifarvatn earthquake, southwest Iceland, which were estimated by using a repeated optimization procedure and multiple data sets that had been modified by synthetic noise. The model parameters show that the event was predominantly a right-lateral strike-slip earthquake on a north-south striking fault. The variability of the sets of models represents the posterior probability density distribution for the Kleifarvatn source model. First, we investigate the influence of individual source model parameters on the ΔCFS calculations. We show through a correlation analysis that for this event, changes in dip, east location, strike, width and in part north location have a stronger impact on the Coulomb failure stress changes than changes in fault length, depth, dip-slip and strike-slip. Second, we find that the accuracy of the Coulomb failure stress changes appears to increase with increasing distance from the fault. The absolute value of the standard deviation decays rapidly with distance within about 5-6 km around the fault, from about 3-3.5 MPa down to a few Pa, implying that the influence of parameter changes decreases with increasing distance. This is underlined by the coefficient of variation CV, defined as the ratio of the standard deviation of the Coulomb stress
Methods for estimating uncertainty in factor analytic solutions
Directory of Open Access Journals (Sweden)
P. Paatero
2013-08-01
EPA PMF version 5.0 and the underlying multilinear engine executable ME-2 contain three methods for estimating uncertainty in factor analytic models: classical bootstrap (BS), displacement of factor elements (DISP), and bootstrap enhanced by displacement of factor elements (BS-DISP). The goal of these methods is to capture the uncertainty of PMF analyses due to random errors and rotational ambiguity. It is shown that the three methods complement each other: depending on characteristics of the data set, one method may provide better results than the other two. Results are presented using synthetic data sets, including interpretation of diagnostics, and recommendations are given for parameters to report when documenting uncertainty estimates from EPA PMF or ME-2 applications.
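The classical bootstrap (BS) idea above can be sketched generically: resample the rows of the data matrix, refit the factor model on each resample, and take the spread of the refitted loadings as their uncertainty. The sketch below uses an SVD-based leading factor on synthetic data (with the usual sign-ambiguity fix), not PMF/ME-2 itself.

```python
import numpy as np

rng = np.random.default_rng(5)

def first_factor(X):
    """Leading factor (loading) direction of a data matrix via SVD."""
    _, _, Vt = np.linalg.svd(X - X.mean(0), full_matrices=False)
    v = Vt[0]
    # Fix the sign ambiguity: make the largest-magnitude element positive
    return v if v[np.argmax(np.abs(v))] > 0 else -v

# Synthetic data with one dominant factor plus noise
n, p = 200, 4
scores = rng.standard_normal(n)
loading = np.array([0.7, 0.5, 0.4, 0.3])
X = np.outer(scores, loading) + 0.1 * rng.standard_normal((n, p))

# Classical bootstrap: resample rows, refit, collect the loading spread
boot = np.array([first_factor(X[rng.integers(0, n, n)]) for _ in range(200)])
loading_sd = boot.std(0)       # per-element uncertainty of the loadings
```

BS of this kind captures random-error uncertainty; the DISP and BS-DISP variants discussed above additionally probe rotational ambiguity, which pure resampling cannot see.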
Estimation of the uncertainty of analyte concentration from the measurement uncertainty.
Brown, Simon; Cooke, Delwyn G; Blackwell, Leonard F
2015-09-01
Ligand-binding assays, such as immunoassays, are usually analysed using standard curves based on the four-parameter and five-parameter logistic models. An estimate of the uncertainty of an analyte concentration obtained from such curves is needed for confidence intervals or precision profiles. Using a numerical simulation approach, it is shown that the uncertainty of the analyte concentration estimate becomes significant at the extremes of the concentration range and that this is affected significantly by the steepness of the standard curve. We also provide expressions for the coefficient of variation of the analyte concentration estimate from which confidence intervals and the precision profile can be obtained. Using three examples, we show that the expressions perform well.
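A minimal numerical version of this: invert a four-parameter logistic (4PL) standard curve and propagate the response uncertainty to the concentration estimate with a first-order (derivative) approximation. The 4PL parameters below are hypothetical, and the derivative-based propagation is a simple stand-in for the paper's simulation approach.

```python
import numpy as np

# Four-parameter logistic standard curve (hypothetical parameters):
# response at zero dose, slope, EC50, response at infinite dose
a, b, c, d = 0.05, 1.2, 10.0, 2.0

def fpl(x):
    return d + (a - d) / (1.0 + (x / c) ** b)

def fpl_inverse(y):
    """Analyte concentration from a measured response."""
    return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

def concentration_cv(y, y_sd):
    """First-order propagation of response uncertainty to concentration:
    sd(x) ~ |dx/dy| * sd(y), with a central-difference derivative."""
    eps = 1e-6
    dxdy = (fpl_inverse(y + eps) - fpl_inverse(y - eps)) / (2 * eps)
    return abs(dxdy) * y_sd / fpl_inverse(y)

# CV is smallest mid-curve and blows up towards the asymptotes
cv_mid = concentration_cv(fpl(10.0), 0.02)
cv_low = concentration_cv(fpl(0.5), 0.02)
```

The widening CV at the extremes is exactly the precision-profile behaviour the abstract describes: the flatter the curve, the larger |dx/dy| and hence the concentration uncertainty.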
Approximate analytical solutions for excitation and propagation in cardiac tissue
Greene, D'Artagnan; Shiferaw, Yohannes
2015-04-01
It is well known that a variety of cardiac arrhythmias are initiated by a focal excitation in heart tissue. At the single cell level these currents are typically induced by intracellular processes such as spontaneous calcium release (SCR). However, it is not understood how the size and morphology of these focal excitations are related to the electrophysiological properties of cardiac cells. In this paper a detailed physiologically based ionic model is analyzed by projecting the excitation dynamics to a reduced one-dimensional parameter space. Based on this analysis we show that the inward current required for an excitation to occur is largely dictated by the voltage dependence of the inward rectifier potassium current (IK1), and is insensitive to the detailed properties of the sodium current. We derive an analytical expression relating the size of a stimulus and the critical current required to induce a propagating action potential (AP), and argue that this relationship determines the necessary number of cells that must undergo SCR in order to induce ectopic activity in cardiac tissue. Finally, we show that, once a focal excitation begins to propagate, its propagation characteristics, such as the conduction velocity and the critical radius for propagation, are largely determined by the sodium and gap junction currents with a substantially lesser effect due to repolarizing potassium currents. These results reveal the relationship between ion channel properties and important tissue scale processes such as excitation and propagation.
Kalmikov, A.; Heimbach, P.
2013-12-01
We apply derivative-based uncertainty quantification (UQ) and sensitivity methods to the estimation of Drake Passage transport in a global barotropic configuration of the MIT ocean general circulation model (MITgcm). Sensitivity and uncertainty fields are evaluated via first and second derivative codes of the MITgcm, generated via algorithmic differentiation (AD). Observation uncertainties are projected to uncertainties in the control variables by inversion of the Hessian of the nonlinear least-squares misfit function. Only data-supported components of Hessian information are retained through elimination of the unconstrained uncertainty nullspace. The assimilated observation uncertainty is combined with prior control variable uncertainties to reduce their posterior uncertainty. The spatial patterns of posterior uncertainty reduction and their temporal evolution are explained in terms of barotropic dynamics. Global uncertainty teleconnection mechanisms are identified as barotropic uncertainty waves. Uncertainty coupling across different control fields is demonstrated by assimilation of sea surface height uncertainty. A second step in our UQ scheme consists in propagating prior and posterior uncertainties of the model controls onto model output variables of interest, here Drake Passage transport. Forward uncertainty propagation amounts to matrix transformation of the uncertainty covariances via the model Jacobian and its adjoint. Sources of uncertainties of the transport are revealed through analysis of the adjoint wave dynamics in the model. These adjoint (reversed) mechanisms are associated with the evolution of sensitivity fields and our method formally extends sensitivity analysis to uncertainty quantification. Inverse uncertainty propagation mechanisms can be linked to adjoint dynamics in a similar manner. The posterior correlations of controls are found to dominate the reduction of the transport uncertainty compared to the marginal uncertainty reduction of the
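The two linear-algebra steps described above, assimilation via the misfit Hessian and forward propagation via the Jacobian, can be sketched on a tiny system. The operators and covariances below are hypothetical 3-control stand-ins, not the MITgcm fields.

```python
import numpy as np

# Hypothetical prior covariance of three control variables
P_prior = np.diag([1.0, 4.0, 0.25])

# Observations constrain the controls through the misfit Hessian
# H = A^T R^-1 A (A: observation operator, R: observation error cov.)
A = np.array([[1.0, 0.5, 0.0],
              [0.0, 1.0, 1.0]])
R = np.diag([0.1, 0.2])
H = A.T @ np.linalg.inv(R) @ A

# Posterior covariance: combine the data information with the prior
P_post = np.linalg.inv(H + np.linalg.inv(P_prior))

# Forward propagation onto a scalar QoI (e.g. a transport):
# var(QoI) = J P J^T with J the Jacobian of the QoI w.r.t. the controls
J = np.array([2.0, -1.0, 0.5])
var_prior = J @ P_prior @ J
var_post = J @ P_post @ J
```

The posterior QoI variance is necessarily no larger than the prior one, and the reduction depends on the posterior correlations between controls, mirroring the finding above that correlations dominate the transport uncertainty reduction.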
Uncertainty-aware video visual analytics of tracked moving objects
Directory of Open Access Journals (Sweden)
Markus Höferlin
1969-12-01
Vast amounts of video data render manual video analysis useless, while recent automatic video analytics techniques suffer from insufficient performance. To alleviate these issues, we present a scalable and reliable approach exploiting the visual analytics methodology. This involves the user in the iterative process of exploration, hypothesis generation, and verification. Scalability is achieved by interactive filter definitions on trajectory features extracted by the automatic computer vision stage. We establish the interface between user and machine adopting the VideoPerpetuoGram (VPG) for visualization and enable users to provide filter-based relevance feedback. Additionally, users are supported in deriving hypotheses by context-sensitive statistical graphics. To allow for reliable decision making, we gather uncertainties introduced by the computer vision step, communicate this information to users through uncertainty visualization, and grant fuzzy hypothesis formulation to interact with the machine. Finally, we demonstrate the effectiveness of our approach by the video analysis mini challenge which was part of the IEEE Symposium on Visual Analytics Science and Technology 2009.
Analytic structure of QCD propagators in Minkowski space
Siringo, Fabio
2016-12-01
Analytical functions for the propagators of QCD, including a set of chiral quarks, are derived by a one-loop massive expansion in the Landau gauge, deep in the infrared. By analytic continuation, the spectral functions are studied in Minkowski space, yielding a direct proof of positivity violation and confinement from first principles. The dynamical breaking of chiral symmetry is described on the same footing as gluon mass generation, providing a unified picture. While dealing with the exact Lagrangian, the expansion is based on massive free-particle propagators, is safe in the infrared and is equivalent to the standard perturbation theory in the UV. By dimensional regularization, all diverging mass terms cancel exactly without including mass counterterms that would spoil the gauge and chiral symmetry of the Lagrangian. Universal scaling properties are predicted for the inverse dressing functions and shown to be satisfied by the lattice data. Complex conjugated poles are found for the gluon propagator, in agreement with the i-particle scenario.
Díez, C. J.; Cabellos, O.; Martínez, J. S.
2015-01-01
Several approaches have been developed in recent decades to tackle nuclear data uncertainty propagation problems in burn-up calculations. One proposed approach is the Hybrid Method, where uncertainties in nuclear data are propagated only through the depletion part of a burn-up problem. Because only depletion is addressed, only one-group cross sections are necessary, and hence, their collapsed one-group uncertainties. This approach has been applied successfully in several advanced reactor systems like EFIT (an ADS-like reactor) or ESFR (a sodium fast reactor) to assess uncertainties in the isotopic composition. However, a comparison with using multi-group energy structures was not carried out, and has to be performed in order to analyse the limitations of using one-group uncertainties.
Spin-Stabilized Spacecrafts: Analytical Attitude Propagation Using Magnetic Torques
Directory of Open Access Journals (Sweden)
Roberta Veloso Garcia
2009-01-01
An analytical approach for spin-stabilized satellite attitude propagation is presented, considering the influence of the residual magnetic torque and the eddy currents torque. Two approaches are used to examine the influence of external torques acting during the motion of the satellite, with the Earth's magnetic field described by the quadrupole model. The first approach includes only the residual magnetic torque in the motion equations, with the satellite in a circular or elliptical orbit. The second approach analyzes only the eddy currents torque, with the satellite in a circular orbit. The inclusion of these torques in the dynamic equations of spin-stabilized satellites yields the conditions to derive an analytical solution. The solutions show that the residual torque does not affect the spin velocity magnitude, contributing only to the precession and the drift of the spacecraft's spin axis, while the eddy currents torque causes an exponential decay of the angular velocity magnitude. Numerical simulations performed with data from the Brazilian satellites SCD1 and SCD2 show the period over which the analytical solution can be used for attitude propagation, within the dispersion range of the attitude determination system performance of the Satellite Control Center of the Brazilian National Research Institute.
Preliminary Results on Uncertainty Quantification for Pattern Analytics
Energy Technology Data Exchange (ETDEWEB)
Stracuzzi, David John [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Brost, Randolph [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Chen, Maximillian Gene [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Malinas, Rebecca [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Peterson, Matthew Gregor [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Phillips, Cynthia A. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Robinson, David G. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Woodbridge, Diane [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)
2015-09-01
This report summarizes preliminary research into uncertainty quantification for pattern analytics within the context of the Pattern Analytics to Support High-Performance Exploitation and Reasoning (PANTHER) project. The primary focus of PANTHER was to make large quantities of remote sensing data searchable by analysts. The work described in this report adds nuance to both the initial data preparation steps and the search process. Search queries are transformed from "does the specified pattern exist in the data?" to "how certain is the system that the returned results match the query?" We show example results for both data processing and search, and discuss a number of possible improvements for each.
A Multi-Model Approach for Uncertainty Propagation and Model Calibration in CFD Applications
Wang, Jian-xun; Xiao, Heng
2015-01-01
Proper quantification and propagation of uncertainties in computational simulations are of critical importance. This issue is especially challenging for CFD applications. A particular obstacle for uncertainty quantification in CFD problems is the large model discrepancy associated with the CFD models used for uncertainty propagation. Neglecting or improperly representing the model discrepancies leads to inaccurate and distorted uncertainty distributions for the Quantities of Interest. High-fidelity models, being accurate yet expensive, can accommodate only a small ensemble of simulations and thus lead to large interpolation errors and/or sampling errors; low-fidelity models can propagate a large ensemble, but can introduce large modeling errors. In this work, we propose a multi-model strategy to account for the influences of model discrepancies in uncertainty propagation and to reduce their impact on the predictions. Specifically, we take advantage of CFD models of multiple fidelities to estimate the model ...
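One common way to combine a small high-fidelity ensemble with a large low-fidelity one is a control-variate estimator; this is a generic multi-fidelity sketch, not necessarily the authors' exact strategy, and both "models" below are hypothetical analytic stand-ins.

```python
import numpy as np

rng = np.random.default_rng(6)

# Stand-in models: an "expensive" high-fidelity and a cheap low-fidelity one
f_hi = lambda x: np.sin(x) + 0.05 * x ** 2     # hypothetical hi-fi QoI
f_lo = lambda x: np.sin(x)                     # correlated lo-fi surrogate

x_paired = rng.normal(0.0, 1.0, 50)            # few affordable hi-fi runs
x_many = rng.normal(0.0, 1.0, 20000)           # large cheap lo-fi ensemble

y_hi, y_lo = f_hi(x_paired), f_lo(x_paired)
alpha = np.cov(y_hi, y_lo)[0, 1] / np.var(y_lo, ddof=1)

# Control-variate estimate of E[f_hi]: correct the small hi-fi ensemble
# mean using the well-resolved lo-fi mean
est = y_hi.mean() + alpha * (f_lo(x_many).mean() - y_lo.mean())
```

When the two fidelities are strongly correlated, the variance of the corrected estimate is set by the small residual between them rather than by the full high-fidelity variability, which is the economic argument for multi-model propagation.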
Forward propagation of parametric uncertainties through models of NDE inspection scenarios
Cherry, Matthew; Sabbagh, Harold; Aldrin, John; Knopp, Jeremy; Pilchak, Adam
2015-03-01
Forward uncertainty propagation has been a topic of interest to NDE researchers for several years. To this point, the purpose has been to gain an understanding of the uncertainties that can be seen in signals from NDE sensors given uncertainties in the geometric and material parameters of the problem. However, a complex analysis of an inspection scenario with high variability has not been performed. Furthermore, these methods have not seen direct practical application in the areas of model assisted probability of detection or inverse problems. In this paper, uncertainty due to spatial heterogeneity in material systems that undergo NDE inspection will be discussed. Propagation of this uncertainty through forward models of inspection scenarios will be outlined and the mechanisms for representing the spatial heterogeneity will be explained in detail. Examples will be provided that illustrate the effect of high variability in uncertainty propagation in the context of forward modeling.
Analytical probabilistic proton dose calculation and range uncertainties
Bangert, M.; Hennig, P.; Oelfke, U.
2014-03-01
We introduce the concept of analytical probabilistic modeling (APM) to calculate the mean and the standard deviation of intensity-modulated proton dose distributions under the influence of range uncertainties in closed form. For APM, range uncertainties are modeled with a multivariate Normal distribution p(z) over the radiological depths z. A pencil beam algorithm that parameterizes the proton depth dose d(z) with a weighted superposition of ten Gaussians is used. Hence, the integrals ∫ dz p(z) d(z) and ∫ dz p(z) d(z)2 required for the calculation of the expected value and standard deviation of the dose remain analytically tractable and can be efficiently evaluated. The means μk, widths δk, and weights ωk of the Gaussian components parameterizing the depth dose curves are found with least squares fits for all available proton ranges. We observe less than 0.3% average deviation of the Gaussian parameterizations from the original proton depth dose curves. Consequently, APM yields high accuracy estimates for the expected value and standard deviation of intensity-modulated proton dose distributions for two dimensional test cases. APM can accommodate arbitrary correlation models and account for the different nature of random and systematic errors in fractionated radiation therapy. Beneficial applications of APM in robust planning are feasible.
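The analytical tractability claimed above rests on the fact that the integral of a Gaussian pdf against a Gaussian depth-dose component is again a Gaussian. A minimal check, with hypothetical fit parameters for the Gaussian components:

```python
import numpy as np

# Hypothetical Gaussian parameterization of a depth dose curve d(z)
mu_k = np.array([5.0, 7.0, 9.0])      # component means  [cm]
delta_k = np.array([1.0, 0.8, 0.5])   # component widths [cm]
w_k = np.array([0.3, 0.5, 0.2])       # component weights

def expected_dose(mu_z, sigma_z):
    """Closed-form E[d(Z)] for range uncertainty Z ~ N(mu_z, sigma_z^2):
    each Gaussian-times-Gaussian integral is again a Gaussian."""
    s2 = delta_k ** 2 + sigma_z ** 2
    return float(np.sum(w_k * delta_k / np.sqrt(s2)
                        * np.exp(-(mu_z - mu_k) ** 2 / (2.0 * s2))))

# Cross-check against brute-force quadrature over the range uncertainty
z = np.linspace(-10.0, 25.0, 20001)
pz = np.exp(-(z - 7.0) ** 2 / (2 * 0.3 ** 2)) / (0.3 * np.sqrt(2 * np.pi))
dose = np.sum(w_k[:, None] * np.exp(-(z - mu_k[:, None]) ** 2
                                    / (2 * delta_k[:, None] ** 2)), axis=0)
g = pz * dose
numeric = float(np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(z)))
analytic = expected_dose(7.0, 0.3)
```

The same trick applies to the second moment, since d(z)² is again a sum of Gaussians, which is how APM obtains the dose standard deviation in closed form.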
'spup' - an R package for uncertainty propagation in spatial environmental modelling
Sawicka, Kasia; Heuvelink, Gerard
2016-04-01
Computer models have become a crucial tool in engineering and environmental sciences for simulating the behaviour of complex static and dynamic systems. However, while many models are deterministic, the uncertainty in their predictions needs to be estimated before they are used for decision support. Currently, advances in uncertainty propagation and assessment have been paralleled by a growing number of software tools for uncertainty analysis, but none has gained recognition as universally applicable, including for case studies with spatial models and spatial model inputs. Given the growing popularity and applicability of the open-source R programming language, we undertook a project to develop an R package that facilitates uncertainty propagation analysis in spatial environmental modelling. In particular, the 'spup' package provides functions for examining uncertainty propagation from input data and model parameters, via the environmental model, to model predictions. The functions include uncertainty model specification, stochastic simulation and propagation of uncertainty using Monte Carlo (MC) techniques, as well as several uncertainty visualization functions. Uncertain environmental variables are represented in the package as objects whose attribute values may be uncertain and described by probability distributions. Both numerical and categorical data types are handled. Spatial auto-correlation within an attribute and cross-correlation between attributes are also accommodated. For uncertainty propagation the package implements the MC approach with efficient sampling algorithms, i.e. stratified random sampling and Latin hypercube sampling. The design includes facilitation of parallel computing to speed up MC computation. The MC realizations may be used as input to environmental models called from R, or externally. Selected static and interactive visualization methods that are understandable by non-experts with limited background in
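The stratified sampling idea behind the package can be sketched in a few lines of pure Python (a stand-in for the concept only, not the actual 'spup' API): Latin hypercube sampling places exactly one draw in each of n equal-probability strata per input dimension before propagating through a toy model with assumed input distributions.

```python
import random
import statistics

def latin_hypercube(n, dims, rng):
    """n points in [0,1)^dims with exactly one draw per equal-width stratum per dimension."""
    cols = []
    for _ in range(dims):
        strata = list(range(n))
        rng.shuffle(strata)                      # decouple the strata across dimensions
        cols.append([(s + rng.random()) / n for s in strata])
    return list(zip(*cols))

rng = random.Random(42)
n = 200
samples = latin_hypercube(n, 2, rng)

# Toy "environmental model" y = a * b with assumed (purely illustrative) inputs
# a ~ U(1, 2), b ~ U(0, 10)
ys = [(1 + u1) * (10 * u2) for u1, u2 in samples]
ys_mean = statistics.mean(ys)
print(ys_mean, statistics.stdev(ys))  # mean should sit near E[a]E[b] = 7.5
```

The stratification makes the sample mean converge faster than plain random sampling for the same number of model runs, which is the efficiency argument for using it in MC propagation.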
Pragmatic aspects of uncertainty propagation: A conceptual review
Thacker, W.Carlisle
2015-09-11
When quantifying the uncertainty of the response of a computationally costly oceanographic or meteorological model stemming from the uncertainty of its inputs, practicality demands getting the most information using the fewest simulations. It is widely recognized that, by interpolating the results of a small number of simulations, results of additional simulations can be inexpensively approximated to provide a useful estimate of the variability of the response. Even so, as computing the simulations to be interpolated remains the biggest expense, the choice of these simulations deserves attention. When making this choice, two requirements should be considered: (i) the nature of the interpolation and (ii) the available information about input uncertainty. Examples comparing polynomial interpolation and Gaussian process interpolation are presented for three different views of input uncertainty.
Pragmatic aspects of uncertainty propagation: A conceptual review
Thacker, W. Carlisle; Iskandarani, Mohamed; Gonçalves, Rafael C.; Srinivasan, Ashwanth; Knio, Omar M.
2015-11-01
When quantifying the uncertainty of the response of a computationally costly oceanographic or meteorological model stemming from the uncertainty of its inputs, practicality demands getting the most information using the fewest simulations. It is widely recognized that, by interpolating the results of a small number of simulations, results of additional simulations can be inexpensively approximated to provide a useful estimate of the variability of the response. Even so, as computing the simulations to be interpolated remains the biggest expense, the choice of these simulations deserves attention. When making this choice, two requirements should be considered: (i) the nature of the interpolation and (ii) the available information about input uncertainty. Examples comparing polynomial interpolation and Gaussian process interpolation are presented for three different views of input uncertainty.
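The surrogate idea described above can be sketched in a few lines: run a handful of expensive simulations, interpolate them, then sample the cheap interpolant many times. The model, nodes and input distribution below are invented for illustration, using polynomial (Lagrange) interpolation, one of the two interpolation families the abstract compares.

```python
import math
import random
import statistics

def expensive_model(x):
    """Stand-in for a costly simulation (purely illustrative)."""
    return math.exp(0.5 * x) * math.sin(x)

# Three "simulations" at chosen nodes, then the quadratic interpolant through them
nodes = [0.5, 1.0, 1.5]
vals = [expensive_model(x) for x in nodes]

def surrogate(x):
    """Lagrange form of the quadratic through the three (node, value) pairs."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(nodes, vals)):
        li = 1.0
        for j, xj in enumerate(nodes):
            if j != i:
                li *= (x - xj) / (xi - xj)
        total += yi * li
    return total

# Input uncertainty x ~ N(1.0, 0.15^2), propagated through the cheap surrogate
rng = random.Random(0)
ys = [surrogate(rng.gauss(1.0, 0.15)) for _ in range(20000)]
mu, sd = statistics.mean(ys), statistics.stdev(ys)
print(mu, sd)
```

The 20000 surrogate evaluations cost essentially nothing; all the expense is in the three model runs, which is why the placement of those nodes relative to the input distribution matters.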
An algorithm to improve sampling efficiency for uncertainty propagation using sampling based method
Energy Technology Data Exchange (ETDEWEB)
Campolina, Daniel; Lima, Paulo Rubens I., E-mail: campolina@cdtn.br, E-mail: pauloinacio@cpejr.com.br [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG), Belo Horizonte, MG (Brazil). Servico de Tecnologia de Reatores; Pereira, Claubia; Veloso, Maria Auxiliadora F., E-mail: claubia@nuclear.ufmg.br, E-mail: dora@nuclear.ufmg.br [Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, MG (Brazil). Dept. de Engenharia Nuclear
2015-07-01
Sample size and computational uncertainty were varied in order to investigate sampling efficiency and convergence of the sampling-based method for uncertainty propagation. The transport code MCNPX was used to simulate a LWR model and allow the mapping from uncertain inputs of the benchmark experiment to uncertain outputs. Random sampling efficiency was improved through the use of an algorithm for selecting distributions. Mean range, standard deviation range and skewness were verified in order to obtain a better representation of the uncertainty figures. A standard deviation of 5 pcm in the propagated uncertainties over 10 n-sample replicates was adopted as the convergence criterion for the method. An estimate of 75 pcm uncertainty on the reactor k_eff was accomplished by using a sample of size 93 and a computational uncertainty of 28 pcm to propagate the 1σ uncertainty of the burnable poison radius. For a fixed computational time, in order to reduce the variance of the propagated uncertainty, it was found, for the example under investigation, that it is preferable to double the sample size rather than to double the number of particles followed in the Monte Carlo process of the MCNPX code. (author)
Campolina, Daniel de A. M.; Lima, Claubia P. B.; Veloso, Maria Auxiliadora F.
2014-06-01
For all the physical components that comprise a nuclear system there is an uncertainty. Assessing the impact of uncertainties in the simulation of fissionable material systems is essential for best-estimate calculations, which have been replacing conservative model calculations as computational power increases. The propagation of uncertainty in a simulation using a Monte Carlo code by sampling the input parameters is recent because of the huge computational effort required. In this work a sample space of MCNPX calculations was used to propagate the uncertainty. The sample size was optimized using the Wilks formula for a 95th percentile and a two-sided statistical tolerance interval of 95%. Uncertainties in input parameters of the reactor considered included geometry dimensions and densities. The capability of the sampling-based method for burnup calculations was demonstrated when the sample size is optimized and many parameter uncertainties are investigated together in the same input.
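The 95%/95% sample sizes quoted in these two abstracts (including the sample of size 93 for a two-sided tolerance interval) follow from the distribution-free Wilks formula. A minimal sketch that searches for the smallest conforming n:

```python
def wilks_size(coverage, confidence, two_sided=False):
    """Smallest sample size n meeting the Wilks (distribution-free) tolerance criterion."""
    n = 2 if two_sided else 1
    while True:
        if two_sided:
            # P(at least `coverage` of the population lies between sample min and max)
            conf = 1.0 - coverage ** n - n * (1.0 - coverage) * coverage ** (n - 1)
        else:
            # P(at least `coverage` of the population lies below the sample max)
            conf = 1.0 - coverage ** n
        if conf >= confidence:
            return n
        n += 1

print(wilks_size(0.95, 0.95))                  # 59  (one-sided 95/95)
print(wilks_size(0.95, 0.95, two_sided=True))  # 93  (two-sided 95/95)
```

The key property is that n is independent of the code and of the number of uncertain inputs, which is what makes the sampling-based approach affordable even when many parameter uncertainties are investigated together.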
Propagation of nuclear data uncertainties for ELECTRA burn-up calculations
Sjöstrand, Henrik; Alhassan, Erwin; Duan, Junfeng; Gustavsson, Cecilia; Koning, Arjan J.; Pomp, Stephan; Rochman, Dimitri; Österlund, Michael
2013-01-01
The European Lead-Cooled Training Reactor (ELECTRA) has been proposed as a training reactor for fast systems within the Swedish nuclear program. It is a low-power fast reactor cooled by pure liquid lead. In this work, we propagate the uncertainties in Pu-239 transport data to uncertainties in the fuel inventory of ELECTRA during the reactor life using the Total Monte Carlo approach (TMC). Within the TENDL project the nuclear models input parameters were randomized within their uncertainties a...
Analytic Matrix Method for the Study of Propagation Characteristics of a Bent Planar Waveguide
Institute of Scientific and Technical Information of China (English)
LIU Qing; CAO Zhuang-Qi; SHEN Qi-Shun; DOU Xiao-Ming; CHEN Ying-Li
2000-01-01
An analytic matrix method is used to analyze and accurately calculate the propagation constant and bending losses of a bent planar waveguide. This method gives not only a dispersion equation with explicit physical insight, but also accurate complex propagation constants.
Sonic Boom Pressure Signature Uncertainty Calculation and Propagation to Ground Noise
West, Thomas K., IV; Bretl, Katherine N.; Walker, Eric L.; Pinier, Jeremy T.
2015-01-01
The objective of this study was to outline an approach for the quantification of uncertainty in sonic boom measurements and to investigate the effect of various near-field uncertainty representation approaches on ground noise predictions. These approaches included a symmetric versus asymmetric uncertainty band representation and a dispersion technique based on a partial-sum Fourier series that allows for the inclusion of random error sources in the uncertainty. The near-field uncertainty was propagated to the ground level, along with additional uncertainty in the propagation modeling. Estimates of perceived loudness were obtained for the various types of uncertainty representation in the near field. Analyses were performed on three configurations of interest to the sonic boom community: the SEEB-ALR, the 69° Delta Wing, and the LM 1021-01. Results showed that representation of the near-field uncertainty plays a key role in ground noise predictions. Using a Fourier-series-based dispersion approach can double the amount of uncertainty in the ground noise compared to a pure bias representation. Compared to previous computational fluid dynamics results, uncertainty in ground noise predictions was greater when considering the near-field experimental uncertainty.
Methods for uncertainty propagation in life cycle assessment
Groen, E.A.; Heijungs, R.; Bokkers, E.A.M.; Boer, de I.J.M.
2014-01-01
Life cycle assessment (LCA) calculates the environmental impact of a product over its entire life cycle. Uncertainty analysis is an important aspect in LCA, and is usually performed using Monte Carlo sampling. In this study, Monte Carlo sampling, Latin hypercube sampling, quasi Monte Carlo sampling,
Addressing Uncertainty in Signal Propagation and Sensor Performance Predictions
2008-11-01
...ance on how to place the sensors. Time of deployment is weeks or months from present, so weather/terrain conditions are only known in a clima... temporal behavior based on the climatological/historical conditions. "There are known knowns; there...
An analytical approach for the Propagation Saw Test
Benedetti, Lorenzo; Fischer, Jan-Thomas; Gaume, Johan
2016-04-01
The Propagation Saw Test (PST) [1, 2] is an experimental in-situ technique that has been introduced to assess crack propagation propensity in weak snowpack layers buried below cohesive snow slabs. This test attracted the interest of a large number of practitioners, being relatively easy to perform and providing useful insights for the evaluation of snow instability. The PST procedure requires isolating a snow column 30 centimeters wide and at least 1 meter long in the downslope direction. Then, once the stratigraphy is known (e.g. from a manual snow profile), a saw is used to cut a weak layer which could fail, potentially leading to the release of a slab avalanche. If the length of the saw cut reaches the so-called critical crack length, the onset of crack propagation occurs. Furthermore, depending on snow properties, the crack in the weak layer can initiate the fracture and detachment of the overlying slab. Statistical studies over a large set of field data confirmed the relevance of the PST, highlighting the positive correlation between test results and the likelihood of avalanche release [3]. Recent works provided key information on the conditions for the onset of crack propagation [4] and on the evolution of slab displacement during the test [5]. In addition, experimental studies [6] and simplified models [7] focused on the qualitative description of snowpack properties leading to different failure types, namely full propagation or fracture arrest (with or without slab fracture). However, besides current numerical studies utilizing discrete element methods [8], little attention has been devoted to a detailed analytical description of the PST able to give a comprehensive mechanical framework of the sequence of processes involved in the test. Consequently, this work aims to give a quantitative tool for an exhaustive interpretation of the PST, drawing attention to the important parameters that influence the test outcomes. First, starting from a pure
Propagation of Computational Uncertainty Using the Modern Design of Experiments
DeLoach, Richard
2007-01-01
This paper describes the use of formally designed experiments to aid in the error analysis of a computational experiment. A method is described by which the underlying code is approximated with relatively low-order polynomial graduating functions represented by truncated Taylor series approximations to the true underlying response function. A resource-minimal approach is outlined by which such graduating functions can be estimated from a minimum number of case runs of the underlying computational code. Certain practical considerations are discussed, including ways and means of coping with high-order response functions. The distributional properties of prediction residuals are presented and discussed. A practical method is presented for quantifying that component of the prediction uncertainty of a computational code that can be attributed to imperfect knowledge of independent variable levels. This method is illustrated with a recent assessment of uncertainty in computational estimates of Space Shuttle thermal and structural reentry loads attributable to ice and foam debris impact on ascent.
Error Estimation and Uncertainty Propagation in Computational Fluid Mechanics
Zhu, J. Z.; He, Guowei; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
Numerical simulation has now become an integral part of engineering design process. Critical design decisions are routinely made based on the simulation results and conclusions. Verification and validation of the reliability of the numerical simulation is therefore vitally important in the engineering design processes. We propose to develop theories and methodologies that can automatically provide quantitative information about the reliability of the numerical simulation by estimating numerical approximation error, computational model induced errors and the uncertainties contained in the mathematical models so that the reliability of the numerical simulation can be verified and validated. We also propose to develop and implement methodologies and techniques that can control the error and uncertainty during the numerical simulation so that the reliability of the numerical simulation can be improved.
Labasque, Thierry; Aquilina, Luc; Visser, Ate; Vergnaud, Virginie
2014-05-01
An inter-laboratory comparison exercise dedicated to environmental tracers used for groundwater dating was organized in 2012 in France. The goal was to compare sampling and analytical protocols through results obtained by the community of groundwater dating laboratories. Sampling and analytical protocols were compared through three different exercises on various supports: (1) groundwater from a homogeneous aquifer, (2) groundwater from a fractured heterogeneous aquifer and (3) an air standard. These exercises allowed 31 laboratories from 14 countries to compare their protocols for both sampling and analysis, and allowed discussion of the uncertainties related to the sampling protocols of each laboratory. The results show a good agreement between laboratories on the aquifers and the air standard. The dispersion of SF6 results in the air standard is low (rsd 2%) compared to CFCs (rsd 3 to 7%), even though its concentration is two orders of magnitude lower. Results obtained in recent groundwater (recharge after 1980) show that the uncertainty on groundwater dating with SF6 is between 3 and 4 years. This large uncertainty is mainly due to sampling and/or analytical problems. For CFCs, uncertainties obtained over all the laboratories are less than 2 years for groundwater with recharge between 1965 and 1996. The goal of the inter-laboratory comparison exercise was also to quantify the analytical uncertainty of the 3H and noble gas measurements and to assess whether they meet the requirements for 3H/3He dating and noble gas paleotemperature reconstruction. The reproducibility of the tritium measurements was 13.5%. The reproducibility of the 3He/4He ratio and of the 4He, Ne, Ar, Kr and Xe concentrations was 1.4%, 1.8%, 1.5%, 2.2%, 2.9% and 2.4%, respectively. The propagated uncertainty of the tritium and noble gas measurements meets the desired precision for typical 3H/3He dating applications. However, the measurement uncertainties for the noble gas concentrations are insufficient to
Danny Raj, M.; Rengaswamy, R.
2017-03-01
A two-dimensional concentrated emulsion exhibits spontaneous rapid destabilization through an avalanche of coalescence events which propagate through the assembly stochastically. We propose a deterministic model to explain the average dynamics of the avalanching process. The dynamics of the avalanche phenomenon is studied as a function of a composite parameter, the decay time ratio, which characterizes the ratio of the propensity of coalescence to cease propagation to that of propagation. When this ratio is small, the avalanche grows autocatalytically to destabilize the emulsion. Using a scaling analysis, we unravel the relation between a local characteristic of the system and a global, system-wide effect. The anisotropic nature of local coalescence results in a system-size-dependent transition from nonautocatalytic to autocatalytic behavior. By incorporating uncertainty into the parameters in the model, several possible realizations of the coalescence avalanche are generated. The results are compared with the Monte Carlo simulations to derive insights into how the uncertainty propagates in the system.
Understanding uncertainty propagation in life cycle assessments of waste management systems
DEFF Research Database (Denmark)
Bisinella, Valentina; Conradsen, Knut; Christensen, Thomas Højlund
2015-01-01
Uncertainty analysis in Life Cycle Assessments (LCAs) of waste management systems often proves obscure and complex, with key parameters rarely determined on a case-by-case basis. The paper shows an application of a simplified approach to uncertainty coupled with a Global Sensitivity Analysis (GSA......) perspective on three alternative waste management systems for Danish single-family household waste. The approach provides a fast and systematic method to select the most important parameters in the LCAs, understand their propagation and contribution to uncertainty....
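The Global Sensitivity Analysis perspective can be illustrated with a generic brute-force estimate of a Sobol first-order index on a toy linear model (invented for illustration, not the paper's LCA model): the index S_i measures the share of output variance explained by input X_i alone.

```python
import random
import statistics

def model(x1, x2, x3):
    # Toy additive model: analytically S1 = 2/3, S2 = S3 = 1/6
    return 2 * x1 + x2 + x3

rng = random.Random(7)
N_OUTER, N_INNER = 400, 400

def first_order_index(which):
    """Brute-force double-loop estimate of S_i = Var(E[Y | X_i]) / Var(Y)."""
    cond_means, all_y = [], []
    for _ in range(N_OUTER):
        xi = rng.random()                      # fix X_i in the outer loop
        ys = []
        for _ in range(N_INNER):
            xs = [rng.random() for _ in range(3)]
            xs[which] = xi                     # vary everything else
            ys.append(model(*xs))
        cond_means.append(statistics.mean(ys))
        all_y.extend(ys)
    return statistics.variance(cond_means) / statistics.variance(all_y)

s1 = first_order_index(0)
print(s1)  # close to the analytic value 2/3
```

In practice more efficient estimators (e.g. Saltelli's scheme) replace the double loop, but the ranking-by-variance-share logic that lets a GSA select the few parameters worth detailed uncertainty characterization is the same.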
Comparison of nuclear data uncertainty propagation methodologies for PWR burn-up simulations
Diez, Carlos Javier; Hoefer, Axel; Porsch, Dieter; Cabellos, Oscar
2014-01-01
Several methodologies using different levels of approximation have been developed for propagating nuclear data uncertainties in nuclear burn-up simulations. Most methods fall into the two broad classes of Monte Carlo approaches, which are exact apart from statistical uncertainties but require additional computation time, and first-order perturbation theory approaches, which are efficient for not too large numbers of considered response functions but only applicable for sufficiently small nuclear data uncertainties. Some methods neglect isotopic composition uncertainties induced by the depletion steps of the simulations, others neglect neutron flux uncertainties, and the accuracy of a given approximation is often very hard to quantify. In order to get a better sense of the impact of different approximations, this work aims to compare results obtained based on different approximate methodologies with an exact method, namely the NUDUNA Monte Carlo based approach developed by AREVA GmbH. In addition, the impact ...
Propagation of Nuclear Data Uncertainties for ELECTRA Burn-up Calculations
Sjöstrand, H.; Alhassan, E.; Duan, J.; Gustavsson, C.; Koning, A. J.; Pomp, S.; Rochman, D.; Österlund, M.
2014-04-01
The European Lead-Cooled Training Reactor (ELECTRA) has been proposed as a training reactor for fast systems within the Swedish nuclear program. It is a low-power fast reactor cooled by pure liquid lead. In this work, we propagate the uncertainties in 239Pu transport data to uncertainties in the fuel inventory of ELECTRA during the reactor lifetime using the Total Monte Carlo approach (TMC). Within the TENDL project, nuclear models input parameters were randomized within their uncertainties and 740 239Pu nuclear data libraries were generated. These libraries are used as inputs to reactor codes, in our case SERPENT, to perform uncertainty analysis of nuclear reactor inventory during burn-up. The uncertainty in the inventory determines uncertainties in: the long-term radio-toxicity, the decay heat, the evolution of reactivity parameters, gas pressure and volatile fission product content. In this work, a methodology called fast TMC is utilized, which reduces the overall calculation time. The uncertainty of some minor actinides were observed to be rather large and therefore their impact on multiple recycling should be investigated further. It was also found that, criticality benchmarks can be used to reduce inventory uncertainties due to nuclear data. Further studies are needed to include fission yield uncertainties, more isotopes, and a larger set of benchmarks.
Mangado, Nerea; Piella, Gemma; Noailly, Jérôme; Pons-Prats, Jordi; Ballester, Miguel Ángel González
2016-01-01
Computational modeling has become a powerful tool in biomedical engineering thanks to its potential to simulate coupled systems. However, real parameters are usually not accurately known, and variability is inherent in living organisms. To cope with this, probabilistic tools, statistical analysis and stochastic approaches have been used. This article aims to review the analysis of uncertainty and variability in the context of finite element modeling in biomedical engineering. Characterization techniques and propagation methods are presented, as well as examples of their applications in biomedical finite element simulations. Uncertainty propagation methods, both non-intrusive and intrusive, are described. Finally, pros and cons of the different approaches and their use in the scientific community are presented. This leads us to identify future directions for research and methodological development of uncertainty modeling in biomedical engineering.
Directory of Open Access Journals (Sweden)
Wansik Yu
2016-01-01
The common approach to quantifying precipitation forecast uncertainty is ensemble simulation, where a numerical weather prediction (NWP) model is run for a number of cases with slightly different initial conditions. In practice, the spread of ensemble members in terms of flood discharge is used as a measure of forecast uncertainty due to uncertain precipitation forecasts. This study presents the propagation of rainfall forecast uncertainty into the hydrological response at catchment scale through distributed rainfall-runoff modeling based on the forecasted ensemble rainfall of an NWP model. First, the forecast rainfall error based on the BIAS is compared with the flood forecast error to assess the error propagation. Second, the variability of flood forecast uncertainty with catchment scale is discussed using the ensemble spread. We then also assess the flood forecast uncertainty with catchment scale using a regression equation estimated between ensemble rainfall BIAS and discharge BIAS. Finally, the flood forecast uncertainty in terms of RMSE of specific discharge at catchment scale is discussed. Our study is carried out and verified using the largest flood event, caused by typhoon “Talas” of 2011, over the 33 subcatchments of the Shingu river basin (2,360 km2), which is located in the Kii Peninsula, Japan.
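The three verification measures used above (ensemble spread, BIAS and RMSE) can be sketched on a toy ensemble of discharge forecasts against one observed value; all numbers are invented for illustration.

```python
import math
import statistics

# Hypothetical ensemble of forecast peak discharges (m^3/s) and an observed value
ensemble = [420.0, 515.0, 480.0, 610.0, 555.0, 470.0, 530.0, 590.0]
observed = 500.0

ens_mean = statistics.mean(ensemble)
spread = statistics.stdev(ensemble)          # ensemble spread as an uncertainty measure
bias = ens_mean - observed                   # BIAS of the ensemble mean
rmse = math.sqrt(statistics.mean((m - observed) ** 2 for m in ensemble))

print(f"mean={ens_mean:.2f} spread={spread:.2f} bias={bias:.2f} rmse={rmse:.2f}")
```

Comparing rainfall BIAS with discharge BIAS across subcatchments of different sizes is then what reveals how the forecast error amplifies or damps through the rainfall-runoff transformation.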
Energy Technology Data Exchange (ETDEWEB)
Mullor, R. [Dpto. Estadistica e Investigacion Operativa, Universidad Alicante (Spain); Sanchez, A., E-mail: aisanche@eio.upv.e [Dpto. Estadistica e Investigacion Operativa Aplicadas y Calidad, Universidad Politecnica Valencia, Camino de Vera s/n 46022 (Spain); Martorell, S. [Dpto. Ingenieria Quimica y Nuclear, Universidad Politecnica Valencia (Spain); Martinez-Alzamora, N. [Dpto. Estadistica e Investigacion Operativa Aplicadas y Calidad, Universidad Politecnica Valencia, Camino de Vera s/n 46022 (Spain)
2011-06-15
Safety related systems performance optimization is classically based on quantifying the effects that testing and maintenance activities have on reliability and cost (R+C). However, R+C quantification is often incomplete in the sense that important uncertainties may not be considered. An important number of studies have been published in the last decade in the field of R+C based optimization considering uncertainties. They have demonstrated that inclusion of uncertainties in the optimization brings the decision maker insights concerning how uncertain the R+C results are and how this uncertainty does matter as it can result in differences in the outcome of the decision making process. Several methods of uncertainty propagation based on the theory of tolerance regions have been proposed in the literature depending on the particular characteristics of the variables in the output and their relations. In this context, the objective of this paper focuses on the application of non-parametric and parametric methods to analyze uncertainty propagation, which will be implemented on a multi-objective optimization problem where reliability and cost act as decision criteria and maintenance intervals act as decision variables. Finally, a comparison of results of these applications and the conclusions obtained are presented.
Parker, Jack C.; Park, Eungyu; Tang, Guoping
2008-11-01
A vertically-integrated analytical model for dissolved phase transport is described that considers a time-dependent DNAPL source based on the upscaled dissolution kinetics model of Parker and Park with extensions to consider time-dependent source zone biodecay, partial source mass reduction, and remediation-enhanced source dissolution kinetics. The model also considers spatial variability in aqueous plume decay, which is treated as the sum of aqueous biodecay and volatilization due to diffusive transport and barometric pumping through the unsaturated zone. The model is implemented in Excel/VBA coupled with (1) an inverse solution that utilizes prior information on model parameters and their uncertainty to condition the solution, and (2) an error analysis module that computes parameter covariances and total prediction uncertainty due to regression error and parameter uncertainty. A hypothetical case study is presented to evaluate the feasibility of calibrating the model from limited noisy field data. The results indicate that prediction uncertainty increases significantly over time following calibration, primarily due to propagation of parameter uncertainty. However, differences between the predicted performance of source zone partial mass reduction and the known true performance were reasonably small. Furthermore, a clear difference is observed between the predicted performance for the remedial action scenario versus that for a no-action scenario, which is consistent with the true system behavior. The results suggest that the model formulation can be effectively utilized to assess monitored natural attenuation and source remediation options if careful attention is given to model calibration and prediction uncertainty issues.
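The propagation of parameter covariance into total prediction uncertainty can be sketched with the first-order (delta) method on a toy decay model; the model, parameter values and covariance below are invented stand-ins, not the paper's upscaled dissolution model or its calibrated values.

```python
import math

def predict(params, t):
    """Toy stand-in for the calibrated model: C(t) = C0 * exp(-k t)."""
    c0, k = params
    return c0 * math.exp(-k * t)

def prediction_variance(params, cov, t, eps=1e-6):
    """First-order (delta method) propagation: var(f) ~= J . Cov . J^T,
    with the Jacobian J taken by forward finite differences."""
    base = predict(params, t)
    jac = []
    for i in range(len(params)):
        p = list(params)
        p[i] += eps
        jac.append((predict(p, t) - base) / eps)
    return sum(jac[i] * cov[i][j] * jac[j]
               for i in range(len(params))
               for j in range(len(params)))

params = (100.0, 0.2)                      # hypothetical C0 and decay rate k
cov = [[25.0, 0.0], [0.0, 0.0004]]         # hypothetical parameter covariance from a fit

for t in (0.0, 5.0, 10.0):
    print(t, math.sqrt(prediction_variance(params, cov, t)))
```

Evaluating the variance along the prediction horizon shows how each parameter's uncertainty contributes at different times, which is the mechanism behind the growth of prediction uncertainty after calibration noted in the abstract.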
Propagation of statistical and nuclear data uncertainties in Monte Carlo burn-up calculations
Energy Technology Data Exchange (ETDEWEB)
Garcia-Herranz, Nuria [Departamento de Ingenieria Nuclear, Universidad Politecnica de Madrid, UPM (Spain)], E-mail: nuria@din.upm.es; Cabellos, Oscar [Departamento de Ingenieria Nuclear, Universidad Politecnica de Madrid, UPM (Spain); Sanz, Javier [Departamento de Ingenieria Energetica, Universidad Nacional de Educacion a Distancia, UNED (Spain); Juan, Jesus [Laboratorio de Estadistica, Universidad Politecnica de Madrid, UPM (Spain); Kuijper, Jim C. [NRG - Fuels, Actinides and Isotopes Group, Petten (Netherlands)
2008-04-15
Two methodologies to propagate the uncertainties on the nuclide inventory in combined Monte Carlo-spectrum and burn-up calculations are presented, based on sensitivity/uncertainty and random sampling techniques (uncertainty Monte Carlo method). Both enable the assessment of the impact of uncertainties in the nuclear data as well as uncertainties due to the statistical nature of the Monte Carlo neutron transport calculation. The methodologies are implemented in our MCNP-ACAB system, which combines the neutron transport code MCNP-4C and the inventory code ACAB. A high burn-up benchmark problem is used to test the MCNP-ACAB performance in inventory predictions, with no uncertainties. A good agreement is found with the results of other participants. This benchmark problem is also used to assess the impact of nuclear data uncertainties and statistical flux errors in high burn-up applications. A detailed calculation is performed to evaluate the effect of cross-section uncertainties in the inventory prediction, taking into account the temporal evolution of the neutron flux level and spectrum. Very large uncertainties are found at the unusually high burn-up of this exercise (800 MWd/kgHM). To compare the impact of the statistical errors in the calculated flux with that of the cross-section uncertainties, a simplified problem is considered, taking a constant neutron flux level and spectrum. It is shown that, provided that the flux statistical deviations in the Monte Carlo transport calculation do not exceed a given value, the effect of the flux errors on the calculated isotopic inventory is negligible (even at very high burn-up) compared to the effect of the large cross-section uncertainties available at present in the data files.
Propagation of nuclear data uncertainties for ELECTRA burn-up calculations
Sjöstrand, H; Alhassan, E; Duan, J; Gustavsson, C; Koning, A; Pomp, S; Rochman, D; Österlund, M
2013-01-01
The European Lead-Cooled Training Reactor (ELECTRA) has been proposed as a training reactor for fast systems within the Swedish nuclear program. It is a low-power fast reactor cooled by pure liquid lead. In this work, we propagate the uncertainties in Pu-239 transport data to uncertainties in the fuel inventory of ELECTRA during the reactor life using the Total Monte Carlo approach (TMC). Within the TENDL project the nuclear models input parameters were randomized within their uncertainties and 740 Pu-239 nuclear data libraries were generated. These libraries are used as inputs to reactor codes, in our case SERPENT, to perform uncertainty analysis of nuclear reactor inventory during burn-up. The uncertainty in the inventory determines uncertainties in: the long-term radio-toxicity, the decay heat, the evolution of reactivity parameters, gas pressure and volatile fission product content. In this work, a methodology called fast TMC is utilized, which reduces the overall calculation time. The uncertainty in the ...
Fragmentation cross-sections and model uncertainties in Cosmic Ray propagation physics
Tomassetti, Nicola
2015-01-01
Abundances and energy spectra of cosmic ray nuclei are being measured with high accuracy by the AMS experiment. These observations can provide tight constraints on the propagation models of galactic cosmic rays. In view of the release of these data, I present an evaluation of the model uncertainties associated with the cross-sections for secondary production of Li-Be-B nuclei in cosmic rays. I discuss the role of cross-section uncertainties in the calculation of the boron-to-carbon and beryllium-to-boron ratios, as well as their impact on the determination of the cosmic-ray transport parameters.
The Dark Side of the Propagators: exploring their analytic properties by a massive expansion
Siringo, Fabio
2017-03-01
Analytical functions for the propagators of QCD, including a set of chiral quarks, are derived by a one-loop massive expansion in the Landau gauge, and are studied in Minkowski space, yielding a direct proof of positivity violation and confinement from first principles. Complex conjugated poles are found for the gluon propagator.
Batista, Rafael Alves; di Matteo, Armando; van Vliet, Arjen; Walz, David
2015-01-01
The results of simulations of the extragalactic propagation of ultra-high energy cosmic rays (UHECRs) have intrinsic uncertainties due to poorly known physical quantities and approximations used in the codes. We quantify the uncertainties in the simulated UHECR spectrum and composition due to different models for the extragalactic background light (EBL), different photodisintegration setups, approximations concerning photopion production and the use of different simulation codes. We discuss the results for several representative source scenarios with proton, nitrogen or iron at injection. For this purpose we used SimProp and CRPropa, two publicly available codes for Monte Carlo simulations of UHECR propagation. CRPropa is a detailed and extensive simulation code, while SimProp aims to achieve acceptable results using a simpler code. We show that especially the choices for the EBL model and the photodisintegration setup can have a considerable impact on the simulated UHECR spectrum and composition.
Wittig, A.; Di Lizia, P.; Armellin, R.; Zazzera, F. B.; Makino, K.; Berz, M.
2014-01-01
Current approaches to uncertainty propagation in astrodynamics mainly refer to linearized models or Monte Carlo simulations. Naive linear methods fail in nonlinear dynamics, whereas Monte Carlo simulations tend to be computationally intensive. Differential algebra has already proven to be an efficient compromise by replacing thousands of pointwise integrations of Monte Carlo runs with the fast evaluation of the arbitrary order Taylor expansion of the flow of the dynamics. However, the current...
Uncertainty in Damage Detection, Dynamic Propagation and Just-in-Time Networks
2015-08-03
J. Pure and Appl. Math., 80 (2012), 93-145. [7] H.T. Banks and W. Clayton Thompson, Least squares estimation of probability measures in the... Three distinct areas of investigation: I. Uncertainty Quantification: Propagation and Stochastic Systems; II. Numerical Interface Methods; III. Non...
Sensor Analytics: Radioactive gas Concentration Estimation and Error Propagation
Energy Technology Data Exchange (ETDEWEB)
Anderson, Dale N.; Fagan, Deborah K.; Suarez, Reynold; Hayes, James C.; McIntyre, Justin I.
2007-04-15
This paper develops the mathematical statistics of a radioactive gas quantity measurement and the associated error propagation. The probabilistic development is a different approach to deriving attenuation equations and offers easy extension to more complex gas analysis components through simulation. The mathematical development assumes a sequential process with three components: (I) collection of an environmental sample, (II) extraction of component gases from the sample through gas separation chemistry, and (III) estimation of the radioactivity of the component gases.
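The three-component sequence lends itself to simulation-based error propagation: draw each stage from its error distribution and push the draws through the estimator. The stage distributions and numbers below are hypothetical, not those of the paper.

```python
import random
import statistics

def simulate_measurement(rng):
    # Stage I: environmental sample collection (hypothetical volume and units)
    volume = rng.gauss(100.0, 2.0)
    # Stage II: gas separation chemistry with uncertain extraction efficiency
    efficiency = min(max(rng.gauss(0.85, 0.03), 0.01), 1.0)
    # Stage III: counting of the extracted gas (Poisson ~ normal approximation)
    counts = rng.gauss(500.0, 500.0 ** 0.5)
    # Estimator: activity concentration per unit sample volume
    return counts / (volume * efficiency)

rng = random.Random(1)
samples = [simulate_measurement(rng) for _ in range(10000)]
conc_mean = statistics.mean(samples)
conc_std = statistics.stdev(samples)
```

The resulting spread combines all three stage uncertainties in quadrature (roughly 6% relative here), and extending the chain is just a matter of adding another sampled stage to the function.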
Uncertainty propagation in a 3-D thermal code for performance assessment of a nuclear waste disposal
Energy Technology Data Exchange (ETDEWEB)
Dutfoy, A. [Electricite de France (EDF), Research and Development Div., Safety and Reliability Branch, ESF, 92 - Clamart (France); Ritz, J.B. [Electricite de France (EDF), Research and Development Div., Fluid Mechanics and Heat Transfer, MFTT, 78 - Chatou (France)
2001-07-01
Given the very large time scales involved, the performance assessment of a nuclear waste repository requires numerical modelling. Because the exact values of the input parameters are uncertain, we have to analyse the impact of these uncertainties on the outcome of the physical models. The EDF Research and Development Division has developed a reliability method to propagate these uncertainties or variability through models, which requires far fewer physical simulations than the usual simulation methods. We apply the reliability method MEFISTO to a base case modelling the heat transfer in a virtual disposal at the future site of the French underground research laboratory in the East of France. This study is conducted in collaboration with ANDRA, the French Nuclear Waste Management Agency. With this exercise, we evaluate the thermal behaviour of a disposal concept with respect to the variation of physical parameters and their uncertainty. (author)
GCR Environmental Models III: GCR Model Validation and Propagated Uncertainties in Effective Dose
Slaba, Tony C.; Xu, Xiaojing; Blattnig, Steve R.; Norman, Ryan B.
2014-01-01
This is the last of three papers focused on quantifying the uncertainty associated with galactic cosmic ray (GCR) models used for space radiation shielding applications. In the first paper, it was found that GCR ions with Z>2 and boundary energy below 500 MeV/nucleon induce less than 5% of the total effective dose behind shielding. This is an important finding since GCR model development and validation have been heavily biased toward Advanced Composition Explorer/Cosmic Ray Isotope Spectrometer measurements below 500 MeV/nucleon. Weights were also developed that quantify the relative contribution of defined GCR energy and charge groups to effective dose behind shielding. In the second paper, it was shown that these weights could be used to efficiently propagate GCR model uncertainties into effective dose behind shielding. In this work, uncertainties are quantified for a few commonly used GCR models. A validation metric is developed that accounts for measurement uncertainty, and the metric is coupled to the fast uncertainty propagation method. The Badhwar-O'Neill (BON) 2010 and 2011 models and the Matthia GCR model are compared to an extensive measurement database. It is shown that BON2011 systematically overestimates heavy ion fluxes in the range 0.5-4 GeV/nucleon. BON2010 and BON2011 also show moderate to large errors in reproducing past solar activity near the 2000 solar maximum and the 2010 solar minimum. It is found that all three models induce relative errors in effective dose in the interval [-20%, 20%] at a 68% confidence level. The BON2010 and Matthia models are found to have similar overall uncertainty estimates and are preferred for space radiation shielding applications.
Myers, Casey A; Laz, Peter J; Shelburne, Kevin B; Davidson, Bradley S
2015-05-01
Uncertainty that arises from measurement error and parameter estimation can significantly affect the interpretation of musculoskeletal simulations; however, these effects are rarely addressed. The objective of this study was to develop an open-source probabilistic musculoskeletal modeling framework to assess how measurement error and parameter uncertainty propagate through a gait simulation. A baseline gait simulation was performed for a male subject using OpenSim for three stages: inverse kinematics, inverse dynamics, and muscle force prediction. A series of Monte Carlo simulations were performed that considered intrarater variability in marker placement, movement artifacts in each phase of gait, variability in body segment parameters, and variability in muscle parameters calculated from cadaveric investigations. Propagation of uncertainty was performed by also using the output distributions from one stage as input distributions to subsequent stages. Confidence bounds (5-95%) and sensitivity of outputs to model input parameters were calculated throughout the gait cycle. The combined impact of uncertainty resulted in mean bounds that ranged from 2.7° to 6.4° in joint kinematics, 2.7 to 8.1 N m in joint moments, and 35.8 to 130.8 N in muscle forces. The impact of movement artifact was 1.8 times larger than any other propagated source. Sensitivity to specific body segment parameters and muscle parameters was linked to where in the gait cycle they were calculated. We anticipate that through the increased use of probabilistic tools, researchers will better understand the strengths and limitations of their musculoskeletal simulations and more effectively use simulations to evaluate hypotheses and inform clinical decisions.
Analytical Model for Fictitious Crack Propagation in Concrete Beams
DEFF Research Database (Denmark)
Ulfkjær, J. P.; Krenk, Steen; Brincker, Rune
1995-01-01
An analytical model for load-displacement curves of concrete beams is presented. The load-displacement curve is obtained by combining two simple models. The fracture is modeled by a fictitious crack in an elastic layer around the midsection of the beam. Outside the elastic layer the deformations ... starts to grow correspond to the same bending moment. Closed-form solutions for the maximum size of the fracture zone and the minimum slope on the load-displacement curve are given.
Energy Technology Data Exchange (ETDEWEB)
Isukapalli, S.S.; Roy, A.; Georgopoulos, P.G. [Rutgers Univ., Piscataway, NJ (United States). Environmental and Occupational Health Sciences Inst.]|[Univ. of Medicine and Dentistry, Piscataway, NJ (United States)
1998-06-01
Comprehensive uncertainty analyses of complex models of environmental and biological systems are essential but often not feasible due to the computational resources they require. Traditional methods, such as standard Monte Carlo and Latin Hypercube Sampling, for propagating uncertainty and developing probability densities of model outputs, may in fact require performing a prohibitive number of model simulations. An alternative is offered, for a wide range of problems, by the computationally efficient Stochastic Response Surface Methods (SRSMs) for uncertainty propagation. These methods extend the classical response surface methodology to systems with stochastic inputs and outputs. This is accomplished by approximating both inputs and outputs of the uncertain system through stochastic series of well-behaved standard random variables; the series expansions of the outputs contain unknown coefficients which are calculated by a method that uses the results of a limited number of model simulations. Two case studies are presented here involving (a) a physiologically-based pharmacokinetic (PBPK) model for perchloroethylene (PERC) for humans, and (b) an atmospheric photochemical model, the Reactive Plume Model (RPM-IV). The results obtained agree closely with those of traditional Monte Carlo and Latin Hypercube Sampling methods, while significantly reducing the required number of model simulations.
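The core SRSM idea can be sketched for a single standard normal input: expand the output in Hermite polynomials, fit the unknown coefficients by least squares from a handful of model runs, and read the output moments directly off the coefficients. The `model` function and collocation points below are illustrative, not the PBPK or plume models of the study.

```python
import math

def model(x):
    # stand-in for an expensive simulation (hypothetical smooth response)
    return math.exp(0.3 * x)

def hermite(xi):
    # probabilists' Hermite polynomials H0..H2 of a standard normal variable
    return [1.0, xi, xi * xi - 1.0]

def solve3(a, b):
    # Gaussian elimination for a 3x3 system (no pivoting; fine for this demo)
    n = 3
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

# A handful of collocation runs replaces thousands of Monte Carlo samples
pts = [-2.0, -1.0, 0.0, 1.0, 2.0]
ys = [model(p) for p in pts]
H = [hermite(p) for p in pts]
# Normal equations for the least-squares fit of the expansion coefficients
A = [[sum(H[i][r] * H[i][c] for i in range(len(pts))) for c in range(3)] for r in range(3)]
rhs = [sum(H[i][r] * ys[i] for i in range(len(pts))) for r in range(3)]
coef = solve3(A, rhs)

# Output moments follow analytically from the expansion:
# mean = c0, variance = c1^2 + 2*c2^2 (orthogonality of the Hermite basis)
mean_est = coef[0]
var_est = coef[1] ** 2 + 2.0 * coef[2] ** 2
```

Five model runs recover the mean and variance of exp(0.3X), X ~ N(0,1), to within a few percent of the exact lognormal moments.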
Analytical Model for Fictitious Crack Propagation in Concrete Beams
DEFF Research Database (Denmark)
Ulfkjær, J. P.; Krenk, S.; Brincker, Rune
-displacement curve where the fictitious crack starts to develop, and the point where the real crack starts to grow will always correspond to the same bending moment. Closed-form solutions for the maximum size of the fracture zone and the minimum slope on the load-displacement curve are given. The latter result......An analytical model for load-displacement curves of unreinforced notched and un-notched concrete beams is presented. The load-displacement curve is obtained by combining two simple models. The fracture is modelled by a fictitious crack in an elastic layer around the mid-section of the beam. Outside...
Energy Technology Data Exchange (ETDEWEB)
Dutfoy, A. [Electricite de France R and D Safety and Reliability Branch (EDF), 92 - Clamart (France); Bouton, M. [Electricite de France R and D National Hydraulic Lab. and Environment (EDF), 78 - Chatou (France)
2001-07-01
Given the complexity of the phenomena involved, performance assessment of a nuclear waste disposal requires numerical modelling. Because many of the input parameters of the models are uncertain, analysis of uncertainties and their impact on the probabilistic outcome has become of major importance. This paper presents the EDF Research and Development Division methodology for propagating parameter uncertainties through models. This reliability approach provides two important quantitative results: an estimate of the probability that the outcome exceeds some specified threshold level (called the failure event), and a probabilistic sensitivity measure which quantifies the relative importance of each uncertain variable with respect to the probabilistic outcome. Such results could become an integral component of the decision process for nuclear disposal. The reliability method proposed in this paper is applied to a radionuclide transport model. (authors)
Angelikopoulos, Panagiotis; Papadimitriou, Costas; Koumoutsakos, Petros
2012-10-01
We present a Bayesian probabilistic framework for quantifying and propagating the uncertainties in the parameters of force fields employed in molecular dynamics (MD) simulations. We propose a highly parallel implementation of the transitional Markov chain Monte Carlo for populating the posterior probability distribution of the MD force-field parameters. Efficient scheduling algorithms are proposed to handle the MD model runs and to distribute the computations in clusters with heterogeneous architectures. Furthermore, adaptive surrogate models are proposed in order to reduce the computational cost associated with the large number of MD model runs. The effectiveness and computational efficiency of the proposed Bayesian framework is demonstrated in MD simulations of liquid and gaseous argon.
Nikolopoulos, Efthymios I.; Polcher, Jan; Anagnostou, Emmanouil N.; Eisner, Stephanie; Fink, Gabriel; Kallos, George
2016-04-01
Precipitation is arguably one of the most important forcing variables that drive terrestrial water cycle processes. Precipitation exhibits significant variability in space and time, is associated with different water phases (liquid or solid) and depends on several other factors (aerosols, orography, etc.), which make estimation and modeling of this process a particularly challenging task. As such, precipitation information from different sensors/products is associated with uncertainty. Propagation of this uncertainty into hydrologic simulations can have a considerable impact on the accuracy of the simulated hydrologic variables. Therefore, to make hydrologic predictions more useful, it is important to assess the impact of precipitation uncertainty on hydrologic simulations in order to quantify it and identify ways to minimize it. In this work we investigate the impact of precipitation uncertainty in hydrologic simulations using land surface models (e.g. ORCHIDEE) and global hydrologic models (e.g. WaterGAP3) for the simulation of several hydrologic variables (soil moisture, ET, runoff) over the Iberian Peninsula. Uncertainty in precipitation is assessed by utilizing various sources of precipitation input that include one reference precipitation dataset (SAFRAN), three widely-used satellite precipitation products (TRMM 3B42v7, CMORPH, PERSIANN) and a state-of-the-art reanalysis product (WFDEI) based on the ECMWF ERA-Interim reanalysis. Comparative analysis uses the SAFRAN simulations as reference and is carried out at different spatial (0.5deg or regional average) and temporal (daily or seasonal) scales. Furthermore, as an independent verification, simulated discharge is compared against available discharge observations for selected major rivers of the Iberian region. Results allow us to draw conclusions regarding the impact of precipitation uncertainty with respect to i) hydrologic variable of interest, ii
Watson, Cameron S.; Carrivick, Jonathan; Quincey, Duncan
2015-10-01
Modelling glacial lake outburst floods (GLOFs), or 'jökulhlaups', necessarily involves the propagation of large and often stochastic uncertainties throughout the source-to-impact process chain. Since flood routing is primarily a function of underlying topography, communication of digital elevation model (DEM) uncertainty should accompany such modelling efforts. Here, a new stochastic first-pass assessment technique was evaluated against an existing GIS-based model and an existing 1D hydrodynamic model, using three DEMs with different spatial resolution. The analysis revealed the effect of DEM uncertainty and model choice on several flood parameters and on the prediction of socio-economic impacts. Our new model, which we call MC-LCP (Monte Carlo Least Cost Path) and which is distributed in the supplementary information, demonstrated enhanced 'stability' when compared to the two existing methods, and this 'stability' was independent of DEM choice. The MC-LCP model outputs an uncertainty continuum within its extent, from which relative socio-economic risk can be evaluated. In a comparison of all DEM and model combinations, runs using the Shuttle Radar Topography Mission (SRTM) DEM exhibited fewer artefacts than those using the Advanced Spaceborne Thermal Emission and Reflection Radiometer Global Digital Elevation Model (ASTER GDEM), and were comparable to those using a finer-resolution Advanced Land Observing Satellite Panchromatic Remote-sensing Instrument for Stereo Mapping (ALOS PRISM) derived DEM. Overall, we contend that the variability we find between flood routing model results suggests that consideration of DEM uncertainty and pre-processing methods is important when assessing flow routing and when evaluating potential socio-economic implications of a GLOF event. Incorporation of a stochastic variable provides an illustration of uncertainty that is important when modelling and communicating assessments of an inherently complex process.
Development of Depletion Code Surrogate Models for Uncertainty Propagation in Scenario Studies
Krivtchik, Guillaume; Coquelet-Pascal, Christine; Blaise, Patrick; Garzenne, Claude; Le Mer, Joël; Freynet, David
2014-06-01
Transition scenario studies enable the comparison of different options for reactor fleet evolution and for the management of future fuel cycle materials, supporting technical and economic feasibility studies. The COSI code, developed by CEA, is used to perform scenario calculations. It can model any fuel type, reactor fleet and fuel facility, and tracks U, Pu, minor actinide and fission product nuclides over large time scales. COSI is coupled with the CESAR code, which performs the depletion calculations based on one-group cross-section libraries and nuclear data. Two types of uncertainty affect scenario studies: nuclear data and scenario assumptions. It is therefore necessary to evaluate their impact on the major scenario results. The methodology adopted to propagate these uncertainties throughout the scenario calculations is a stochastic approach. Considering the number of inputs to be sampled in a stochastic calculation of the propagated uncertainty, it appears necessary to reduce the calculation time. Given that depletion calculations represent approximately 95% of the total scenario simulation time, an optimization can be made: the development and implementation of a library of CESAR surrogate models in COSI. The input parameters of CESAR are sampled with URANIE, the CEA uncertainty platform, and for every sample the isotopic composition after depletion evaluated with CESAR is stored. Statistical analysis of the input and output tables then models the behavior of CESAR for each CESAR library, i.e., builds a surrogate model. Several quality tests are performed on each surrogate model to ensure that its predictive power is satisfactory. A new routine implemented in COSI then reads these surrogate models and uses them in place of CESAR calculations. A preliminary study of the calculation time gain shows that the use of surrogate models allows stochastic
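The surrogate workflow (train a cheap stand-in on stored code runs, verify its prediction power, then use it inside the sampling loop) can be sketched as follows. `depletion_code`, the training grid and the piecewise-linear surrogate are all illustrative stand-ins for CESAR and its actual surrogate models.

```python
import bisect
import random

def depletion_code(x):
    # stand-in for an expensive CESAR depletion run (hypothetical response:
    # end-of-cycle inventory as a function of one sampled input parameter)
    return 0.01 * x ** 1.5

# Training set: a modest grid of stored code runs on [1, 5]
train_x = [1.0 + 0.25 * i for i in range(17)]
train_y = [depletion_code(x) for x in train_x]

def surrogate(x):
    # piecewise-linear interpolation of the stored outputs replaces the code
    i = bisect.bisect_right(train_x, x) - 1
    i = max(0, min(i, len(train_x) - 2))
    w = (x - train_x[i]) / (train_x[i + 1] - train_x[i])
    return train_y[i] * (1 - w) + train_y[i + 1] * w

# Quality test on held-out points before trusting the surrogate
err = max(abs(surrogate(x) - depletion_code(x)) / depletion_code(x)
          for x in [1.1, 2.3, 3.7, 4.9])

# Stochastic propagation now costs surrogate evaluations, not code runs
rng = random.Random(0)
outs = [surrogate(min(max(rng.gauss(3.0, 0.2), 1.0), 5.0)) for _ in range(2000)]
```

The quality check mirrors the abstract's point: the surrogate is only admitted to the sampling loop once its held-out error is acceptably small.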
Birrell, Jane; Meares, Kevin; Wilkinson, Andrew; Freeston, Mark
2011-11-01
Since its emergence in the early 1990s, a narrow but concentrated body of research has developed examining the role of intolerance of uncertainty (IU) in worry, and yet we still know little about its phenomenology. In an attempt to clarify our understanding of this construct, this paper traces the way in which our understanding and definition of IU have evolved throughout the literature. This paper also aims to further our understanding of IU by exploring the latent variables measured by the Intolerance of Uncertainty Scale (IUS; Freeston, Rheaume, Letarte, Dugas & Ladouceur, 1994). A review of the literature surrounding IU confirmed that the current definitions are categorical and lack specificity. A critical review of existing factor analytic studies was carried out in order to determine the underlying factors measured by the IUS. Systematic searches yielded 9 papers for review. Two factors with 12 consistent items emerged throughout the exploratory studies, and the stability of models containing these two factors was demonstrated in subsequent confirmatory studies. It is proposed that these factors represent (i) desire for predictability and an active engagement in seeking certainty, and (ii) paralysis of cognition and action in the face of uncertainty. It is suggested that these factors may represent approach and avoidance responses to uncertainty. Further research is required to confirm the construct validity of these factors and to determine the stability of this structure within clinical samples.
DEFF Research Database (Denmark)
Abdul Samad, Noor Asma Fazli Bin; Sin, Gürkan; Gernaey, Krist
2013-01-01
This paper presents the application of uncertainty and sensitivity analysis as part of a systematic model-based process monitoring and control (PAT) system design framework for crystallization processes. For the uncertainty analysis, the Monte Carlo procedure is used to propagate input uncertainty...
Energy Technology Data Exchange (ETDEWEB)
Isukapalli, S.S.; Georgopoulos, P.G. [Environmental and Occupational Health Sciences Inst., Piscataway, NJ (United States)
1997-12-31
Uncertainty in biogenic emission estimates and photochemical reaction rates can contribute significantly to modeling error in Photochemical Air Quality Simulation Models (PAQSMs). Uncertainties in isoprene emissions from biogenic sources, and isoprene atmospheric degradation rates have recently received considerable attention with respect to control strategy selection for the reduction of tropospheric ozone levels. This study addresses the effects of uncertainties in isoprene emissions and reaction rates on ambient ozone concentrations predicted by PAQSMs. Since PAQSMs are computationally intensive, propagation of uncertainty in reaction rate constants using traditional methods, such as Monte Carlo methods, is not computationally feasible. Here, a novel computationally efficient method of uncertainty analysis, called the Stochastic Response Surface Method (SRSM), is applied to propagate uncertainty in isoprene emissions and reaction rate parameters. Case studies include estimation of uncertainty in ozone concentrations predicted by (a) a box-model, (b) a plume trajectory model, the Reactive Plume Model (RPM), and (c) an urban-to-regional scale grid model, the Urban Airshed Model (UAM). The results of this analysis are used to characterize the relative importance of uncertainties in isoprene emissions and reaction rates on ozone levels for a wide range of conditions. Furthermore, this work demonstrates the applicability of the SRSM uncertainty propagation methodology to computationally intensive models such as the UAM.
Institute of Scientific and Technical Information of China (English)
Anonymous
2010-01-01
For a structural system with both random and fuzzy basic variables, the propagation of uncertainty from the two kinds of basic variables to the response of the structure is investigated. A novel algorithm for obtaining the membership function of fuzzy reliability is presented, based on a saddlepoint approximation (SA) line sampling method. In the presented method, the value domain of the fuzzy basic variables at a given membership level is first obtained from their membership functions. In this value domain, bounds on the reliability of the structural response satisfying the safety requirement are obtained by employing the SA-based line sampling method in the reduced space of the random variables. In this way the uncertainty of the basic variables is propagated to the safety measure of the structure, and the fuzzy membership function of the reliability is obtained. Compared to the direct Monte Carlo method for propagating the uncertainties of the fuzzy and random basic variables, the presented method considerably improves computational efficiency with acceptable precision. It also has wider applicability than the transformation method, because it does not restrict the distribution of the variables or require an explicit expression of the performance function, and no approximation of the performance function is made during the computation. Additionally, the presented method easily treats performance functions with cross terms of fuzzy and random variables, which are not suitably approximated by the existing transformation methods. Several examples illustrate the advantages of the presented method.
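The level-by-level propagation described above can be illustrated with a deliberately simple case: a triangular fuzzy resistance R, a normal load S, and a performance function g = R - S whose reliability is available in closed form, so the bounds at each membership level reduce to evaluating the alpha-cut interval endpoints. All numbers and the monotonicity shortcut are simplified stand-ins for the paper's SA-based line sampling.

```python
import math

def phi(x):
    # standard normal CDF
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def alpha_cut(lo, mode, hi, alpha):
    # interval of a triangular fuzzy number at membership level alpha
    return (lo + alpha * (mode - lo), hi - alpha * (hi - mode))

def reliability(r, mu_s=8.0, sd_s=1.0):
    # closed-form P(R - S > 0) for fixed resistance r and normal load S
    return phi((r - mu_s) / sd_s)

# Propagate the fuzzy resistance level by level: reliability is monotone in r,
# so the bounds at each level are attained at the interval endpoints
membership = {}
for k in range(5):
    a = k / 4.0
    lo, hi = alpha_cut(9.0, 10.0, 11.0, a)
    membership[a] = (reliability(lo), reliability(hi))
```

At membership level 1 the interval collapses to the mode and the two bounds coincide; at lower levels the reliability interval widens, tracing out the fuzzy membership function of the reliability.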
DEFF Research Database (Denmark)
Quinonero, Joaquin; Girard, Agathe; Larsen, Jan
2003-01-01
The object of Bayesian modelling is the predictive distribution, which, in a forecasting scenario, enables evaluation of forecasted values and their uncertainties. We focus on reliably estimating the predictive mean and variance of forecasted values using Bayesian kernel-based models such as the Gaussian process and the relevance vector machine. We derive novel analytic expressions for the predictive mean and variance for Gaussian kernel shapes under the assumption of a Gaussian input distribution in the static case, and of a recursive Gaussian predictive density in iterative forecasting ... The capability of the method is demonstrated for forecasting of time-series and compared to approximate methods.
Tracing Analytic Ray Curves for Light and Sound Propagation in Non-Linear Media.
Mo, Qi; Yeh, Hengchin; Manocha, Dinesh
2016-11-01
The physical world consists of spatially varying media, such as the atmosphere and the ocean, in which light and sound propagate along non-linear trajectories. This presents a challenge to existing ray-tracing based methods, which are widely adopted to simulate propagation due to their efficiency and flexibility, but assume linear rays. We present a novel algorithm that traces analytic ray curves computed from local media gradients, and utilizes the closed-form solutions of both the intersections of the ray curves with planar surfaces and the travel distance. By constructing an adaptive unstructured mesh, our algorithm is able to model general media profiles that vary in three dimensions with complex boundaries consisting of terrains and other scene objects such as buildings. Our analytic ray curve tracer with the adaptive mesh improves the efficiency considerably over prior methods. We highlight the algorithm's application on simulation of visual and sound propagation in outdoor scenes.
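For one classical special case, a linear speed profile c(z) = c0 + g*z, the analytic ray curve is a circular arc (the Snell invariant cos(theta)/c is constant along the ray), so the ray can be advanced exactly in closed form rather than by small numerical integration steps. The sketch below assumes this profile and g != 0; it is a textbook special case, not the paper's adaptive-mesh algorithm.

```python
import math

def trace_ray(x0, z0, theta0, c_ref, g, s_total, n_steps):
    # In c(z) = c_ref + g*z the ray is a circle of radius R = c(z)/(g*cos(theta)),
    # which is constant along the ray because cos(theta)/c is a Snell invariant.
    c0 = c_ref + g * z0
    R = c0 / (g * math.cos(theta0))
    path = [(x0, z0)]
    theta, x, z = theta0, x0, z0
    ds = s_total / n_steps
    for _ in range(n_steps):
        t_new = theta - ds / R        # each step is exact on the arc
        x += R * (math.sin(theta) - math.sin(t_new))
        z += R * (math.cos(t_new) - math.cos(theta))
        theta = t_new
        path.append((x, z))
    return path

# Upward-launched ray in an ocean-like profile (c0 = 1500 m/s, g = 0.05 1/s):
# it refracts back down and returns near its starting depth after roughly
# twice the arc length to the turning point
path = trace_ray(0.0, 0.0, 0.1, 1500.0, 0.05, 6030.0, 100)
```

The stepping is exact up to floating-point error because each update telescopes along the same circle; the turning depth z_max = R(1 - cos(theta0)) is about 150 m here.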
Mendes, B. S.; Draper, D.
2008-12-01
The issue of model uncertainty and model choice is central in any groundwater modeling effort [Neuman and Wierenga, 2003]; among the several approaches to the problem we favour Bayesian statistics because it integrates uncertainties (arising from any source) and experimental data in a natural way. In this work, we experiment with several Bayesian approaches to model choice, focusing primarily on demonstrating the usefulness of the Reversible Jump Markov Chain Monte Carlo (RJMCMC) simulation method [Green, 1995], an extension of the now-common MCMC methods. Standard MCMC techniques approximate posterior distributions for quantities of interest, often by creating a random walk in parameter space; RJMCMC allows the random walk to take place between parameter spaces with different dimensionalities. This allows us to explore state spaces that are associated with different deterministic models for experimental data. Our work is exploratory in nature; we restrict our study to comparing two simple transport models applied to a data set gathered to estimate the breakthrough curve for a tracer compound in groundwater. One model has a mean surface based on a simple advection-dispersion differential equation; the second model's mean surface is also governed by a differential equation, but in two dimensions. We focus on artificial data sets (in which truth is known) to see if model identification is done correctly, but we also address the issues of over- and under-parameterization, and we compare RJMCMC's performance with other traditional methods for model selection and propagation of model uncertainty, including Bayesian model averaging, BIC and DIC. References: Neuman and Wierenga (2003). A Comprehensive Strategy of Hydrogeologic Modeling and Uncertainty Analysis for Nuclear Facilities and Sites. NUREG/CR-6805, Division of Systems Analysis and Regulatory Effectiveness, Office of Nuclear Regulatory Research, U.S. Nuclear Regulatory Commission.
A Framework for Propagation of Uncertainties in the Kepler Data Analysis Pipeline
Clarke, Bruce D.; Allen, Christopher; Bryson, Stephen T.; Caldwell, Douglas A.; Chandrasekaran, Hema; Cote, Miles T.; Girouard, Forrest; Jenkins, Jon M.; Klaus, Todd C.; Li, Jie; Middour, Chris; McCauliff, Sean; Quintana, Elisa V.; Tenenbaum, Peter; Twicken, Joseph D.; Wohler, Bill; Wu, Hayley
2010-01-01
The Kepler space telescope is designed to detect Earth-like planets around Sun-like stars using transit photometry by simultaneously observing 100,000 stellar targets nearly continuously over a three-and-a-half-year period. The 96-megapixel focal plane consists of 42 charge-coupled devices (CCDs), each containing two 1024 x 1100 pixel arrays. Cross-correlations between calibrated pixels are introduced by common calibrations performed on each CCD, requiring downstream data products access to the calibrated pixel covariance matrix in order to properly estimate uncertainties. The prohibitively large covariance matrices corresponding to the 75,000 calibrated pixels per CCD preclude calculating and storing the covariance in standard lock-step fashion. We present a novel framework used to implement standard propagation of uncertainties (POU) in the Kepler Science Operations Center (SOC) data processing pipeline. The POU framework captures the variance of the raw pixel data and the kernel of each subsequent calibration transformation, allowing the full covariance matrix of any subset of calibrated pixels to be recalled on the fly at any step in the calibration process. Singular value decomposition (SVD) is used to compress and low-pass filter the raw uncertainty data as well as any data-dependent kernels. The combination of POU framework and SVD compression provides downstream consumers of the calibrated pixel data access to the full covariance matrix of any subset of the calibrated pixels, traceable to pixel-level measurement uncertainties, without having to store, retrieve and operate on prohibitively large covariance matrices. We describe the POU framework and SVD compression scheme and its implementation in the Kepler SOC pipeline.
A framework for propagation of uncertainties in the Kepler data analysis pipeline
Clarke, Bruce D.; Allen, Christopher; Bryson, Stephen T.; Caldwell, Douglas A.; Chandrasekaran, Hema; Cote, Miles T.; Girouard, Forrest; Jenkins, Jon M.; Klaus, Todd C.; Li, Jie; Middour, Chris; McCauliff, Sean; Quintana, Elisa V.; Tenenbaum, Peter; Twicken, Joseph D.; Wohler, Bill; Wu, Hayley
2010-07-01
The Kepler space telescope is designed to detect Earth-like planets around Sun-like stars using transit photometry by simultaneously observing more than 100,000 stellar targets nearly continuously over a three-and-a-half year period. The 96.4-megapixel focal plane consists of 42 Charge-Coupled Devices (CCD), each containing two 1024 x 1100 pixel arrays. Since cross-correlations between calibrated pixels are introduced by common calibrations performed on each CCD, downstream data processing requires access to the calibrated pixel covariance matrix to properly estimate uncertainties. However, the prohibitively large covariance matrices corresponding to the ~75,000 calibrated pixels per CCD preclude calculating and storing the covariance in standard lock-step fashion. We present a novel framework used to implement standard Propagation of Uncertainties (POU) in the Kepler Science Operations Center (SOC) data processing pipeline. The POU framework captures the variance of the raw pixel data and the kernel of each subsequent calibration transformation, allowing the full covariance matrix of any subset of calibrated pixels to be recalled on the fly at any step in the calibration process. Singular Value Decomposition (SVD) is used to compress and filter the raw uncertainty data as well as any data-dependent kernels. This combination of POU framework and SVD compression allows the downstream consumer access to the full covariance matrix of any subset of the calibrated pixels which is traceable to the pixel-level measurement uncertainties, all without having to store, retrieve, and operate on prohibitively large covariance matrices. We describe the POU framework and SVD compression scheme and its implementation in the Kepler SOC pipeline.
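The kernel-plus-SVD idea can be sketched in a few lines (a minimal NumPy illustration with a made-up mean-removal kernel and a tiny pixel count; it is not the SOC implementation):

```python
import numpy as np

# Hypothetical linear calibration step: calibrated = K @ raw.
# Under first-order propagation, the calibrated-pixel covariance is
# C_cal = K @ C_raw @ K.T, so storing only the kernel K and a compressed
# form of C_raw lets any covariance sub-block be rebuilt on the fly.

rng = np.random.default_rng(0)
n = 200                                   # pixels (tiny stand-in for ~75,000)
K = np.eye(n) - np.full((n, n), 1.0 / n)  # e.g., a common-mode removal kernel

sigma = 0.1 * (1 + rng.random(n))         # raw per-pixel standard deviations
C_raw = np.diag(sigma**2)

# Truncated SVD compression of the raw covariance (rank r << n).
U, s, Vt = np.linalg.svd(C_raw)
r = 20
C_raw_lr = (U[:, :r] * s[:r]) @ Vt[:r]

# Recall the covariance of an arbitrary pixel subset without ever
# forming or storing the full n x n calibrated covariance matrix.
idx = np.array([3, 17, 42])
C_subset = K[idx] @ C_raw_lr @ K[idx].T
print(C_subset.shape)  # (3, 3)
```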
Propagation of Isotopic Bias and Uncertainty to Criticality Safety Analyses of PWR Waste Packages
Energy Technology Data Exchange (ETDEWEB)
Radulescu, Georgeta [ORNL]
2010-06-01
predicted spent fuel compositions (i.e., determine the penalty in reactivity due to isotopic composition bias and uncertainty) for use in disposal criticality analysis employing burnup credit. The method used in this calculation to propagate the isotopic bias and bias-uncertainty values to k{sub eff} is the Monte Carlo uncertainty sampling method. The development of this report is consistent with 'Test Plan for: Isotopic Validation for Postclosure Criticality of Commercial Spent Nuclear Fuel'. This calculation report has been developed in support of burnup credit activities for the proposed repository at Yucca Mountain, Nevada, and provides a methodology that can be applied to other criticality safety applications employing burnup credit.
Identifying the Uncertainty in Physician Practice Location through Spatial Analytics and Text Mining
Directory of Open Access Journals (Sweden)
Xuan Shi
2016-09-01
In response to the widespread concern about the adequacy, distribution, and disparity of access to a health care workforce, the correct identification of physicians’ practice locations is critical for access to public health services. In prior literature, little effort has been made to detect and resolve the uncertainty about whether the address provided by a physician in a survey is a practice address or a home address. This paper shows how to identify the uncertainty in a physician’s practice location through spatial analytics, text mining, and visual examination. While land use and zoning codes, embedded within parcel datasets, help to differentiate residential areas from other types, spatial analytics may have limitations in matching and comparing physician and parcel datasets with different uncertainty issues, which may lead to unforeseen results. Handling and matching the string components between physicians’ addresses and parcel addresses can identify spatial uncertainty and instability and derive a more reasonable relationship between the datasets. Visual analytics and examination further help to clarify otherwise undetectable patterns. This research will have a broader impact on federal and state initiatives and policies that address both the insufficiency and the maldistribution of the health care workforce, improving access to public health services.
Analytical algorithms to quantify the uncertainty in remaining useful life prediction
Sankararaman, S.; Daigle, M.; Saxena, A.; Goebel, K.
This paper investigates the use of analytical algorithms to quantify the uncertainty in the remaining useful life (RUL) estimate of components used in aerospace applications. The prediction of RUL is affected by several sources of uncertainty, and it is important to systematically quantify their combined effect by computing the uncertainty in the RUL prediction in order to aid risk assessment, risk mitigation, and decision-making. While sampling-based algorithms have conventionally been used for quantifying the uncertainty in RUL, analytical algorithms are computationally cheaper and are sometimes better suited for online decision-making. While exact analytical algorithms are available only for certain special cases (e.g., linear models with Gaussian variables), effective approximations can be made using the first-order second-moment method (FOSM), the first-order reliability method (FORM), and the inverse first-order reliability method (inverse FORM). These methods can be used not only to calculate the entire probability distribution of RUL but also to obtain probability bounds on RUL. This paper explains these three methods in detail and illustrates them using the state-space model of a lithium-ion battery.
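The FOSM approximation mentioned above can be sketched as follows (the two-parameter response g below is a hypothetical stand-in, not the battery state-space model of the paper):

```python
import numpy as np

# First-order second-moment (FOSM) sketch: approximate the mean and
# variance of g(X) by linearizing g around the input means.

def g(x):
    # hypothetical RUL-like response of two uncertain parameters
    return 100.0 / (x[0] * x[1])

mu = np.array([2.0, 5.0])      # input means
cov = np.diag([0.04, 0.25])    # input covariance (independent inputs here)

# Numerical gradient of g at the mean (central differences).
eps = 1e-6
grad = np.array([
    (g(mu + eps * np.eye(2)[i]) - g(mu - eps * np.eye(2)[i])) / (2 * eps)
    for i in range(2)
])

mean_fosm = g(mu)              # first-order mean estimate: 10.0
var_fosm = grad @ cov @ grad   # first-order variance estimate: 2.0
print(mean_fosm, np.sqrt(var_fosm))
```

For this response the hand calculation agrees: the partial derivatives at the mean are -5 and -2, so the variance is 25(0.04) + 4(0.25) = 2.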
Tripathy, Rohit; Bilionis, Ilias; Gonzalez, Marcial
2016-09-01
Uncertainty quantification (UQ) tasks, such as model calibration, uncertainty propagation, and optimization under uncertainty, typically require several thousand evaluations of the underlying computer codes. To cope with the cost of simulations, one replaces the real response surface with a cheap surrogate based, e.g., on polynomial chaos expansions, neural networks, support vector machines, or Gaussian processes (GP). However, the number of simulations required to learn a generic multivariate response grows exponentially as the input dimension increases. This curse of dimensionality can only be addressed, if the response exhibits some special structure that can be discovered and exploited. A wide range of physical responses exhibit a special structure known as an active subspace (AS). An AS is a linear manifold of the stochastic space characterized by maximal response variation. The idea is that one should first identify this low dimensional manifold, project the high-dimensional input onto it, and then link the projection to the output. If the dimensionality of the AS is low enough, then learning the link function is a much easier problem than the original problem of learning a high-dimensional function. The classic approach to discovering the AS requires gradient information, a fact that severely limits its applicability. Furthermore, and partly because of its reliance to gradients, it is not able to handle noisy observations. The latter is an essential trait if one wants to be able to propagate uncertainty through stochastic simulators, e.g., through molecular dynamics codes. In this work, we develop a probabilistic version of AS which is gradient-free and robust to observational noise. Our approach relies on a novel Gaussian process regression with built-in dimensionality reduction. In particular, the AS is represented as an orthogonal projection matrix that serves as yet another covariance function hyper-parameter to be estimated from the data. To train the
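For contrast with the gradient-free approach proposed here, the classic gradient-based active-subspace discovery can be sketched as follows (the rank-one response f(x) = sin(w·x) is a made-up example):

```python
import numpy as np

# Classic active-subspace recovery: eigendecompose the average outer
# product of gradients, C = E[grad f grad f^T]; dominant eigenvectors
# span the active subspace.

rng = np.random.default_rng(1)
d = 10
w_true = rng.standard_normal(d)
w_true /= np.linalg.norm(w_true)

def grad_f(x):
    # f(x) = sin(w.T x)  =>  grad f = cos(w.T x) * w
    return np.cos(w_true @ x) * w_true

# Monte Carlo estimate of C over the stochastic input space.
X = rng.standard_normal((2000, d))
C = sum(np.outer(grad_f(x), grad_f(x)) for x in X) / len(X)

eigvals, eigvecs = np.linalg.eigh(C)
w_est = eigvecs[:, -1]          # dominant eigenvector spans the 1-D AS

# Up to sign, the estimated direction recovers the true one.
print(abs(w_est @ w_true))      # close to 1
```

Note that this construction needs gradients of f, which is exactly the limitation the abstract's GP-based probabilistic AS is designed to remove.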
Brault, A; Lucor, D
2016-01-01
This work aims at quantifying the effect of inherent uncertainties in cardiac output on the sensitivity of a human compliant arterial network response, based on stochastic simulations of a reduced-order pulse wave propagation model. A simple pulsatile output form is used to reproduce the most relevant cardiac features with a minimum number of parameters associated with left-ventricle dynamics. Another source of critical uncertainty is the spatial heterogeneity of the aortic compliance, which plays a key role in the propagation and damping of the pulse waves generated at each cardiac cycle. A continuous representation of the aortic stiffness in the form of a generic random field of prescribed spatial correlation is then considered. Resorting to a stochastic sparse pseudospectral method, we investigate the spatial sensitivity of the pulse pressure and wave reflection magnitude with respect to the different model uncertainties. Results indicate that uncertainties related to the shape and magnitude of th...
Zhu, T.; Rochman, D.; Vasiliev, A.; Ferroukhi, H.; Wieselquist, W.; Pautz, A.
2014-04-01
Nuclear data uncertainty propagation based on stochastic sampling (SS) is becoming more attractive while leveraging modern computer power. Two variants of the SS approach are compared in this paper. The Total Monte Carlo (TMC) method by the Nuclear Research and Consultancy Group (NRG) generates perturbed ENDF-6-formatted nuclear data by varying nuclear reaction model parameters. At Paul Scherrer Institute (PSI) the Nuclear data Uncertainty Stochastic Sampling (NUSS) system generates perturbed ACE-formatted nuclear data files by applying multigroup nuclear data covariances onto pointwise ACE-formatted nuclear data. Uncertainties of 239Pu and 235U from ENDF/B-VII.1, ZZ-SCALE6/COVA-44G and TENDL covariance libraries are considered in NUSS and propagated in MCNPX calculations for well-studied Jezebel and Godiva fast spectrum critical benchmarks. The corresponding uncertainty results obtained by TMC are compared with NUSS results and the deterministic Sensitivity/Uncertainty method of TSUNAMI-3D from SCALE6 package is also applied to serve as a separate verification. The discrepancies in the propagated 239Pu and 235U uncertainties due to method and covariance differences are discussed.
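The stochastic-sampling idea behind a NUSS-like system can be sketched as follows (the 3-group relative covariance matrix is illustrative only, not an evaluated covariance library):

```python
import numpy as np

# Draw correlated multiplicative perturbation factors for multigroup
# cross sections from a relative covariance matrix, then apply them to
# the nominal data; each draw yields one perturbed nuclear-data file.

rng = np.random.default_rng(2)

sigma_nominal = np.array([1.2, 0.8, 2.5])   # nominal group cross sections (b)
rel_cov = np.array([[0.010, 0.004, 0.001],  # illustrative relative covariance
                    [0.004, 0.020, 0.006],
                    [0.001, 0.006, 0.030]])

L = np.linalg.cholesky(rel_cov)

def perturbed_xs():
    # multiplicative factors 1 + delta, with delta ~ N(0, rel_cov)
    delta = L @ rng.standard_normal(3)
    return sigma_nominal * (1.0 + delta)

samples = np.array([perturbed_xs() for _ in range(5000)])
rel_std = samples.std(axis=0) / sigma_nominal
print(rel_std)  # recovers ~sqrt(diag(rel_cov))
```

In a real NUSS-style workflow each perturbed set would be written to an ACE-formatted file and run through MCNPX; the spread of the resulting k-eff values is the propagated nuclear data uncertainty.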
Energy Technology Data Exchange (ETDEWEB)
Han, Gi Young; Seo, Bo Kyun [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of); Kim, Do Hyun; Shin, Chang Ho; Kim, Song Hyun [Dept. of Nuclear Engineering, Hanyang University, Seoul (Korea, Republic of); Sun, Gwang Min [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2016-06-15
In analyzing residual radiation, researchers generally use a two-step Monte Carlo (MC) simulation. The first step (MC1) simulates neutron transport, and the second step (MC2) transports the decay photons emitted from the activated materials. In this process, the stochastic uncertainty estimated by MC2 appears only in the final result, but it is underestimated because the stochastic error generated in MC1 cannot be directly included in MC2. Hence, estimating the true stochastic uncertainty requires quantifying the degree to which the stochastic error in MC1 propagates. The brute-force technique is a straightforward method to estimate the true uncertainty; however, obtaining reliable results with it is costly. Another method, the adjoint-based method, can reduce the computational time needed to evaluate the true uncertainty, but it has limitations. To address those limitations, we propose a new strategy to estimate uncertainty propagation without any additional calculations in two-step MC simulations. To verify the proposed method, we applied it to activation benchmark problems and compared the results with those of previous methods. The results show that the proposed method improves applicability and user-friendliness while preserving accuracy in quantifying uncertainty propagation. We expect that the proposed strategy will contribute to efficient and accurate two-step MC calculations.
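The brute-force technique mentioned above can be sketched with toy stand-ins for the two transport steps (the probabilities and sample sizes are arbitrary; real MC1/MC2 steps would be full transport calculations):

```python
import random

# Brute-force estimate of the total stochastic uncertainty of a two-step
# MC chain: rerun the whole chain R times with independent random draws,
# so the MC1 error propagates into MC2.

random.seed(3)

def mc1_neutron_step(n):
    # toy MC1: estimate an activation rate, with noise ~ 1/sqrt(n)
    return sum(random.random() < 0.3 for _ in range(n)) / n

def mc2_photon_step(source, n):
    # toy MC2: estimate a dose proportional to the (noisy) MC1 source term
    return source * sum(random.expovariate(1.0) for _ in range(n)) / n

R, n1, n2 = 200, 1000, 1000
doses = [mc2_photon_step(mc1_neutron_step(n1), n2) for _ in range(R)]

mean = sum(doses) / R
var = sum((d - mean) ** 2 for d in doses) / (R - 1)
print(mean, var ** 0.5)  # this spread includes both MC1 and MC2 noise
```

The cost is R full reruns of both steps, which is exactly why the abstract's single-pass strategy is attractive.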
Directory of Open Access Journals (Sweden)
S.V. Bystrov
2016-05-01
Subject of Research. We present research results for the signal uncertainty problem that naturally arises for developers of servomechanisms, including the analytical design of serial compensators delivering the required quality indexes. Method. The problem was solved using Besekerskiy's engineering approach, formulated in 1958, which makes it possible to reduce the requirements on the input signal composition of servomechanisms to only two quantitative characteristics: maximum speed and acceleration. Information about the input signal's maximum speed and acceleration allows introducing an equivalent harmonic input signal with calculated amplitude and frequency. Combined with the requirement on maximum tracking error, the amplitude and frequency of the equivalent harmonic input make it possible to estimate analytically the amplitude characteristic of the system with respect to error and then convert it to the amplitude characteristic of the open-loop transfer function. While Besekerskiy's approach was previously applied mainly through the apparatus of logarithmic characteristics, we use it for the analytical synthesis of serial compensators. Main Results. The proposed technique is used to create analytical representations of the "input–output" and "error–output" polynomial dynamic models of the designed system. In turn, the desired model of the designed system in the "error–output" form of analytically represented transfer functions is the basis for the design of a serial compensator that delivers the desired placement of state-matrix eigenvalues and, consequently, the necessary set of dynamic indexes for the designed system. The procedure of analytical serial-compensator design based on Besekerskiy's engineering approach under signal uncertainty is illustrated by an example. Practical Relevance. The obtained theoretical results are
Study of Gaussian and Bessel beam propagation using a new analytic approach
Dartora, C. A.; Nobrega, K. Z.
2012-03-01
The main feature of Bessel beams realized in practice is their ability to resist diffractive effects over distances exceeding the usual diffraction length. The theory and experimental demonstration of such waves can be traced back to the seminal work of Durnin and co-workers in 1987. Despite that, to the best of our knowledge, the propagation of apertured Bessel beams has found no solution in closed analytic form and often leads to the numerical evaluation of diffraction integrals, which can be very awkward. In the context of paraxial optics, wave propagation in lossless media is described by an equation similar to the non-relativistic Schrödinger equation of quantum mechanics, with the time t of quantum mechanics replaced by the longitudinal coordinate z. Thus, the same mathematical methods can be employed in both cases. Using Bessel functions of the first kind as basis functions in a Hilbert space, we present a new approach in which the optical wave field can be expanded in a series, allowing analytic expressions to be obtained for the propagation of any given initial field distribution. To demonstrate the robustness of the method, two cases are considered: Gaussian and zeroth-order Bessel beam propagation.
Directory of Open Access Journals (Sweden)
Soheil Salahshour
2015-02-01
In this paper, we apply the concept of Caputo's H-differentiability, constructed from the generalized Hukuhara difference, to solve the fuzzy fractional differential equation (FFDE) with uncertainty. This is in contrast to conventional solutions, which either require knowledge of fractional derivatives of the unknown solution at the initial point (Riemann–Liouville) or yield solutions whose support grows in length (Hukuhara difference). Then, in order to solve the FFDE analytically, we introduce the fuzzy Laplace transform of the Caputo H-derivative. To the best of our knowledge, little research has been devoted to analytical methods for solving the FFDE under fuzzy Caputo fractional differentiability. An analytical solution is presented to confirm the capability of the proposed method.
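For reference, the crisp (non-fuzzy) special case of the transform in question is the standard Laplace transform of the Caputo derivative for orders 0 < α ≤ 1, which the fuzzy version generalizes under H-differentiability:

```latex
% Laplace transform of the Caputo fractional derivative, crisp case,
% for 0 < \alpha \le 1:
\mathcal{L}\{{}^{C}D^{\alpha} f(t)\}(s) = s^{\alpha} F(s) - s^{\alpha-1} f(0),
\qquad F(s) = \mathcal{L}\{f(t)\}(s)
```

This is the property that lets an FFDE be converted into an algebraic equation in the transform domain and solved before inverting.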
Sega, Michela; Pennecchi, Francesca; Rinaldi, Sarah; Rolle, Francesca
2016-05-12
A proper evaluation of the uncertainty associated with the quantification of micropollutants in the environment, such as polycyclic aromatic hydrocarbons (PAHs), is crucial for the reliability of the measurement results. The present work describes a comparison between uncertainty evaluation carried out according to the GUM uncertainty framework and the Monte Carlo (MC) method. This comparison was carried out starting from real data sets obtained from the quantification of benzo[a]pyrene (BaP) spiked on filters commonly used for airborne particulate matter sampling. BaP was chosen as target analyte because it is listed in the current European legislation as a marker of the carcinogenic risk of the whole class of PAHs. The MC method, being useful for nonlinear models and for cases where the resulting output distribution of the measurand is asymmetric, is particularly suited to situations in which the results for intrinsically positive quantities are very small and the lower limit of a desired coverage interval, obtained according to the GUM uncertainty framework, can be dramatically close to zero, if not negative. In the case under study, the two approaches were observed to provide different results for BaP masses in samples containing different masses of the analyte, with the MC method giving larger coverage intervals. In addition, for analyte masses close to zero, the GUM uncertainty framework would give a negative lower limit of the uncertainty coverage interval for the measurand, an unphysical result that is avoided when using the MC method. MC simulations, indeed, can be configured so that only positive values are generated, thus obtaining a coverage interval for the measurand that is always positive.
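The abstract's central point can be reproduced numerically (the mass and relative uncertainty below are hypothetical, not the BaP data of the study):

```python
import numpy as np

# For a small, intrinsically positive measurand, a symmetric GUM-style
# interval can dip below zero, while an MC evaluation with a
# positive-support (lognormal) model cannot.

rng = np.random.default_rng(4)

mass = 0.50    # ng, best estimate of analyte mass (hypothetical)
u_rel = 0.60   # 60 % relative standard uncertainty (hypothetical)

# GUM-style expanded uncertainty (coverage factor k = 2, symmetric)
gum_lo, gum_hi = mass - 2 * u_rel * mass, mass + 2 * u_rel * mass

# MC with a lognormal model matching the same mean and relative spread
sigma = np.sqrt(np.log(1 + u_rel**2))
mu = np.log(mass) - 0.5 * sigma**2
samples = rng.lognormal(mu, sigma, 100_000)
mc_lo, mc_hi = np.percentile(samples, [2.5, 97.5])

print(gum_lo)        # negative: unphysical lower limit
print(mc_lo > 0.0)   # True: the MC interval stays positive
```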
Antoshchenkova, Ekaterina; Imbert, David; Richet, Yann; Bardet, Lise; Duluc, Claire-Marie; Rebour, Vincent; Gailler, Audrey; Hébert, Hélène
2016-04-01
The aim of this study is to assess the tsunamigenic potential of the Azores-Gibraltar Fracture Zone (AGFZ). This work is part of the French project TANDEM (Tsunamis in the Atlantic and English ChaNnel: Definition of the Effects through numerical Modeling; www-tandem.cea.fr); special attention is paid to the French Atlantic coasts. Structurally, the AGFZ region is complex and not well understood. However, many of its faults produce earthquakes with significant vertical slip, of a type that can result in tsunamis. We use the major tsunami event of the AGFZ to obtain a regional estimate of the tsunamigenic potential of this zone. The major reported event for this zone is the 1755 Lisbon event. There are large uncertainties concerning the source location and focal mechanism of this earthquake. Hence, a simple deterministic approach is not sufficient to cover, on the one hand, the whole AGFZ with its geological complexity and, on the other, the lack of information concerning the 1755 Lisbon tsunami. The parametric modeling environment Promethée (promethee.irsn.org/doku.php) was coupled to tsunami simulation software based on the shallow water equations in order to propagate uncertainties. Such a statistical point of view allows us to work with multiple hypotheses simultaneously. In our work we introduce the seismic source parameters in the form of distributions, yielding a database of thousands of tsunami scenarios and tsunami wave height distributions. Exploring this database, we present preliminary results for France: tsunami wave heights (within one standard deviation of the mean) can be about 0.5-1 m for the Atlantic coast and approach 0.3 m for the English Channel.
Directory of Open Access Journals (Sweden)
Nerea Mangado
2016-11-01
Computational modeling has become a powerful tool in biomedical engineering thanks to its potential to simulate coupled systems. However, real parameters are usually not accurately known, and variability is inherent in living organisms. To cope with this, probabilistic tools, statistical analysis, and stochastic approaches have been used. This article reviews the analysis of uncertainty and variability in the context of finite element modeling in biomedical engineering. Characterization techniques and propagation methods are presented, as well as examples of their application in biomedical finite element simulations. Uncertainty propagation methods, both non-intrusive and intrusive, are described. Finally, the pros and cons of the different approaches and their use in the scientific community are presented. This leads us to identify future directions for research and methodological development of uncertainty modeling in biomedical engineering.
Energy Technology Data Exchange (ETDEWEB)
Chatterjee, Samrat; Tipireddy, Ramakrishna; Oster, Matthew R.; Halappanavar, Mahantesh
2016-09-16
Securing cyber-systems on a continual basis against a multitude of adverse events is a challenging undertaking. Game-theoretic approaches that model actions of strategic decision-makers are increasingly being applied to address cybersecurity resource allocation challenges. Such game-based models account for multiple player actions and represent cyber-attacker payoffs mostly as point utility estimates. Since a cyber-attacker’s payoff generation mechanism is largely unknown, appropriate representation and propagation of uncertainty is a critical task. In this paper we expand on prior work and focus on operationalizing the probabilistic uncertainty quantification framework for a notional cyber system through: 1) representation of uncertain attacker- and system-related modeling variables as probability distributions and mathematical intervals, and 2) exploration of uncertainty propagation techniques, including two-phase Monte Carlo sampling and probability bounds analysis.
WFR-2D: an analytical model for PWAS-generated 2D ultrasonic guided wave propagation
Shen, Yanfeng; Giurgiutiu, Victor
2014-03-01
This paper presents WaveFormRevealer 2-D (WFR-2D), an analytical predictive tool for the simulation of 2-D ultrasonic guided wave propagation and interaction with damage. The design of structural health monitoring (SHM) systems and self-aware smart structures requires the exploration of a wide range of parameters to achieve best detection and quantification of certain types of damage. Such need for parameter exploration on sensor dimension, location, guided wave characteristics (mode type, frequency, wavelength, etc.) can be best satisfied with analytical models which are fast and efficient. The analytical model was constructed based on the exact 2-D Lamb wave solution using Bessel and Hankel functions. Damage effects were inserted in the model by considering the damage as a secondary wave source with complex-valued directivity scattering coefficients containing both amplitude and phase information from wave-damage interaction. The analytical procedure was coded with MATLAB, and a predictive simulation tool called WaveFormRevealer 2-D was developed. The wave-damage interaction coefficients (WDICs) were extracted from harmonic analysis of local finite element model (FEM) with artificial non-reflective boundaries (NRB). The WFR-2D analytical simulation results were compared and verified with full scale multiphysics finite element models and experiments with scanning laser vibrometer. First, Lamb wave propagation in a pristine aluminum plate was simulated with WFR-2D, compared with finite element results, and verified by experiments. Then, an inhomogeneity was machined into the plate to represent damage. Analytical modeling was carried out, and verified by finite element simulation and experiments. This paper finishes with conclusions and suggestions for future work.
Gronewold, A. D.; Wolpert, R. L.; Reckhow, K. H.
2007-12-01
Most probable number (MPN) and colony-forming-unit (CFU) are two estimates of fecal coliform bacteria concentration commonly used as measures of water quality in United States shellfish harvesting waters. The MPN is the maximum likelihood estimate (or MLE) of the true fecal coliform concentration based on counts of non-sterile tubes in serial dilution of a sample aliquot, indicating bacterial metabolic activity. The CFU is the MLE of the true fecal coliform concentration based on the number of bacteria colonies emerging on a growth plate after inoculation from a sample aliquot. Each estimating procedure has intrinsic variability and is subject to additional uncertainty arising from minor variations in experimental protocol. Several versions of each procedure (using different sized aliquots or different numbers of tubes, for example) are in common use, each with its own levels of probabilistic and experimental error and uncertainty. It has been observed empirically that the MPN procedure is more variable than the CFU procedure, and that MPN estimates are somewhat higher on average than CFU estimates, on split samples from the same water bodies. We construct a probabilistic model that provides a clear theoretical explanation for the observed variability in, and discrepancy between, MPN and CFU measurements. We then explore how this variability and uncertainty might propagate into shellfish harvesting area management decisions through a two-phased modeling strategy. First, we apply our probabilistic model in a simulation-based analysis of future water quality standard violation frequencies under alternative land use scenarios, such as those evaluated under guidelines of the total maximum daily load (TMDL) program. Second, we apply our model to water quality data from shellfish harvesting areas which at present are closed (either conditionally or permanently) to shellfishing, to determine if alternative laboratory analysis procedures might have led to different
Uncertainty propagation in modeling of plasma-assisted hydrogen production from biogas
Zaherisarabi, Shadi; Venkattraman, Ayyaswamy
2016-10-01
With the growing concern about global warming and the resulting emphasis on decreasing greenhouse gas emissions, there is an ever-increasing need to utilize energy-production strategies that can decrease the burning of fossil fuels. In this context, hydrogen remains an attractive clean-energy fuel that can be oxidized to produce water as a by-product. In spite of being an abundant species, hydrogen is seldom found in a form that is directly usable for energy production. While steam reforming of methane is one popular technique for hydrogen production, plasma-assisted conversion of biogas (carbon dioxide + methane) to hydrogen is an attractive alternative. Apart from producing hydrogen, the other advantage of using biogas as raw material is the fact that two potent greenhouse gases are consumed. In this regard, modeling is an important tool to understand and optimize plasma-assisted conversion of biogas. The primary goal of this work is to perform a comprehensive statistical study that quantifies the influence of uncertain rate constants, thereby determining the key reaction pathways. A 0-D chemical kinetics solver in the OpenFOAM suite is used to perform a series of simulations to propagate the uncertainty in rate constants and obtain the resulting mean and standard deviation of outcomes.
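Propagation of rate-constant uncertainty through a 0-D model can be sketched with a single first-order reaction whose decay is analytic (all numbers are made up; a real study would sample every uncertain rate in the full biogas mechanism and integrate the ODE system):

```python
import math
import random

# Sample a lognormally uncertain rate constant, run the "model" for each
# sample, and summarize the induced spread of the outcome.

random.seed(5)

k_nominal = 2.0   # 1/s, hypothetical nominal rate constant for A -> products
gsd = 1.5         # geometric standard deviation on k
t, c0 = 1.0, 1.0  # end time (s) and initial concentration of A

def sample_outcome():
    k = k_nominal * math.exp(random.gauss(0.0, math.log(gsd)))
    return c0 * math.exp(-k * t)   # remaining A at time t (analytic "solver")

outs = [sample_outcome() for _ in range(20000)]
mean = sum(outs) / len(outs)
std = (sum((o - mean) ** 2 for o in outs) / (len(outs) - 1)) ** 0.5
print(mean, std)   # spread of the outcome induced by uncertainty in k
```

Ranking reactions by how much each rate's uncertainty contributes to this output spread is what identifies the key pathways.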
Ryerson, F. J.; Ezzedine, S. M.; Antoun, T.
2013-12-01
The success of implementation and execution of numerous subsurface energy technologies, such as shale gas extraction, geothermal energy, and underground coal gasification, relies on detailed characterization of the geology and subsurface properties. For example, spatial variability of subsurface permeability controls multi-phase flow and hence impacts the prediction of reservoir performance. Subsurface properties can vary significantly over several length scales, making detailed subsurface characterization infeasible if not impossible. Therefore, in common practice, only sparse measurements are available to image or characterize the entire reservoir. For example, pressure, P, permeability, k, and production rate, Q, measurements are only available at the monitoring and operational wells. Elsewhere, the spatial distribution of k is determined by various deterministic or stochastic interpolation techniques, and P and Q are calculated from the governing forward mass-balance equation assuming k is given at all locations. Uncertainty-quantification drivers, such as PSUADE, are then used to propagate and quantify the uncertainty (UQ) of the quantities (variables) of interest using forward solvers. Unfortunately, forward-solver techniques and other interpolation schemes are rarely constrained by the inverse problem itself: given P and Q at observation points, determine the spatially variable map of k. The approach presented here, motivated by fluid imaging for subsurface characterization and monitoring, was developed by progressively solving increasingly complex realistic problems. The essence of this novel approach is that the forward and inverse partial differential equations are themselves the interpolators for P, k, and Q, rather than extraneous and sometimes ad hoc schemes. Three cases with different data sparsity are investigated. In the simplest case, a sufficient number of passive pressure data (pre-production pressure gradients) are given. Here, only the inverse hyperbolic
González, A Gustavo; Angeles Herrador, M; Asuero, Agustín G
2005-02-28
The estimation of the measurement uncertainty of analytical assays based on the LGC/VAM protocol from validation data is fully revisited and discussed in the light of the study of precision, trueness and robustness.
Wave-like warp propagation in circumbinary discs I. Analytic theory and numerical simulations
Facchini, Stefano; Price, Daniel J
2013-01-01
In this paper we analyse the propagation of warps in protostellar circumbinary discs. We use these systems as a test environment in which to study warp propagation in the bending-wave regime, with the addition of an external torque due to the binary gravitational potential. In particular, we want to test the linear regime, for which an analytic theory has been developed. In order to do so, we first compute analytically the steady state shape of an inviscid disc subject to the binary torques. The steady state tilt is a monotonically increasing function of radius. In the absence of viscosity, the disc does not present any twist. Then, we compare the time-dependent evolution of the warped disc calculated via the known linearised equations both with the analytic solutions and with full 3D numerical simulations, which have been performed with the PHANTOM SPH code using 2 million particles. We find a good agreement both in the tilt and in the phase evolution for small inclinations, even at very low viscosities. Mor...
Gosset, Marielle; Casse, Claire; Peugeot, Christophe; Boone, Aaron; Pedinotti, Vanessa
2015-04-01
Global measurement of rainfall offers new opportunities for hydrological monitoring, especially for some of the largest tropical rivers, where the rain gauge network is sparse and radar is not available. A member of the GPM constellation, the French-Indian satellite mission Megha-Tropiques (MT), dedicated to the water and energy budget in the tropical atmosphere, contributes to better monitoring of rainfall in the inter-tropical zone. As part of this mission, research is being developed on the use of satellite rainfall products for hydrological research or operational applications such as flood monitoring. A key issue for such applications is how to account for rainfall product biases and uncertainties, and how to propagate them into the end-user models. Another important question is how to choose the best space-time resolution for the rainfall forcing, given that both model performance and rain-product uncertainties are resolution dependent. This paper analyses the potential of satellite rainfall products combined with hydrological modeling to monitor the Niger river floods in the city of Niamey, Niger. A dramatic increase in these floods has been observed in the last decades. The study focuses on the 125,000 km2 area in the vicinity of Niamey, where local runoff is responsible for the most extreme floods recorded in recent years. Several rainfall products are tested as forcing for the SURFEX-TRIP hydrological simulations. Differences in rainfall amount, number of rainy days, spatial extension of the rainfall events, and frequency distribution of the rain rates are found among the products. Their impacts on the simulated outflow are analyzed. The simulations based on the real-time estimates produce an excess in the discharge. For flood prediction, the problem can be overcome by a prior adjustment of the products - as done here with probability matching - or by analysing the simulated discharge in terms of percentile or anomaly. All tested products exhibit some
Groves, Curtis E.; Ilie, Marcel; Shallhorn, Paul A.
2014-01-01
Computational Fluid Dynamics (CFD) is the standard numerical tool used by fluid dynamicists to estimate solutions to many problems in academia, government, and industry. CFD is known to have errors and uncertainties, and there is no universally adopted method to estimate such quantities. This paper describes an approach to estimate CFD uncertainties strictly numerically using inputs and the Student-T distribution. The approach is compared to an exact analytical solution of fully developed, laminar flow between infinite, stationary plates. It is shown that treating all CFD input parameters as oscillatory uncertainty terms coupled with the Student-T distribution can encompass the exact solution.
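The core of a Student-T treatment of repeated simulation outputs can be sketched as follows. This is a minimal illustration, not the authors' code; the function name and the sample values are hypothetical:

```python
import numpy as np
from scipy import stats

def t_uncertainty(samples, confidence=0.95):
    """Mean and half-width of a two-sided Student-T confidence interval
    for the mean of a small sample of simulation outputs."""
    samples = np.asarray(samples, dtype=float)
    n = samples.size
    # Standard error of the mean from the sample standard deviation
    sem = samples.std(ddof=1) / np.sqrt(n)
    # t critical value with n-1 degrees of freedom
    t_crit = stats.t.ppf(0.5 + confidence / 2.0, df=n - 1)
    return samples.mean(), t_crit * sem

# Hypothetical peak-velocity estimates from CFD runs with perturbed inputs
mean, half_width = t_uncertainty([1.48, 1.52, 1.50, 1.47, 1.53])
```

The small-sample t interval is wider than a Gaussian interval, which is what makes it suitable when only a handful of CFD realizations are affordable.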
A novel stochastic collocation method for uncertainty propagation in complex mechanical systems
Qi, WuChao; Tian, SuMei; Qiu, ZhiPing
2015-02-01
This paper presents a novel stochastic collocation method based on the equivalent weak form of multivariate function integrals to quantify and manage uncertainties in complex mechanical systems. The proposed method, which combines the advantages of the response surface method and the traditional stochastic collocation method, sets integral points only at the guide lines of the response surface. The statistics, in an engineering problem with many uncertain parameters, are then transformed into a linear combination of the statistics of simple functions. Furthermore, the issue of determining a simple method to solve for the weight-factor sets is discussed in detail. The weight-factor sets of two commonly used probability distribution types are given in table form. Studies on computational accuracy and effort show that a good balance between accuracy and computational cost is achieved. It should be noted that this is a non-gradient, non-intrusive algorithm with strong portability. To validate the procedure, three numerical examples, concerning a mathematical function with an analytical expression, the structural design of a straight wing, and the flutter analysis of a composite wing, are used to show the effectiveness of the guided stochastic collocation method.
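The guided collocation scheme itself is not public, but the underlying idea of stochastic collocation, evaluating the model only at a few deterministic nodes and recovering output statistics from a weighted sum, can be sketched for a single Gaussian input using standard Gauss-Hermite points (the function name and the quadratic test model are assumptions for illustration):

```python
import numpy as np

def collocation_stats(model, mu, sigma, n_points=5):
    """Mean and variance of model(X) for X ~ N(mu, sigma^2),
    estimated from a few deterministic collocation points."""
    # Probabilists' Gauss-Hermite nodes/weights (weight exp(-x^2/2))
    nodes, weights = np.polynomial.hermite_e.hermegauss(n_points)
    weights = weights / np.sqrt(2.0 * np.pi)  # normalize to the Gaussian pdf
    y = model(mu + sigma * nodes)             # model runs at collocation points
    mean = np.dot(weights, y)
    var = np.dot(weights, (y - mean) ** 2)
    return mean, var

# Quadratic model: the exact mean is mu^2 + sigma^2
mean, var = collocation_stats(lambda x: x**2, mu=1.0, sigma=0.5)
```

With five nodes the quadrature is exact for polynomials up to degree nine, so both moments of the quadratic model are recovered exactly; only five model evaluations are needed, versus thousands for Monte Carlo.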
Energy Technology Data Exchange (ETDEWEB)
Pal Verma, Mahendra [Instituto de Investigaciones Electricas, Cuernavaca, Morelos (Mexico)
2008-07-01
A procedure was developed to estimate the analytical uncertainty in each parameter of the geochemical analysis of geothermal fluid. The estimation of the uncertainty is based on the results of the geochemical analyses of geothermal fluids (numbered from 0 to 14), obtained within the framework of the inter-laboratory comparison program among geochemical laboratories over the last 30 years. The propagation of the analytical uncertainty was also carried out in the calculation of the parameters of the geothermal fluid in the reservoir, through the uncertainty-interval and GUM (Guide to the Expression of Uncertainty in Measurement) methods. The application of the methods is illustrated by the pH calculation of the geothermal fluid in the reservoir, considering samples 10 and 11 as separated waters at atmospheric conditions.
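The GUM first-order propagation used above combines input standard uncertainties through the sensitivity coefficients: u_y^2 = sum_i (df/dx_i)^2 u_i^2 for uncorrelated inputs. A generic sketch (not the author's procedure; the pH example uses an illustrative hydrogen-ion activity and uncertainty, not the paper's sample data):

```python
import numpy as np

def gum_combined_uncertainty(f, x, u, h=1e-6):
    """GUM first-order combined standard uncertainty of y = f(x):
    u_y^2 = sum_i (df/dx_i)^2 * u_i^2, assuming uncorrelated inputs."""
    x = np.asarray(x, dtype=float)
    u = np.asarray(u, dtype=float)
    grad = np.empty_like(x)
    for i in range(x.size):
        step = h * abs(x[i]) if x[i] != 0 else h
        dx = np.zeros_like(x)
        dx[i] = step
        # Central finite difference for the sensitivity coefficient df/dx_i
        grad[i] = (f(x + dx) - f(x - dx)) / (2.0 * step)
    return float(np.sqrt(np.sum((grad * u) ** 2)))

# Illustrative: pH = -log10(aH) with a 5% relative uncertainty on aH
pH_u = gum_combined_uncertainty(lambda v: -np.log10(v[0]), x=[1e-7], u=[5e-9])
```

For pH the sensitivity is 1/(aH ln 10), so a 5% relative uncertainty on the activity maps to roughly 0.02 pH units regardless of the pH value itself.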
Validation of an analytical compressed elastic tube model for acoustic wave propagation
Van Hirtum, A.; Blandin, R.; Pelorson, X.
2015-12-01
Acoustic wave propagation through a compressed elastic tube is a recurrent problem in engineering. Compression of the tube is achieved by pinching it between two parallel bars so that the pinching effort as well as the longitudinal position of pinching can be controlled. A stadium-based geometrical tube model is combined with a plane wave acoustic model in order to estimate acoustic wave propagation through the elastic tube as a function of pinching effort, pinching position, and outlet termination (flanged or unflanged). The model outcome is validated against experimental data obtained in a frequency range from 3.5 kHz up to 10 kHz by displacing an acoustic probe along the tube's centerline. Due to plane wave model assumptions and the decrease of the lowest higher order mode cut-on frequency with increasing pinching effort, the difference between modeled and measured data is analysed in three frequency bands, up to 5 kHz, 8 kHz, and 9.5 kHz, respectively. It is seen that the mean and standard error within each frequency band do not significantly vary with pinching effort, pinching position, or outlet termination. Therefore, it is concluded that the analytical tube model is suitable to approximate the elastic tube geometry when modeling acoustic wave propagation through the pinched elastic tube with either flanged or unflanged termination.
Analytical and experimental study on wave propagation problems in orthotropic media
Institute of Scientific and Technical Information of China (English)
[No author listed]
2001-01-01
Wave propagation problems in orthotropic media are studied jointly by analytical and experimental methods in this paper. Dynamic orthotropic photoelasticity, which experimentally studies the dynamic behavior of orthotropic materials on a macroscopic scale by employing orthotropic birefringent materials, is established. A dynamic stress-optic law for orthotropic birefringent materials is postulated, and practical methods for calibrating dynamic mechanical constants and dynamic stress-fringe values are proposed. Meanwhile, a time-domain boundary element method (BEM) for wave propagation in orthotropic media is also presented, based on the theory of elastodynamics. A scheme of stress calculation, necessary for strength analysis, is established. The paper focuses on applications to wave propagation problems in orthotropic media through three examples. Semi-infinite orthotropic plates with and without a circular hole, modeled by a unidirectional fiber-reinforced composite under impact loading, are analyzed. Time histories of birefringent fringe orders or stresses for specific points of the plates are obtained from the two methods and compared with each other. Based on this comparative study, the dynamic response of an underground workshop under seismic waves is studied by the time-domain BEM. The responses of displacements and stresses are solved. The effects of the angle and frequency of incident waves and the degree of media anisotropy on the dynamic response of the underground workshop are investigated.
Gudimetla, V S Rao; Holmes, Richard B; Riker, Jim F
2012-12-01
An analytical expression for the log-amplitude correlation function for plane wave propagation through anisotropic non-Kolmogorov turbulent atmosphere is derived. The closed-form analytic results are based on the Rytov approximation. These results agree well with wave optics simulation based on the more general Fresnel approximation as well as with numerical evaluations, for low-to-moderate strengths of turbulence. The new expression reduces correctly to the previously published analytic expressions for the cases of plane wave propagation through both nonisotropic Kolmogorov turbulence and isotropic non-Kolmogorov turbulence cases. These results are useful for understanding the potential impact of deviations from the standard isotropic Kolmogorov spectrum.
Rose, K.; Bauer, J. R.; Baker, D. V.
2015-12-01
As big data computing capabilities are increasingly paired with spatial analytical tools and approaches, there is a need to ensure that uncertainty associated with the datasets used in these analyses is adequately incorporated and portrayed in results. Often the products of spatial analyses, big data and otherwise, are developed using discontinuous, sparse, and often point-driven data to represent continuous phenomena. Results from these analyses are generally presented without clear explanations of the uncertainty associated with the interpolated values. The Variable Grid Method (VGM) offers users a flexible approach designed for a variety of analyses in which there is a need to study, evaluate, and analyze spatial trends and patterns while maintaining a connection to, and communicating the uncertainty in, the underlying spatial datasets. The VGM outputs a simultaneous visualization of the spatial data analyses and a quantification of the underlying uncertainties, which can be calculated from sample density, sample variance, interpolation error, or uncertainty derived from multiple simulations. In this presentation we will show how we are utilizing Hadoop to store and perform spatial analysis through the development of custom Spark and MapReduce applications that incorporate ESRI Hadoop libraries. The team will present custom 'Big Data' geospatial applications that run on the Hadoop cluster and integrate the team's probabilistic VGM approach with ESRI ArcMap. The VGM-Hadoop tool has been specially built as a multi-step MapReduce application running on the Hadoop cluster for the purpose of data reduction. This reduction is accomplished by generating multi-resolution, non-overlapping, attributed topology that is then further processed using ESRI's Geostatistical Analyst to convey a probabilistic model of a chosen study region. Finally, we will share our approach to the implementation of data reduction and topology generation
Gurdak, Jason J.; Qi, Sharon L.; Geisler, Michael L.
2009-01-01
The U.S. Geological Survey Raster Error Propagation Tool (REPTool) is a custom tool for use with the Environmental System Research Institute (ESRI) ArcGIS Desktop application to estimate error propagation and prediction uncertainty in raster processing operations and geospatial modeling. REPTool is designed to introduce concepts of error and uncertainty in geospatial data and modeling and provide users of ArcGIS Desktop a geoprocessing tool and methodology to consider how error affects geospatial model output. Similar to other geoprocessing tools available in ArcGIS Desktop, REPTool can be run from a dialog window, from the ArcMap command line, or from a Python script. REPTool consists of public-domain, Python-based packages that implement Latin Hypercube Sampling within a probabilistic framework to track error propagation in geospatial models and quantitatively estimate the uncertainty of the model output. Users may specify error for each input raster or model coefficient represented in the geospatial model. The error for the input rasters may be specified as either spatially invariant or spatially variable across the spatial domain. Users may specify model output as a distribution of uncertainty for each raster cell. REPTool uses the Relative Variance Contribution method to quantify the relative error contribution from the two primary components in the geospatial model - errors in the model input data and coefficients of the model variables. REPTool is appropriate for many types of geospatial processing operations, modeling applications, and related research questions, including applications that consider spatially invariant or spatially variable error in geospatial data.
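REPTool's own Python packages are not reproduced here, but the core mechanism it describes, propagating input error through a model with Latin Hypercube Sampling, can be sketched generically with SciPy (the function name and the toy two-layer weighted sum are assumptions for illustration, not REPTool's API):

```python
import numpy as np
from scipy.stats import qmc, norm

def lhs_propagate(model, means, sds, n_samples=1000, seed=0):
    """Propagate independent Gaussian input errors through a model
    using Latin Hypercube Sampling; returns output mean and std."""
    d = len(means)
    sampler = qmc.LatinHypercube(d=d, seed=seed)
    u = sampler.random(n_samples)             # stratified uniforms in (0, 1)^d
    x = norm.ppf(u, loc=means, scale=sds)     # map to Gaussian inputs
    y = np.apply_along_axis(model, 1, x)      # evaluate model per sample
    return y.mean(), y.std(ddof=1)

# Toy raster operation: weighted sum of two uncertain input layers
mean, sd = lhs_propagate(lambda v: 0.7 * v[0] + 0.3 * v[1],
                         means=[10.0, 20.0], sds=[1.0, 2.0])
```

Because LHS stratifies each input marginal, the output statistics converge with far fewer samples than plain Monte Carlo, which matters when each "model run" is an expensive raster operation.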
Energy Technology Data Exchange (ETDEWEB)
Bidabadi, Mehdi; Rahbari, Alireza [Iran University of Science and Technology, Tehran (Iran, Islamic Republic of)
2009-09-15
This paper presents the effects of the temperature difference between gas and particles, different Lewis numbers, and heat loss from the walls on the structure of premixed flame propagation in a combustible system containing uniformly distributed volatile fuel particles in an oxidizing gas mixture. It is assumed that the fuel particles first vaporize to yield a gaseous fuel, which is oxidized in the gas phase. The analysis is performed in the asymptotic limit, where the value of the characteristic Zeldovich number is large. The structure of the flame is composed of a preheat zone, a reaction zone, and a convection zone. The governing equations and required boundary conditions are applied in each zone, and an analytical method is used to solve these equations. The obtained results illustrate the effects of the above parameters on the variations of the dimensionless temperature, particle mass fraction, flame temperature, and burning velocity for the gas and particles.
Malkov, M A
2016-01-01
An analytic solution for a Fokker-Planck equation that describes propagation of energetic particles through a scattering medium is obtained. The solution is found in terms of an infinite series of mixed moments of the particle distribution. The spatial dispersion of a particle cloud released at t = 0 evolves through three phases: ballistic (t < Tc), transdiffusive (t ~ Tc), and diffusive (t > Tc), where Tc is the collision time. The ballistic phase is characterized by a decelerating expansion of the initial point source in the form of a "box" distribution with broadening walls. The next, transdiffusive phase is marked by the box walls broadening to the box size and a noticeable slowdown of the expansion. Finally, the evolution enters the conventional diffusion phase.
Fast and accurate analytical model to solve inverse problem in SHM using Lamb wave propagation
Poddar, Banibrata; Giurgiutiu, Victor
2016-04-01
Lamb wave propagation is at the center of attention of researchers in structural health monitoring (SHM) of thin-walled structures. This is because Lamb wave modes are natural modes of wave propagation in these structures, with long travel distances and little attenuation. This brings the prospect of monitoring large structures with few sensors/actuators. However, the problem of damage detection and identification is an "inverse problem" in which we do not have the luxury of knowing the exact mathematical model of the system. On top of that, the problem is more challenging due to the confounding factors of statistical variation of the material and geometric properties. Typically, this problem may also be ill-posed. Due to all these complexities, the direct solution of the problem of damage detection and identification in SHM is impossible. Therefore, an indirect method using the solution of the "forward problem" is popular for solving the "inverse problem". This requires a fast forward-problem solver. Due to the complexities involved in the forward problem of the scattering of Lamb waves from damage, researchers rely primarily on numerical techniques such as FEM, BEM, etc. But these methods are slow and practically impossible to use in structural health monitoring. We have developed a fast and accurate analytical forward-problem solver for this purpose. This solver, CMEP (complex modes expansion and vector projection), can simulate the scattering of Lamb waves from all types of damage in thin-walled structures quickly and accurately, to assist the inverse problem solver.
First-order analytic propagation of satellites in the exponential atmosphere of an oblate planet
Martinusi, Vladimir; Dell'Elce, Lamberto; Kerschen, Gaëtan
2017-04-01
The paper offers the fully analytic solution to the motion of a satellite orbiting under the influence of the two major perturbations, due to the oblateness and the atmospheric drag. The solution is presented in a time-explicit form and takes into account an exponential distribution of the atmospheric density, an assumption that is reasonably close to reality. The approach involves two essential steps. The first one concerns a new approximate mathematical model that admits a closed-form solution with respect to a set of new variables. The second step is the determination of an infinitesimal contact transformation that allows one to navigate between the new and the original variables. This contact transformation is obtained in exact form, and afterwards a Taylor series approximation is proposed in order to make all the computations explicit. The aforementioned transformation accommodates both perturbations, improving the accuracy of the orbit predictions by one order of magnitude with respect to the case when the atmospheric drag is absent from the transformation. Numerical simulations are performed for a low Earth orbit starting at an altitude of 350 km, and they show that the incorporation of drag terms into the contact transformation generates an error reduction by a factor of 7 in the position vector. The proposed method aims at improving the accuracy of analytic orbit propagation and transforming it into a viable alternative to the computationally intensive numerical methods.
Gudimetla, V S Rao; Holmes, Richard B; Riker, Jim F
2014-01-01
An analytical expression for the log-amplitude correlation function based on the Rytov approximation is derived for spherical wave propagation through an anisotropic non-Kolmogorov refractive turbulent atmosphere. The expression reduces correctly to the previously published analytic expressions for the case of spherical wave propagation through isotropic Kolmogorov turbulence. These results agree well with a wave-optics simulation based on the more general Fresnel approximation, as well as with numerical evaluations, for low-to-moderate strengths of turbulence. These results are useful for understanding the potential impact of deviations from the standard isotropic Kolmogorov spectrum.
Nuclear Data Uncertainty Propagation to Reactivity Coefficients of a Sodium Fast Reactor
Herrero, J. J.; Ochoa, R.; Martínez, J. S.; Díez, C. J.; García-Herranz, N.; Cabellos, O.
2014-04-01
The assessment of the uncertainty levels in the design and safety parameters of the innovative European Sodium Fast Reactor (ESFR) is mandatory. Some of these relevant safety quantities are the Doppler and void reactivity coefficients, whose uncertainties are quantified. In addition, the nuclear reaction data where an improvement would certainly benefit the design accuracy are identified. This work has been performed with the SCALE 6.1 code suite and its multigroup cross-section library based on the ENDF/B-VII.0 evaluation.
Terando, A. J.; Reich, B. J.; Pacifici, K.
2013-12-01
Fire is an important disturbance process in many coupled natural-human systems. Changes in the frequency and severity of fires due to anthropogenic climate change could have significant costs to society and to the plant and animal communities that are adapted to a particular fire regime. Planning for these changes requires a robust model of the relationship between climate and fire that accounts for multiple sources of uncertainty that are present when simulating ecological and climatological processes. Here we model how anthropogenic climate change could affect the wildfire regime for a region in the Southeast US whose natural ecosystems are dependent on frequent, low-intensity fires, while humans are at risk from large catastrophic fires. We develop a modeling framework that incorporates three major sources of uncertainty: (1) uncertainty in the ecological drivers of expected monthly area burned, (2) uncertainty in the environmental drivers influencing the probability of an extreme fire event, and (3) structural uncertainty in different downscaled climate models. In addition we use two policy-relevant emission scenarios (climate stabilization and 'business-as-usual') to characterize the uncertainty in future greenhouse gas forcings. We use a Bayesian framework to incorporate different sources of uncertainty, including simulation of predictive errors and Stochastic Search Variable Selection. Our results suggest that although the mean process remains stationary, the probability of extreme fires declines through time, owing to the persistence of high atmospheric moisture content during the peak fire season that dampens the effect of increasing temperatures. Including multiple sources of uncertainty leads to wide prediction intervals, but is potentially more useful for decision-makers that will require adaptation strategies that are robust to rapid but uncertain climate and ecological change.
Propagation of uncertainty and sensitivity analysis in an integral oil-gas plume model
Wang, Shitao
2016-05-27
Polynomial Chaos expansions are used to analyze uncertainties in an integral oil-gas plume model simulating the Deepwater Horizon oil spill. The study focuses on six uncertain input parameters—two entrainment parameters, the gas to oil ratio, two parameters associated with the droplet-size distribution, and the flow rate—that impact the model's estimates of the plume's trap and peel heights, and of its various gas fluxes. The ranges of the uncertain inputs were determined by experimental data. Ensemble calculations were performed to construct polynomial chaos-based surrogates that describe the variations in the outputs due to variations in the uncertain inputs. The surrogates were then used to estimate reliably the statistics of the model outputs, and to perform an analysis of variance. Two experiments were performed to study the impacts of high and low flow rate uncertainties. The analysis shows that in the former case the flow rate is the largest contributor to output uncertainties, whereas in the latter case, with the uncertainty range constrained by a posteriori analyses, the flow rate's contribution becomes negligible. The trap and peel heights uncertainties are then mainly due to uncertainties in the 95% percentile of the droplet size and in the entrainment parameters.
Propagation of uncertainty and sensitivity analysis in an integral oil-gas plume model
Wang, Shitao; Iskandarani, Mohamed; Srinivasan, Ashwanth; Thacker, W. Carlisle; Winokur, Justin; Knio, Omar M.
2016-05-01
Polynomial Chaos expansions are used to analyze uncertainties in an integral oil-gas plume model simulating the Deepwater Horizon oil spill. The study focuses on six uncertain input parameters—two entrainment parameters, the gas to oil ratio, two parameters associated with the droplet-size distribution, and the flow rate—that impact the model's estimates of the plume's trap and peel heights, and of its various gas fluxes. The ranges of the uncertain inputs were determined by experimental data. Ensemble calculations were performed to construct polynomial chaos-based surrogates that describe the variations in the outputs due to variations in the uncertain inputs. The surrogates were then used to estimate reliably the statistics of the model outputs, and to perform an analysis of variance. Two experiments were performed to study the impacts of high and low flow rate uncertainties. The analysis shows that in the former case the flow rate is the largest contributor to output uncertainties, whereas in the latter case, with the uncertainty range constrained by a posteriori analyses, the flow rate's contribution becomes negligible. The trap and peel heights uncertainties are then mainly due to uncertainties in the 95% percentile of the droplet size and in the entrainment parameters.
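The nonintrusive polynomial chaos construction used in these studies can be sketched in one dimension: fit a Hermite-polynomial surrogate to ensemble runs, then read the output mean and variance directly off the coefficients. This is a minimal sketch of the general technique, not the authors' multi-parameter implementation; the function name and the linear test model are assumptions:

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as H

def pce_surrogate(model, degree=3, n_train=200, seed=1):
    """Nonintrusive polynomial chaos for a model of one standard-normal
    input: least-squares fit in the probabilists' Hermite basis."""
    rng = np.random.default_rng(seed)
    xi = rng.standard_normal(n_train)       # ensemble of input samples
    y = model(xi)                           # ensemble of model outputs
    coef = H.hermefit(xi, y, degree)        # surrogate coefficients c_k
    # Orthogonality gives: mean = c_0, variance = sum_{k>=1} c_k^2 * k!
    mean = coef[0]
    var = sum(c**2 * math.factorial(k) for k, c in enumerate(coef) if k > 0)
    return coef, mean, var

coef, mean, var = pce_surrogate(lambda x: 2.0 + 3.0 * x)
```

Once the surrogate is built, statistics and variance decompositions come from the coefficients alone, with no further model runs, which is the key efficiency gain over direct Monte Carlo.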
Samad, Noor Asma Fazli Abdul; Sin, Gürkan; Gernaey, Krist V; Gani, Rafiqul
2013-11-01
This paper presents the application of uncertainty and sensitivity analysis as part of a systematic model-based process monitoring and control (PAT) system design framework for crystallization processes. For the uncertainty analysis, the Monte Carlo procedure is used to propagate input uncertainty, while for sensitivity analysis, global methods including the standardized regression coefficients (SRC) and Morris screening are used to identify the most significant parameters. The potassium dihydrogen phosphate (KDP) crystallization process is used as a case study, both in open-loop and closed-loop operation. In the uncertainty analysis, the impact on the predicted output of uncertain parameters related to the nucleation and the crystal growth model has been investigated for both a one- and two-dimensional crystal size distribution (CSD). The open-loop results show that the input uncertainties lead to significant uncertainties on the CSD, with appearance of a secondary peak due to secondary nucleation for both cases. The sensitivity analysis indicated that the most important parameters affecting the CSDs are nucleation order and growth order constants. In the proposed PAT system design (closed-loop), the target CSD variability was successfully reduced compared to the open-loop case, also when considering uncertainty in nucleation and crystal growth model parameters. The latter forms a strong indication of the robustness of the proposed PAT system design in achieving the target CSD and encourages its transfer to full-scale implementation.
DEFF Research Database (Denmark)
He, Xiulan
parameters and model structures, which are the primary focuses of this PhD research. Parameter uncertainty was analyzed using an optimization tool (PEST: Parameter ESTimation) in combination with a random sampling method (LHS: Latin Hypercube Sampling). Model structure, namely geological architecture...... was analyzed using both a traditional two-point based geostatistical approach and multiple-point geostatistics (MPS). Our results documented that model structure is as important as model parameter regarding groundwater modeling uncertainty. Under certain circumstances the inaccuracy on model structure can...
DEFF Research Database (Denmark)
Sin, Gürkan; Gernaey, Krist; Eliasson Lantz, Anna
2009-01-01
(about 10) out of a total 56 were mainly responsible for the output uncertainty. Among these significant parameters, one finds parameters related to fermentation characteristics such as biomass metabolism, chemical equilibria and mass-transfer. Overall the uncertainty and sensitivity analysis are found...... promising for helping to build reliable mechanistic models and to interpret the model outputs properly. These tools make part of good modeling practice, which can contribute to successful PAT applications for increased process understanding, operation and control purposes. © 2009 American Institute...
Calibration and Forward Uncertainty Propagation for Large-eddy Simulations of Engineering Flows
Energy Technology Data Exchange (ETDEWEB)
Templeton, Jeremy Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Blaylock, Myra L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Domino, Stefan P. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hewson, John C. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Kumar, Pritvi Raj [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Ling, Julia [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Najm, Habib N. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Ruiz, Anthony [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Safta, Cosmin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Sargsyan, Khachik [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Stewart, Alessia [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Wagner, Gregory [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2015-09-01
The objective of this work is to investigate the efficacy of using calibration strategies from Uncertainty Quantification (UQ) to determine model coefficients for LES. As the target methods are for engineering LES, uncertainty from numerical aspects of the model must also be quantified. The ultimate goal of this research thread is to generate a cost-versus-accuracy curve for LES such that the cost could be minimized given an accuracy prescribed by an engineering need. Realization of this goal would enable LES to serve as a predictive simulation tool within the engineering design process.
Uncertainty propagation in up-scaling of subsoil parameters, no fixed distributions allowed
Lourens, Aris; van Geer, Frans C.
2013-01-01
When creating numerical groundwater models, the structure and properties of the subsoil are indispensable information. Like all model data, these data are subject to uncertainty. Building a groundwater model, the available geological information, like the geological structure and parameter values, ha
Xu, Yanlong
2015-08-01
The coupled mode theory with coupling of diffraction modes and waveguide modes is usually used for the calculation of transmission and reflection coefficients for electromagnetic waves traveling through periodic sub-wavelength structures. In this paper, I extend this method to derive analytical solutions of high-order dispersion relations for shear horizontal (SH) wave propagation in elastic plates with periodic stubs. In the long wavelength regime, the explicit expression is obtained by this theory and derived specially by employing an effective medium. This indicates that the periodic stubs are equivalent to an effective homogeneous layer in the long wavelength regime. Notably, in the short wavelength regime, high-order diffraction modes in the plate and high-order waveguide modes in the stubs are considered with mode coupling to compute the band structures. Numerical results of the coupled mode theory agree well with the results of the finite element method (FEM). In addition, the band structures' evolution with the height of the stubs and the thickness of the plate shows clearly that the method can predict well the Bragg band gaps, locally resonant band gaps and high-order symmetric and anti-symmetric thickness-twist modes for the periodically structured plates. © 2015 Elsevier B.V.
Hong, Jinglan
2012-06-01
Uncertainty information is essential for the proper use of life cycle assessment and environmental assessments in decision making. To investigate the uncertainties of biodiesel and determine the level of confidence in the assertion that biodiesel is more environmentally friendly than diesel, an explicit analytical approach based on the Taylor series expansion for lognormal distributions was applied in the present study. A biodiesel case study demonstrates that the probability that biodiesel has a lower global warming score and a lower non-renewable energy score than diesel is 92.3% and 93.1%, respectively. These results indicate the level of confidence in the assertion that biodiesel is more environmentally friendly than diesel based on the global warming and non-renewable energy scores.
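The Taylor-series treatment of lognormally distributed scores admits a compact sketch. In this hedged example the squared geometric standard deviations (GSD²) and median scores are illustrative placeholders, not values from the study; assuming independent lognormal scores A and B, ln(A/B) is normal, which gives both the GSD² of the ratio and the probability that A scores lower than B:

```python
import math

# Hedged sketch: probability that scenario A scores lower than scenario B
# when both impact scores are lognormal and independent. All input numbers
# below are illustrative placeholders, not values from the study.
gsd2_A = 1.8          # squared geometric standard deviation of score A
gsd2_B = 2.1          # squared geometric standard deviation of score B
median_A, median_B = 0.55, 1.00   # best-estimate (median) impact scores

# For independent lognormals, ln(A/B) is normal with:
mu = math.log(median_A / median_B)
sigma = math.sqrt(math.log(math.sqrt(gsd2_A)) ** 2 +
                  math.log(math.sqrt(gsd2_B)) ** 2)

def phi(x):  # standard normal CDF
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# P(A < B) = P(ln(A/B) < 0) = Phi(-mu/sigma)
p_A_better = phi(-mu / sigma)
gsd2_ratio = math.exp(2.0 * sigma)   # squared GSD of the ratio A/B
```

With these placeholder inputs the degree of confidence P(A < B) comes out near 90%, which is how figures such as the study's 92.3% and 93.1% are to be read.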
Ultra-Scalable Algorithms for Large-Scale Uncertainty Quantification in Inverse Wave Propagation
2016-03-04
...compactness of the data misfit Hessian for 2D inverse shape [12] and inverse medium [13] acoustic scattering, we extended the theoretical analysis to...project are [5, 7, 9, 11, 18, 19, 21, 24–30, 32, 35–40, 42, 52, 58]. Highlights are listed below. • Foundational work on mathematical error analysis of...propagation problems included, admits many variational formulations employing different energy settings and implying ultimate convergence in
Uncertainty Propagation and the Fano-Based Information Theoretic Method: A Radar Example
2015-02-01
sources of uncertainty on system performance are characterized. Numerical computation of entropic estimates on high dimensional signature processes...law of large numbers, the asymptotic equipartition property asserts that there are large regions within the entropic signature subspace which will...address nonlinear paradigms in sensing. While not new in the literature, the unified use of the Fano equality with the Data Processing inequality in a
Singh, A.; Serbin, S. P.; Kingdon, C.; Townsend, P. A.
2013-12-01
A major goal of remote sensing, and imaging spectroscopy in particular, is the development of generalizable algorithms to repeatedly and accurately map ecosystem properties such as canopy chemistry across space and time. Existing methods must therefore be tested across a range of measurement approaches to identify and overcome limits to the consistent retrieval of such properties from spectroscopic imagery. Here we illustrate a general approach for the estimation of key foliar biochemical and morphological traits from spectroscopic imagery derived from the AVIRIS instrument and the propagation of errors from the leaf to the image scale using partial least squares regression (PLSR) techniques. Our method involves the integration of three types of data representing different scales of observation. At the image scale, the images were normalized for atmospheric, illumination and BRDF effects. Spectra from field plot locations were extracted from the 51 AVIRIS images and were averaged when the field plot was larger than a single pixel. At the plot level, the scaling was conducted using multiple replicates (1000) derived from the leaf-level uncertainty estimates to generate plot-level estimates with their associated uncertainties. Leaf-level estimates of foliar traits (%N, %C, %Fiber, %Cellulose, %Lignin, LMA) were scaled to the canopy based on the relative species composition of each plot. Image spectra were iteratively split into 50/50 randomized calibration-validation datasets and multiple (500) trait-predictive PLSR models were generated, this time sampling from within the plot-level uncertainty distribution. This allowed the propagation of uncertainty from the leaf-level dependent variables to the plot level, and finally to models built using AVIRIS image spectra. Moreover, this method allows us to generate spatially explicit maps of uncertainty in our sampled traits. Both LMA and %N PLSR models had an R2 greater than 0.8, root mean square errors (RMSEs) for both
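The ensemble error-propagation step can be sketched as follows. This is a hedged, synthetic illustration: ordinary least squares stands in for PLSR, and all data are randomly generated rather than AVIRIS spectra; the point is the repeated 50/50 calibration-validation splitting and the use of the ensemble spread as a per-pixel uncertainty:

```python
import numpy as np

# Hedged sketch of the ensemble approach described above: repeatedly split
# plot data 50/50, fit a trait-predictive model on each calibration half,
# and use the spread of ensemble predictions as a per-pixel uncertainty.
# Ordinary least squares stands in for PLSR here; the data are synthetic.
rng = np.random.default_rng(0)
n_plots, n_bands = 60, 8
X = rng.normal(size=(n_plots, n_bands))                  # plot-level spectra
beta_true = rng.normal(size=n_bands)
y = X @ beta_true + rng.normal(scale=0.1, size=n_plots)  # trait (e.g. %N)

pixel = rng.normal(size=n_bands)                         # one image spectrum
preds = []
for _ in range(500):                                     # 500 ensemble models
    idx = rng.permutation(n_plots)
    cal = idx[: n_plots // 2]                            # 50% calibration split
    coef, *_ = np.linalg.lstsq(X[cal], y[cal], rcond=None)
    preds.append(pixel @ coef)

pred_mean = float(np.mean(preds))                        # trait estimate
pred_sd = float(np.std(preds))                           # propagated uncertainty
```

Applied to every pixel, pred_sd yields the spatially explicit uncertainty map mentioned in the abstract.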
Directory of Open Access Journals (Sweden)
Elvis Joacir de França
2006-01-01
Instrumental neutron activation analysis (INAA) is a measurement technique of high metrological level for the determination of chemical elements. In the context of the BIOTA/FAPESP Program, leaves of trees have been evaluated by INAA for biomonitoring purposes in the Atlantic Forest. To assure the comparability of results in environmental studies, a leaf sample of Marlierea tomentosa (Myrtaceae family) showing the lowest concentrations of chemical elements was selected for the evaluation of the analytical quality of the determination under unfavorable conditions. The homogeneity of the chemical concentrations in the sample was confirmed at the 95% confidence level, and INAA showed a repeatability of 2% for the determination of Br, Co, Cs, Fe, K, Na, Rb and Sr, although the uncertainty may have been overestimated. For the evaluation of uncertainty due to the variability of chemical concentrations in the sample, Jackknife and Bootstrap methods were used to estimate the maximum expected percent standard deviation. The uncertainty budget was considered adequate for reporting chemical concentrations of environmental samples determined by INAA.
Alhassan, Erwin; Duan, Junfeng; Gustavsson, Cecilia; Koning, Arjan; Pomp, Stephan; Rochman, Dimitri; Österlund, Michael
2013-01-01
Analyses are carried out to assess the impact of nuclear data uncertainties on keff for the European Lead Cooled Training Reactor (ELECTRA) using the Total Monte Carlo method. A large number of Pu-239 random ENDF-formatted libraries generated using the TALYS based system were processed into ACE format with the NJOY99.336 code and used as input into the Serpent Monte Carlo neutron transport code to obtain a distribution in keff. The keff distribution obtained was compared with the latest major nuclear data libraries - JEFF-3.1.2, ENDF/B-VII.1 and JENDL-4.0. A method is proposed for the selection of benchmarks for specific applications using the Total Monte Carlo approach. Finally, an accept/reject criterion was investigated based on chi-square values obtained using the Pu-239 Jezebel criticality benchmark. It was observed that nuclear data uncertainties in keff were reduced considerably from 748 to 443 pcm by applying a more rigid acceptance criterion for accepting random files.
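The accept/reject idea can be sketched with synthetic numbers. Everything below is illustrative (the chi-square values, the keff model, and the 30% cut are assumptions, not the study's data); it only mimics the reported effect that a more rigid acceptance criterion narrows the keff distribution:

```python
import numpy as np

# Hedged sketch of the accept/reject idea above: each random nuclear-data
# file yields one keff value and one chi-square against a criticality
# benchmark; tightening the chi-square acceptance cut narrows the keff
# spread. Numbers are synthetic, chosen only to mimic the reported effect.
rng = np.random.default_rng(1)
n_files = 2000
chi2 = rng.chisquare(df=3, size=n_files)            # benchmark agreement
# files that agree with the benchmark are assumed to scatter less in keff:
keff = 1.0 + rng.normal(size=n_files) * (300e-5 + 150e-5 * chi2 / 3)

spread_all = float(np.std(keff)) * 1e5              # pcm, no cut
accepted = chi2 < np.percentile(chi2, 30)           # rigid acceptance cut
spread_cut = float(np.std(keff[accepted])) * 1e5    # pcm, after cut
```

spread_cut comes out well below spread_all, the same qualitative behaviour as the 748-to-443 pcm reduction reported above.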
Gates, Robert L
2015-01-01
This work proposes a scheme for significantly reducing the computational complexity of discretized problems involving the non-smooth forward propagation of uncertainty by combining the adaptive hierarchical sparse grid stochastic collocation method (ALSGC) with a hierarchy of successively finer spatial discretizations (e.g. finite elements) of the underlying deterministic problem. To achieve this, we build strongly upon ideas from the Multilevel Monte Carlo method (MLMC), which represents a well-established technique for the reduction of computational complexity in problems affected by both deterministic and stochastic error contributions. The resulting approach is termed the Multilevel Adaptive Sparse Grid Collocation (MLASGC) method. Preliminary results for a low-dimensional, non-smooth parametric ODE problem are promising: the proposed MLASGC method exhibits an error/cost-relation of $\varepsilon \sim t^{-0.95}$ and therefore significantly outperforms the single-level ALSGC ($\varepsilon \sim t^{-0.65}$) a...
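The multilevel idea that MLASGC builds on can be sketched with a toy problem. Here u_level is a hypothetical "solver" whose discretization error decays with level; the telescoping-sum estimator spends many cheap coarse samples and few expensive fine ones, which is the source of the cost reduction:

```python
import numpy as np

# Hedged sketch of the multilevel principle behind MLMC/MLASGC: estimate
# E[u(xi)] by a telescoping sum over discretization levels. u_level is a
# toy "solver" returning the exact value plus an O(h) discretization error.
rng = np.random.default_rng(2)

def u_level(xi, level):
    h = 2.0 ** (-level)                 # mesh size at this level
    return np.sin(xi) + h * xi          # exact value plus O(h) error

levels, samples_per_level = 4, [4000, 1000, 250, 60]
estimate = 0.0
for l in range(levels):
    xi = rng.uniform(0.0, 1.0, size=samples_per_level[l])
    if l == 0:
        estimate += float(np.mean(u_level(xi, 0)))
    else:                               # correction term E[u_l - u_{l-1}],
        # evaluated with the SAME samples on both levels (coupling)
        estimate += float(np.mean(u_level(xi, l) - u_level(xi, l - 1)))

# exact target E[u_3] = 1 - cos(1) + (1/8)*(1/2), about 0.5222
```

The corrections shrink with level, so ever fewer samples are needed on the expensive fine levels; MLASGC replaces the per-level Monte Carlo averages with adaptive sparse-grid collocation.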
Bilionis, Ilias; Gonzalez, Marcial
2016-01-01
The prohibitive cost of performing Uncertainty Quantification (UQ) tasks with a very large number of input parameters can be addressed, if the response exhibits some special structure that can be discovered and exploited. Several physical responses exhibit a special structure known as an active subspace (AS), a linear manifold of the stochastic space characterized by maximal response variation. The idea is that one should first identify this low dimensional manifold, project the high-dimensional input onto it, and then link the projection to the output. In this work, we develop a probabilistic version of AS which is gradient-free and robust to observational noise. Our approach relies on a novel Gaussian process regression with built-in dimensionality reduction with the AS represented as an orthogonal projection matrix that serves as yet another covariance function hyper-parameter to be estimated from the data. To train the model, we design a two-step maximum likelihood optimization procedure that ensures the ...
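For contrast with the gradient-free probabilistic version proposed above, the classical gradient-based active-subspace construction is easy to sketch: average the outer product of sampled gradients and take the dominant eigenvectors. The toy response below varies only along one hidden direction w, so a one-dimensional AS is recovered; this is the baseline method, not the paper's Gaussian-process variant:

```python
import numpy as np

# Hedged sketch of the classical, gradient-based active-subspace (AS)
# construction that the probabilistic method above generalizes: average
# the outer product of gradients and take dominant eigenvectors. The toy
# response f varies only along one direction w, so the AS is 1-D.
rng = np.random.default_rng(3)
dim = 10
w = np.ones(dim) / np.sqrt(dim)                 # hidden active direction

def grad_f(x):                                  # f(x) = sin(w.x), analytic grad
    return np.cos(w @ x) * w

X = rng.normal(size=(200, dim))
C = np.mean([np.outer(grad_f(x), grad_f(x)) for x in X], axis=0)
eigvals, eigvecs = np.linalg.eigh(C)            # ascending eigenvalue order
w_hat = eigvecs[:, -1]                          # dominant eigenvector

alignment = float(abs(w_hat @ w))               # ~1.0 if the AS is recovered
```

The gradient requirement is exactly what the probabilistic, gradient-free formulation above removes.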
Masaki, Y.; Hanasaki, N.; Takahashi, K.; Hijioka, Y.
2014-12-01
Future water availability is one of the key social issues under ongoing climate change. Since 70% of human water abstraction is for irrigational use, accurate evaluation of irrigational water is highly desirable for reliable estimation of water demand. However, recent studies on future hydrological environments projected by different impact models showed that there are substantial differences in their results between the models. A large part of the differences is considered to be attributable to the different calculation schemes implemented by different impact models and the input data driving the models. To obtain more reliable future projections, it is crucial to identify possible sources of these differences in both the meteorological data sets and the impact models. We investigated possible sources of uncertainty in the quantitative evaluation of irrigational water with a global hydrological model, H08. Since irrigational water is primarily lost from croplands via evapotranspiration, we focus on uncertainties in data of the atmospheric humidity, a major determinant of evapotranspiration, generated by general circulation models (GCMs). In fact, although great attention has been paid to GCM biases in temperature and precipitation in most climate impact studies, less attention has been paid to GCM biases in humidity. To evaluate the propagation of uncertainty in humidity data into irrigational water withdrawal, we used bias-corrected meteorological data, for all meteorological elements but the humidity, of five GCMs (GFDL-ESM2M, HadGEM2-ES, IPSL-CM5A-LR, MIROC-ESM-CHEM and NorESM1-M). We detected differences of over 10% RH between the GCMs in monthly humidity data averaged over irrigated croplands over the world. Estimation of annual global irrigational water withdrawal from the five GCMs ranges from 1,218 to 1,341 km3/yr. We performed a sensitivity analysis by adding hypothetical humidity biases of 5% RH to the GCM humidity data; annual global irrigational water withdrawal would be changed
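The humidity-bias sensitivity experiment can be sketched with a toy evapotranspiration proxy. This is a hedged illustration, not H08: a Magnus-form saturation vapour pressure and a vapour-pressure-deficit (VPD) proxy for irrigation demand, with hypothetical 5% RH biases applied in both directions:

```python
import math

# Hedged sketch of the 5% RH sensitivity experiment described above, using
# a vapour-pressure-deficit (VPD) proxy for evapotranspirative demand. The
# saturation-pressure formula is the common Magnus form; the baseline
# temperature/humidity values are illustrative, not H08 output.
def e_sat(t_c):                       # Magnus saturation vapour pressure, kPa
    return 0.6108 * math.exp(17.27 * t_c / (t_c + 237.3))

t_c, rh = 28.0, 60.0                  # cropland temperature and humidity
base_vpd = e_sat(t_c) * (1.0 - rh / 100.0)

# demand scales with VPD in this proxy, so a humidity bias maps directly
# onto a relative bias in irrigational water demand:
demand_change = {}
for bias in (-5.0, +5.0):             # hypothetical GCM humidity bias, %RH
    vpd = e_sat(t_c) * (1.0 - (rh + bias) / 100.0)
    demand_change[bias] = 100.0 * (vpd / base_vpd - 1.0)
```

In this proxy a dry bias of 5% RH at 60% RH inflates demand by 12.5%, showing why humidity biases of the 10% RH scale detected between GCMs matter for withdrawal estimates.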
Gosset, M.; Roca, R.
2012-04-01
The use of satellite-based rainfall in research or operational hydrological applications is becoming more and more frequent. This is especially true in the Tropics, where ground-based gage (or radar) networks are generally scarce and degrading. The new French-Indian satellite mission Megha-Tropiques (MT), dedicated to the water and energy budget in the tropical atmosphere, will contribute to better monitoring of rainfall in the inter-tropical zone. As part of this mission, research is developed on the use of MT rainfall products for hydrological research or operational applications such as flood monitoring. A key issue for such applications is how to account for rainfall product biases and uncertainties, and how to propagate them into the end-user models. Another important question is how to choose the best space-time resolution for the rainfall forcing, given that both model performance and rain-product uncertainties are resolution dependent. This talk will present ongoing investigations and perspectives on this subject, with examples from the Megha-Tropiques ground validation sites. Several sensitivity studies have been carried out in the Oueme Basin in Benin, West Africa, one of the instrumented basins that will be used for MT products' direct and hydrological validation.
Schippers, P.; Volker, A.W.F.; Golliard, J.; Jong, C. de
2006-01-01
Propagation and sonar performance are modelled by TNO's ALMOST program, which has been under development since the 1980s. It models propagation between sonar and target based on ray theory, including effects of sediment bottoms, reverberation and ambient noise. Moreover, antenna directivity (beam forming)
Applied Analytical Methods for Solving Some Problems of Wave Propagation in the Coastal Areas
Gagoshidze, Shalva; Kodua, Manoni
2016-04-01
Analytical methods, easy to apply, are proposed for the solution of the following four classical problems of coastline hydromechanics: 1. Refraction of waves on coast slopes of arbitrary steepness; 2. Wave propagation in tapering water areas; 3. Longitudinal waves in open channels; 4. Long waves on uniform and non-uniform flows of water. The first three of these problems are solved by the direct Galerkin-Kantorovich method with a choice of basis functions which completely satisfy all boundary conditions. This approach leads to new evolutionary equations which can be solved asymptotically by the WKB method. The WKB solution of the first problem enables us to easily determine the three-dimensional field of velocities and to construct the refraction picture of the wave surface near a coast having an arbitrary angle of slope to the horizon varying from 0° to 180°. This solution, in particular for a vertical cliff, fully agrees with Stoker's particular but difficult solution. Moreover, it is shown for the first time that our Schrödinger-type evolutionary equation leads to the formation of so-called "potential wells" if the angle of the coast slope to the horizon exceeds 45°, while the angle given at infinity (i.e., at a large distance from the shore) between the wave crests and the coastline exceeds 75°. This theoretical result, expressed in terms of elementary functions, is consistent with experimental observations and with many aerial photographs of waves in the coastal zones of the oceans [1,2]. For the second problem we introduce the notions of "wide" and "narrow" water areas. It is shown that Green's law on wave height growth holds only for the narrow part of the water area, whereas in the wide part the tapering of the water area leads to an insignificant decrease of the wave height. For the third problem, the bank slopes of trapezoidal channels are assumed to have an arbitrary angle of steepness. So far we have known the
Stolarski, R. S.; Douglass, A. R.
1986-01-01
Models of stratospheric photochemistry are generally tested by comparing their predictions for the composition of the present atmosphere with measurements of species concentrations. These models are then used to make predictions of the atmospheric sensitivity to perturbations. Here the problem of the sensitivity of such a model to chlorine perturbations ranging from the present influx of chlorine-containing compounds to several times that influx is addressed. The effects of uncertainties in input parameters, including reaction rate coefficients, cross sections, solar fluxes, and boundary conditions, are evaluated using a Monte Carlo method in which the values of the input parameters are randomly selected. The results are probability distributions for present atmospheric concentrations and for calculated perturbations due to chlorine from fluorocarbons. For more than 300 Monte Carlo runs the calculated ozone perturbation for continued emission of fluorocarbons at today's rates had a mean value of -6.2 percent, with a 1-sigma width of 5.5 percent. Using the same runs but only allowing the cases in which the calculated present-atmosphere values of NO, NO2, and ClO at 25 km altitude fell within the range of measurements yielded a mean ozone depletion of -3 percent, with a 1-sigma deviation of 2.2 percent. The model showed a nonlinear behavior as a function of added fluorocarbons. The mean of the Monte Carlo runs was less nonlinear than the model run using the mean values of the input parameters.
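The constrained Monte Carlo procedure can be sketched as follows. The toy model below is an assumption made for illustration (a single lumped rate-coefficient error driving both outputs, with the reported -6.2%/5.5% figures reused as toy coefficients); it shows how filtering runs by a present-atmosphere measurement range narrows the perturbation spread:

```python
import numpy as np

# Hedged sketch of the constrained Monte Carlo procedure above: draw model
# inputs at random, keep only runs whose simulated "present atmosphere"
# falls inside a measurement range, and compare perturbation statistics
# before and after filtering. The toy model links the two outputs so that
# the observational constraint narrows the prediction, as in the study.
rng = np.random.default_rng(4)
n_runs = 3000
k = rng.normal(0.0, 1.0, size=n_runs)            # lumped rate-coefficient error

present_no2 = 1.0 + 0.3 * k + rng.normal(0.0, 0.05, size=n_runs)
ozone_change = -6.2 - 5.5 * k                    # % ozone perturbation

ok = np.abs(present_no2 - 1.0) < 0.2             # "within measurement range"
mean_all, sd_all = float(np.mean(ozone_change)), float(np.std(ozone_change))
mean_ok, sd_ok = (float(np.mean(ozone_change[ok])),
                  float(np.std(ozone_change[ok])))
```

Conditioning on the measurement range roughly halves the 1-sigma spread in this toy, the same qualitative effect as the 5.5% to 2.2% reduction reported above.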
Mani, Ali; Zangle, Thomas A; Santiago, Juan G
2009-04-01
We develop two models to describe ion transport in variable-height micro- and nanochannels. For the first model, we obtain a one-dimensional (unsteady) partial differential equation governing flow and charge transport through a shallow and wide electrokinetic channel. In this model, the effects of electric double layer (EDL) on axial transport are taken into account using exact solutions of the Poisson-Boltzmann equation. The second simpler model, which is approachable analytically, assumes that the EDLs are confined to near-wall regions. Using a characteristics analysis, we show that the latter model captures concentration polarization (CP) effects and provides useful insight into its dynamics. Two distinct CP regimes are identified: CP with propagation in which enrichment and depletion shocks propagate outward, and CP without propagation where polarization effects stay local to micro- nanochannel interfaces. The existence of each regime is found to depend on a nanochannel Dukhin number and mobility of the co-ion nondimensionalized by electroosmotic mobility. Interestingly, microchannel dimensions and axial diffusion are found to play an insignificant role in determining whether CP propagates. The steady state condition of propagating CP is shown to be controlled by channel heights, surface chemistry, and co-ion mobility instead of the reservoir condition. Both models are validated against experimental results in Part II of this two-paper series.
Muthu, Satish; Childress, Amy; Brant, Jonathan
2014-08-15
Membrane fouling is assessed from a fundamental standpoint within the context of the Derjaguin-Landau-Verwey-Overbeek (DLVO) model. The DLVO model requires that the properties of the membrane and foulant(s) be quantified. Membrane surface charge (zeta potential) and free energy values are characterized using streaming potential and contact angle measurements, respectively. Comparing theoretical assessments of membrane-colloid interactions between research groups requires that the variability of the measured inputs be established. The impact of such variability in input values on the outcome of interfacial models must be quantified to determine an acceptable variance in inputs. An interlaboratory study was conducted to quantify the variability in streaming potential and contact angle measurements when using standard protocols. The propagation of uncertainty from these errors was evaluated in terms of their impact on the quantitative and qualitative conclusions on extended DLVO (XDLVO) calculated interaction terms. The error introduced into XDLVO calculated values was of the same magnitude as the calculated free energy values at contact and at any given separation distance. For two independent laboratories to draw similar quantitative conclusions regarding membrane-foulant interfacial interactions, the standard error in contact angle values must be ⩽2.5°, while that for the zeta potential values must be ⩽7 mV.
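First-order propagation of the measurement errors through an interfacial model can be sketched generically. The function g below is a stand-in for an XDLVO energy term, not the actual model, and the measured means are illustrative; the 2.5° and 7 mV standard errors from the study serve as the input uncertainties:

```python
import math

# Hedged sketch of first-order uncertainty propagation for an interfacial
# quantity computed from measured inputs (contact angle theta in degrees,
# zeta potential in mV). g() is a toy stand-in for an XDLVO energy term,
# not the actual model; the propagation formula is the standard one.
def g(theta_deg, zeta_mV):
    # toy interaction term combining a polar (cos theta) and an
    # electrostatic (zeta^2) contribution, arbitrary units
    return -50.0 * math.cos(math.radians(theta_deg)) + 0.02 * zeta_mV ** 2

theta, zeta = 65.0, -20.0             # illustrative measured means
s_theta, s_zeta = 2.5, 7.0            # interlaboratory standard errors

h = 1e-5                              # step for central finite differences
dg_dtheta = (g(theta + h, zeta) - g(theta - h, zeta)) / (2 * h)
dg_dzeta = (g(theta, zeta + h) - g(theta, zeta - h)) / (2 * h)

# first-order (Gaussian) error propagation, inputs assumed independent:
s_g = math.sqrt((dg_dtheta * s_theta) ** 2 + (dg_dzeta * s_zeta) ** 2)
```

Comparing s_g to the magnitude of g itself is exactly the check that led to the ⩽2.5° and ⩽7 mV requirements above.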
Ortman, Robert L.; Carr, Domenic A.; James, Ryan; Long, Daniel; O'Shaughnessy, Matthew R.; Valenta, Christopher R.; Tuell, Grady H.
2016-05-01
We have developed a prototype real-time computer for a bathymetric lidar capable of producing point clouds attributed with total propagated uncertainty (TPU). This real-time computer employs a "mixed-mode" architecture comprised of an FPGA, CPU, and GPU. Noise reduction and ranging are performed in the digitizer's user-programmable FPGA, and coordinates and TPU are calculated on the GPU. A Keysight M9703A digitizer with user-programmable Xilinx Virtex 6 FPGAs digitizes as many as eight channels of lidar data, performs ranging, and delivers the data to the CPU via PCIe. The floating-point-intensive coordinate and TPU calculations are performed on an NVIDIA Tesla K20 GPU. Raw data and computed products are written to an SSD RAID, and an attributed point cloud is displayed to the user. This prototype computer has been tested using 7m-deep waveforms measured at a water tank on the Georgia Tech campus, and with simulated waveforms to a depth of 20m. Preliminary results show the system can compute, store, and display about 20 million points per second.
DEFF Research Database (Denmark)
Jurado-Navas, Antonio
2015-01-01
Recently, a new and generalized statistical model, called Málaga or simply M distribution, has been proposed to characterize the irradiance fluctuations of an unbounded optical wavefront (plane and spherical waves) propagating through a turbulent medium under all irradiance fluctuation conditions...
Analytical approach of laser beam propagation in the hollow polygonal light pipe.
Zhu, Guangzhi; Zhu, Xiao; Zhu, Changhong
2013-08-10
An analytical method is developed for studying the light distribution properties at the output end of a hollow n-sided polygonal light pipe illuminated by a light source with a Gaussian distribution. The mirror transformation matrices and a special algorithm for removing void virtual images are created to acquire the location and direction vector of each effective virtual image on the entrance plane. The analytical method is validated by Monte Carlo ray tracing. At the same time, four typical cases are discussed. The analytical results indicate that the uniformity of the light distribution varies with the structural and optical parameters of the hollow n-sided polygonal light pipe and of the Gaussian light source. The analytical approach will be useful for designing and choosing the hollow n-sided polygonal light pipe, especially for high-power laser beam homogenization techniques.
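The mirror-transformation idea can be sketched in two dimensions. In the unfolded picture each wall of the polygonal pipe maps the source to a virtual image by reflection across the wall's line; the hexagon geometry and wall placement below are simplifying assumptions (walls through the origin rather than offset by the apothem):

```python
import math

# Hedged sketch of the mirror-transformation idea above: each wall of the
# polygonal pipe maps the source to a virtual image by reflection across
# the wall's line. reflect() applies that 2x2 Householder map for a wall
# line through the origin with unit normal n.
def reflect(point, n):
    px, py = point
    d = px * n[0] + py * n[1]                    # signed distance to wall line
    return (px - 2.0 * d * n[0], py - 2.0 * d * n[1])

# hexagonal pipe: wall normals at 60-degree spacing (walls pass through the
# origin here for simplicity; a real pipe offsets each wall by the apothem)
source = (0.3, 0.1)
images = [reflect(source, (math.cos(math.pi / 3.0 * k),
                           math.sin(math.pi / 3.0 * k)))
          for k in range(6)]

# reflection is an isometry fixing the origin, so every virtual image
# keeps the source's distance from the origin:
r0 = math.hypot(*source)
radii = [math.hypot(*p) for p in images]
```

Chaining such reflections generates the higher-order virtual images, and the paper's "void image" algorithm then discards those not actually reachable by a ray.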
Energy Technology Data Exchange (ETDEWEB)
Segre, S.E. [Rome Univ. 2. Tor Vergata, Rome (Italy). Istituto Nazionale Fisica della Materia, Dipartimento di Fisica
2001-07-01
The known analytic expressions for the evolution of the polarization of electromagnetic waves propagating in a plasma with uniformly sheared magnetic field are extended to the case where the shear is not constant. Exact analytic expressions are found for the case when the space variations of the medium are such that the magnetic field components and the plasma density satisfy a particular condition (eq. 13), possibly in a convenient reference frame of polarization space.
Brault, Antoine; Dumas, Laurent; Lucor, Didier
2016-12-10
This work aims at quantifying the effect of inherent uncertainties from cardiac output on the sensitivity of a human compliant arterial network response based on stochastic simulations of a reduced-order pulse wave propagation model. A simple pulsatile output form is used to reproduce the most relevant cardiac features with a minimum number of parameters associated with left ventricle dynamics. Another source of significant uncertainty is the spatial heterogeneity of the aortic compliance, which plays a key role in the propagation and damping of pulse waves generated at each cardiac cycle. A continuous representation of the aortic stiffness in the form of a generic random field of prescribed spatial correlation is then considered. Making use of a stochastic sparse pseudospectral method, we investigate the sensitivity of the pulse pressure and wave reflection magnitude over the arterial tree with respect to the different model uncertainties. Results indicate that uncertainties related to the shape and magnitude of the prescribed inlet flow in the proximal aorta can lead to pronounced variation of both the mean value and standard deviation of blood flow velocity and pressure dynamics due to the interaction of different wave propagation and reflection features. Lack of accurate knowledge of the stiffness properties of the aorta, resulting in uncertainty in the pulse wave velocity in that region, strongly modifies the statistical response, with a global increase in the variability of the quantities of interest and a spatial redistribution of the regions of higher sensitivity. These results will provide some guidance for clinical data acquisition and future coupling of reduced-order arterial pulse wave propagation models with more complex beating heart models.
An Analytic Solution to the Propagation of Cylindrical Blast Waves in a Radiative Gas
Directory of Open Access Journals (Sweden)
B.G Verma
1977-01-01
In this paper, we have obtained a set of non-similarity solutions in closed form for the propagation of a cylindrical blast wave in a radiative gas. An explosion in a gas of constant density and pressure has been considered by assuming the existence of an initial uniform magnetic field in the axial direction. The disturbance is supposed to be headed by a shock surface of variable strength, and the total energy of the wave varies with time.
Accounting for the analytical properties of the quark propagator from Dyson-Schwinger equation
Dorkin, S M; Kampfer, B
2014-01-01
An approach based on combined solutions of the Bethe-Salpeter (BS) and Dyson-Schwinger (DS) equations within the ladder-rainbow approximation in the presence of singularities is proposed to describe the meson spectrum as quark-antiquark bound states. We consistently implement into the BS equation the quark propagator functions from the DS equation, with and without pole-like singularities, and show that, by knowing the precise positions of the poles and their residues, one is able to develop reliable methods of obtaining finite interaction BS kernels and to solve the BS equation numerically. We show that, for bound states with masses $M > 1$ GeV, the propagator functions reveal pole-like structures. Consequently, for each type of meson (unflavored, strange and charmed) we analyze the relevant intervals of $M$ where the pole-like singularities of the corresponding quark propagator influence the solution of the BS equation and develop a framework within which they can be consistently accounted for. The...
Yang, Yang; Zhang, Lixiang; Lim, C. W.
2011-04-01
This paper is concerned with the characteristics of wave propagation in double-walled carbon nanotubes (DWCNTs). The DWCNTs are simulated with a Timoshenko beam model based on the nonlocal continuum elasticity theory, referred to as an analytically nonlocal Timoshenko-beam (ANT) model. The governing equations of the DWCNT beam consist of a set of four equations derived from the variational principle of the beam with high-order boundary conditions at both ends, in which the effects of the nano-scale nonlocality and the van der Waals interaction between inner and outer tubes are included. The characteristics of wave propagation in the DWCNT beam were analyzed with the new ANT model proposed, and detailed comparisons were made with the partially nonlocal Timoshenko-beam (PNT) models in the literature. The results show that the nonlocal effects of the ANT model proposed in the present study on the wave propagations are more significant because it predicts a stronger stiffness enhancement of the DWCNT beam.
Romanofsky, Robert R.
1989-01-01
In this report, a thorough analytical procedure is developed for evaluating the frequency-dependent loss characteristics and effective permittivity of microstrip lines. The technique is based on the measured reflection coefficient of microstrip resonator pairs. Experimental data, including quality factor Q, effective relative permittivity, and fringing for 50-omega lines on gallium arsenide (GaAs) from 26.5 to 40.0 GHz, are presented. The effects of an imperfect open circuit, coupling losses, and loading of the resonant frequency are considered. A cosine-tapered ridge-guide test fixture is described. It was found to be well suited to the device characterization.
Rouze, Ned C; Palmeri, Mark L; Nightingale, Kathryn R
2015-08-01
Recent measurements of shear wave propagation in viscoelastic materials have been analyzed by constructing the two-dimensional Fourier transform (2D-FT) of the spatial-temporal shear wave signal and using an analysis procedure derived under the assumption that the wave is described as a plane wave, or as the asymptotic form of a wave expanding radially from a cylindrically symmetric source. This study presents an exact, analytic expression for the 2D-FT description of shear wave propagation in viscoelastic materials following asymmetric Gaussian excitations and uses this expression to evaluate the bias in 2D-FT measurements obtained using the plane or cylindrical wave assumptions. A wide range of biases are observed depending on specific values of frequency, aspect ratio R of the source asymmetry, and material properties. These biases can be reduced significantly by weighting the shear wave signal in the spatial domain to correct for the geometric spreading of the shear wavefront using a factor of x^p. The optimal weighting power p is found to be near the theoretical value of 0.5 for the case of a cylindrical source with R = 1, and decreases for asymmetric sources with R > 1.
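The 2D-FT analysis can be sketched on a synthetic plane wave. For u(x,t) = cos(kx - wt) the 2D Fourier magnitude peaks at (k, w) and the phase speed is recovered as c = w/k; real data would first be weighted by x^p to compensate geometric spreading, as described above. The grid sizes and wave parameters are chosen for illustration only:

```python
import numpy as np

# Hedged sketch of the 2D-FT analysis above: for a synthetic plane shear
# wave u(x,t) = cos(k x - w t), the 2D Fourier magnitude peaks at (k, w),
# so the phase speed is recovered as c = w / k.
nx, nt = 64, 100
dx, dt = 1e-3, 1e-4                       # 1 mm, 0.1 ms sampling
k_true = 2 * np.pi * 250.0                # 250 cycles/m
w_true = 2 * np.pi * 500.0                # 500 Hz

x = np.arange(nx) * dx
t = np.arange(nt) * dt
u = np.cos(k_true * x[:, None] - w_true * t[None, :])

U = np.abs(np.fft.fft2(u))
kx = np.fft.fftfreq(nx, dx) * 2 * np.pi   # spatial angular frequencies
wt = np.fft.fftfreq(nt, dt) * 2 * np.pi   # temporal angular frequencies

i, j = np.unravel_index(np.argmax(U), U.shape)
c_est = abs(wt[j] / kx[i])                # estimated phase speed, m/s
c_true = w_true / k_true                  # 2.0 m/s for these parameters
```

In a viscoelastic material the peak locus traces out the dispersion curve c(w), and the bias analyzed in the paper arises when the plane-wave assumption behind this readout is violated by a spreading, asymmetric source.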
Orbit propagation using semi-analytical theory and its applications in space debris field
Dutt, Pooja; Anilkumar, A. K.
2017-02-01
Lifetime estimation of space objects is very important for space debris related studies, including mitigation studies and manoeuvre design. It is essential to have a fast and accurate lifetime prediction tool for studies related to the long-term evolution of the space debris environment. This paper presents the details of Orbit Prediction using Semi-Analytic Theory (OPSAT), used for lifetime estimation of space objects. It uses the BFGS quasi-Newton algorithm to minimize the least-squares error on the apogee and perigee altitudes of a given TLE set in order to estimate the ballistic coefficient (BC). This BC is used for future orbit prediction. OPSAT is evaluated for long-term and short-term orbit prediction using TLE data. It has been used for identification of potential candidates for active debris removal (ADR) and for future projection of the space debris environment with ADR.
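The BC-estimation step described in this record can be sketched as a least-squares fit driven by a quasi-Newton minimizer. The decay model, parameter values, and function names below are illustrative assumptions, not the OPSAT propagator; scipy's BFGS implementation stands in for the code's own algorithm.

```python
import numpy as np
from scipy.optimize import minimize

# Toy decay model (NOT the OPSAT semi-analytic theory): apogee and perigee
# altitudes shrink at rates proportional to the ballistic coefficient BC.
def propagate(bc, t, h_apo0=800.0, h_per0=780.0, k=0.05):
    return h_apo0 - k * bc * t, h_per0 - 0.8 * k * bc * t

def bc_cost(bc, t, apo_obs, per_obs):
    """Least-squares error on apogee and perigee altitudes (km)."""
    apo, per = propagate(bc[0], t)
    return np.sum((apo - apo_obs) ** 2 + (per - per_obs) ** 2)

# Synthetic "TLE-derived" altitudes generated with a true BC of 120
t = np.linspace(0.0, 365.0, 40)
apo_obs, per_obs = propagate(120.0, t)

res = minimize(bc_cost, x0=[50.0], args=(t, apo_obs, per_obs), method="BFGS")
bc_est = res.x[0]
print(round(bc_est, 2))  # → 120.0
```

The estimated BC would then be fed back into the propagator for future orbit prediction.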
Post van der Burg, Max; Cullinane Thomas, Catherine; Holcombe, Tracy R.; Nelson, Richard D.
2016-01-01
The Landscape Conservation Cooperatives (LCCs) are a network of partnerships throughout North America that are tasked with integrating science and management to support more effective delivery of conservation at a landscape scale. In order to achieve this integration, some LCCs have adopted the approach of providing their partners with better scientific information in an effort to facilitate more effective and coordinated conservation decisions. Taking this approach has led many LCCs to begin funding research to provide the information for improved decision making. To ensure that funding goes to research projects with the highest likelihood of leading to more integrated broad-scale conservation, some LCCs have also developed approaches for prioritizing which information needs will be of most benefit to their partnerships. We describe two case studies in which decision analytic tools were used to quantitatively assess the relative importance of information for decisions made by partners in the Plains and Prairie Potholes LCC. The results of the case studies point toward a few valuable lessons in terms of using these tools with LCCs. Decision analytic tools tend to help shift focus away from research-oriented discussions and toward discussions about how information is used in making better decisions. However, many technical experts do not have enough knowledge about decision-making contexts to fully inform the latter type of discussion. When assessed in the right decision context, however, decision analyses can point out where uncertainties actually affect optimal decisions and where they do not. This helps technical experts understand that not all research is valuable in improving decision making. But perhaps most importantly, our results suggest that decision analytic tools may be more useful for LCCs as a way of developing integrated objectives for coordinating partner decisions across the landscape, rather than simply ranking research priorities.
Directory of Open Access Journals (Sweden)
B. Scherllin-Pirscher
2011-05-01
Due to the measurement principle of the radio occultation (RO) technique, RO data are highly suitable for climate studies. Single RO profiles can be used to build climatological fields of different atmospheric parameters such as bending angle, refractivity, density, pressure, geopotential height, and temperature. RO climatologies are affected by random (statistical) errors, sampling errors, and systematic errors, yielding a total climatological error. Based on empirical error estimates, we provide a simple analytical error model for these error components, which accounts for vertical, latitudinal, and seasonal variations. The vertical structure of each error component is modeled as constant around the tropopause region; above this region the error increases exponentially, while below it the increase follows an inverse height power-law. The statistical error strongly depends on the number of measurements and is found to be the smallest error component for monthly mean 10° zonal mean climatologies with more than 600 measurements per bin. Owing to the smaller atmospheric variability, the sampling error is found to be smallest at low latitudes equatorwards of 40°. Beyond 40° this error increases roughly linearly, with a stronger increase in hemispheric winter than in hemispheric summer; the sampling error model accounts for this hemispheric asymmetry. We recommend subtracting the sampling error when using RO climatologies for climate research, since the residual sampling error remaining after such subtraction is estimated to be 50 % of the sampling error for bending angle and 30 % or less for the other atmospheric parameters. The systematic error accounts for potential residual biases in the measurements as well as in the retrieval process and generally dominates the total climatological error. Overall, the total error in monthly means is estimated to be smaller than 0.07 % in refractivity and 0.15 K in temperature at low to mid latitudes, increasing towards
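The vertical error structure described in this record (constant around the tropopause, exponential growth above, inverse height power-law below) can be sketched as a simple piecewise function. All numerical values (layer boundaries, scale height, power-law exponent, base error) are illustrative placeholders, not the fitted parameters of the published model.

```python
import numpy as np

# Piecewise vertical error model: constant in the tropopause region,
# exponential increase above it, inverse height power-law below it.
def error_profile(z, e0=0.1, z_bot=8.0, z_top=18.0, h_scale=8.0, p=1.5):
    z = np.asarray(z, dtype=float)
    err = np.full_like(z, e0)                          # tropopause region
    above = z > z_top
    err[above] = e0 * np.exp((z[above] - z_top) / h_scale)
    below = z < z_bot
    err[below] = e0 * (z_bot / z[below]) ** p
    return err

z = np.array([4.0, 12.0, 30.0])   # altitudes in km
print(error_profile(z))
```

Latitudinal and seasonal dependence would be added as multiplicative factors on top of this vertical shape.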
Dalarsson, Mariana; 10.1364/OE.17.006747
2012-01-01
We have investigated the transmission and reflection properties of structures incorporating left-handed materials with graded index of refraction. We present an exact analytical solution to Helmholtz' equation for a graded index profile changing according to a hyperbolic tangent function along the propagation direction. We derive expressions for the field intensity along the graded index structure, and we show excellent agreement between the analytical solution and the corresponding results obtained by accurate numerical simulations. Our model straightforwardly allows for arbitrary spectral dispersion.
Luridiana, Valentina; Aggarwal, Kanti; Bautista, Manuel; Bergemann, Maria; Delahaye, Franck; del Zanna, Giulio; Ferland, Gary; Lind, Karin; Manchado, Arturo; Mendoza, Claudio; Delgado, Adal Mesa; Díaz, Manuel Núñez; Shaw, Richard A; Wesson, Roger
2011-01-01
This workshop brought together scientists (including atomic physicists, theoretical astrophysicists and astronomers) concerned with the completeness and accuracy of atomic data for astrophysical applications. The topics covered in the workshop included the evaluation of uncertainties in atomic data, the propagation of such uncertainties in chemical abundances, and the feedback between observations and calculations. On a different level, we also discussed communication issues such as how to ensure that atomic data are correctly understood and used, and which forum is the best one for a fluid interaction between all communities involved in the production and use of atomic data. This paper reports on the discussions held during the workshop and introduces AstroAtom, a blog created as a platform for timely and open discussions on the needs and concerns over atomic data, and their effects on astronomical research. The complete proceedings will be published on http://astroatom.wordpress.com/.
DEFF Research Database (Denmark)
Plósz, Benedek; De Clercq, Jeriffa; Nopens, Ingmar;
2011-01-01
-wide model on the general model performance is evaluated. A long-term simulation of a bulking event is conducted that spans temperature evolution throughout a summer/winter sequence. The model prediction in terms of nitrogen removal, solids inventory in the bioreactors and solids retention time as a function...... results demonstrates a considerably improved 1-D model realism using the convection-dispersion model in terms of SBH, XTSS,RAS and XTSS,Eff. Third, to assess the propagation of uncertainty derived from settler model structure to the biokinetic model, the impact of the SST model as sub-model in a plant...
Alhassan, Erwin; Sjöstrand, Henrik; Duan, Junfeng; Gustavsson, Cecilia; Koning, Arjan; Pomp, Stephan; Rochman, Dimitri; Österlund, Michael
2014-01-01
Analyses are carried out to assess the impact of nuclear data uncertainties on some reactor safety parameters for the European Lead Cooled Training Reactor (ELECTRA) using the Total Monte Carlo method. A large number of Pu-239 random ENDF-format libraries, generated using the TALYS-based system, were processed into ACE format with the NJOY99.336 code and used as input to the Serpent Monte Carlo code to obtain distributions in the reactor safety parameters. The distribution in keff obtained was compar...
Directory of Open Access Journals (Sweden)
Goovaerts Pierre
2006-02-01
Abstract Background Smoothing methods have been developed to improve the reliability of cancer risk estimates from sparsely populated geographical entities. Filtering local details of the spatial variation of the risk, however, leads to the detection of larger clusters of low or high cancer risk while most spatial outliers are filtered out. Static maps of risk estimates and the associated prediction variance also fail to depict the uncertainty attached to the spatial distribution of risk values and do not allow its propagation through local cluster analysis. This paper presents a geostatistical methodology to generate multiple realizations of the spatial distribution of risk values. These maps are then fed into spatial operators, such as local cluster analysis, allowing one to assess how spatial uncertainty in the risk translates into uncertainty about the location of spatial clusters and outliers. This novel approach is applied to age-adjusted breast and pancreatic cancer mortality rates recorded for white females in 295 US counties of the Northeast (1970–1994). A public-domain executable with example datasets is provided. Results Geostatistical simulation generates risk maps that are more variable than the smooth risk map estimated by Poisson kriging and better reproduce the spatial pattern captured by the risk semivariogram model. Local cluster analysis of the set of simulated risk maps leads to a clear visualization of the lower reliability of the classification obtained for pancreatic cancer versus breast cancer: only a few counties in the large cluster of low risk detected in West Virginia and Southern Pennsylvania are significant over 90% of all simulations. On the other hand, the cluster of high breast cancer mortality in Niagara county, detected after application of Poisson kriging, appears on 60% of simulated risk maps. Sensitivity analysis shows that 500 realizations are needed to achieve a stable classification for pancreatic cancer
Alhassan, E.; Sjöstrand, H.; Duan, J.; Gustavsson, C.; Koning, A. J.; Pomp, S.; Rochman, D.; Österlund, M.
2014-04-01
Analyses are carried out to assess the impact of nuclear data uncertainties on keff for the European Lead Cooled Training Reactor (ELECTRA) using the Total Monte Carlo method. A large number of 239Pu random ENDF-formatted libraries generated using the TALYS-based system were processed into ACE format with the NJOY-99.336 code and used as input to the Serpent Monte Carlo neutron transport code to obtain a distribution in keff. The mean of the keff distribution obtained was compared with the major nuclear data libraries, JEFF-3.1.1, ENDF/B-VII.1, and JENDL-4.0. A method is proposed for the selection of benchmarks for specific applications using the Total Monte Carlo approach. Finally, an accept/reject criterion was investigated based on χ2 values obtained using the 239Pu Jezebel criticality benchmark. It was observed that nuclear data uncertainties in keff were reduced considerably, from 748 to 443 pcm, by applying a more rigid acceptance criterion for the random files.
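The benchmark-based accept/reject idea in this record can be sketched with synthetic numbers: each "random nuclear data file" yields a keff for the target system and a correlated keff for a criticality benchmark, and files whose benchmark χ2 is too large are rejected, narrowing the target keff spread. All distributions and thresholds below are invented for illustration and are not the ELECTRA/Jezebel results.

```python
import numpy as np

rng = np.random.default_rng(42)

# One keff sample per hypothetical random nuclear data file
n_files = 2000
keff_target = rng.normal(1.000, 748e-5, n_files)        # ~748 pcm spread
keff_bench = keff_target + rng.normal(0.0, 300e-5, n_files)

# Chi-squared against an assumed benchmark value and uncertainty
bench_exp, bench_unc = 1.000, 200e-5
chi2 = ((keff_bench - bench_exp) / bench_unc) ** 2

# Rigid acceptance criterion: keep only files consistent with the benchmark
accepted = keff_target[chi2 < 1.0]
print(round(keff_target.std() * 1e5), round(accepted.std() * 1e5))
```

Because the benchmark keff is correlated with the target keff through the shared nuclear data file, rejecting benchmark-inconsistent files reduces the propagated uncertainty on the target quantity.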
Chandra, Rohit
2013-01-01
An analytical model for estimating the link loss for the on-body wave propagation around the torso is presented. The model is based on the attenuation of the creeping waves over an elliptical approximation of the human torso and includes the influence of the arms. The importance of including the arms' effect for a proper estimation of the link loss is discussed. The model is validated by the full-wave electromagnetic simulations on a numerical phantom.
Directory of Open Access Journals (Sweden)
W. Wieselquist
2013-01-01
Capabilities for uncertainty quantification (UQ) with respect to nuclear data have been developed at PSI in recent years and applied to the UAM benchmark. The guiding principle for the PSI UQ development has been to implement nonintrusive "black box" UQ techniques in state-of-the-art, production-quality codes already used for routine analyses. Two complementary UQ techniques have been developed thus far: (i) direct perturbation (DP) and (ii) stochastic sampling (SS). The DP technique is, first and foremost, a robust and versatile sensitivity coefficient calculation, applicable to all types of input and output. Using standard uncertainty propagation, the sensitivity coefficients are folded with variance/covariance matrices (VCMs), leading to a local first-order UQ method. The complementary SS technique samples uncertain inputs according to their joint probability distributions and provides a global, all-order UQ method. This paper describes both DP and SS implemented in the lattice physics code CASMO-5MX (a special PSI-modified version of CASMO-5M) and a preliminary SS technique implemented in MCNPX, routinely used in criticality safety and fluence analyses. Results are presented for the UAM benchmark exercises I-1 (cell) and I-2 (assembly).
Mendoza, Beltran M.A.; Heijungs, R.; Guinée, J.B.; Tukker, A.
2016-01-01
Purpose Despite efforts to treat uncertainty due to methodological choices in life cycle assessment (LCA), such as standardization, one-at-a-time (OAT) sensitivity analysis, and analytical and statistical methods, no method exists that propagates this source of uncertainty for all relevant processes s
Anderson, R.; Gronewold, A.; Alameddine, I.; Reckhow, K.
2008-12-01
The latest official assessment of United States (US) surface water quality indicates that pathogens are a leading cause of coastal shoreline water quality standard violations. Rainfall-runoff and hydrodynamic water quality models are commonly used to predict fecal indicator bacteria (FIB) concentrations in these waters and to subsequently identify climate change, land use, and pollutant mitigation scenarios which might improve water quality and lead to reinstatement of a designated use. While decay, settling, and other loss kinetics dominate FIB fate and transport in freshwater systems, previous authors identify tidal advection as a dominant fate and transport process in coastal estuaries. As a result, acknowledging hydrodynamic model input (e.g., watershed runoff) variability and parameter (e.g., tidal dynamics parameters) uncertainty is critical to building a robust coastal water quality model. Despite the widespread application of watershed models (and associated model calibration procedures), we find model inputs and parameters are commonly encoded as deterministic point estimates (as opposed to random variables), an approach which effectively ignores potential sources of variability and uncertainty. Here, we present an innovative approach to building, calibrating, and propagating uncertainty and variability through a coupled data-based mechanistic (DBM) rainfall-runoff and tidal prism water quality model. While we apply the model to an ungauged tributary of the Newport River Estuary (one of many currently impaired shellfish harvesting waters in Eastern North Carolina), our model can be used to evaluate water quality restoration scenarios for coastal waters with a wide range of designated uses. We begin by calibrating the DBM rainfall-runoff model, as implemented in the IHACRES software package, using a regionalized calibration approach. We then encode parameter estimates as random variables (in the rainfall-runoff component of our comprehensive model) via the
Energy Technology Data Exchange (ETDEWEB)
Holland, Michael K. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); O' Rourke, Patrick E. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)
2016-05-04
An SRNL H-Canyon Test Bed performance evaluation project was completed jointly by SRNL and LANL on a prototype monochromatic energy dispersive x-ray fluorescence instrument, the hiRX. A series of uncertainty propagations were generated based upon plutonium and uranium measurements performed using the alpha-prototype hiRX instrument. Data reduction and uncertainty modeling provided in this report were performed by the SRNL authors. Observations and lessons learned from this evaluation were also used to predict the expected uncertainties that should be achievable at multiple plutonium and uranium concentration levels provided instrument hardware and software upgrades being recommended by LANL and SRNL are performed.
Meija, Juris; Pagliano, Enea; Mester, Zoltán
2014-09-02
Uncertainty of the result from the method of standard addition is often underestimated due to neglect of the covariance between the intercept and the slope. In order to simplify the data analysis from standard addition experiments, we propose x-y coordinate swapping in conventional linear regression. Unlike the ratio of the intercept and slope, which is the result of the traditional method of standard addition, the result of the inverse standard addition is obtained directly from the intercept of the swapped calibration line. Consequently, the uncertainty evaluation becomes markedly simpler. The method is also applicable to nonlinear curves, such as the quadratic model, without incurring any additional complexity.
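The coordinate-swapping idea in this record is easy to demonstrate numerically: in standard addition, the conventional result is the ratio intercept/slope of the y-vs-x calibration line, while regressing x on y yields the same result directly as (minus) the intercept. The data below are synthetic, with an assumed sensitivity and a small noise term for realism.

```python
import numpy as np

# Standard addition sketch: signal y = b * (x0 + x_added), x0 unknown
rng = np.random.default_rng(1)
x_added = np.array([0.0, 1.0, 2.0, 3.0, 4.0])   # added analyte amounts
x0_true = 2.5                                    # unknown concentration
y = 0.8 * (x0_true + x_added) + rng.normal(0, 1e-3, x_added.size)

# Conventional: fit y vs x; result = intercept / slope, whose uncertainty
# needs the intercept-slope covariance
b, a = np.polyfit(x_added, y, 1)
x0_conventional = a / b

# Inverse (swapped axes): fit x vs y; result is read off the intercept,
# so its uncertainty is just the intercept's standard error
s, c = np.polyfit(y, x_added, 1)
x0_inverse = -c

print(round(x0_conventional, 3), round(x0_inverse, 3))
```

Both estimators agree on the point estimate; the benefit of the swapped fit is that the result no longer involves a ratio of two correlated regression coefficients.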
Energy Technology Data Exchange (ETDEWEB)
Morales Prieto, M.; Ortega Saiz, P.
2011-07-01
Analysis of the analytical uncertainties of the process-simulation methodology used to obtain the final isotopic inventory of spent fuel; the ARIANE experiment is used to explore the burnup-simulation part.
Energy Technology Data Exchange (ETDEWEB)
Dunn, Floyd E. [Argonne National Lab. (ANL), Argonne, IL (United States); Hu, Lin-wen [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States). Nuclear Reactor Lab.; Wilson, Erik [Argonne National Lab. (ANL), Argonne, IL (United States)
2016-12-01
The STAT code was written to automate many of the steady-state thermal hydraulic safety calculations for the MIT research reactor, both for conversion of the reactor from high enrichment uranium fuel to low enrichment uranium fuel and for future fuel re-loads after the conversion. A Monte-Carlo statistical propagation approach is used to treat uncertainties in important parameters in the analysis. These safety calculations are ultimately intended to protect against high fuel plate temperatures due to critical heat flux or departure from nucleate boiling or onset of flow instability; but additional margin is obtained by basing the limiting safety settings on avoiding onset of nucleate boiling. STAT7 can simultaneously analyze all of the axial nodes of all of the fuel plates and all of the coolant channels for one stripe of a fuel element. The stripes run the length of the fuel, from the bottom to the top. Power splits are calculated for each axial node of each plate to determine how much of the power goes out each face of the plate. By running STAT7 multiple times, full core analysis has been performed by analyzing the margin to ONB for each axial node of each stripe of each plate of each element in the core.
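The Monte Carlo statistical propagation approach mentioned in this record can be sketched generically: sample the uncertain inputs, push each sample through the thermal model, and read a conservative margin off the output distribution. The wall-temperature correlation and all numbers below are stand-ins for illustration, not the STAT7 model or MIT reactor parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample uncertain inputs (relative power and coolant flow uncertainties
# are assumed values, not plant data)
n = 10_000
power = rng.normal(1.00, 0.02, n)
flow = rng.normal(1.00, 0.03, n)

# Illustrative wall-temperature model and an assumed ONB temperature (degC)
t_onb = 120.0
t_wall = 60.0 + 45.0 * power / flow

# Margin to onset of nucleate boiling; a low percentile gives a
# statistically conservative limiting value
margin = t_onb - t_wall
print(round(float(np.percentile(margin, 0.135)), 1))  # ~3-sigma low margin
```

Basing the limiting safety setting on a low percentile of the margin distribution is the statistical analogue of stacking worst-case uncertainties deterministically, but typically less over-conservative.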
Verification of uncertainty budgets
DEFF Research Database (Denmark)
Heydorn, Kaj; Madsen, B.S.
2005-01-01
The quality of analytical results is expressed by their uncertainty, as it is estimated on the basis of an uncertainty budget; little effort is, however, often spent on ascertaining the quality of the uncertainty budget. The uncertainty budget is based on circumstantial or historical data, and th...
Mazoyer, Johan; Norman, Colin; N'Diaye, Mamadou; van der Marel, Roeland P; Soummer, Rémi
2015-01-01
The new frontier in the quest for the highest contrast levels in the focal plane of a coronagraph is now the correction of the large diffractive artifacts introduced at the science camera by apertures of increasing complexity. The coronagraph for the WFIRST/AFTA mission will be the first such instrument in space with a two-deformable-mirror wavefront control system. Regardless of the control algorithm for these multiple deformable mirrors, it will have to rely on quick and accurate simulation of the propagation effects introduced by the out-of-pupil surface. In the first part of this paper, we present the analytical description of the different approximations used to simulate these propagation effects. In Annex A, we prove analytically that, in the special case of surfaces inducing a converging beam, the Fresnel method yields high fidelity for simulations of these effects. We provide numerical simulations showing this effect. In the second part, we use these tools in the framework of the Active Compens...
Nayfeh, A. H.; Kaiser, J. E.; Marshall, R. L.; Hurst, L. J.
1978-01-01
The performance of sound suppression techniques in ducts that produce refraction effects due to axial velocity gradients was evaluated. A computer code based on the method of multiple scales was used to calculate the influence of axial variations due to slow changes in the cross-sectional area as well as transverse gradients due to the wall boundary layers. An attempt was made to verify the analytical model through direct comparison of experimental and computational results and the analytical determination of the influence of axial gradients on optimum liner properties. However, the analytical studies were unable to examine the influence of non-parallel ducts on the optimum liner conditions. For liner properties not close to optimum, the analytical predictions and the experimental measurements were compared. The circumferential variations of pressure amplitudes and phases at several axial positions were examined in straight and variable-area ducts, hard-wall and lined sections, with and without a mean flow. Reasonable agreement between the theoretical and experimental results was obtained.
Plósz, Benedek Gy; De Clercq, Jeriffa; Nopens, Ingmar; Benedetti, Lorenzo; Vanrolleghem, Peter A
2011-01-01
In WWTP models, the accurate assessment of solids inventory in bioreactors equipped with solid-liquid separators, mostly described using one-dimensional (1-D) secondary settling tank (SST) models, is the most fundamental requirement of any calibration procedure. Scientific knowledge on characterising particulate organics in wastewater and on bacteria growth is well-established, whereas 1-D SST models and their impact on biomass concentration predictions are still poorly understood. A rigorous assessment of two 1-D SST models is thus presented: one based on hyperbolic (the widely used Takács model) and one based on parabolic (the more recently presented Plósz model) partial differential equations. The former model, using numerical approximation to yield realistic behaviour, is currently the most widely used by wastewater treatment process modellers. The latter is a convection-dispersion model that is solved in a numerically sound way. First, the explicit dispersion in the convection-dispersion model and the numerical dispersion for both SST models are calculated. Second, simulation results of effluent suspended solids concentration (XTSS,Eff), sludge recirculation stream (XTSS,RAS) and sludge blanket height (SBH) are used to demonstrate the distinct behaviour of the models. A thorough scenario analysis is carried out using SST feed flow rate, solids concentration, and overflow rate as degrees of freedom, spanning a broad loading spectrum. A comparison between the measurements and the simulation results demonstrates a considerably improved 1-D model realism using the convection-dispersion model in terms of SBH, XTSS,RAS and XTSS,Eff. Third, to assess the propagation of uncertainty derived from settler model structure to the biokinetic model, the impact of the SST model as sub-model in a plant-wide model on the general model performance is evaluated. A long-term simulation of a bulking event is conducted that spans temperature evolution throughout a summer
Casse, C.; Gosset, M.; Peugeot, C.; Boone, A.; Pedinotti, V.
2013-12-01
The use of satellite-based rainfall in research or operational hydrological applications is becoming more and more frequent. This is especially true in the Tropics, where ground-based gauge (or radar) networks are generally scarce and often degrading. As a member of the GPM constellation, the new French-Indian satellite mission Megha-Tropiques (MT), dedicated to the water and energy budget in the tropical atmosphere, contributes to better monitoring of rainfall in the inter-tropical zone. As part of this mission, research is being developed on the use of MT rainfall products for hydrological research or operational applications such as flood monitoring. A key issue for such applications is how to account for rainfall product biases and uncertainties, and how to propagate them into the end-user models. Another important question is how to choose the best space-time resolution for the rainfall forcing, given that both model performance and rain-product uncertainties are resolution dependent. This talk will present ongoing investigations and perspectives on this subject, with examples from the Megha-Tropiques ground validation sites in West Africa. The CNRM model Surfex-ISBA/TRIP has been set up to simulate the hydrological behaviour of the Niger River. This modelling setup is being used to study the predictability of Niger flood events in the city of Niamey and the performance of satellite rainfall products as forcing for such predictions. One of the interesting features of the Niger outflow in Niamey is its double peak: a first peak attributed to 'local' rainfall falling in small to medium size basins situated in the region of Niamey, and a second peak linked to the rainfall falling in the upper part of the river and slowly propagating through the river towards Niamey. The performance of rainfall products is found to differ between the wetter/upper part of the basin and the local/Sahelian areas. Both academic tests with artificially biased or 'perturbed' rain fields and actual
Directory of Open Access Journals (Sweden)
Cinzia Caliendo
2015-01-01
The propagation of the fundamental symmetric Lamb mode S0 along wz-BN/AlN thin composite plates suitable for telecommunication and sensing applications is studied. The investigation of the acoustic field profile across the plate thickness revealed the presence of modes having longitudinal polarization, the Anisimkin Jr. plate modes (AMs), travelling at a phase velocity close to that of the wz-BN longitudinal bulk acoustic wave propagating in the same direction. The study of the S0 mode phase velocity and coupling coefficient (K2) dispersion curves, for different electrical boundary conditions, has shown that eight different coupling configurations are allowable that exhibit a K2 as high as about 4% and a very high phase velocity (up to about 16,700 m/s). The effect of the thickness and material type of the metal floating electrode on the K2 dispersion curves has also been investigated, specifically addressing the design of an enhanced-coupling device. The gravimetric sensitivity of the BN/AlN-based acoustic waveguides was then calculated for both the AMs and the elliptically polarized S0 modes; the AM-based sensor velocity and attenuation shifts due to the viscosity of a surrounding liquid were theoretically predicted. The performed investigation suggests that wz-BN/AlN is a very promising substrate material for developing GHz-band devices with enhanced electroacoustic coupling efficiency, suitable for application in the telecommunications and sensing fields.
Hu, Huayu
2015-01-01
Nonperturbative calculation of QED processes in a strong electromagnetic field, especially one provided by strong laser facilities at present and in the near future, generally resorts to the Furry picture with the use of analytical solutions of the particle dynamical equation, such as the Klein-Gordon equation and the Dirac equation. However, only for limited field configurations, such as a plane-wave field, can the equations be solved analytically. Studies have shown significant interest in QED processes in a strong field composed of two counter-propagating laser waves, but exact solutions in such a field are out of reach. In this paper, inspired by the observation of the structure of the solutions in a plane-wave field, we develop a new method and obtain the analytical solution for the Klein-Gordon equation, and equivalently the action function of the solution for the Dirac equation, in this field, under a largest dynamical parameter condition that there exists an inertial frame in which the particl...
Cui, Linyan; Xue, Bindang; Zhou, Fugen
2013-11-01
The effects of moderate-to-strong non-Kolmogorov turbulence on the angle of arrival (AOA) fluctuations for plane and spherical waves are investigated in detail, both analytically and numerically. New analytical expressions for the variance of AOA fluctuations are derived for moderate-to-strong non-Kolmogorov turbulence. The new expressions cover a wider range of non-Kolmogorov turbulence strengths and reduce correctly to previously published analytic expressions for the cases of plane and spherical wave propagation through both weak non-Kolmogorov turbulence and moderate-to-strong Kolmogorov turbulence. The final results indicate that, as turbulence strength becomes greater, the expressions developed with the Rytov theory deviate from those given in this work. This deviation becomes greater with stronger turbulence, up to moderate-to-strong turbulence strengths. Furthermore, the general spectral power law has a significant influence on the variance of AOA fluctuations in non-Kolmogorov turbulence. These results are useful for understanding the potential impact of deviations from the standard Kolmogorov spectrum.
Directory of Open Access Journals (Sweden)
Barbara Mickowska
2013-02-01
The aim of this study was to assess the importance of validation and uncertainty estimation related to the results of amino acid analysis using ion-exchange chromatography with the post-column derivatization technique. The method was validated, and the components of standard uncertainty were identified and quantified to recognize the major contributions to the uncertainty of the analysis. The estimated relative expanded uncertainty (k=2, P=95%) varied in the range from 9.03% to 12.68%. Quantification of the uncertainty components indicates that the contribution of the calibration concentration uncertainty is the largest and plays the most important role in the overall uncertainty of amino acid analysis. It is followed by the uncertainty of the chromatographic peak areas and the sample weighing procedure. The uncertainties of the sample volume and the calibration peak area may be negligible. The comparison of the CV% with the estimated relative uncertainty indicates that the interpretation of research results can be misleading without uncertainty estimation.
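The uncertainty-budget arithmetic behind a record like this is standard: relative standard-uncertainty components are combined in quadrature and multiplied by the coverage factor k=2 for an approximately 95 % interval. The component values below are illustrative assumptions, not the figures from the amino acid study.

```python
import math

# Relative standard-uncertainty components (illustrative values)
components = {
    "calibration_concentration": 0.045,  # dominant contribution
    "peak_area": 0.020,
    "sample_weighing": 0.010,
    "sample_volume": 0.002,              # effectively negligible
}

# Combined standard uncertainty: root-sum-of-squares of the components
u_combined = math.sqrt(sum(u ** 2 for u in components.values()))

# Expanded uncertainty with coverage factor k = 2 (P ≈ 95 %)
U_expanded = 2.0 * u_combined
print(round(100 * U_expanded, 2))        # relative expanded uncertainty, %  → 10.06
```

Note how the quadrature sum is dominated by the largest component, which is why the study singles out the calibration concentration term.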
Miles, R N
2016-03-01
A mathematical model is presented to examine the propagation of bending waves on a plant stem that are induced by vibratory excitation from an attached insect. This idealized model represents the insect body as a mass and the legs as a linear spring along with a general time-varying force that is assumed to act in parallel with the spring. The spring connects the mass to a stem modeled as a beam having uniform geometric and material properties. The linearly elastic beam is assumed to undergo pure vibratory bending and to be infinitely long in each direction. The equations that govern the insect-induced, coupled motions of both the beam and the mass are solved for arbitrary time varying forces produced by the insect's legs. Solutions for the frequency response indicate that the response is dominated by frequency components near the natural resonant frequency of the attached insect while at higher frequencies the amplitude of the response is strongly influenced only by the properties of the stem.
Energy Technology Data Exchange (ETDEWEB)
Romero-Garcia, V [Instituto de Ciencia de Materiales de Madrid, Consejo Superior de Investigaciones Cientificas (Spain); Sanchez-Perez, J V [Centro de Tecnologias Fisicas: Acustica, Materiales y Astrofisica, Universidad Politecnica de Valencia (Spain); Garcia-Raffi, L M, E-mail: virogar1@gmail.com [Instituto Universitario de Matematica Pura y Aplicada, Universidad Politecnica de Valencia (Spain)
2011-07-06
The use of sonic crystals (SCs) as environmental noise barriers has certain advantages, from both the acoustical and the constructive points of view, over conventional barriers. However, the interaction between SCs and the ground has not yet been studied. In this work we report a semi-analytical model, based on multiple scattering theory and on the method of images, to study this interaction, considering the ground as a finite-impedance surface. The results obtained here show that this model could be used to design more effective noise barriers based on SCs, because the excess attenuation of the ground can be modelled so as to improve the attenuation properties of the array of scatterers. The results are compared with experimental data and numerical predictions, with good agreement found between them.
Error propagation in polarimetric demodulation
Ramos, A Asensio
2008-01-01
The polarization analysis of light is typically carried out using modulation schemes. Light of unknown polarization state is passed through a set of known modulation optics, and a detector measures the total intensity passing through the system. The modulation optics is modified several times and, with the aid of these several measurements, the unknown polarization state of the light can be inferred. How to find the optimal demodulation process has been investigated in the past. However, since the modulation matrix has to be measured for a given instrument and the optical elements can present repeatability problems, some uncertainty is present in the elements of the modulation matrix and/or covariances between these elements. We analyze this issue in detail, presenting analytical formulae for calculating the covariance matrix produced by the propagation of such uncertainties onto the demodulation matrix, the inferred Stokes parameters, and the efficiency of the modulation process. We demonstrate...
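The linear propagation step underlying this kind of analysis can be sketched as follows: if the Stokes vector is recovered as S = D·I, the covariance of the intensity measurements maps onto the Stokes parameters as Cov(S) = D·C_I·Dᵀ. The modulation matrix and noise level below are invented for illustration; the paper's formulae additionally treat the uncertainty in D itself.

```python
import numpy as np

# Sketch: propagate the photon-noise covariance of n intensity measurements
# through a demodulation matrix D to the inferred Stokes parameters.
# The modulation matrix O (n x 4) is illustrative, not from a real instrument.
rng = np.random.default_rng(0)
O = rng.uniform(-1, 1, size=(6, 4))
O[:, 0] = 1.0                      # every modulation state measures intensity

D = np.linalg.pinv(O)              # least-squares demodulation: D = (O^T O)^-1 O^T

sigma_I = 0.01                     # assumed identical noise per measurement
C_I = sigma_I**2 * np.eye(6)       # covariance of the intensity measurements

# Linear error propagation: Cov(S) = D C_I D^T
C_S = D @ C_I @ D.T
stokes_sigmas = np.sqrt(np.diag(C_S))
```

The diagonal of `C_S` gives the variances of the inferred Stokes parameters; the off-diagonal terms show how the demodulation correlates their errors even when the intensity measurements are independent.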
Mazoyer, Johan; Pueyo, Laurent; Norman, Colin; N'Diaye, Mamadou; van der Marel, Roeland P.; Soummer, Rémi
2016-03-01
The new frontier in the quest for the highest contrast levels in the focal plane of a coronagraph is now the correction of the large diffraction artifacts introduced at the science camera by apertures of increasing complexity. Indeed, the future generation of space- and ground-based coronagraphic instruments will be mounted on on-axis and/or segmented telescopes; the design of coronagraphic instruments for such observatories is currently a domain undergoing rapid progress. One approach consists of using two sequential deformable mirrors (DMs) to correct for aberrations introduced by secondary mirror structures and segmentation of the primary mirror. The coronagraph for the WFIRST-AFTA mission will be the first such instrument in space with a two-DM wavefront control system. Regardless of the control algorithm for these multiple DMs, it will have to rely on quick and accurate simulation of the propagation effects introduced by the out-of-pupil surface. In the first part of this paper, we present the analytical description of the different approximations used to simulate these propagation effects. In Appendix A, we prove analytically that in the special case of surfaces inducing a converging beam, the Fresnel method yields high-fidelity simulations of these effects, and we provide numerical simulations showing this. In the second part, we use these tools in the framework of the active compensation of aperture discontinuities (ACAD) technique applied to pupil geometries similar to WFIRST-AFTA. We present these simulations in the context of the optical layout of the high-contrast imager for complex aperture telescopes, which will test ACAD on an optical bench. The results of this analysis show that using the ACAD method, an apodized pupil Lyot coronagraph, and the performance of our current DMs, we are able to obtain, in numerical simulations, a dark hole with a WFIRST-AFTA-like aperture. Our numerical simulation shows that we can obtain contrast better than 2×10⁻⁹ in
Taming systematic uncertainties at the LHC with the central limit theorem
Fichet, Sylvain
2016-01-01
We study the simplifications occurring in any likelihood function in the presence of a large number of small systematic uncertainties. We find that the marginalisation of these uncertainties can be done analytically by means of second-order error propagation, error combination, the Lyapunov central limit theorem, and mild approximations which are typically satisfied for LHC likelihoods. The outcomes of this analysis are (i) a very light treatment of systematic uncertainties and (ii) a convenient way of reporting the main effects of systematic uncertainties, such as the detector effects occurring in LHC measurements.
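The central-limit simplification can be illustrated numerically: for many small, independent nuisance parameters with linear effects, their combined shift is approximately Gaussian with variance equal to the sum of the individual variances, so marginalisation reduces to a single widened Gaussian. The shift sizes below are invented.

```python
import math
import random

random.seed(1)
n_syst = 100
# Each nuisance parameter theta_i ~ N(0, 1) shifts the observable by delta_i * theta_i.
deltas = [random.uniform(-0.02, 0.02) for _ in range(n_syst)]

# Gaussian approximation from the Lyapunov CLT: combined variance is the sum
var_combined = sum(d**2 for d in deltas)
sigma_combined = math.sqrt(var_combined)

# Check against direct sampling of the summed shift
samples = [sum(d * random.gauss(0, 1) for d in deltas) for _ in range(5000)]
mean_s = sum(samples) / len(samples)
var_s = sum((s - mean_s) ** 2 for s in samples) / len(samples)
```

The sampled variance of the total shift agrees with the analytic sum of variances, which is what lets the likelihood be marginalised in closed form instead of scanning 100 nuisance parameters.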
Leśniewska, Barbara; Kisielewska, Katarzyna; Wiater, Józefa; Godlewska-Żyłkiewicz, Beata
2016-01-01
A new fast method for the determination of mobile zinc fractions in soil is proposed in this work. The three-stage modified BCR procedure used for fractionation of zinc in soil was accelerated by using ultrasound. The working parameters of an ultrasound probe, the power and the time of sonication, were optimized so that the analyte content in soil extracts obtained by ultrasound-assisted sequential extraction (USE) was consistent with that obtained by the conventional modified Community Bureau of Reference (BCR) procedure. The zinc content in the extracts was determined by flame atomic absorption spectrometry. The developed USE procedure shortened the total extraction time from 48 h to 27 min compared with the conventional modified BCR procedure. The method was fully validated, and the uncertainty budget was evaluated. The trueness and reproducibility of the developed method were confirmed by analysis of the certified reference material of lake sediment BCR-701. The applicability of the procedure for fast, low-cost, and reliable determination of the mobile zinc fraction in soil, which may be useful for assessing anthropogenic impacts on natural resources and for environmental monitoring purposes, was demonstrated by analysis of different types of soil collected from Podlaskie Province (Poland).
Asllanaj, Fatmir; Contassot-Vivier, Sylvain; Liemert, André; Kienle, Alwin
2014-01-01
We examine the accuracy of a modified finite volume method compared to analytical and Monte Carlo solutions for solving the radiative transfer equation. The model is used for predicting light propagation within a two-dimensional absorbing and highly forward-scattering medium, such as biological tissue, subjected to a collimated light beam. Numerical simulations of the spatially resolved reflectance and transmittance are presented, considering refractive index mismatch with Fresnel reflection at the interface, for homogeneous and two-layered media. Time-dependent as well as steady-state cases are considered. In the steady state, the modified finite volume method is found to be in good agreement with the other two methods, and the relative differences between the solutions decrease with spatial mesh refinement. For the time-dependent case, a Runge-Kutta method is used for the time semi-discretization of the radiative transfer equation. Agreement among the modified finite volume method, the Runge-Kutta method, and the Monte Carlo solutions is shown, but with relative differences higher than in the steady state.
2016-01-01
Uncertainty is associated with GIS-based Multi-Criteria Decision Analysis (GIS-MCDA) when applied to disaster modeling. Technically speaking, GIS-MCDA model outcomes are prone to multiple types of uncertainty and error. In order to minimize the inherent uncertainty, within this research we introduce a novel approach of spatially explicit uncertainty and sensitivity analysis for GIS-MCDA models. This novel approach is developed based on earlier works published by Feizizadeh et al. (2014a, 2014b) and make...
Understanding and reducing statistical uncertainties in nebular abundance determinations
Wesson, R.; Stock, D. J.; Scicluna, P.
2012-06-01
Whenever observations are compared to theories, an estimate of the uncertainties associated with the observations is vital if the comparison is to be meaningful. However, many or even most determinations of temperatures, densities and abundances in photoionized nebulae do not quote the associated uncertainty. Those that do typically propagate the uncertainties using analytical techniques which rely on assumptions that generally do not hold. Motivated by this issue, we have developed the Nebular Empirical Analysis Tool (NEAT), a new code for calculating chemical abundances in photoionized nebulae. The code carries out a standard analysis of lists of emission lines using long-established techniques to estimate the amount of interstellar extinction, calculate representative temperatures and densities, compute ionic abundances from both collisionally excited lines and recombination lines, and finally to estimate total elemental abundances using an ionization correction scheme. NEAT uses a Monte Carlo technique to robustly propagate uncertainties from line flux measurements through to the derived abundances. We show that, for typical observational data, this approach is superior to analytic estimates of uncertainties. NEAT also accounts for the effect of upward biasing on measurements of lines with low signal-to-noise ratio, allowing us to accurately quantify the effect of this bias on abundance determinations. We find not only that the effect can result in significant overestimates of heavy element abundances derived from weak lines, but also that taking it into account reduces the uncertainty of these abundance determinations. Finally, we investigate the effect of possible uncertainties in R, the ratio of total-to-selective extinction, on abundance determinations. We find that the uncertainty due to this parameter is negligible compared to the statistical uncertainties due to typical line flux measurement uncertainties.
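A minimal sketch of the Monte Carlo propagation idea (not NEAT's actual implementation): sample each line flux from its measurement distribution, recompute the derived quantity for every draw, and read the uncertainty off the resulting distribution. For a weak line with a large relative error, the distribution of a flux ratio is visibly asymmetric, which a first-order analytic estimate would miss. All values are invented.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10000

# Line fluxes sampled from their measurement uncertainties (invented values)
flux_a = rng.normal(100.0, 5.0, n)   # strong line, 5% relative error
flux_b = rng.normal(50.0, 10.0, n)   # weak line, 20% relative error

ratio = flux_a / flux_b              # derived quantity (e.g. an abundance proxy)

# Summarize the MC distribution by its median and 1-sigma percentile interval;
# the right-skew of the ratio makes the upper error bar larger than the lower.
median = np.median(ratio)
lo, hi = np.percentile(ratio, [15.87, 84.13])
```

Reporting `median` with the asymmetric interval `(lo, hi)` captures the shape of the propagated distribution, whereas a single symmetric sigma from analytic propagation would understate the upward tail.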
Understanding and reducing statistical uncertainties in nebular abundance determinations
Wesson, R; Scicluna, P
2012-01-01
Whenever observations are compared to theories, an estimate of the uncertainties associated with the observations is vital if the comparison is to be meaningful. However, many determinations of temperatures, densities and abundances in photoionized nebulae do not quote the associated uncertainty. Those that do typically propagate the uncertainties using analytical techniques which rely on assumptions that generally do not hold. Motivated by this issue, we have developed NEAT (Nebular Empirical Analysis Tool), a new code for calculating chemical abundances in photoionized nebulae. The code carries out an analysis of lists of emission lines using long-established techniques to estimate the amount of interstellar extinction, calculate representative temperatures and densities, compute ionic abundances from both collisionally excited lines and recombination lines, and finally to estimate total elemental abundances using an ionization correction scheme. NEAT uses a Monte Carlo technique to robustly propagate uncer...
Directory of Open Access Journals (Sweden)
S. Bönisch
2004-02-01
The objectives of this study were to use indicator kriging to spatialize soil properties expressed by numeric attributes, to generate a representation accompanied by a spatial measure of uncertainty, and to model the uncertainty propagation generated by fuzzy procedures of map algebra. The studied attributes were exchangeable potassium (K) and aluminum (Al) contents, sum of bases (SB), cation exchange capacity (CEC), base saturation (V), and total sand content (TST), extracted from 222 pedologic profiles and 219 extra samples located in Santa Catarina State, Brazil. When the attributes were expressed in fertility classes, the uncertainty of Al, SB, and V increased while that of K and CEC decreased, for confidence intervals of 95% probability. A larger number of numeric data points for K, SB, and V led to larger uncertainty in the spatial inference, while the degree of uncertainty decreased with the larger number of numeric data points for TST and CEC. The uncertainty decreased when different numeric representations were integrated.
Response analysis based on smallest interval-set of parameters for structures with uncertainty
Institute of Scientific and Technical Information of China (English)
Xiao-jun WANG; Lei WANG; Zhi-ping QIU
2012-01-01
An integral analytic process from quantification to propagation based on limited uncertain parameters is investigated to deal with practical engineering problems. A new method using the smallest interval set (hyper-rectangle) containing all experimental data is proposed to quantify the parameter uncertainties. With the smallest parameter interval set, the uncertainty propagation evaluation of the most favorable response and the least favorable response of the structures is studied based on interval analysis. The relationship between the proposed interval analysis method (IAM) and the classical IAM is discussed. Two numerical examples are presented to demonstrate the feasibility and validity of the proposed method.
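Interval-based propagation of the kind described above can be sketched with elementary interval arithmetic: each uncertain parameter is an interval, and the most and least favorable responses are the bounds of the propagated interval. The cantilever-tip deflection formula and the numbers below are an invented stand-in, not the paper's examples.

```python
# Minimal interval arithmetic for uncertainty propagation.
def interval_mul(a, b):
    products = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(products), max(products))

def interval_div(a, b):
    assert b[0] > 0 or b[1] < 0, "divisor interval must exclude zero"
    return interval_mul(a, (1.0 / b[1], 1.0 / b[0]))

# Tip deflection delta = F L^3 / (3 E I): load F and modulus E are uncertain
# intervals (invented bounds); geometry is taken as crisp.
F = (0.9e3, 1.1e3)        # N
E = (190e9, 210e9)        # Pa
L3 = 2.0**3               # m^3 (crisp length cubed)
I = 8.0e-6                # m^4 (crisp second moment of area)

num = interval_mul(F, (L3, L3))
den = interval_mul(E, (3 * I, 3 * I))
delta = interval_div(num, den)    # (most favorable, least favorable) deflection
```

The lower bound pairs the smallest load with the stiffest structure and the upper bound the reverse, which is exactly the most/least favorable response pair that interval analysis delivers without assuming any probability distribution.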
Wieselquist, W.; Zhu, T.; Vasiliev, A; Ferroukhi, H.
2013-01-01
Capabilities for uncertainty quantification (UQ) with respect to nuclear data have been developed at PSI in recent years and applied to the UAM benchmark. The guiding principle for the PSI UQ development has been to implement nonintrusive "black box" UQ techniques in state-of-the-art, production-quality codes already used for routine analyses. Two complementary UQ techniques have been developed thus far: (i) direct perturbation (DP) and (ii) stochastic sampling (SS). The DP technique is, ...
Efficient Quantification of Uncertainties in Complex Computer Code Results Project
National Aeronautics and Space Administration — Propagation of parameter uncertainties through large computer models can be very resource intensive. Frameworks and tools for uncertainty quantification are...
Enstedt, M.; Wellander, N.
2016-11-01
A chaos expanded Fourier split-step method is derived and applied to a narrow-angle parabolic equation. The parabolic equation has earlier been used to study deterministic settings. In this paper we develop a spectral-based Fourier split-step method that will take a limited degree of information about the environment into account. Our main focus is on proposing an efficient method for computational electromagnetics in stochastic settings. In this paper we study electromagnetic wave propagation in the troposphere in the case when the refraction index belongs to a uniform distribution.
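For the deterministic core of such a solver, a symmetrized Fourier split-step update for a narrow-angle parabolic equation alternates a refraction phase screen in physical space with a diffraction step in the spectral domain. The grid, wavenumber, and refractivity below are invented; the paper's contribution is the chaos-expansion layer built on top of this deterministic scheme.

```python
import numpy as np

# Illustrative deterministic split-step Fourier stepping for a narrow-angle
# parabolic equation du/dx = (i/(2k)) d2u/dz2 + (i k/2)(n^2 - 1) u.
n_z, dz, dx = 256, 1.0, 50.0
k0 = 2 * np.pi / 0.03                      # wavenumber for a 3 cm wavelength, 1/m
z = np.arange(n_z) * dz
kz = 2 * np.pi * np.fft.fftfreq(n_z, dz)   # angular spatial frequencies

# Gaussian initial field and a constant (deterministic) refractive index
u = np.exp(-(((z - 128.0) / 10.0) ** 2)).astype(complex)
n_refr = np.full(n_z, 1.0003)

def split_step(u, steps=10):
    for _ in range(steps):
        # half step: refraction applied as a phase screen in physical space
        u = u * np.exp(1j * k0 * (n_refr**2 - 1) / 2 * dx / 2)
        # full step: diffraction applied in the spectral domain
        u = np.fft.ifft(np.exp(-1j * kz**2 / (2 * k0) * dx) * np.fft.fft(u))
        # half step: refraction again (symmetrized Strang splitting)
        u = u * np.exp(1j * k0 * (n_refr**2 - 1) / 2 * dx / 2)
    return u

u_out = split_step(u)
# Both sub-steps are unitary, so the discrete field energy is conserved.
power_in = np.sum(np.abs(u) ** 2)
power_out = np.sum(np.abs(u_out) ** 2)
```

In the stochastic setting of the abstract, the refractivity `n_refr` would be a random field drawn from the assumed uniform distribution, and the chaos expansion propagates its statistics through repeated applications of this same deterministic step.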
Vector network analyzer (VNA) measurements and uncertainty assessment
Shoaib, Nosherwan
2017-01-01
This book describes vector network analyzer measurements and uncertainty assessments, particularly in waveguide test-set environments, in order to establish their compatibility to the International System of Units (SI) for accurate and reliable characterization of communication networks. It proposes a fully analytical approach to measurement uncertainty evaluation, while also highlighting the interaction and the linear propagation of different uncertainty sources to compute the final uncertainties associated with the measurements. The book subsequently discusses the dimensional characterization of waveguide standards and the quality of the vector network analyzer (VNA) calibration techniques. The book concludes with an in-depth description of the novel verification artefacts used to assess the performance of the VNAs. It offers a comprehensive reference guide for beginners to experts, in both academia and industry, whose work involves the field of network analysis, instrumentation and measurements.
Conroy, Charlie; White, Martin
2008-01-01
The stellar masses, mean ages, metallicities, and star formation histories of galaxies are now commonly estimated via stellar population synthesis (SPS) techniques. SPS relies on stellar evolution calculations from the main sequence to stellar death, stellar spectral libraries, phenomenological dust models, and stellar initial mass functions (IMFs). The present work is the first in a series that explores the impact of uncertainties in key phases of stellar evolution and the IMF on the derived physical properties of galaxies and the expected luminosity evolution for a passively evolving set of stars. A Monte-Carlo Markov-Chain approach is taken to fit near-UV through near-IR photometry of a representative sample of low- and high-redshift galaxies with this new SPS model. Significant results include the following: 1) including uncertainties in stellar evolution, stellar masses at z~0 carry errors of ~0.3 dex at 95% CL with little dependence on luminosity or color, while at z~2, the masses of bright red galaxies...
Manning, Robert M.
2004-01-01
The extended wide-angle parabolic wave equation applied to electromagnetic wave propagation in random media is considered. A general operator equation is derived which gives the statistical moments of an electric field of a propagating wave. This expression is used to obtain the first and second order moments of the wave field and solutions are found that transcend those which incorporate the full paraxial approximation at the outset. Although these equations can be applied to any propagation scenario that satisfies the conditions of application of the extended parabolic wave equation, the example of propagation through atmospheric turbulence is used. It is shown that in the case of atmospheric wave propagation and under the Markov approximation (i.e., the delta-correlation of the fluctuations in the direction of propagation), the usual parabolic equation in the paraxial approximation is accurate even at millimeter wavelengths. The comprehensive operator solution also allows one to obtain expressions for the longitudinal (generalized) second order moment. This is also considered and the solution for the atmospheric case is obtained and discussed. The methodology developed here can be applied to any qualifying situation involving random propagation through turbid or plasma environments that can be represented by a spectral density of permittivity fluctuations.
Energy Technology Data Exchange (ETDEWEB)
Andres, T.H
2002-05-01
This guide applies to the estimation of uncertainty in quantities calculated by scientific analysis and design computer programs that fall within the scope of AECL's software quality assurance (SQA) manual. The guide weaves together rational approaches from the SQA manual and three other diverse sources: (a) the CSAU (Code Scaling, Applicability, and Uncertainty) evaluation methodology; (b) the ISO Guide for the Expression of Uncertainty in Measurement; and (c) the SVA (Systems Variability Analysis) method of risk analysis. This report describes the manner in which random and systematic uncertainties in calculated quantities can be estimated and expressed. Random uncertainty in model output can be attributed to uncertainties of inputs. The propagation of these uncertainties through a computer model can be represented in a variety of ways, including exact calculations, series approximations, and Monte Carlo methods. Systematic uncertainties emerge from the development of the computer model itself, through simplifications and conservatisms, for example. These must be estimated and combined with random uncertainties to determine the combined uncertainty in a model output. This report also addresses the method by which uncertainties should be employed in code validation, in order to determine whether experiments and simulations agree, and whether or not a code satisfies the required tolerance for its application. (author)
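Two of the propagation routes the guide lists, a first-order series approximation and Monte Carlo sampling, can be compared on a toy model y = a·e^b with independent Gaussian inputs (all values invented):

```python
import math
import random

# Invented inputs: a ~ N(a0, sigma_a), b ~ N(b0, sigma_b), output y = a * exp(b)
a0, sigma_a = 2.0, 0.1
b0, sigma_b = 0.5, 0.05

# First-order series approximation:
# var(y) ~ (dy/da)^2 var(a) + (dy/db)^2 var(b)
dyda = math.exp(b0)
dydb = a0 * math.exp(b0)
sigma_y_series = math.sqrt((dyda * sigma_a) ** 2 + (dydb * sigma_b) ** 2)

# Monte Carlo propagation of the same uncertainties
random.seed(0)
ys = [random.gauss(a0, sigma_a) * math.exp(random.gauss(b0, sigma_b))
      for _ in range(50000)]
mean_y = sum(ys) / len(ys)
sigma_y_mc = math.sqrt(sum((y - mean_y) ** 2 for y in ys) / (len(ys) - 1))
```

For small relative input uncertainties the two routes agree closely; the Monte Carlo estimate additionally reveals any skewness or higher-order effects that the truncated series cannot represent, which is the trade-off the guide describes.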
Samaniego, L. E.; Kumar, R.; Schaefer, D.; Huang, S.; Yang, T.; Mishra, V.; Eisner, S.; Vetter, T.; Pechlivanidis, I.; Liersch, S.; Flörke, M.; Krysanova, V.
2015-12-01
Droughts are creeping hydro-meteorological events that bring societies and natural systems to their limits, inducing considerable socio-economic losses. It is currently hypothesized that climate change will exacerbate current trends, leading to more severe and extended droughts as well as longer than normal recovery periods. Current assessments, however, lack a consistent framework to deal with compatible initial conditions for the impact models and a set of standardized historical and future forcings. The ISI-MIP project provides a unique opportunity to understand the propagation of model and forcing uncertainty into century-long time series of drought characteristics using an ensemble of model predictions across a broad range of climate scenarios and regions. In the present study, we analyze this issue using the hydrologic simulations carried out with HYPE, mHM, SWIM, VIC, and WaterGAP3 in seven large continental river basins: Amazon, Blue Nile, Ganges, Niger, Mississippi, Rhine, and Yellow. All models were calibrated against observed streamflow during the period 1971-2001 using the same forcings based on the WATCH data sets. These constrained models were then forced with bias-corrected outputs of five CMIP-5 GCMs under four RCP scenarios (i.e., 2.6, 4.5, 6.0, and 8.5 W/m2) for the period 1971-2099. A non-parametric kernel density approach is used to estimate the temporal evolution of a monthly runoff index based on simulated streamflow. Hydrologic simulations corresponding to each GCM during the historic period 1981-2010 serve as reference for the estimation of the basin-specific monthly probability distribution functions. GCM-specific reference pdfs are then used to recast the future hydrologic model outputs from the different RCP scenarios. Based on these results, drought severity and duration are investigated during three periods: 1) 2006-2035, 2) 2036-2065, and 3) 2070-2099. Two main hypotheses are investigated: 1) model predictive uncertainty of drought indices among different hydrologic
Energy Technology Data Exchange (ETDEWEB)
Barrado, A. I.; Garcia, S.; Perez, R. M.
2013-06-01
This paper presents an evaluation of the uncertainty associated with the analytical measurement of eighteen polycyclic aromatic compounds (PACs) in ambient air by liquid chromatography with fluorescence detection (HPLC/FD). The study focused on analyses of PM10, PM2.5, and gas-phase fractions. The main analytical uncertainty was estimated for eleven polycyclic aromatic hydrocarbons (PAHs), four nitro polycyclic aromatic hydrocarbons (nitro-PAHs), and two hydroxy polycyclic aromatic hydrocarbons (OH-PAHs), based on the analytical determination, reference material analysis, and extraction step. The main contributions reached 15-30% and came from the extraction process of real ambient samples, with those for nitro-PAHs being the highest (20-30%). The range and mean of PAC mass concentrations measured in the gas phase and in the PM10/PM2.5 particle fractions during a full year are also presented. Concentrations of OH-PAHs were about 2-4 orders of magnitude lower than those of their parent PAHs and comparable to those sparsely reported in the literature.
Facets of Uncertainty in Digital Elevation and Slope Modeling
Institute of Scientific and Technical Information of China (English)
ZHANG Jingxiong; LI Deren
2005-01-01
This paper investigates the differences that result from applying different approaches to uncertainty modeling and reports an experiment examining error estimation and propagation in elevation and slope, with the latter derived from the former. It is confirmed that significant differences exist between uncertainty descriptors, and that propagation of uncertainty to end products is immensely affected by the specification of source uncertainty.
DEFF Research Database (Denmark)
Bisinella, Valentina; Conradsen, Knut; Christensen, Thomas Højlund;
2016-01-01
Purpose: Identification of key inputs and their effect on results from Life Cycle Assessment (LCA) models is fundamental. Because parameter importance varies greatly between cases due to the interaction of sensitivity and uncertainty, these features should never be defined a priori. However...... and uncertainty in a Global Sensitivity Analysis (GSA) framework. Methods: The proposed analytical method based on the calculation of sensitivity coefficients (SC) is evaluated against Monte Carlo sampling on traditional uncertainty assessment procedures, both for individual parameters and for full parameter sets...... of additivity of variances and GSA is tested on results from both uncertainty propagation methods. Then, we examine the differences in discernibility analyses results carried out with varying numbers of sampling points and parameters. Results and discussion: The proposed analytical method complies...
Energy Technology Data Exchange (ETDEWEB)
Thomas, R.E.
1982-03-01
An evaluation is made of the suitability of analytical and statistical sampling methods for making uncertainty analyses. The adjoint method is found to be well-suited for obtaining sensitivity coefficients for computer programs involving large numbers of equations and input parameters. For this purpose the Latin Hypercube Sampling method is found to be inferior to conventional experimental designs. The Latin hypercube method can be used to estimate output probability density functions, but requires supplementary rank transformations followed by stepwise regression to obtain uncertainty information on individual input parameters. A simple Cork and Bottle problem is used to illustrate the efficiency of the adjoint method relative to certain statistical sampling methods. For linear models of the form Ax=b it is shown that a complete adjoint sensitivity analysis can be made without formulating and solving the adjoint problem. This can be done either by using a special type of statistical sampling or by reformulating the primal problem and using suitable linear programming software.
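For context on the linear case Ax = b discussed above, the standard adjoint identity gives the sensitivity of a scalar output y = cᵀx to every entry of b from a single solve with Aᵀ. This sketch (with invented matrix values) verifies the identity against finite differences; the report's further point is that for linear models the same information can also be obtained without an explicit adjoint solve, via reformulation or special sampling.

```python
import numpy as np

# Linear model A x = b with scalar output y = c^T x. The adjoint solution
# lambda = A^{-T} c collects all output sensitivities at once: dy/db_j = lambda_j.
rng = np.random.default_rng(3)
A = rng.uniform(1, 2, (4, 4)) + 4 * np.eye(4)   # invented, well-conditioned
b = rng.uniform(0, 1, 4)
c = np.array([1.0, 0.0, 2.0, -1.0])             # invented output weights

lam = np.linalg.solve(A.T, c)                   # one adjoint solve

def y_of(bvec):
    return c @ np.linalg.solve(A, bvec)

# Finite-difference check: one forward solve per input parameter
eps = 1e-6
fd = np.array([(y_of(b + eps * np.eye(4)[j]) - y_of(b)) / eps
               for j in range(4)])
```

The contrast in cost is the point: finite differences need one forward solve per input parameter, while the adjoint route delivers the full sensitivity vector from a single additional solve, which is why the adjoint method scales to programs with many inputs.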
Error Analysis and Propagation in Metabolomics Data Analysis.
Moseley, Hunter N B
2013-01-01
Error analysis plays a fundamental role in describing the uncertainty in experimental results. It has several fundamental uses in metabolomics including experimental design, quality control of experiments, the selection of appropriate statistical methods, and the determination of uncertainty in results. Furthermore, the importance of error analysis has grown with the increasing number, complexity, and heterogeneity of measurements characteristic of 'omics research. The increase in data complexity is particularly problematic for metabolomics, which has more heterogeneity than other omics technologies due to the much wider range of molecular entities detected and measured. This review introduces the fundamental concepts of error analysis as they apply to a wide range of metabolomics experimental designs and it discusses current methodologies for determining the propagation of uncertainty in appropriate metabolomics data analysis. These methodologies include analytical derivation and approximation techniques, Monte Carlo error analysis, and error analysis in metabolic inverse problems. Current limitations of each methodology with respect to metabolomics data analysis are also discussed.
Uncertainty propagation in neuronal dynamical systems
Torres Valderrama, A.; Blom, J.G.
2013-01-01
One of the most notorious characteristics of neuronal electrical activity is its variability, whose origin is not just instrumentation noise, but mainly the intrinsically stochastic nature of neural computations. Neuronal models based on deterministic differential equations cannot account for such variability...
Fuzzy Uncertainty Evaluation for Fault Tree Analysis
Energy Technology Data Exchange (ETDEWEB)
Kim, Ki Beom; Shim, Hyung Jin [Seoul National University, Seoul (Korea, Republic of); Jae, Moo Sung [Hanyang University, Seoul (Korea, Republic of)
2015-05-15
The traditional probabilistic approach can calculate relatively accurate results. However, it requires a long time because of the repetitive computation of the Monte Carlo (MC) method. In addition, when informative data for statistical analysis are not sufficient or some events are mainly caused by human error, the probabilistic approach may not be possible, because the uncertainties of these events are difficult to express by probabilistic distributions. In order to reduce the computation time and quantify the uncertainties of top events when there exist basic events whose uncertainties are difficult to express by probabilistic distributions, fuzzy uncertainty propagation based on fuzzy set theory can be applied. In this paper, we develop a fuzzy uncertainty propagation code and apply it to the fault tree of the core damage accident after a large loss of coolant accident (LLOCA). The fuzzy uncertainty propagation code is implemented and tested for the fault tree of the radiation release accident. We apply this code to the fault tree of the core damage accident after the LLOCA in three cases and compare the results with those computed by probabilistic uncertainty propagation using the MC method. The results of the fuzzy uncertainty propagation can be calculated in a relatively short time and cover the results obtained by the probabilistic uncertainty propagation.
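The alpha-cut mechanics behind fuzzy uncertainty propagation can be sketched briefly: each basic-event probability is a triangular fuzzy number, each alpha level yields an interval, and the top-event interval follows from interval arithmetic through the gate logic. The two-event AND gate and the numbers below are invented, not the paper's fault tree.

```python
# Triangular fuzzy number (low, mode, high); an alpha-cut returns the interval
# of values whose membership degree is at least alpha.
def alpha_cut(tri, alpha):
    low, mode, high = tri
    return (low + alpha * (mode - low), high - alpha * (high - mode))

event_a = (1e-4, 2e-4, 4e-4)     # invented fuzzy basic-event probability
event_b = (5e-3, 1e-2, 2e-2)

def top_event_cut(alpha):
    a_lo, a_hi = alpha_cut(event_a, alpha)
    b_lo, b_hi = alpha_cut(event_b, alpha)
    # AND gate with independent events: interval product (all bounds positive)
    return (a_lo * b_lo, a_hi * b_hi)

# Evaluating a handful of alpha levels reconstructs the fuzzy top-event
# probability without any repeated random sampling.
cuts = {alpha: top_event_cut(alpha) for alpha in (0.0, 0.5, 1.0)}
```

The nested intervals (widest at alpha = 0, collapsing to the modal product at alpha = 1) replace the MC histogram of the probabilistic approach, which is why the fuzzy propagation runs in a fraction of the time.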
Bartley, David; Lidén, Göran
2008-08-01
The reporting of measurement uncertainty has recently undergone a major harmonization whereby characteristics of a measurement method obtained during its establishment and application are combined componentwise. For example, the sometimes-pesky systematic error is included. A bias component of uncertainty can often be easily established as the uncertainty in the bias. However, beyond simply arriving at a value for uncertainty, meaning can sometimes be given to this uncertainty in terms of prediction confidence in uncertainty-based intervals covering what is to be measured. To this end, a link between the concepts of accuracy and uncertainty is established through a simple yet accurate approximation to a random variable known as the non-central Student's t-distribution. "Without a measureless and perpetual uncertainty, the drama of human life would be destroyed." (Winston Churchill)
Angevine, W. M.; Brioude, J. F.; McKeen, S. A.
2014-12-01
Lagrangian particle dispersion models, used to estimate emissions from observations, require meteorological fields as input. Uncertainty in the driving meteorology is one of the major uncertainties in the results. The propagation of uncertainty through the system is not simple, and has not been thoroughly explored. Here, we take an ensemble approach. Six different configurations of the Weather Research and Forecast (WRF) model drive otherwise identical simulations with FLEXPART for 49 days over eastern North America. The ensemble spreads of wind speed, mixing height, and tracer concentration are presented. Uncertainty of tracer concentrations due solely to meteorological uncertainty is 30-40%. Spatial and temporal averaging reduces the uncertainty marginally. Tracer age uncertainty due solely to meteorological uncertainty is 15-20%. These are lower bounds on the uncertainty, because a number of processes are not accounted for in the analysis. It is not yet known exactly how these uncertainties will propagate through inversions to affect emissions estimates.
Estimation of sedimentary proxy records together with associated uncertainty
Directory of Open Access Journals (Sweden)
B. Goswami
2014-06-01
Sedimentary proxy records constitute a significant portion of the recorded evidence that allows us to investigate paleoclimatic conditions and variability. However, uncertainties in the dating of proxy archives limit our ability to fix the timing of past events and interpret proxy record inter-comparisons. While there are various age-modeling approaches to improve the estimation of the age-depth relations of archives, relatively little attention has been given to the propagation of the age (and radiocarbon calibration) uncertainties into the final proxy record. We present a generic Bayesian framework to estimate proxy records along with their associated uncertainty starting with the radiometric age-depth and proxy-depth measurements, and a radiometric calibration curve if required. We provide analytical expressions for the posterior proxy probability distributions at any given calendar age, from which the expected proxy values and their uncertainty can be estimated. We illustrate our method using two synthetic datasets and then use it to construct the proxy records for groundwater inflow and surface erosion from Lonar lake in central India. Our analysis reveals interrelations between the uncertainty of the proxy record over time and the variance of the proxy along the depth of the archive. For the Lonar lake proxies, we show that, rather than the age uncertainties, it is the proxy variance combined with calibration uncertainty that accounts for most of the final uncertainty. We represent the proxy records as probability distributions on a precise, error-free time scale, which makes further time series analyses and inter-comparisons of proxies simpler and clearer. Our approach provides a coherent understanding of age uncertainties within sedimentary proxy records that involve radiometric dating. It can potentially be used within existing age modeling structures to bring forth a reliable and consistent framework for proxy record estimation.
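The propagation of age-model uncertainty into a proxy value at a fixed calendar age can be sketched by sampling; the linear age-depth model and sinusoidal proxy below are synthetic, not the Lonar lake data, and this Monte Carlo stand-in replaces the paper's analytical posterior expressions:

```python
import numpy as np

# Sample many age-depth realisations, read the proxy at a target calendar age
# from each, and summarise the resulting distribution of proxy values.
rng = np.random.default_rng(6)
depth = np.linspace(0, 10, 50)            # depth in core (m)
proxy = np.sin(depth)                     # proxy-depth measurements (synthetic)

n = 2000
slope = rng.normal(100.0, 5.0, n)         # uncertain accumulation rate (yr/m)
age = slope[:, None] * depth[None, :]     # n age-depth realisations

target = 500.0                            # calendar age of interest (yr)
# For each realisation, linearly interpolate the proxy value at the target age.
vals = np.array([np.interp(target, a, proxy) for a in age])
print(f"proxy at {target:.0f} yr: {vals.mean():.2f} +/- {vals.std():.2f}")
```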
Lindley, Dennis V
2013-01-01
Praise for the First Edition ""...a reference for everyone who is interested in knowing and handling uncertainty.""-Journal of Applied Statistics The critically acclaimed First Edition of Understanding Uncertainty provided a study of uncertainty addressed to scholars in all fields, showing that uncertainty could be measured by probability, and that probability obeyed three basic rules that enabled uncertainty to be handled sensibly in everyday life. These ideas were extended to embrace the scientific method and to show how decisions, containing an uncertain element, could be rationally made.
Directory of Open Access Journals (Sweden)
N. Eckert
2008-10-01
For snow avalanches, passive defense structures are generally designed by considering high-return-period events. In this paper, taking inspiration from other natural hazards, an alternative method based on the maximization of the economic benefit of the defense structure is proposed. A general Bayesian framework is described first. Special attention is given to the problem of taking poor local information into account in the decision-making process. Therefore, simplifying assumptions are made. The avalanche hazard is represented by a Peak Over Threshold (POT) model. The influence of the dam is quantified in terms of runout distance reduction with a simple relation derived from small-scale experiments using granular media. The costs corresponding to dam construction and the damage to the element at risk are roughly evaluated for each dam height-hazard value pair, with damage evaluation corresponding to the maximal expected loss. Both the classical and the Bayesian risk functions can then be computed analytically. The results are illustrated with a case study from the French avalanche database. A sensitivity analysis is performed and modelling assumptions are discussed in addition to possible further developments.
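A toy version of the benefit-maximization logic might look like the following; the cost figures, exceedance rate, time horizon and the exponential stand-in for the runout-reduction relation are all invented for illustration:

```python
import numpy as np

# Choose the dam height minimising construction cost plus expected damage
# over the planning horizon. All numbers below are illustrative assumptions.
heights = np.linspace(0, 10, 101)        # candidate dam heights (m)
build_cost = 50.0 * heights              # linear construction cost
annual_rate = 1.0                        # POT exceedance rate (events/yr)
damage, horizon = 100.0, 50              # loss per damaging event, years

# Assumed effect: a dam of height h cuts the damaging-event probability
# exponentially (a stand-in for the experimental runout-reduction relation).
p_damage = np.exp(-0.5 * heights)
total = build_cost + horizon * annual_rate * damage * p_damage

h_opt = heights[np.argmin(total)]
print(f"optimal height ~ {h_opt:.1f} m")   # minimum of 50h + 5000*exp(-h/2)
```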
Validity of Parametrized Quark Propagator
Institute of Scientific and Technical Information of China (English)
ZHU Ji-Zhen; ZHOU Li-Juan; MA Wei-Xing
2005-01-01
Based on an extensive study of the Dyson-Schwinger equations for a fully dressed quark propagator in the "rainbow" approximation, a parametrized fully dressed quark propagator is proposed in this paper. The parametrized propagator describes a confining quark propagator in hadrons since it is analytic everywhere in the complex p2-plane and has no Lehmann representation. The validity of the new propagator is discussed by comparing its predictions for the self-energy functions Af(p2), Bf(p2) and the effective mass Mf(p2) of a quark with flavor f to the corresponding theoretical results produced by the Dyson-Schwinger equations. Our comparison shows that the parametrized quark propagator is a good approximation to the fully dressed quark propagator given by the solutions of the Dyson-Schwinger equations in the rainbow approximation and is convenient to use in theoretical calculations.
Statistically Qualified Neuro-Analytic System and Method for Process Monitoring
Energy Technology Data Exchange (ETDEWEB)
Vilim, Richard B.; Garcia, Humberto E.; Chen, Frederick W.
1998-11-04
An apparatus and method for monitoring a process involves development and application of a statistically qualified neuro-analytic (SQNA) model to accurately and reliably identify process change. The development of the SQNA model is accomplished in two steps: deterministic model adaptation and stochastic model adaptation. Deterministic model adaptation involves formulating an analytic model of the process representing known process characteristics, augmenting the analytic model with a neural network that captures unknown process characteristics, and training the resulting neuro-analytic model by adjusting the neural network weights according to a unique scaled equation error minimization technique. Stochastic model adaptation involves qualifying any remaining uncertainty in the trained neuro-analytic model by formulating a likelihood function, given an error propagation equation, for computing the probability that the neuro-analytic model generates measured process output. Preferably, the developed SQNA model is validated using known sequential probability ratio tests and applied to the process as an on-line monitoring system.
Analyzing Bullwhip Effect in Supply Networks under Exogenous Uncertainty
Directory of Open Access Journals (Sweden)
Mitra Darvish
2014-05-01
This paper presents a model for analyzing and measuring the propagation of order amplifications (i.e. the bullwhip effect) for a single-product supply network topology, considering exogenous uncertainty and linear, time-invariant inventory management policies for network entities. The stream of orders placed by each entity of the network is characterized assuming customer demand is ergodic. In fact, we propose an exact formula for measuring the bullwhip effect in the addressed supply network topology, treating the system in a Markov chain framework and presenting a matrix of network member relationships and the relevant order sequences. The formula is derived using frequency-domain analysis. The major contribution of this paper is analyzing the bullwhip effect under exogenous uncertainty in supply networks and using the Fourier transform to simplify the relevant calculations. We present a number of numerical examples to assess the accuracy of the analytical results in quantifying the bullwhip effect.
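A minimal single-stage illustration of measuring the bullwhip effect as a variance ratio (the paper's network formula is more general); the order-up-to policy with a moving-average forecast, and the values of L and p, are textbook assumptions, not taken from the paper:

```python
import numpy as np

# Bullwhip ratio Var(orders)/Var(demand) for one stage. For an order-up-to
# policy with an MA(p) demand forecast and i.i.d. demand, the orders reduce to
#   q_t = (1 + L/p) * d_{t-1} - (L/p) * d_{t-1-p},
# giving the classic ratio 1 + 2L/p + 2(L/p)^2.
rng = np.random.default_rng(1)
L, p = 2, 4                          # replenishment lead time, forecast window
d = rng.normal(50.0, 5.0, 200_000)   # i.i.d. customer demand

q = (1 + L / p) * d[p:] - (L / p) * d[:-p]   # simulated order stream

ratio = q.var() / d.var()
analytic = 1 + 2 * L / p + 2 * (L / p) ** 2  # = 2.5 for L=2, p=4
print(f"simulated {ratio:.2f} vs analytic {analytic:.2f}")
```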
Impact of discharge data uncertainty on nutrient load uncertainty
Westerberg, Ida; Gustavsson, Hanna; Sonesten, Lars
2016-04-01
Uncertainty in the rating-curve model of the stage-discharge relationship leads to uncertainty in discharge time series. These uncertainties in turn affect many other analyses based on discharge data, such as nutrient load estimations. It is important to understand how large the impact of discharge data uncertainty is on such analyses, since they are often used as the basis for important environmental management decisions. In the Baltic Sea basin, nutrient load estimates from river mouths are a central information basis for managing and reducing eutrophication in the Baltic Sea. In this study we investigated rating curve uncertainty and its propagation to discharge data uncertainty and thereafter to uncertainty in the load of phosphorus and nitrogen for twelve Swedish river mouths. We estimated rating curve uncertainty using the Voting Point method, which accounts for random and epistemic errors in the stage-discharge relation and allows drawing multiple rating-curve realisations consistent with the total uncertainty. We sampled 40,000 rating curves, and for each sampled curve we calculated a discharge time series from 15-minute water level data for the period 2005-2014. Each discharge time series was then aggregated to daily scale and used to calculate the load of phosphorus and nitrogen from linearly interpolated monthly water samples, following the currently used methodology for load estimation. Finally, the yearly load estimates were calculated and we thus obtained distributions with 40,000 load realisations per year - one for each rating curve. We analysed how the rating curve uncertainty propagated to the discharge time series at different temporal resolutions, and its impact on the yearly load estimates. Two shorter periods of daily water quality sampling around the spring flood peak allowed a comparison of load uncertainty magnitudes resulting from discharge data with those resulting from the monthly water quality sampling.
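The sampling chain (rating-curve realisations, then discharge series, then a load distribution) can be sketched as below; the power-law rating curve, its parameter uncertainties and the constant concentration are illustrative assumptions, not the study's Voting Point method or data:

```python
import numpy as np

# Propagate rating-curve uncertainty to a yearly nutrient load distribution.
# Assumed curve Q = a * h^b with independently perturbed parameters; a real
# analysis would sample curves consistent with gaugings, as in the study.
rng = np.random.default_rng(2)
n_curves = 5000
a = rng.normal(10.0, 1.0, n_curves)      # sampled rating-curve parameters
b = rng.normal(1.8, 0.1, n_curves)
h = rng.uniform(0.5, 2.0, 365)           # synthetic daily stage record (m)

Q = a[:, None] * h[None, :] ** b[:, None]   # discharge, shape (n_curves, 365)

conc = 0.05                              # constant concentration (g/m3), assumed
load = (Q * conc * 86400).sum(axis=1)    # yearly load (g) per curve realisation

lo, hi = np.percentile(load, [2.5, 97.5])
print(f"95% load interval: [{lo:.3g}, {hi:.3g}]")
```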
Stereo-particle image velocimetry uncertainty quantification
Bhattacharya, Sayantan; Charonko, John J.; Vlachos, Pavlos P.
2017-01-01
Particle image velocimetry (PIV) measurements are subject to multiple elemental error sources and thus estimating overall measurement uncertainty is challenging. Recent advances have led to a posteriori uncertainty estimation methods for planar two-component PIV. However, no complete methodology exists for uncertainty quantification in stereo PIV. In the current work, a comprehensive framework is presented to quantify the uncertainty stemming from stereo registration error and combine it with the underlying planar velocity uncertainties. The disparity in particle locations of the dewarped images is used to estimate the positional uncertainty of the world coordinate system, which is then propagated to the uncertainty in the calibration mapping function coefficients. Next, the calibration uncertainty is combined with the planar uncertainty fields of the individual cameras through an uncertainty propagation equation and uncertainty estimates are obtained for all three velocity components. The methodology was tested with synthetic stereo PIV data for different light sheet thicknesses, with and without registration error, and also validated with an experimental vortex ring case from the 2014 PIV challenge. A thorough sensitivity analysis was performed to assess the relative impact of the various parameters on the overall uncertainty. The results suggest that in the absence of any disparity, the stereo PIV uncertainty prediction method is more sensitive to the planar uncertainty estimates than to the angle uncertainty, although the latter is not negligible for non-zero disparity. Overall the presented uncertainty quantification framework showed excellent agreement between the error and uncertainty RMS values for both the synthetic and the experimental data and demonstrated reliable uncertainty prediction coverage. This stereo PIV uncertainty quantification framework provides the first comprehensive treatment on the subject and potentially lays foundations applicable to volumetric
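First-order combination of the planar uncertainties through an idealised symmetric stereo geometry might look like the following; the reconstruction formula and all numbers are simplifying assumptions, not the paper's full framework (which also carries calibration and registration terms):

```python
import math

# Idealised symmetric stereo reconstruction of the out-of-plane component:
#   w = (u1 - u2) / (2 * tan(a))
# w is linear in the two planar displacements, so first-order propagation
# combines their uncertainties in quadrature and rescales by the geometry.
a = math.radians(30)              # camera half-angle, assumed
sigma_u1, sigma_u2 = 0.1, 0.1     # planar displacement uncertainties (px)

sigma_w = math.sqrt(sigma_u1**2 + sigma_u2**2) / (2 * math.tan(a))
print(f"sigma_w = {sigma_w:.3f} px")
```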
Gear Crack Propagation Investigation
1995-01-01
Reduced weight is a major design goal in aircraft power transmissions. Some gear designs incorporate thin rims to help meet this goal. Thin rims, however, may lead to bending fatigue cracks. These cracks may propagate through a gear tooth or into the gear rim. A crack that propagates through a tooth would probably not be catastrophic, and ample warning of a failure could be possible. On the other hand, a crack that propagates through the rim would be catastrophic. Such cracks could lead to disengagement of a rotor or propeller from an engine, loss of an aircraft, and fatalities. To help create and validate tools for the gear designer, the NASA Lewis Research Center performed in-house analytical and experimental studies to investigate the effect of rim thickness on gear-tooth crack propagation. Our goal was to determine whether cracks grew through gear teeth (benign failure mode) or through gear rims (catastrophic failure mode) for various rim thicknesses. In addition, we investigated the effect of rim thickness on crack propagation life. A finite-element-based computer program simulated gear-tooth crack propagation. The analysis used principles of linear elastic fracture mechanics, and quarter-point, triangular elements were used at the crack tip to represent the stress singularity. The program had an automated crack propagation option in which cracks were grown numerically via an automated remeshing scheme. Crack-tip stress-intensity factors were estimated to determine crack-propagation direction. Also, various fatigue crack growth models were used to estimate crack-propagation life. Experiments were performed in Lewis' Spur Gear Fatigue Rig to validate predicted crack propagation results. Gears with various backup ratios were tested to validate crack-path predictions. Also, test gears were installed with special crack-propagation gages in the tooth fillet region to measure bending-fatigue crack growth. From both predictions and tests, gears with backup ratios
Characterizing Epistemic Uncertainty for Launch Vehicle Designs
Novack, Steven D.; Rogers, Jim; Al Hassan, Mohammad; Hark, Frank
2016-01-01
NASA Probabilistic Risk Assessment (PRA) has the task of estimating the aleatory (randomness) and epistemic (lack of knowledge) uncertainty of launch vehicle loss of mission and crew risk, and communicating the results. Launch vehicles are complex engineered systems designed with sophisticated subsystems that are built to work together to accomplish mission success. Some of these systems or subsystems are in the form of heritage equipment, while some have never been previously launched. For these cases, characterizing the epistemic uncertainty is of foremost importance, and it is anticipated that the epistemic uncertainty of a modified launch vehicle design versus a design of well understood heritage equipment would be greater. For reasons that will be discussed, standard uncertainty propagation methods using Monte Carlo simulation produce counterintuitive results, and significantly underestimate epistemic uncertainty for launch vehicle models. Furthermore, standard PRA methods, such as Uncertainty-Importance analyses used to identify components that are significant contributors to uncertainty, are rendered obsolete, since sensitivity to uncertainty changes are not reflected in propagation of uncertainty using Monte Carlo methods. This paper provides a basis for the uncertainty underestimation for complex systems and especially, due to nuances of launch vehicle logic, for launch vehicles. It then suggests several alternative methods for estimating uncertainty and provides examples of estimation results. Lastly, the paper describes how to implement an Uncertainty-Importance analysis using one alternative approach, describes the results, and suggests ways to reduce epistemic uncertainty by focusing on additional data or testing of selected components.
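One way to keep epistemic and aleatory uncertainty from being mixed in a single Monte Carlo pass is double-loop (second-order) sampling, sketched below; the lognormal failure-probability model and all numbers are invented, not the paper's alternative methods:

```python
import numpy as np

# Outer loop: epistemic realisations of an uncertain per-demand failure
# probability. Inner loop: aleatory propagation for each fixed realisation.
# The result is a distribution over mission-loss probabilities rather than a
# single blended estimate.
rng = np.random.default_rng(5)
n_epistemic, n_aleatory = 200, 5000

p_fail = rng.lognormal(mean=np.log(1e-3), sigma=0.8, size=n_epistemic)

# 10 independent demands per mission for each epistemic realisation.
fails = rng.binomial(10, p_fail[:, None], size=(n_epistemic, n_aleatory))
p_lom = (fails > 0).mean(axis=1)     # loss-of-mission prob. per realisation

print(f"median {np.median(p_lom):.4f}, 95th pct {np.percentile(p_lom, 95):.4f}")
```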
Extended Forward Sensitivity Analysis for Uncertainty Quantification
Energy Technology Data Exchange (ETDEWEB)
Haihua Zhao; Vincent A. Mousseau
2011-09-01
Verification and validation (V&V) are playing more important roles in quantifying uncertainties and realizing high-fidelity simulations in engineering system analyses, such as transients occurring in a complex nuclear reactor system. Traditional V&V in reactor system analysis focused more on the validation part or did not differentiate verification and validation. The traditional approach to uncertainty quantification is based on a 'black box' approach. The simulation tool is treated as an unknown signal generator, a distribution of inputs according to assumed probability density functions is sent in, and the distribution of the outputs is measured and correlated back to the original input distribution. The 'black box' method mixes numerical errors with all other uncertainties. It is also inefficient for sensitivity analysis. Contrary to the 'black box' method, a more efficient sensitivity approach can take advantage of intimate knowledge of the simulation code. In these types of approaches, equations for the propagation of uncertainty are constructed and the sensitivities are directly solved for as variables in the simulation. This paper presents forward sensitivity analysis as a method to help uncertainty quantification. By including the time step and potentially the spatial step as special sensitivity parameters, the forward sensitivity method is extended as a method to quantify numerical errors. Note that by integrating local truncation errors over the whole system through the forward sensitivity analysis process, the generated time step and spatial step sensitivity information reflects global numerical errors. The discretization errors can be systematically compared against uncertainties due to other physical parameters. This extension makes the forward sensitivity method a much more powerful tool to help uncertainty quantification. By knowing the relative sensitivity of time and space steps with other interested physical
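The core idea of solving sensitivity equations alongside the model can be shown on a scalar ODE; the equation, parameter value, step size and sigma_k are illustrative assumptions:

```python
import math

# Forward sensitivity sketch for dy/dt = -k*y, y(0) = 1. The sensitivity
# s = dy/dk obeys ds/dt = -k*s - y and is integrated alongside the state, so
# a parameter uncertainty sigma_k maps to a first-order output uncertainty
# |s| * sigma_k without any black-box sampling.
k, dt, T = 0.5, 1e-4, 2.0
y, s = 1.0, 0.0
for _ in range(int(T / dt)):            # explicit Euler for both equations
    y, s = y + dt * (-k * y), s + dt * (-k * s - y)

exact_y = math.exp(-k * T)              # analytic state solution
exact_s = -T * math.exp(-k * T)         # analytic dy/dk
sigma_k = 0.05                          # assumed parameter standard deviation
print(f"y={y:.4f} (exact {exact_y:.4f}), dy/dk={s:.4f} (exact {exact_s:.4f})")
print(f"first-order output uncertainty ~ {abs(s) * sigma_k:.4f}")
```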
Optimizing production under uncertainty
DEFF Research Database (Denmark)
Rasmussen, Svend
This Working Paper derives criteria for optimal production under uncertainty based on the state-contingent approach (Chambers and Quiggin, 2000), and discusses potential problems involved in applying the state-contingent approach in a normative context. The analytical approach uses the concept o...... the relative benefits and of using the state-contingent approach in a normative context, compared to the EV model....
A monomial chaos approach for efficient uncertainty quantification on nonlinear problems
Witteveen, J.A.S.; Bijl, H.
2008-01-01
A monomial chaos approach is presented for efficient uncertainty quantification in nonlinear computational problems. Propagating uncertainty through nonlinear equations can be computationally intensive for existing uncertainty quantification methods. It usually results in a set of nonlinear equation
A Monomial Chaos Approach for Efficient Uncertainty Quantification in Computational Fluid Dynamics
Witteveen, J.A.S.; Bijl, H.
2006-01-01
A monomial chaos approach is proposed for efficient uncertainty quantification in nonlinear computational problems. Propagating uncertainty through nonlinear equations can still be computationally intensive for existing uncertainty quantification methods. It usually results in a set of nonlinear equ
Liu, Baoding
2015-01-01
When no samples are available to estimate a probability distribution, we have to invite some domain experts to evaluate the belief degree that each event will happen. Perhaps some people think that the belief degree should be modeled by subjective probability or fuzzy set theory. However, it is usually inappropriate because both of them may lead to counterintuitive results in this case. In order to rationally deal with belief degrees, uncertainty theory was founded in 2007 and subsequently studied by many researchers. Nowadays, uncertainty theory has become a branch of axiomatic mathematics for modeling belief degrees. This is an introductory textbook on uncertainty theory, uncertain programming, uncertain statistics, uncertain risk analysis, uncertain reliability analysis, uncertain set, uncertain logic, uncertain inference, uncertain process, uncertain calculus, and uncertain differential equation. This textbook also shows applications of uncertainty theory to scheduling, logistics, networks, data mining, c...
Institute of Scientific and Technical Information of China (English)
孙可明; 张树翠
2016-01-01
Shale gas is deposited in shale reservoirs, whose bedding structures differ from those of conventional heterogeneous reservoirs. This also gives shale reservoirs different crack propagation laws in hydraulic fracturing. To investigate the crack propagation laws of shale reservoir hydraulic fracturing, complex variable functions and conformal transformation are adopted here to deduce the solutions for crack-tip stress concentration. Fracture propagation criteria are put forward for the case where a hydraulic fracture perpendicular to the minimum in-situ stress meets oblique bedding, by considering the heterogeneity and strength anisotropy of shale reservoirs and then comparing the fluid pressures that satisfy fracture propagation in all directions. To show how difficult it is for the hydraulic fracture to turn into the bedding, critical strength ratios of bedding and rock mass are defined for the cases where the fracture just initiates in the bedding and where it propagates along the bedding. Based on these two critical strength ratios, the range of bedding strength for which the fracture initiates in and propagates along the bedding can be obtained. The analytical results are as follows: the critical strength ratio of bedding crack initiation increases with decreasing bedding strike angle and dip angle and with increasing minimum principal stress and rock strength. The critical strength ratio of bedding crack initiation increases when the maximum principal stress decreases and the intermediate principal stress increases if the strike angle is less than 35.26°; otherwise, it decreases when the maximum principal stress decreases and the intermediate principal stress increases if the strike angle is greater than 35.26°. The critical strength ratio of bedding crack propagation increases when the bedding strike angle, dip angle and in-situ stress difference decrease and the rock strength increases. Only when the critical conditions of bedding crack initiation and
Solomatine, Dimitri
2016-04-01
When speaking about model uncertainty, many authors implicitly assume data uncertainty (mainly in parameters or inputs), which is probabilistically described by distributions. Often, however, the residual uncertainty must be considered as well. It is hence reasonable to classify the main approaches to uncertainty analysis with respect to the two main types of model uncertainty that can be distinguished: A. The residual uncertainty of models. In this case the model parameters and/or model inputs are considered to be fixed (deterministic), i.e. the model is considered to be optimal (calibrated) and deterministic. Model error is considered as the manifestation of uncertainty. If there is enough past data about the model errors (i.e. its uncertainty), it is possible to build a statistical or machine learning model of uncertainty trained on this data. The following methods can be mentioned: (a) the quantile regression (QR) method by Koenker and Basset, in which linear regression is used to build predictive models for distribution quantiles [1]; (b) a more recent approach that takes into account the input variables influencing such uncertainty and uses more advanced (non-linear) machine learning methods (neural networks, model trees etc.) - the UNEEC method [2,3,7]; (c) an even more recent method, DUBRAUE (Dynamic Uncertainty Model By Regression on Absolute Error), an autoregressive model of model residuals (it corrects the model residual first and then carries out the uncertainty prediction by an autoregressive statistical model) [5]. B. The data uncertainty (parametric and/or input) - in this case we study the propagation of uncertainty (typically represented probabilistically) from parameters or inputs to the model outputs. In the case of simple functions representing models, analytical approaches can be used, or approximation methods (e.g., the first-order second moment method). However, for real complex non-linear models implemented in software there is no other choice except using
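The quantile regression idea in (a) rests on the pinball loss; a sketch with a constant (intercept-only) model on synthetic residuals shows that minimising it recovers the empirical quantile, which is how past model errors become predictive uncertainty bounds:

```python
import numpy as np

# Pinball (quantile) loss: the constant q minimising it over a sample of
# residuals is the empirical tau-quantile of those residuals.
def pinball(residuals, q, tau):
    e = residuals - q
    return np.mean(np.maximum(tau * e, (tau - 1) * e))

rng = np.random.default_rng(4)
errors = rng.normal(0.0, 1.0, 50_000)    # synthetic past model residuals

tau = 0.9                                # upper 90% predictive bound
grid = np.linspace(-3, 3, 601)
best = grid[np.argmin([pinball(errors, q, tau) for q in grid])]
print(f"pinball-optimal bound {best:.2f} vs empirical quantile "
      f"{np.quantile(errors, tau):.2f}")   # both near 1.28 for N(0, 1)
```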
Uncertainty Quantification for Polynomial Systems via Bernstein Expansions
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2012-01-01
This paper presents a unifying framework for uncertainty quantification for systems having polynomial response metrics that depend on both aleatory and epistemic uncertainties. The approach proposed, which is based on the Bernstein expansions of polynomials, enables bounding the range of moments and failure probabilities of response metrics as well as finding supersets of the extreme epistemic realizations where the limits of such ranges occur. These bounds and supersets, whose analytical structure renders them free of approximation error, can be made arbitrarily tight with additional computational effort. Furthermore, this framework enables determining the importance of particular uncertain parameters according to the extent to which they affect the first two moments of response metrics and failure probabilities. This analysis enables determining the parameters that should be considered uncertain as well as those that can be assumed to be constants without incurring significant error. The analytical nature of the approach eliminates the numerical error that characterizes the sampling-based techniques commonly used to propagate aleatory uncertainties as well as the possibility of underpredicting the range of the statistic of interest that may result from searching for the best- and worst-case epistemic values via nonlinear optimization or sampling.
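The enclosure property underlying the Bernstein approach can be sketched as follows; the example polynomial and the restriction to the unit interval are illustrative, not the paper's response metrics:

```python
from math import comb

# On [0, 1] a polynomial's range is contained in [min, max] of its Bernstein
# coefficients, giving bounds with no sampling error. Conversion from the
# power basis uses c_k = sum_j C(k, j) / C(n, j) * a_j.
def bernstein_coeffs(a):
    """Power-basis coefficients a[j] -> Bernstein coefficients on [0, 1]."""
    n = len(a) - 1
    return [sum(comb(k, j) / comb(n, j) * a[j] for j in range(k + 1))
            for k in range(n + 1)]

c = bernstein_coeffs([0.0, -1.0, 1.0])   # p(x) = -x + x^2, true range [-0.25, 0]
lo, hi = min(c), max(c)                  # guaranteed enclosure of the range
print(c, (lo, hi))                       # [0.0, -0.5, 0.0] (-0.5, 0.0)
```

Degree elevation tightens the enclosure toward the true range, which is the "arbitrarily tight with additional computational effort" property the abstract mentions.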
Climate model uncertainty vs. conceptual geological uncertainty in hydrological modeling
Directory of Open Access Journals (Sweden)
T. O. Sonnenborg
2015-04-01
Projections of climate change impact are associated with a cascade of uncertainties including CO2 emission scenario, climate model, downscaling and impact model. The relative importance of the individual uncertainty sources is expected to depend on several factors including the quantity that is projected. In the present study the impacts of climate model uncertainty and geological model uncertainty on hydraulic head, stream flow, travel time and capture zones are evaluated. Six versions of a physically based and distributed hydrological model, each containing a unique interpretation of the geological structure of the model area, are forced by 11 climate model projections. Each projection of future climate is a result of a GCM-RCM model combination (from the ENSEMBLES project) forced by the same CO2 scenario (A1B). The changes from the reference period (1991–2010) to the future period (2081–2100) in projected hydrological variables are evaluated and the effects of geological model and climate model uncertainties are quantified. The results show that uncertainty propagation is context dependent. While the geological conceptualization is the dominating uncertainty source for projection of travel time and capture zones, the uncertainty in the climate models is more important for groundwater hydraulic heads and stream flow.
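The geology-vs-climate comparison can be illustrated with an ANOVA-style main-effect decomposition on a synthetic 6 x 11 ensemble matrix; the effect sizes are invented, not the study's results:

```python
import numpy as np

# Projected change for 6 geological models x 11 climate projections, built
# with a deliberately strong geological main effect and a weak climate one.
geo = np.arange(6, dtype=float)[:, None]          # geological-model effect
clim = 0.1 * np.arange(11, dtype=float)[None, :]  # weaker climate effect
change = 10.0 + geo + clim                        # change matrix, shape (6, 11)

# Main-effect variances: spread of the row means vs spread of the column means.
var_geo = change.mean(axis=1).var(ddof=1)    # across geological models
var_clim = change.mean(axis=0).var(ddof=1)   # across climate projections
print(f"geology: {var_geo:.2f}, climate: {var_clim:.2f}")  # 3.50 vs 0.11
```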
Whitepaper on Uncertainty Quantification for MPACT
Energy Technology Data Exchange (ETDEWEB)
Williams, Mark L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
2015-12-17
The MPACT code provides the ability to perform high-fidelity deterministic calculations to obtain a wide variety of detailed results for very complex reactor core models. However MPACT currently does not have the capability to propagate the effects of input data uncertainties to provide uncertainties in the calculated results. This white paper discusses a potential method for MPACT uncertainty quantification (UQ) based on stochastic sampling.
Ultra High Energy Nuclei Propagation
Aloisio, Roberto
2008-01-01
We discuss the problem of ultra high energy nuclei propagation in astrophysical backgrounds. We present a new analytical computation scheme based on the hypothesis of continuous energy losses in a kinetic formulation of particle propagation. This scheme enables the computation of the fluxes of ultra high energy nuclei as well as the fluxes of secondaries (nuclei and nucleons) produced by the process of photo-disintegration suffered by nuclei.
Modeling and inverse problems in the presence of uncertainty
Banks, H T; Thompson, W Clayton
2014-01-01
Modeling and Inverse Problems in the Presence of Uncertainty collects recent research-including the authors' own substantial projects-on uncertainty propagation and quantification. It covers two sources of uncertainty: where uncertainty is present primarily due to measurement errors and where uncertainty is present due to the modeling formulation itself. After a useful review of relevant probability and statistical concepts, the book summarizes mathematical and statistical aspects of inverse problem methodology, including ordinary, weighted, and generalized least-squares formulations. It then
Orbital State Uncertainty Realism
Horwood, J.; Poore, A. B.
2012-09-01
Fundamental to the success of the space situational awareness (SSA) mission is the rigorous inclusion of uncertainty in the space surveillance network. The *proper characterization of uncertainty* in the orbital state of a space object is a common requirement to many SSA functions including tracking and data association, resolution of uncorrelated tracks (UCTs), conjunction analysis and probability of collision, sensor resource management, and anomaly detection. While tracking environments, such as air and missile defense, make extensive use of Gaussian and local linearity assumptions within algorithms for uncertainty management, space surveillance is inherently different due to long time gaps between updates, high misdetection rates, nonlinear and non-conservative dynamics, and non-Gaussian phenomena. The latter implies that "covariance realism" is not always sufficient. SSA also requires "uncertainty realism"; the proper characterization of both the state and covariance and all non-zero higher-order cumulants. In other words, a proper characterization of a space object's full state *probability density function (PDF)* is required. In order to provide a more statistically rigorous treatment of uncertainty in the space surveillance tracking environment and to better support the aforementioned SSA functions, a new class of multivariate PDFs are formulated which more accurately characterize the uncertainty of a space object's state or orbit. The new distribution contains a parameter set controlling the higher-order cumulants which gives the level sets a distinctive "banana" or "boomerang" shape and degenerates to a Gaussian in a suitable limit. Using the new class of PDFs within the general Bayesian nonlinear filter, the resulting filter prediction step (i.e., uncertainty propagation) is shown to have the *same computational cost as the traditional unscented Kalman filter* with the former able to maintain a proper characterization of the uncertainty for up to *ten
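The claim that the new filter's prediction step costs the same as the unscented Kalman filter refers to sigma-point propagation, which in one dimension reduces to the following sketch; the nonlinearity and the kappa parameterization are illustrative, not the paper's new PDF family:

```python
import numpy as np

# Classic sigma-point (unscented) propagation of a 1-D Gaussian N(mu, P)
# through a nonlinearity f, using the kappa parameterization: three points
# and weights reproduce the mean of any quadratic exactly.
def unscented_mean(f, mu, P, kappa=2.0):
    n = 1
    s = np.sqrt((n + kappa) * P)
    pts = np.array([mu, mu + s, mu - s])              # sigma points
    w = np.array([kappa / (n + kappa),
                  0.5 / (n + kappa), 0.5 / (n + kappa)])
    return float(w @ f(pts))                          # propagated mean

m = unscented_mean(lambda x: x**2, mu=0.0, P=1.0)
print(m)   # close to 1.0, the exact E[x^2] for x ~ N(0, 1)
```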
DEFF Research Database (Denmark)
Heydorn, Kaj; Anglov, Thomas
2002-01-01
Methods recommended by the International Standardization Organisation and Eurachem are not satisfactory for the correct estimation of calibration uncertainty. A novel approach is introduced and tested on actual calibration data for the determination of Pb by ICP-AES. The improved calibration unce...
Directory of Open Access Journals (Sweden)
W. M. Angevine
2014-07-01
Lagrangian particle dispersion models require meteorological fields as input. Uncertainty in the driving meteorology is one of the major uncertainties in the results. The propagation of uncertainty through the system is not simple, and has not been thoroughly explored. Here, we take an ensemble approach. Six different configurations of the Weather Research and Forecast (WRF) model drive otherwise identical simulations with FLEXPART for 49 days over eastern North America. The ensemble spreads of wind speed, mixing height, and tracer concentration are presented. Uncertainty of tracer concentrations due solely to meteorological uncertainty is 30–40%. Spatial and temporal averaging reduces the uncertainty marginally. Tracer age uncertainty due solely to meteorological uncertainty is 15–20%. These are lower bounds on the uncertainty, because a number of processes are not accounted for in the analysis.
Angevine, W. M.; Brioude, J.; McKeen, S.; Holloway, J. S.
2014-12-01
Lagrangian particle dispersion models require meteorological fields as input. Uncertainty in the driving meteorology is one of the major uncertainties in the results. The propagation of uncertainty through the system is not simple, and it has not been thoroughly explored. Here, we take an ensemble approach. Six different configurations of the Weather Research and Forecast (WRF) model drive otherwise identical simulations with FLEXPART-WRF for 49 days over eastern North America. The ensemble spreads of wind speed, mixing height, and tracer concentration are presented. Uncertainty of tracer concentrations due solely to meteorological uncertainty is 30-40%. Spatial and temporal averaging reduces the uncertainty marginally. Tracer age uncertainty due solely to meteorological uncertainty is 15-20%. These are lower bounds on the uncertainty, because a number of processes are not accounted for in the analysis.
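The ensemble spread quoted above can be computed directly from member results; a minimal sketch with hypothetical tracer concentrations from six model configurations (the values below are invented for illustration, not from the study):

```python
import numpy as np

# Hypothetical tracer concentrations (arbitrary units) simulated by six
# WRF/FLEXPART configurations at the same receptor and time.
members = np.array([40.0, 80.0, 55.0, 30.0, 90.0, 50.0])

mean = members.mean()
spread = members.std(ddof=1)        # ensemble standard deviation
relative_spread = spread / mean     # fractional uncertainty, cf. the 30-40% quoted

print(f"relative spread: {relative_spread:.0%}")
```

With real model output, the same ratio would be evaluated per grid cell and time, then summarized over the domain.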
Directory of Open Access Journals (Sweden)
D. D. Lucas
2004-10-01
A study of the current significant uncertainties in dimethylsulfide (DMS) gas-phase chemistry provides insight into additional research needed to decrease these uncertainties. The DMS oxidation cycle in the remote marine boundary layer is simulated using a diurnally-varying box model with 56 uncertain chemical and physical parameters. Two analytical methods (direct integration and probabilistic collocation) are used to determine the most influential parameters (sensitivity analysis) and sources of uncertainty (uncertainty analysis) affecting the concentrations of DMS, SO_{2}, methanesulfonic acid (MSA), and H_{2}SO_{4}. The key parameters identified by the sensitivity analysis are associated with DMS emissions, mixing into and out of the boundary layer, heterogeneous removal of soluble sulfur-containing compounds, and the DMS+OH addition and abstraction reactions. MSA and H_{2}SO_{4} are also sensitive to the rate constants of numerous other reactions, which limits the effectiveness of mechanism reduction techniques. Propagating the parameter uncertainties through the model leads to concentrations that are uncertain by factors of 1.8 to 3.0. The main sources of uncertainty are from DMS emissions and heterogeneous scavenging. Uncertain chemical rate constants, however, collectively account for up to 50–60% of the net uncertainties in MSA and H_{2}SO_{4}. The concentration uncertainties are also calculated at different temperatures, where they vary mainly due to temperature-dependent chemistry. With changing temperature, the uncertainties of DMS and SO_{2} remain steady, while the uncertainties of MSA and H_{2}SO_{4} vary by factors of 2 to 4.
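Propagating multiplicative ("uncertain by a factor of X") parameter uncertainties of this kind is commonly done by Monte Carlo sampling of lognormal inputs. A hedged sketch with invented medians and uncertainty factors for two of the influential parameter types (emissions and heterogeneous scavenging), fed through a toy production-over-loss model rather than the paper's 56-parameter box model:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical lognormal inputs: a DMS emission flux and a heterogeneous
# scavenging rate, each given by a median and a multiplicative 1-sigma
# uncertainty factor (a geometric standard deviation). Values are invented.
emission = 3.0 * rng.lognormal(0.0, np.log(2.0), n)    # factor-of-2 uncertain
scavenging = 0.5 * rng.lognormal(0.0, np.log(1.5), n)  # factor-of-1.5 uncertain

# Toy steady-state concentration: production over loss.
conc = emission / scavenging

# Geometric standard deviation of the output, i.e. the "uncertain by a
# factor of X" number the abstract reports for real species.
gsd = np.exp(np.std(np.log(conc)))
print(f"output uncertain by a factor of {gsd:.2f}")
```

For independent lognormal inputs, the log-variances add, so the output factor here is exp(sqrt(ln(2)^2 + ln(1.5)^2)) ≈ 2.2, inside the 1.8-3.0 range the study reports for its full model.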
Uncertainty Quantification in Hybrid Dynamical Systems
Sahai, Tuhin
2011-01-01
Uncertainty quantification (UQ) techniques are frequently used to ascertain output variability in systems with parametric uncertainty. Traditional algorithms for UQ are either system-agnostic and slow (such as Monte Carlo) or fast with stringent assumptions on smoothness (such as polynomial chaos and Quasi-Monte Carlo). In this work, we develop a fast UQ approach for hybrid dynamical systems by extending the polynomial chaos methodology to these systems. To capture discontinuities, we use a wavelet-based Wiener-Haar expansion. We develop a boundary layer approach to propagate uncertainty through separable reset conditions. We also introduce a transport theory based approach for propagating uncertainty through hybrid dynamical systems. Here the expansion yields a set of hyperbolic equations that are solved by integrating along characteristics. The solution of the partial differential equation along the characteristics allows one to quantify uncertainty in hybrid or switching dynamical systems. The above method...
Uncertainty quantification in hybrid dynamical systems
Sahai, Tuhin; Pasini, José Miguel
2013-03-01
Uncertainty quantification (UQ) techniques are frequently used to ascertain output variability in systems with parametric uncertainty. Traditional algorithms for UQ are either system-agnostic and slow (such as Monte Carlo) or fast with stringent assumptions on smoothness (such as polynomial chaos and Quasi-Monte Carlo). In this work, we develop a fast UQ approach for hybrid dynamical systems by extending the polynomial chaos methodology to these systems. To capture discontinuities, we use a wavelet-based Wiener-Haar expansion. We develop a boundary layer approach to propagate uncertainty through separable reset conditions. We also introduce a transport theory based approach for propagating uncertainty through hybrid dynamical systems. Here the expansion yields a set of hyperbolic equations that are solved by integrating along characteristics. The solution of the partial differential equation along the characteristics allows one to quantify uncertainty in hybrid or switching dynamical systems. The above methods are demonstrated on example problems.
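For the smooth (non-hybrid) baseline, the polynomial chaos machinery the authors extend can be sketched non-intrusively: project the response onto probabilists' Hermite polynomials by Gauss-Hermite quadrature and read the mean and variance off the coefficients. The response function below is an invented smooth stand-in, not the paper's hybrid system (which needs the Wiener-Haar wavelet treatment to capture discontinuities):

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermegauss, hermeval

def f(x):
    return np.exp(0.3 * x)  # hypothetical smooth response, x ~ N(0, 1)

order = 6
nodes, weights = hermegauss(order + 1)    # probabilists' Gauss-Hermite rule
weights = weights / np.sqrt(2.0 * np.pi)  # normalize to the N(0,1) measure

coeffs = []
for k in range(order + 1):
    basis = np.zeros(k + 1)
    basis[k] = 1.0
    # Spectral projection: c_k = E[f(x) He_k(x)] / k!
    coeffs.append(np.sum(weights * f(nodes) * hermeval(nodes, basis)) / factorial(k))

# Moments follow from orthogonality of the He_k under the Gaussian measure.
mean = coeffs[0]
variance = sum(factorial(k) * coeffs[k] ** 2 for k in range(1, order + 1))
print(f"PCE mean = {mean:.5f}, variance = {variance:.5f}")
```

For this response the exact moments are known (mean exp(0.045), variance exp(0.18) - exp(0.09)), so the truncation quality of the order-6 expansion can be checked directly.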
Optimizing production under uncertainty
DEFF Research Database (Denmark)
Rasmussen, Svend
This Working Paper derives criteria for optimal production under uncertainty based on the state-contingent approach (Chambers and Quiggin, 2000), and discusses potential problems involved in applying the state-contingent approach in a normative context. The analytical approach uses the concept...... of state-contingent production functions and a definition of inputs including both sort of input, activity and allocation technology. It also analyses production decisions where production is combined with trading in state-contingent claims such as insurance contracts. The final part discusses......
Energy Technology Data Exchange (ETDEWEB)
Wendelberger, James G. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-08-10
These are slides from a presentation made by a researcher from Los Alamos National Laboratory. The following topics are covered: sources of error for NDA gamma measurements, precision and accuracy are two important characteristics of measurements, four items processed in a material balance area during the inventory time period, inventory difference and propagation of variance, sum in quadrature, and overview of the ID/POV process.
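The "sum in quadrature" item above reduces to combining independent standard uncertainties; a minimal sketch with invented uncertainties for the four terms of a material-balance inventory difference (beginning inventory + receipts - shipments - ending inventory):

```python
import math

# Hypothetical standard uncertainties (kg) on the four material-balance
# terms; for independent errors the combined standard uncertainty of the
# inventory difference (ID) is the sum in quadrature.
sigmas = {"beginning": 0.8, "receipts": 0.5, "shipments": 0.6, "ending": 0.9}

sigma_id = math.sqrt(sum(s ** 2 for s in sigmas.values()))
print(f"sigma(ID) = {sigma_id:.3f} kg")
```

An observed inventory difference is then judged against this propagated sigma (e.g. flagged if it exceeds a multiple of it), which is the essence of the ID/POV process the slides outline.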
Measurement uncertainty of isotopologue fractions in fluxomics determined via mass spectrometry.
Guerrasio, R; Haberhauer-Troyer, C; Steiger, M; Sauer, M; Mattanovich, D; Koellensperger, G; Hann, S
2013-06-01
Metabolic flux analysis implies mass isotopomer distribution analysis and determination of mass isotopologue fractions (IFs) of proteinogenic amino acids of cell cultures. In this work, for the first time, this type of analysis is comprehensively investigated in terms of measurement uncertainty by calculating and comparing budgets for different mass spectrometric techniques. The calculations addressed amino acids of Pichia pastoris grown on 10% uniformly (13)C labeled glucose. Typically, such experiments revealed an enrichment of (13)C by at least one order of magnitude in all proteinogenic amino acids. Liquid chromatography-time-of-flight mass spectrometry (LC-TOFMS), liquid chromatography-tandem mass spectrometry (LC-MS/MS) and gas chromatography-mass spectrometry (GC-MS) analyses were performed. The samples were diluted to fit the linear dynamic range of the mass spectrometers used (10 μM amino acid concentration). The total combined uncertainties of IFs as well as the major uncertainty contributions affecting the IFs were determined for phenylalanine, which was selected as exemplary model compound. A bottom-up uncertainty propagation was performed according to Quantifying Uncertainty in Analytical Measurement and using the Monte Carlo method by considering all factors leading to an IF, i.e., the process of measurement and the addition of (13)C-glucose. Excellent relative expanded uncertainties (k = 1) of 0.32, 0.75, and 0.96% were obtained for an IF value of 0.7 by LC-MS/MS, GC-MS, and LC-TOFMS, respectively. The major source of uncertainty, with a relative contribution of 20-80% of the total uncertainty, was attributed to the signal intensity (absolute counts) uncertainty calculated according to Poisson counting statistics, regardless of which of the mass spectrometry platforms was used. Uncertainty due to measurement repeatability was of importance in LC-MS/MS, showing a relative contribution up to 47% of the total uncertainty, whereas for GC-MS and LC
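The Poisson counting contribution described above can be reproduced with a small Monte Carlo; the ion counts below are invented, chosen so the isotopologue fraction comes out near the 0.7 value discussed in the abstract:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Hypothetical ion counts for two isotopologues (M0, M1) of one fragment.
# The isotopologue fraction is IF = M1 / (M0 + M1); counting noise is
# modeled as Poisson, as in the abstract's uncertainty treatment.
counts_m0 = rng.poisson(30_000, n)
counts_m1 = rng.poisson(70_000, n)

ifs = counts_m1 / (counts_m0 + counts_m1)
rel_u = ifs.std(ddof=1) / ifs.mean()
print(f"IF = {ifs.mean():.3f}, relative uncertainty = {rel_u:.4%}")
```

With these invented count levels the counting-statistics contribution is a few tenths of a percent, the same order as the 0.32-0.96% total uncertainties the study reports; a full budget would add repeatability, dilution, and labeling terms on top.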
Institute of Scientific and Technical Information of China (English)
贺炜; 惠战强; 吴惠民
2012-01-01
Analytical characterization of unchirped and chirped Gaussian pulses propagating through a semiconductor optical amplifier (SOA) is presented, taking material loss and dispersion into account. The physical process of the interaction between the Gaussian pulse and the semiconductor material is analyzed in detail, and the dependence of the intensity gain, pulse width, and frequency chirp on the linewidth enhancement factor, dispersion coefficient, small-signal-gain feature parameter, and initial chirp is investigated. The results show that for a transform-limited input Gaussian pulse, dispersion causes gain compression, pulse broadening, and frequency chirp; under the same conditions, a larger linewidth enhancement factor produces more pronounced pulse broadening and a larger output chirp, and as the linewidth enhancement factor increases, the maximum of the output chirp shifts toward smaller values of the feature parameter. For a chirped input Gaussian pulse, a larger initial chirp yields stronger gain compression. When the chirp coefficient is positive, the pulse simply broadens and the output chirp decreases gradually with increasing feature parameter; when the chirp coefficient is negative, the initial chirp competes with the chirp induced by group velocity dispersion, so the pulse is first compressed and then broadened. The feature parameter at which the pulse is narrowest first increases and then decreases with increasing linewidth enhancement factor, and the output chirp oscillates and then levels off as the feature parameter increases.
Ferrarese, Giorgio
2011-01-01
Lectures: A. Jeffrey: Lectures on nonlinear wave propagation.- Y. Choquet-Bruhat: Ondes asymptotiques.- G. Boillat: Urti.- Seminars: D. Graffi: Sulla teoria dell'ottica non-lineare.- G. Grioli: Sulla propagazione del calore nei mezzi continui.- T. Manacorda: Onde nei solidi con vincoli interni.- T. Ruggeri: "Entropy principle" and main field for a non linear covariant system.- B. Straughan: Singular surfaces in dipolar materials and possible consequences for continuum mechanics
2014-01-01
Lagrangian particle dispersion models require meteorological fields as input. Uncertainty in the driving meteorology is one of the major uncertainties in the results. The propagation of uncertainty through the system is not simple, and has not been thoroughly explored. Here, we take an ensemble approach. Six different configurations of the Weather Research and Forecast (WRF) model drive otherwise identical simulations with FLEXPART for 49 days over eastern N...
DEFF Research Database (Denmark)
Nguyen, Daniel Xuyen
This paper presents a model of trade that explains why firms wait to export and why many exporters fail. Firms face uncertain demands that are only realized after the firm enters the destination. The model retools the timing of uncertainty resolution found in productivity heterogeneity models...... the high rate of exit seen in the first years of exporting. Finally, when faced with multiple countries in which to export, some firms will choose to sequentially export in order to slowly learn more about their chances for success in untested markets....
2003-01-01
This paper is based on a talk given in Varenna, Lake Como, Italy, September 2003, at the Summer School TOWARDS ELECTRONIC DEMOCRACY: INTERNET-BASED DECISION SUPPORT organized by CNR-IMATI-Milano branch and supported by THE EUROPEAN SCIENCE FOUNDATION, establishing the programme TOWARDS ELECTRONIC DEMOCRACY (TED) with the objective of discussing and evaluating how advances in interactive decision analytic tools might help develop inclusive e-democratic systems which involve t...
Uncertainty Quantification in Aeroelasticity
Beran, Philip; Stanford, Bret; Schrock, Christopher
2017-01-01
Physical interactions between a fluid and structure, potentially manifested as self-sustained or divergent oscillations, can be sensitive to many parameters whose values are uncertain. Of interest here are aircraft aeroelastic interactions, which must be accounted for in aircraft certification and design. Deterministic prediction of these aeroelastic behaviors can be difficult owing to physical and computational complexity. New challenges are introduced when physical parameters and elements of the modeling process are uncertain. By viewing aeroelasticity through a nondeterministic prism, where key quantities are assumed stochastic, one may gain insights into how to reduce system uncertainty, increase system robustness, and maintain aeroelastic safety. This article reviews uncertainty quantification in aeroelasticity using traditional analytical techniques not reliant on computational fluid dynamics; compares and contrasts this work with emerging methods based on computational fluid dynamics, which target richer physics; and reviews the state of the art in aeroelastic optimization under uncertainty. Barriers to continued progress, for example, the so-called curse of dimensionality, are discussed.
Uncertainties in land use data
Directory of Open Access Journals (Sweden)
G. Castilla
2007-11-01
This paper deals with the description and assessment of uncertainties in land use data derived from Remote Sensing observations, in the context of hydrological studies. Land use is a categorical regionalised variable reporting the main socio-economic role each location has, where the role is inferred from the pattern of occupation of land. The properties of this pattern that are relevant to hydrological processes have to be known with some accuracy in order to obtain reliable results; hence, uncertainty in land use data may lead to uncertainty in model predictions. There are two main uncertainties surrounding land use data, positional and categorical. The first one is briefly addressed and the second one is explored in more depth, including the factors that influence it. We (1) argue that the conventional method used to assess categorical uncertainty, the confusion matrix, is insufficient to propagate uncertainty through distributed hydrologic models; (2) report some alternative methods to tackle this and other insufficiencies; (3) stress the role of metadata as a more reliable means to assess the degree of distrust with which these data should be used; and (4) suggest some practical recommendations.
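For reference, the confusion matrix the authors criticize summarizes categorical agreement in a single table; a toy example (invented class counts) showing what it provides and, by omission, what it cannot: no information on where in space the misclassifications sit, which is what distributed hydrologic models would need:

```python
import numpy as np

# Toy land-use confusion matrix: rows are reference classes, columns are
# mapped classes (counts of validation pixels). Values are invented.
cm = np.array([
    [50,  5,  0],   # e.g. urban
    [ 4, 60,  6],   # e.g. cropland
    [ 1,  3, 71],   # e.g. forest
])

# Overall accuracy: fraction of validation pixels on the diagonal.
overall_accuracy = np.trace(cm) / cm.sum()
print(f"overall accuracy: {overall_accuracy:.1%}")
```

Two maps with identical confusion matrices can have very different spatial error patterns, which is precisely why per-class accuracy alone cannot drive uncertainty propagation through a spatially distributed model.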
Energy Technology Data Exchange (ETDEWEB)
Davis, C B
1987-08-01
The uncertainties of calculations of loss-of-feedwater transients at Davis-Besse Unit 1 were determined to address concerns of the US Nuclear Regulatory Commission relative to the effectiveness of feed and bleed cooling. Davis-Besse Unit 1 is a pressurized water reactor of the raised-loop Babcock and Wilcox design. A detailed, quality-assured RELAP5/MOD2 model of Davis-Besse was developed at the Idaho National Engineering Laboratory. The model was used to perform an analysis of the loss-of-feedwater transient that occurred at Davis-Besse on June 9, 1985. A loss-of-feedwater transient followed by feed and bleed cooling was also calculated. The evaluation of uncertainty was based on the comparisons of calculations and data, comparisons of different calculations of the same transient, sensitivity calculations, and the propagation of the estimated uncertainty in initial and boundary conditions to the final calculated results.
Assessment of SFR Wire Wrap Simulation Uncertainties
Energy Technology Data Exchange (ETDEWEB)
Delchini, Marc-Olivier G. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Reactor and Nuclear Systems Division; Popov, Emilian L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Reactor and Nuclear Systems Division; Pointer, William David [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Reactor and Nuclear Systems Division; Swiler, Laura P. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2016-09-30
Predictive modeling and simulation of nuclear reactor performance and fuel are challenging due to the large number of coupled physical phenomena that must be addressed. Models that will be used for design or operational decisions must be analyzed for uncertainty to ascertain impacts to safety or performance. Rigorous, structured uncertainty analyses are performed by characterizing the model’s input uncertainties and then propagating the uncertainties through the model to estimate output uncertainty. This project is part of the ongoing effort to assess modeling uncertainty in Nek5000 simulations of flow configurations relevant to the advanced reactor applications of the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program. Three geometries are under investigation in these preliminary assessments: a 3-D pipe, a 3-D 7-pin bundle, and a single pin from the Thermal-Hydraulic Out-of-Reactor Safety (THORS) facility.
Quantifying Mixed Uncertainties in Cyber Attacker Payoffs
Energy Technology Data Exchange (ETDEWEB)
Chatterjee, Samrat; Halappanavar, Mahantesh; Tipireddy, Ramakrishna; Oster, Matthew R.; Saha, Sudip
2015-04-15
Representation and propagation of uncertainty in cyber attacker payoffs is a key aspect of security games. Past research has primarily focused on representing the defender’s beliefs about attacker payoffs as point utility estimates. More recently, within the physical security domain, attacker payoff uncertainties have been represented as Uniform and Gaussian probability distributions, and intervals. Within cyber-settings, continuous probability distributions may still be appropriate for addressing statistical (aleatory) uncertainties where the defender may assume that the attacker’s payoffs differ over time. However, systematic (epistemic) uncertainties may exist, where the defender may not have sufficient knowledge or there is insufficient information about the attacker’s payoff generation mechanism. Such epistemic uncertainties are more suitably represented as probability boxes with intervals. In this study, we explore the mathematical treatment of such mixed payoff uncertainties.
Uncertainty quantification theory, implementation, and applications
Smith, Ralph C
2014-01-01
The field of uncertainty quantification is evolving rapidly because of increasing emphasis on models that require quantified uncertainties for large-scale applications, novel algorithm development, and new computational architectures that facilitate implementation of these algorithms. Uncertainty Quantification: Theory, Implementation, and Applications provides readers with the basic concepts, theory, and algorithms necessary to quantify input and response uncertainties for simulation models arising in a broad range of disciplines. The book begins with a detailed discussion of applications where uncertainty quantification is critical for both scientific understanding and policy. It then covers concepts from probability and statistics, parameter selection techniques, frequentist and Bayesian model calibration, propagation of uncertainties, quantification of model discrepancy, surrogate model construction, and local and global sensitivity analysis. The author maintains a complementary web page where readers ca...
Pole solutions for flame front propagation
Kupervasser, Oleg
2015-01-01
This book deals with mathematically solving the unsteady flame propagation equations. New original mathematical methods for solving complex non-linear equations and investigating their properties are presented. Pole solutions for flame front propagation are developed. Premixed flames and filtration combustion have remarkable properties: the complex nonlinear integro-differential equations for these problems have exact analytical solutions described by the motion of poles in a complex plane. Instead of complex equations, a finite set of ordinary differential equations is applied. These solutions help to investigate, analytically and numerically, the properties of the flame front propagation equations.
Energy Technology Data Exchange (ETDEWEB)
Machado, Marcio D.; Alvim, Antonio C.M.; Melo, P.F.F. [Universidade Federal, Rio de Janeiro, RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia; Moreira, Francisco J.; Santos, Teresinha I.C. [FURNAS, Rio de Janeiro, RJ (Brazil)
1997-12-01
This work uses a new methodology to evaluate DNBR, named mini-RTDP. The standard thermal design procedure (STDP) currently in use establishes a limit design value which cannot be surpassed. This limit value is determined taking into account the uncertainties of the empirical correlation used in the COBRAIIIC/MIT code, modified to Angra-1 conditions. The correlation used is Westinghouse's W-3, and the minimum DNBR (MDNBR) value cannot be less than 1.3. The new methodology reduces the excessive level of conservatism associated with the parameters used in the DNBR calculation, which in the STDP methodology are set at their most unfavorable values, by using their best estimate values. The final goal is to obtain a new DNBR design limit which will provide a margin gain due to the more realistic parameter values used in the methodology. (author). 11 refs., 2 tabs.
Propagating Instabilities in Solids
Kyriakides, Stelios
1998-03-01
Instability is one of the factors which limit the extent to which solids can be loaded or deformed and plays a pivotal role in the design of many structures. Such instabilities often result in localized deformation which precipitates catastrophic failure. Some materials have the capacity to recover their stiffness following a certain amount of localized deformation. This local recovery in stiffness arrests further local deformation and spreading of the instability to neighboring material becomes preferred. Under displacement controlled loading the propagation of the transition fronts can be achieved in a steady-state manner at a constant stress level known as the propagation stress. The stresses in the transition fronts joining the highly deformed zone to the intact material overcome the instability nucleation stresses and, as a result, the propagation stress is usually much lower than the stress required to nucleate the instability. The classical example of this class of material instabilities is Lüders bands which tend to affect mild steels and other metals. Recent work has demonstrated that propagating instabilities occur in several other materials. Experimental and analytical results from four examples will be used to illustrate this point: First, the evolution of Lüders bands in mild steel strips will be revisited. The second example involves the evolution of stress induced phase transformations (austenite to martensite phases and the reverse) in a shape memory alloy under displacement controlled stretching. The third example is the crushing behavior of cellular materials such as honeycombs and foams made from metals and polymers. The fourth example involves the axial broadening/propagation of kink bands in aligned fiber/matrix composites under compression. The microstructure and, as a result, the micromechanisms governing the onset, localization, local arrest and propagation of instabilities in each of the four materials are vastly different. Despite this
Analytic Approach to Perturbative QCD
Magradze, B
2000-01-01
The two-loop invariant (running) coupling of QCD is written in terms of the Lambert W function. The analyticity structure of the coupling in the complex Q^2-plane is established. The corresponding analytic coupling is reconstructed via a dispersion relation. We also consider some other approximations to the QCD beta-function, when the corresponding couplings are solved in terms of the Lambert function. The Landau gauge gluon propagator has been considered in the renormalization group invariant analytic approach (IAA). It is shown that there is a nonperturbative ambiguity in the determination of the anomalous dimension function of the gluon field. Several analytic solutions for the propagator at the one-loop order are constructed. Properties of the obtained analytical solutions are discussed.
Using Nuclear Theory, Data and Uncertainties in Monte Carlo Transport Applications
Energy Technology Data Exchange (ETDEWEB)
Rising, Michael Evan [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2015-11-03
These are slides for a presentation on using nuclear theory, data and uncertainties in Monte Carlo transport applications. The following topics are covered: nuclear data (experimental data versus theoretical models, data evaluation and uncertainty quantification), fission multiplicity models (fixed source applications, criticality calculations), uncertainties and their impact (integral quantities, sensitivity analysis, uncertainty propagation).
Energy Technology Data Exchange (ETDEWEB)
NONE
1996-06-01
Literature on uncertainty assessment for risk-analytical purposes has been compiled. Databases Inspec, Compendex, Energy Science and Technology, Chemical Abstracts, Chemical Safety Newsbase, HSEline and MathSci were searched. Roughly 80 references have been selected from these databases and divided according to the following uncertainty classes: 1. Statistical uncertainty; 2. Data uncertainty; 3. Presumption uncertainty; 4. Uncertainty of consequence models; 5. Cognitive uncertainty. (EG)
Quantifying uncertainty in LCA-modelling of waste management systems.
Clavreul, Julie; Guyonnet, Dominique; Christensen, Thomas H
2012-12-01
Uncertainty analysis in LCA studies has been subject to major progress over the last years. In the context of waste management, various methods have been implemented but a systematic method for uncertainty analysis of waste-LCA studies is lacking. The objective of this paper is (1) to present the sources of uncertainty specifically inherent to waste-LCA studies, (2) to select and apply several methods for uncertainty analysis and (3) to develop a general framework for quantitative uncertainty assessment of LCA of waste management systems. The suggested method is a sequence of four steps combining the selected methods: (Step 1) a sensitivity analysis evaluating the sensitivities of the results with respect to the input uncertainties, (Step 2) an uncertainty propagation providing appropriate tools for representing uncertainties and calculating the overall uncertainty of the model results, (Step 3) an uncertainty contribution analysis quantifying the contribution of each parameter uncertainty to the final uncertainty and (Step 4) as a new approach, a combined sensitivity analysis providing a visualisation of the shift in the ranking of different options due to variations of selected key parameters. This tiered approach optimises the resources available to LCA practitioners by only propagating the most influential uncertainties.
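Steps 2 and 3 of the suggested sequence can be sketched with a toy impact model; the linear model, parameter medians, and dispersions below are invented for illustration and are not from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50_000

# Step 2 (uncertainty propagation): sample two lognormally distributed
# inventory parameters and push them through a toy impact model
# score = 2*x + 3*y. All numbers are hypothetical.
x = rng.lognormal(np.log(10.0), 0.2, n)
y = rng.lognormal(np.log(5.0), 0.4, n)
score = 2.0 * x + 3.0 * y

# Step 3 (uncertainty contribution analysis): first-order variance
# decomposition, valid here because the inputs are independent.
var_total = score.var(ddof=1)
contrib_x = (2.0 ** 2) * x.var(ddof=1) / var_total
contrib_y = (3.0 ** 2) * y.var(ddof=1) / var_total
print(f"x contributes {contrib_x:.0%}, y contributes {contrib_y:.0%} of the variance")
```

Step 1 (sensitivity) would supply the coefficients for a real, nonlinear LCA model, and Step 4 would repeat the comparison of options while varying the parameters identified as dominant here.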
Experimental uncertainty estimation and statistics for data having interval uncertainty.
Energy Technology Data Exchange (ETDEWEB)
Kreinovich, Vladik (Applied Biomathematics, Setauket, New York); Oberkampf, William Louis (Applied Biomathematics, Setauket, New York); Ginzburg, Lev (Applied Biomathematics, Setauket, New York); Ferson, Scott (Applied Biomathematics, Setauket, New York); Hajagos, Janos (Applied Biomathematics, Setauket, New York)
2007-05-01
This report addresses the characterization of measurements that include epistemic uncertainties in the form of intervals. It reviews the application of basic descriptive statistics to data sets which contain intervals rather than exclusively point estimates. It describes algorithms to compute various means, the median and other percentiles, variance, interquartile range, moments, confidence limits, and other important statistics and summarizes the computability of these statistics as a function of sample size and characteristics of the intervals in the data (degree of overlap, size and regularity of widths, etc.). It also reviews the prospects for analyzing such data sets with the methods of inferential statistics such as outlier detection and regressions. The report explores the tradeoff between measurement precision and sample size in statistical results that are sensitive to both. It also argues that an approach based on interval statistics could be a reasonable alternative to current standard methods for evaluating, expressing and propagating measurement uncertainties.
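The report's basic descriptive statistics for interval data are computable endpoint-wise in the simplest cases; a minimal sketch for the interval-valued sample mean (the measurement intervals are invented):

```python
# Interval data: each measurement is known only to lie within [lo, hi].
# The sample mean is then itself an interval, obtained endpoint-wise.
data = [(1.0, 2.0), (1.5, 3.5), (0.5, 1.0), (2.0, 2.5)]

mean_lo = sum(lo for lo, _ in data) / len(data)
mean_hi = sum(hi for _, hi in data) / len(data)
print(f"mean in [{mean_lo:.3f}, {mean_hi:.3f}]")
```

Other statistics are harder: tight bounds on the variance of interval data are, in general, NP-hard to compute, which is part of what motivates the algorithms and computability summary the report provides.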
Estimating uncertainties in complex joint inverse problems
Afonso, Juan Carlos
2016-04-01
to the forward and statistical models, I will also address other uncertainties associated with data and uncertainty propagation.
Treatment of Uncertainties in Probabilistic Tsunami Hazard
Thio, H. K.
2012-12-01
Over the last few years, we have developed a framework for developing probabilistic tsunami inundation maps, which includes comprehensive quantification of earthquake recurrence as well as uncertainties, and applied it to the development of a tsunami hazard map of California. The various uncertainties in tsunami source and propagation models are an integral part of a comprehensive probabilistic tsunami hazard analysis (PTHA), and often drive the hazard at low probability levels (i.e. long return periods). There is no unique manner in which uncertainties are included in the analysis although in general, we distinguish between "natural" or aleatory variability, such as slip distribution and event magnitude, and uncertainties due to an incomplete understanding of the behavior of the earth, called epistemic uncertainties, such as scaling relations and rupture segmentation. Aleatory uncertainties are typically included through integration over distribution functions based on regression analyses, whereas epistemic uncertainties are included using logic trees. We will discuss how the different uncertainties were included in our recent probabilistic tsunami inundation maps for California, and their relative importance on the final results. Including these uncertainties in offshore exceedance waveheights is straightforward, but the problem becomes more complicated once the non-linearity of near-shore propagation and inundation are encountered. By using the probabilistic off-shore waveheights as input level for the inundation models, the uncertainties up to that point can be included in the final maps. PTHA provides a consistent analysis of tsunami hazard and will become an important tool in diverse areas such as coastal engineering and land use planning. The inclusive nature of the analysis, where few assumptions are made a priori as to which sources are significant, means that a single analysis can provide a comprehensive view of the hazard and its dominant sources
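The logic-tree treatment of epistemic uncertainty described above amounts to a weighted combination of branch results; a minimal sketch with invented branch weights and annual exceedance rates for a single offshore waveheight threshold:

```python
# Epistemic uncertainty via a logic tree: each branch carries a weight and
# yields its own hazard estimate; the weighted mean combines them.
# Hypothetical annual exceedance rates for one waveheight threshold.
branches = [
    (0.5, 1.0e-3),  # preferred scaling relation
    (0.3, 2.5e-3),  # alternative scaling relation
    (0.2, 0.5e-3),  # alternative rupture segmentation
]

assert abs(sum(w for w, _ in branches) - 1.0) < 1e-12  # weights must sum to 1

mean_rate = sum(w * rate for w, rate in branches)
print(f"weighted mean exceedance rate: {mean_rate:.2e} /yr")
```

In a full PTHA the aleatory variability (slip distribution, magnitude) is already integrated out within each branch, so the logic tree only spans the competing scientific models.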
Quantum dynamics via a time propagator in Wigner's phase space
DEFF Research Database (Denmark)
Grønager, Michael; Henriksen, Niels Engholm
1995-01-01
We derive an expression for a short-time phase space propagator. We use it in a new propagation scheme and demonstrate that it works for a Morse potential. The propagation scheme is used to propagate classical distributions which do not obey the Heisenberg uncertainty principle. It is shown that the simple classical deterministic motion breaks down surprisingly fast in an anharmonic potential. Finally, we discuss the possibility of using the scheme as a useful approach to quantum dynamics in many dimensions. To that end we present a Monte Carlo integration scheme using the norm of the propagator...
Methodology for characterizing modeling and discretization uncertainties in computational simulation
Energy Technology Data Exchange (ETDEWEB)
ALVIN,KENNETH F.; OBERKAMPF,WILLIAM L.; RUTHERFORD,BRIAN M.; DIEGERT,KATHLEEN V.
2000-03-01
This research effort focuses on methodology for quantifying the effects of model uncertainty and discretization error on computational modeling and simulation. The work is directed towards developing methodologies which treat model form assumptions within an overall framework for uncertainty quantification, for the purpose of developing estimates of total prediction uncertainty. The present effort consists of work in three areas: framework development for sources of uncertainty and error in the modeling and simulation process which impact model structure; model uncertainty assessment and propagation through Bayesian inference methods; and discretization error estimation within the context of non-deterministic analysis.
Radial propagation of turbulence in tokamaks
Energy Technology Data Exchange (ETDEWEB)
Garbet, X.; Laurent, L.; Samain, A. [Association Euratom-CEA, Centre d`Etudes de Cadarache, 13 - Saint-Paul-lez-Durance (France). Dept. de Recherches sur la Fusion Controlee; Chinardet, J. [CISI Ingenierie, Centre d`Etudes de Cadarache, 13 - Saint-Paul-lez-Durance (France)
1993-12-01
It is shown in this paper that turbulence propagation can be due to toroidal or nonlinear mode coupling. An analytical analysis indicates that the toroidal coupling acts through convection, while the nonlinear effects induce diffusion. Numerical simulations suggest that the toroidal propagation is usually the fastest process, except perhaps in some highly turbulent regimes. The consequence is the possibility of non-local effects on the fluctuation level and the associated transport. (authors). 7 figs., 19 refs.
Special Course on Acoustic Wave Propagation
1979-08-01
[Garbled OCR; recoverable fragments: a French passage on waves propagating at the surface of a liquid; a reference-list entry ("57. STUFF, R., Analytic solution for the sound propagation through the atmospheric wind boundary layer. Proc. Noise Control Conf., Warszawa"); and a remark that nodal surfaces separated by one-half wavelength provide, like energy conservation, a "control" on any computed solution.]
Uncertainty of the variation in length of gauge blocks by mechanical comparison: a worked example
Matus, M.
2012-09-01
This paper is a study on the determination of the measurement uncertainty for a relatively simple and widespread calibration task. The measurement of a special form deviation of gauge blocks using the so-called five-point technique is discussed in detail. It is shown that the mainstream treatment of the measurement uncertainty (i.e. propagation of uncertainties) cannot be applied to this problem for fundamental reasons; the use of Supplement 1 of the GUM (the Monte Carlo method) is mandatory. The proposed model equation is probably the simplest ‘real world’ example where the use of Supplement 1 of the GUM not only gives better results, but gives results at all. The model is simple enough to serve as a didactic example. Explicit analytical expressions for the probability density functions, expectation values, uncertainties and coverage intervals are given, which is helpful for the validation of dedicated software products. Finally, the complete avoidance of the standardized form parameters in calibration certificates is proposed. The statement of ‘corner-deviations’ would be much more useful, especially for the evaluation of key comparisons.
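The contrast the abstract draws between the mainstream GUM law of propagation and the GUM Supplement 1 Monte Carlo method can be sketched on a toy measurement model. Everything below (the model y = a·b, the distributions, the numbers) is invented for illustration; it is not the paper's five-point gauge-block model:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000

# Toy measurement model y = a * b with independent Gaussian inputs
# (illustrative only; the paper's gauge-block model is different).
a = rng.normal(10.0, 0.1, N)   # input a: mean 10.0, u(a) = 0.1
b = rng.normal(2.0, 0.05, N)   # input b: mean 2.0,  u(b) = 0.05
y = a * b

y_mean = y.mean()
u_mc = y.std(ddof=1)                    # Supplement 1: standard uncertainty from samples
lo, hi = np.percentile(y, [2.5, 97.5])  # Supplement 1: 95 % coverage interval

# First-order GUM propagation for comparison: u(y)^2 = (b*u_a)^2 + (a*u_b)^2
u_gum = np.hypot(2.0 * 0.1, 10.0 * 0.05)
```

For a smooth, nearly linear model like this one the two routes agree closely; the paper's point is that there are simple real-world models where only the Monte Carlo route gives a usable result.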
Analytical Study of Electromagnetic Wave in Superlattice
Institute of Scientific and Technical Information of China (English)
LIN Chang; ZHANG Xiu-Lian
2004-01-01
The theoretical description of soliton solutions and exact analytical solutions in the sine-Gordon equation is extended to superlattice physics. A family of interesting exact solutions and a new exact analytical solution have been obtained for the electromagnetic wave propagating through a superlattice. In more general cases, the vector potential along the propagating direction obeys the sine-Gordon equation. Some mathematical results of theoretical investigation are given for different cases in superlattices.
Gilli, L.
2013-01-01
This thesis presents the development and the implementation of an uncertainty propagation algorithm based on the concept of spectral expansion. The first part of the thesis is dedicated to the study of uncertainty propagation methodologies and to the analysis of spectral techniques. The concepts int
Milton, Graeme W
2016-01-01
The theory of inhomogeneous analytic materials is developed. These are materials where the coefficients entering the equations involve analytic functions. Three types of analytic materials are identified. The first two types involve an integer $p$. If $p$ takes its maximum value then we have a complete analytic material. Otherwise it is an incomplete analytic material of rank $p$. For two-dimensional materials further progress can be made in the identification of analytic materials by using the well-known fact that a $90^\circ$ rotation applied to a divergence-free field in a simply connected domain yields a curl-free field, and this can then be expressed as the gradient of a potential. Other exact results for the fields in inhomogeneous media are reviewed. Also reviewed is the subject of metamaterials, as these materials provide a way of realizing desirable coefficients in the equations.
Emission and transmission noise propagation in positron emission computed tomography
Energy Technology Data Exchange (ETDEWEB)
Gullberg, G.T.; Huesman, R.H.
1979-06-01
Errors in positron emission computed tomograms are the result of noise propagated from three sources: (1) the statistical fluctuation in the positron coincidence events; (2) the statistical fluctuation in the incident transmission beam; and (3) the statistical fluctuation in the transmitted beam. The data for the transmission study in (2) and (3) are used to compensate for internal absorption of the distributed positron source. For the reconstruction of a circular phantom using the convolution algorithm, the percent root-mean-square uncertainty (%RMS) is related to the total measured positron events C and the incident photon flux per cm I₀. Our derivation of the %RMS uncertainty based on the propagation of errors yields a simple expression: %RMS = √(K₁/C + K₂/I₀). The constants K₁ = 4.52 × 10⁸ and K₂ = 1.48 × 10⁸ were determined for a 20 cm diameter disc based on computer simulation. The projection data were analytically calculated with an attenuation coefficient μ = 0.0958 cm⁻¹ for 140 angles between 0 and π. Poisson noise was added to the positron coincidence events, the incident transmission events I₀, and the transmitted events. These results indicate that for a total number of incident transmission photons per cm of 2.0 × 10⁵, the contrast resolution for a fixed spatial resolution is limited to 27% even with an infinite number of emission events. For a total of 10⁶ emission events the contrast resolution is 34%.
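The %RMS expression can be checked directly against the two contrast-resolution figures quoted at the end of the abstract; this sketch simply evaluates the published formula with the published constants:

```python
import math

K1 = 4.52e8  # emission-noise constant for the 20 cm disc (from the abstract)
K2 = 1.48e8  # transmission-noise constant

def pct_rms(C, I0):
    """%RMS uncertainty from total emission events C and incident flux I0 per cm."""
    return math.sqrt(K1 / C + K2 / I0)

print(pct_rms(1e6, 2.0e5))           # ~34.5, the quoted "34%" case
print(pct_rms(float("inf"), 2.0e5))  # ~27.2, the infinite-emission limit ("27%")
```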
Bruce, William J; Maxwell, E A; Sneddon, I N
1963-01-01
Analytic Trigonometry details the fundamental concepts and underlying principles of analytic trigonometry. The title aims to address the shortcomings in the instruction of trigonometry by considering basic theories of learning and pedagogy. The text first covers the essential elements from elementary algebra, plane geometry, and analytic geometry. Next, the selection tackles the trigonometric functions of angles in general, basic identities, and solutions of equations. The text also deals with the trigonometric functions of real numbers. The fifth chapter details the inverse trigonometric functions
Directory of Open Access Journals (Sweden)
Francisco Cutanda Henríquez
2011-01-01
computation and experimental uncertainty. This work utilizes mathematical methods to analyse comparisons, so that uncertainty can be taken into account. Therefore, false rejections due to uncertainty do not take place and there is no need to expand tolerances to take uncertainty into account. The methods provided are based on the rules of uncertainty propagation and help obtain rigorous pass/fail criteria, based on experimental information.
Premixed flame propagation in vertical tubes
Kazakov, Kirill A
2015-01-01
Analytical treatment of premixed flame propagation in vertical tubes with smooth walls is given. Using the on-shell flame description, equations describing quasi-steady flame with a small but finite front thickness are obtained and solved numerically. It is found that near the limits of inflammability, solutions describing upward flame propagation come in pairs having close propagation speeds, and that the effect of gravity is to reverse the burnt gas velocity profile generated by the flame. On the basis of these results, a theory of partial flame propagation driven by the gravitational field is developed. A complete explanation is given of the intricate observed behavior of limit flames, including dependence of the inflammability range on the size of the combustion domain, the large distances of partial flame propagation, and the progression of flame extinction. The role of the finite front-thickness effects is discussed in detail. Also, various mechanisms governing flame acceleration in smooth tubes are ide...
Federal Laboratory Consortium — The Analytical Lab specializes in Oil and Hydraulic Fluid Analysis, Identification of Unknown Materials, Engineering Investigations, Qualification Testing (to support...
Uncertainty relations for general unitary operators
Bagchi, Shrobona; Pati, Arun Kumar
2016-10-01
We derive several uncertainty relations for two arbitrary unitary operators acting on physical states of a Hilbert space. We show that our bounds are tighter in various cases than the ones existing in the current literature. Using the uncertainty relation for the unitary operators, we obtain the tight state-independent lower bound for the uncertainty of two Pauli observables and anticommuting observables in higher dimensions. With regard to the minimum-uncertainty states, we derive the minimum-uncertainty state equation by the analytic method and relate this to the ground-state problem of the Harper Hamiltonian. Furthermore, the higher-dimensional limit of the uncertainty relations and minimum-uncertainty states are explored. From an operational point of view, we show that the uncertainty in the unitary operator is directly related to the visibility of quantum interference in an interferometer where one arm of the interferometer is affected by a unitary operator. This shows a principle of preparation uncertainty, i.e., for any quantum system, the amount of visibility for two general noncommuting unitary operators is nontrivially upper bounded.
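A common way to quantify the uncertainty of a unitary operator U in a state |ψ⟩ in this literature is ΔU² = ⟨U†U⟩ − |⟨U⟩|² = 1 − |⟨ψ|U|ψ⟩|². The sketch below evaluates that quantity for Pauli operators; it is a generic illustration of the definition, not of the paper's tightened bounds:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)   # Pauli X
Z = np.array([[1, 0], [0, -1]], dtype=complex)  # Pauli Z

def unitary_uncertainty(U, psi):
    # Delta U^2 = <U^dag U> - |<U>|^2, which equals 1 - |<psi|U|psi>|^2 for unitary U
    exp_U = np.vdot(psi, U @ psi)
    return 1.0 - abs(exp_U) ** 2

psi0 = np.array([1, 0], dtype=complex)  # |0>, an eigenstate of Z
print(unitary_uncertainty(Z, psi0))  # 0.0: no uncertainty in an eigenstate
print(unitary_uncertainty(X, psi0))  # 1.0: maximal, since <X> = 0 in |0>
```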
Uncertainty for Part Density Determination: An Update
Energy Technology Data Exchange (ETDEWEB)
Valdez, Mario Orlando [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-12-14
Accurate and precise density measurements by hydrostatic weighing requires the use of an analytical balance, configured with a suspension system, to both measure the weight of a part in water and in air. Additionally, the densities of these liquid media (water and air) must be precisely known for the part density determination. To validate the accuracy and precision of these measurements, uncertainty statements are required. The work in this report is a revision of an original report written more than a decade ago, specifically applying principles and guidelines suggested by the Guide to the Expression of Uncertainty in Measurement (GUM) for determining the part density uncertainty through sensitivity analysis. In this work, updated derivations are provided; an original example is revised with the updated derivations and appendix, provided solely to uncertainty evaluations using Monte Carlo techniques, specifically using the NIST Uncertainty Machine, as a viable alternative method.
Institute of Scientific and Technical Information of China (English)
Yue-ping XU; Harriette HOLZHAUER; Martijn J.BOOIJ; Hong-yue SUN
2008-01-01
For river basin management, the reliability of the rating curves mainly depends on the accuracy and time period of the observed discharge and water level data. In the Elbe decision support system (DSS), the rating curves are combined with the HEC-6 model to investigate the effects of river engineering measures on the Elbe River system. In such situations, the uncertainty originating from the HEC-6 model is of significant importance for the reliability of the rating curves and the corresponding DSS results. This paper proposes a two-step approach to analyze the uncertainty in the rating curves and propagate it into the Elbe DSS: an analytic method and Latin Hypercube simulation. Via this approach the uncertainty and sensitivity of model outputs to input parameters are successfully investigated. The results show that the proposed approach is very efficient in investigating the effect of uncertainty and can play an important role in improving decision-making under uncertainty.
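The Latin Hypercube step mentioned in this abstract can be sketched in a few lines: each of n samples occupies its own equal-probability stratum in every input dimension, covering the input space far more evenly than plain random sampling. This is a generic sketch of the technique, not the Elbe DSS implementation, and the parameter name is invented:

```python
import numpy as np

def latin_hypercube(n, d, rng):
    """n samples in [0, 1)^d with exactly one point per equal-probability
    stratum in each dimension; strata are paired randomly across dimensions."""
    u = (rng.random((n, d)) + np.arange(n)[:, None]) / n
    for j in range(d):
        u[:, j] = u[rng.permutation(n), j]  # decouple the strata between dimensions
    return u

rng = np.random.default_rng(42)
samples = latin_hypercube(10, 2, rng)
# scale a column to a physical parameter range, e.g. a roughness in [5, 7):
roughness = 5.0 + 2.0 * samples[:, 0]
```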
On the worst case uncertainty and its evaluation
Fabbiano, L.; Giaquinto, N.; Savino, M.; Vacca, G.
2016-02-01
The paper is a review of the worst case uncertainty (WCU) concept, neglected in the Guide to the Expression of Uncertainty in Measurement (GUM), but necessary for a correct uncertainty assessment in a number of practical cases involving distributions with compact support. First, it is highlighted that knowledge of the WCU is necessary to choose a sensible coverage factor, associated with a sensible coverage probability: the Maximum Acceptable Coverage Factor (MACF) is introduced as a convenient index to guide this choice. Second, propagation rules for the worst-case uncertainty are provided in matrix and scalar form. It is highlighted that when WCU propagation cannot be computed, the Monte Carlo approach is the only way to obtain a correct expanded uncertainty assessment, in contrast to what can be inferred from the GUM. Third, examples of applications of the formulae to ordinary instruments and measurements are given. An example taken from the GUM is also discussed, underlining some inconsistencies in it.
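For a linearized model, the scalar propagation rules contrasted in the abstract reduce to a sum of absolute contributions (worst case) versus a root-sum-of-squares (GUM quadrature). The sketch below uses invented numbers and, purely for comparison, treats each half-width as its standard contribution; the ratio of the two bounds is one simple reading of a maximum acceptable coverage factor:

```python
import numpy as np

# Linearized model y = c . x; inputs have compact support with half-widths u.
# Numbers are invented for the sketch; the paper gives the general rules.
c = np.array([1.0, -2.0, 0.5])   # sensitivity coefficients
u = np.array([0.1, 0.05, 0.2])   # input half-widths / standard contributions

u_wc = np.abs(c) @ u              # worst case: |y - y0| can never exceed this
u_rss = np.hypot.reduce(c * u)    # GUM-style quadrature of the contributions
k_max = u_wc / u_rss              # a coverage factor beyond this is meaningless
```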
Ontological Considerations for Uncertainty Propagation in High Level Information Fusion
2013-01-01
...model. This model establishes five functional levels, as defined in [9] and repeated in Table 1 (JDL Fusion Levels [9]). It is tempting to suggest that Sowa's three relationship levels correspond to JDL levels 1/2/3 (i.e., Independent, Relative, and Mediating, respectively)... JDL level 2 can (but does not have to) consider multiple relationships in and between multiple entities. Second, JDL level 2 situation assessment...
Decision Based Uncertainty Propagation Using Adaptive Gaussian Mixtures
Terejanu, Gabriel; Singh, Tarunraj; Scott, Peter D
2011-01-01
Given a decision process based on the approximate probability density function returned by a data assimilation algorithm, an interaction level between the decision making level and the data assimilation level is designed to incorporate the information held by the decision maker into the data assimilation process. Here the information held by the decision maker is a loss function at a decision time which maps the state space onto real numbers which represent the threat associated with different possible outcomes or states. The new probability density function obtained will address the region of interest, the area in the state space with the highest threat, and will provide overall a better approximation to the true conditional probability density function within it. The approximation used for the probability density function is a Gaussian mixture and a numerical example is presented to illustrate the concept.
An Uncertainty Propagation Architecture for the Localization Problem
2002-08-01
CREA - Centre de Robotique, d'Electrotechnique et d'Automatique, IUT, département Informatique, Avenue des Facultés, 80000 Amiens - France {Arnaud.Clerentin...
Dealing with uncertainties - communication between disciplines
Overbeek, Bernadet; Bessembinder, Janette
2013-04-01
Climate adaptation research inevitably involves uncertainty issues - whether people are building a model, using climate scenarios, or evaluating policy processes. However, do they know which uncertainties are relevant in their field of work? And which uncertainties exist in the data from other disciplines that they use (e.g. climate data, land use, hydrological data), and how do they propagate? From experiences in Dutch research programmes on climate change we know that disciplines often deal differently with uncertainties. This complicates communication between disciplines and also with the various users of data and information on climate change and its impacts. In October 2012 an autumn school was organized within the Knowledge for Climate Research Programme in the Netherlands, with dealing with and communicating about uncertainties - in climate and socio-economic scenarios, in impact models, and in the decision-making process - as its central theme. The lectures and discussions contributed to the development of a common frame of reference (CFR) for dealing with uncertainties. The common frame contains the following: 1. Common definitions (typology of uncertainties, robustness); 2. Common understanding (why do we consider it important to take uncertainties into account) and aspects on which we disagree (how far should scientists go in communication?); 3. Documents that are considered important by all participants; 4. Do's and don'ts in dealing with uncertainties and communicating about uncertainties (e.g. know your audience, check how your figures are interpreted); 5. Recommendations for further actions (e.g. the need for a platform to exchange experiences). The CFR is meant to help researchers in climate adaptation to work together and communicate together on climate change (better interaction between disciplines). It is also meant to help researchers to explain to others (e.g. decision makers) why and when researchers agree, and when and why they disagree.
Burdette, A C
1971-01-01
Analytic Geometry covers several fundamental aspects of analytic geometry needed for advanced subjects, including calculus. This book is composed of 12 chapters that review the principles, concepts, and analytic proofs of geometric theorems, families of lines, the normal equation of the line, and related matters. Other chapters highlight the application of graphing, foci, directrices, eccentricity, and conic-related topics. The remaining chapters deal with the concepts of polar and rectangular coordinates, surfaces and curves, and planes. This book will prove useful to undergraduate trigonometric st
Federal Laboratory Consortium — NETL’s analytical laboratories in Pittsburgh, PA, and Albany, OR, give researchers access to the equipment they need to thoroughly study the properties of materials...
Modelling delay propagation within an airport network
Pyrgiotis, N.; Malone, K.M.; Odoni, A.
2013-01-01
We describe an analytical queuing and network decomposition model developed to study the complex phenomenon of the propagation of delays within a large network of major airports. The Approximate Network Delays (AND) model computes the delays due to local congestion at individual airports and capture
Wave propagation in axially moving periodic strings
DEFF Research Database (Denmark)
Sorokin, Vladislav S.; Thomsen, Jon Juel
2017-01-01
The paper deals with analytically studying transverse waves propagation in an axially moving string with periodically modulated cross section. The structure effectively models various relevant technological systems, e.g. belts, thread lines, band saws, etc., and, in particular, roller chain drive...
Causal Propagation of Constraints in General Relativity
York, James W
2015-01-01
In this paper, I demonstrate that the constraint functions are propagated by a first order symmetric (or symmetrizable) hyperbolic system whose characteristic cone is the light cone. This result follows from the twice-contracted Bianchi identities. Analyticity is not required.
Nijhof, Marten Jozef Johannes
2010-01-01
In this work, the accuracy, efficiency and range of applicability of various (approximate) models for viscothermal wave propagation are investigated. Models for viscothermal wave propagation describe the wave behavior of fluids including viscous and thermal effects. Cases where viscothermal effects a
Groves, Curtis Edward
2014-01-01
Spacecraft thermal protection systems are at risk of being damaged due to airflow produced from Environmental Control Systems. There are inherent uncertainties and errors associated with using Computational Fluid Dynamics to predict the airflow field around a spacecraft from the Environmental Control System. This paper describes an approach to quantify the uncertainty in using Computational Fluid Dynamics to predict airflow speeds around an encapsulated spacecraft without the use of test data. Quantifying the uncertainty in analytical predictions is imperative to the success of any simulation-based product. The method could provide an alternative to the traditional "validation by test only" mentality. This method could be extended to other disciplines and has the potential to provide uncertainty for any numerical simulation, thus lowering the cost of performing these verifications while increasing the confidence in those predictions. Spacecraft requirements can include a maximum airflow speed to protect delicate instruments during ground processing. Computational Fluid Dynamics can be used to verify these requirements; however, the model must be validated by test data. This research includes the following three objectives and methods. Objective one is to develop, model, and perform a Computational Fluid Dynamics analysis of three (3) generic, non-proprietary, environmental control systems and spacecraft configurations. Several commercially available and open source solvers have the capability to model the turbulent, highly three-dimensional, incompressible flow regime. The proposed method uses FLUENT, STAR-CCM+, and OPENFOAM. Objective two is to perform an uncertainty analysis of the Computational Fluid Dynamics model using the methodology found in "Comprehensive Approach to Verification and Validation of Computational Fluid Dynamics Simulations". This method requires three separate grids and solutions, which quantify the error bars around Computational Fluid Dynamics predictions
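The three-grid procedure referred to in objective two is commonly implemented as a Richardson-extrapolation / grid-convergence-index (GCI) estimate. The sketch below shows that generic recipe; it is an assumption that the cited methodology follows this exact form, and the function name and numbers are illustrative:

```python
import math

def observed_order_and_gci(f1, f2, f3, r=2.0, Fs=1.25):
    """Grid-convergence estimate from three systematically refined grids:
    f1 on the finest grid, f3 on the coarsest, constant refinement ratio r."""
    p = math.log(abs((f3 - f2) / (f2 - f1))) / math.log(r)  # observed order
    gci_fine = Fs * abs((f2 - f1) / f1) / (r ** p - 1.0)    # relative error band
    return p, gci_fine

p, gci = observed_order_and_gci(1.00, 1.10, 1.30)
# for this made-up monotone sequence: p = 1.0 and gci = 0.125 (12.5 % error band)
```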
EXACT ANALYSIS OF WAVE PROPAGATION IN AN INFINITE RECTANGULAR BEAM
Institute of Scientific and Technical Information of China (English)
孙卫明; 杨光松; 李东旭
2004-01-01
The Fourier series method was extended for the exact analysis of wave propagation in an infinite rectangular beam. Initially, by solving the three-dimensional elastodynamic equations a general analytic solution was derived for wave motion within the beam. Then, for the beam with stress-free boundaries, the propagation characteristics of elastic waves were presented. This accurate wave propagation model lays a solid foundation for the simultaneous control of coupled waves in the beam.
Longitudinal propagation velocity of the normal zone in superconducting wires
Kate, ten H.H.J.; Boschman, H.; Klundert, van de L.J.M.
1987-01-01
The longitudinal propagation of the normal zone in superconducting wires was experimentally investigated in order to evaluate existing analytical expressions which attempt to describe the propagation velocity in a more or less simple manner. The availability of a reliable expression is important for
On reasoning in networks with qualitative uncertainty
Parsons, Simon; Mamdani, E. H.
2013-01-01
In this paper some initial work towards a new approach to qualitative reasoning under uncertainty is presented. This method is not only applicable to qualitative probabilistic reasoning, as is the case with other methods, but also allows the qualitative propagation within networks of values based upon possibility theory and Dempster-Shafer evidence theory. The method is applied to two simple networks from which a large class of directed graphs may be constructed. The results of this analysis ...
Uncertainty of Doppler reactivity worth due to uncertainties of JENDL-3.2 resonance parameters
Energy Technology Data Exchange (ETDEWEB)
Zukeran, Atsushi [Hitachi Ltd., Hitachi, Ibaraki (Japan). Power and Industrial System R and D Div.; Hanaki, Hiroshi; Nakagawa, Tuneo; Shibata, Keiichi; Ishikawa, Makoto
1998-03-01
An analytical formula for the Resonance Self-shielding Factor (f-factor) is derived from the resonance integral (J-function) based on the NR approximation, and an analytical expression for the Doppler reactivity worth (ρ) is also obtained using this result. Uncertainties of the f-factor and the Doppler reactivity worth are evaluated on the basis of sensitivity coefficients with respect to the resonance parameters. The uncertainty of the Doppler reactivity worth at 487 K is about 4% for the PNC Large Fast Breeder Reactor. (author)
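First-order propagation through sensitivity coefficients, as used in this abstract, amounts to the quadratic form u(ρ)² = S C Sᵀ. The sketch below evaluates it with invented sensitivities and an uncorrelated covariance; these are not JENDL-3.2 values:

```python
import numpy as np

# u(rho)^2 = S C S^T: S holds sensitivities of the Doppler worth to resonance
# parameters, C the covariance of those parameters. All numbers are invented.
S = np.array([0.8, -0.3, 0.1])                  # relative sensitivity coefficients
C = np.diag(np.array([0.02, 0.05, 0.04]) ** 2)  # uncorrelated relative uncertainties
u_rho = float(np.sqrt(S @ C @ S))               # ~0.022, i.e. about 2.2 %
```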
Zarlenga, Antonio; de Barros, Felipe P. J.; Fiori, Aldo
2016-10-01
We propose a computationally efficient probabilistic modeling methodology to estimate the adverse effects on humans of exposure to contaminated groundwater. Our work is aligned with the standards suggested by the regulatory agencies and allows uncertainty to be propagated from hydrogeological, toxicological and behavioral parameters to the final health risk endpoint. The problem under consideration consists of a contaminated aquifer supplying water to a population. Contamination stems from a continuous source that feeds a steady plume, which constitutes the hazard source. This scenario is particularly suited for NAPL pollutants. The erratic displacement of the contaminant plume in groundwater, due to the spatial variability of hydraulic conductivity, is characterized within the Lagrangian stochastic framework, which enables the complete probabilistic characterization of the contaminant concentration at an environmentally sensitive location. Following the probabilistic characterization of flow and transport, we quantify the adverse health effects on humans. The dose-response assessment involves the estimation of the uncertain effects of exposure to a given contaminant while accounting for the exposed individual's metabolism. The model integrates groundwater transport, exposure and human metabolism in a comprehensive probabilistic framework which allows the assessment of the risk probability through a novel, simple analytical solution. Aside from its computational efficiency, the analytical features of the framework allow the assessment of uncertainty arising from the hydrogeological parameters.
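One simple analytical form such a risk probability can take: if the concentration at the sensitive location is treated as lognormal (an assumption of this sketch, common in this literature but not stated in the abstract), the probability of exceeding a regulatory threshold is a closed-form expression in the geometric mean and geometric standard deviation:

```python
import math

def prob_exceed(gm, gsd, threshold):
    """P(C > threshold) for a lognormal concentration with geometric mean gm
    and geometric standard deviation gsd (> 1). Names here are illustrative."""
    z = (math.log(threshold) - math.log(gm)) / math.log(gsd)
    return 0.5 * math.erfc(z / math.sqrt(2.0))

# a lognormal concentration exceeds its own geometric mean half the time:
print(prob_exceed(1.0, math.e, 1.0))  # 0.5
```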
Asymptotic analysis of outwardly propagating spherical flames
Institute of Scientific and Technical Information of China (English)
Yun-Chao Wu; Zheng Chen
2012-01-01
Asymptotic analysis is conducted for outwardly propagating spherical flames with large activation energy. The spherical flame structure consists of the preheat zone, reaction zone, and equilibrium zone. Analytical solutions are separately obtained in these three zones and then asymptotically matched. In the asymptotic analysis, we derive a correlation describing how the spherical flame temperature and propagation speed change with the flame radius. This correlation is compared with previous results derived in the limit of infinite activation energy. Based on this correlation, the properties of spherical flame propagation are investigated and the effects of Lewis number on spherical flame propagation speed and extinction stretch rate are assessed. Moreover, the accuracy and performance of different models used in the spherical flame method are examined. It is found that in order to get accurate laminar flame speed and Markstein length, non-linear models should be used.
Multiple front propagation into unstable states
Montagne, R; Hernández-García, E; Miguel, M S
1993-01-01
The dynamics of transient patterns formed by front propagation in extended nonequilibrium systems is considered. Under certain circumstances, the state left behind a front propagating into an unstable homogeneous state can be an unstable periodic pattern. It is found by a numerical solution of a model of the Fréedericksz transition in nematic liquid crystals that the mechanism of decay of such periodic unstable states is the propagation of a second front which replaces the unstable pattern by another unstable periodic state with larger wavelength. The speed of this second front and the periodicity of the new state are analytically calculated with a generalization of the marginal stability formalism suited to the study of front propagation into periodic unstable states. PACS: 47.20.Ky, 03.40.Kf, 47.54.+r
Uncertainty and Cognitive Control
Directory of Open Access Journals (Sweden)
Faisal eMushtaq
2011-10-01
A growing trend of neuroimaging, behavioural and computational research has investigated the topic of outcome uncertainty in decision-making. Although evidence to date indicates that humans are very effective in learning to adapt to uncertain situations, the nature of the specific cognitive processes involved in the adaptation to uncertainty is still a matter of debate. In this article, we reviewed evidence suggesting that cognitive control processes are at the heart of uncertainty in decision-making contexts. Available evidence suggests that: (1) there is a strong conceptual overlap between the constructs of uncertainty and cognitive control; (2) there is a remarkable overlap between the neural networks associated with uncertainty and the brain networks subserving cognitive control; (3) the perception and estimation of uncertainty might play a key role in monitoring processes and the evaluation of the need for control; (4) potential interactions between uncertainty and cognitive control might play a significant role in several affective disorders.
Characterizing spatial uncertainty when integrating social data in conservation planning.
Lechner, A M; Raymond, C M; Adams, V M; Polyakov, M; Gordon, A; Rhodes, J R; Mills, M; Stein, A; Ives, C D; Lefroy, E C
2014-12-01
Recent conservation planning studies have presented approaches for integrating spatially referenced social (SRS) data with a view to improving the feasibility of conservation action. We reviewed the growing conservation literature on SRS data, focusing on elicited or stated preferences derived through social survey methods such as choice experiments and public participation geographic information systems. Elicited SRS data includes the spatial distribution of willingness to sell, willingness to pay, willingness to act, and assessments of social and cultural values. We developed a typology for assessing elicited SRS data uncertainty which describes how social survey uncertainty propagates when projected spatially and the importance of accounting for spatial uncertainty such as scale effects and data quality. These uncertainties will propagate when elicited SRS data is integrated with biophysical data for conservation planning and may have important consequences for assessing the feasibility of conservation actions. To explore this issue further, we conducted a systematic review of the elicited SRS data literature. We found that social survey uncertainty was commonly tested for, but that these uncertainties were ignored when projected spatially. Based on these results we developed a framework which will help researchers and practitioners estimate social survey uncertainty and use these quantitative estimates to systematically address uncertainty within an analysis. This is important when using SRS data in conservation applications because decisions need to be made irrespective of data quality and well characterized uncertainty can be incorporated into decision theoretic approaches.
Uncertainty Quantification in Climate Modeling
Sargsyan, K.; Safta, C.; Berry, R.; Debusschere, B.; Najm, H.
2011-12-01
We address challenges that sensitivity analysis and uncertainty quantification methods face when dealing with complex computational models. In particular, climate models are computationally expensive and typically depend on a large number of input parameters. We consider the Community Land Model (CLM), which consists of a nested computational grid hierarchy designed to represent the spatial heterogeneity of the land surface. Each computational cell can be composed of multiple land types, and each land type can incorporate one or more sub-models describing the spatial and depth variability. Even for simulations at a regional scale, the computational cost of a single run is quite high and the number of parameters that control the model behavior is very large. Therefore, the parameter sensitivity analysis and uncertainty propagation face significant difficulties for climate models. This work employs several algorithmic avenues to address some of the challenges encountered by classical uncertainty quantification methodologies when dealing with expensive computational models, specifically focusing on the CLM as a primary application. First of all, since the available climate model predictions are extremely sparse due to the high computational cost of model runs, we adopt a Bayesian framework that effectively incorporates this lack-of-knowledge as a source of uncertainty, and produces robust predictions with quantified uncertainty even if the model runs are extremely sparse. In particular, we infer Polynomial Chaos spectral expansions that effectively encode the uncertain input-output relationship and allow efficient propagation of all sources of input uncertainties to outputs of interest. Secondly, the predictability analysis of climate models strongly suffers from the curse of dimensionality, i.e. the large number of input parameters. While single-parameter perturbation studies can be efficiently performed in a parallel fashion, the multivariate uncertainty analysis
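The Polynomial Chaos idea in the abstract, fitting a cheap spectral surrogate to sparse runs of an expensive model and propagating uncertainty through the surrogate, can be sketched in a toy setting. The one-dimensional stand-in model, the uniform input, and the expansion order below are illustrative assumptions, not the CLM workflow.

```python
import numpy as np

def expensive_model(x):
    """Stand-in for a costly simulation; only 7 'runs' of it are used."""
    return np.exp(0.5 * x) + 0.1 * x**3

rng = np.random.default_rng(0)
order = 4
xs = np.linspace(-1.0, 1.0, 7)   # sparse design: 7 model evaluations
ys = expensive_model(xs)

# Legendre polynomials are orthogonal w.r.t. the uniform density on [-1, 1],
# the natural Polynomial Chaos basis for a uniformly distributed input.
coeffs = np.polynomial.legendre.legfit(xs, ys, order)
surrogate = lambda x: np.polynomial.legendre.legval(x, coeffs)

# Propagation step: sample the uncertain input, evaluate only the surrogate.
samples = rng.uniform(-1.0, 1.0, 100_000)
pc_mean = surrogate(samples).mean()
mc_mean = expensive_model(samples).mean()  # reference, affordable only for toys
print(f"surrogate mean {pc_mean:.4f} vs direct mean {mc_mean:.4f}")
```

A side benefit of the orthogonal basis: the expansion's leading coefficient `coeffs[0]` is already the output mean under the uniform input, with no sampling at all.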
Energy Technology Data Exchange (ETDEWEB)
2006-06-01
In the Analytical Microscopy group, within the National Center for Photovoltaics' Measurements and Characterization Division, we combine two complementary areas of analytical microscopy--electron microscopy and proximal-probe techniques--and use a variety of state-of-the-art imaging and analytical tools. We also design and build custom instrumentation and develop novel techniques that provide unique capabilities for studying materials and devices. In our work, we collaborate with you to solve materials- and device-related R&D problems. This sheet summarizes the uses and features of four major tools: transmission electron microscopy, scanning electron microscopy, the dual-beam focused-ion-beam workstation, and scanning probe microscopy.
Schulte, Christian
2008-01-01
When implementing a propagator for a constraint, one must decide about variants: When implementing min, should one also implement max? Should one implement linear equations both with and without coefficients? Constraint variants are ubiquitous: implementing them requires considerable (if not prohibitive) effort and decreases maintainability, yet delivers better performance. This paper shows how to use variable views, previously introduced for an implementation architecture, to derive perfect propagator variants. A model for views and derived propagators is introduced. Derived propagators are proved to be indeed perfect in that they inherit essential properties such as correctness and domain and bounds consistency. Techniques for systematically deriving propagators such as transformation, generalization, specialization, and channeling are developed for several variable domains. We evaluate the massive impact of derived propagators. Without derived propagators, Gecode would require 140000 rather than 40000 ...
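The min/max question at the start of the abstract makes a compact illustration of derived propagators. The sketch below is hypothetical Python over interval domains, not Gecode's C++ architecture: a bounds propagator for z = min(x, y) is written once, and the max propagator is derived through a negation view rather than implemented a second time.

```python
# Interval domains are (lo, hi) tuples of integers; purely illustrative.

def prop_min(x, y, z):
    """Narrow bounds under z = min(x, y)."""
    zl = max(z[0], min(x[0], y[0]))   # z is at least the smaller lower bound
    zh = min(z[1], min(x[1], y[1]))   # z is at most either upper bound
    xl = max(x[0], zl)                # x >= z
    yl = max(y[0], zl)                # y >= z
    return (xl, x[1]), (yl, y[1]), (zl, zh)

def neg(d):
    """Negation view: the domain of -v for a variable with domain d."""
    return (-d[1], -d[0])

def prop_max(x, y, z):
    """Derived propagator: max(x, y) = -min(-x, -y), via views."""
    nx, ny, nz = prop_min(neg(x), neg(y), neg(z))
    return neg(nx), neg(ny), neg(nz)

x, y, z = (0, 10), (3, 7), (8, 20)
print(prop_max(x, y, z))              # z's bounds tighten toward max(x, y)
```

The derived propagator inherits the correctness and bounds consistency of `prop_min`, which is the "perfect variant" property the paper proves in general.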
Spain, Barry; Ulam, S; Stark, M
1960-01-01
Analytical Quadrics focuses on the analytical geometry of three dimensions. The book first discusses the theory of the plane, sphere, cone, cylinder, straight line, and central quadrics in their standard forms. The idea of the plane at infinity is introduced through the homogeneous Cartesian coordinates and applied to the nature of the intersection of three planes and to the circular sections of quadrics. The text also focuses on the paraboloid, including polar properties, center of a section, axes of plane section, and generators of the hyperbolic paraboloid. The book also touches on homogeneous coordinates
Hopkins, Philip F; Bundy, Kevin; Khochfar, Sadegh; Bosch, Frank van den; Somerville, Rachel S; Wetzel, Andrew; Keres, Dusan; Hernquist, Lars; Stewart, Kyle; Younger, Joshua D; Genel, Shy; Ma, Chung-Pei
2010-01-01
Different methodologies lead to order-of-magnitude variations in predicted galaxy merger rates. We examine and quantify the dominant uncertainties. Different halo merger rates and subhalo 'destruction' rates agree to within a factor of ~2 given proper care in definitions. If, however, (sub)halo masses are not appropriately defined or are under-resolved, the major merger rate can be dramatically suppressed. The dominant differences in galaxy merger rates are due to baryonic physics. Hydrodynamic simulations without feedback and older models that do not agree with the observed galaxy mass function propagate a factor of ~5 bias in the resulting merger rates. However, if the model matches the galaxy mass function, properties of central galaxies are sufficiently converged to give small differences in merger rates. But variations in the baryonic physics of satellites have the most dramatic effect. The known problem of satellite 'over-quenching' in most semi-analytic models (SAMs), whereby SAM satellites are too efficiently stripped ...
DEFF Research Database (Denmark)
Seif El-Nasr, Magy; Drachen, Anders; Canossa, Alessandro
2013-01-01
Game Analytics has gained a tremendous amount of attention in game development and game research in recent years. The widespread adoption of data-driven business intelligence practices at operational, tactical and strategic levels in the game industry, combined with the integration of quantitative...
Sustainable Process Design under uncertainty analysis: targeting environmental indicators
DEFF Research Database (Denmark)
2015-01-01
This study focuses on uncertainty analysis of environmental indicators used to support sustainable process design efforts. To this end, the Life Cycle Assessment methodology is extended with a comprehensive uncertainty analysis to propagate the uncertainties in input LCA data to the environmental indicators. The resulting uncertainties in the environmental indicators are then represented by empirical cumulative distribution functions, which provide a probabilistic basis for the interpretation of the indicators. In order to highlight the main features of the extended LCA, the production of biodiesel from algae biomass is used as a case study. The results indicate there are considerable uncertainties in the calculated environmental indicators, as revealed by the CDFs. The underlying sources of these uncertainties lie in the significant variation in the databases used for the LCA analysis......
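The propagation-to-CDF step described in the abstract can be sketched with Monte Carlo sampling. The two lognormal inventory inputs, their medians and geometric standard deviations, and the indicator formula below are illustrative assumptions, not the study's biodiesel inventory.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 50_000

# Two uncertain inventory inputs, lognormal as is common in LCI databases,
# parameterized by median (geometric mean) and log of the geometric std. dev.
energy = rng.lognormal(mean=np.log(2.0), sigma=np.log(1.3), size=n)            # MJ/kg
emission_factor = rng.lognormal(mean=np.log(0.08), sigma=np.log(1.5), size=n)  # kg CO2e/MJ

gwp = energy * emission_factor        # indicator: kg CO2e per kg product

# Empirical CDF of the indicator: sort once, then read probabilities off it.
gwp_sorted = np.sort(gwp)
def ecdf(q):
    return np.searchsorted(gwp_sorted, q) / n

p50 = np.quantile(gwp, 0.5)
p95 = np.quantile(gwp, 0.95)
print(f"median {p50:.3f}, 95th percentile {p95:.3f} kg CO2e/kg")
print(f"P(GWP <= 0.2) = {ecdf(0.2):.2f}")
```

Reading probabilities such as `P(GWP <= threshold)` from the empirical CDF is exactly the "probabilistic basis for interpretation" the abstract refers to, as opposed to reporting a single deterministic indicator value.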
Novel techniques for the analysis of the TOA radiometric uncertainty
Gorroño, Javier; Banks, Andrew; Gascon, Ferran; Fox, Nigel P.; Underwood, Craig I.
2016-10-01
In the framework of the European Copernicus programme, the European Space Agency (ESA) has launched the Sentinel-2 (S2) Earth Observation (EO) mission, which provides optical high-spatial-resolution imagery over land and coastal areas. As part of this mission, a tool (named S2-RUT, from Sentinel-2 Radiometric Uncertainty Tool) estimates the radiometric uncertainties associated with each pixel, using as input the top-of-atmosphere (TOA) reflectance factor images provided by ESA. The initial version of the tool has been implemented, with code and user guide available, and integrated as part of the Sentinel Toolbox. The tool required the study of several radiometric uncertainty sources as well as the calculation and validation of the combined standard uncertainty in order to estimate the TOA reflectance factor uncertainty per pixel. Here we describe recent research to accommodate novel uncertainty contributions to the TOA reflectance uncertainty estimates in future versions of the tool. The two contributions that we explore are the radiometric impact of the spectral knowledge and the uncertainty propagation of the resampling associated with the orthorectification process. The former is produced by the uncertainty associated with the spectral calibration as well as the spectral variations across the instrument focal plane and the instrument degradation. The latter results from the propagation of the focal plane image into the provided orthoimage. The uncertainty propagation depends on the radiance levels in the pixel neighbourhood and the pixel correlation in the temporal and spatial dimensions. Special effort has been made in studying non-stable scenarios and in the comparison with different interpolation methods.
Measurement uncertainty in pharmaceutical analysis and its application
Institute of Scientific and Technical Information of China (English)
Marcus Augusto Lyrio Traple; Alessandro Morais Saviano; Fabiane Lacerda Francisco; Felipe Rebello Lourençon
2014-01-01
The measurement uncertainty provides complete information about an analytical result. This is very important because several decisions of compliance or non-compliance are based on analytical results in the pharmaceutical industry. The aim of this work was to evaluate and discuss the estimation of uncertainty in pharmaceutical analysis. The uncertainty is a useful tool in the assessment of compliance or non-compliance of in-process and final pharmaceutical products, as well as in the assessment of pharmaceutical equivalence and in stability studies of drug products.
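A GUM-style combined standard uncertainty for a simple assay shows the kind of estimate the abstract has in mind for compliance decisions. The result model, c = (A_sample / A_standard) * c_standard, and all numbers below are hypothetical, not taken from any monograph.

```python
import math

# Hypothetical spectrophotometric assay inputs (value, standard uncertainty).
a_sample, u_a_sample = 0.512, 0.004   # sample absorbance
a_standard, u_a_std = 0.498, 0.004    # standard absorbance
c_standard, u_c_std = 100.0, 0.5      # reference concentration, % of label claim

c = a_sample / a_standard * c_standard

# For a pure product/quotient model, relative standard uncertainties
# combine in quadrature (first-order GUM propagation, inputs independent).
u_rel = math.sqrt((u_a_sample / a_sample) ** 2
                  + (u_a_std / a_standard) ** 2
                  + (u_c_std / c_standard) ** 2)
u_c = c * u_rel
U = 2.0 * u_c                         # expanded uncertainty, k = 2 (~95 %)
print(f"result {c:.2f} ± {U:.2f} % (k=2)")
```

A compliance decision then compares the interval c ± U, not the bare result, against the specification limits, which is why the abstract calls the uncertainty "complete information about an analytical result".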
Heisenberg's uncertainty principle
Busch, Paul; Heinonen, Teiko; Lahti, Pekka
2007-01-01
Heisenberg's uncertainty principle is usually taken to express a limitation of operational possibilities imposed by quantum mechanics. Here we demonstrate that the full content of this principle also includes its positive role as a condition ensuring that mutually exclusive experimental options can be reconciled if an appropriate trade-off is accepted. The uncertainty principle is shown to appear in three manifestations, in the form of uncertainty relations: for the widths of the position and...
Commonplaces and social uncertainty
DEFF Research Database (Denmark)
Lassen, Inger
2008-01-01
This article explores the concept of uncertainty in four focus group discussions about genetically modified food. In the discussions, members of the general public interact with food biotechnology scientists while negotiating their attitudes towards genetic engineering. Their discussions offer...... risk discourse (Myers 2005; 2007). In addition, however, I argue that commonplaces are used to mitigate feelings of insecurity caused by uncertainty and to negotiate new codes of moral conduct. Keywords: uncertainty, commonplaces, risk discourse, focus groups, appraisal...
Constraint Propagation as Information Maximization
Abdallah, A Nait
2012-01-01
Dana Scott used the partial order among partial functions for his mathematical model of recursively defined functions. He interpreted the partial order as one of information content. In this paper we elaborate on Scott's suggestion of regarding computation as a process of information maximization by applying it to the solution of constraint satisfaction problems. Here the method of constraint propagation can be interpreted as decreasing uncertainty about the solution -- that is, as a gain in information about the solution. As an illustrative example we choose numerical constraint satisfaction problems to be solved by interval constraints. To facilitate this approach to constraint solving we formulate constraint satisfaction problems as formulas in predicate logic. This necessitates extending the usual semantics for predicate logic so that meaning is assigned not only to sentences but also to formulas with free variables.
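The "propagation as uncertainty reduction" reading can be made concrete with interval constraints. The toy problem below (x + y = 10 and x = 2y, with both variables initially in [0, 10]) is an invented example, not one from the paper: each narrowing step shrinks the box of candidate solutions, i.e. gains information.

```python
def narrow_sum(x, y, s):
    """Contract intervals x, y under x + y = s."""
    x = (max(x[0], s - y[1]), min(x[1], s - y[0]))
    y = (max(y[0], s - x[1]), min(y[1], s - x[0]))
    return x, y

def narrow_double(x, y):
    """Contract intervals x, y under x = 2*y."""
    x = (max(x[0], 2 * y[0]), min(x[1], 2 * y[1]))
    y = (max(y[0], x[0] / 2), min(y[1], x[1] / 2))
    return x, y

x, y = (0.0, 10.0), (0.0, 10.0)      # initial uncertainty about the solution
for _ in range(100):                  # iterate the propagators to a fixpoint
    x, y = narrow_sum(x, y, 10.0)
    x, y = narrow_double(x, y)
print(x, y)                           # intervals contract toward x=20/3, y=10/3
```

Here the interval widths halve on each round, so the fixpoint is the solution itself; for other constraint systems hull consistency stops at a wider box, and the remaining width is exactly the residual uncertainty about the solution.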
Positrons from dark matter annihilation in the galactic halo: uncertainties
Fornengo, N; Lineros, R; Donato, F; Salati, P
2007-01-01
Indirect detection signals from dark matter annihilation are studied in the positron channel. We discuss in detail the positron propagation inside the galactic medium: we present novel solutions of the diffusion and propagation equations and we focus on the determination of the astrophysical uncertainties which affect the positron dark matter signal. We show that, especially in the low-energy tail of the positron spectra at Earth, the uncertainty is sizeable, and we quantify the effect. Comparisons of our predictions with currently available and foreseen experimental data are presented.
[Ethics, empiricism and uncertainty].
Porz, R; Zimmermann, H; Exadaktylos, A K
2011-01-01
Accidents can lead to difficult boundary situations. Such situations often take place in the emergency units. The medical team thus often and inevitably faces professional uncertainty in their decision-making. It is essential to communicate these uncertainties within the medical team, instead of downplaying or overriding existential hurdles in decision-making. Acknowledging uncertainties might lead to alert and prudent decisions. Thus uncertainty can have ethical value in treatment or withdrawal of treatment. It does not need to be covered in evidence-based arguments, especially as some singular situations of individual tragedies cannot be grasped in terms of evidence-based medicine.
Uncertainty in artificial intelligence
Kanal, LN
1986-01-01
How to deal with uncertainty is a subject of much controversy in Artificial Intelligence. This volume brings together a wide range of perspectives on uncertainty, many of the contributors being the principal proponents in the controversy. Some of the notable issues which emerge from these papers revolve around an interval-based calculus of uncertainty, the Dempster-Shafer Theory, and probability as the best numeric model for uncertainty. There remain strong dissenting opinions not only about probability but even about the utility of any numeric method in this context.
Error propagation in the computation of volumes in 3D city models with the Monte Carlo method
Biljecki, F.; Ledoux, H.; Stoter, J.
2014-01-01
This paper describes the analysis of the propagation of positional uncertainty in 3D city models to the uncertainty in the computation of their volumes. Current work related to error propagation in GIS is limited to 2D data and 2D GIS operations, especially of rasters. In this research we have (1) d
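The paper's Monte Carlo scheme, perturbing vertex positions and observing the induced spread in computed volume, can be sketched on the simplest possible solid. An axis-aligned unit cube defined by two noisy corners stands in for a building model; the 2 cm positional standard uncertainty is an assumed value.

```python
import numpy as np

rng = np.random.default_rng(7)
sigma = 0.02                          # assumed 2 cm positional noise (1 m cube)
n = 20_000                            # number of Monte Carlo realizations

lo = np.zeros(3)                      # true opposite corners of the cube
hi = np.ones(3)                       # true volume is exactly 1 m^3

# Perturb both corners independently in each realization, then recompute
# the volume from the noisy geometry.
lo_s = lo + rng.normal(0.0, sigma, (n, 3))
hi_s = hi + rng.normal(0.0, sigma, (n, 3))
vols = np.prod(hi_s - lo_s, axis=1)

print(f"volume {vols.mean():.4f} ± {vols.std(ddof=1):.4f} m^3")
```

The output distribution, rather than a single number, is what the error-propagation analysis delivers; for real 3D city models the volume would be computed from the triangulated boundary, but the sampling loop is the same.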
MODIS land cover uncertainty in regional climate simulations
Li, Xue; Messina, Joseph P.; Moore, Nathan J.; Fan, Peilei; Shortridge, Ashton M.
2017-02-01
MODIS land cover datasets are used extensively across the climate modeling community, but inherent uncertainties and associated propagating impacts are rarely discussed. This paper modeled uncertainties embedded within the annual MODIS Land Cover Type (MCD12Q1) products and propagated these uncertainties through the Regional Atmospheric Modeling System (RAMS). First, land cover uncertainties were modeled using pixel-based trajectory analyses from a time series of MCD12Q1 for Urumqi, China. Second, alternative land cover maps were produced based on these categorical uncertainties and passed into RAMS. Finally, simulations from RAMS were analyzed temporally and spatially to reveal impacts. Our study found that MCD12Q1 struggles to discriminate between grasslands and croplands or grasslands and barren in this study area. Such categorical uncertainties have significant impacts on regional climate model outputs. All climate variables examined demonstrated impact across the various regions, with latent heat flux affected most with a magnitude of 4.32 W/m2 in domain average. Impacted areas were spatially connected to locations of greater land cover uncertainty. Both biophysical characteristics and soil moisture settings in regard to land cover types contribute to the variations among simulations. These results indicate that formal land cover uncertainty analysis should be included in MCD12Q1-fed climate modeling as a routine procedure.
Spatial uncertainty model for visual features using a Kinect™ sensor.
Park, Jae-Han; Shin, Yong-Deuk; Bae, Ji-Hun; Baeg, Moon-Hong
2012-01-01
This study proposes a mathematical uncertainty model for the spatial measurement of visual features using Kinect™ sensors. This model can provide qualitative and quantitative analysis for the utilization of Kinect™ sensors as 3D perception sensors. In order to achieve this objective, we derived the propagation relationship of the uncertainties between the disparity image space and the real Cartesian space with the mapping function between the two spaces. Using this propagation relationship, we obtained the mathematical model for the covariance matrix of the measurement error, which represents the uncertainty for spatial position of visual features from Kinect™ sensors. In order to derive the quantitative model of spatial uncertainty for visual features, we estimated the covariance matrix in the disparity image space using collected visual feature data. Further, we computed the spatial uncertainty information by applying the covariance matrix in the disparity image space and the calibrated sensor parameters to the proposed mathematical model. This spatial uncertainty model was verified by comparing the uncertainty ellipsoids for spatial covariance matrices and the distribution of scattered matching visual features. We expect that this spatial uncertainty model and its analyses will be useful in various Kinect™ sensor applications.
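The structure of the model in the abstract, propagating a disparity-space covariance into Cartesian space through the mapping between the two spaces, is a first-order (Jacobian) propagation. The sketch below uses the standard stereo/structured-light relations Z = f·b/d, X = (u−cx)·Z/f, Y = (v−cy)·Z/f with illustrative intrinsics and variances, not calibrated Kinect parameters.

```python
import numpy as np

f, b = 580.0, 0.075                   # assumed focal length [px], baseline [m]
cx, cy = 320.0, 240.0                 # assumed principal point [px]

def to_xyz(u, v, d):
    """Disparity-space feature (u, v, d) -> Cartesian position (X, Y, Z)."""
    z = f * b / d
    return np.array([(u - cx) * z / f, (v - cy) * z / f, z])

def jacobian(u, v, d):
    """Partial derivatives of (X, Y, Z) w.r.t. (u, v, d)."""
    z = f * b / d
    return np.array([
        [z / f, 0.0,   -(u - cx) * z / (f * d)],
        [0.0,   z / f, -(v - cy) * z / (f * d)],
        [0.0,   0.0,   -z / d],
    ])

u, v, d = 400.0, 260.0, 30.0          # a feature in disparity space
cov_uvd = np.diag([0.5, 0.5, 0.25])   # assumed pixel/disparity variances

J = jacobian(u, v, d)
cov_xyz = J @ cov_uvd @ J.T           # covariance of the Cartesian position
print("position [m]:", to_xyz(u, v, d))
print("per-axis std [m]:", np.sqrt(np.diag(cov_xyz)))
```

`cov_xyz` is the uncertainty ellipsoid the abstract compares against scattered matching features; note how the depth axis dominates, since Z depends inversely on the noisy disparity.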
Uncertainty in flood risk mapping
Gonçalves, Luisa M. S.; Fonte, Cidália C.; Gomes, Ricardo
2014-05-01
A flood refers to a sharp increase of water level or volume in rivers and seas caused by sudden rainstorms or melting ice due to natural factors. In this paper, the flooding of riverside urban areas caused by sudden rainstorms is studied. In this context, flooding occurs when the water rises above the level of the minor river bed and enters the major river bed. The level of the major bed determines the magnitude and risk of the flooding. The prediction of the flooding extent is usually deterministic and corresponds to the expected limit of the flooded area. However, there are many sources of uncertainty in the process of obtaining these limits, which influence the resulting flood maps used for watershed management or as instruments for territorial and emergency planning. In addition, small variations in the delineation of the flooded area can translate into erroneous risk predictions. Therefore, maps that reflect the uncertainty associated with the flood modeling process have started to be developed, associating a degree of likelihood with the boundaries of the flooded areas. In this paper an approach is presented that enables the influence of parameter uncertainty, which depends on the type of Land Cover Map (LCM) and Digital Elevation Model (DEM), to be evaluated on the estimated values of the peak flow and on the delineation of flooded areas (different peak flows correspond to different flood areas). The approach requires modeling the DEM uncertainty and its propagation to the catchment delineation. The results obtained in this step enable a catchment with fuzzy geographical extent to be generated, where a degree of possibility of belonging to the basin is assigned to each elementary spatial unit. Since the fuzzy basin may be considered a fuzzy set, the fuzzy area of the basin may be computed, generating a fuzzy number. The catchment peak flow is then evaluated using fuzzy arithmetic. With this methodology a fuzzy number is obtained for the peak flow.
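The fuzzy-arithmetic step can be sketched with alpha-cuts of triangular fuzzy numbers. The fuzzy basin area, rain intensity, and rational-method runoff coefficient below are invented values, not the paper's catchment data.

```python
def alpha_cut(tri, a):
    """Interval of a triangular fuzzy number (lo, mode, hi) at membership level a."""
    lo, m, hi = tri
    return (lo + a * (m - lo), hi - a * (hi - m))

area = (9.0, 10.0, 11.5)              # fuzzy basin area [km^2] (illustrative)
intensity = (20.0, 25.0, 32.0)        # fuzzy rain intensity [mm/h] (illustrative)
c_runoff = 0.278 * 0.6                # rational-method constant * runoff coeff.

# Fuzzy peak flow Q = C * i * A via interval arithmetic on each alpha-cut;
# the product is monotone here since every quantity is positive.
results = {}
for a in (0.0, 0.5, 1.0):
    a_lo, a_hi = alpha_cut(area, a)
    i_lo, i_hi = alpha_cut(intensity, a)
    results[a] = (c_runoff * a_lo * i_lo, c_runoff * a_hi * i_hi)
    lo, hi = results[a]
    print(f"alpha={a:.1f}: peak flow in [{lo:.1f}, {hi:.1f}] m^3/s")
```

The alpha = 1 cut collapses to the deterministic peak flow, while lower cuts widen into the range of possible values; the nested intervals are exactly the fuzzy number for the peak flow that the abstract describes.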
The uncertainty of modeled soil carbon stock change for Finland
Lehtonen, Aleksi; Heikkinen, Juha
2013-04-01
Countries should report soil carbon stock changes of forests under the Kyoto Protocol. Under the Kyoto Protocol one can omit reporting of a carbon pool by verifying that the pool is not a source of carbon, which is especially tempting for the soil pool. However, verifying that the soils of a nation are not a source of carbon in a given year seems to be nearly impossible. The Yasso07 model was parametrized against various decomposition data using an MCMC method. Soil carbon changes in Finland between 1972 and 2011 were simulated with the Yasso07 model using litter input data derived from the National Forest Inventory (NFI) and fellings time series. The uncertainties of biomass models, litter turnover rates, NFI sampling and the Yasso07 model were propagated with Monte Carlo simulations. Due to the biomass estimation methods, the uncertainties of various litter input sources (e.g. living trees, natural mortality and fellings) correlate strongly with each other. We show how the original covariance matrices can be combined analytically so that the number of simulated components is greatly reduced. While doing the simulations we found that proper handling of correlations may be even more essential than accurate estimates of standard errors. As a preliminary result, we found that both Southern and Northern Finland were soil carbon sinks, with coefficients of variation (CV) varying from 10% to 25% when the model was driven with long-term constant weather data. When we applied annual weather data, soils were both sinks and sources of carbon and CVs varied from 10% to 90%. This implies that the success of soil carbon sink verification depends on the weather data applied with the models. Due to this fact the IPCC should provide clear guidance on the weather data applied with soil carbon models and also on soil carbon sink verification. In the UNFCCC reporting, carbon sinks of forest biomass have typically been averaged over five years; a similar period for soil model weather data would be logical.
Bollini, C. G.; Rocca, M. C.
1998-01-01
We study the half-advanced and half-retarded Wheeler Green function and its relation to Feynman propagators, first for the massless equation, then for Klein-Gordon equations with arbitrary mass parameters: real, imaginary or complex. In all cases the Wheeler propagator lacks on-shell free propagation. The Wheeler function has support inside the light-cone (whatever the mass). The associated vacuum is symmetric with respect to annihilation and creation operators. We show with some examples tha...
Hollow Gaussian Schell-model beam and its propagation
Wang, Li-Gang
2007-01-01
In this paper, we present a new model, hollow Gaussian Schell-model beams (HGSMBs), to describe practical dark hollow beams. An analytical propagation formula for HGSMBs passing through a paraxial first-order optical system is derived based on the theory of coherence. Based on the derived formula, an application example showing the influence of spatial coherence on the propagation of beams is illustrated. It is found that the propagation properties of HGSMBs are greatly affected by their spatial coherence. Our model provides a very convenient way to analyze the propagation properties of partially coherent dark hollow beams.
A Study of Malware Propagation via Online Social Networking
Faghani, Mohammad Reza; Nguyen, Uyen Trang
The popularity of online social networks (OSNs) has attracted malware creators who use OSNs as a platform to propagate automated worms from one user's computer to another's. On the other hand, the topic of malware propagation in OSNs has only been investigated recently. In this chapter, we discuss recent advances on the topic of malware propagation by way of online social networking. In particular, we present three malware propagation techniques in OSNs, namely cross-site scripting (XSS), Trojan and clickjacking types, and examine their characteristics via analytical models and simulations.
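A minimal susceptible-infected simulation over a friendship graph sketches how such propagation models are typically exercised. Everything here is a synthetic assumption: the random contact graph, the 0.3 "click" probability, and the handful of seed victims; real OSN topologies are heavy-tailed and highly clustered, which is precisely what the analytical models in the chapter account for.

```python
import random

random.seed(1)
n = 2_000
p_click = 0.3                         # assumed chance a friend opens the payload

# Synthetic undirected friendship graph: each user links to ~8 random others.
friends = {u: set() for u in range(n)}
for u in range(n):
    for v in random.sample(range(n), 8):
        if v != u:
            friends[u].add(v)
            friends[v].add(u)

infected = {0, 1, 2}                  # a few initially compromised accounts
frontier = [0, 1, 2]
steps = 0
while frontier and steps < 50:        # one round = one wave of malicious posts
    nxt = []
    for u in frontier:
        for v in friends[u]:
            if v not in infected and random.random() < p_click:
                infected.add(v)
                nxt.append(v)
    frontier = nxt
    steps += 1

print(f"{len(infected)} of {n} users infected after {steps} rounds")
```

With an average degree around 16 and a 0.3 transmission probability, the effective reproduction number is well above 1, so the worm reaches most of the network within a few rounds; lowering `p_click` below the epidemic threshold makes the outbreak die out instead.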
DEFF Research Database (Denmark)
This book collects the papers presented at two workshops during the 23rd International Conference on Pattern Recognition (ICPR): the Third Workshop on Video Analytics for Audience Measurement (VAAM) and the Second International Workshop on Face and Facial Expression Recognition (FFER) from Real...... Topics include: re-identification, consumer behavior analysis, utilizing pupillary response for task difficulty measurement, logo detection, saliency prediction, classification of facial expressions, face recognition, face verification, age estimation, super-resolution, pose estimation, and pain recognition......
Sagor, Rakibul Hasan; Saber, Md. Ghulam; Amin, Md. Ruhul
2014-03-01
The propagation characteristics of the surface-plasmon-polariton (SPP) mode in the single interface of silver (Ag) and gallium lanthanum sulfide (GLS) have been studied both analytically and numerically. The obtained numerical results show an excellent agreement with the analytical ones. The locations of the spatial resonance point along the direction of propagation were determined for the dielectric and the metal.
When to carry out analytic continuation?
Zuo, J M
1998-01-01
This paper discusses the analytic continuation in thermal field theory by using the theory of $\eta-\xi$ spacetime. Taking a simple model as an example, the $2\times 2$ matrix real-time propagator is solved from the equation obtained through continuation of the equation for the imaginary-time propagator. The geometry of the $\eta-\xi$ spacetime plays an important role in the discussion.
Computing with Epistemic Uncertainty
2015-01-01
modified the input uncertainties in any way. And by avoiding the need for simulation, various assumptions and the selection of specific sampling... strategies that may affect results are also avoided. In accordance with the Principle of Maximum Uncertainty, epistemic intervals represent the highest input...
Van Nooyen, R.R.P.; Hrachowitz, M.; Kolechkina, A.G.
2014-01-01
Even without uncertainty about the model structure or parameters, the output of a hydrological model run still contains several sources of uncertainty. These are: measurement errors affecting the input, and the transition from continuous time and space to discrete time and space, which causes loss of information
Capel, H.W.; Cramer, J.S.; Estevez-Uscanga, O.
1995-01-01
'Uncertainty and chance' is a subject with a broad span, in that there is no academic discipline or walk of life that is not beset by uncertainty and chance. In this book a range of approaches is represented by authors from varied disciplines: natural sciences, mathematics, social sciences and medicine
Institute of Scientific and Technical Information of China (English)
张攀; 陈新华; 陈琳
2011-01-01
A semi-analytical ray model of atmospheric sound propagation under downwind conditions has been established, in which the logarithmic equivalent sound-speed profile is approximated by a power-law profile. The model uses an analysis-and-iteration method to obtain analytic solutions for the ray trajectories by integration, taking advantage of the fact that the ray paths are ordered in groups of four, so that the far-field excess attenuation can be calculated. Compared with other methods, this model requires very little computation time. Ground reflections and atmospheric refraction are taken into account in evaluating the excess attenuation. Finally, curves of excess attenuation versus frequency are obtained.
Premixed flame propagation in vertical tubes
Kazakov, Kirill A.
2016-04-01
Analytical treatment of the premixed flame propagation in vertical tubes with smooth walls is given. Using the on-shell flame description, equations for a quasi-steady flame with a small but finite front thickness are obtained and solved numerically. It is found that near the limits of inflammability, solutions describing upward flame propagation come in pairs having close propagation speeds and that the effect of gravity is to reverse the burnt gas velocity profile generated by the flame. On the basis of these results, a theory of partial flame propagation driven by a strong gravitational field is developed. A complete explanation is given of the intricate observed behavior of limit flames, including dependence of the inflammability range on the size of the combustion domain, the large distances of partial flame propagation, and the progression of flame extinction. The role of the finite front-thickness effects is discussed in detail. Also, various mechanisms governing flame acceleration in smooth tubes are identified. Acceleration of methane-air flames in open tubes is shown to be a combined effect of the hydrostatic pressure difference produced by the ambient cold air and the difference of dynamic gas pressure at the tube ends. On the other hand, a strong spontaneous acceleration of the fast methane-oxygen flames at the initial stage of their evolution in open-closed tubes is conditioned by metastability of the quasi-steady propagation regimes. An extensive comparison of the obtained results with the experimental data is made.
Monte-Carlo based Uncertainty Analysis For CO2 Laser Microchanneling Model
Prakash, Shashi; Kumar, Nitish; Kumar, Subrata
2016-09-01
CO2 laser microchanneling has emerged as a potential technique for the fabrication of microfluidic devices on PMMA (Poly-methyl-meth-acrylate). PMMA directly vaporizes when subjected to high intensity focused CO2 laser beam. This process results in clean cut and acceptable surface finish on microchannel walls. Overall, CO2 laser microchanneling process is cost effective and easy to implement. While fabricating microchannels on PMMA using a CO2 laser, the maximum depth of the fabricated microchannel is the key feature. There are few analytical models available to predict the maximum depth of the microchannels and cut channel profile on PMMA substrate using a CO2 laser. These models depend upon the values of thermophysical properties of PMMA and laser beam parameters. There are a number of variants of transparent PMMA available in the market with different values of thermophysical properties. Therefore, for applying such analytical models, the values of these thermophysical properties are required to be known exactly. Although, the values of laser beam parameters are readily available, extensive experiments are required to be conducted to determine the value of thermophysical properties of PMMA. The unavailability of exact values of these property parameters restrict the proper control over the microchannel dimension for given power and scanning speed of the laser beam. In order to have dimensional control over the maximum depth of fabricated microchannels, it is necessary to have an idea of uncertainty associated with the predicted microchannel depth. In this research work, the uncertainty associated with the maximum depth dimension has been determined using Monte Carlo method (MCM). The propagation of uncertainty with different power and scanning speed has been predicted. The relative impact of each thermophysical property has been determined using sensitivity analysis.
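The Monte Carlo propagation described in this abstract can be sketched as follows. The energy-balance depth model and all property values below are illustrative assumptions for demonstration, not the study's calibrated model or data:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Illustrative energy-balance depth model (an assumption, not the paper's model):
# depth ~ absorbed line energy / energy needed to vaporize a unit volume of PMMA.
P = 20.0       # laser power, W
v = 100e-3     # scanning speed, m/s
w = 200e-6     # beam width at focus, m
dT = 360.0     # heating from ambient to vaporization, K

# Thermophysical properties of PMMA with an assumed 5% relative standard uncertainty.
rho = rng.normal(1190.0, 0.05 * 1190.0, N)   # density, kg/m^3
cp  = rng.normal(1466.0, 0.05 * 1466.0, N)   # specific heat, J/(kg K)
Lv  = rng.normal(1.0e6,  0.05 * 1.0e6,  N)   # latent heat of vaporization, J/kg

depth = P / (w * v * rho * (cp * dT + Lv))   # metres
rel = depth.std() / depth.mean()
print(f"depth = {depth.mean()*1e6:.0f} um, relative uncertainty {rel:.1%}")

# Crude sensitivity measure: correlation of each input with the output.
for name, x in [("rho", rho), ("cp", cp), ("Lv", Lv)]:
    print(name, round(np.corrcoef(x, depth)[0, 1], 2))
```

Changing `P` and `v` in this loop reproduces the kind of power/speed sweep the study uses to map how the output uncertainty varies with process parameters.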
Measurement uncertainty and probability
Willink, Robin
2013-01-01
A measurement result is incomplete without a statement of its 'uncertainty' or 'margin of error'. But what does this statement actually tell us? By examining the practical meaning of probability, this book discusses what is meant by a '95 percent interval of measurement uncertainty', and how such an interval can be calculated. The book argues that the concept of an unknown 'target value' is essential if probability is to be used as a tool for evaluating measurement uncertainty. It uses statistical concepts, such as a conditional confidence interval, to present 'extended' classical methods for evaluating measurement uncertainty. The use of the Monte Carlo principle for the simulation of experiments is described. Useful for researchers and graduate students, the book also discusses other philosophies relating to the evaluation of measurement uncertainty. It employs clear notation and language to avoid the confusion that exists in this controversial field of science.
Energy Technology Data Exchange (ETDEWEB)
Liu Baoding [Tsinghua Univ., Beijing (China). Uncertainty Theory Lab.
2007-07-01
Uncertainty theory is a branch of mathematics based on normality, monotonicity, self-duality, and countable subadditivity axioms. The goal of uncertainty theory is to study the behavior of uncertain phenomena such as fuzziness and randomness. The main topics include probability theory, credibility theory, and chance theory. For this new edition the entire text has been totally rewritten. More importantly, the chapters on chance theory and uncertainty theory are completely new. This book provides a self-contained, comprehensive and up-to-date presentation of uncertainty theory. The purpose is to equip the readers with an axiomatic approach to deal with uncertainty. Mathematicians, researchers, engineers, designers, and students in the field of mathematics, information science, operations research, industrial engineering, computer science, artificial intelligence, and management science will find this work a stimulating and useful reference. (orig.)
Economic uncertainty and econophysics
Schinckus, Christophe
2009-10-01
The objective of this paper is to provide a methodological link between econophysics and economics. I will study a key notion of both fields: uncertainty and the ways of thinking about it developed by the two disciplines. After having presented the main economic theories of uncertainty (provided by Knight, Keynes and Hayek), I show how this notion is paradoxically excluded from the economic field. In economics, uncertainty is totally reduced by an a priori Gaussian framework-in contrast to econophysics, which does not use a priori models because it works directly on data. Uncertainty is then not shaped by a specific model, and is partially and temporally reduced as models improve. This way of thinking about uncertainty has echoes in the economic literature. By presenting econophysics as a Knightian method, and a complementary approach to a Hayekian framework, this paper shows that econophysics can be methodologically justified from an economic point of view.
Physical Uncertainty Bounds (PUB)
Energy Technology Data Exchange (ETDEWEB)
Vaughan, Diane Elizabeth [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Preston, Dean L. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2015-03-19
This paper introduces and motivates the need for a new methodology for determining upper bounds on the uncertainties in simulations of engineered systems due to limited fidelity in the composite continuum-level physics models needed to simulate the systems. We show that traditional uncertainty quantification methods provide, at best, a lower bound on this uncertainty. We propose to obtain bounds on the simulation uncertainties by first determining bounds on the physical quantities or processes relevant to system performance. By bounding these physics processes, as opposed to carrying out statistical analyses of the parameter sets of specific physics models or simply switching out the available physics models, one can obtain upper bounds on the uncertainties in simulated quantities of interest.
Photovoltaic System Modeling. Uncertainty and Sensitivity Analyses
Energy Technology Data Exchange (ETDEWEB)
Hansen, Clifford W. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Martin, Curtis E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2015-08-01
We report an uncertainty and sensitivity analysis for modeling AC energy from photovoltaic systems. Output from a PV system is predicted by a sequence of models. We quantify uncertainty in the output of each model using empirical distributions of each model's residuals. We propagate uncertainty through the sequence of models by sampling these distributions to obtain an empirical distribution of a PV system's output. We consider models that: (1) translate measured global horizontal, direct and global diffuse irradiance to plane-of-array irradiance; (2) estimate effective irradiance; (3) predict cell temperature; (4) estimate DC voltage, current and power; (5) reduce DC power for losses due to inefficient maximum power point tracking or mismatch among modules; and (6) convert DC to AC power. Our analysis considers a notional PV system comprising an array of FirstSolar FS-387 modules and a 250 kW AC inverter; we use measured irradiance and weather at Albuquerque, NM. We found the uncertainty in PV system output to be relatively small, on the order of 1% for daily energy. We found the uncertainty in the models for POA irradiance and effective irradiance to be the dominant contributors to uncertainty in predicted daily energy. Our analysis indicates that efforts to reduce the uncertainty in PV system output predictions may yield the greatest improvements by focusing on the POA and effective irradiance models.
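The residual-sampling scheme in this abstract can be sketched generically. The residual arrays below are synthetic stand-ins (in the study they come from measured data), and the multiplicative compounding of stages is a simplifying assumption:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 50_000

# Hypothetical relative residuals (model - measurement)/measurement for three
# stages of the model chain; real residuals would come from field data.
residuals = {
    "poa_irradiance":       rng.normal(0.0, 0.02, 300),
    "effective_irradiance": rng.normal(0.0, 0.01, 300),
    "dc_to_ac_power":       rng.normal(0.0, 0.005, 300),
}

nominal_daily_energy = 1500.0  # kWh, assumed nominal model prediction

# Propagate: resample each stage's residuals and compound them multiplicatively.
factor = np.ones(N)
for res in residuals.values():
    factor *= 1.0 + rng.choice(res, size=N, replace=True)

energy = nominal_daily_energy * factor
rel_sd = energy.std() / energy.mean()
print(f"daily energy: {energy.mean():.0f} kWh, relative sd {rel_sd:.1%}")
```

The per-stage standard deviations above dominate the result in quadrature, which is how the study can attribute most of the output uncertainty to the two irradiance models.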
Conclusions on measurement uncertainty in microbiology.
Forster, Lynne I
2009-01-01
Since its first issue in 1999, testing laboratories wishing to comply with all the requirements of ISO/IEC 17025 have been collecting data for estimating uncertainty of measurement for quantitative determinations. In the microbiological field of testing, some debate has arisen as to whether uncertainty needs to be estimated for each method performed in the laboratory for each type of sample matrix tested. Queries also arise concerning the estimation of uncertainty when plate/membrane filter colony counts are below recommended method counting range limits. A selection of water samples (with low to high contamination) was tested in replicate, with the associated uncertainty of measurement being estimated from the analytical results obtained. The analyses performed on the water samples included total coliforms, fecal coliforms, fecal streptococci by membrane filtration, and heterotrophic plate counts by the pour plate technique. For those samples where plate/membrane filter colony counts were ≥20, uncertainty estimates at a 95% confidence level were very similar for the methods, being estimated as 0.13, 0.14, 0.14, and 0.12, respectively. For those samples where plate/membrane filter colony counts were below this level, uncertainty values for each sample showed close agreement with published confidence limits established using a Poisson distribution approach.
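The replicate-based estimation in this abstract can be sketched with the standard duplicate approach (pooled standard deviation of log10 count differences). The counts below are hypothetical, and the duplicate formula is the common textbook one rather than the paper's exact protocol:

```python
import math

# Hypothetical duplicate colony counts from water samples (counts >= 20).
counts_a = [52, 61, 48, 55, 67, 44, 58, 50]   # replicate 1
counts_b = [49, 66, 51, 59, 60, 47, 62, 55]   # replicate 2

# Work on log10 counts; pooled sd of duplicate differences:
# u = sqrt( sum(d_i^2) / (2 n) ) where d_i = log10(a_i) - log10(b_i).
diffs = [math.log10(a) - math.log10(b) for a, b in zip(counts_a, counts_b)]
u = math.sqrt(sum(d * d for d in diffs) / (2 * len(diffs)))  # standard uncertainty
U = 2 * u                                                    # expanded, k=2 (~95%)
print(f"u = {u:.3f} log10 units, expanded U = {U:.3f}")
```

Values of `U` near 0.12-0.14 log10 units are what the abstract reports for counts within the recommended counting range.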
Managing uncertainty in multiple-criteria decision making related to sustainability assessment
DEFF Research Database (Denmark)
Dorini, Gianluca Fabio; Kapelan, Zoran; Azapagic, Adisa
2011-01-01
In real life, decisions are usually made by comparing different options with respect to several, often conflicting criteria. This requires subjective judgements by decision-makers on the importance of different criteria and increases uncertainty in decision making. This article demonstrates how uncertainty can... be handled in multi-criteria decision situations using Compromise Programming, one of the Multi-criteria Decision Analysis (MCDA) techniques. Uncertainty is characterised using a probabilistic approach and propagated using a Monte Carlo simulation technique. The methodological approach is illustrated...: (1) no uncertainty, (2) uncertainty in data/models and (3) uncertainty in models and decision-makers' preferences. The results show how characterising and propagating uncertainty can help increase the effectiveness of multi-criteria decision-making processes and lead to more informed decisions...
2016-06-07
Shallow-Water Propagation William L. Siegmann Rensselaer Polytechnic Institute 110 Eighth Street Troy, New York 12180-3590 phone: (518) 276...ocean_acoustics LONG-TERM GOALS Develop methods for propagation and coherence calculations in complex shallow-water environments, determine...intensity and coherence. APPROACH (A) Develop high accuracy PE techniques for applications to shallow-water sediments, accounting for
Applicability of Parametrized Form of Fully Dressed Quark Propagator
Institute of Scientific and Technical Information of China (English)
[Anonymous]
2006-01-01
Based on an extensive study of the Dyson-Schwinger equations for a fully dressed quark propagator in the "rainbow" approximation with an effective gluon propagator, a parametrized fully dressed confining quark propagator is suggested in this paper. The parametrized quark propagator describes confined quark propagation in hadrons, is analytic everywhere in the complex p²-plane, and has no Lehmann representation. The vector and scalar self-energy functions [1 - Af(p²)] and [Bf(p²) - mf], the dynamically running effective mass of the quark Mf(p²), and the structure of non-local as well as local quark vacuum condensates are predicted by use of the parametrized quark propagator. The results are compatible with other theoretical calculations.
Optimal Universal Uncertainty Relations
Li, Tao; Xiao, Yunlong; Ma, Teng; Fei, Shao-Ming; Jing, Naihuan; Li-Jost, Xianqing; Wang, Zhi-Xi
2016-01-01
We study universal uncertainty relations and present a method called joint probability distribution diagram to improve the majorization bounds constructed independently in [Phys. Rev. Lett. 111, 230401 (2013)] and [J. Phys. A. 46, 272002 (2013)]. The results give rise to state independent uncertainty relations satisfied by any nonnegative Schur-concave functions. On the other hand, a remarkable recent result of entropic uncertainty relation is the direct-sum majorization relation. In this paper, we illustrate our bounds by showing how they provide a complement to that in [Phys. Rev. A. 89, 052115 (2014)]. PMID:27775010
Indian Academy of Sciences (India)
Rituparna Chutia; Supahi Mahanta; D Datta
2014-04-01
The parameters associated with an environmental dispersion model may include different kinds of variability, imprecision and uncertainty. Most often, available information is interpreted in a probabilistic sense. Probability theory is a well-established theory for measuring such variability. However, not all available information, data or model parameters affected by variability, imprecision and uncertainty can be handled by traditional probability theory. Uncertainty or imprecision may occur due to incomplete information or data, measurement error, or data obtained from expert judgement or subjective interpretation of available data or information. Thus, model parameter data may be affected by subjective uncertainty. Traditional probability theory is inappropriate for representing subjective uncertainty. Possibility theory is used as a tool to describe parameters with insufficient knowledge. Based on the polynomial chaos expansion, the stochastic response surface method has been utilized in this article for the uncertainty propagation of an atmospheric dispersion model under consideration of both probabilistic and possibilistic information. The proposed method has been demonstrated through a hypothetical case study of atmospheric dispersion.
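A one-dimensional polynomial chaos surrogate of the kind underlying the stochastic response surface method can be sketched as follows. The "dispersion model" here is a toy lognormal response, and the degree-3 Hermite basis is an illustrative choice, not the paper's setup:

```python
import math
import numpy as np

rng = np.random.default_rng(7)
n = 20_000

# Toy "dispersion model": a lognormal response to one standard-normal input.
xi = rng.standard_normal(n)
y = np.exp(1.0 + 0.3 * xi)

# Probabilists' Hermite polynomials He_0..He_3 as regressors.
H = np.column_stack([np.ones(n), xi, xi**2 - 1.0, xi**3 - 3.0 * xi])
coef, *_ = np.linalg.lstsq(H, y, rcond=None)

# Orthogonality E[He_k^2] = k! turns the coefficients directly into moments:
pce_mean = coef[0]
pce_var = sum(math.factorial(k) * coef[k] ** 2 for k in (1, 2, 3))
print(f"PCE mean {pce_mean:.3f}, PCE variance {pce_var:.3f}")
```

Because the output statistics come from the fitted coefficients, the expensive model need only be evaluated to build the surrogate, after which probabilistic or possibilistic inputs can be propagated cheaply.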
Uncertainty in wind climate parameters and their influence on wind turbine fatigue loads
DEFF Research Database (Denmark)
Toft, Henrik Stensgaard; Svenningsen, Lasse; Sørensen, John Dalsgaard;
2016-01-01
Highlights: • Probabilistic framework for reliability assessment of site-specific wind turbines. • Uncertainty in wind climate parameters propagated to structural loads directly. • Sensitivity analysis to estimate the influence of wind climate parameters on reliability.
Farrance, Ian; Frenkel, Robert
2014-02-01
The Guide to the Expression of Uncertainty in Measurement (usually referred to as the GUM) provides the basic framework for evaluating uncertainty in measurement. The GUM however does not always provide clearly identifiable procedures suitable for medical laboratory applications, particularly when internal quality control (IQC) is used to derive most of the uncertainty estimates. The GUM modelling approach requires advanced mathematical skills for many of its procedures, but Monte Carlo simulation (MCS) can be used as an alternative for many medical laboratory applications. In particular, calculations for determining how uncertainties in the input quantities to a functional relationship propagate through to the output can be accomplished using a readily available spreadsheet such as Microsoft Excel. The MCS procedure uses algorithmically generated pseudo-random numbers which are then forced to follow a prescribed probability distribution. When IQC data provide the uncertainty estimates the normal (Gaussian) distribution is generally considered appropriate, but MCS is by no means restricted to this particular case. With input variations simulated by random numbers, the functional relationship then provides the corresponding variations in the output in a manner which also provides its probability distribution. The MCS procedure thus provides output uncertainty estimates without the need for the differential equations associated with GUM modelling. The aim of this article is to demonstrate the ease with which Microsoft Excel (or a similar spreadsheet) can be used to provide an uncertainty estimate for measurands derived through a functional relationship. In addition, we also consider the relatively common situation where an empirically derived formula includes one or more 'constants', each of which has an empirically derived numerical value. Such empirically derived 'constants' must also have associated uncertainties which propagate through the functional relationship
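The spreadsheet MCS procedure this abstract describes translates directly to a few lines of code. The functional relationship and the input uncertainties below are hypothetical placeholders for whatever measurand a laboratory derives:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 200_000

# Hypothetical functional relationship: concentration c = (A - b0) / (b1 * V),
# where A is an absorbance reading and b0, b1 are empirically derived
# calibration 'constants', each carrying its own standard uncertainty.
A  = rng.normal(0.850, 0.010, N)   # absorbance reading
b0 = rng.normal(0.020, 0.004, N)   # intercept (empirical constant)
b1 = rng.normal(0.410, 0.008, N)   # slope (empirical constant)
V  = rng.normal(10.00, 0.02,  N)   # volume, mL

c = (A - b0) / (b1 * V)

mean = c.mean()
u = c.std()
lo, hi = np.percentile(c, [2.5, 97.5])   # 95% coverage interval
print(f"c = {mean:.4f}, u(c) = {u:.4f}, 95% interval [{lo:.4f}, {hi:.4f}]")
```

As in the article, no partial derivatives are needed: the sampled inputs carry their distributions through the formula, and the output percentiles give the coverage interval directly.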
Uncertainty in biodiversity science, policy and management: a conceptual overview
Directory of Open Access Journals (Sweden)
Yrjö Haila
2014-10-01
The protection of biodiversity is a complex societal, political and ultimately practical imperative of current global society. The imperative builds upon scientific knowledge of human dependence on the life-support systems of the Earth. This paper aims at introducing the main types of uncertainty inherent in biodiversity science, policy and management, as an introduction to a companion paper summarizing practical experiences of scientists and scholars (Haila et al. 2014). Uncertainty is a cluster concept: the actual nature of uncertainty is inherently context-bound. We use semantic space as a conceptual device to identify key dimensions of uncertainty in the context of biodiversity protection; these relate to (i) data; (ii) proxies; (iii) concepts; (iv) policy and management; and (v) normative goals. Semantic space offers an analytic perspective for drawing critical distinctions between types of uncertainty, identifying fruitful resonances that help to cope with the uncertainties, and building up collaboration between different specialists to support mutual social learning.
Institute of Scientific and Technical Information of China (English)
LI Dian-qing; ZHANG Sheng-kun
2004-01-01
The classical probability theory cannot effectively quantify the parameter uncertainty in probability of detection. Furthermore, the conventional data analytic method and expert judgment method fail to handle the problem of model uncertainty updating with the information from nondestructive inspection. To overcome these disadvantages, a Bayesian approach was proposed to quantify the parameter uncertainty in probability of detection. Furthermore, the formulae of the multiplication factors to measure the statistical uncertainties in the probability of detection following the Weibull distribution were derived. A Bayesian updating method was applied to compute the posterior probabilities of model weights and the posterior probability density functions of distribution parameters of probability of detection. A total probability model method was proposed to analyze the problem of multi-layered model uncertainty updating. This method was then applied to the problem of multi-layered corrosion model uncertainty updating for ship structures. The results indicate that the proposed method is very effective in analyzing the problem of multi-layered model uncertainty updating.
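The Bayesian model-weight updating step can be sketched with two candidate POD curves and a handful of inspection outcomes. The Weibull-type POD forms, their parameters, and the data below are all illustrative assumptions:

```python
import math

# Two candidate POD (probability of detection) models as functions of flaw
# size a (mm); Weibull-type forms with assumed, illustrative parameters.
def pod_1(a):
    return 1.0 - math.exp(-((a / 2.0) ** 1.5))

def pod_2(a):
    return 1.0 - math.exp(-((a / 3.0) ** 2.5))

# Hypothetical inspection data: (flaw size in mm, detected?) pairs.
data = [(1.0, 0), (1.5, 1), (2.0, 1), (2.5, 1), (3.0, 1), (1.2, 0), (4.0, 1)]

prior = {"model_1": 0.5, "model_2": 0.5}
models = {"model_1": pod_1, "model_2": pod_2}

# Posterior weight proportional to prior times the Bernoulli likelihood
# of the observed hit/miss outcomes under each model.
post = {}
for name, pod in models.items():
    like = 1.0
    for a, hit in data:
        p = pod(a)
        like *= p if hit else (1.0 - p)
    post[name] = prior[name] * like

z = sum(post.values())
post = {k: v / z for k, v in post.items()}
print(post)
```

The total probability model method then mixes predictions over these posterior weights instead of committing to a single POD curve.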
DEFF Research Database (Denmark)
Christensen, Hanne Bjerre; Poulsen, Mette Erecius; Pedersen, Mikael
2003-01-01
The estimation of uncertainty of an analytical result has become important in analytical chemistry. It is especially difficult to determine uncertainties for multiresidue methods, e.g. for pesticides in fruit and vegetables, as the varieties of pesticide/commodity combinations are many... In the present study, recommendations from the International Organisation for Standardisation's (ISO) Guide to the Expression of Uncertainty and the EURACHEM/CITAC guide Quantifying Uncertainty in Analytical Measurements were followed to estimate the expanded uncertainties for 153 pesticides in fruit...
Assessment of errors and uncertainty patterns in GIA modeling
DEFF Research Database (Denmark)
Barletta, Valentina Roberta; Spada, G.
2012-01-01
, such as time-evolving shorelines and paleo coastlines. In this study we quantify these uncertainties and their propagation in GIA response using a Monte Carlo approach to obtain spatio-temporal patterns of GIA errors. A direct application is the error estimates in ice mass balance in Antarctica and Greenland...
Quantification of Modelling Uncertainties in Turbulent Flow Simulations
Edeling, W.N.
2015-01-01
The goal of this thesis is to make predictive simulations with Reynolds-Averaged Navier-Stokes (RANS) turbulence models, i.e. simulations with a systematic treatment of model and data uncertainties and their propagation through a computational model to produce predictions of quantities of interest w
Complex-Mass Definition and the Structure of Unstable Particle’s Propagator
Directory of Open Access Journals (Sweden)
Vladimir Kuksa
2015-01-01
The propagators of unstable particles are considered in the framework of the convolution representation. A spectral function is found for the special case in which the propagator of a scalar unstable particle has the Breit-Wigner form. The expressions for the dressed propagators of unstable vector and spinor fields are derived analytically for this case. We obtain the propagators in modified Breit-Wigner forms which correspond to the complex-mass definition.
Generalized Analytical Solutions for Nonlinear Positive-Negative Index Couplers
Directory of Open Access Journals (Sweden)
Zh. Kudyshev
2012-01-01
We find and analyze a generalized analytical solution for nonlinear wave propagation in waveguide couplers with opposite signs of the linear refractive index, nonzero phase mismatch between the channels, and arbitrary nonlinear coefficients.
Introduction to uncertainty quantification
Sullivan, T J
2015-01-01
Uncertainty quantification is a topic of increasing practical importance at the intersection of applied mathematics, statistics, computation, and numerous application areas in science and engineering. This text provides a framework in which the main objectives of the field of uncertainty quantification are defined, and an overview of the range of mathematical methods by which they can be achieved. Complete with exercises throughout, the book will equip readers with both theoretical understanding and practical experience of the key mathematical and algorithmic tools underlying the treatment of uncertainty in modern applied mathematics. Students and readers alike are encouraged to apply the mathematical methods discussed in this book to their own favourite problems to understand their strengths and weaknesses, also making the text suitable as a self-study. This text is designed as an introduction to uncertainty quantification for senior undergraduate and graduate students with a mathematical or statistical back...
Menger, Fredric M
2010-09-01
It might come as a disappointment to some chemists, but just as there are uncertainties in physics and mathematics, there are some chemistry questions we may never know the answer to either, suggests Fredric M. Menger.
Uncertainty, rationality, and agency
Hoek, Wiebe van der
2006-01-01
Goes across 'classical' borderlines of disciplines. Unifies logic, game theory, and epistemics and studies them in an agent setting. Combines classical and novel approaches to uncertainty, rationality, and agency.
Helrich, Carl S
2017-01-01
This advanced undergraduate textbook begins with the Lagrangian formulation of Analytical Mechanics and then passes directly to the Hamiltonian formulation and the canonical equations, with constraints incorporated through Lagrange multipliers. Hamilton's Principle and the canonical equations remain the basis of the remainder of the text. Topics considered for applications include small oscillations, motion in electric and magnetic fields, and rigid body dynamics. The Hamilton-Jacobi approach is developed with special attention to the canonical transformation in order to provide a smooth and logical transition into the study of complex and chaotic systems. Finally the text has a careful treatment of relativistic mechanics and the requirement of Lorentz invariance. The text is enriched with an outline of the history of mechanics, which particularly outlines the importance of the work of Euler, Lagrange, Hamilton and Jacobi. Numerous exercises with solutions support the exceptionally clear and concise treatment...
Bollini, C G
1998-01-01
We study the half-advanced, half-retarded Wheeler Green function and its relation to Feynman propagators, first for the massless equation, then for Klein-Gordon equations with arbitrary mass parameters: real, imaginary or complex. In all cases the Wheeler propagator lacks on-shell free propagation. The Wheeler function has support inside the light-cone (whatever the mass). The associated vacuum is symmetric with respect to annihilation and creation operators. We show with some examples that perturbative unitarity holds, whatever the mass (real or complex). Some possible applications are discussed.
Lemaire, Maurice
2014-01-01
Science is a quest for certainty, but lack of certainty is the driving force behind all of its endeavors. This book, specifically, examines the uncertainty of technological and industrial science. Uncertainty and Mechanics studies the concepts of mechanical design in an uncertain setting and explains engineering techniques for inventing cost-effective products. Though it references practical applications, this is a book about ideas and potential advances in mechanical science.
Communicating spatial uncertainty to non-experts using R
Luzzi, Damiano; Sawicka, Kasia; Heuvelink, Gerard; de Bruin, Sytze
2016-04-01
Effective visualisation methods are important for the efficient use of uncertainty information for various groups of users. Uncertainty propagation analysis is often used with spatial environmental models to quantify the uncertainty within the information. A challenge arises when trying to effectively communicate the uncertainty information to non-experts (not statisticians) in a wide range of cases. Due to the growing popularity and applicability of the open source programming language R, we undertook a project to develop an R package that facilitates uncertainty propagation analysis in spatial environmental modelling. The package has implemented Monte Carlo algorithms for uncertainty propagation, the output of which is represented by an ensemble of model outputs (i.e. a sample from a probability distribution). Numerous visualisation methods exist that aim to present such spatial uncertainty information both statically, dynamically and interactively. To provide the most universal visualisation tools for non-experts, we conducted a survey on a group of 20 university students and assessed the effectiveness of selected static and interactive methods for visualising uncertainty in spatial variables such as DEM and land cover. The static methods included adjacent maps and glyphs for continuous variables. Both allow for displaying maps with information about the ensemble mean, variance/standard deviation and prediction intervals. Adjacent maps were also used for categorical data, displaying maps of the most probable class, as well as its associated probability. The interactive methods included a graphical user interface, which in addition to displaying the previously mentioned variables also allowed for comparison of joint uncertainties at multiple locations. The survey indicated that users could understand the basics of the uncertainty information displayed in the static maps, with the interactive interface allowing for more in-depth information. Subsequently, the R
Generalized uncertainty principles
Machluf, Ronny
2008-01-01
The phenomenon in the essence of classical uncertainty principles is well known since the thirties of the last century. We introduce a new phenomenon which is in the essence of a new notion that we introduce: "Generalized Uncertainty Principles". We show the relation between classical uncertainty principles and generalized uncertainty principles. We generalized "Landau-Pollak-Slepian" uncertainty principle. Our generalization relates the following two quantities and two scaling parameters: 1) The weighted time spreading $\\int_{-\\infty}^\\infty |f(x)|^2w_1(x)dx$, ($w_1(x)$ is a non-negative function). 2) The weighted frequency spreading $\\int_{-\\infty}^\\infty |\\hat{f}(\\omega)|^2w_2(\\omega)d\\omega$. 3) The time weight scale $a$, ${w_1}_a(x)=w_1(xa^{-1})$ and 4) The frequency weight scale $b$, ${w_2}_b(\\omega)=w_2(\\omega b^{-1})$. "Generalized Uncertainty Principle" is an inequality that summarizes the constraints on the relations between the two spreading quantities and two scaling parameters. For any two reason...
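The two weighted spreading quantities can be evaluated numerically for the classical special case w1(x) = x² and w2(ω) = ω², which recovers the familiar Heisenberg-type product. The Gaussian test function and the FFT-based evaluation below are an illustrative check, not the thesis's construction:

```python
import numpy as np

# Unit-norm Gaussian f(x) = pi^{-1/4} exp(-x^2/2) on a wide, fine grid.
n, L = 4096, 40.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
dx = x[1] - x[0]
f = np.pi ** -0.25 * np.exp(-x ** 2 / 2)

# Weighted time spreading: integral of |f(x)|^2 x^2 dx.
spread_t = np.sum(np.abs(f) ** 2 * x ** 2) * dx

# Weighted frequency spreading via FFT; normalizing the power to unit sum
# makes the result independent of FFT scaling conventions.
F = np.fft.fft(f)
w = 2 * np.pi * np.fft.fftfreq(n, d=dx)   # angular frequency grid
power = np.abs(F) ** 2
power /= power.sum()
spread_w = np.sum(power * w ** 2)

print(spread_t, spread_w, spread_t * spread_w)
```

For this self-dual Gaussian both spreads equal 1/2, so their product attains the classical lower bound 1/4; rescaling the weights by the parameters a and b shifts where that bound sits, which is what the generalized inequality summarizes.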
Uncertainties in the simulation of groundwater recharge at different scales
Directory of Open Access Journals (Sweden)
H. Bogena
2005-01-01
Digital spatial data always imply some kind of uncertainty. The source of this uncertainty can be found in their compilation as well as the conceptual design, which causes a more or less exact abstraction of the real world, depending on the scale under consideration. Within the framework of hydrological modelling, in which numerous data sets from diverse sources of uneven quality are combined, the various uncertainties accumulate. In this study, the GROWA model is taken as an example to examine the effects of different types of uncertainties on the calculated groundwater recharge. Distributed input errors are determined for the parameters slope and aspect using a Monte Carlo approach. Landcover classification uncertainties are analysed by using the conditional probabilities of a remote sensing classification procedure. The uncertainties of data ensembles at different scales and study areas are discussed. The present uncertainty analysis showed that the Gaussian error propagation method is a useful technique for analysing the influence of input data on the simulated groundwater recharge. The uncertainties involved in the land use classification procedure and the digital elevation model can be significant in some parts of the study area. However, for the specific model used in this study it was shown that the precipitation uncertainties have the greatest impact on the total groundwater recharge error.
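Gaussian (first-order) error propagation of the kind used in this study can be sketched against a Monte Carlo check. The toy water-balance recharge model and the error magnitudes below are illustrative assumptions, not the GROWA equations:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 200_000

# Toy water-balance recharge model: R = P - ET - Q (all mm/yr),
# with independent Gaussian input errors.
P_mu, P_sd   = 800.0, 40.0   # precipitation
ET_mu, ET_sd = 500.0, 30.0   # evapotranspiration
Q_mu, Q_sd   = 100.0, 10.0   # direct runoff

# Gaussian error propagation: for this linear model the sensitivities are
# +/-1, so u_R^2 = u_P^2 + u_ET^2 + u_Q^2.
u_R = np.sqrt(P_sd**2 + ET_sd**2 + Q_sd**2)

# Monte Carlo check of the analytical result.
R = (rng.normal(P_mu, P_sd, N) - rng.normal(ET_mu, ET_sd, N)
     - rng.normal(Q_mu, Q_sd, N))
print(f"analytical u_R = {u_R:.1f} mm/yr, Monte Carlo sd = {R.std():.1f} mm/yr")
```

With the precipitation term given the largest input error, it dominates the combined variance, mirroring the study's finding that precipitation uncertainty has the greatest impact on the recharge error.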
Directory of Open Access Journals (Sweden)
Tommasi J.
2010-10-01
Full Text Available In the [eV; MeV] energy range, modelling of neutron-induced reactions is based on parameterized nuclear reaction models. Estimating covariances on cross sections or on nuclear reaction model parameters is a recurrent puzzle in nuclear data evaluation. Major breakthroughs were requested by nuclear reactor physicists to assign proper uncertainties for use in applications. In this paper, mathematical methods developed in the CONRAD code [2] are presented to explain the treatment of all types of uncertainties, including experimental ones (statistical and systematic), and to propagate them to nuclear reaction model parameters or cross sections. The marginalization procedure is then presented using analytical or Monte Carlo solutions. Furthermore, one major drawback identified by reactor physicists is that integral or analytical experiments (reactor mock-ups or simple integral experiments, e.g. ICSBEP, …) were not taken into account sufficiently early in the evaluation process to remove discrepancies. In this paper, we describe a mathematical framework to properly take this kind of information into account.
Slow wave propagation in soft adhesive interfaces.
Viswanathan, Koushik; Sundaram, Narayan K; Chandrasekar, Srinivasan
2016-11-16
Stick-slip in sliding of soft adhesive surfaces has long been associated with the propagation of Schallamach waves, a type of slow surface wave. Recently it was demonstrated using in situ experiments that two other kinds of slow waves-separation pulses and slip pulses-also mediate stick-slip (Viswanathan et al., Soft Matter, 2016, 12, 5265-5275). While separation pulses, like Schallamach waves, involve local interface detachment, slip pulses are moving stress fronts with no detachment. Here, we present a theoretical analysis of the propagation of these three waves in a linear elastodynamics framework. Different boundary conditions apply depending on whether or not local interface detachment occurs. It is shown that the interface dynamics accompanying slow waves is governed by a system of integral equations. Closed-form analytical expressions are obtained for the interfacial pressure, shear stress, displacements and velocities. Separation pulses and Schallamach waves emerge naturally as wave solutions of the integral equations, with oppositely oriented directions of propagation. Wave propagation is found to be stable in the stress regime where linearized elasticity is a physically valid approximation. Interestingly, the analysis reveals that slow traveling wave solutions are not possible in a Coulomb friction framework for slip pulses. The theory provides a unified picture of stick-slip dynamics and slow wave propagation in adhesive contacts, consistent with experimental observations.
Uncertainty and error in computational simulations
Energy Technology Data Exchange (ETDEWEB)
Oberkampf, W.L.; Diegert, K.V.; Alvin, K.F.; Rutherford, B.M.
1997-10-01
The present paper addresses the question: "What are the general classes of uncertainty and error sources in complex computational simulations?" This is the first step of a two-step process to develop a general methodology for quantitatively estimating the global modeling and simulation uncertainty in computational modeling and simulation. The second step is to develop a general mathematical procedure for representing, combining, and propagating all of the individual sources through the simulation. The authors develop a comprehensive view of the general phases of modeling and simulation. The phases proposed are: conceptual modeling of the physical system, mathematical modeling of the system, discretization of the mathematical model, computer programming of the discrete model, numerical solution of the model, and interpretation of the results. This new view is built upon combining phases recognized in the disciplines of operations research and numerical solution methods for partial differential equations. The characteristics and activities of each of these phases are discussed in general, with examples given for the fields of computational fluid dynamics and heat transfer. They argue that a clear distinction should be made between uncertainty and error that can arise in each of these phases. The present definitions of uncertainty and error are inadequate and, therefore, they propose comprehensive definitions for these terms. Specific classes of uncertainty and error sources are then defined that can occur in each phase of modeling and simulation. The numerical sources of error considered apply regardless of whether the discretization procedure is based on finite elements, finite volumes, or finite differences. To better explain the broad types of sources of uncertainty and error, and the utility of their categorization, they discuss a coupled-physics example simulation.
Uncertainty Quantification for Optical Model Parameters
Lovell, A E; Sarich, J; Wild, S M
2016-01-01
Although uncertainty quantification has been making its way into nuclear theory, these methods have yet to be explored in the context of reaction theory. For example, it is well known that different parameterizations of the optical potential can result in different cross sections, but these differences have not been systematically studied and quantified. The purpose of this work is to investigate the uncertainties in nuclear reactions that result from fitting a given model to elastic-scattering data, as well as to study how these uncertainties propagate to the inelastic and transfer channels. We use statistical methods to determine a best fit and create corresponding 95% confidence bands. A simple model of the process is fit to elastic-scattering data and used to predict either inelastic or transfer cross sections. In this initial work, we assume that our model is correct, and the only uncertainties come from the variation of the fit parameters. We study a number of reactions involving neutron and deuteron p...
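The fit-plus-confidence-band workflow the abstract describes can be illustrated generically (this is a hypothetical sketch on synthetic data, not the authors' reaction-theory code): fit a model by least squares, then propagate the parameter covariance to a 95% band on the prediction.

```python
import numpy as np

# Hedged illustration of the workflow above: fit a simple linear model to
# noisy synthetic data and build a 95% confidence band on the prediction by
# linear propagation of the parameter covariance. Not the authors' model.

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 40)
y = 2.0 * x + 1.0 + rng.normal(0, 1.0, x.size)

# least-squares fit; cov=True also returns the parameter covariance matrix
coeffs, cov = np.polyfit(x, y, 1, cov=True)

def band(x_new, coeffs, cov, z=1.96):
    """95% confidence band from linear propagation of parameter covariance."""
    J = np.vstack([x_new, np.ones_like(x_new)]).T  # Jacobian wrt (slope, intercept)
    y_fit = J @ coeffs
    var = np.einsum('ij,jk,ik->i', J, cov, J)      # diag(J cov J^T)
    half = z * np.sqrt(var)
    return y_fit, y_fit - half, y_fit + half

y_fit, lo, hi = band(x, coeffs, cov)
print(f"slope = {coeffs[0]:.2f} +/- {1.96 * np.sqrt(cov[0, 0]):.2f}")
```

The same band, evaluated for a quantity the data did not directly constrain, is the mechanism by which fit uncertainty "propagates to the inelastic and transfer channels" in the study.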
Uncertainty in Seismic Capacity of Masonry Buildings
Directory of Open Access Journals (Sweden)
Nicola Augenti
2012-07-01
Full Text Available Seismic assessment of masonry structures is plagued by both inherent randomness and model uncertainty. The former is referred to as aleatory uncertainty, the latter as epistemic uncertainty because it depends on the knowledge level. Pioneering studies on reinforced concrete buildings have revealed a significant influence of modeling parameters on seismic vulnerability. However, confidence in mechanical properties of existing masonry buildings is much lower than in the case of reinforcing steel and concrete. This paper is aimed at assessing whether and how uncertainty propagates from material properties to seismic capacity of an entire masonry structure. A typical two-story unreinforced masonry building is analyzed. Based on previous statistical characterization of mechanical properties of existing masonry types, the following random variables have been considered in this study: unit weight, uniaxial compressive strength, shear strength at zero confining stress, Young’s modulus, shear modulus, and available ductility in shear. Probability density functions were implemented to generate a significant number of realizations and static pushover analysis of the case-study building was performed for each vector of realizations, load combination and lateral load pattern. Analysis results show a large dispersion in displacement capacity and lower dispersion in spectral acceleration capacity. This can directly affect decision-making because both design and retrofit solutions depend on seismic capacity predictions. Therefore, engineering judgment should always be used when assessing structural safety of existing masonry constructions against design earthquakes, based on a series of seismic analyses under uncertain parameters.
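The Monte Carlo step in the study, sampling material properties and pushing each realization through a capacity model, can be sketched with a deliberately simplified wall shear-capacity formula (all distributions and values here are hypothetical placeholders, not the paper's statistical characterization or its pushover model).

```python
import numpy as np

# Hedged sketch of the sampling step: draw masonry shear strength from an
# assumed lognormal distribution and propagate it through a simplified
# capacity formula V = A * (tau0 + mu * sigma_n). Hypothetical values.

rng = np.random.default_rng(1)
n = 50_000

A = 1.2           # wall cross-section area, m^2 (hypothetical)
sigma_n = 0.30    # mean compressive stress, MPa (hypothetical)
mu_fric = 0.4     # friction coefficient (hypothetical)

# shear strength at zero confining stress: lognormal, median 0.10 MPa
tau0 = rng.lognormal(mean=np.log(0.10), sigma=0.3, size=n)

V = A * (tau0 + mu_fric * sigma_n)  # capacity realizations, MN

cov_tau0 = tau0.std() / tau0.mean()
cov_V = V.std() / V.mean()
print(f"CoV of tau0: {cov_tau0:.2f}, CoV of capacity: {cov_V:.2f}")
```

Even this toy model reproduces a qualitative point of the abstract: the dispersion of the output need not equal the dispersion of any single input, because deterministic terms dilute (or nonlinearities amplify) the input variability.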
Wit, de A.J.W.; Bruin, de S.
2006-01-01
Previous analyses of the effects of uncertainty in precipitation fields on the output of EU Crop Growth Monitoring System (CGMS) demonstrated that the influence on simulated crop yield was limited at national scale, but considerable at local and regional scales. We aim to propagate uncertainty due t
Mužík, Zbyněk
2006-01-01
This thesis deals with measuring indicators related to the operation of websites and applications, and with the technological tools serving that purpose: Web Analytics (WA). The main goal of the thesis is to test and compare selected representatives of these tools against objective criteria, and also to critically assess the capabilities of WA tools in general. The first part of the thesis focuses on describing different ways of measuring traffic on the WWW and defines the related metrics. It also provides an overview of avail...
Network planning under uncertainties
Ho, Kwok Shing; Cheung, Kwok Wai
2008-11-01
One of the main focuses of network planning is the optimization of the network resources required to build a network under a certain traffic demand projection. Traditionally, the inputs to this type of network planning problem are treated as deterministic. In reality, varying traffic requirements and fluctuations in network resources can cause uncertainties in the decision models. The failure to include these uncertainties in the network design process can severely affect the feasibility and economics of the network. Therefore, it is essential to find a solution that can be insensitive to the uncertain conditions during the network planning process. As early as the 1960s, network planning problems with varying traffic requirements over time were being studied. This kind of network planning problem is still being actively researched, especially for VPN network design. Another kind of network planning problem under uncertainty, studied actively in the past decade, addresses fluctuations in network resources. One such hotly pursued research topic is survivable network planning. It considers the design of a network under uncertainties brought by fluctuations in topology, to meet the requirement that the network remains intact up to a certain number of faults occurring anywhere in the network. Recently, the authors proposed a new planning methodology called Generalized Survivable Networks that tackles the network design problem under both varying traffic requirements and fluctuations of topology. Although all the above network planning problems handle various kinds of uncertainties, it is hard to find a generic framework under more general uncertainty conditions that allows a more systematic way to solve the problems. With a unified framework, the seemingly diverse models and algorithms can be intimately related, and possibly more insights and improvements can be brought out for solving the problem. This motivates us to seek a
Market uncertainty; Markedsusikkerhet
Energy Technology Data Exchange (ETDEWEB)
Doorman, Gerard; Holtan, Jon Anders; Mo, Birger; Groenli, Helle; Haaland, Magnar; Grinden, Bjoern
1997-04-10
In Norway, the project "Market uncertainty" has been in progress for over two years and resulted in increased skill in the use of the Grid System Operation Model. This report classifies some of the factors which lead to uncertainties in the electric power market. It has been examined whether these factors should be, or can be, modelled in the available simulation models. Some of the factors have been further considered and methods of modelling the associated uncertainties have been examined. It is concluded that (1) There is a need for automatic simulation of several scenarios in the model, and these scenarios should incorporate probability parameters, (2) At first it is most important that one can handle uncertainties in fuel prices and demand, (3) Market uncertainty which is due to irrational behaviour should be dealt with in a separate model. The difference between real and simulated prices should be analysed and modelled with a time series model, (4) Risk should be included in the Vansimtap model by way of feedback from simulations, (5) The marginal values of stored water as calculated by means of the various methods in use should be compared systematically. 9 refs., 16 figs., 5 tabs.
Interpreting uncertainty terms.
Holtgraves, Thomas
2014-08-01
Uncertainty terms (e.g., some, possible, good, etc.) are words that do not have a fixed referent and hence are relatively ambiguous. A model is proposed that specifies how, from the hearer's perspective, recognition of facework as a potential motive for the use of an uncertainty term results in a calibration of the intended meaning of that term. Four experiments are reported that examine the impact of face threat, and the variables that affect it (e.g., power), on the manner in which a variety of uncertainty terms (probability terms, quantifiers, frequency terms, etc.) are interpreted. Overall, the results demonstrate that increased face threat in a situation will result in a more negative interpretation of an utterance containing an uncertainty term. That the interpretation of so many different types of uncertainty terms is affected in the same way suggests the operation of a fundamental principle of language use, one with important implications for the communication of risk, subjective experience, and so on.
Fuzzy-probabilistic calculations of water-balance uncertainty
Energy Technology Data Exchange (ETDEWEB)
Faybishenko, B.
2009-10-01
Hydrogeological systems are often characterized by imprecise, vague, inconsistent, incomplete, or subjective information, which may limit the application of conventional stochastic methods in predicting hydrogeologic conditions and associated uncertainty. Instead, predictions and uncertainty analysis can be made using uncertain input parameters expressed as probability boxes, intervals, and fuzzy numbers. The objective of this paper is to present the theory for, and a case study as an application of, the fuzzy-probabilistic approach, combining probability and possibility theory for simulating soil water balance and assessing associated uncertainty in the components of a simple water-balance equation. The application of this approach is demonstrated using calculations with the RAMAS Risk Calc code, to assess the propagation of uncertainty in calculating potential evapotranspiration, actual evapotranspiration, and infiltration in a case study at the Hanford site, Washington, USA. Propagation of uncertainty into the results of water-balance calculations was evaluated by changing the types of models of uncertainty incorporated into various input parameters. The results of these fuzzy-probabilistic calculations are compared to the conventional Monte Carlo simulation approach and estimates from field observations at the Hanford site.
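When inputs are known only as intervals, the simplest non-probabilistic propagation is interval arithmetic over the extreme combinations. The sketch below illustrates that idea for one water-balance term; it uses hypothetical numbers and plain interval arithmetic, not the RAMAS Risk Calc code or the paper's p-box machinery.

```python
# Minimal sketch of interval (possibilistic) propagation for a water-balance
# term: if P and ET are only known as intervals, the infiltration I = P - ET
# is the interval spanned by the extreme combinations. Hypothetical values.

def interval_sub(a, b):
    """Subtract interval b from interval a: [a_lo - b_hi, a_hi - b_lo]."""
    return (a[0] - b[1], a[1] - b[0])

precip = (160.0, 220.0)   # precipitation, mm/yr (hypothetical interval)
aet = (120.0, 180.0)      # actual evapotranspiration, mm/yr (hypothetical)

infiltration = interval_sub(precip, aet)
print(f"infiltration in [{infiltration[0]:.0f}, {infiltration[1]:.0f}] mm/yr")
```

A probability box generalizes this by attaching bounding cumulative distributions to each endpoint; the interval case is the information-free limit.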
Strategies for Application of Isotopic Uncertainties in Burnup Credit
Energy Technology Data Exchange (ETDEWEB)
Gauld, I.C.
2002-12-23
Uncertainties in the predicted isotopic concentrations in spent nuclear fuel represent one of the largest sources of overall uncertainty in criticality calculations that use burnup credit. The methods used to propagate the uncertainties in the calculated nuclide concentrations to the uncertainty in the predicted neutron multiplication factor (k{sub eff}) of the system can have a significant effect on the uncertainty in the safety margin in criticality calculations and ultimately affect the potential capacity of spent fuel transport and storage casks employing burnup credit. Methods that can provide a more accurate and realistic estimate of the uncertainty may enable increased spent fuel cask capacity and fewer casks needing to be transported, thereby reducing the regulatory burden on licensees while maintaining safety for transporting spent fuel. This report surveys several different best-estimate strategies for considering the effects of nuclide uncertainties in burnup-credit analyses. The potential benefits of these strategies are illustrated for a prototypical burnup-credit cask design. The subcritical margin estimated using best-estimate methods is discussed in comparison to the margin estimated using conventional bounding methods of uncertainty propagation. To quantify the comparison, each of the strategies for estimating uncertainty has been performed using a common database of spent fuel isotopic assay measurements for pressurized light-water reactor fuels and predicted nuclide concentrations obtained using the current version of the SCALE code system. The experimental database applied in this study has been significantly expanded to include new high-enrichment and high-burnup spent fuel assay data recently published for a wide range of important burnup-credit actinides and fission products. Expanded rare earth fission-product measurements performed at the Khlopin Radium Institute in Russia that contain the only known publicly-available measurement for {sup 103
A discussion on the Heisenberg uncertainty principle from the perspective of special relativity
Nanni, Luca
2016-09-01
In this note, we consider the implications of the Heisenberg uncertainty principle (HUP) when computing uncertainties that affect the main dynamical quantities, from the perspective of special relativity. Using the well-known formula for propagating statistical errors, we prove that the uncertainty relations between the moduli of conjugate observables are not relativistically invariant. The new relationships show that, in experiments involving relativistic particles, limitations of the precision of a quantity obtained by indirect calculations may affect the final result.
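The "well-known formula for propagating statistical errors" invoked in the abstract is the first-order Taylor result for independent variables; it is stated below next to the standard position-momentum uncertainty relation for reference (both are textbook forms, not results specific to this paper):

```latex
\Delta x \, \Delta p \;\ge\; \frac{\hbar}{2},
\qquad
\sigma_f^2 \;\simeq\; \sum_i \left( \frac{\partial f}{\partial x_i} \right)^{\!2} \sigma_{x_i}^2 .
```

The paper's point is that applying the second formula to the moduli of relativistic dynamical quantities (which mix the non-relativistic variables through Lorentz factors) yields relations that are not form-invariant under a change of inertial frame.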
Sources of uncertainties in modelling black carbon at the global scale
2010-01-01
Our understanding of the global black carbon (BC) cycle is essentially qualitative due to uncertainties in our knowledge of its properties. This work investigates two sources of uncertainty in modelling black carbon: those due to the use of different schemes for BC ageing and its removal rate in the global Transport-Chemistry model TM5, and those due to the uncertainties in the definition and quantification of the observations, which propagate through to both the emission inventories, and the...
DEFF Research Database (Denmark)
Nielsen, Jesper Ellerbæk; Thorndahl, Søren Liedtke; Rasmussen, Michael R.
2011-01-01
the uncertainty of the weather radar rainfall input. The main finding of this work is that the input uncertainty propagates through the urban drainage model with significant effects on the model result. The GLUE methodology is in general a usable way to explore this uncertainty, although the exact width...... of the prediction bands can be questioned, due to the subjective nature of the method. Moreover, the method also gives very useful information about the model and parameter behaviour....
Measurement uncertainty relations
Energy Technology Data Exchange (ETDEWEB)
Busch, Paul, E-mail: paul.busch@york.ac.uk [Department of Mathematics, University of York, York (United Kingdom); Lahti, Pekka, E-mail: pekka.lahti@utu.fi [Turku Centre for Quantum Physics, Department of Physics and Astronomy, University of Turku, FI-20014 Turku (Finland); Werner, Reinhard F., E-mail: reinhard.werner@itp.uni-hannover.de [Institut für Theoretische Physik, Leibniz Universität, Hannover (Germany)
2014-04-15
Measurement uncertainty relations are quantitative bounds on the errors in an approximate joint measurement of two observables. They can be seen as a generalization of the error/disturbance tradeoff first discussed heuristically by Heisenberg. Here we prove such relations for the case of two canonically conjugate observables like position and momentum, and establish a close connection with the more familiar preparation uncertainty relations constraining the sharpness of the distributions of the two observables in the same state. Both sets of relations are generalized to means of order α rather than the usual quadratic means, and we show that the optimal constants are the same for preparation and for measurement uncertainty. The constants are determined numerically and compared with some bounds in the literature. In both cases, the near-saturation of the inequalities entails that the state (resp. observable) is uniformly close to a minimizing one.
SAGD optimization under uncertainty
Energy Technology Data Exchange (ETDEWEB)
Gossuin, J.; Naccache, P. [Schlumberger SIS, Abingdon (United Kingdom); Bailley, W.; Couet, B. [Schlumberger-Doll Research, Cambridge, MA, (United States)
2011-07-01
In the heavy oil industry, the steam assisted gravity drainage process is often used to enhance oil recovery but this is a costly method and ways to make it more efficient are needed. Multiple methods have been developed to optimize the SAGD process but none of them explicitly considered uncertainty. This paper presents an optimization method in the presence of reservoir uncertainty. This process was tested on an SAGD model where three equi-probable geological models are possible. Preparatory steps were first performed to identify key variables and the optimization model was then proposed. The method was shown to be successful in handling a significant number of uncertainties, optimizing the SAGD process and preventing premature steam channels that can choke production. The optimization method presented herein was successfully applied to an SAGD process and was shown to provide better strategies than sensitivity analysis while handling more complex problems.
Treatment of precipitation uncertainty in rainfall-runoff modelling: a fuzzy set approach
Maskey, Shreedhar; Guinot, Vincent; Price, Roland K.
2004-09-01
The uncertainty in forecasted precipitation remains a major source of uncertainty in real-time flood forecasting. Precipitation uncertainty consists of uncertainty in (i) the magnitude, (ii) the temporal distribution, and (iii) the spatial distribution of the precipitation. This paper presents a methodology for propagating the precipitation uncertainty through a deterministic rainfall-runoff-routing model for flood forecasting. It uses fuzzy set theory combined with genetic algorithms. The uncertainty due to the unknown temporal distribution of the precipitation is handled by disaggregating the precipitation into subperiods. The methodology based on fuzzy set theory is particularly useful where a probabilistic forecast of precipitation is not available. A catchment model of the Klodzko valley (Poland) built with HEC-1 and HEC-HMS was used for the application. The results showed that the output uncertainty due to the uncertain temporal distribution of precipitation can be significantly dominant over the uncertainty due to the uncertain quantity of precipitation.
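Fuzzy-set propagation is usually implemented with alpha-cuts: at each membership level the fuzzy input reduces to an interval, which is pushed through the model. The sketch below does this for a triangular fuzzy precipitation and a monotone toy runoff relation; it is loosely in the spirit of the paper but uses hypothetical numbers and is not the HEC-1/HEC-HMS setup.

```python
# Hedged sketch of fuzzy (alpha-cut) uncertainty propagation: represent
# precipitation as a triangular fuzzy number and push each alpha-cut interval
# through a monotone runoff function Q = c * P. Hypothetical values.

def tri_alpha_cut(lo, mode, hi, alpha):
    """Interval of a triangular fuzzy number at membership level alpha."""
    return (lo + alpha * (mode - lo), hi - alpha * (hi - mode))

runoff_coeff = 0.6  # hypothetical runoff coefficient

for alpha in (0.0, 0.5, 1.0):
    p_lo, p_hi = tri_alpha_cut(40.0, 60.0, 90.0, alpha)   # precipitation, mm
    # for a monotone model, propagating the endpoints propagates the interval
    q_lo, q_hi = runoff_coeff * p_lo, runoff_coeff * p_hi
    print(f"alpha={alpha:.1f}: Q in [{q_lo:.1f}, {q_hi:.1f}] mm")
```

At alpha = 1 the interval collapses to the modal value, and at alpha = 0 it spans the full support; for non-monotone models the endpoint shortcut is replaced by an optimization over each cut, which is where the paper's genetic algorithm comes in.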
Uncertainty and validation. Effect of user interpretation on uncertainty estimates
Energy Technology Data Exchange (ETDEWEB)
Kirchner, G. [Univ. of Bremen (Germany)]; Peterson, R. [AECL, Chalk River, ON (Canada)]; and others
1996-11-01
Uncertainty in predictions of environmental transfer models arises from, among other sources, the adequacy of the conceptual model, the approximations made in coding the conceptual model, the quality of the input data, the uncertainty in parameter values, and the assumptions made by the user. In recent years efforts to quantify the confidence that can be placed in predictions have been increasing, but have concentrated on a statistical propagation of the influence of parameter uncertainties on the calculational results. The primary objective of this Working Group of BIOMOVS II was to test users' influence on model predictions on a more systematic basis than has been done before. The main goals were as follows: To compare differences between predictions from different people all using the same model and the same scenario description with the statistical uncertainties calculated by the model. To investigate the main reasons for different interpretations by users. To create a better awareness of the potential influence of the user on the modeling results. Terrestrial food chain models driven by deposition of radionuclides from the atmosphere were used. Three codes were obtained and run with three scenarios by a maximum of 10 users. A number of conclusions can be drawn, some of which are general and independent of the type of models and processes studied, while others are restricted to the few processes that were addressed directly: For any set of predictions, the variation in best estimates was greater than one order of magnitude. Often the range increased from deposition to pasture to milk, probably due to additional transfer processes. The 95% confidence intervals about the predictions calculated from the parameter distributions prepared by the participants did not always overlap the observations; similarly, sometimes the confidence intervals on the predictions did not overlap. Often the 95% confidence intervals of individual predictions were smaller than the
Sensitivity and uncertainty analysis
Cacuci, Dan G; Navon, Ionel Michael
2005-01-01
As computer-assisted modeling and analysis of physical processes have continued to grow and diversify, sensitivity and uncertainty analyses have become indispensable scientific tools. Sensitivity and Uncertainty Analysis. Volume I: Theory focused on the mathematical underpinnings of two important methods for such analyses: the Adjoint Sensitivity Analysis Procedure and the Global Adjoint Sensitivity Analysis Procedure. This volume concentrates on the practical aspects of performing these analyses for large-scale systems. The applications addressed include two-phase flow problems, a radiative c
Uncertainty in artificial intelligence
Levitt, TS; Lemmer, JF; Shachter, RD
1990-01-01
Clearly illustrated in this volume is the current relationship between Uncertainty and AI. It has been said that research in AI revolves around five basic questions asked relative to some particular domain: What knowledge is required? How can this knowledge be acquired? How can it be represented in a system? How should this knowledge be manipulated in order to provide intelligent behavior? How can the behavior be explained? In this volume, all of these questions are addressed. From the perspective of the relationship of uncertainty to the basic questions of AI, the book divides naturally i
Instability versus equilibrium propagation of a laser beam in plasma.
Lushnikov, Pavel M; Rose, Harvey A
2004-06-25
We obtain, for the first time, an analytic theory of the forward stimulated Brillouin scattering instability of a spatially and temporally incoherent laser beam that controls the transition between statistical equilibrium and nonequilibrium (unstable) self-focusing regimes of beam propagation. The stability boundary may be used as a comprehensive guide for inertial confinement fusion designs. Well into the stable regime, an analytic expression for the angular diffusion coefficient is obtained, which provides an essential correction to a geometric optic approximation for beam propagation.
Asymmetrical dynamic propagation problems on mode Ⅲ interface crack
Institute of Scientific and Technical Information of China (English)
L(U) Nian-chun; YANG Ding-ning; CHENG Yun-hong; CHENG Jin
2007-01-01
By application of the theory of complex functions, asymmetrical dynamic propagation problems for mode III interface cracks are studied. Universal representations of the analytical solutions are obtained by the approach of self-similar functions. The problems under study can readily be transformed into Riemann-Hilbert problems, and analytical solutions for an asymmetrically propagating crack under point loads and unit-step loads, respectively, are acquired. By applying the superposition theorem to those solutions, solutions of arbitrarily complex problems can then be attained.
High Performance Orbital Propagation Using a Generic Software Architecture
Möckel, M.; Bennett, J.; Stoll, E.; Zhang, K.
2016-09-01
Orbital propagation is a key element in many fields of space research. Over the decades, scientists have developed numerous orbit propagation algorithms, often tailored to specific use cases that vary in available input data, desired output as well as demands of execution speed and accuracy. Conjunction assessments, for example, require highly accurate propagations of a relatively small number of objects while statistical analyses of the (untracked) space debris population need a propagator that can process large numbers of objects in a short time with only medium accuracy. Especially in the latter case, a significant increase of computation speed can be achieved by using graphics processors, devices that are designed to process hundreds or thousands of calculations in parallel. In this paper, an analytical propagator is introduced that uses graphics processing to reduce the run time for propagating a large space debris population from several hours to minutes with only a minor loss of accuracy. A second performance analysis is conducted on a parallelised version of the popular SGP4 algorithm. It is discussed how these modifications can be applied to more accurate numerical propagators. Both programs are implemented using a generic, plugin-based software architecture designed for straightforward integration of propagators into other software tools. It is shown how this architecture can be used to easily integrate, compare and combine different orbital propagators, both CPU and GPU-based.
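The data-parallel pattern a GPU propagator exploits, the same closed-form update applied independently to every object, can be shown with a deliberately minimal two-body example (vectorized NumPy standing in for the GPU, circular orbits only; this is a toy illustration, not the paper's propagator or SGP4).

```python
import numpy as np

# Toy illustration of bulk analytical propagation: advance the mean anomaly
# of many circular two-body orbits at once with vectorized operations, the
# same per-object-independent pattern a GPU propagator parallelizes.

MU_EARTH = 398600.4418  # Earth gravitational parameter, km^3/s^2

def propagate_mean_anomaly(a_km, M0_rad, dt_s):
    """Mean anomaly after dt for semi-major axes a_km (fully vectorized)."""
    n = np.sqrt(MU_EARTH / a_km**3)          # mean motion, rad/s
    return (M0_rad + n * dt_s) % (2 * np.pi)

rng = np.random.default_rng(7)
n_obj = 100_000
a = rng.uniform(6800.0, 7800.0, n_obj)       # LEO-like semi-major axes, km
M0 = rng.uniform(0.0, 2 * np.pi, n_obj)

M1 = propagate_mean_anomaly(a, M0, dt_s=86400.0)  # propagate one day
print(M1.shape)
```

Because each object's update touches no other object's state, the loop maps directly onto thousands of GPU threads; accuracy-oriented propagators add perturbation terms but keep the same independence.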
Working fluid selection for organic Rankine cycles - Impact of uncertainty of fluid properties
DEFF Research Database (Denmark)
Frutiger, Jerome; Andreasen, Jesper Graa; Liu, Wei
2016-01-01
of process models and constraints; 2) selection of property models, i.e. the Peng-Robinson equation of state; 3) screening of 1965 possible working fluid candidates, including identification of optimal process parameters based on Monte Carlo sampling; 4) propagating the uncertainty of fluid parameters to the ORC net power output...... This study presents a generic methodology to select working fluids for ORC (Organic Rankine Cycles), taking into account property uncertainties of the working fluids. A Monte Carlo procedure is described as a tool to propagate the influence of the input uncertainty of the fluid parameters on the ORC...
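Step 4 of the methodology, propagating fluid-property uncertainty to the net power output by Monte Carlo, can be sketched with a drastically simplified power expression (the property, its uncertainty, and the cycle relation below are hypothetical placeholders, not the authors' ORC model or working-fluid data).

```python
import numpy as np

# Minimal Monte Carlo sketch of property-uncertainty propagation: sample an
# uncertain fluid property (an ideal-gas-like cp) and push it through a
# simplified net-power estimate W = m_dot * cp * dT * eta. Hypothetical values.

rng = np.random.default_rng(3)
n = 100_000

m_dot = 10.0   # mass flow, kg/s (hypothetical)
dT = 40.0      # temperature drop across the expander, K (hypothetical)
eta = 0.15     # lumped cycle efficiency factor (hypothetical)

# cp with 2% relative standard uncertainty around 1.5 kJ/(kg K)
cp = rng.normal(1.5, 0.03, size=n)

W = m_dot * cp * dT * eta  # net power realizations, kW

print(f"net power: {W.mean():.1f} kW, rel. std: {100 * W.std() / W.mean():.1f}%")
```

For this linear relation the 2% input uncertainty passes through unchanged; in a real cycle model the equation of state enters nonlinearly, which is exactly why the sampling approach is needed.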
Institute of Scientific and Technical Information of China (English)
扈庆; 李显芳; 张继蓉; 印成; 李亚莹
2013-01-01
Sludge samples from a municipal wastewater treatment plant were digested using microwave digestion, and the zinc content of the digested samples was determined by flame atomic absorption spectrometry (FAAS). Measurement uncertainty refers to the random uncertainty in the measurement process; a result is complete only when the measured data and its uncertainty are provided together. The measurement uncertainty was evaluated based on the guide for the evaluation and expression of uncertainty in measurement. The sources of uncertainty in the measurement were analyzed, every uncertainty component was evaluated and quantified, and the combined standard uncertainty and the expanded uncertainty were calculated. The zinc content of the sludge sample was expressed as (465±11) mg/kg, k=2.
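The combination step the abstract describes follows the standard GUM recipe: independent standard-uncertainty components are combined in quadrature and expanded with a coverage factor k = 2. The component values below are hypothetical placeholders (the paper does not list its budget here); only the reported result (465 mg/kg, k = 2) comes from the abstract.

```python
import math

# GUM-style combination sketch: combine independent standard-uncertainty
# components in quadrature, then expand with coverage factor k = 2 (~95%).
# The component values are hypothetical, not the paper's actual budget.

value = 465.0  # measured zinc content, mg/kg (from the abstract)

components = {          # hypothetical standard uncertainties, mg/kg
    "calibration curve": 3.5,
    "repeatability": 2.8,
    "sample mass": 1.2,
    "volume/dilution": 2.0,
}

u_combined = math.sqrt(sum(u**2 for u in components.values()))
U_expanded = 2 * u_combined  # coverage factor k = 2

print(f"({value:.0f} +/- {U_expanded:.0f}) mg/kg, k=2")
```

Quadrature addition is valid only when the components are uncorrelated; correlated contributions require the covariance terms of the full GUM law of propagation.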
Wave propagation in sandwich panels with a poroelastic core.
Liu, Hao; Finnveden, Svante; Barbagallo, Mathias; Arteaga, Ines Lopez
2014-05-01
Wave propagation in sandwich panels with a poroelastic core, which is modeled by Biot's theory, is investigated using the waveguide finite element method. A waveguide poroelastic element is developed based on a displacement-pressure weak form. The dispersion curves of the sandwich panel are first identified as propagating or evanescent waves by varying the damping in the panel, and wave characteristics are analyzed by examining their motions. The energy distributions are calculated to identify the dominant motions. Simplified analytical models are also devised to show the main physics of the corresponding waves. This wave propagation analysis provides insight into the vibro-acoustic behavior of sandwich panels lined with elastic porous materials.
Hierarchical Affinity Propagation
Givoni, Inmar; Frey, Brendan J
2012-01-01
Affinity propagation is an exemplar-based clustering algorithm that finds a set of data-points that best exemplify the data, and associates each datapoint with one exemplar. We extend affinity propagation in a principled way to solve the hierarchical clustering problem, which arises in a variety of domains including biology, sensor networks and decision making in operational research. We derive an inference algorithm that operates by propagating information up and down the hierarchy, and is efficient despite the high-order potentials required for the graphical model formulation. We demonstrate that our method outperforms greedy techniques that cluster one layer at a time. We show that on an artificial dataset designed to mimic the HIV-strain mutation dynamics, our method outperforms related methods. For real HIV sequences, where the ground truth is not available, we show our method achieves better results, in terms of the underlying objective function, and show the results correspond meaningfully to geographi...
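The flat (non-hierarchical) message-passing scheme that the above work extends can be sketched in pure Python; the similarity choice, median preference and damping are common defaults, not the authors' settings:

```python
def affinity_propagation(x, damping=0.5, iters=100):
    n = len(x)
    # Similarity: negative squared distance (a standard choice).
    s = [[-(x[i] - x[k]) ** 2 for k in range(n)] for i in range(n)]
    # Preference (self-similarity) set to the median off-diagonal similarity,
    # a common default yielding a moderate number of exemplars.
    off = sorted(s[i][k] for i in range(n) for k in range(n) if i != k)
    for i in range(n):
        s[i][i] = off[len(off) // 2]
    r = [[0.0] * n for _ in range(n)]  # responsibilities
    a = [[0.0] * n for _ in range(n)]  # availabilities
    for _ in range(iters):
        # Responsibilities: evidence that k should be the exemplar for i.
        for i in range(n):
            for k in range(n):
                m = max(a[i][kp] + s[i][kp] for kp in range(n) if kp != k)
                r[i][k] = damping * r[i][k] + (1 - damping) * (s[i][k] - m)
        # Availabilities: accumulated support for k serving as an exemplar.
        for k in range(n):
            pos = [max(0.0, r[ip][k]) for ip in range(n)]
            for i in range(n):
                if i == k:
                    a[k][k] = damping * a[k][k] + (1 - damping) * (sum(pos) - pos[k])
                else:
                    v = r[k][k] + sum(pos) - pos[i] - pos[k]
                    a[i][k] = damping * a[i][k] + (1 - damping) * min(0.0, v)
    exemplars = [k for k in range(n) if r[k][k] + a[k][k] > 0]
    labels = [max(exemplars, key=lambda k: s[i][k]) for i in range(n)]
    return exemplars, labels

# Two well-separated 1-D clusters; expect one exemplar per cluster.
ex, labels = affinity_propagation([0.0, 0.5, 1.0, 10.0, 10.5, 11.0])
```

The hierarchical extension propagates such messages up and down layers of exemplars rather than over a single flat similarity matrix.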
Institute of Scientific and Technical Information of China (English)
范梦璇
2015-01-01
Employee change-related uncertainty is a condition in which, under the current continually changing business environment, organizations also have to change; the changes include strategic direction, structure and staffing levels, to help the company stay competitive (Armenakis & Bedeian, 1999). However, these
DEFF Research Database (Denmark)
Greasley, David; Madsen, Jakob B.
2006-01-01
A severe collapse of fixed capital formation distinguished the onset of the Great Depression from other investment downturns between the world wars. Using a model estimated for the years 1890-2000, we show that the expected profitability of capital measured by Tobin's q, and the uncertainty...
Cettolin, E.; Riedl, A.M.
2013-01-01
An important element for the public support of policies is their perceived justice. At the same time most policy choices have uncertain outcomes. We report the results of a first experiment investigating just allocations of resources when some recipients are exposed to uncertainty. Although, under c
The factualization of uncertainty:
DEFF Research Database (Denmark)
Meyer, G.; Folker, A.P.; Jørgensen, R.B.
2005-01-01
exercises, scientific uncertainty is turned into risk, expressed in facts and figures. Paradoxically, this conveys an impression of certainty, while value-disagreement and conflicts of interest remain hidden below the surface of factuality. Public dialogue and negotiation along these lines are rendered...
Vehicle Routing under Uncertainty
Máhr, T.
2011-01-01
In this thesis, the main focus is on the study of a real-world transportation problem with uncertainties, and on the comparison of a centralized and a distributed solution approach in the context of this problem. We formalize the real-world problem, and provide a general framework to extend it with
Uncertainties in repository modeling
Energy Technology Data Exchange (ETDEWEB)
Wilson, J.R.
1996-12-31
The distant future is very difficult to predict. Unfortunately, our regulators are being encouraged to extend the regulatory period from the standard 10,000 years to 1 million years. Such overconfidence is not justified due to uncertainties in dating, calibration, and modeling.
David, P
2013-01-01
Propagation of Waves focuses on the wave propagation around the earth, which is influenced by its curvature, surface irregularities, and by passage through atmospheric layers that may be refracting, absorbing, or ionized. This book begins by outlining the behavior of waves in the various media and at their interfaces, which simplifies the basic phenomena, such as absorption, refraction, reflection, and interference. Applications to the case of the terrestrial sphere are also discussed as a natural generalization. Following the deliberation on the diffraction of the "ground" wave around the earth
Programmatic methods for addressing contaminated volume uncertainties.
Energy Technology Data Exchange (ETDEWEB)
DURHAM, L.A.; JOHNSON, R.L.; RIEMAN, C.R.; SPECTOR, H.L.; Environmental Science Division; U.S. ARMY CORPS OF ENGINEERS BUFFALO DISTRICT
2007-01-01
Accurate estimates of the volumes of contaminated soils or sediments are critical to effective program planning and to successfully designing and implementing remedial actions. Unfortunately, data available to support the preremedial design are often sparse and insufficient for accurately estimating contaminated soil volumes, resulting in significant uncertainty associated with these volume estimates. The uncertainty in the soil volume estimates significantly contributes to the uncertainty in the overall project cost estimates, especially since excavation and off-site disposal are the primary cost items in soil remedial action projects. The Army Corps of Engineers Buffalo District's experience has been that historical contaminated soil volume estimates developed under the Formerly Utilized Sites Remedial Action Program (FUSRAP) often underestimated the actual volume of subsurface contaminated soils requiring excavation during the course of a remedial activity. In response, the Buffalo District has adopted a variety of programmatic methods for addressing contaminated volume uncertainties. These include developing final status survey protocols prior to remedial design, explicitly estimating the uncertainty associated with volume estimates, investing in predesign data collection to reduce volume uncertainties, and incorporating dynamic work strategies and real-time analytics in predesign characterization and remediation activities. This paper describes some of these experiences in greater detail, drawing from the knowledge gained at Ashland1, Ashland2, Linde, and Rattlesnake Creek. In the case of Rattlesnake Creek, these approaches provided the Buffalo District with an accurate predesign contaminated volume estimate and resulted in one of the first successful FUSRAP fixed-price remediation contracts for the Buffalo District.
A Stochastic Nonlinear Water Wave Model for Efficient Uncertainty Quantification
Bigoni, Daniele; Eskilsson, Claes
2014-01-01
A major challenge in next-generation industrial applications is to improve numerical analysis by quantifying uncertainties in predictions. In this work we present a stochastic formulation of a fully nonlinear and dispersive potential flow water wave model for the probabilistic description of the evolution of waves. This model is discretized using the Stochastic Collocation Method (SCM), which provides an approximate surrogate of the model. This can be used to accurately and efficiently estimate the probability distribution of the unknown time dependent stochastic solution after the forward propagation of uncertainties. We revisit experimental benchmarks often used for validation of deterministic water wave models. We do this using a fully nonlinear and dispersive model and show how uncertainty in the model input can influence the model output. Based on numerical experiments and assumed uncertainties in boundary data, our analysis reveals that some of the known discrepancies from deterministic simulation in compa...
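The collocation idea, evaluating the deterministic model only at quadrature nodes of the input distribution to build a cheap surrogate of the output statistics, can be sketched in one dimension; the model function and input statistics are illustrative assumptions, not the paper's wave solver:

```python
import math

def model(a):
    # Hypothetical nonlinear response to an uncertain input a
    # (a stand-in for an expensive deterministic wave solver).
    return a ** 2 + 0.5 * a

# 3-point Gauss-Hermite rule for a standard normal variable:
# exact for polynomial responses up to degree 5.
nodes = [-math.sqrt(3.0), 0.0, math.sqrt(3.0)]
weights = [1.0 / 6.0, 2.0 / 3.0, 1.0 / 6.0]

mu, sigma = 1.0, 0.2          # assumed uncertain boundary datum: a ~ N(1.0, 0.2)
vals = [model(mu + sigma * z) for z in nodes]
mean = sum(w * v for w, v in zip(weights, vals))
second = sum(w * v * v for w, v in zip(weights, vals))
var = second - mean ** 2
print(mean, var)
```

Only three deterministic solves are needed here, versus thousands of samples for a Monte Carlo estimate of the same moments.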
Instability Versus Equilibrium Propagation of Laser Beam in Plasma
Lushnikov, Pavel M.; Rose, Harvey A.
2003-01-01
We obtain, for the first time, an analytic theory of the forward stimulated Brillouin scattering instability of a spatially and temporally incoherent laser beam, that controls the transition between statistical equilibrium and non-equilibrium (unstable) self-focusing regimes of beam propagation. The stability boundary may be used as a comprehensive guide for inertial confinement fusion designs. Well into the stable regime, an analytic expression for the angular diffusion coefficient is obtain...
Love wave propagation in piezoelectric layered structure with dissipation.
Du, Jianke; Xian, Kai; Wang, Ji; Yong, Yook-Kong
2009-02-01
We investigate analytically the effect of the viscous dissipation of piezoelectric material on the dispersive and attenuated characteristics of Love wave propagation in a layered structure, which involves a thin piezoelectric layer bonded perfectly to an unbounded elastic substrate. The effects of the viscous coefficient on the phase velocity of Love waves and attenuation are presented and discussed in detail. The analytical method and the results can be useful for the design of the resonators and sensors.
Feizizadeh, Bakhtiar; Blaschke, Thomas
2014-03-04
GIS-based multicriteria decision analysis (MCDA) methods are increasingly being used in landslide susceptibility mapping. However, the uncertainties that are associated with MCDA techniques may significantly impact the results. This may sometimes lead to inaccurate outcomes and undesirable consequences. This article introduces a new GIS-based MCDA approach. We illustrate the consequences of applying different MCDA methods within a decision-making process through uncertainty analysis. Three GIS-MCDA methods in conjunction with Monte Carlo simulation (MCS) and Dempster-Shafer theory are analyzed for landslide susceptibility mapping (LSM) in the Urmia lake basin in Iran, which is highly susceptible to landslide hazards. The methodology comprises three stages. First, the LSM criteria are ranked and a sensitivity analysis is implemented to simulate error propagation based on the MCS. The resulting weights are expressed through probability density functions. Accordingly, within the second stage, three MCDA methods, namely analytical hierarchy process (AHP), weighted linear combination (WLC) and ordered weighted average (OWA), are used to produce the landslide susceptibility maps. In the third stage, accuracy assessments are carried out and the uncertainties of the different results are measured. We compare the accuracies of the three MCDA methods based on (1) the Dempster-Shafer theory and (2) a validation of the results using an inventory of known landslides and their respective coverage based on object-based image analysis of IRS-ID satellite images. The results of this study reveal that through the integration of GIS and MCDA models, it is possible to identify strategies for choosing an appropriate method for LSM. Furthermore, our findings indicate that the integration of MCDA and MCS can significantly improve the accuracy of the results. In LSM, the AHP method performed best, while the OWA reveals better performance in the reliability assessment. The WLC operation
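The MCS-based propagation of weight uncertainty through a weighted linear combination (WLC) can be sketched as follows; the cell scores, criterion weights and perturbation level are illustrative assumptions, not values from the Urmia lake basin study:

```python
import random

# Criterion scores for three hypothetical map cells (rows) over three
# LSM criteria, e.g. slope, lithology, land use (columns); values in [0, 1].
cells = [
    [0.9, 0.2, 0.4],
    [0.5, 0.6, 0.5],
    [0.1, 0.9, 0.8],
]
base_weights = [0.5, 0.3, 0.2]   # assumed expert-derived (e.g. AHP) weights

def wlc(scores, weights):
    return sum(s * w for s, w in zip(scores, weights))

def mc_weights(n=5000, spread=0.1, seed=1):
    rng = random.Random(seed)
    results = [[] for _ in cells]
    for _ in range(n):
        # Perturb each weight, then renormalise so they sum to 1.
        w = [max(1e-9, wi + rng.gauss(0, spread)) for wi in base_weights]
        total = sum(w)
        w = [wi / total for wi in w]
        for idx, scores in enumerate(cells):
            results[idx].append(wlc(scores, w))
    summary = []
    for r in results:
        mean = sum(r) / n
        std = (sum((x - mean) ** 2 for x in r) / (n - 1)) ** 0.5
        summary.append((mean, std))
    return summary

summary = mc_weights()
for idx, (m, s) in enumerate(summary):
    print(f"cell {idx}: susceptibility {m:.3f} +/- {s:.3f}")
```

The per-cell standard deviation is the propagated weight uncertainty; mapping it alongside the mean gives the kind of pixel-level uncertainty assessment the article describes.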
Environmental adversity and uncertainty favour cooperation
Andras, Peter; Lazarus, John; Roberts, Gilbert
2007-01-01
Background A major cornerstone of evolutionary biology theory is the explanation of the emergence of cooperation in communities of selfish individuals. There is an unexplained tendency in the plant and animal world – with examples from alpine plants, worms, fish, mole-rats, monkeys and humans – for cooperation to flourish where the environment is more adverse (harsher) or more unpredictable. Results Using mathematical arguments and computer simulations we show that in more adverse environments individuals perceive their resources to be more unpredictable, and that this unpredictability favours cooperation. First we show analytically that in a more adverse environment the individual experiences greater perceived uncertainty. Second we show through a simulation study that more perceived uncertainty implies higher level of cooperation in communities of selfish individuals. Conclusion This study captures the essential features of the natural examples: the positive impact of resource adversity or uncertainty on cooperation. These newly discovered connections between environmental adversity, uncertainty and cooperation help to explain the emergence and evolution of cooperation in animal and human societies. PMID:18053138
Environmental adversity and uncertainty favour cooperation
Directory of Open Access Journals (Sweden)
Lazarus John
2007-11-01
Full Text Available Abstract Background A major cornerstone of evolutionary biology theory is the explanation of the emergence of cooperation in communities of selfish individuals. There is an unexplained tendency in the plant and animal world – with examples from alpine plants, worms, fish, mole-rats, monkeys and humans – for cooperation to flourish where the environment is more adverse (harsher) or more unpredictable. Results Using mathematical arguments and computer simulations we show that in more adverse environments individuals perceive their resources to be more unpredictable, and that this unpredictability favours cooperation. First we show analytically that in a more adverse environment the individual experiences greater perceived uncertainty. Second we show through a simulation study that more perceived uncertainty implies higher level of cooperation in communities of selfish individuals. Conclusion This study captures the essential features of the natural examples: the positive impact of resource adversity or uncertainty on cooperation. These newly discovered connections between environmental adversity, uncertainty and cooperation help to explain the emergence and evolution of cooperation in animal and human societies.
Uncertainty in mapping urban air quality using crowdsourcing techniques
Schneider, Philipp; Castell, Nuria; Lahoz, William; Bartonova, Alena
2016-04-01
Small and low-cost sensors measuring various air pollutants have become available in recent years owing to advances in sensor technology. Such sensors have significant potential for improving high-resolution mapping of air quality in the urban environment as they can be deployed in comparatively large numbers and therefore are able to provide information at unprecedented spatial detail. However, such sensor devices are subject to significant and currently little understood uncertainties that affect their usability. Not only do these devices exhibit random errors and biases of occasionally substantial magnitudes, but these errors may also shift over time. In addition, there often tends to be significant inter-sensor variability even when supposedly identical sensors from the same manufacturer are used. We need to quantify accurately these uncertainties to make proper use of the information they provide. Furthermore, when making use of the data and producing derived products such as maps, the measurement uncertainties that propagate throughout the analysis need to be clearly communicated to the scientific and non-scientific users of the map products. Based on recent experiences within the EU-funded projects CITI-SENSE and hackAIR we discuss the uncertainties along the entire processing chain when using crowdsourcing techniques for mapping urban air quality. Starting with the uncertainties exhibited by the sensors themselves, we present ways of quantifying the error characteristics of a network of low-cost microsensors and show suitable statistical metrics for summarizing them. Subsequently, we briefly present a data-fusion-based method for mapping air quality in the urban environment and illustrate how we propagate the uncertainties of the individual sensors throughout the mapping system, resulting in detailed maps that document the pixel-level uncertainty for each concentration field. Finally, we present methods for communicating the resulting spatial uncertainty
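One standard way to propagate individual sensor uncertainties into a fused estimate, as discussed above, is inverse-variance weighting; the sensor readings below are hypothetical:

```python
def fuse(readings):
    """Inverse-variance weighted fusion of co-located sensor readings.

    readings: list of (value, standard_uncertainty) pairs.
    Returns the fused value and its propagated standard uncertainty.
    """
    w = [1.0 / (u * u) for _, u in readings]
    total = sum(w)
    value = sum(wi * v for wi, (v, _) in zip(w, readings)) / total
    return value, (1.0 / total) ** 0.5

# Three hypothetical low-cost NO2 sensors at one location (ug/m3):
# a noisy unit, a mid-grade unit and a better-calibrated unit.
value, u = fuse([(42.0, 8.0), (38.0, 5.0), (40.0, 3.0)])
print(f"fused: {value:.1f} +/- {u:.1f} ug/m3")
```

Note that the fused uncertainty (about 2.4 ug/m3) is smaller than that of the best individual sensor, which is exactly the benefit of deploying many cheap sensors if their error characteristics are quantified.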
Generation and propagation of optical vortices
Rozas, David
Optical vortices are singularities in phase fronts of laser beams. They are characterized by a dark core whose size (relative to the size of the background beam) may dramatically affect their behavior upon propagation. Previously, only large-core vortices have been extensively studied. The object of the research presented in this dissertation was to explore ways of generating small-core optical vortices (also called optical vortex filaments), and to examine their propagation using analytical, numerical and experimental methods. Computer-generated holography enabled us to create arbitrary distributions of optical vortex filaments for experimental exploration. Hydrodynamic analogies were used to develop a heuristic model which described the dependence of vortex motion on other vortices and the background beam, both qualitatively and quantitatively. We predicted that a pair of optical vortex filaments will rotate with angular rates inversely proportional to their separation distance (just like vortices in a fluid). We also reported the first experimental observation of this novel fluid-like effect. It was found, however, that upon propagation in linear media, the fluid-like rotation was not sustained owing to the overlap of diffracting vortex cores. Further numerical studies and experiments showed that rotation angle may be enhanced in nonlinear self-defocusing media. The results presented in this thesis offer us a better understanding of dynamics of propagating vortices which may result in applications in optical switching, optical data storage, manipulation of micro-particles and optical limiting for eye protection.
Urrutxua, Hodei; Sanjurjo-Rivo, Manuel; Peláez, Jesús
2016-01-01
In the year 2000 an in-house orbital propagator called DROMO (Peláez et al. in Celest Mech Dyn Astron 97:131-150, 2007. doi: 10.1007/s10569-006-9056-3) was developed by the Space Dynamics Group of the Technical University of Madrid, based on a set of redundant variables including Euler-Rodrigues parameters. An original deduction of the DROMO propagator is carried out, underlining its close relation with the ideal frame concept introduced by Hansen (Abh der Math-Phys Cl der Kon Sachs Ges der Wissensch 5:41-218, 1857). Based on the very same concept, Deprit (J Res Natl Bur Stand Sect B Math Sci 79B(1-2):1-15, 1975) proposed a formulation for orbit propagation. In this paper, similarities and differences with the theory carried out by Deprit are analyzed. Simultaneously, some improvements are introduced in the formulation, that lead to a more synthetic and better performing propagator. Also, the long-term effect of the oblateness of the primary is studied in terms of DROMO variables, and new numerical results are presented to evaluate the performance of the method.
Investment choice under uncertainty: A review essay
Directory of Open Access Journals (Sweden)
Trifunović Dejan
2005-01-01
Full Text Available An investment opportunity whose return is perfectly predictable hardly exists at all. Instead, an investor makes decisions under conditions of uncertainty. The theory of expected utility is the main analytical tool for describing choice under uncertainty. Critics of the theory contend that individuals have bounded rationality and that the theory of expected utility is not correct. When agents are faced with risky decisions they behave differently, conditional on their attitude towards risk. They can be risk loving, risk averse or risk neutral. In order to make an investment decision it is necessary to compare probability distribution functions of returns. Investment decision making is much simpler if one uses expected values and variances instead of probability distribution functions.
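The expected-utility comparison described above can be illustrated with a minimal sketch; the lotteries and utility functions are illustrative, not taken from the essay:

```python
import math

def expected_utility(lottery, utility):
    # lottery: list of (probability, payoff) pairs.
    return sum(p * utility(x) for p, x in lottery)

risky = [(0.5, 400.0), (0.5, 100.0)]   # hypothetical risky investment, mean 250
safe = [(1.0, 240.0)]                  # certain payoff slightly below the risky mean

def u_averse(x):
    return math.log(x)   # concave utility: risk-averse agent

def u_neutral(x):
    return x             # linear utility: risk-neutral agent

# A risk-averse agent prefers the safe asset; a risk-neutral agent the risky one.
prefers_safe = expected_utility(risky, u_averse) < expected_utility(safe, u_averse)
prefers_risky = expected_utility(risky, u_neutral) > expected_utility(safe, u_neutral)
print(prefers_safe, prefers_risky)
```

The same pair of lotteries thus ranks differently depending on the curvature of the utility function, which is the point the abstract makes about attitudes towards risk.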
Slow light pulse propagation in dispersive media
DEFF Research Database (Denmark)
Nielsen, Torben Roland; Mørk, Jesper; Lavrinenko, Andrei
2009-01-01
We present a theoretical and numerical analysis of pulse propagation in a semiconductor photonic crystal waveguide with embedded quantum dots in a regime where the pulse is subjected to both waveguide and material dispersion. The group index and the transmission are investigated by finite-difference time-domain Maxwell-Bloch simulations and compared to analytic results. For long pulses the group index (transmission) for the combined system is significantly enhanced (reduced) relative to slow light based on purely material or waveguide dispersion. Shorter pulses are strongly distorted and depending on parameters...... broadening or break-up of the pulse may be observed. The transition from linear to nonlinear pulse propagation is quantified in terms of the spectral width of the pulse. To cite this article: T.R. Nielsen et al., C. R. Physique 10 (2009). (C) 2009 Academie des sciences. Published by Elsevier Masson SAS. All...
Photon propagation in slowly varying electromagnetic fields
Karbstein, Felix
2016-01-01
We study the effective theory of soft photons in slowly varying electromagnetic background fields at one-loop order in QED. This is of relevance for the study of all-optical signatures of quantum vacuum nonlinearity in realistic electromagnetic background fields as provided by high-intensity lasers. The central result derived in this article is a new analytical expression for the photon polarization tensor in two linearly polarized counter-propagating pulsed Gaussian laser beams. As we treat the peak field strengths of both laser beams as free parameters this field configuration can be considered as interpolating between the limiting cases of a purely right- or left-moving laser beam (if one of the peak field strengths is set to zero) and the standing-wave type scenario with two counter-propagating beams of equal strength.
DEFF Research Database (Denmark)
Abdul Samad, Noor Asma Fazli Bin; Sin, Gürkan
2013-01-01
The objective of this study is to test and validate a Process Analytical Technology (PAT) system design on a potassium dichromate crystallization process in the presence of input uncertainties using uncertainty and sensitivity analysis. To this end a systematic framework for managing uncertaintie...
Traceability and Measurement Uncertainty
DEFF Research Database (Denmark)
Tosello, Guido; De Chiffre, Leonardo
2004-01-01
respects necessary scientific precision and problem-solving approach of the field of engineering studies. Competences should be presented in a way that is methodologically and didactically optimised for employees with a mostly work-based vocational qualification and should at the same time be appealing...... and motivating to this important group. The developed e-learning system consists of 12 chapters dealing with the following topics: 1. Basics 2. Traceability and measurement uncertainty 3. Coordinate metrology 4. Form measurement 5. Surface testing 6. Optical measurement and testing 7. Measuring rooms 8. Machine tool testing 9. The role of manufacturing metrology for QM 10. Inspection planning 11. Quality management of measurements incl. Documentation 12. Advanced manufacturing measurement technology The present report (which represents the section 2 - Traceability and Measurement Uncertainty – of the e
Handbook of management under uncertainty
2001-01-01
A mere few years ago it would have seemed odd to propose a Handbook on the treatment of management problems within a sphere of uncertainty. Even today, on the threshold of the third millennium, this statement may provoke a certain wariness. In fact, to resort to exact or random data, that is probable data, is quite normal and convenient, as we then know where we are going best, where we are proposing to go if all occurs as it is conceived and hoped for. To treat uncertain information, to accept a new principle and from there determined criteria, without being sure of oneself and confiding only in the will to better understand objects and phenomena, constitutes a compromise with a new form of understanding the behaviour of current beings that goes even further than simple rationality. Economic Science and particularly the use of its elements of configuration in the world of management, has imbued several generations with an analytical spirit that has given rise to the elaboration of theories widely accept...
Wood, Alexander
2004-01-01
deterministic methods for Hg TMDL decision support, one that is fully compatible with an adaptive management approach. This alternative approach uses empirical data and informed judgment to provide a scientific and technical basis for helping National Pollutant Discharge Elimination System (NPDES) permit holders make management decisions. An Hg-offset system would be an option if a wastewater-treatment plant could not achieve NPDES permit requirements for HgT reduction. We develop a probabilistic decision-analytical model consisting of three submodels for HgT loading, MeHg, and cost mitigation within a Bayesian network that integrates information of varying rigor and detail into a simple model of a complex system. Hg processes are identified and quantified by using a combination of historical data, statistical models, and expert judgment. Such an integrated approach to uncertainty analysis allows easy updating of prediction and inference when observations of model variables are made. We demonstrate our approach with data from the Cache Creek watershed (a subbasin of the Sacramento River watershed). The empirical models used to generate the needed probability distributions are based on the same empirical models currently being used by the Central Valley Regional Water Quality Control Cache Creek Hg TMDL working group. The significant difference is that input uncertainty and error are explicitly included in the model and propagated throughout its algorithms. This work demonstrates how to integrate uncertainty into the complex and highly uncertain Hg TMDL decisionmaking process. The various sources of uncertainty are propagated as decision risk that allows decisionmakers to simultaneously consider uncertainties in remediation/implementation costs while attempting to meet environmental/ecologic targets. We must note that this research is on going. As more data are collected, the HgT and cost-mitigation submodels are updated and the uncer
Aggregating and Communicating Uncertainty.
1980-04-01
means for identifying and communicating uncertainty. APPENDIX A, BIBLIOGRAPHY: 1. Ajzen, Icek; "Intuitive Theories of Events and the Effects of Base-Rate Information on Prediction"... "to the criterion while disregarding valid but noncausal information." (Icek Ajzen, "Intuitive Theories of Events and the Effects of Base-Rate Information on Prediction")
1981-05-15
Variants of Uncertainty. Daniel Kahneman, University of British Columbia; Amos Tversky, Stanford University. May 15, 1981... (Dennett, 1979) in which different parts have access to different data, assign them different weights and hold different views of the situation... The probable and the provable. Oxford: Clarendon Press, 1977. Dennett, D.C. Brainstorms. Hassocks: Harvester, 1979. Donchin, E., Ritter, W. & McCallum, W.C.
Uncertainty in artificial intelligence
Shachter, RD; Henrion, M; Lemmer, JF
1990-01-01
This volume, like its predecessors, reflects the cutting edge of research on the automation of reasoning under uncertainty. A more pragmatic emphasis is evident, for although some papers address fundamental issues, the majority address practical issues. Topics include the relations between alternative formalisms (including possibilistic reasoning), Dempster-Shafer belief functions, non-monotonic reasoning, Bayesian and decision theoretic schemes, and new inference techniques for belief nets. New techniques are applied to important problems in medicine, vision, robotics, and natural language und
Pernot, Pascal
2009-01-01
Bayesian Model Calibration is used to revisit the problem of scaling factor calibration for semi-empirical correction of ab initio calculations. A particular attention is devoted to uncertainty evaluation for scaling factors, and to their effect on prediction of observables involving scaled properties. We argue that linear models used for calibration of scaling factors are generally not statistically valid, in the sense that they are not able to fit calibration data within their uncertainty limits. Uncertainty evaluation and uncertainty propagation by statistical methods from such invalid models are doomed to failure. To relieve this problem, a stochastic function is included in the model to account for model inadequacy, according to the Bayesian Model Calibration approach. In this framework, we demonstrate that standard calibration summary statistics, as optimal scaling factor and root mean square, can be safely used for uncertainty propagation only when large calibration sets of precise data are used. For s...
Participation under Uncertainty
Energy Technology Data Exchange (ETDEWEB)
Boudourides, Moses A. [Univ. of Patras, Rio-Patras (Greece). Dept. of Mathematics
2003-10-01
This essay reviews a number of theoretical perspectives about uncertainty and participation in the present-day knowledge-based society. After discussing the on-going reconfigurations of science, technology and society, we examine how appropriate for policy studies are various theories of social complexity. Post-normal science is such an example of a complexity-motivated approach, which justifies civic participation as a policy response to an increasing uncertainty. But there are different categories and models of uncertainties implying a variety of configurations of policy processes. A particular role in all of them is played by expertise whose democratization is an often-claimed imperative nowadays. Moreover, we discuss how different participatory arrangements are shaped into instruments of policy-making and framing regulatory processes. As participation necessitates and triggers deliberation, we proceed to examine the role and the barriers of deliberativeness. Finally, we conclude by referring to some critical views about the ultimate assumptions of recent European policy frameworks and the conceptions of civic participation and politicization that they invoke.
Calibration Under Uncertainty.
Energy Technology Data Exchange (ETDEWEB)
Swiler, Laura Painton; Trucano, Timothy Guy
2005-03-01
This report is a white paper summarizing the literature and different approaches to the problem of calibrating computer model parameters in the face of model uncertainty. Model calibration is often formulated as finding the parameters that minimize the squared difference between the model-computed data (the predicted data) and the actual experimental data. This approach does not allow for explicit treatment of uncertainty or error in the model itself: the model is considered the "true" deterministic representation of reality. While this approach does have utility, it is far from an accurate mathematical treatment of the true model calibration problem in which both the computed data and experimental data have error bars. This year, we examined methods to perform calibration accounting for the error in both the computer model and the data, as well as improving our understanding of its meaning for model predictability. We call this approach Calibration under Uncertainty (CUU). This talk presents our current thinking on CUU. We outline some current approaches in the literature, and discuss the Bayesian approach to CUU in detail.
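A minimal sketch of calibration that accounts for error bars on both the data and the model, in the spirit of the CUU idea described above; the linear model, data values and model-inadequacy term are assumptions for illustration, not the report's formulation:

```python
import math

# Hypothetical calibration data: (x, y, data standard error)
data = [(1.0, 2.1, 0.1), (2.0, 3.9, 0.1), (3.0, 6.3, 0.1)]
sigma_model = 0.15   # assumed model-inadequacy term, added in quadrature

def calibrate(data, sigma_model):
    # Weighted least squares for y = theta * x, weighting each point by the
    # combined (data + model) variance instead of the data variance alone.
    w = [1.0 / (s * s + sigma_model * sigma_model) for _, _, s in data]
    sxx = sum(wi * x * x for wi, (x, _, _) in zip(w, data))
    sxy = sum(wi * x * y for wi, (x, y, _) in zip(w, data))
    theta = sxy / sxx
    return theta, 1.0 / math.sqrt(sxx)   # estimate and its standard uncertainty

theta, u_theta = calibrate(data, sigma_model)
print(f"theta = {theta:.3f} +/- {u_theta:.3f}")
```

Compared with plain least squares, the model-inadequacy variance widens the parameter uncertainty, so the calibrated model does not pretend to be the "true" deterministic representation of reality.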
Uncertainty Quantification of Calculated Temperatures for the AGR-1 Experiment
Energy Technology Data Exchange (ETDEWEB)
Binh T. Pham; Jeffrey J. Einerson; Grant L. Hawkes
2012-04-01
This report documents an effort to quantify the uncertainty of the calculated temperature data for the first Advanced Gas Reactor (AGR-1) fuel irradiation experiment conducted in the INL's Advanced Test Reactor (ATR) in support of the Next Generation Nuclear Plant (NGNP) R&D program. Recognizing uncertainties inherent in physics and thermal simulations of the AGR-1 test, the results of the numerical simulations can be used in combination with the statistical analysis methods to improve qualification of measured data. Additionally, the temperature simulation data for AGR tests can be used for validation of the fuel transport and fuel performance simulation models. The crucial roles of the calculated fuel temperatures in ensuring achievement of the AGR experimental program objectives require accurate determination of the model temperature uncertainties. The report is organized into three chapters. Chapter 1 introduces the AGR Fuel Development and Qualification program and provides overviews of AGR-1 measured data, AGR-1 test configuration and test procedure, and thermal simulation. Chapter 2 describes the uncertainty quantification procedure for temperature simulation data of the AGR-1 experiment, namely, (i) identify and quantify uncertainty sources; (ii) perform sensitivity analysis for several thermal test conditions; (iii) use uncertainty propagation to quantify overall response temperature uncertainty. A set of issues associated with modeling uncertainties resulting from the expert assessments are identified. This also includes the experimental design to estimate the main effects and interactions of the important thermal model parameters. Chapter 3 presents the overall uncertainty results for the six AGR-1 capsules. This includes uncertainties for the daily volume-average and peak fuel temperatures, daily average temperatures at TC locations, and time-average volume-average and time-average peak fuel temperatures.
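The sensitivity-analysis-plus-propagation steps (ii) and (iii) above can be sketched with finite-difference sensitivities and first-order (root-sum-square) combination; the surrogate temperature model and input uncertainties below are hypothetical, not the report's thermal model:

```python
import math

def fuel_temp(power, conductivity, gap):
    # Hypothetical surrogate for the thermal model: peak fuel temperature (C)
    # as a function of fission power, gas conductivity and gap width.
    return 600.0 + 80.0 * power / conductivity + 300.0 * gap

nominal = {"power": 1.0, "conductivity": 0.25, "gap": 0.1}
u_inputs = {"power": 0.03, "conductivity": 0.02, "gap": 0.01}  # assumed 1-sigma

def propagate(f, nominal, u_inputs, h=1e-6):
    total = 0.0
    for name, u in u_inputs.items():
        # Central finite difference for the sensitivity dT/dx_i.
        hi = dict(nominal); hi[name] += h
        lo = dict(nominal); lo[name] -= h
        sens = (f(**hi) - f(**lo)) / (2 * h)
        total += (sens * u) ** 2   # first-order (root-sum-square) combination
    return math.sqrt(total)

u_T = propagate(fuel_temp, nominal, u_inputs)
print(f"temperature uncertainty: {u_T:.1f} C")
```

The per-parameter terms (sens * u)**2 also rank the inputs by contribution, which is the sensitivity-analysis output used to decide which model parameters matter most.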
Uncertainty Quantification of Calculated Temperatures for the AGR-1 Experiment
Energy Technology Data Exchange (ETDEWEB)
Binh T. Pham; Jeffrey J. Einerson; Grant L. Hawkes
2013-03-01
This report documents an effort to quantify the uncertainty of the calculated temperature data for the first Advanced Gas Reactor (AGR-1) fuel irradiation experiment conducted in the INL's Advanced Test Reactor (ATR) in support of the Next Generation Nuclear Plant (NGNP) R&D program. Recognizing the uncertainties inherent in physics and thermal simulations of the AGR-1 test, the results of the numerical simulations can be used in combination with statistical analysis methods to improve qualification of measured data. Additionally, the temperature simulation data for AGR tests can be used for validation of the fuel transport and fuel performance simulation models. The crucial role of the calculated fuel temperatures in ensuring achievement of the AGR experimental program objectives requires accurate determination of the model temperature uncertainties. The report is organized into three chapters. Chapter 1 introduces the AGR Fuel Development and Qualification program and provides overviews of AGR-1 measured data, AGR-1 test configuration and test procedure, and thermal simulation. Chapter 2 describes the uncertainty quantification procedure for temperature simulation data of the AGR-1 experiment, namely: (i) identify and quantify uncertainty sources; (ii) perform sensitivity analysis for several thermal test conditions; and (iii) use uncertainty propagation to quantify overall response temperature uncertainty. A set of issues associated with modeling uncertainties resulting from the expert assessments is identified. This also includes the experimental design to estimate the main effects and interactions of the important thermal model parameters. Chapter 3 presents the overall uncertainty results for the six AGR-1 capsules. This includes uncertainties for the daily volume-average and peak fuel temperatures, daily average temperatures at TC locations, and time-average volume-average and time-average peak fuel temperatures.
Model and parameter uncertainty in IDF relationships under climate change
Chandra, Rupa; Saha, Ujjwal; Mujumdar, P. P.
2015-05-01
Quantifying the distributional behavior of extreme events is crucial in hydrologic design. Intensity-Duration-Frequency (IDF) relationships are used extensively in engineering, especially in urban hydrology, to obtain the return level of an extreme rainfall event for a specified return period and duration. Major sources of uncertainty in IDF relationships are insufficient quantity and quality of data, leading to parameter uncertainty in the distribution fitted to the data, and uncertainty resulting from the use of multiple GCMs. It is important to study these uncertainties and propagate them to the future for accurate assessment of future return levels. The objective of this study is to quantify the uncertainties arising from the parameters of the distribution fitted to the data and from the use of multiple GCMs using a Bayesian approach. The posterior distribution of the parameters is obtained from Bayes' rule, and the parameters are transformed to obtain return levels for a specified return period. The Markov chain Monte Carlo (MCMC) method with the Metropolis-Hastings algorithm is used to obtain the posterior distribution of the parameters. Twenty-six CMIP5 GCMs along with four RCP scenarios are considered for studying the effects of climate change and for obtaining projected IDF relationships for the case study of Bangalore city in India. GCM uncertainty due to the use of multiple GCMs is treated using the Reliability Ensemble Averaging (REA) technique along with the parameter uncertainty. Scale-invariance theory is employed to obtain short-duration return levels from daily data. It is observed that the uncertainty in short-duration rainfall return levels is high compared with that for longer durations. Further, it is observed that parameter uncertainty is large compared to model uncertainty.
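As a hedged sketch of the Bayesian machinery described in the abstract above, the following fragment fits a Gumbel distribution to synthetic annual maxima with a Metropolis-Hastings sampler and converts the posterior parameter samples into a posterior for the 100-year return level. All numbers (series length, true parameters, proposal scales) are illustrative assumptions, not values from the study; the study's GEV fits, REA weighting, and scale invariance are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic annual-maximum rainfall series (hypothetical, mm/day)
data = rng.gumbel(loc=60.0, scale=15.0, size=40)

def log_lik(mu, beta):
    """Gumbel log-likelihood for the annual maxima."""
    if beta <= 0:
        return -np.inf
    z = (data - mu) / beta
    return np.sum(-np.log(beta) - z - np.exp(-z))

# Metropolis-Hastings with a flat prior on (mu, beta)
theta = np.array([50.0, 10.0])
ll = log_lik(*theta)
samples = []
for _ in range(20000):
    prop = theta + rng.normal(scale=[1.5, 0.8])
    ll_prop = log_lik(*prop)
    if np.log(rng.uniform()) < ll_prop - ll:   # accept/reject step
        theta, ll = prop, ll_prop
    samples.append(theta)
samples = np.array(samples[5000:])             # discard burn-in

# Posterior of the T-year return level: z_T = mu - beta*ln(-ln(1 - 1/T))
T = 100.0
z = samples[:, 0] - samples[:, 1] * np.log(-np.log(1.0 - 1.0 / T))
lo, hi = np.percentile(z, [2.5, 97.5])
print(f"100-yr return level: {z.mean():.1f} mm/day (95% CI {lo:.1f}-{hi:.1f})")
```

The width of the credible interval is the parameter-uncertainty component; in the study this is then combined with the spread across GCMs.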
Simulation of excitation and propagation of pico-second ultrasound
Energy Technology Data Exchange (ETDEWEB)
Yang, Seung Yong; Kim, No Kyu [Dept. of Mechanical Engineering, Korea University of Technology and Education, Chunan (Korea, Republic of)
2014-12-15
This paper presents analytic and numerical simulations of the generation and propagation of picosecond ultrasound with nanoscale wavelength, enabling the production of bulk waves in thin films. An analytic model of laser-matter interaction and elastodynamic wave propagation is introduced to calculate the elastic strain pulse in microstructures. The model includes the laser-pulse absorption on the material surface, the transfer of photon energy into the elastic energy of phonons, and acoustic wave propagation, to formulate the governing equations of ultra-short ultrasound. The excitation and propagation of acoustic pulses produced by ultra-short laser pulses are numerically simulated for an aluminum substrate using the finite-difference method and compared with the analytical solution. Furthermore, Fourier analysis was performed to investigate the frequency spectrum of the simulated elastic wave pulse. It is concluded that a picosecond bulk wave with a very high frequency of up to hundreds of gigahertz is successfully generated in metals using a 100-fs laser pulse and that it can propagate in the thickness direction for films thinner than 100 nm.
Fazzari, D M
2001-01-01
This report presents the results of an evaluation of the Total Measurement Uncertainty (TMU) for the Canberra manufactured Segmented Gamma Scanner Assay System (SGSAS) as employed at the Hanford Plutonium Finishing Plant (PFP). In this document, TMU embodies the combined uncertainties due to all of the individual random and systematic sources of measurement uncertainty. It includes uncertainties arising from corrections and factors applied to the analysis of transuranic waste to compensate for inhomogeneities and interferences from the waste matrix and radioactive components. These include uncertainty components for any assumptions contained in the calibration of the system or computation of the data. Uncertainties are propagated at 1 sigma. The final total measurement uncertainty value is reported at the 95% confidence level. The SGSAS is a gamma assay system that is used to assay plutonium and uranium waste. The SGSAS system can be used in a stand-alone mode to perform the NDA characterization of a containe...
Energy Technology Data Exchange (ETDEWEB)
Harper, F.T.; Young, M.L.; Miller, L.A. [Sandia National Labs., Albuquerque, NM (United States); Hora, S.C. [Univ. of Hawaii, Hilo, HI (United States); Lui, C.H. [Nuclear Regulatory Commission, Washington, DC (United States); Goossens, L.H.J.; Cooke, R.M. [Delft Univ. of Technology (Netherlands); Paesler-Sauer, J. [Research Center, Karlsruhe (Germany); Helton, J.C. [and others
1995-01-01
The development of two new probabilistic accident consequence codes, MACCS and COSYMA, was completed in 1990. These codes estimate the risks presented by nuclear installations based on postulated frequencies and magnitudes of potential accidents. In 1991, the US Nuclear Regulatory Commission (NRC) and the Commission of the European Communities (CEC) began a joint uncertainty analysis of the two codes. The ultimate objective of the joint effort was to develop credible and traceable uncertainty distributions for the input variables of the codes. Expert elicitation was identified as the best technology available for developing a library of uncertainty distributions for the selected consequence parameters. The study was formulated jointly and was limited to the current code models and to physical quantities that could be measured in experiments. Experts developed their distributions independently. To validate the distributions generated for the wet deposition input variables, samples were taken from these distributions and propagated through the wet deposition code model. Resulting distributions closely replicated the aggregated elicited wet deposition distributions. To validate the distributions generated for the dispersion code input variables, samples were taken from these distributions and propagated through the Gaussian plume model (GPM) implemented in the MACCS and COSYMA codes. Project teams from the NRC and CEC cooperated successfully to develop and implement a unified process for the elaboration of uncertainty distributions on consequence code input parameters. Formal expert judgment elicitation proved valuable for synthesizing the best available information. Distributions on measurable atmospheric dispersion and deposition parameters were successfully elicited from experts involved in the many phenomenological areas of consequence analysis. This volume is the first of a three-volume document describing the project.
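The validation step described above, sampling elicited input distributions and pushing them through the Gaussian plume model, can be sketched as a Monte Carlo propagation. The lognormal parameters, source strength, wind speed, and geometry below are hypothetical stand-ins for elicited distributions, not values from the NRC/CEC study.

```python
import numpy as np

rng = np.random.default_rng(1)

def gaussian_plume(q, u, sigma_y, sigma_z, y, z, h):
    """Concentration from a continuous point source with ground reflection.

    q: emission rate, u: wind speed, h: release height; sigma_y/sigma_z are
    the dispersion parameters at the (fixed) downwind distance of interest.
    """
    return (q / (2 * np.pi * u * sigma_y * sigma_z)
            * np.exp(-y**2 / (2 * sigma_y**2))
            * (np.exp(-(z - h)**2 / (2 * sigma_z**2))
               + np.exp(-(z + h)**2 / (2 * sigma_z**2))))

# Hypothetical elicited (lognormal) uncertainty on the dispersion parameters
n = 50000
sigma_y = rng.lognormal(mean=np.log(200.0), sigma=0.4, size=n)   # m
sigma_z = rng.lognormal(mean=np.log(80.0), sigma=0.5, size=n)    # m

# Propagate the samples to a ground-level centerline concentration
conc = gaussian_plume(q=1.0, u=4.0, sigma_y=sigma_y, sigma_z=sigma_z,
                      y=0.0, z=0.0, h=50.0)

p5, p50, p95 = np.percentile(conc, [5, 50, 95])
print(f"median {p50:.2e}, 90% band [{p5:.2e}, {p95:.2e}] (g/m^3 per g/s)")
```

Comparing such an output distribution with the directly elicited distribution on the model output is the consistency check the project used.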
Seismic wave propagation in fractured media: A discontinuous Galerkin approach
De Basabe, Jonás D.
2011-01-01
We formulate and implement a discontinuous Galerkin method for elastic wave propagation that allows for discontinuities in the displacement field to simulate fractures or faults using the linear-slip model. We show numerical results using a 2D model with one linear-slip discontinuity and different frequencies. The results show good agreement with analytic solutions. © 2011 Society of Exploration Geophysicists.
Late time tail of wave propagation on curved spacetime
Ching, E S C; Leung, P T; Suen, W M; Young, K
1994-01-01
The late time behavior of waves propagating on a general curved spacetime is studied. The late time tail is not necessarily an inverse power of time. Our work extends, places in context, and provides understanding for the known results for the Schwarzschild spacetime. Analytic and numerical results are in excellent agreement.
Uncertainty in magnetic activity indices
Institute of Scientific and Technical Information of China (English)
XU WenYao
2008-01-01
Magnetic activity indices are widely used in theoretical studies of solar-terrestrial coupling and space weather prediction. However, the indices suffer from various uncertainties, which limit their application and can even lead to incorrect conclusions. In this paper we analyze the three most popular indices: Kp, AE and Dst. Three categories of uncertainty in magnetic indices are discussed: "data uncertainty" originating from inadequate data processing, "station uncertainty" caused by incomplete station coverage, and "physical uncertainty" stemming from unclear physical mechanisms. A comparison between magnetic disturbances and the related indices indicates that the residual Sq will cause an uncertainty of 1-2 in the K measurement, that the uncertainty in saturated AE is as much as 50%, and that the uncertainty in the Dst index caused by the partial ring currents is about half of the partial ring current.
Arnold, Dan; Demyanov, Vasily; Christie, Mike; Bakay, Alexander; Gopa, Konstantin
2016-10-01
Assessing the change in uncertainty in reservoir production forecasts over field lifetime is rarely undertaken because of the complexity of joining together the individual workflows. This becomes particularly important in complex fields such as naturally fractured reservoirs. The impact of this problem has been identified in previous studies, and many solutions have been proposed but never implemented on complex reservoir problems due to the computational cost of quantifying uncertainty and optimising the reservoir development, specifically knowing how many and what kind of simulations to run. This paper demonstrates a workflow that propagates uncertainty throughout field lifetime, and into the decision-making process, by a combination of a metric-based approach, multi-objective optimisation and Bayesian estimation of uncertainty. The workflow propagates uncertainty estimates from appraisal into initial development optimisation, then updates uncertainty through history matching and finally propagates it into late-life optimisation. The combination of techniques applied, namely the metric approach and multi-objective optimisation, helps evaluate development options under uncertainty. This was achieved with a significantly reduced number of flow simulations, such that the combined workflow is computationally feasible to run for a real-field problem. This workflow is applied to two synthetic naturally fractured reservoir (NFR) case studies covering the appraisal, field development, history matching and mid-life EOR stages. The first is a simple sector model, while the second is a more complex full-field example based on a real-life analogue. This study infers geological uncertainty from an ensemble of models based on a Brazilian carbonate outcrop; this uncertainty is propagated through the field lifetime, before and after the start of production, with the inclusion of production data significantly collapsing the P10-P90 spread in reservoir forecasts. The workflow links uncertainty
Nielsen, Marie Katrine Klose; Johansen, Sys Stybe; Linnet, Kristian
2014-01-01
Assessment of the total uncertainty of analytical methods for the measurement of drugs in human hair has mainly been derived from the analytical variation. However, in hair analysis several other sources of uncertainty contribute to the total uncertainty. Particularly in segmental hair analysis, pre-analytical variation associated with the sampling and segmentation may be a significant factor in the assessment of the total uncertainty budget. The aim of this study was to develop and validate a method for the analysis of 31 common drugs in hair using ultra-high-performance liquid chromatography-tandem mass spectrometry (UHPLC-MS/MS), with focus on the assessment of both the analytical and pre-analytical sampling variations. The validated method was specific, accurate (80-120%), and precise (CV≤20%) across a wide linear concentration range from 0.025 to 25 ng/mg for most compounds. The analytical variation was estimated to be less than 15% for almost all compounds. The method was successfully applied to 25 segmented hair specimens from deceased drug addicts showing a broad pattern of poly-drug use. The pre-analytical sampling variation was estimated from genuine duplicate measurements of two bundles of hair collected from each subject, after subtraction of the analytical component. For the most frequently detected analytes, the pre-analytical variation was estimated to be 26-69%. Thus, the pre-analytical variation was 3-7-fold larger than the analytical variation (7-13%) and hence the dominant component in the total variation (29-70%). The present study demonstrated the importance of including the pre-analytical variation in the assessment of the total uncertainty budget and in the setting of the 95%-uncertainty interval (±2CVT). Excluding the pre-analytical sampling variation could significantly affect the interpretation of results from segmental hair analysis.
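The variance decomposition behind this study, estimating total variation from duplicate measurements and subtracting the known analytical component to isolate the pre-analytical part, can be sketched in a few lines. The duplicate concentrations and the 10% analytical CV below are invented for illustration, not data from the paper.

```python
import numpy as np

# Hypothetical duplicate concentrations (ng/mg) from two hair bundles per subject
a = np.array([0.52, 1.80, 0.33, 4.10, 0.95, 2.40])
b = np.array([0.70, 1.25, 0.41, 2.60, 1.30, 3.10])

# Total CV from duplicates: CV_T^2 = mean(d_i^2)/2, where d_i is the relative
# difference of each pair (standard duplicate-based estimator)
d = (a - b) / ((a + b) / 2)
cv_total = np.sqrt(np.mean(d**2) / 2)

# Analytical CV from method validation (assumed 10%); the pre-analytical
# component follows from CV_T^2 = CV_A^2 + CV_pre^2
cv_analytical = 0.10
cv_pre = np.sqrt(cv_total**2 - cv_analytical**2)

print(f"CV_T = {cv_total:.0%}, CV_pre = {cv_pre:.0%}")
print(f"95% uncertainty interval: +/-{2 * cv_total:.0%}")
```

With these made-up numbers the pre-analytical component dominates, mirroring the qualitative finding of the study.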
Leblanc, Thierry; Sica, Robert J.; van Gijsel, Joanna A. E.; Godin-Beekmann, Sophie; Haefele, Alexander; Trickl, Thomas; Payen, Guillaume; Liberti, Gianluigi
2016-08-01
A standardized approach for the definition, propagation, and reporting of uncertainty in the ozone differential absorption lidar data products contributing to the Network for the Detection for Atmospheric Composition Change (NDACC) database is proposed. One essential aspect of the proposed approach is the propagation in parallel of all independent uncertainty components through the data processing chain before they are combined together to form the ozone combined standard uncertainty. The independent uncertainty components contributing to the overall budget include random noise associated with signal detection, uncertainty due to saturation correction, background noise extraction, the absorption cross sections of O3, NO2, SO2, and O2, the molecular extinction cross sections, and the number densities of the air, NO2, and SO2. The expression of the individual uncertainty components and their step-by-step propagation through the ozone differential absorption lidar (DIAL) processing chain are thoroughly estimated. All sources of uncertainty except detection noise imply correlated terms in the vertical dimension, which requires knowledge of the covariance matrix when the lidar signal is vertically filtered. In addition, the covariance terms must be taken into account if the same detection hardware is shared by the lidar receiver channels at the absorbed and non-absorbed wavelengths. The ozone uncertainty budget is presented as much as possible in a generic form (i.e., as a function of instrument performance and wavelength) so that all NDACC ozone DIAL investigators across the network can estimate, for their own instrument and in a straightforward manner, the expected impact of each reviewed uncertainty component. In addition, two actual examples of full uncertainty budget are provided, using nighttime measurements from the tropospheric ozone DIAL located at the Jet Propulsion Laboratory (JPL) Table Mountain Facility, California, and nighttime measurements from the JPL
Pauli effects in uncertainty relations
Toranzo, I V; Esquivel, R O; Dehesa, J S
2014-01-01
In this letter we analyze the effect of the spin dimensionality of a physical system on two mathematical formulations of the uncertainty principle: a generalized Heisenberg uncertainty relation valid for all antisymmetric N-fermion wavefunctions, and the Fisher-information-based uncertainty relation valid for all antisymmetric N-fermion wavefunctions of central potentials. The accuracy of these spin-modified uncertainty relations is examined for all atoms from hydrogen to lawrencium in a self-consistent framework.
S-parameter uncertainty computations
DEFF Research Database (Denmark)
Vidkjær, Jens
1993-01-01
A method for computing uncertainties of measured s-parameters is presented. Unlike the specification software provided with network analyzers, the new method is capable of calculating the uncertainties of arbitrary s-parameter sets and instrument settings.
Transverse, Propagating Velocity Perturbations in Solar Coronal Loops
De Moortel, I; Wright, A N; Hood, A W
2015-01-01
This short review paper gives an overview of recently observed transverse, propagating velocity perturbations in coronal loops. These ubiquitous perturbations are observed to undergo strong damping as they propagate. Using 3D numerical simulations of footpoint-driven transverse waves propagating in a coronal plasma with a cylindrical density structure, in combination with analytical modelling, it is demonstrated that the observed velocity perturbations can be understood in terms of the coupling of different wave modes in the inhomogeneous boundaries of the loops. Mode coupling in the inhomogeneous boundary layers of the loops leads to the coupling of the transversal (kink) mode to the azimuthal (Alfven) mode, observed as the decay of the transverse kink oscillations. Both the numerical and analytical results show that the spatial profile of the damped wave has a Gaussian shape to begin with, before switching to exponential decay at large heights. In addition, recent analysis of CoMP (Coronal Multi-channel Polarimeter)...
Nonradiative limitations to plasmon propagation in chains of metallic nanoparticles
Brandstetter-Kunc, Adam; Downing, Charles A; Weinmann, Dietmar; Jalabert, Rodolfo A
2016-01-01
We investigate the collective plasmonic modes in a chain of metallic nanoparticles that are coupled by near-field interactions. The size- and momentum-dependent nonradiative Landau damping and radiative decay rates are calculated analytically within an open quantum system approach. These decay rates determine the excitation propagation along the chain. In particular, the behavior of the radiative decay rate as a function of the plasmon wavelength leads to a transition from an exponential decay of the collective excitation for short distances to an algebraic decay for large distances. Importantly, we show that the exponential decay is of a purely nonradiative origin. Our transparent model enables us to provide analytical expressions for the polarization-dependent plasmon excitation profile along the chain and for the associated propagation length. Our theoretical analysis constitutes an important step in the quest for the optimal conditions for plasmonic propagation in nanoparticle chains.
Vegetative propagation of jojoba
Energy Technology Data Exchange (ETDEWEB)
Low, C.B.; Hackett, W.P.
1981-03-01
Development of jojoba as an economically viable crop requires improved methods of propagation and culture. Rooting experiments were performed on cutting material collected from wild jojoba plants. A striking seasonal fluctuation in rooting potential was found. Jojoba plants can be successfully propagated from stem cuttings made during spring, summer, and, to some extent, fall. Variability among jojoba plants may also play a role in rooting potential, although it is not as important as season. In general, the use of auxin (4,000 ppm indolebutyric acid) on jojoba cuttings during periods of high rooting potential promotes adventitious root formation, but during periods of low rooting potential it has no effect or is even slightly inhibitory. In the greenhouse, cutting-grown plants apparently reached reproductive maturity sooner than those grown from seed. If this observation holds true for plants transplanted into the field, earlier fruit production by cutting-grown plants would mean an earlier return on initial planting and maintenance costs.
Strategy for addressing composition uncertainties in a Hanford high-level waste vitrification plant
Energy Technology Data Exchange (ETDEWEB)
Bryan, M.F.; Piepel, G.F.
1996-03-01
Various requirements will be imposed on the feed material and glass produced by the high-level waste (HLW) vitrification plant at the Hanford Site. A statistical process/product control system will be used to control the melter feed composition and to check and document product quality. Two general types of uncertainty are important in HLW vitrification process/product control: model uncertainty and composition uncertainty. Model uncertainty is discussed by Hrma, Piepel, et al. (1994). Composition uncertainty includes the uncertainties inherent in estimates of feed composition and other process measurements. Because feed composition is a multivariate quantity, multivariate estimates of composition uncertainty (i.e., covariance matrices) are required. Three components of composition uncertainty will play a role in estimating and checking batch and glass attributes: batch-to-batch variability, within-batch uncertainty, and analytical uncertainty. This document reviews the techniques to be used in estimating and updating composition uncertainties and in combining these composition uncertainties with model uncertainty to yield estimates of (univariate) uncertainties associated with estimates of batch and glass properties.
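The combination of covariance components described above can be sketched numerically: the three composition covariance matrices add, and a first-order (linearized) property model turns the composition covariance into a property variance, to which independent model uncertainty is added. All matrices, coefficients, and variances below are hypothetical placeholders, not values from the Hanford control system.

```python
import numpy as np

# Three components of composition covariance (hypothetical, 3 oxides)
cov_batch  = np.diag([4.0e-4, 2.0e-4, 1.0e-4])   # batch-to-batch variability
cov_within = np.diag([1.0e-4, 1.0e-4, 5.0e-5])   # within-batch uncertainty
cov_anal   = np.diag([5.0e-5, 5.0e-5, 2.0e-5])   # analytical uncertainty
cov_total = cov_batch + cov_within + cov_anal

# Linearized property model: property = b @ x (e.g. a first-order
# glass-property model); composition-induced variance is b' Sigma b
b = np.array([1.2, -0.8, 0.5])
var_prop_comp = b @ cov_total @ b

# Add independent model uncertainty to get the total property variance
var_model = 2.5e-4
sd_total = np.sqrt(var_prop_comp + var_model)
print(f"property standard uncertainty: {sd_total:.4f}")
```

In practice the covariance matrices would have off-diagonal terms (oxide mass fractions are constrained to sum to one), which the quadratic form handles without change.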
DEFF Research Database (Denmark)
Linnet, Kristian; Johansen, Sys Stybe; Buchard, Anders
2008-01-01
On the basis of simultaneously sampled postmortem blood specimens from the left and right femoral veins the pre-analytical variation of methadone measurements was evaluated and compared to the analytical variation. The material consisted of a series of 27 duplicate samples from routine autopsy ca...... interval (+/-2CV(T)) for a postmortem measurement is largely determined by the pre-analytical component of variation. This should be kept in mind when judging on the uncertainty of postmortem measurement results....
Bidirectional beam propagation method
Kaczmarski, P.; Lagasse, P. E.
1988-05-01
A bidirectional extension of the beam propagation method (BPM) to optical waveguides with a longitudinal discontinuity is presented. The algorithm is verified by computing the reflection of the TE(0) mode from a semiconductor laser facet. The bidirectional BPM is applicable to other configurations such as totally reflecting waveguide mirrors, an abrupt transition in a waveguide, or a waveguide with many discontinuities generating multiple reflections. The method can also be adapted to TM polarization.
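For readers unfamiliar with the underlying method that the paper extends, a minimal unidirectional scalar split-step Fourier BPM looks like the following. The waveguide geometry, indices, and step sizes are illustrative assumptions; the bidirectional treatment of reflections from the paper is not reproduced.

```python
import numpy as np

# Minimal unidirectional scalar split-step Fourier BPM for a slab waveguide
nx = 1024
lam = 1.55e-6                       # wavelength (m), assumed
k0 = 2 * np.pi / lam
n0 = 1.50                           # reference index
x = np.linspace(-40e-6, 40e-6, nx)
dx = x[1] - x[0]
kx = 2 * np.pi * np.fft.fftfreq(nx, dx)

# Step-index core (assumed profile: 4-um core, index step 0.02)
n = np.where(np.abs(x) < 2e-6, 1.52, 1.50)

E = np.exp(-(x / 3e-6) ** 2).astype(complex)   # launched Gaussian field
power0 = np.sum(np.abs(E) ** 2) * dx

dz = 0.5e-6
diff_step = np.exp(-1j * kx**2 * dz / (2 * k0 * n0))   # paraxial diffraction
index_step = np.exp(-1j * k0 * (n - n0) * dz)          # index (phase) step
for _ in range(2000):                                  # propagate 1 mm
    E = np.fft.ifft(diff_step * np.fft.fft(E))
    E *= index_step

power = np.sum(np.abs(E) ** 2) * dx
print(f"relative power change after 1 mm: {abs(power - power0) / power0:.2e}")
```

Both split-step factors have unit modulus, so the scheme conserves power exactly; a bidirectional variant must additionally track a backward-propagating field at each discontinuity.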
Gauge engineering and propagators
Directory of Open Access Journals (Sweden)
Maas Axel
2017-01-01
The dependence of the propagators on the choice of complete gauge-fixings is investigated using lattice gauge theory for Yang-Mills theory. It is found that the implications for the infrared, and to some extent the mid-momentum behavior, can be substantial. In going beyond the Yang-Mills case it turns out that the influence of matter can generally not be neglected. This is briefly discussed for various types of matter.
Analysis of Uncertainty in Dynamic Processes Development of Banks Functioning
Directory of Open Access Journals (Sweden)
Aleksei V. Korovyakovskii
2013-01-01
The paper offers an approach to estimating a measure of uncertainty in the dynamic processes of bank functioning, using statistical data on indicators of different banking operations. To calculate the measure of uncertainty in the dynamic processes of bank functioning, the phase images of the relevant sets of statistical data are considered. It is shown that the form of the phase image of the studied sets of statistical data can serve as a basis for estimating the measure of uncertainty in the dynamic processes of bank functioning. A set of analytical characteristics is offered to formalize the definition of the form of the phase image of the studied sets of statistical data. It is shown that the offered analytical characteristics account for the unevenness of changes in the values of the studied sets of statistical data, which is one of the ways uncertainty manifests itself in the development of dynamic processes. Invariant estimates of the measure of uncertainty in the dynamic processes of bank functioning, accounting for significant differences in the absolute values of the same indicators across banks, were obtained. Examples of the calculation of the measure of uncertainty in the dynamic processes of specific banks' functioning are given.
Modeling Heterogeneity in Networks using Uncertainty Quantification Tools
Rajendran, Karthikeyan; Siettos, Constantinos I; Laing, Carlo R; Kevrekidis, Ioannis G
2015-01-01
Using the dynamics of information propagation on a network as our illustrative example, we present and discuss a systematic approach to quantifying heterogeneity and its propagation that borrows established tools from Uncertainty Quantification. The crucial assumption underlying this mathematical and computational "technology transfer" is that the evolving states of the nodes in a network quickly become correlated with the corresponding node "identities": features of the nodes imparted by the network structure (e.g. the node degree, the node clustering coefficient). The node dynamics thus depend on heterogeneous (rather than uncertain) parameters, whose distribution over the network results from the network structure. Knowing these distributions allows us to obtain an efficient coarse-grained representation of the network state in terms of the expansion coefficients in suitable orthogonal polynomials. This representation is closely related to mathematical/computational tools for uncertainty quantification (th...
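The coarse-graining idea in the abstract above, node states becoming functions of node "identities" such as degree, and therefore expandable in a small polynomial basis, can be illustrated with a toy example. The degree distribution, the assumed state-degree relationship, and the basis degree are all invented; a plain least-squares polynomial fit stands in for an expansion in polynomials orthogonal with respect to the degree distribution.

```python
import numpy as np

rng = np.random.default_rng(3)

# Node "identities": degrees drawn from the network's degree distribution
deg = rng.poisson(8, size=5000).astype(float)

# Node states assumed to quickly become a smooth function of degree plus noise
state = np.tanh(0.3 * (deg - 8.0)) + 0.05 * rng.normal(size=deg.size)

# Coarse-grained representation: least-squares expansion of the state in a
# low-order polynomial basis in degree (5 coefficients instead of 5000 states)
V = np.polynomial.polynomial.polyvander(deg, 4)   # basis 1, d, d^2, d^3, d^4
coef, *_ = np.linalg.lstsq(V, state, rcond=None)

recon = V @ coef
rel_err = np.linalg.norm(state - recon) / np.linalg.norm(state)
print(f"5-coefficient coarse-grained fit, relative error {rel_err:.3f}")
```

The residual here is dominated by the node-level noise; the deterministic degree-dependence is captured by a handful of expansion coefficients, which is the compression the UQ analogy exploits.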
Estimation of uncertainty for fatigue growth rate at cryogenic temperatures
Nyilas, Arman; Weiss, Klaus P.; Urbach, Elisabeth; Marcinek, Dawid J.
2014-01-01
Fatigue crack growth rate (FCGR) measurement data for high-strength austenitic alloys in cryogenic environments suffer in general from a high degree of data scatter, in particular in the ΔK regime below 25 MPa√m. Standard mathematical smoothing techniques ultimately force a linear relationship in the stage II regime (crack propagation rate versus ΔK) in a double-logarithmic plot, called the Paris law. However, the bandwidth of uncertainty relies somewhat arbitrarily upon the researcher's interpretation. The present paper deals with the use of the uncertainty concept on FCGR data as given by GUM (Guide to the Expression of Uncertainty in Measurement), which since 1993 has been a recommended procedure to avoid subjective estimation of error bands. Within this context, in the absence of a true value, the best estimate is evaluated by a statistical method using the crack propagation law as the mathematical measurement model equation and identifying all input parameters. Each parameter necessary for the measurement technique was processed using the Gaussian distribution law, with partial differentiation of the terms to estimate the sensitivity coefficients. The combined standard uncertainty, determined for each term with its computed sensitivity coefficient, finally resulted in the measurement uncertainty of the FCGR test result. The described uncertainty procedure has been applied within the framework of ITER to a recent FCGR measurement for high-strength, high-toughness Type 316LN material tested at 7 K using a standard ASTM proportional compact tension specimen. The determined values of the Paris law constants, such as C0 and the exponent m, as best estimates along with their uncertainty values may serve as a realistic basis for the life expectancy of cyclically loaded members.
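The GUM recipe described above, partial derivatives as sensitivity coefficients, combined in quadrature, can be sketched for the Paris law da/dN = C·ΔK^m. The numerical values of C, m, ΔK and their standard uncertainties below are hypothetical, not the 316LN results of the paper, and correlations between C and m (usually strong in a fit) are ignored for brevity.

```python
import numpy as np

# Paris law: da/dN = C * dK**m, with best estimates and standard
# uncertainties (hypothetical values)
C, u_C = 1.0e-12, 2.0e-13           # m/cycle per (MPa sqrt(m))**m
m_exp, u_m = 3.2, 0.1
dK, u_dK = 20.0, 0.5                # MPa sqrt(m)

y = C * dK**m_exp                   # best estimate of da/dN

# GUM sensitivity coefficients (partial derivatives of the model)
c_C = dK**m_exp                     # dy/dC
c_m = y * np.log(dK)                # dy/dm
c_dK = C * m_exp * dK**(m_exp - 1)  # dy/d(dK)

# Combined standard uncertainty, inputs treated as uncorrelated
u_y = np.sqrt((c_C * u_C)**2 + (c_m * u_m)**2 + (c_dK * u_dK)**2)
print(f"da/dN = {y:.2e} +/- {u_y:.2e} m/cycle (k=1)")
```

With these numbers the exponent term dominates the budget, which is typical: a small uncertainty in m is amplified by ln(ΔK).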
The propagator of the attractive delta-Bose gas in one dimension
Prolhac, Sylvain
2011-01-01
We consider the quantum delta-Bose gas on the infinite line. For repulsive interaction, Tracy and Widom have obtained an exact formula for the quantum propagator. In our contribution we explicitly perform its analytic continuation to attractive interaction. We also study the connection to the expansion of the propagator in terms of the Bethe ansatz eigenfunctions. Thereby we prove their completeness.
Collective Uncertainty Entanglement Test
Rudnicki, Łukasz; Życzkowski, Karol
2011-01-01
For a given pure state of a composite quantum system we analyze the product of its projections onto a set of locally orthogonal separable pure states. We derive a bound for this product analogous to the entropic uncertainty relations. For bipartite systems the bound is saturated for maximally entangled states, and it allows us to construct a family of entanglement measures, which we shall call collectibility. As these quantities are experimentally accessible, the approach advocated contributes to the task of experimental quantification of quantum entanglement, while for a three-qubit system it is capable of identifying the genuine three-party entanglement.
Mathematical Analysis of Uncertainty
Directory of Open Access Journals (Sweden)
Angel GARRIDO
2016-01-01
Classical Logic showed early its insufficiencies for solving AI problems. The introduction of Fuzzy Logic aims at this problem. There has been research in the conventional Rough direction alone and in the Fuzzy direction alone, and more recently, attempts to combine both into Fuzzy Rough Sets or Rough Fuzzy Sets. We analyse some new and powerful tools in the study of Uncertainty, such as Probabilistic Graphical Models, Chain Graphs, Bayesian Networks, and Markov Networks, integrating our knowledge of graphs and probability.
Variance-based uncertainty relations
Huang, Yichen
2010-01-01
It is hard to overestimate the fundamental importance of uncertainty relations in quantum mechanics. In this work, I propose state-independent variance-based uncertainty relations for arbitrary observables in both finite and infinite dimensional spaces. We recover the Heisenberg uncertainty principle as a special case. By studying examples, we find that the lower bounds provided by our new uncertainty relations are optimal or near-optimal. I illustrate the uses of our new uncertainty relations by showing that they eliminate one common obstacle in a sequence of well-known works in entanglement detection, and thus make these works much easier to access in applications.
Higher order mode propagation in nonuniform circular ducts
Cho, Y. C.; Ingard, K. U.
1980-01-01
This paper presents an analytical investigation of higher order mode propagation in a nonuniform circular duct without mean flow. An approximate wave equation is derived on the assumptions that the duct cross section varies slowly and that mode conversion is negligible. Exact closed form solutions are obtained for a particular class of converging-diverging circular duct, here referred to as the 'circular cosh duct'. Numerical results are presented in terms of the transmission loss for various duct shapes and frequencies. The results are applicable to studies of multimodal propagation as well as single mode propagation. The results are also applicable to studies of sound radiation from certain types of contoured inlet ducts, or of sound propagation in a converging-diverging duct of somewhat different shape from a cosh duct.
Propagation of Airy Gaussian vortex beams in uniaxial crystals
Weihao, Yu; Ruihuang, Zhao; Fu, Deng; Jiayao, Huang; Chidao, Chen; Xiangbo, Yang; Yanping, Zhao; Dongmei, Deng
2016-04-01
The propagation dynamics of the Airy Gaussian vortex beams in uniaxial crystals orthogonal to the optical axis has been investigated analytically and numerically. The propagation expression of the beams has been obtained. The propagation features of the Airy Gaussian vortex beams are shown with changes of the distribution factor and the ratio of the extraordinary refractive index to the ordinary refractive index. The correlations between the ratio and the maximum intensity value during the propagation, and its appearing distance have been investigated. Project supported by the National Natural Science Foundation of China (Grant Nos. 11374108, 11374107, 10904041, and 11547212), the Foundation of Cultivating Outstanding Young Scholars of Guangdong Province, China, the CAS Key Laboratory of Geospace Environment, University of Science and Technology of China, the National Training Program of Innovation and Entrepreneurship for Undergraduates (Grant No. 2015093), and the Science and Technology Projects of Guangdong Province, China (Grant No. 2013B031800011).
Commwarrior worm propagation model for smart phone networks
Institute of Scientific and Technical Information of China (English)
XIA Wei; LI Zhao-hui; CHEN Zeng-qiang; YUAN Zhu-zhi
2008-01-01
Commwarrior worm is capable of spreading through both Bluetooth and the multimedia messaging service (MMS) in smart phone networks. According to the propagation characteristics of Bluetooth and MMS, we built the susceptible-exposed-infected-recovered-dormancy (SEIRD) model for the Bluetooth and MMS hybrid spread mode and performed the stability analysis. The simulation results show good agreement with our theoretical analysis and demonstrate the effectiveness of this dynamic propagation model. On the basis of the SEIRD model, we further discuss at length the influence of propagation parameters such as the user gathering density in groups, the moving velocity of smart phones, the time for the worm to replicate itself, and other interrelated parameters on the propagation of the virus. On the basis of these analytical and simulation results, some feasible control strategies are proposed to restrain the spread of mobile worms such as Commwarrior on smart phone networks.
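A compartmental SEIRD model of the kind described above can be integrated with a simple Euler scheme. The transition structure (exposure, activation, recovery, dormancy, reactivation) follows the abstract's compartments, but all rate constants below are illustrative, not the paper's fitted parameters:

```python
def seird_step(s, e, i, r, d, beta, sigma, gamma, delta, xi, dt):
    """One Euler step of a generic SEIRD worm-propagation model.
    beta: infection rate, sigma: exposed->infected, gamma: recovery,
    delta: infected->dormant, xi: dormant->susceptible (reactivation).
    All parameter values used below are illustrative."""
    n = s + e + i + r + d
    ds = (-beta * s * i / n + xi * d) * dt
    de = (beta * s * i / n - sigma * e) * dt
    di = (sigma * e - (gamma + delta) * i) * dt
    dr = gamma * i * dt
    dd = (delta * i - xi * d) * dt
    return s + ds, e + de, i + di, r + dr, d + dd

state = (9990.0, 0.0, 10.0, 0.0, 0.0)   # 10 infected phones out of 10000
for _ in range(5000):                   # integrate to t = 500
    state = seird_step(*state, beta=0.5, sigma=0.2, gamma=0.1,
                       delta=0.05, xi=0.02, dt=0.1)
```

Because every outflow from one compartment is an inflow to another, the total phone population is conserved exactly, which is a useful sanity check on the transition terms.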
MacNeill, Sheila; Campbell, Lorna M.; Hawksey, Martin
2014-01-01
This article presents an overview of the development and use of analytics in the context of education. Using Buckingham Shum's three levels of analytics, the authors present a critical analysis of current developments in the domain of learning analytics, and contrast the potential value of analytics research and development with real world…
Hwang, Rong-Jen; Rogers, Craig; Beltran, Jada; Razatos, Gerasimos; Avery, Jason
2016-06-01
Reporting a measurement uncertainty helps to determine the limitations of the method of analysis and aids in laboratory accreditation. This laboratory has conducted a study to estimate a reasonable uncertainty for the mass concentration of vaporous ethanol, in g/210 L, measured by the Intoxilyzer(®) 8000 breath analyzer. The uncertainty sources used were: gas chromatograph (GC) calibration adjustment, GC analytical, certified reference material, Intoxilyzer(®) 8000 calibration adjustment and Intoxilyzer(®) 8000 analytical. Standard uncertainties attributed to these sources were calculated and separated into proportional and constant standard uncertainties. The combined proportional and the combined constant standard uncertainties were each further expanded, expressed as a percentage and as a unit, respectively. To prevent any underreporting of the expanded uncertainty, 0.10 g/210 L was chosen as the defining point for expressing it. For the Intoxilyzer(®) 8000, all vaporous ethanol results at or above 0.10 g/210 L will carry an expanded uncertainty reported as ±3.6% at a confidence level of 95% (k = 2); vaporous ethanol results below 0.10 g/210 L will carry an expanded uncertainty reported as ±0.0036 g/210 L at a confidence level of 95% (k = 2).
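The two-regime reporting rule above can be sketched directly, along with the root-sum-of-squares combination of standard uncertainties that typically precedes it. The combination step shown is standard GUM practice assumed for illustration, not the laboratory's exact budget:

```python
import math

def combine(components):
    """Root-sum-of-squares combination of standard uncertainties."""
    return math.sqrt(sum(u * u for u in components))

def expanded_uncertainty(result, u_prop_pct=3.6, u_const=0.0036, switch=0.10):
    """Expanded uncertainty (k=2, ~95 %) for a breath-alcohol result in
    g/210 L: proportional at or above the switch point, constant below,
    mirroring the two-regime rule described above."""
    if result >= switch:
        return result * u_prop_pct / 100.0   # +/- in g/210 L
    return u_const
```

Note the design choice in the reported figures: 3.6% of 0.10 g/210 L is exactly 0.0036 g/210 L, so the rule is continuous at the switch point and the constant term never understates the uncertainty below it.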
Haji Hosseinloo, Ashkan; Turitsyn, Konstantin
2016-05-01
Vibratory energy harvesters, potential replacements for conventional batteries, are not as robust as batteries. Their performance can drastically deteriorate in the presence of uncertainty in their parameters. Parametric uncertainty is inevitable in any physical device, mainly due to manufacturing tolerances, defects, and environmental effects such as temperature and humidity. Hence, uncertainty propagation analysis and optimization under uncertainty seem indispensable in any energy harvester design. Here we propose a new modeling philosophy for optimization under uncertainty: optimization for the worst-case scenario (minimum power) rather than for the ensemble expectation of the power. The proposed optimization philosophy is practically very useful when there is a minimum requirement on the harvested power. We formulate the problems of uncertainty propagation and optimization under uncertainty in a generic and architecture-independent fashion, and then apply them to a single-degree-of-freedom linear piezoelectric energy harvester with uncertainty in its different parameters. The simulation results show a significant improvement in the worst-case power of the designed harvester compared to that of a naively (deterministically) optimized harvester. For instance, for a 10% uncertainty in the natural frequency of the harvester (in terms of its standard deviation), this improvement is about 570%.
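The worst-case design philosophy can be illustrated with a toy max-min search over a one-parameter design space. The normalized transfer function below is a generic SDOF response, not the paper's piezoelectric model, and the damping ratio and grids are arbitrary:

```python
def power(omega, omega_n, zeta=0.05):
    """Normalized power of a toy linear SDOF harvester driven at omega
    with natural frequency omega_n (illustrative transfer function)."""
    r = omega / omega_n
    return r ** 4 / ((1 - r ** 2) ** 2 + (2 * zeta * r) ** 2)

def worst_case(omega, omega_n, spread=0.10, n=41):
    """Minimum power when the realized natural frequency lies anywhere
    within +/-spread of the nominal design value."""
    vals = [power(omega, omega_n * (1 + spread * (2 * k / (n - 1) - 1)))
            for k in range(n)]
    return min(vals)

omega = 1.0
candidates = [0.8 + 0.001 * k for k in range(400)]   # design grid
naive = max(candidates, key=lambda wn: power(omega, wn))       # nominal
robust = max(candidates, key=lambda wn: worst_case(omega, wn))  # max-min
```

The deterministic optimum tunes the harvester to resonance, while the robust design maximizes the guaranteed (minimum) power over the uncertainty interval, which is the paper's design criterion.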
MacIntosh, C. R.; Merchant, C. J.; von Schuckmann, K.
2016-10-01
This article presents a review of current practice in estimating steric sea level change, focussed on the treatment of uncertainty. Steric sea level change is the contribution to the change in sea level arising from the dependence of density on temperature and salinity. It is a significant component of sea level rise and a reflection of changing ocean heat content. However, tracking these steric changes still remains a significant challenge for the scientific community. We review the importance of understanding the uncertainty in estimates of steric sea level change. Relevant concepts of uncertainty are discussed and illustrated with the example of observational uncertainty propagation from a single profile of temperature and salinity measurements to steric height. We summarise and discuss the recent literature on methodologies and techniques used to estimate steric sea level in the context of the treatment of uncertainty. Our conclusions are that progress in quantifying steric sea level uncertainty will benefit from: greater clarity and transparency in published discussions of uncertainty, including exploitation of international standards for quantifying and expressing uncertainty in measurement; and the development of community "recipes" for quantifying the error covariances in observations and from sparse sampling and for estimating and propagating uncertainty across spatio-temporal scales.
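The profile-to-steric-height propagation example discussed above can be sketched with a linear equation of state. The expansion coefficient, layer grid, and per-level temperature uncertainties are assumed for illustration; the contrast between uncorrelated and fully correlated error propagation shows why the error covariances the authors call for matter:

```python
import math

alpha = 2.0e-4          # thermal expansion coefficient (1/K), assumed
dz = [10.0] * 50        # 50 layers of 10 m, down to 500 m (assumed grid)
u_T = [0.01] * 50       # per-level temperature uncertainty (K), assumed

# Thermosteric height h ~ sum(alpha * dT * dz): per-level sensitivity
c = [alpha * z for z in dz]

# Independent level errors add in quadrature ...
u_uncorr = math.sqrt(sum(ci * ci * ui * ui for ci, ui in zip(c, u_T)))
# ... fully correlated errors (e.g. a sensor bias) add linearly
u_corr = sum(ci * ui for ci, ui in zip(c, u_T))
```

For this profile the fully correlated case gives an uncertainty sqrt(50) times larger than the independent case, so the assumed covariance structure dominates the steric height error budget.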
MacIntosh, C. R.; Merchant, C. J.; von Schuckmann, K.
2017-01-01
This article presents a review of current practice in estimating steric sea level change, focussed on the treatment of uncertainty. Steric sea level change is the contribution to the change in sea level arising from the dependence of density on temperature and salinity. It is a significant component of sea level rise and a reflection of changing ocean heat content. However, tracking these steric changes still remains a significant challenge for the scientific community. We review the importance of understanding the uncertainty in estimates of steric sea level change. Relevant concepts of uncertainty are discussed and illustrated with the example of observational uncertainty propagation from a single profile of temperature and salinity measurements to steric height. We summarise and discuss the recent literature on methodologies and techniques used to estimate steric sea level in the context of the treatment of uncertainty. Our conclusions are that progress in quantifying steric sea level uncertainty will benefit from: greater clarity and transparency in published discussions of uncertainty, including exploitation of international standards for quantifying and expressing uncertainty in measurement; and the development of community "recipes" for quantifying the error covariances in observations and from sparse sampling and for estimating and propagating uncertainty across spatio-temporal scales.
Integrating Out Astrophysical Uncertainties
Fox, Patrick J; Weiner, Neal
2010-01-01
Underground searches for dark matter involve a complicated interplay of particle physics, nuclear physics, atomic physics and astrophysics. We attempt to remove the uncertainties associated with astrophysics by developing the means to map the observed signal in one experiment directly into a predicted rate at another. We argue that it is possible to make experimental comparisons that are completely free of astrophysical uncertainties by focusing on {\em integral} quantities, such as $g(v_{min})=\int_{v_{min}} dv\, f(v)/v $ and $\int_{v_{thresh}} dv\, v g(v)$. Direct comparisons are possible when the $v_{min}$ space probed by different experiments overlaps. As examples, we consider the possible dark matter signals at CoGeNT, DAMA and CRESST-Oxygen. We find that the expected rate from CoGeNT in the XENON10 experiment is higher than observed, unless the scintillation light output is low. Moreover, we determine that S2-only analyses are constraining, unless the charge yield $Q_y< 2.4 {\, \rm electrons/keV}$. For DAMA t...
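The halo integral $g(v_{min})$ named above is straightforward to evaluate numerically for any assumed speed distribution. The uniform distribution below is a stand-in chosen because its integral has a closed form to check against, not a halo model:

```python
def g_of_vmin(f, vmin, vmax, n=10000):
    """Trapezoidal estimate of g(v_min) = int_{v_min}^{v_max} f(v)/v dv,
    the astrophysics-dependent factor in direct-detection rates."""
    if vmin >= vmax:
        return 0.0
    h = (vmax - vmin) / n
    total = 0.5 * (f(vmin) / vmin + f(vmax) / vmax)
    for k in range(1, n):
        v = vmin + k * h
        total += f(v) / v
    return total * h

# Stand-in speed distribution: uniform on [200, 600] km/s
a, b = 200.0, 600.0
f_uniform = lambda v: 1.0 / (b - a) if a <= v <= b else 0.0
g300 = g_of_vmin(f_uniform, 300.0, b)   # analytic: ln(600/300)/(b - a)
```

Two experiments probing the same $v_{min}$ value see the same $g(v_{min})$ regardless of which $f(v)$ is assumed, which is the halo-independence argument of the paper.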
Wave propagation in elastic solids
Achenbach, Jan
1984-01-01
The propagation of mechanical disturbances in solids is of interest in many branches of the physical sciences and engineering. This book aims to present an account of the theory of wave propagation in elastic solids. The material is arranged to present an exposition of the basic concepts of mechanical wave propagation within a one-dimensional setting and a discussion of formal aspects of elastodynamic theory in three dimensions, followed by chapters expounding on typical wave propagation phenomena, such as radiation, reflection, refraction, propagation in waveguides, and diffraction. The treat
An analytic method for sensitivity analysis of complex systems
Zhu, Yueying; Li, Wei; Cai, Xu
2016-01-01
Sensitivity analysis is concerned with understanding how the model output depends on uncertainties (variances) in inputs and then identifying which inputs are important in contributing to the prediction imprecision. Uncertainty determination in the output is the most crucial step in sensitivity analysis. In the present paper, an analytic expression, which can exactly evaluate the uncertainty in the output as a function of the output's derivatives and the inputs' central moments, is first deduced for general multivariate models with a given relationship between output and inputs in terms of Taylor series expansion. A $\gamma$-order relative uncertainty for the output, denoted by $\mathrm{R^{\gamma}_v}$, is introduced to quantify the contributions of input uncertainty of different orders. On this basis, it is shown that the widely used approximation considering the first order contribution from the variance of input variable can satisfactorily express the output uncertainty only when the input variance is very small or the inpu...
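The paper's point that first-order variance propagation degrades as input variance grows can be checked on the simplest nonlinear model, y = x² with Gaussian input, where the exact output variance (4μ²σ² + 2σ⁴) is known in closed form:

```python
def first_order_var(dfdx, var_x):
    """First-order Taylor estimate of the output variance of y = f(x):
    Var[y] ~ (df/dx)^2 * Var[x]."""
    return dfdx ** 2 * var_x

mu = 1.0
errs = {}
for sigma in (0.05, 0.5):
    approx = first_order_var(2 * mu, sigma ** 2)       # y = x**2
    exact = 4 * mu ** 2 * sigma ** 2 + 2 * sigma ** 4  # Gaussian input
    errs[sigma] = abs(exact - approx) / exact
```

For the small input variance the first-order estimate is accurate to about 0.1%, while for the large one it misses roughly 11% of the variance, the higher-order (central-moment) contribution that the paper's exact expression retains.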
Propagation speed of gamma radiation in brass
Energy Technology Data Exchange (ETDEWEB)
Cavalcante, Jose T.P.D.; Silva, Paulo R.J.; Saitovich, Henrique
2009-07-01
The propagation speed (PS) of visible light - represented by a short frequency range within the large frame of electromagnetic radiation (ER) frequencies - in air was measured during the last century using a great variety of methods, with high precision results being achieved. Presently a well accepted value, with very small uncertainty, is c = 299,792.458 km/s (c referring to the Latin word celeritas: 'swiftness'). When propagating in denser material media (MM), the value is always lower than in air, with the density of the propagating MM playing an important role. Until now, such studies of propagation speeds, refractive indexes and dispersion were mainly related to visible light, or to ER in wavelength ranges close to it, and to transparent MM. A first incursion into this subject dealing with γ-rays was performed using an electronic coincidence counting system, with which the PS in air was measured as c_γ(air) = 298,300.15 km/s; the method was continued with later electronic improvements, always in air. To perform such measurements, the availability of a γ-radiation source in which two γ-rays are emitted simultaneously in opposite directions - as already used, and also applied in the present case - is essential to the feasibility of the experiment, since no reflection techniques can be used. Such a suitable source is the positron emitter ²²Na placed in a thin-walled metal container, in which the positrons are stopped and annihilated on reacting with the electrons of the medium, originating - as is very well established from momentum/energy conservation laws - two gamma rays of 511 keV each, emitted simultaneously in opposite directions. All previous experiments used photomultiplier detectors coupled to NaI(Tl) crystal scintillators, which have good energy resolution but deficient time resolution for such purposes
Temporal scaling in information propagation.
Huang, Junming; Li, Chao; Wang, Wen-Qiang; Shen, Hua-Wei; Li, Guojie; Cheng, Xue-Qi
2014-06-18
For the study of information propagation, one fundamental problem is uncovering universal laws governing the dynamics of information propagation. This problem, from the microscopic perspective, is formulated as estimating the propagation probability that a piece of information propagates from one individual to another. Such a propagation probability generally depends on two major classes of factors: the intrinsic attractiveness of information and the interactions between individuals. Despite the fact that the temporal effect of attractiveness is widely studied, temporal laws underlying individual interactions remain unclear, causing inaccurate prediction of information propagation on evolving social networks. In this report, we empirically study the dynamics of information propagation, using the dataset from a population-scale social media website. We discover a temporal scaling in information propagation: the probability a message propagates between two individuals decays with the length of time latency since their latest interaction, obeying a power-law rule. Leveraging the scaling law, we further propose a temporal model to estimate future propagation probabilities between individuals, reducing the error rate of information propagation prediction from 6.7% to 2.6% and improving viral marketing with 9.7% incremental customers.
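The power-law decay of propagation probability with interaction latency can be recovered from data by a least-squares fit in log-log space. The synthetic decay data below (prefactor 0.30, exponent 0.8) are invented for illustration, not the paper's measurements:

```python
import math

def fit_power_law(t, p):
    """Least-squares fit of p = c * t**(-alpha) in log-log coordinates;
    returns (c, alpha)."""
    xs = [math.log(ti) for ti in t]
    ys = [math.log(pi) for pi in p]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return math.exp(my - slope * mx), -slope

t = [1, 2, 4, 8, 16, 32]                 # latency since last interaction
p = [0.30 * ti ** -0.8 for ti in t]      # synthetic decay, alpha = 0.8
c, alpha = fit_power_law(t, p)
```

With a fitted (c, alpha) in hand, the scaling law gives the forward prediction of the propagation probability at any future latency, which is how the temporal model improves the reported prediction error.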
Temporal scaling in information propagation
Huang, Junming; Li, Chao; Wang, Wen-Qiang; Shen, Hua-Wei; Li, Guojie; Cheng, Xue-Qi
2014-06-01
For the study of information propagation, one fundamental problem is uncovering universal laws governing the dynamics of information propagation. This problem, from the microscopic perspective, is formulated as estimating the propagation probability that a piece of information propagates from one individual to another. Such a propagation probability generally depends on two major classes of factors: the intrinsic attractiveness of information and the interactions between individuals. Despite the fact that the temporal effect of attractiveness is widely studied, temporal laws underlying individual interactions remain unclear, causing inaccurate prediction of information propagation on evolving social networks. In this report, we empirically study the dynamics of information propagation, using the dataset from a population-scale social media website. We discover a temporal scaling in information propagation: the probability a message propagates between two individuals decays with the length of time latency since their latest interaction, obeying a power-law rule. Leveraging the scaling law, we further propose a temporal model to estimate future propagation probabilities between individuals, reducing the error rate of information propagation prediction from 6.7% to 2.6% and improving viral marketing with 9.7% incremental customers.
Institute of Scientific and Technical Information of China (English)
王晖; 刘大有; 等
1994-01-01
In this paper we consider the problem of sequential processing and present a sequential model based on the back-propagation algorithm. This model is intended to deal with intrinsically sequential problems, such as word recognition, speech recognition, and natural language understanding. This model can be used to train a network to learn the sequence of input patterns, in a fixed order or a random order. Besides, this model is open- and partial-associative, characterized as "recognizing while accumulating", which, as we argue, is oriented toward the mental cognition process.
1948-06-25
Chapter 2 presents in simple form the mathematical theory underlying the propagation of radio waves, covering principles which have been found to work in practice. (The remainder of this 1948 record is illegible OCR residue, referencing world contour charts, the measurement of virtual heights, median values, and the reading of muf values.)
Particle Dark Matter constraints: the effect of Galactic uncertainties
Benito, Maria; Bernal, Nicolás; Bozorgnia, Nassim; Calore, Francesca; Iocco, Fabio
2017-02-01
Collider, space, and Earth based experiments are now able to probe several extensions of the Standard Model of particle physics which provide viable dark matter candidates. Direct and indirect dark matter searches rely on inputs of astrophysical nature, such as the local dark matter density or the shape of the dark matter density profile in the target object. The determination of these quantities is highly affected by astrophysical uncertainties. The latter, especially those for our own Galaxy, are ill-known, and often not fully accounted for when analyzing the phenomenology of particle physics models. In this paper we present a systematic, quantitative estimate of how astrophysical uncertainties on Galactic quantities (such as the local galactocentric distance, circular velocity, or the morphology of the stellar disk and bulge) propagate to the determination of the phenomenology of particle physics models, thus eventually affecting the determination of new physics parameters. We present results in the context of two specific extensions of the Standard Model (the Singlet Scalar and the Inert Doublet) that we adopt as case studies for their simplicity in illustrating the magnitude and impact of such uncertainties on the parameter space of the particle physics model itself. Our findings point toward very relevant effects of current Galactic uncertainties on the determination of particle physics parameters, and urge a systematic estimate of such uncertainties in more complex scenarios, in order to achieve constraints on the determination of new physics that realistically include all known uncertainties.
Considering rating curve uncertainty in water level predictions
Sikorska, A. E.; Scheidegger, A.; Banasik, K.; Rieckermann, J.
2013-11-01
Streamflow cannot be measured directly and is typically derived with a rating curve model. Unfortunately, this causes uncertainties in the streamflow data and also influences the calibration of rainfall-runoff models if they are conditioned on such data. However, it is currently unknown to what extent these uncertainties propagate to rainfall-runoff predictions. This study therefore presents a quantitative approach to rigorously consider the impact of the rating curve on the prediction uncertainty of water levels. The uncertainty analysis is performed within a formal Bayesian framework and the contributions of rating curve versus rainfall-runoff model parameters to the total predictive uncertainty are addressed. A major benefit of the approach is its independence from the applied rainfall-runoff model and rating curve. In addition, it only requires already existing hydrometric data. The approach was successfully demonstrated on a small catchment in Poland, where a dedicated monitoring campaign was performed in 2011. The results of our case study indicate that the uncertainty in calibration data derived by the rating curve method may be of the same relevance as rainfall-runoff model parameters themselves. A conceptual limitation of the approach presented is that it is limited to water level predictions. Nevertheless, regarding flood level predictions, the Bayesian framework seems very promising because it (i) enables the modeler to incorporate informal knowledge from easily accessible information and (ii) better assesses the individual error contributions. Especially the latter is important to improve the predictive capability of hydrological models.
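A common rating curve form is the power law Q = a·(h − h0)^b, and the simplest way to see how its parameter uncertainty propagates to streamflow is Monte Carlo sampling. The parameter values, uncertainties, and independence assumption below are hypothetical (the paper itself uses a formal Bayesian treatment with full dependence structure):

```python
import random
import statistics

def discharge(h, a, b, h0):
    """Power-law rating curve Q = a * (h - h0)**b, a common functional
    form; the parameter values sampled below are illustrative."""
    return a * max(h - h0, 0.0) ** b

random.seed(42)
h = 1.5                                        # observed stage (m)
samples = [discharge(h,
                     random.gauss(5.0, 0.25),  # a: scale
                     random.gauss(1.6, 0.05),  # b: exponent
                     random.gauss(0.20, 0.02)) # h0: cease-to-flow stage
           for _ in range(20000)]
q_mean = statistics.mean(samples)
q_sd = statistics.stdev(samples)
```

The spread q_sd (here roughly 6% of the mean discharge) is the rating-curve contribution to streamflow uncertainty that then enters the calibration data of any rainfall-runoff model conditioned on this gauge.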
Considering rating curve uncertainty in water level predictions
Directory of Open Access Journals (Sweden)
A. E. Sikorska
2013-03-01
Full Text Available Streamflow cannot be measured directly and is typically derived with a rating curve model. Unfortunately, this causes uncertainties in the streamflow data and also influences the calibration of rainfall-runoff models if they are conditioned on such data. However, it is currently unknown to what extent these uncertainties propagate to rainfall-runoff predictions. This study therefore presents a quantitative approach to rigorously consider the impact of the rating curve on the prediction uncertainty of water levels. The uncertainty analysis is performed within a formal Bayesian framework and the contributions of rating curve versus rainfall-runoff model parameters to the total predictive uncertainty are addressed. A major benefit of the approach is its independence from the applied rainfall-runoff model and rating curve. In addition, it only requires already existing hydrometric data. The approach was successfully tested on a small urbanized basin in Poland, where a dedicated monitoring campaign was performed in 2011. The results of our case study indicate that the uncertainty in calibration data derived by the rating curve method may be of the same relevance as rainfall-runoff model parameters themselves. A conceptual limitation of the approach presented is that it is limited to water level predictions. Nevertheless, regarding flood level predictions, the Bayesian framework seems very promising because it (i) enables the modeler to incorporate informal knowledge from easily accessible information and (ii) better assesses the individual error contributions. Especially the latter is important to improve the predictive capability of hydrological models.
Lorentz Invariance Violation and Generalized Uncertainty Principle
Tawfik, A; Ali, A Farag
2016-01-01
Recent approaches for quantum gravity are conjectured to give predictions for a minimum measurable length, a maximum observable momentum and an essential generalization of the Heisenberg uncertainty principle (GUP). The latter is based on a momentum-dependent modification of the standard dispersion relation and leads to Lorentz invariance violation (LIV). The main features of the controversial OPERA measurements on the faster-than-light muon neutrino anomaly are used to calculate the time of flight delays $\Delta t$ and the relative change $\Delta v$ in the speed of the neutrino as functions of the redshift $z$. The results are compared with the OPERA measurements. We find that the measurements are too large to be interpreted as LIV. Depending on the rest mass, the propagation of high-energy muon neutrinos can be superluminal. The comparison with ultra high energy cosmic rays seems to reveal an essential ingredient of the approach combining string theory, loop quantum gravity, black hole physics and doubly ...
Estimation of measurement uncertainty arising from manual sampling of fuels.
Theodorou, Dimitrios; Liapis, Nikolaos; Zannikos, Fanourios
2013-02-15
Sampling is an important part of any measurement process and is therefore recognized as an important contributor to the measurement uncertainty. A reliable estimation of the uncertainty arising from sampling of fuels leads to better control of the risks associated with decisions concerning whether product specifications are met or not. The present work describes and compares the results of three empirical statistical methodologies (classical ANOVA, robust ANOVA and range statistics) using data from a balanced experimental design, which includes duplicate samples analyzed in duplicate from 104 sampling targets (petroleum retail stations). These methodologies are used for the estimation of the uncertainty arising from the manual sampling of fuel (automotive diesel) and the subsequent sulfur mass content determination. The results of the three methodologies differ statistically, with the expanded sampling uncertainty being in the range of 0.34-0.40 mg kg⁻¹, and the relative expanded uncertainty lying in the range of 4.8-5.1%, depending on the methodology used. The estimate of robust ANOVA (expanded sampling uncertainty of 0.34 mg kg⁻¹, or 4.8% in relative terms) is considered more reliable, because of the presence of outliers within the 104 datasets used for the calculations. Robust ANOVA, in contrast to classical ANOVA and range statistics, accommodates outlying values, lessening their effects on the produced estimates. The results of this work also show that, in the case of manual sampling of fuels, the main contributor to the whole measurement uncertainty is the analytical measurement uncertainty, with the sampling uncertainty accounting for only 29% of the total measurement uncertainty.
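The balanced duplicate design (duplicate samples, each analyzed in duplicate) lets classical ANOVA split the variance into sampling and analytical components. The sketch below uses synthetic data with known variance components to check the mean-square algebra; it implements the classical (not the robust) variant:

```python
import math
import random

def variance_components(data):
    """Sampling vs analytical standard uncertainty from a balanced
    duplicate design: data[i] = (x11, x12, x21, x22) for target i,
    two samples each analyzed in duplicate (classical ANOVA)."""
    n = len(data)
    # E[(x_j1 - x_j2)^2] = 2 * var_analysis for each sample j
    var_a = sum((x11 - x12) ** 2 + (x21 - x22) ** 2
                for x11, x12, x21, x22 in data) / (4 * n)
    # E[(mean_1 - mean_2)^2 / 2] = var_sampling + var_analysis / 2
    var_means = sum(((x11 + x12) / 2 - (x21 + x22) / 2) ** 2
                    for x11, x12, x21, x22 in data) / (2 * n)
    var_s = max(var_means - var_a / 2, 0.0)
    return math.sqrt(var_s), math.sqrt(var_a)

random.seed(1)
sigma_s, sigma_a = 2.0, 1.0               # true components (synthetic)
data = []
for _ in range(4000):
    mu = random.gauss(50, 5)              # between-target variation
    s = [random.gauss(0, sigma_s) for _ in range(2)]
    data.append(tuple(mu + s[j] + random.gauss(0, sigma_a)
                      for j in (0, 0, 1, 1)))
u_sampling, u_analysis = variance_components(data)
```

On real fuel data, outliers inflate these classical mean squares, which is why the paper prefers robust ANOVA for the final estimate.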
Wave propagation in spatially modulated tubes
Ziepke, A.; Martens, S.; Engel, H.
2016-09-01
We investigate wave propagation in rotationally symmetric tubes with a periodic spatial modulation of the cross section. Using an asymptotic perturbation analysis, the governing quasi-two-dimensional reaction-diffusion equation can be reduced to a one-dimensional reaction-diffusion-advection equation. Assuming a weak perturbation by the advection term and using a projection method, in a second step an equation of motion for traveling waves within such tubes can be derived. Both methods properly predict the nonlinear dependence of the propagation velocity on the ratio of the modulation period of the geometry to the intrinsic width of the front, or pulse. As a main feature, we observe finite intervals of propagation failure of waves induced by the tube's modulation and derive an analytically tractable condition for their occurrence. For the highly diffusive limit, using the Fick-Jacobs approach, we show that wave velocities within modulated tubes are governed by an effective diffusion coefficient. Furthermore, we discuss the effects of a single bottleneck on the period of pulse trains. We observe period changes by integer fractions dependent on the bottleneck width and the period of the entering pulse train.
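 
In the Fick-Jacobs limit mentioned above, a standard closed-form estimate for the effective diffusion coefficient in a periodically modulated channel is the Lifson-Jackson expression D_eff = D / (⟨A⟩⟨1/A⟩), with averages over one modulation period. This is a textbook result used here as an assumed sketch, not necessarily the paper's exact expression:

```python
import math

def d_eff(area, period, d0=1.0, n=10000):
    """Lifson-Jackson estimate D_eff = D0 / (<A> * <1/A>), with the
    period-averages evaluated by the midpoint rule (Fick-Jacobs limit)."""
    xs = [period * (k + 0.5) / n for k in range(n)]
    mean_a = sum(area(x) for x in xs) / n
    mean_inv = sum(1.0 / area(x) for x in xs) / n
    return d0 / (mean_a * mean_inv)

# Sinusoidally modulated cross-section (illustrative modulation depth)
area = lambda x: 1.0 + 0.5 * math.sin(2 * math.pi * x)
deff = d_eff(area, 1.0)
```

For an unmodulated tube the formula returns D0 unchanged, while any modulation strictly reduces D_eff (by the Cauchy-Schwarz inequality ⟨A⟩⟨1/A⟩ ≥ 1), consistent with waves slowing in modulated tubes.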
Wave propagation in spatially modulated tubes.
Ziepke, A; Martens, S; Engel, H
2016-09-07
We investigate wave propagation in rotationally symmetric tubes with a periodic spatial modulation of the cross section. Using an asymptotic perturbation analysis, the governing quasi-two-dimensional reaction-diffusion equation can be reduced to a one-dimensional reaction-diffusion-advection equation. Assuming a weak perturbation by the advection term and using a projection method, in a second step an equation of motion for traveling waves within such tubes can be derived. Both methods properly predict the nonlinear dependence of the propagation velocity on the ratio of the modulation period of the geometry to the intrinsic width of the front, or pulse. As a main feature, we observe finite intervals of propagation failure of waves induced by the tube's modulation and derive an analytically tractable condition for their occurrence. For the highly diffusive limit, using the Fick-Jacobs approach, we show that wave velocities within modulated tubes are governed by an effective diffusion coefficient. Furthermore, we discuss the effects of a single bottleneck on the period of pulse trains. We observe period changes by integer fractions dependent on the bottleneck width and the period of the entering pulse train.
Modeling Light Propagation in Luminescent Media
Sahin, Derya
This study presents physical, computational and analytical modeling approaches for light propagation in luminescent random media. Two different approaches are used, namely (i) a statistical approach: Monte-Carlo simulations for photon transport and (ii) a deterministic approach: radiative transport theory. Both approaches account accurately for the multiple absorption and reemission of light at different wavelengths and for anisotropic luminescence. The deterministic approach is a generalization of radiative transport theory for solving inelastic scattering problems in random media. We use the radiative transport theory to study light propagation in luminescent media. Based on this theory, we also study the optically thick medium. Using perturbation methods, a corrected diffusion approximation with asymptotically accurate boundary conditions and a boundary layer solution are derived. The accuracy and the efficacy of this approach are verified for a plane-parallel slab problem. In particular, we apply these two approaches (MC and radiative transport theory) to model light propagation in semiconductor-based luminescent solar concentrators (LSCs). The computational results for both approaches are compared with each other and found to agree. The results of this dissertation present practical and reliable techniques for solving forward/inverse inelastic scattering problems arising in various research areas such as optics, biomedical engineering, nuclear engineering, solar science and material science.
Measurement uncertainty of lactase-containing tablets analyzed with FTIR.
Paakkunainen, Maaret; Kohonen, Jarno; Reinikainen, Satu-Pia
2014-01-01
Uncertainty is one of the most critical aspects in determining measurement reliability. In order to ensure accurate measurements, results need to be traceable and uncertainty measurable. In this study, the homogeneity of FTIR samples is determined with a combination of variographic and multivariate approaches. An approach for estimating uncertainty within an individual sample, as well as across repeated samples, is introduced. FTIR samples containing two commercial pharmaceutical lactase products (LactaNON and Lactrase) are used as an example of the procedure. The results showed that the approach is suitable for the purpose. The sample pellets were quite homogeneous, since the total uncertainty of each pellet varied between 1.5% and 2.5%. The heterogeneity within a tablet strip was found to be dominant, as 15-20 tablets have to be analyzed in order to achieve a <5.0% expanded uncertainty level. Uncertainty arising from the FTIR instrument was <1.0%. The uncertainty estimates are computed directly from FTIR spectra without any concentration information on the analyte.
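The relation between the number of tablets analyzed and the expanded uncertainty of the mean can be sketched as follows; the between-tablet and within-pellet standard uncertainties below are assumed values chosen only to be consistent with the reported 15-20 tablet range, not the study's actual estimates:

```python
import math

def tablets_needed(s_between, s_within, target_U, k=2):
    """Smallest n for which the expanded uncertainty of the mean,
    U = k * sqrt((s_between**2 + s_within**2) / n), drops below target_U.
    Inputs are relative standard uncertainties in %, coverage factor k = 2
    (approx. 95 % confidence)."""
    n = 1
    while k * math.sqrt((s_between**2 + s_within**2) / n) >= target_U:
        n += 1
    return n

# Assumed ~10 % between-tablet heterogeneity and ~2 % within-pellet uncertainty:
print(tablets_needed(s_between=10.0, s_within=2.0, target_U=5.0))  # → 17
```

With a dominant between-tablet term, averaging more tablets is the only way to shrink the expanded uncertainty, which is why instrument precision (<1.0%) alone cannot deliver the <5.0% target.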
Uncertainty Quantification for Large-Scale Ice Sheet Modeling
Energy Technology Data Exchange (ETDEWEB)
Ghattas, Omar [Univ. of Texas, Austin, TX (United States)
2016-02-05
This report summarizes our work to develop advanced forward and inverse solvers and uncertainty quantification capabilities for a nonlinear 3D full Stokes continental-scale ice sheet flow model. The components include: (1) forward solver: a new state-of-the-art parallel adaptive scalable high-order-accurate mass-conservative Newton-based 3D nonlinear full Stokes ice sheet flow simulator; (2) inverse solver: a new adjoint-based inexact Newton method for solution of deterministic inverse problems governed by the above 3D nonlinear full Stokes ice flow model; and (3) uncertainty quantification: a novel Hessian-based Bayesian method for quantifying uncertainties in the inverse ice sheet flow solution and propagating them forward into predictions of quantities of interest such as ice mass flux to the ocean.
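The Hessian-based Bayesian step can be illustrated in a low-dimensional linearized (Laplace) setting; the dimensions, Jacobian, and noise levels below are stand-ins, not the ice sheet model itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# Linearized Bayesian update: the posterior covariance is the inverse of the
# Gauss-Newton Hessian of the negative log-posterior,
#   C_post = (J^T Gamma_noise^-1 J + Gamma_prior^-1)^-1,
# where J is the Jacobian of the parameter-to-observable map.
n_param, n_obs = 4, 6
J = rng.standard_normal((n_obs, n_param))
Gamma_noise_inv = np.eye(n_obs) / 0.1**2     # observation noise sigma = 0.1
Gamma_prior_inv = np.eye(n_param) / 1.0**2   # prior sigma = 1.0

H = J.T @ Gamma_noise_inv @ J + Gamma_prior_inv   # Gauss-Newton Hessian
C_post = np.linalg.inv(H)

# Propagate posterior uncertainty forward into a scalar quantity of interest
# q = g^T m (e.g. a linear functional standing in for ice mass flux).
g = np.ones(n_param)
var_q = g @ C_post @ g
assert var_q < g @ np.linalg.inv(Gamma_prior_inv) @ g  # data reduce uncertainty
```

At continental scale the Hessian is never formed explicitly; low-rank approximations of the data-misfit term make the same update tractable, but the algebra above is the underlying idea.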
Iterative Methods for Scalable Uncertainty Quantification in Complex Networks
Surana, Amit; Banaszuk, Andrzej
2011-01-01
In this paper we address the problem of uncertainty management for robust design and verification of large dynamic networks whose performance is affected by an equally large number of uncertain parameters. Many such networks (e.g. power, thermal and communication networks) are often composed of weakly interacting subnetworks. We propose intrusive and non-intrusive iterative schemes that exploit such weak interconnections to overcome the curse of dimensionality associated with traditional uncertainty quantification methods (e.g. generalized Polynomial Chaos, Probabilistic Collocation) and accelerate uncertainty propagation in systems with a large number of uncertain parameters. This approach relies on integrating graph theoretic methods and waveform relaxation with generalized Polynomial Chaos and Probabilistic Collocation, rendering these techniques scalable. We analyze the convergence properties of this scheme and illustrate it on several examples.
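The non-intrusive building block, probabilistic collocation, can be sketched on a scalar model with one Gaussian parameter: evaluate the model at Gauss-Hermite nodes and weight-average the outputs. The decay model and parameter values are illustrative, not from the paper:

```python
import numpy as np

# Probabilistic collocation for dx/dt = -k x with uncertain rate k ~ N(mu, sigma):
# evaluate the model output x(T) at Gauss-Hermite nodes and weight-average.
mu, sigma, T, x0 = 1.0, 0.2, 1.0, 1.0
nodes, weights = np.polynomial.hermite_e.hermegauss(8)  # probabilists' rule

samples = x0 * np.exp(-(mu + sigma * nodes) * T)  # exact model solution per node
mean_est = np.sum(weights * samples) / np.sqrt(2.0 * np.pi)

# Closed-form check: E[exp(-kT)] for Gaussian k.
exact = x0 * np.exp(-mu * T + 0.5 * (sigma * T) ** 2)
assert abs(mean_est - exact) < 1e-10
```

With many parameters the tensor grid of nodes explodes combinatorially; the paper's contribution is to iterate such evaluations subnetwork-by-subnetwork via waveform relaxation so that each subproblem only sees its own few uncertain parameters.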
Uncertainty relation in Schwarzschild spacetime
Feng, Jun; Zhang, Yao-Zhong; Gould, Mark D.; Fan, Heng
2015-04-01
We explore the entropic uncertainty relation in the curved background outside a Schwarzschild black hole, and find that Hawking radiation introduces a nontrivial modification of the uncertainty bound for a particular observer, so it could be witnessed experimentally by a properly designed uncertainty game. We first investigate an uncertainty game between a free-falling observer and his static partner holding a quantum memory initially entangled with the quantum system to be measured. Due to the information loss from Hawking decoherence, we find an inevitable increase of the uncertainty on the outcome of measurements from the viewpoint of the static observer, which depends on the mass of the black hole, the observer's distance from the event horizon, and the mode frequency of the quantum memory. To illustrate the generality of this paradigm, we relate the entropic uncertainty bound to other uncertainty probes, e.g., time-energy uncertainty. In an alternative game between two static players, we show that the quantum information of a qubit can be transferred to the quantum memory through a bath of fluctuating quantum fields outside the black hole. For a particular choice of initial state, we show that Hawking decoherence cannot counteract entanglement generation after the dynamical evolution of the system, which triggers an effectively reduced uncertainty bound that violates the intrinsic limit -log2 c. Numerical estimation for a proper choice of initial state shows that our result is comparable with possible real experiments. Finally, a discussion of the black hole firewall paradox in the context of the entropic uncertainty relation is given.
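The intrinsic limit -log2 c referenced here is the Maassen-Uffink entropic bound; for Pauli X and Z measurements on a qubit the overlap is c = 1/2, so the bound is 1 bit. A minimal flat-space numerical check (the state angle below is an arbitrary example):

```python
import numpy as np

def shannon(p):
    """Shannon entropy in bits of a probability vector."""
    p = p[p > 1e-12]
    return -np.sum(p * np.log2(p))

# Maassen-Uffink: H(X) + H(Z) >= -log2 c, with c = max_ij |<x_i|z_j>|^2 = 1/2
# for the mutually unbiased Pauli X and Z bases, i.e. a bound of 1 bit.
theta = 0.3
psi = np.array([np.cos(theta), np.sin(theta)])   # pure qubit state
p_z = np.abs(psi) ** 2                           # Z-basis outcome probabilities
had = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
p_x = np.abs(had @ psi) ** 2                     # X-basis outcome probabilities

assert shannon(p_x) + shannon(p_z) >= 1.0 - 1e-12
```

The paper's point is that a quantum memory entangled with the system can push the effective bound below -log2 c, and that outside a black hole Hawking decoherence competes with exactly this memory-assisted reduction.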
An uncertainty inventory demonstration - a primary step in uncertainty quantification
Energy Technology Data Exchange (ETDEWEB)
Langenbrunner, James R. [Los Alamos National Laboratory; Booker, Jane M [Los Alamos National Laboratory; Hemez, Francois M [Los Alamos National Laboratory; Salazar, Issac F [Los Alamos National Laboratory; Ross, Timothy J [UNM
2009-01-01
Tools, methods, and theories for assessing and quantifying uncertainties vary by application. Uncertainty quantification tasks have unique desiderata and circumstances. To realistically assess uncertainty requires the engineer/scientist to specify mathematical models, the physical phenomena of interest, and the theory or framework for assessments. For example, Probabilistic Risk Assessment (PRA) specifically identifies uncertainties using probability theory, and therefore PRAs lack formal procedures for quantifying uncertainties that are not probabilistic. The Phenomena Identification and Ranking Technique (PIRT) proceeds by ranking phenomena using scoring criteria that result in linguistic descriptors, such as importance ranked with words, 'High/Medium/Low.' The use of words allows PIRT to be flexible, but the analysis may then be difficult to combine with other uncertainty theories. We propose that a necessary step for the development of a procedure or protocol for uncertainty quantification (UQ) is the application of an Uncertainty Inventory. An Uncertainty Inventory should be considered and performed in the earliest stages of UQ.
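One way to picture such an inventory is as a typed list of uncertainty sources, each tagged with the framework used to assess it, so probabilistic (PRA-style) and linguistic (PIRT-style) entries can coexist. The fields and example entries below are a hypothetical sketch, not the authors' schema:

```python
from dataclasses import dataclass

@dataclass
class UncertaintyItem:
    """One inventory record (illustrative fields, not the report's format)."""
    source: str       # e.g. a parameter, measurement, or model-form uncertainty
    framework: str    # "probabilistic", "linguistic", "fuzzy", ...
    assessment: str   # e.g. "sigma = 0.05" or a PIRT-style "High/Medium/Low"

inventory = [
    UncertaintyItem("drag coefficient", "probabilistic", "sigma = 0.05"),
    UncertaintyItem("turbulence model form", "linguistic", "High"),
]
print(sorted(item.framework for item in inventory))
```

Keeping the framework explicit per item is what lets a later UQ protocol decide which entries can be combined numerically and which only support qualitative ranking.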
Mikulica, Tomáš
2016-01-01
The aim of this master's thesis is to describe various methods for computing the global illumination of a scene, including the Light Propagation Volumes technique. For this method, all three steps of the computation are described in detail: injection, propagation, and rendering. Several custom extensions improving the graphical quality of the method are also proposed. The design and implementation parts focus on the description of the scene, the rendering system, shadow creation, the implementation of the Light Propagation Volumes method, and the proposed extensions. The thesis closes with measurements, compar...
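The injection step named above can be sketched as accumulating each virtual point light's flux into a grid cell as a low-order spherical-harmonics lobe (the first two real SH bands, which is what LPV grids typically store); the grid layout and values here are illustrative:

```python
import numpy as np

def sh_basis(d):
    """First two real spherical-harmonics bands (4 coefficients) evaluated
    for a unit direction d, using the common real-SH constants."""
    x, y, z = d
    return np.array([0.2820948, -0.4886025 * y, 0.4886025 * z, -0.4886025 * x])

def inject(grid, cell, normal, flux):
    """Injection (simplified): add a virtual point light's flux as an SH lobe
    oriented along its surface normal into one cell of the propagation grid."""
    grid[cell] = grid.get(cell, np.zeros(4)) + flux * sh_basis(normal)

grid = {}
inject(grid, (4, 2, 7), np.array([0.0, 0.0, 1.0]), flux=2.0)
inject(grid, (4, 2, 7), np.array([0.0, 1.0, 0.0]), flux=1.0)
print(grid[(4, 2, 7)])
```

The subsequent propagation step then iteratively scatters each cell's SH coefficients to its neighbors, and rendering evaluates the resulting lobes per fragment.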