Analytical Propagation of Uncertainty in Life Cycle Assessment Using Matrix Formulation
DEFF Research Database (Denmark)
Imbeault-Tétreault, Hugues; Jolliet, Olivier; Deschênes, Louise;
2013-01-01
Inventory data and characterization factors in life cycle assessment (LCA) contain considerable uncertainty. The most common method of parameter uncertainty propagation to the impact scores is Monte Carlo simulation, which remains a resource-intensive option—probably one of the reasons why...... uncertainty assessment is not a regular step in LCA. An analytical approach based on Taylor series expansion constitutes an effective means to overcome the drawbacks of the Monte Carlo method. This project aimed to test the approach on a real case study, and the resulting analytical uncertainty was compared...... of the output uncertainty. Moreover, the sensitivity analysis reveals that the uncertainty of the most sensitive input parameters was not initially considered in the case study. The uncertainty analysis of the comparison of two scenarios is a useful means of highlighting the effects of correlation...
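The analytical route described in this abstract can be illustrated with a minimal first-order (Taylor series) propagation on a toy impact score. The model y = a*b + c (an inventory flow times a characterization factor, plus an independent term) and all numbers below are invented for illustration; this is a generic sketch of the technique, not the case study's matrix formulation:

```python
import math
import random

# First-order (Taylor) uncertainty propagation for y = a*b + c.
# Illustrative stand-in for an impact score; numbers are made up.
a, sa = 2.0, 0.2
b, sb = 3.0, 0.3
c, sc = 1.0, 0.1

y = a * b + c
# Partial derivatives: dy/da = b, dy/db = a, dy/dc = 1
var_y = (b * sa) ** 2 + (a * sb) ** 2 + sc ** 2
sd_analytical = math.sqrt(var_y)

# Monte Carlo check with independent normal inputs
random.seed(0)
n = 200_000
samples = [
    random.gauss(a, sa) * random.gauss(b, sb) + random.gauss(c, sc)
    for _ in range(n)
]
mean_mc = sum(samples) / n
sd_mc = math.sqrt(sum((s - mean_mc) ** 2 for s in samples) / (n - 1))

print(f"analytical sd = {sd_analytical:.3f}, Monte Carlo sd = {sd_mc:.3f}")
```

The two estimates agree closely here because the relative uncertainties are modest; the analytical route needs only one model evaluation plus derivatives, which is the efficiency argument the abstract makes.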
DEFF Research Database (Denmark)
Hong, Jinglan; Shaked, Shanna; Rosenbaum, Ralph K.;
2010-01-01
determine a range and a best estimate of a) the squared geometric standard deviation on the ratio of the two scenario scores, "A/B", and b) the degree of confidence in the prediction that the impact of scenario A is lower than B (i.e., the probability that A/B < 1 is, e.g., above 75%). For the aluminum panel, the electricity...... method to analyze the uncertainty propagation of a single scenario, in which case the squared geometric standard deviation of the final output is determined as a function of the model sensitivity to each input parameter and the squared geometric standard deviation of each parameter. We then extend...... and aluminum primary production, as well as the light oil consumption, are the dominant contributors to the uncertainty. The developed approach for scenario comparisons, differentiating between common and independent parameters, leads to results similar to those of a Monte Carlo analysis; for all tested cases...
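The squared geometric standard deviation of a scenario ratio, and why common parameters matter, can be sketched with a small Monte Carlo experiment on lognormal inputs. The shared "common" factor, all spreads, and the scenario structure below are invented for illustration, not taken from the study:

```python
import math
import random

random.seed(1)
n = 100_000

# A shared ("common") lognormal parameter enters both scenarios and
# cancels in the ratio A/B; the independent parameters do not.
def lognorm(mu_ln, sigma_ln):
    return math.exp(random.gauss(mu_ln, sigma_ln))

log_ratios = []
wins = 0
for _ in range(n):
    common = lognorm(0.0, 0.4)       # shared input, e.g. an electricity mix
    a = common * lognorm(0.0, 0.2)   # scenario A's independent part
    b = common * lognorm(0.1, 0.2)   # scenario B's independent part
    r = a / b
    log_ratios.append(math.log(r))
    wins += r < 1.0

mean_ln = sum(log_ratios) / n
sd_ln = math.sqrt(sum((x - mean_ln) ** 2 for x in log_ratios) / (n - 1))
gsd_sq = math.exp(2.0 * sd_ln)       # squared geometric standard deviation
p_a_lower = wins / n

print(f"GSD^2 of A/B = {gsd_sq:.2f}, P(A < B) = {p_a_lower:.2f}")
```

Because the common factor cancels exactly, the spread of A/B reflects only the independent parameters (here, sd of ln(A/B) is about 0.28, not the 0.52 one would get by treating all inputs as independent), which is the point of differentiating common from independent parameters.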
PIV uncertainty propagation
Sciacchitano, Andrea; Wieneke, Bernhard
2016-08-01
This paper discusses the propagation of the instantaneous uncertainty of PIV measurements to statistical and instantaneous quantities of interest derived from the velocity field. The expression of the uncertainty of vorticity, velocity divergence, mean value and Reynolds stresses is derived. It is shown that the uncertainty of vorticity and velocity divergence requires the knowledge of the spatial correlation between the error of the x and y particle image displacement, which depends upon the measurement spatial resolution. The uncertainty of statistical quantities is often dominated by the random uncertainty due to the finite sample size and decreases with the square root of the effective number of independent samples. Monte Carlo simulations are conducted to assess the accuracy of the uncertainty propagation formulae. Furthermore, three experimental assessments are carried out. In the first experiment, a turntable is used to simulate a rigid rotation flow field. The estimated uncertainty of the vorticity is compared with the actual vorticity error root-mean-square, with differences between the two quantities within 5-10% for different interrogation window sizes and overlap factors. A turbulent jet flow is investigated in the second experimental assessment. The reference velocity, which is used to compute the reference value of the instantaneous flow properties of interest, is obtained with an auxiliary PIV system, which features a higher dynamic range than the measurement system. Finally, the uncertainty quantification of statistical quantities is assessed via PIV measurements in a cavity flow. The comparison between estimated uncertainty and actual error demonstrates the accuracy of the proposed uncertainty propagation methodology.
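The claim that the random uncertainty of statistical quantities decreases with the square root of the effective number of independent samples can be checked on synthetic correlated data. The AR(1) model and its parameters below are assumptions chosen for illustration; they are not the PIV-specific formulae of the paper:

```python
import math
import random

random.seed(2)

# For AR(1)-correlated samples with coefficient rho, the uncertainty of
# the sample mean is sigma/sqrt(N_eff), with N_eff ~ N*(1-rho)/(1+rho).
rho, sigma, n, trials = 0.6, 1.0, 2000, 400
innov_sd = sigma * math.sqrt(1.0 - rho * rho)

means = []
for _ in range(trials):
    x = random.gauss(0.0, sigma)      # start at the stationary distribution
    total = 0.0
    for _ in range(n):
        x = rho * x + random.gauss(0.0, innov_sd)
        total += x
    means.append(total / n)

# Observed scatter of the sample mean across realizations (true mean is 0)
observed = math.sqrt(sum(m * m for m in means) / trials)
n_eff = n * (1.0 - rho) / (1.0 + rho)
predicted = sigma / math.sqrt(n_eff)

print(f"observed sd of mean = {observed:.4f}, predicted = {predicted:.4f}")
```

With rho = 0.6 the effective sample size is a quarter of the nominal one, so naively using sigma/sqrt(N) would understate the uncertainty of the mean by a factor of two.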
Uncertainty propagation in nuclear forensics
International Nuclear Information System (INIS)
Uncertainty propagation formulae are presented for age dating in support of nuclear forensics. The age of radioactive material in this context refers to the time elapsed since a particular radionuclide was chemically separated from its decay product(s). The decay of the parent radionuclide and ingrowth of the daughter nuclide are governed by statistical decay laws. Mathematical equations allow calculation of the age of specific nuclear material through the atom ratio between parent and daughter nuclides, or through the activity ratio provided that the daughter nuclide is also unstable. The derivation of the uncertainty formulae of the age may present some difficulty to the user community and so the exact solutions, some approximations, a graphical representation and their interpretation are presented in this work. Typical nuclides of interest are actinides in the context of non-proliferation commitments. The uncertainty analysis is applied to a set of important parent–daughter pairs and the need for more precise half-life data is examined. - Highlights: • Uncertainty propagation formulae for age dating with nuclear chronometers. • Applied to parent–daughter pairs used in nuclear forensics. • Investigated need for better half-life data
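For the simplest chronometer, a parent decaying to a stable daughter that was completely removed at separation, the atom ratio gives t = ln(1 + R)/lambda, and first-order propagation yields the age uncertainty from the measured ratio and the half-life. The sketch below uses the Pu-239/U-235 pair only as a familiar example; the ratio, uncertainties, and half-life precision are invented numbers, not a real forensic data set:

```python
import math

# Age dating from a daughter/parent atom ratio R = N_d/N_p, assuming a
# stable daughter fully removed at separation: R = exp(lambda*t) - 1.
half_life = 24_110.0          # years (Pu-239 -> U-235, illustrative)
u_half_life = 30.0            # assumed half-life uncertainty, years
lam = math.log(2.0) / half_life

R, u_R = 2.0e-3, 2.0e-5       # measured atom ratio and its uncertainty

t = math.log(1.0 + R) / lam   # age since separation

# First-order propagation: partial derivatives of t(R, lambda)
dt_dR = 1.0 / (lam * (1.0 + R))
dt_dlam = -t / lam
u_lam = lam * (u_half_life / half_life)   # relative errors coincide

u_t = math.sqrt((dt_dR * u_R) ** 2 + (dt_dlam * u_lam) ** 2)
print(f"age = {t:.1f} y, uncertainty = {u_t:.2f} y")
```

For young material the ratio term dominates the error budget, while the half-life term grows linearly with age, which is why the abstract's call for more precise half-life data matters most for older samples.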
Stochastic and epistemic uncertainty propagation in LCA
DEFF Research Database (Denmark)
Clavreul, Julie; Guyonnet, Dominique; Tonini, Davide;
2013-01-01
When performing uncertainty propagation, most LCA practitioners choose to represent uncertainties by single probability distributions and to propagate them using stochastic methods. However, the selection of single probability distributions appears often arbitrary when faced with scarce information...
Uncertainty Propagation in an Ecosystem Nutrient Budget.
New aspects and advancements in classical uncertainty propagation methods were used to develop a nutrient budget with associated error for a northern Gulf of Mexico coastal embayment. Uncertainty was calculated for budget terms by propagating the standard error and degrees of freedom...
Uncertainty propagation with functionally correlated quantities
Giordano, Mosè
2016-01-01
Many uncertainty propagation software packages exist, written in different programming languages, but not all of them are able to handle functional correlation between quantities. In this paper we review one strategy to deal with uncertainty propagation of quantities that are functionally correlated, and introduce a new software package offering this feature: the Julia package Measurements.jl. It supports real and complex numbers with uncertainty, arbitrary-precision calculations, and mathematical and linear algebra operations with matrices and arrays.
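The key feature described here, exact handling of functional correlation, can be imitated in a few lines by tracking each quantity's linearized dependence on the independent inputs. The toy class below is only a Python analogue of the idea, not the Measurements.jl API:

```python
import math

# Linear error propagation that records, for every derived value, its
# sensitivity to each independent input. Functionally correlated
# quantities then combine exactly: x - x has zero uncertainty.
class U:
    _next_id = 0

    def __init__(self, value, sigma=0.0, deriv=None):
        self.value = value
        if deriv is None:
            deriv = {}
            if sigma:
                deriv[U._next_id] = sigma   # d(self)/d(input) * sigma
                U._next_id += 1
        self.deriv = deriv

    @property
    def sigma(self):
        return math.sqrt(sum(d * d for d in self.deriv.values()))

    def _combine(self, other, value, da, db):
        # Chain rule: merge sensitivities of both operands
        deriv = {}
        for k, d in self.deriv.items():
            deriv[k] = deriv.get(k, 0.0) + da * d
        for k, d in other.deriv.items():
            deriv[k] = deriv.get(k, 0.0) + db * d
        return U(value, deriv=deriv)

    def __add__(self, other):
        o = other if isinstance(other, U) else U(other)
        return self._combine(o, self.value + o.value, 1.0, 1.0)

    def __sub__(self, other):
        o = other if isinstance(other, U) else U(other)
        return self._combine(o, self.value - o.value, 1.0, -1.0)

    def __mul__(self, other):
        o = other if isinstance(other, U) else U(other)
        return self._combine(o, self.value * o.value, o.value, self.value)

x = U(2.0, 0.1)
y = U(3.0, 0.2)
print((x * y).sigma)   # standard quadrature result for independent inputs
print((x - x).sigma)   # exactly 0: functional correlation preserved
```

A naive propagator that assumes independence at every operation would assign x - x a nonzero uncertainty; carrying the derivatives instead makes correlated expressions come out right automatically.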
Uncertainty propagation within the UNEDF models
Haverinen, T
2016-01-01
The parameters of nuclear energy density functionals have to be adjusted to experimental data. As a result, they carry a certain uncertainty, which then propagates to calculated values of observables. In the present work we quantify the statistical uncertainties of binding energies for three UNEDF Skyrme energy density functionals by taking advantage of the knowledge of the model parameter uncertainties. We find that the uncertainty of the UNEDF models increases rapidly when going towards proton- or neutron-rich nuclei. We also investigate the impact of each model parameter on the total error budget.
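Propagating model-parameter uncertainties to an observable, as done here for binding energies, reduces at first order to the sandwich rule sigma_O^2 = g^T C g, with g the observable's sensitivities and C the parameter covariance. A sketch with invented two-parameter numbers, not actual UNEDF data:

```python
import math

# First-order statistical error propagation from a parameter covariance
# matrix C to an observable O with sensitivity vector g = dO/dp.
g = [1.5, -0.5]                 # illustrative sensitivities dO/dp_i
C = [[0.04, 0.01],              # illustrative parameter covariance
     [0.01, 0.09]]

var_O = sum(g[i] * C[i][j] * g[j]
            for i in range(len(g)) for j in range(len(g)))
sigma_O = math.sqrt(var_O)
print(f"sigma_O = {sigma_O:.4f}")
```

The off-diagonal covariance terms reduce the total variance here because the two sensitivities have opposite signs, which is why correlated parameter fits cannot be propagated as if the parameters were independent.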
The Role of Uncertainty, Awareness, and Trust in Visual Analytics.
Sacha, Dominik; Senaratne, Hansi; Kwon, Bum Chul; Ellis, Geoffrey; Keim, Daniel A
2016-01-01
Visual analytics supports humans in generating knowledge from large and often complex datasets. Evidence is collected, collated and cross-linked with our existing knowledge. In the process, a myriad of analytical and visualisation techniques are employed to generate a visual representation of the data. These often introduce their own uncertainties, in addition to the ones inherent in the data, and these propagated and compounded uncertainties can result in impaired decision making. The user's confidence or trust in the results depends on the extent of the user's awareness of the underlying uncertainties generated on the system side. This paper unpacks the uncertainties that propagate through visual analytics systems, illustrates how humans' perceptual and cognitive biases influence the user's awareness of such uncertainties, and shows how this affects the user's trust building. The knowledge generation model for visual analytics is used to provide a terminology and framework to discuss the consequences of these aspects in knowledge construction and, through examples, machine uncertainty is compared to human trust measures with provenance. Furthermore, guidelines for the design of uncertainty-aware systems are presented that can aid the user in better decision making. PMID:26529704
Dynamic system uncertainty propagation using polynomial chaos
Institute of Scientific and Technical Information of China (English)
Xiong Fenfen; Chen Shishi; Xiong Ying
2014-01-01
The classic polynomial chaos method (PCM), characterized as an intrusive methodology, has been applied to uncertainty propagation (UP) in many dynamic systems. However, the intrusive polynomial chaos method (IPCM) requires tedious modification of the governing equations, which might introduce errors and can be impractical. As an alternative to IPCM, the non-intrusive polynomial chaos method (NIPCM), which avoids such modifications, has been developed. In spite of the frequent application to dynamic problems, almost all existing works on NIPCM for dynamic UP fail to elaborate the implementation process in a straightforward way, which is important to readers who are unfamiliar with the mathematics of polynomial chaos theory. Meanwhile, very few works have compared NIPCM to IPCM in terms of their merits and applicability. Therefore, the mathematical procedures of dynamic UP via both methods, considering parametric and initial-condition uncertainties, are comparatively discussed and studied in the present paper. A comparison of accuracy and efficiency in statistical moment estimation is made by applying the two methods to several dynamic UP problems. The relative merits of both approaches are discussed and summarized. The detailed description and insights gained with the two methods through this work are expected to be helpful to engineering designers in solving dynamic UP problems.
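The non-intrusive idea, evaluating the unmodified model at collocation points and recovering moments by quadrature, can be shown in miniature. The Gauss-Hermite example below with f(xi) = xi^2 is a generic sketch of the technique, not one of the paper's test cases:

```python
import math

# Non-intrusive polynomial chaos at its simplest: treat the model as a
# black box, run it at Gauss-Hermite collocation points for a standard
# normal input, and recover statistical moments by quadrature.
# For f(xi) = xi^2 with xi ~ N(0,1): exact mean 1, exact variance 2.
def model(xi):
    return xi * xi

# 3-point probabilists' Gauss-Hermite rule (exact for polynomials up to
# degree 5 under the standard normal weight)
nodes = [-math.sqrt(3.0), 0.0, math.sqrt(3.0)]
weights = [1.0 / 6.0, 2.0 / 3.0, 1.0 / 6.0]

evals = [model(x) for x in nodes]          # three black-box model runs
mean = sum(w * f for w, f in zip(weights, evals))
second = sum(w * f * f for w, f in zip(weights, evals))
variance = second - mean ** 2

print(f"mean = {mean:.6f}, variance = {variance:.6f}")
```

No governing equation was modified: the model is only sampled, which is exactly the contrast with the intrusive method that the abstract draws.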
Towards a complete propagation of uncertainties in depletion calculations
Energy Technology Data Exchange (ETDEWEB)
Martinez, J.S. [Universidad Politecnica de Madrid (Spain). Dept. of Nuclear Engineering; Gesellschaft fuer Anlagen- und Reaktorsicherheit mbH (GRS), Garching (Germany); Zwermann, W.; Gallner, L.; Puente-Espel, Federico; Velkov, K.; Hannstein, V. [Gesellschaft fuer Anlagen- und Reaktorsicherheit mbH (GRS), Garching (Germany); Cabellos, O. [Universidad Politecnica de Madrid (Spain). Dept. of Nuclear Engineering
2013-07-01
Propagation of nuclear data uncertainties to calculated values is interesting for design purposes and library evaluation. XSUSA, developed at GRS, propagates cross-section uncertainties through nuclear calculations. In depletion simulations, fission yields and decay data are also involved and are a possible source of uncertainty that should be taken into account. We have developed tools to generate varied fission-yield and decay libraries and to propagate uncertainties through depletion in order to complete the XSUSA uncertainty assessment capabilities. A generic test to probe the methodology is defined and discussed. (orig.)
Gluon Propagator in Fractional Analytic Perturbation Theory
Allendes, Pedro; Cvetič, Gorazd
2014-01-01
We consider the gluon propagator in the Landau gauge at low spacelike momenta and with the dressing function $Z(Q^2)$ at the two-loop order. We incorporate the nonperturbative effects by making the (noninteger) powers of the QCD coupling in the dressing function $Z(Q^2)$ analytic (holomorphic) via the Fractional Analytic Perturbation Theory (FAPT) model, and simultaneously introducing the gluon dynamical mass in the propagator as motivated by the previous analyses of the Dyson-Schwinger equations. The obtained propagator has behavior compatible with the unquenched lattice data ($N_f = 2+1$) at low spacelike momenta $0.4~\mathrm{GeV} < Q \lesssim 10~\mathrm{GeV}$. We conclude that the removal of the unphysical Landau singularities of the powers of the coupling via the (F)APT prescription, in conjunction with the introduction of the dynamical mass $M \approx 0.62$ GeV of the gluon, leads to an acceptable behavior of the propagator in the infrared regime.
Quantifying uncertainty in nuclear analytical measurements
International Nuclear Information System (INIS)
The lack of international consensus on the expression of uncertainty in measurements was recognised by the late 1970s and led, after the issuance of a series of rather generic recommendations, to the publication of a general publication, known as GUM, the Guide to the Expression of Uncertainty in Measurement. This publication, issued in 1993, was based on co-operation over several years by the Bureau International des Poids et Mesures, the International Electrotechnical Commission, the International Federation of Clinical Chemistry, the International Organization for Standardization (ISO), the International Union of Pure and Applied Chemistry, the International Union of Pure and Applied Physics and the Organisation internationale de metrologie legale. The purpose was to promote full information on how uncertainty statements are arrived at and to provide a basis for harmonized reporting and the international comparison of measurement results. The need to provide more specific guidance to different measurement disciplines was soon recognized and the field of analytical chemistry was addressed by EURACHEM in 1995 in the first edition of a guidance report on Quantifying Uncertainty in Analytical Measurements, produced by a group of experts from the field. That publication translated the general concepts of the GUM into specific applications for analytical laboratories and illustrated the principles with a series of selected examples as a didactic tool. Based on feedback from the actual practice, the EURACHEM publication was extensively reviewed in 1997-1999 under the auspices of the Co-operation on International Traceability in Analytical Chemistry (CITAC), and a second edition was published in 2000. Still, except for a single example on the measurement of radioactivity in GUM, the field of nuclear and radiochemical measurements was not covered. The explicit requirement of ISO standard 17025:1999, General Requirements for the Competence of Testing and Calibration Laboratories...
Algorithms for propagating uncertainty across heterogeneous domains
Energy Technology Data Exchange (ETDEWEB)
Cho, Heyrim; Yang, Xiu; Venturi, D.; Karniadakis, George E.
2015-12-30
We address an important research area in stochastic multi-scale modeling, namely the propagation of uncertainty across heterogeneous domains characterized by partially correlated processes with vastly different correlation lengths. This class of problems arises very often when computing stochastic PDEs and particle models with stochastic/stochastic domain interaction but also with stochastic/deterministic coupling. The domains may be fully embedded, adjacent or partially overlapping. The fundamental open question we address is the construction of proper transmission boundary conditions that preserve global statistical properties of the solution across different subdomains. Often, the codes that model different parts of the domains are black-box and hence a domain decomposition technique is required. No rigorous theory or even effective empirical algorithms have yet been developed for this purpose, although interfaces defined in terms of functionals of random fields (e.g., multi-point cumulants) can overcome the computationally prohibitive problem of preserving sample-path continuity across domains. The key idea of the different methods we propose relies on combining local reduced-order representations of random fields with multi-level domain decomposition. Specifically, we propose two new algorithms: The first one enforces the continuity of the conditional mean and variance of the solution across adjacent subdomains by using Schwarz iterations. The second algorithm is based on PDE-constrained multi-objective optimization, and it allows us to set more general interface conditions. The effectiveness of these new algorithms is demonstrated in numerical examples involving elliptic problems with random diffusion coefficients, stochastically advected scalar fields, and nonlinear advection-reaction problems with random reaction rates.
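The Schwarz iteration underlying the first algorithm can be shown in a deterministic miniature: two overlapping 1D subdomains exchanging interface values until they agree. This sketch solves only the Laplace equation (where each local solve is linear and can be written in closed form) and omits the stochastic conditional mean/variance matching that is the paper's actual contribution:

```python
# Alternating Schwarz iteration for -u'' = 0 on [0,1] with u(0)=0,
# u(1)=1 (exact solution u = x), split into two overlapping subdomains.
a1, b1 = 0.0, 0.6          # subdomain 1, interface at x = 0.6
a2, b2 = 0.4, 1.0          # subdomain 2, interface at x = 0.4

g1 = 0.0                   # current guess for u(0.6)
for it in range(60):
    # Solve on [0, 0.6] with u(0)=0, u(0.6)=g1 (linear); sample at 0.4
    g2 = g1 * (a2 - a1) / (b1 - a1)
    # Solve on [0.4, 1] with u(0.4)=g2, u(1)=1 (linear); sample at 0.6
    g1_new = g2 + (1.0 - g2) * (b1 - a2) / (b2 - a2)
    done = abs(g1_new - g1) < 1e-12
    g1 = g1_new
    if done:
        break

print(f"u(0.6) ~ {g1:.6f} after {it + 1} iterations (exact 0.6)")
```

The interface value converges geometrically to the exact 0.6, with a rate set by the overlap size; in the stochastic setting, statistics of the local solutions would be exchanged at the interfaces instead of point values.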
Propagation of nuclear data Uncertainties for PWR core analysis
Energy Technology Data Exchange (ETDEWEB)
Cabellos, O.; Castro, E.; Ahnert, C.; Holgado, C. [Dept. of Nuclear Engineering, Universidad Politecnica de Madrid, Madrid (Spain)
2014-06-15
An uncertainty propagation methodology based on the Monte Carlo method is applied to PWR nuclear design analysis to assess the impact of nuclear data uncertainties. The importance of the nuclear data uncertainties for {sup 235,238}U, {sup 239}Pu, and the thermal scattering library for hydrogen in water is analyzed. This uncertainty analysis is compared with the design and acceptance criteria to assure the adequacy of bounding estimates in safety margins.
Remaining Useful Life Estimation in Prognosis: An Uncertainty Propagation Problem
Sankararaman, Shankar; Goebel, Kai
2013-01-01
The estimation of remaining useful life is significant in the context of prognostics and health monitoring, and the prediction of remaining useful life is essential for online operations and decision-making. However, it is challenging to accurately predict the remaining useful life in practical aerospace applications due to the presence of various uncertainties that affect prognostic calculations, and in turn, render the remaining useful life prediction uncertain. It is challenging to identify and characterize the various sources of uncertainty in prognosis, understand how each of these sources of uncertainty affect the uncertainty in the remaining useful life prediction, and thereby compute the overall uncertainty in the remaining useful life prediction. In order to achieve these goals, this paper proposes that the task of estimating the remaining useful life must be approached as an uncertainty propagation problem. In this context, uncertainty propagation methods which are available in the literature are reviewed, and their applicability to prognostics and health monitoring are discussed.
Uncertainties in Atomic Data and Their Propagation Through Spectral Models. I
Bautista, Manuel A.; Quinet, Pascal; Dunn, Jay; Kallman, Theodore R.; Gull, Theodore R.; Mendoza, Claudio
2013-01-01
We present a method for computing uncertainties in spectral models, i.e. level populations, line emissivities, and emission line ratios, based upon the propagation of uncertainties originating from atomic data. We provide analytic expressions, in the form of linear sets of algebraic equations, for the coupled uncertainties among all levels. These equations can be solved efficiently for any set of physical conditions and uncertainties in the atomic data. We illustrate our method applied to spectral models of O III and Fe II and discuss the impact of the uncertainties on atomic systems under different physical conditions. As to intrinsic uncertainties in theoretical atomic data, we propose that these uncertainties can be estimated from the dispersion in the results from various independent calculations. This technique provides excellent results for the uncertainties in A-values of forbidden transitions in [Fe II].
UNCERTAINTIES IN ATOMIC DATA AND THEIR PROPAGATION THROUGH SPECTRAL MODELS. I
International Nuclear Information System (INIS)
UNCERTAINTIES IN ATOMIC DATA AND THEIR PROPAGATION THROUGH SPECTRAL MODELS. I
Energy Technology Data Exchange (ETDEWEB)
Bautista, M. A.; Fivet, V. [Department of Physics, Western Michigan University, Kalamazoo, MI 49008 (United States); Quinet, P. [Astrophysique et Spectroscopie, Universite de Mons-UMONS, B-7000 Mons (Belgium); Dunn, J. [Physical Science Department, Georgia Perimeter College, Dunwoody, GA 30338 (United States); Gull, T. R. [Code 667, NASA Goddard Space Flight Center, Greenbelt, MD 20771 (United States); Kallman, T. R. [Code 662, NASA Goddard Space Flight Center, Greenbelt, MD 20771 (United States); Mendoza, C., E-mail: manuel.bautista@wmich.edu [Centro de Fisica, Instituto Venezolano de Investigaciones Cientificas (IVIC), P.O. Box 20632, Caracas 1020A (Venezuela, Bolivarian Republic of)
2013-06-10
Uncertainty propagation in locally damped dynamic systems
Cortes Mochales, Lluis; Ferguson, Neil S.; Bhaskar, Atul
2012-01-01
In the field of stochastic structural dynamics, perturbation methods are widely used to estimate the response statistics of uncertain systems. When large built-up systems are to be modelled in the mid-frequency range, perturbation methods are often combined with finite element model reduction techniques in order to considerably reduce the computation time of the response. Existing methods based on Component Mode Synthesis (CMS) allow the uncertainties in the system parameters to be treated ...
Stochastic and epistemic uncertainty propagation in LCA
DEFF Research Database (Denmark)
Clavreul, Julie; Guyonnet, Dominique; Tonini, Davide;
2013-01-01
), because there is no way of distinguishing, in the variability of the calculated result, what comes from true randomness and what comes from incomplete information. The method presented offers the advantage of putting the focus on the information rather than deciding a priori how to represent it...... bounds might motivate the decision-maker to increase the information base regarding certain critical parameters, in order to reduce the uncertainty. Such a decision could not ensue from a purely probabilistic calculation based on subjective (postulated) distributions (despite lack of information...
New challenges on uncertainty propagation assessment of flood risk analysis
Martins, Luciano; Aroca-Jiménez, Estefanía; Bodoque, José M.; Díez-Herrero, Andrés
2016-04-01
Natural hazards such as floods cause considerable damage to human life and to material and functional assets every year around the world. Risk assessment procedures carry a set of uncertainties, mainly of two types: natural, derived from the stochastic character inherent in flood process dynamics; and epistemic, associated with lack of knowledge or with inadequate procedures employed in the study of these processes. There is abundant scientific and technical literature on uncertainty estimation in each step of flood risk analysis (e.g. rainfall estimates, hydraulic modelling variables), but very little experience with the propagation of uncertainties along the flood risk assessment. Epistemic uncertainties are therefore the main goal of this work: in particular, to understand the extent of the propagation of uncertainties throughout the process, from inundation studies to risk analysis, and how much a proper flood risk analysis can vary as a result. Methodologies such as Polynomial Chaos Theory (PCT), the Method of Moments or Monte Carlo are used to evaluate different sources of error, such as data records (precipitation gauges, flow gauges...), hydrologic and hydraulic modelling (inundation estimation) and socio-demographic data (damage estimation), in order to evaluate the uncertainty propagation (UP) involved in design flood risk estimation, in both numerical and cartographic expression. In order to account for the total uncertainty and understand which factors contribute most to the final uncertainty, we used Polynomial Chaos Theory (PCT). It represents an interesting way to handle the inclusion of uncertainty in the modelling and simulation process: PCT allows the development of a probabilistic model of the system in a deterministic setting, using random variables and polynomials to handle the effects of uncertainty. Results obtained by applying these methods are more robust than those of traditional analyses.
GCR environmental models II: Uncertainty propagation methods for GCR environments
Slaba, Tony C.; Blattnig, Steve R.
2014-04-01
In order to assess the astronaut exposure received within vehicles or habitats, accurate models of the ambient galactic cosmic ray (GCR) environment are required. Many models have been developed and compared to measurements, with uncertainty estimates often stated to be within 15%. However, intercode comparisons can lead to differences in effective dose exceeding 50%. This is the second of three papers focused on resolving this discrepancy. The first paper showed that GCR heavy ions with boundary energies below 500 MeV/n induce less than 5% of the total effective dose behind shielding. Yet, due to limitations on available data, model development and validation are heavily influenced by comparisons to measurements taken below 500 MeV/n. In the current work, the focus is on developing an efficient method for propagating uncertainties in the ambient GCR environment to effective dose values behind shielding. A simple approach utilizing sensitivity results from the first paper is described and shown to be equivalent to a computationally expensive Monte Carlo uncertainty propagation. The simple approach allows a full uncertainty propagation to be performed once GCR uncertainty distributions are established. This rapid analysis capability may be integrated into broader probabilistic radiation shielding analysis and also allows error bars (representing boundary condition uncertainty) to be placed around point estimates of effective dose.
Investigation of Free Particle Propagator with Generalized Uncertainty Principle
Ghobakhloo, F
2016-01-01
We consider the Schrödinger equation with a generalized uncertainty principle for a free particle. We then transform the problem into a second-order ordinary differential equation and thereby obtain the corresponding propagator. The result of ordinary quantum mechanics is recovered for vanishing minimal length parameter.
Uncertainty propagation in nerve impulses through the action potential mechanism
Torres Valderrama, A.; Witteveen, J.A.S.; Navarro Jimenez, M.I.; Blom, J.G.
2015-01-01
We investigate the propagation of probabilistic uncertainty through the action potential mechanism in nerve cells. Using the Hodgkin-Huxley (H-H) model and Stochastic Collocation on Sparse Grids, we obtain an accurate probabilistic interpretation of the deterministic dynamics of the transmembrane potential...
HPC Analytics Support. Requirements for Uncertainty Quantification Benchmarks
Energy Technology Data Exchange (ETDEWEB)
Paulson, Patrick R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Purohit, Sumit [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Rodriguez, Luke R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)
2015-05-01
This report outlines techniques for extending benchmark generation products so that they support uncertainty quantification by benchmarked systems. We describe how uncertainty quantification requirements can be presented to candidate analytical tools supporting SPARQL. We describe benchmark data sets for evaluating uncertainty quantification, as well as an approach for using our benchmark generator to produce such data sets.
Uncertainty propagation from raw data to final results
International Nuclear Information System (INIS)
Reduction of data from raw numbers (counts per channel) to physically meaningful quantities (such as cross sections) is in itself a complicated procedure. Propagation of experimental uncertainties through that reduction process has sometimes been perceived as even more difficult, if not impossible. At the Oak Ridge Electron Linear Accelerator, a computer code ALEX has been developed to assist in the propagation process. The purpose of ALEX is to carefully and correctly propagate all experimental uncertainties through the entire reduction procedure, yielding the complete covariance matrix for the reduced data, while requiring little additional input from the experimentalist beyond that which is needed for the data reduction itself. The theoretical method used in ALEX is described, with emphasis on transmission measurements. Application to the natural iron and natural nickel measurements of D.C. Larson is shown
On analytic formulas of Feynman propagators in position space
Institute of Scientific and Technical Information of China (English)
ZHANG Hong-Hao; FENG Kai-Xi; QIU Si-Wei; ZHAO An; LI Xue-Song
2010-01-01
We correct an inaccurate result of previous work on the Feynman propagator in position space of a free Dirac field in (3+1)-dimensional spacetime; we derive the generalized analytic formulas of both the scalar Feynman propagator and the spinor Feynman propagator in position space in arbitrary (D+1)-dimensional spacetime; and we further find a recurrence relation among the spinor Feynman propagator in (D+1)-dimensional spacetime and the scalar Feynman propagators in (D+1)-, (D-1)- and (D+3)-dimensional spacetimes.
Uncertainty propagation within an integrated model of climate change
International Nuclear Information System (INIS)
This paper demonstrates a methodology whereby stochastic dynamical systems are used to investigate a climate model's inherent capacity to propagate uncertainty over time. The usefulness of the methodology stems from its ability to identify the variables that account for most of the model's uncertainty. We accomplish this by reformulating a deterministic dynamical system capturing the structure of an integrated climate model into a stochastic dynamical system. Then, via the use of computational techniques of stochastic differential equations accurate uncertainty estimates of the model's variables are determined. The uncertainty is measured in terms of properties of probability distributions of the state variables. The starting characteristics of the uncertainty of the initial state and the random fluctuations are derived from estimates given in the literature. Two aspects of uncertainty are investigated: (1) the dependence on environmental scenario - which is determined by technological development and actions towards environmental protection; and (2) the dependence on the magnitude of the initial state measurement error determined by the progress of climate change and the total magnitude of the system's random fluctuations as well as by our understanding of the climate system. Uncertainty of most of the system's variables is found to be nearly independent of the environmental scenario for the time period under consideration (1990-2100). Even conservative uncertainty estimates result in scenario overlap of several decades during which the consequences of any actions affecting the environment could be very difficult to identify with a sufficient degree of confidence. This fact may have fundamental consequences on the level of social acceptance of any restrictive measures against accelerating global warming. In general, the stochastic fluctuations contribute more to the uncertainty than the initial state measurements. The variables coupling all major climate elements
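The reformulation of a deterministic model into a stochastic dynamical system, and the tracking of distributional properties over time, can be illustrated on the smallest possible example: an ensemble Euler-Maruyama integration of an Ornstein-Uhlenbeck process. All coefficients are invented for illustration and have nothing to do with the climate model itself:

```python
import math
import random

random.seed(3)

# Ensemble propagation of uncertainty through an SDE (Euler-Maruyama on
# an Ornstein-Uhlenbeck process): initial-state uncertainty relaxes away
# while the random forcing pumps variance in.
theta, sigma = 1.0, 0.5           # relaxation rate and noise amplitude
dt, steps, n = 0.01, 200, 20_000  # integrate an ensemble to t = 2
var0 = 0.2                        # initial-state variance

xs = [random.gauss(0.0, math.sqrt(var0)) for _ in range(n)]
for _ in range(steps):
    xs = [x - theta * x * dt + sigma * math.sqrt(dt) * random.gauss(0, 1)
          for x in xs]

mean = sum(xs) / n
var_mc = sum((x - mean) ** 2 for x in xs) / (n - 1)

# Exact variance of the OU process for comparison
t = steps * dt
var_exact = var0 * math.exp(-2 * theta * t) \
    + sigma ** 2 / (2 * theta) * (1 - math.exp(-2 * theta * t))
print(f"ensemble variance = {var_mc:.4f}, analytic = {var_exact:.4f}")
```

By t = 2 the memory of the initial measurement error has largely decayed and the variance is dominated by the accumulated random fluctuations, mirroring the abstract's finding that the stochastic fluctuations contribute more to the uncertainty than the initial state.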
Propagation of Uncertainty in Rigid Body Attitude Flows
Lee, Taeyoung; Chaturvedi, Nalin A.; Sanyal, Amit K.; Leok, Melvin; McClamroch, N. Harris
2007-01-01
Motivated by attitude control and attitude estimation problems for a rigid body, computational methods are proposed to propagate uncertainties in the angular velocity and the attitude. The nonlinear attitude flow is determined by Euler-Poincaré equations that describe the rotational dynamics of the rigid body acting under the influence of an attitude-dependent potential and by a reconstruction equation that describes the kinematics expressed in terms of an orthogonal matrix representing the...
Analysis of uncertainty propagation in nuclear fuel cycle scenarios
International Nuclear Information System (INIS)
Nuclear scenario studies model the nuclear fleet over a given period. They enable the comparison of different options for the reactor fleet evolution and for the management of future fuel cycle materials, from mining to disposal, based on criteria such as installed capacity per reactor technology and mass inventories and flows in the fuel cycle and in the waste. Uncertainties associated with nuclear data and scenario parameters (fuel, reactor and facility characteristics) propagate along the isotopic chains in depletion calculations and throughout the scenario history, which reduces the precision of the results. The aim of this work is to develop, implement and use a stochastic uncertainty propagation methodology adapted to scenario studies. The method chosen is based on the development of depletion computation surrogate models, which reduce the computation time of scenario studies and whose parameters include perturbations of the depletion model, and on the construction of an equivalence model which takes cross-section perturbations into account when computing fresh fuel enrichment. The uncertainty propagation methodology is then applied to different scenarios of interest, considering different evolution options for the French PWR fleet with SFR deployment. (author)
Uncertainty and Sensitivity Analyses of Duct Propagation Models
Nark, Douglas M.; Watson, Willie R.; Jones, Michael G.
2008-01-01
This paper presents results of uncertainty and sensitivity analyses conducted to assess the relative merits of three duct propagation codes. Results from this study are intended to support identification of a "working envelope" within which to use the various approaches underlying these propagation codes. This investigation considers a segmented liner configuration that models the NASA Langley Grazing Incidence Tube, for which a large set of measured data was available. For the uncertainty analysis, the selected input parameters (source sound pressure level, average Mach number, liner impedance, exit impedance, static pressure and static temperature) are randomly varied over a range of values. Uncertainty limits (95% confidence levels) are computed for the predicted values from each code, and are compared with the corresponding 95% confidence intervals in the measured data. Generally, the mean values of the predicted attenuation are observed to track the mean values of the measured attenuation quite well and predicted confidence intervals tend to be larger in the presence of mean flow. A two-level, six factor sensitivity study is also conducted in which the six inputs are varied one at a time to assess their effect on the predicted attenuation. As expected, the results demonstrate the liner resistance and reactance to be the most important input parameters. They also indicate the exit impedance is a significant contributor to uncertainty in the predicted attenuation.
Uncertainty Quantification and Propagation in Nuclear Density Functional Theory
Energy Technology Data Exchange (ETDEWEB)
Schunck, N; McDonnell, J D; Higdon, D; Sarich, J; Wild, S M
2015-03-17
Nuclear density functional theory (DFT) is one of the main theoretical tools used to study the properties of heavy and superheavy elements, or to describe the structure of nuclei far from stability. While on-going efforts seek to better root nuclear DFT in the theory of nuclear forces, energy functionals remain semi-phenomenological constructions that depend on a set of parameters adjusted to experimental data in finite nuclei. In this paper, we review recent efforts to quantify the related uncertainties, and propagate them to model predictions. In particular, we cover the topics of parameter estimation for inverse problems, statistical analysis of model uncertainties and Bayesian inference methods. Illustrative examples are taken from the literature.
Uncertainty quantification and propagation in nuclear density functional theory
Energy Technology Data Exchange (ETDEWEB)
Schunck, N. [Lawrence Livermore National Laboratory, Nuclear and Chemical Science Division, Livermore, CA (United States); McDonnell, J.D. [Lawrence Livermore National Laboratory, Nuclear and Chemical Science Division, Livermore, CA (United States); Francis Marion University, Department of Physics and Astronomy, Florence, SC (United States); Higdon, D. [Los Alamos National Laboratory, Los Alamos, NM (United States); Sarich, J.; Wild, S.M. [Argonne National Laboratory, Mathematics and Computer Science Division, Argonne, IL (United States)
2015-12-15
Nuclear density functional theory (DFT) is one of the main theoretical tools used to study the properties of heavy and superheavy elements, or to describe the structure of nuclei far from stability. While on-going efforts seek to better root nuclear DFT in the theory of nuclear forces (see Duguet et al., this Topical Issue), energy functionals remain semi-phenomenological constructions that depend on a set of parameters adjusted to experimental data in finite nuclei. In this paper, we review recent efforts to quantify the related uncertainties, and propagate them to model predictions. In particular, we cover the topics of parameter estimation for inverse problems, statistical analysis of model uncertainties and Bayesian inference methods. Illustrative examples are taken from the literature. (orig.)
Analytic orbit propagation for transiting circumbinary planets
Georgakarakos, Nikolaos
2015-01-01
The analytical framework presented here fully describes the motion of coplanar systems consisting of a stellar binary and a planet orbiting both stars, on orbital as well as secular timescales. Perturbations of the Runge-Lenz vector are used to derive the short-period evolution of the system, while octupole secular theory is applied to describe its long-term behaviour. A post-Newtonian correction on the stellar orbit is included. The planetary orbit is initially circular and the theory developed here assumes that the planetary eccentricity remains relatively small (e_2<0.2). Our model is tested against results from numerical integrations of the full equations of motion and is then applied to investigate the dynamical history of some of the circumbinary planetary systems discovered by NASA's Kepler satellite. Our results suggest that the formation history of the systems Kepler-34 and Kepler-413 has most likely been different from that of Kepler-16, Kepler-35, Kepler-38 and Kepler-64, since the observed plan...
Propagation of radar rainfall uncertainty in urban flood simulations
Liguori, Sara; Rico-Ramirez, Miguel
2013-04-01
This work discusses the results of the implementation of a novel probabilistic system designed to improve ensemble sewer flow predictions for the drainage network of a small urban area in the North of England. The probabilistic system has been developed to model the uncertainty associated with radar rainfall estimates and propagate it through radar-based ensemble sewer flow predictions. The assessment of this system aims to outline the benefits of addressing the uncertainty associated with radar rainfall estimates in a probabilistic framework, to be potentially implemented in the real-time management of the sewer network in the study area. Radar rainfall estimates are affected by uncertainty due to various factors [1-3], and quality control and correction techniques have been developed in order to improve their accuracy. However, the hydrological use of radar rainfall estimates and forecasts remains challenging. A significant effort has been devoted by the international research community to the assessment of uncertainty propagation through probabilistic hydro-meteorological forecast systems [4-5], and various approaches have been implemented for the purpose of characterizing the uncertainty in radar rainfall estimates and forecasts [6-11]. A radar-based ensemble stochastic approach, similar to the one implemented for use in the Southern Alps by the REAL system [6], has been developed for the purpose of this work. An ensemble generator has been calibrated on the basis of the spatial-temporal characteristics of the residual error in radar estimates, assessed with reference to rainfall records from around 200 rain gauges available for the year 2007, previously post-processed and corrected by the UK Met Office [12-13]. Each ensemble member is determined by summing a perturbation field to the unperturbed radar rainfall field. The perturbations are generated by imposing the radar error spatial and temporal correlation structure to purely stochastic fields. A
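The ensemble-generation step described in this abstract can be sketched as follows: a spatially correlated perturbation field is added to the unperturbed radar field to produce each ensemble member. The grid size, exponential correlation model and parameters below are illustrative assumptions, not the calibrated error model of the study.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy unperturbed radar rainfall field (mm/h) on a small grid;
# a uniform 5 mm/h value stands in for a real radar scan.
radar_field = np.full((10, 10), 5.0)

# Exponential spatial correlation model for the residual radar error,
# a stand-in for the structure fitted to rain-gauge comparisons.
coords = np.array([(i, j) for i in range(10) for j in range(10)], dtype=float)
dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
corr_length = 3.0
cov = np.exp(-dist / corr_length)

# Impose the correlation on purely stochastic (white-noise) fields via the
# Cholesky factor; each perturbation added to the radar field gives one member.
L = np.linalg.cholesky(cov + 1e-10 * np.eye(cov.shape[0]))
n_members = 20
ensemble = np.array([
    radar_field + (L @ rng.standard_normal(cov.shape[0])).reshape(10, 10)
    for _ in range(n_members)
])
```

The resulting ensemble can then be fed through a sewer-flow model to produce probabilistic flow predictions.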
Assessing and propagating uncertainty in model inputs in corsim
Energy Technology Data Exchange (ETDEWEB)
Molina, G.; Bayarri, M. J.; Berger, J. O.
2001-07-01
CORSIM is a large simulator for vehicular traffic, and is being studied with respect to its ability to successfully model and predict the behavior of traffic in a 36-block section of Chicago. Inputs to the simulator include information about street configuration, driver behavior, traffic light timing, turning probabilities at each corner and distributions of traffic ingress into the system. This work is described in more detail in the article Fast Simulators for Assessment and Propagation of Model Uncertainty, also in these proceedings. The focus of this conference poster is on the computational aspects of this problem. In particular, we address the description of the full conditional distributions needed for implementation of the MCMC algorithm, including how the constraints can be incorporated; details concerning the run time and convergence of the MCMC algorithm; and utilisation of the MCMC output for prediction and uncertainty analysis concerning the CORSIM computer model. As this last is the ultimate goal, it is worth emphasizing that the incorporation of all uncertainty concerning inputs can significantly affect the model predictions. (Author)
A new analytical framework for tidal propagation in estuaries
Cai, H.
2014-01-01
The ultimate aim of this thesis is to enhance our understanding of tidal wave propagation in convergent alluvial estuaries (of infinite length). In the process, a new analytical model has been developed as a function of externally defined dimensionless parameters describing friction and channel convergence...
Ultrashort Optical Pulse Propagation in terms of Analytic Signal
Directory of Open Access Journals (Sweden)
Sh. Amiranashvili
2011-01-01
We demonstrate that ultrashort optical pulses propagating in a nonlinear dispersive medium are naturally described through incorporation of the analytic signal for the electric field. To this end a second-order nonlinear wave equation is first simplified using a unidirectional approximation. Then the analytic signal is introduced, and all nonresonant nonlinear terms are eliminated. The derived propagation equation accounts for arbitrary dispersion, resonant four-wave mixing processes, weak absorption, and arbitrary pulse duration. The model applies to the complex electric field and is independent of the slowly varying envelope approximation. Still, the derived propagation equation possesses the universal structure of the generalized nonlinear Schrödinger equation (NSE). In particular, it can be solved numerically with only small changes to the standard split-step solver or to more complicated spectral algorithms for the NSE. We present exemplary numerical solutions describing supercontinuum generation with an ultrashort optical pulse.
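Because the derived equation retains the NSE structure, a standard split-step Fourier solver applies. Below is a minimal sketch for the plain normalized NLSE, i u_z + (1/2) u_tt + |u|^2 u = 0, not the generalized equation of the paper (arbitrary dispersion and four-wave mixing terms are omitted); it propagates a fundamental soliton, whose peak should be preserved.

```python
import numpy as np

# Grid and spectral frequencies for a periodic time window.
n, L = 256, 40.0
t = np.linspace(-L / 2, L / 2, n, endpoint=False)
w = 2 * np.pi * np.fft.fftfreq(n, d=L / n)

u = 1.0 / np.cosh(t)                          # fundamental soliton sech(t)
dz, steps = 1e-3, 2000                        # propagate to z = 2
half_disp = np.exp(-0.25j * w ** 2 * dz)      # dispersion over dz/2 in Fourier space

# Strang (symmetric) split-step: half dispersion, full nonlinearity, half dispersion.
for _ in range(steps):
    u = np.fft.ifft(half_disp * np.fft.fft(u))
    u *= np.exp(1j * np.abs(u) ** 2 * dz)     # nonlinear phase rotation
    u = np.fft.ifft(half_disp * np.fft.fft(u))

peak = float(np.max(np.abs(u)))               # soliton peak should stay near 1
```

The same loop structure carries over to generalized NSE-type equations by replacing the quadratic dispersion phase with an arbitrary dispersion profile.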
Analytic structure of QCD propagators in Minkowski space
Siringo, Fabio
2016-01-01
Analytical functions for the propagators of QCD, including a set of chiral quarks, are derived by a one-loop massive expansion in the Landau gauge, deep in the infrared. By analytic continuation, the spectral functions are studied in Minkowski space, yielding a direct proof of positivity violation and confinement from first principles. The dynamical breaking of chiral symmetry is described on the same footing as gluon mass generation, providing a unified picture. While dealing with the exact Lagrangian, the expansion is based on massive free-particle propagators, is safe in the infrared and is equivalent to standard perturbation theory in the UV. By dimensional regularization, all diverging mass terms cancel exactly without including mass counterterms that would spoil the gauge and chiral symmetry of the Lagrangian. Universal scaling properties are predicted for the inverse dressing functions and shown to be satisfied by the lattice data. Complex conjugated poles are found for the gluon propagator, in agre...
Risk classification and uncertainty propagation for virtual water distribution systems
International Nuclear Information System (INIS)
While the secrecy of real water distribution system data is crucial, it poses difficulty for research as results cannot be publicized. This data includes topological layouts of pipe networks, pump operation schedules, and water demands. Therefore, a library of virtual water distribution systems can be an important research tool for comparative development of analytical methods. A virtual city, 'Micropolis', has been developed, including a comprehensive water distribution system, as a first entry into such a library. This virtual city of 5000 residents is fully described in both geographic information systems (GIS) and EPANet hydraulic model frameworks. A risk classification scheme and Monte Carlo analysis are employed for an attempted water supply contamination attack. Model inputs to be considered include uncertainties in: daily water demand, seasonal demand, initial storage tank levels, the time of day a contamination event is initiated, duration of contamination event, and contaminant quantity. Findings show that reasonable uncertainties in model inputs produce high variability in exposure levels. It is also shown that exposure level distributions experience noticeable sensitivities to population clusters within the contaminant spread area. High uncertainties in exposure patterns lead to greater resources needed for more effective mitigation strategies.
Uncertainty propagation in orbital mechanics via tensor decomposition
Sun, Yifei; Kumar, Mrinal
2016-03-01
Uncertainty forecasting in orbital mechanics is an essential but difficult task, primarily because the underlying Fokker-Planck equation (FPE) is defined on a relatively high dimensional (6-D) state-space and is driven by the nonlinear perturbed Keplerian dynamics. In addition, an enormously large solution domain is required for numerical solution of this FPE (e.g. encompassing the entire orbit in the x-y-z subspace), of which the state probability density function (pdf) occupies a tiny fraction at any given time. This coupling of large size, high dimensionality and nonlinearity makes for a formidable computational task, and has caused the FPE for orbital uncertainty propagation to remain an unsolved problem. To the best of the authors' knowledge, this paper presents the first successful direct solution of the FPE for perturbed Keplerian mechanics. To tackle the dimensionality issue, the time-varying state pdf is approximated in the CANDECOMP/PARAFAC decomposition tensor form where all the six spatial dimensions as well as the time dimension are separated from one another. The pdf approximation for all times is obtained simultaneously via the alternating least squares algorithm. Chebyshev spectral differentiation is employed for discretization on account of its spectral ("super-fast") convergence rate. To facilitate the tensor decomposition and control the solution domain size, system dynamics is expressed using spherical coordinates in a noninertial reference frame. Numerical results obtained on a regular personal computer are compared with Monte Carlo simulations.
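The Chebyshev spectral differentiation ingredient can be illustrated with the standard differentiation-matrix construction from the spectral-methods literature (this is the well-known textbook recipe, not code from the paper); the "super-fast" convergence shows up as near machine-precision derivatives for smooth functions on coarse grids.

```python
import numpy as np

def cheb(n):
    """Chebyshev differentiation matrix and grid on n+1 Gauss-Lobatto points."""
    if n == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(n + 1) / n)          # Chebyshev extreme points
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    X = np.tile(x, (n + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))   # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                       # negative-sum trick: diagonal
    return D, x

D, x = cheb(16)
# Spectral convergence: differentiating exp(x) on only 17 points is
# accurate to near machine precision.
err = float(np.max(np.abs(D @ np.exp(x) - np.exp(x))))
```

In the paper's setting, matrices like `D` discretize each separated dimension of the tensor-format pdf.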
Uncertainties in workplace external dosimetry - An analytical approach
International Nuclear Information System (INIS)
The uncertainties associated with external dosimetry measurements at workplaces depend on the type of dosemeter used together with its performance characteristics and the information available on the measurement conditions. Performance characteristics were determined in the course of a type test, and information about the measurement conditions can be either general, e.g. 'research' and 'medicine', or specific, e.g. 'X-ray testing equipment for aluminium wheel rims'. This paper explains an analytical approach to determining the measurement uncertainty. It is based on the draft IEC technical report IEC 62461, Radiation Protection Instrumentation - Determination of Uncertainty in Measurement. Neither this paper nor the report can eliminate the fact that determining the uncertainty requires a larger effort than performing the measurement itself. As a counterbalance, the process of determining the uncertainty not only results in a numerical value of the uncertainty but also produces the best estimate of the quantity to be measured, which may differ from the indication of the instrument. Thus it also improves the result of the measurement. (authors)
Uncertainty propagation for systems of conservation laws, stochastic spectral methods
International Nuclear Information System (INIS)
Uncertainty quantification through stochastic spectral methods has recently been applied to several kinds of stochastic PDEs. This thesis deals with stochastic systems of conservation laws. These systems are nonlinear and develop discontinuities in finite time: these difficulties can trigger the loss of hyperbolicity of the truncated system resulting from the application of sG-gPC (stochastic Galerkin generalized Polynomial Chaos). We introduce a formalism based on both kinetic theory and moment theory in order to close the truncated system in such a way that its hyperbolicity is ensured. The idea is to close the truncated system obtained by Galerkin projection via the introduction of an entropy, a strictly convex function on the definition domain of our unknowns. In the case where this entropy is the mathematical entropy of the non-truncated system, hyperbolicity is ensured. We state several properties of this truncated system for a general non-truncated system of conservation laws. We then apply the method to the stochastic inviscid Burgers' equation with random initial conditions and to the stochastic Euler system in one and two space dimensions. In the vicinity of discontinuities, the new method bounds the oscillations due to the Gibbs phenomenon to a certain range through the entropy of the system, without the use of any adaptive random space discretizations. It is found to be more precise than the stochastic Galerkin method for several test problems. In a last chapter, we present two prospective outlooks: we first suggest an uncertainty propagation method based on the coupling of intrusive and non-intrusive methods. We finally emphasize the modelling possibilities of intrusive Polynomial Chaos methods for taking into account three-dimensional perturbations of a mean one-dimensional flow. (author)
Propagation of nuclear data uncertainty: Exact or with covariances
Directory of Open Access Journals (Sweden)
van Veen D.
2010-10-01
Two distinct methods of propagating basic nuclear data uncertainties to large-scale systems will be presented and compared. The "Total Monte Carlo" method uses a statistical ensemble of nuclear data libraries randomly generated by means of a Monte Carlo approach with the TALYS system. These libraries are then directly used in a large number of reactor calculations (for instance with MCNP), after which the exact probability distribution for the reactor parameter is obtained. The second method makes use of available covariance files and can be done in a single reactor calculation (by using the perturbation method). In this exercise, both methods use consistent sets of data files, which implies that the covariance files used in the second method are directly obtained from the randomly generated nuclear data libraries of the first method. This is a unique and straightforward comparison allowing one to directly apprehend the advantages and drawbacks of each method. Comparisons for different reactions and criticality-safety benchmarks from 19F to actinides will be presented. We can thus conclude whether current methods for using covariance data are good enough or not.
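The two methods being compared can be caricatured on a toy problem: sample "nuclear data" parameters repeatedly for a Total Monte Carlo estimate, and propagate the same input covariance through first-order sensitivities (the covariance "sandwich" rule). The k-eff model and all numbers below are invented for the sketch; real comparisons require full transport calculations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "reactor calculation": k-eff as a nonlinear function of two
# illustrative parameters standing in for fission/capture cross sections.
def keff(sigma_f, sigma_c):
    return 2.4 * sigma_f / (sigma_f + sigma_c)

mean = np.array([1.0, 1.4])
cov = np.array([[0.01**2, 0.0],
                [0.0, 0.02**2]])              # assumed input covariance

# Total Monte Carlo: random "data libraries" -> repeated calculations ->
# full output distribution.
samples = rng.multivariate_normal(mean, cov, size=20000)
k_tmc = keff(samples[:, 0], samples[:, 1])
tmc_std = float(k_tmc.std())

# Covariance/perturbation method: numerical sensitivities + sandwich rule.
eps = 1e-6
S = np.array([
    (keff(mean[0] + eps, mean[1]) - keff(mean[0] - eps, mean[1])) / (2 * eps),
    (keff(mean[0], mean[1] + eps) - keff(mean[0], mean[1] - eps)) / (2 * eps),
])
cov_std = float(np.sqrt(S @ cov @ S))
```

For small, nearly linear perturbations the two estimates agree closely; Total Monte Carlo additionally yields the full (possibly non-Gaussian) output distribution.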
Estimation of the uncertainty of analyte concentration from the measurement uncertainty.
Brown, Simon; Cooke, Delwyn G; Blackwell, Leonard F
2015-09-01
Ligand-binding assays, such as immunoassays, are usually analysed using standard curves based on the four-parameter and five-parameter logistic models. An estimate of the uncertainty of an analyte concentration obtained from such curves is needed for confidence intervals or precision profiles. Using a numerical simulation approach, it is shown that the uncertainty of the analyte concentration estimate becomes significant at the extremes of the concentration range and that this is affected significantly by the steepness of the standard curve. We also provide expressions for the coefficient of variation of the analyte concentration estimate from which confidence intervals and the precision profile can be obtained. Using three examples, we show that the expressions perform well.
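The four-parameter logistic (4PL) machinery in this abstract can be sketched with a delta-method estimate of the concentration uncertainty: the response noise is divided by the local slope of the standard curve. The parameter values below are illustrative, not from the paper.

```python
# Four-parameter logistic (4PL) standard curve and its inverse;
# a, d are the asymptotes, b the slope factor, c the inflection point.
a, b, c, d = 0.05, 1.2, 50.0, 2.0            # illustrative assumed values

def four_pl(x):
    return d + (a - d) / (1.0 + (x / c) ** b)

def inverse_four_pl(y):
    return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

def concentration_cv(x, sigma_y):
    """Delta-method CV of the estimated concentration for response noise sigma_y."""
    h = 1e-6 * x
    slope = (four_pl(x + h) - four_pl(x - h)) / (2 * h)   # dy/dx on the curve
    sigma_x = sigma_y / abs(slope)                        # first-order propagation
    return sigma_x / x

# The CV grows toward the extremes of the working range, where the
# standard curve flattens out:
cv_mid = concentration_cv(50.0, 0.01)
cv_low = concentration_cv(2.0, 0.01)
```

Plotting `concentration_cv` over the working range reproduces the familiar U-shaped precision profile.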
Assessment and Propagation of Input Uncertainty in Tree-based Option Pricing Models
Gzyl, Henryk; Molina, German; ter Horst, Enrique
2007-01-01
This paper aims to provide a practical example on the assessment and propagation of input uncertainty for option pricing when using tree-based methods. Input uncertainty is propagated into output uncertainty, reflecting that option prices are as unknown as the inputs they are based on. Option pricing formulas are tools whose validity is conditional not only on how close the model represents reality, but also on the quality of the inputs they use, and those inputs are usually not observable. W...
A comparative study of new non-linear uncertainty propagation methods for space surveillance
Horwood, Joshua T.; Aristoff, Jeffrey M.; Singh, Navraj; Poore, Aubrey B.
2014-06-01
We propose a unified testing framework for assessing uncertainty realism during non-linear uncertainty propagation under the perturbed two-body problem of celestial mechanics, with an accompanying suite of metrics and benchmark test cases on which to validate different methods. We subsequently apply the testing framework to different combinations of uncertainty propagation techniques and coordinate systems for representing the uncertainty. In particular, we recommend the use of a newly-derived system of orbital element coordinates that mitigate the non-linearities in uncertainty propagation and the recently-developed Gauss von Mises filter which, when used in tandem, provide uncertainty realism over much longer periods of time compared to Gaussian representations of uncertainty in Cartesian spaces, at roughly the same computational cost.
Díez, C. J.; Cabellos, O.; Martínez, J. S.
2015-01-01
Several approaches have been developed in recent decades to tackle nuclear data uncertainty propagation problems in burn-up calculations. One proposed approach is the Hybrid Method, in which uncertainties in nuclear data are propagated only through the depletion part of a burn-up problem. Because only depletion is addressed, only one-group cross sections are necessary, and hence their collapsed one-group uncertainties. This approach has been applied successfully in several advanced reactor systems, such as EFIT (an ADS-like reactor) or ESFR (a sodium fast reactor), to assess uncertainties in the isotopic composition. However, a comparison with multi-group energy structures was not carried out, and it has to be performed in order to analyse the limitations of using one-group uncertainties.
Preliminary Results on Uncertainty Quantification for Pattern Analytics
Energy Technology Data Exchange (ETDEWEB)
Stracuzzi, David John [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Brost, Randolph [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Chen, Maximillian Gene [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Malinas, Rebecca [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Peterson, Matthew Gregor [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Phillips, Cynthia A. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Robinson, David G. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Woodbridge, Diane [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)
2015-09-01
This report summarizes preliminary research into uncertainty quantification for pattern analytics within the context of the Pattern Analytics to Support High-Performance Exploitation and Reasoning (PANTHER) project. The primary focus of PANTHER was to make large quantities of remote sensing data searchable by analysts. The work described in this report adds nuance to both the initial data preparation steps and the search process. Search queries are transformed from "does the specified pattern exist in the data?" to "how certain is the system that the returned results match the query?" We show example results for both data processing and search, and discuss a number of possible improvements for each.
An Analytical Study of the Mode Propagation along the Plasmaline
Szeremley, Daniel; Brinkmann, Ralf Peter; Mussenbrock, Thomas; Eremin, Denis; Theoretical Electrical Engineering Team
2014-10-01
The market has shown in recent years a growing demand for bottles made of polyethylene terephthalate (PET). Therefore, fast and efficient sterilization processes as well as barrier coatings to decrease gas permeation are required. A specialized microwave plasma source, referred to as the plasmaline, has been developed to allow for treatment of the inner surface of such PET bottles. The plasmaline is a coaxial waveguide combined with a gas inlet which is inserted into the empty bottle and initiates a reactive plasma. To optimize and control the different surface processes, it is essential to fully understand the microwave power coupling to the plasma inside the bottle and thus the electromagnetic wave propagation along the plasmaline. In this contribution, we present a detailed dispersion analysis based on an analytical approach. We study how modes of guided waves propagate under different conditions (if at all). The analytical results are supported by a series of self-consistent numerical simulations of the plasmaline and the plasma. The authors acknowledge funding by the Deutsche Forschungsgemeinschaft within the frame of SFB-TR 87.
A Multi-Model Approach for Uncertainty Propagation and Model Calibration in CFD Applications
Wang, Jian-xun; Xiao, Heng
2015-01-01
Proper quantification and propagation of uncertainties in computational simulations are of critical importance. This issue is especially challenging for CFD applications. A particular obstacle to uncertainty quantification in CFD problems is the large model discrepancy associated with the CFD models used for uncertainty propagation. Neglecting or improperly representing the model discrepancies leads to inaccurate and distorted uncertainty distributions for the Quantities of Interest. High-fidelity models, being accurate yet expensive, can accommodate only a small ensemble of simulations and thus lead to large interpolation errors and/or sampling errors; low-fidelity models can propagate a large ensemble, but can introduce large modeling errors. In this work, we propose a multi-model strategy to account for the influences of model discrepancies in uncertainty propagation and to reduce their impact on the predictions. Specifically, we take advantage of CFD models of multiple fidelities to estimate the model ...
A comparative study: top event unavailability by point estimates and uncertainty propagation
International Nuclear Information System (INIS)
The results of five case studies are presented to identify how close the cumulative value represented by the point estimate is to the corresponding statistics of the top event distribution. The computer code FTA-J is used for quantification of the fault trees studied, including top event unavailabilities, moments and cumulative probability distributions. FTA-J demonstrates its usefulness for large trees. In all cases, the point estimate unavailability of the top event based on the median values of the basic events, which has been widely and commonly used in risk assessment for the sake of its simplicity, is lower than the median unavailability obtained by uncertainty propagation. The top event unavailability thus obtained can be much lower: i.e. the system would appear much better than it actually is. The point estimate based on the mean values, however, is shown numerically and analytically to be the same as that obtained by uncertainty propagation. The mean of the top event can be well approximated by forming the product of the means of the components in each cut set, then summing these products. The point estimate cannot represent all of the probability distribution characteristics of the top event, so the estimation of probability intervals for the top event unavailability should be made either by Monte Carlo simulation or by an analytical method. When the top event unavailability must be calculated only by a point estimate, it is the mean value of the component failure data that should be used for its quantification. (author)
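The mean-versus-median point estimate distinction can be demonstrated on a two-cut-set toy fault tree with lognormal basic events; the cut sets, medians and error factors below are invented for illustration, and the rare-event approximation is used for the top event.

```python
import numpy as np

rng = np.random.default_rng(1)

# Lognormal basic-event unavailability given median m and error factor EF,
# where EF is defined as the 95th percentile divided by the median.
def lognormal(median, ef, size):
    sigma = np.log(ef) / 1.645
    return rng.lognormal(np.log(median), sigma, size)

# Two minimal cut sets {A, B} and {C}; top event by rare-event approximation.
n = 200_000
A = lognormal(1e-3, 3.0, n)
B = lognormal(2e-3, 3.0, n)
C = lognormal(1e-4, 10.0, n)
top = A * B + C
mc_mean = float(top.mean())

# Point estimate from component MEANS: product of means per cut set, summed.
# (Mean of a lognormal = median * exp(sigma^2 / 2).)
def mean_ln(median, ef):
    return median * np.exp(0.5 * (np.log(ef) / 1.645) ** 2)

pt_mean = mean_ln(1e-3, 3.0) * mean_ln(2e-3, 3.0) + mean_ln(1e-4, 10.0)

# Point estimate from component MEDIANS understates the top event.
pt_median = 1e-3 * 2e-3 + 1e-4
```

The mean-based point estimate reproduces the Monte Carlo mean, while the median-based one sits well below it, matching the abstract's conclusion.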
Measuring the Gas Constant "R": Propagation of Uncertainty and Statistics
Olsen, Robert J.; Sattar, Simeen
2013-01-01
Determining the gas constant "R" by measuring the properties of hydrogen gas collected in a gas buret is well suited for comparing two approaches to uncertainty analysis using a single data set. The brevity of the experiment permits multiple determinations, allowing for statistical evaluation of the standard uncertainty u[subscript…
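The first-order (Taylor) propagation route for this experiment reduces to adding relative uncertainties in quadrature, since R = PV/(nT) is a pure product/quotient. The measured values and standard uncertainties below are made up for the sketch, not taken from the article.

```python
import math

# Illustrative measured values and standard uncertainties for hydrogen
# collected in a gas buret (assumed numbers, SI units).
P, u_P = 101.3e3, 0.2e3      # pressure, Pa
V, u_V = 45.0e-6, 0.1e-6     # volume, m^3
n, u_n = 1.83e-3, 0.01e-3    # amount of gas, mol
T, u_T = 295.0, 0.5          # temperature, K

R = P * V / (n * T)

# For R = P*V/(n*T), first-order propagation gives relative uncertainties
# combined in quadrature.
u_R = R * math.sqrt((u_P / P) ** 2 + (u_V / V) ** 2
                    + (u_n / n) ** 2 + (u_T / T) ** 2)
```

With these numbers the amount-of-gas term dominates the budget, which is the kind of diagnosis the uncertainty analysis is meant to deliver.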
'spup' - an R package for uncertainty propagation in spatial environmental modelling
Sawicka, Kasia; Heuvelink, Gerard
2016-04-01
Computer models have become a crucial tool in engineering and environmental sciences for simulating the behaviour of complex static and dynamic systems. However, while many models are deterministic, the uncertainty in their predictions needs to be estimated before they are used for decision support. Currently, advances in uncertainty propagation and assessment have been paralleled by a growing number of software tools for uncertainty analysis, but none has gained recognition for universal applicability, including case studies with spatial models and spatial model inputs. Due to the growing popularity and applicability of the open source R programming language, we undertook a project to develop an R package that facilitates uncertainty propagation analysis in spatial environmental modelling. In particular, the 'spup' package provides functions for examining the uncertainty propagation starting from input data and model parameters, via the environmental model, onto model predictions. The functions include uncertainty model specification, stochastic simulation and propagation of uncertainty using Monte Carlo (MC) techniques, as well as several uncertainty visualization functions. Uncertain environmental variables are represented in the package as objects whose attribute values may be uncertain and described by probability distributions. Both numerical and categorical data types are handled. Spatial auto-correlation within an attribute and cross-correlation between attributes are also accommodated. For uncertainty propagation the package has implemented the MC approach with efficient sampling algorithms, i.e. stratified random sampling and Latin hypercube sampling. The design includes facilitation of parallel computing to speed up MC computation. The MC realizations may be used as an input to the environmental models called from R, or externally. Selected static and interactive visualization methods that are understandable by non-experts with limited background in
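'spup' itself is an R package; the Monte Carlo propagation with Latin hypercube (stratified) sampling it implements can be sketched in Python with a hand-rolled sampler and an invented toy model. Everything below is an illustrative analogue, not the package's API.

```python
import numpy as np

rng = np.random.default_rng(7)

# Latin hypercube sample of size n for d independent uniform(0,1) variables:
# exactly one point per 1/n stratum in each dimension, columns shuffled
# independently to break cross-dimension correlation.
def latin_hypercube(n, d):
    u = (rng.random((n, d)) + np.arange(n)[:, None]) / n
    for j in range(d):
        rng.shuffle(u[:, j])
    return u

# Toy environmental model: runoff = rainfall * runoff coefficient,
# both inputs uncertain (uniform distributions, assumed ranges).
n = 1000
u = latin_hypercube(n, 2)
rainfall = 50.0 + 10.0 * u[:, 0]             # uniform(50, 60) mm
coeff = 0.2 + 0.2 * u[:, 1]                  # uniform(0.2, 0.4)
runoff = rainfall * coeff                    # MC realizations of the prediction

# Stratification check: each column covers every 1/n stratum exactly once.
strata_covered = int(np.unique(np.floor(u[:, 0] * n)).size)
```

The `runoff` realizations play the role of spup's MC output, from which summary statistics and exceedance probabilities can be derived.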
Propagation of nuclear data uncertainties in fuel cycle calculations using Monte-Carlo technique
International Nuclear Information System (INIS)
Nowadays, the knowledge of uncertainty propagation in depletion calculations is a critical issue for the safety and economic performance of fuel cycles. Response magnitudes such as decay heat, radiotoxicity and isotopic inventory and their uncertainties should be known to handle spent fuel in present fuel cycles (e.g. high burnup fuel programme) and furthermore in new fuel cycle designs (e.g. fast breeder reactors and ADS). To deal with this task, there are different error propagation techniques, deterministic (adjoint/forward sensitivity analysis) and stochastic (Monte-Carlo technique), to evaluate the error in response magnitudes due to nuclear data uncertainties. In our previous works, cross-section uncertainties were propagated using a Monte-Carlo technique to calculate the uncertainty of response magnitudes such as decay heat and neutron emission. Also, the propagation of decay data, fission yield and cross-section uncertainties was performed, but only isotopic composition was the response magnitude calculated. Following the previous technique, the nuclear data uncertainties are taken into account and propagated to the response magnitudes decay heat and radiotoxicity. These uncertainties are assessed over cooling time. To evaluate this Monte-Carlo technique, two different applications are performed. First, a fission pulse decay heat calculation is carried out to check the Monte-Carlo technique, using decay data and fission yield uncertainties. Then the results are compared with experimental data and a reference calculation (JEFF Report 20). Second, we assess the impact of basic nuclear data (activation cross-section, decay data and fission yields) uncertainties on relevant fuel cycle parameters (decay heat and radiotoxicity) for a conceptual design of a modular European Facility for Industrial Transmutation (EFIT) fuel cycle. After identifying which time steps have higher uncertainties, an assessment of which uncertainties have more relevance is performed
Pragmatic aspects of uncertainty propagation: A conceptual review
Thacker, W.Carlisle
2015-09-11
When quantifying the uncertainty of the response of a computationally costly oceanographic or meteorological model stemming from the uncertainty of its inputs, practicality demands getting the most information using the fewest simulations. It is widely recognized that, by interpolating the results of a small number of simulations, results of additional simulations can be inexpensively approximated to provide a useful estimate of the variability of the response. Even so, as computing the simulations to be interpolated remains the biggest expense, the choice of these simulations deserves attention. When making this choice, two requirements should be considered: (i) the nature of the interpolation and (ii) the available information about input uncertainty. Examples comparing polynomial interpolation and Gaussian process interpolation are presented for three different views of input uncertainty.
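A minimal sketch of the surrogate idea discussed above: run a handful of expensive simulations, interpolate them, and query the cheap interpolant many times. The model, design points, and input distribution below are invented for illustration; polynomial (Lagrange) interpolation is used, with Gaussian process interpolation as the alternative the review compares against.

```python
import random
import statistics

def expensive_model(x):
    # Stand-in for a costly simulation; here a smooth nonlinear response.
    return x ** 2 + 0.5 * x

def lagrange(points):
    """Quadratic Lagrange interpolant through three (x, y) simulation results."""
    (x0, y0), (x1, y1), (x2, y2) = points
    def p(x):
        return (y0 * (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
                + y1 * (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
                + y2 * (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1)))
    return p

# Run only three "expensive" simulations at chosen design points...
design = [0.0, 1.0, 2.0]
surrogate = lagrange([(x, expensive_model(x)) for x in design])

# ...then estimate response variability with many cheap surrogate evaluations,
# assuming the uncertain input is N(1.0, 0.3^2).
rng = random.Random(1)
samples = [surrogate(rng.gauss(1.0, 0.3)) for _ in range(5000)]
m = statistics.mean(samples)
print(round(m, 2), round(statistics.stdev(samples), 2))
```

The choice of the three design points is exactly the "choice of simulations" the review emphasizes: they should span the region where the input distribution puts its mass.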
Analytic Matrix Method for the Study of Propagation Characteristics of a Bent Planar Waveguide
Institute of Scientific and Technical Information of China (English)
LIU Qing; CAO Zhuang-Qi; SHEN Qi-Shun; DOU Xiao-Ming; CHEN Ying-Li
2000-01-01
An analytic matrix method is used to analyze and accurately calculate the propagation constant and bending losses of a bent planar waveguide. This method gives not only a dispersion equation with explicit physical insight, but also accurate complex propagation constants.
Monte Carlo uncertainty propagation approaches in ADS burn-up calculations
International Nuclear Information System (INIS)
Highlights: ► Two Monte Carlo uncertainty propagation approaches are compared. ► How to make both approaches equivalent is presented and applied. ► ADS burn-up calculation is selected as the application of approaches. ► The cross-section uncertainties of 239Pu and 241Pu are propagated. ► Cross-correlations appear as a source of differences between approaches. - Abstract: In activation calculations, there are several approaches to quantify uncertainties: deterministic by means of sensitivity analysis, and stochastic by means of Monte Carlo. Here, two different Monte Carlo approaches for nuclear data uncertainty are presented: the first one is the Total Monte Carlo (TMC). The second one is by means of a Monte Carlo sampling of the covariance information included in the nuclear data libraries to propagate these uncertainties throughout the activation calculations. This last approach is what we named Covariance Uncertainty Propagation, CUP. This work presents both approaches and their differences. Also, they are compared by means of an activation calculation, where the cross-section uncertainties of 239Pu and 241Pu are propagated in an ADS activation calculation
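The CUP idea, sampling from the covariance information in a library rather than from varied model parameters, can be illustrated as follows. The cross-section means, the covariance matrix, and the response function below are made-up toy numbers, not evaluated nuclear data; the mechanics (Cholesky factorization, correlated Gaussian sampling, empirical statistics) are the general technique:

```python
import math
import random

# Hypothetical means and covariance for two cross sections (barn), with a
# positive cross-correlation, as might be read from a data library.
mean = [1.8, 1.1]
cov = [[0.0036, 0.0012],
       [0.0012, 0.0025]]

# Cholesky factor of the 2x2 covariance: cov = L * L^T.
l00 = math.sqrt(cov[0][0])
l10 = cov[1][0] / l00
l11 = math.sqrt(cov[1][1] - l10 ** 2)

def sample_xs(rng):
    """One correlated sample of the two cross sections."""
    z0, z1 = rng.gauss(0, 1), rng.gauss(0, 1)
    return mean[0] + l00 * z0, mean[1] + l10 * z0 + l11 * z1

def response(s1, s2):
    # Toy stand-in for an activation calculation depending on both inputs.
    return s1 * s2 / (s1 + s2)

rng = random.Random(42)
vals = [response(*sample_xs(rng)) for _ in range(20000)]
mu = sum(vals) / len(vals)
var = sum((v - mu) ** 2 for v in vals) / (len(vals) - 1)
print(round(mu, 3), round(math.sqrt(var) / mu * 100, 1))  # mean, rel. unc. (%)
```

TMC differs in that each sample is a whole randomized nuclear data library generated from model parameters, so cross-correlations between reactions emerge from the physics model rather than being prescribed by a covariance matrix, which is one source of the differences the paper discusses.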
An analytical approach for the Propagation Saw Test
Benedetti, Lorenzo; Fischer, Jan-Thomas; Gaume, Johan
2016-04-01
The Propagation Saw Test (PST) [1, 2] is an experimental in-situ technique that has been introduced to assess crack propagation propensity in weak snowpack layers buried below cohesive snow slabs. This test has attracted the interest of a large number of practitioners, being relatively easy to perform and providing useful insights for the evaluation of snow instability. The PST procedure requires isolating a snow column 30 centimeters wide and at least 1 meter long in the downslope direction. Then, once the stratigraphy is known (e.g. from a manual snow profile), a saw is used to cut a weak layer which could fail, potentially leading to the release of a slab avalanche. If the length of the saw cut reaches the so-called critical crack length, the onset of crack propagation occurs. Furthermore, depending on snow properties, the crack in the weak layer can initiate the fracture and detachment of the overlying slab. Statistical studies over a large set of field data confirmed the relevance of the PST, highlighting the positive correlation between test results and the likelihood of avalanche release [3]. Recent works provided key information on the conditions for the onset of crack propagation [4] and on the evolution of slab displacement during the test [5]. In addition, experimental studies [6] and simplified models [7] focused on the qualitative description of snowpack properties leading to different failure types, namely full propagation or fracture arrest (with or without slab fracture). However, besides current numerical studies utilizing discrete element methods [8], little attention has been devoted to a detailed analytical description of the PST able to give a comprehensive mechanical framework of the sequence of processes involved in the test. Consequently, this work aims to give a quantitative tool for an exhaustive interpretation of the PST, drawing attention to the important parameters that influence the test outcomes. First, starting from a pure
An algorithm to improve sampling efficiency for uncertainty propagation using sampling based method
Energy Technology Data Exchange (ETDEWEB)
Campolina, Daniel; Lima, Paulo Rubens I., E-mail: campolina@cdtn.br, E-mail: pauloinacio@cpejr.com.br [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG), Belo Horizonte, MG (Brazil). Servico de Tecnologia de Reatores; Pereira, Claubia; Veloso, Maria Auxiliadora F., E-mail: claubia@nuclear.ufmg.br, E-mail: dora@nuclear.ufmg.br [Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, MG (Brazil). Dept. de Engenharia Nuclear
2015-07-01
Sample size and computational uncertainty were varied in order to investigate sample efficiency and convergence of the sampling-based method for uncertainty propagation. The transport code MCNPX was used to simulate an LWR model and allow the mapping from uncertain inputs of the benchmark experiment to uncertain outputs. Random sampling efficiency was improved through the use of an algorithm for selecting distributions. Mean range, standard deviation range and skewness were verified in order to obtain a better representation of uncertainty figures. A standard deviation of 5 pcm in the propagated uncertainties over 10 n-sample replicates was adopted as the convergence criterion for the method. An estimate of 75 pcm uncertainty on reactor k_eff was obtained by using a sample of size 93 and a computational uncertainty of 28 pcm to propagate the 1σ uncertainty of the burnable poison radius. For a fixed computational time, in order to reduce the variance of the propagated uncertainty, it was found for the example under investigation that it is preferable to double the sample size rather than to double the number of particles followed by the Monte Carlo process in the MCNPX code. (author)
Soft computing approaches to uncertainty propagation in environmental risk management
Kumar, Vikas
2008-01-01
Real-world problems, especially those that involve natural systems, are complex and composed of many nondeterministic components having non-linear coupling. It turns out that in dealing with such systems, one has to face a high degree of uncertainty and tolerate imprecision. Classical system models based on numerical analysis, crisp logic or binary logic have the characteristics of precision and categoricity and are classified as the hard computing approach. In contrast, soft computing approaches like pro...
Gadomski, P. J.; Deems, J. S.; Glennie, C. L.; Hartzell, P. J.; Butler, H.; Finnegan, D. C.
2015-12-01
The use of high-resolution topographic data in the form of three-dimensional point clouds obtained from laser scanning systems (LiDAR) is becoming common across scientific disciplines. However, little consideration has typically been given to the accuracy and the precision of LiDAR-derived measurements at the individual point scale. Numerous disparate sources contribute to the aggregate precision of each point measurement, including uncertainties in the range measurement, measurement of the attitude and position of the LiDAR collection platform, uncertainties associated with the interaction between the laser pulse and the target surface, and more. We have implemented open-source software tools to calculate per-point stochastic measurement errors for a point cloud using the general LiDAR georeferencing equation. We demonstrate the use of these propagated uncertainties by applying our methods to data collected by the Airborne Snow Observatory ALS, a NASA JPL project using a combination of airborne hyperspectral and LiDAR data to estimate snow-water equivalent distributions over full river basins. We present basin-scale snow depth maps with associated uncertainties, and demonstrate the propagation of those uncertainties to snow volume and snow-water equivalent calculations.
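First-order propagation through a georeferencing equation, of the kind the open-source tools above implement, can be sketched in 2-D. The geometry and noise levels below are illustrative assumptions (not ASO instrument values), and the georeferencing model is reduced to platform position plus a range vector:

```python
import math

def point_uncertainty(rng_m, theta, sigma_range, sigma_theta, sigma_pos):
    """First-order propagation of range, angle, and platform-position
    uncertainties to a 2-D point coordinate, assuming the simplified
    georeferencing p = platform + range * [cos(theta), sin(theta)]
    with independent error sources."""
    # Partial derivatives of (x, y) w.r.t. (range, theta):
    dx = (math.cos(theta), -rng_m * math.sin(theta))
    dy = (math.sin(theta),  rng_m * math.cos(theta))
    var_x = (dx[0] * sigma_range) ** 2 + (dx[1] * sigma_theta) ** 2 + sigma_pos ** 2
    var_y = (dy[0] * sigma_range) ** 2 + (dy[1] * sigma_theta) ** 2 + sigma_pos ** 2
    return math.sqrt(var_x), math.sqrt(var_y)

# Illustrative numbers: 500 m range, steep look angle, 3 cm ranging noise,
# 1e-4 rad attitude noise, 5 cm GNSS position noise.
sx, sy = point_uncertainty(500.0, math.radians(80), 0.03, 1e-4, 0.05)
print(round(sx, 3), round(sy, 3))
```

Even in this toy form it shows the key behaviour: the attitude term scales with range, so at long ranges a small angular uncertainty dominates the per-point error budget.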
Servin, Christian
2015-01-01
On various examples ranging from geosciences to environmental sciences, this book explains how to generate an adequate description of uncertainty, how to justify semiheuristic algorithms for processing uncertainty, and how to make these algorithms more computationally efficient. It explains in what sense the existing approach to uncertainty as a combination of random and systematic components is only an approximation, presents a more adequate three-component model with an additional periodic error component, and explains how uncertainty propagation techniques can be extended to this model. The book provides a justification for a practically efficient heuristic technique (based on fuzzy decision-making). It explains how the computational complexity of uncertainty processing can be reduced. The book also shows how to take into account that in real life, the information about uncertainty is often only partially known, and, on several practical examples, explains how to extract the missing information about uncer...
International Nuclear Information System (INIS)
For all the physical components that comprise a nuclear system there is an associated uncertainty. Assessing the impact of uncertainties in the simulation of fissionable material systems is essential for best estimate calculations, which have been replacing conservative model calculations as computational power increases. The propagation of uncertainty in a simulation using a Monte Carlo code by sampling the input parameters is recent because of the huge computational effort required. In this work a sample space of MCNPX calculations was used to propagate the uncertainty. The sample size was optimized using the Wilks formula for a 95th percentile and a two-sided statistical tolerance interval of 95%. Uncertainties in input parameters of the reactor considered included geometry dimensions and densities. This showed the capability of the sampling-based method for burnup calculations when the sample size is optimized and many parameter uncertainties are investigated together in the same input. In particular, it was shown that during burnup the variance obtained when considering all parameter uncertainties together is equivalent to the sum of the variances obtained when the parameter uncertainties are sampled separately
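The two-sided Wilks 95%/95% criterion mentioned above can be reproduced directly; the smallest sample size satisfying it is indeed 93, matching the sample size reported in the study:

```python
def wilks_two_sided(coverage=0.95, confidence=0.95):
    """Smallest sample size n such that the interval [min, max] of n runs
    covers at least `coverage` of the output distribution with the given
    confidence (two-sided Wilks formula)."""
    n = 2
    while 1 - coverage ** n - n * (1 - coverage) * coverage ** (n - 1) < confidence:
        n += 1
    return n

print(wilks_two_sided())  # prints 93
```

The left-hand side of the loop condition is the probability that the sample minimum and maximum bracket at least the `coverage` fraction of the distribution, which follows from order statistics and holds for any continuous output distribution.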
Myers, Casey A.; Laz, Peter J.; Shelburne, Kevin B.; Davidson, Bradley S.
2014-01-01
Uncertainty that arises from measurement error and parameter estimation can significantly affect the interpretation of musculoskeletal simulations; however, these effects are rarely addressed. The objective of this study was to develop an open-source probabilistic musculoskeletal modeling framework to assess how measurement error and parameter uncertainty propagate through a gait simulation. A baseline gait simulation was performed for a male subject using OpenSim for three stages: inverse ki...
Comparison of nuclear data uncertainty propagation methodologies for PWR burn-up simulations
Diez, Carlos Javier; Hoefer, Axel; Porsch, Dieter; Cabellos, Oscar
2014-01-01
Several methodologies using different levels of approximations have been developed for propagating nuclear data uncertainties in nuclear burn-up simulations. Most methods fall into the two broad classes of Monte Carlo approaches, which are exact apart from statistical uncertainties but require additional computation time, and first order perturbation theory approaches, which are efficient for not too large numbers of considered response functions but only applicable for sufficiently small nuclear data uncertainties. Some methods neglect isotopic composition uncertainties induced by the depletion steps of the simulations, others neglect neutron flux uncertainties, and the accuracy of a given approximation is often very hard to quantify. In order to get a better sense of the impact of different approximations, this work aims to compare results obtained based on different approximate methodologies with an exact method, namely the NUDUNA Monte Carlo based approach developed by AREVA GmbH. In addition, the impact ...
Propagation of Nuclear Data Uncertainties for ELECTRA Burn-up Calculations
Sjöstrand, H.; Alhassan, E.; Duan, J.; Gustavsson, C.; Koning, A. J.; Pomp, S.; Rochman, D.; Österlund, M.
2014-04-01
The European Lead-Cooled Training Reactor (ELECTRA) has been proposed as a training reactor for fast systems within the Swedish nuclear program. It is a low-power fast reactor cooled by pure liquid lead. In this work, we propagate the uncertainties in 239Pu transport data to uncertainties in the fuel inventory of ELECTRA during the reactor lifetime using the Total Monte Carlo approach (TMC). Within the TENDL project, nuclear model input parameters were randomized within their uncertainties and 740 239Pu nuclear data libraries were generated. These libraries are used as inputs to reactor codes, in our case SERPENT, to perform uncertainty analysis of the nuclear reactor inventory during burn-up. The uncertainty in the inventory determines uncertainties in: the long-term radio-toxicity, the decay heat, the evolution of reactivity parameters, gas pressure and volatile fission product content. In this work, a methodology called fast TMC is utilized, which reduces the overall calculation time. The uncertainties of some minor actinides were observed to be rather large and therefore their impact on multiple recycling should be investigated further. It was also found that criticality benchmarks can be used to reduce inventory uncertainties due to nuclear data. Further studies are needed to include fission yield uncertainties, more isotopes, and a larger set of benchmarks.
Directory of Open Access Journals (Sweden)
Wansik Yu
2016-01-01
The common approach to quantifying precipitation forecast uncertainty is ensemble simulation, where a numerical weather prediction (NWP) model is run for a number of cases with slightly different initial conditions. In practice, the spread of ensemble members in terms of flood discharge is used as a measure of forecast uncertainty due to uncertain precipitation forecasts. This study presents the propagation of rainfall forecast uncertainty into the hydrological response at catchment scale through distributed rainfall-runoff modeling based on the forecasted ensemble rainfall of an NWP model. First, forecast rainfall error based on the BIAS is compared with flood forecast error to assess the error propagation. Second, the variability of flood forecast uncertainty with catchment scale is discussed using ensemble spread. Then we assess the flood forecast uncertainty with catchment scale using a regression equation estimated between ensemble rainfall BIAS and discharge BIAS. Finally, the flood forecast uncertainty in terms of RMSE using specific discharge at catchment scale is discussed. Our study is carried out and verified using the largest flood event, caused by typhoon "Talas" in 2011, over the 33 subcatchments of the Shingu river basin (2,360 km2), which is located in the Kii Peninsula, Japan.
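The BIAS, RMSE, and ensemble-spread measures used above can be sketched as follows. The three-member discharge ensemble and the observations are invented numbers for illustration only:

```python
import math

def bias(forecast, observed):
    """Mean forecast error (BIAS) over matching time steps."""
    return sum(f - o for f, o in zip(forecast, observed)) / len(observed)

def rmse(forecast, observed):
    """Root-mean-square error of the forecast against observations."""
    return math.sqrt(sum((f - o) ** 2 for f, o in zip(forecast, observed)) / len(observed))

def ensemble_spread(members):
    """Mean over time of the ensemble standard deviation at each time step."""
    spreads = []
    for step in zip(*members):
        m = sum(step) / len(step)
        spreads.append(math.sqrt(sum((v - m) ** 2 for v in step) / (len(step) - 1)))
    return sum(spreads) / len(spreads)

# Three hypothetical discharge ensemble members (m3/s) over three time steps,
# against hypothetical observed discharges.
members = [[100, 220, 180], [120, 250, 200], [90, 210, 170]]
obs = [110, 230, 185]
mean_fc = [sum(s) / len(s) for s in zip(*members)]
b, r, s = bias(mean_fc, obs), rmse(mean_fc, obs), ensemble_spread(members)
print(round(b, 1), round(r, 1), round(s, 1))
```

Comparing rainfall BIAS to discharge BIAS computed this way, per subcatchment, is the error-propagation assessment the study performs.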
Energy Technology Data Exchange (ETDEWEB)
Mullor, R. [Dpto. Estadistica e Investigacion Operativa, Universidad Alicante (Spain); Sanchez, A., E-mail: aisanche@eio.upv.e [Dpto. Estadistica e Investigacion Operativa Aplicadas y Calidad, Universidad Politecnica Valencia, Camino de Vera s/n 46022 (Spain); Martorell, S. [Dpto. Ingenieria Quimica y Nuclear, Universidad Politecnica Valencia (Spain); Martinez-Alzamora, N. [Dpto. Estadistica e Investigacion Operativa Aplicadas y Calidad, Universidad Politecnica Valencia, Camino de Vera s/n 46022 (Spain)
2011-06-15
Safety-related systems performance optimization is classically based on quantifying the effects that testing and maintenance activities have on reliability and cost (R+C). However, R+C quantification is often incomplete in the sense that important uncertainties may not be considered. A considerable number of studies on R+C-based optimization considering uncertainties have been published in the last decade. They have demonstrated that inclusion of uncertainties in the optimization gives the decision maker insight into how uncertain the R+C results are and how much this uncertainty matters, as it can change the outcome of the decision-making process. Several methods of uncertainty propagation based on the theory of tolerance regions have been proposed in the literature, depending on the particular characteristics of the variables in the output and their relations. In this context, this paper focuses on the application of non-parametric and parametric methods to analyze uncertainty propagation, implemented on a multi-objective optimization problem where reliability and cost act as decision criteria and maintenance intervals act as decision variables. Finally, the results of these applications are compared and conclusions are drawn.
Parker, Jack C.; Park, Eungyu; Tang, Guoping
2008-11-01
A vertically-integrated analytical model for dissolved phase transport is described that considers a time-dependent DNAPL source based on the upscaled dissolution kinetics model of Parker and Park with extensions to consider time-dependent source zone biodecay, partial source mass reduction, and remediation-enhanced source dissolution kinetics. The model also considers spatial variability in aqueous plume decay, which is treated as the sum of aqueous biodecay and volatilization due to diffusive transport and barometric pumping through the unsaturated zone. The model is implemented in Excel/VBA coupled with (1) an inverse solution that utilizes prior information on model parameters and their uncertainty to condition the solution, and (2) an error analysis module that computes parameter covariances and total prediction uncertainty due to regression error and parameter uncertainty. A hypothetical case study is presented to evaluate the feasibility of calibrating the model from limited noisy field data. The results indicate that prediction uncertainty increases significantly over time following calibration, primarily due to propagation of parameter uncertainty. However, differences between the predicted performance of source zone partial mass reduction and the known true performance were reasonably small. Furthermore, a clear difference is observed between the predicted performance for the remedial action scenario versus that for a no-action scenario, which is consistent with the true system behavior. The results suggest that the model formulation can be effectively utilized to assess monitored natural attenuation and source remediation options if careful attention is given to model calibration and prediction uncertainty issues.
Parker, Jack C; Park, Eungyu; Tang, Guoping
2008-11-14
A vertically-integrated analytical model for dissolved phase transport is described that considers a time-dependent DNAPL source based on the upscaled dissolution kinetics model of Parker and Park with extensions to consider time-dependent source zone biodecay, partial source mass reduction, and remediation-enhanced source dissolution kinetics. The model also considers spatial variability in aqueous plume decay, which is treated as the sum of aqueous biodecay and volatilization due to diffusive transport and barometric pumping through the unsaturated zone. The model is implemented in Excel/VBA coupled with (1) an inverse solution that utilizes prior information on model parameters and their uncertainty to condition the solution, and (2) an error analysis module that computes parameter covariances and total prediction uncertainty due to regression error and parameter uncertainty. A hypothetical case study is presented to evaluate the feasibility of calibrating the model from limited noisy field data. The results indicate that prediction uncertainty increases significantly over time following calibration, primarily due to propagation of parameter uncertainty. However, differences between the predicted performance of source zone partial mass reduction and the known true performance were reasonably small. Furthermore, a clear difference is observed between the predicted performance for the remedial action scenario versus that for a no-action scenario, which is consistent with the true system behavior. The results suggest that the model formulation can be effectively utilized to assess monitored natural attenuation and source remediation options if careful attention is given to model calibration and prediction uncertainty issues. PMID:18502537
Propagation of statistical and nuclear data uncertainties in Monte Carlo burn-up calculations
Energy Technology Data Exchange (ETDEWEB)
Garcia-Herranz, Nuria [Departamento de Ingenieria Nuclear, Universidad Politecnica de Madrid, UPM (Spain)], E-mail: nuria@din.upm.es; Cabellos, Oscar [Departamento de Ingenieria Nuclear, Universidad Politecnica de Madrid, UPM (Spain); Sanz, Javier [Departamento de Ingenieria Energetica, Universidad Nacional de Educacion a Distancia, UNED (Spain); Juan, Jesus [Laboratorio de Estadistica, Universidad Politecnica de Madrid, UPM (Spain); Kuijper, Jim C. [NRG - Fuels, Actinides and Isotopes Group, Petten (Netherlands)
2008-04-15
Two methodologies to propagate the uncertainties on the nuclide inventory in combined Monte Carlo-spectrum and burn-up calculations are presented, based on sensitivity/uncertainty and random sampling techniques (uncertainty Monte Carlo method). Both enable the assessment of the impact of uncertainties in the nuclear data as well as uncertainties due to the statistical nature of the Monte Carlo neutron transport calculation. The methodologies are implemented in our MCNP-ACAB system, which combines the neutron transport code MCNP-4C and the inventory code ACAB. A high burn-up benchmark problem is used to test the MCNP-ACAB performance in inventory predictions, with no uncertainties. A good agreement is found with the results of other participants. This benchmark problem is also used to assess the impact of nuclear data uncertainties and statistical flux errors in high burn-up applications. A detailed calculation is performed to evaluate the effect of cross-section uncertainties in the inventory prediction, taking into account the temporal evolution of the neutron flux level and spectrum. Very large uncertainties are found at the unusually high burn-up of this exercise (800 MWd/kgHM). To compare the impact of the statistical errors in the calculated flux with the cross-section uncertainties, a simplified problem is considered, taking a constant neutron flux level and spectrum. It is shown that, provided that the flux statistical deviations in the Monte Carlo transport calculation do not exceed a given value, the effect of the flux errors on the calculated isotopic inventory is negligible (even at very high burn-up) compared to the effect of the large cross-section uncertainties available at present in the data files.
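The two methodology classes, sensitivity/uncertainty (first-order "sandwich" rule) and random sampling (uncertainty Monte Carlo), can be contrasted on a one-parameter toy depletion model. All numbers below are illustrative assumptions, not benchmark data; for a small input uncertainty the two estimates should nearly coincide:

```python
import math
import random

# Toy depletion model: remaining nuclide fraction after fluence `phi_t`,
# given an uncertain capture cross section `sigma`.
phi_t = 0.5
model = lambda sigma: math.exp(-sigma * phi_t)

sigma_mean, sigma_sd = 2.0, 0.1  # hypothetical cross section and 1-sigma unc.

# (1) Sensitivity/uncertainty estimate: var = S * V * S, with S the
#     derivative of the response with respect to the input.
S = -phi_t * model(sigma_mean)
sd_sandwich = abs(S) * sigma_sd

# (2) Random-sampling (uncertainty Monte Carlo) estimate of the same quantity.
rng = random.Random(7)
runs = [model(rng.gauss(sigma_mean, sigma_sd)) for _ in range(50000)]
mu = sum(runs) / len(runs)
sd_mc = math.sqrt(sum((r - mu) ** 2 for r in runs) / (len(runs) - 1))

print(round(sd_sandwich, 4), round(sd_mc, 4))  # the two estimates agree closely
```

With larger input uncertainties (as at the 800 MWd/kgHM burn-up of the exercise) the model becomes effectively nonlinear over the sampled range and the two approaches start to diverge, which is why having both implemented in one system is useful.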
International Nuclear Information System (INIS)
This thesis presents a comprehensive study of sensitivity/uncertainty analysis of reactor performance parameters (e.g. the k-effective) with respect to the base nuclear data from which they are computed. The analysis starts at the fundamental step: the Evaluated Nuclear Data Files and the uncertainties inherently associated with the data they contain, available in the form of variance/covariance matrices. We show that when a methodical and consistent computation of sensitivity is performed, conventional deterministic formalisms can be sufficient to propagate nuclear data uncertainties with the level of accuracy obtained by the most advanced tools, such as state-of-the-art Monte Carlo codes. By applying our developed methodology to three exercises proposed by the OECD (Uncertainty Analysis for Criticality Safety Assessment Benchmarks), we provide insight into the underlying physical phenomena associated with the formalisms used. (author)
Sensor Analytics: Radioactive gas Concentration Estimation and Error Propagation
Energy Technology Data Exchange (ETDEWEB)
Anderson, Dale N.; Fagan, Deborah K.; Suarez, Reynold; Hayes, James C.; McIntyre, Justin I.
2007-04-15
This paper develops the mathematical statistics of a radioactive gas quantity measurement and associated error propagation. The probabilistic development is a different approach to deriving attenuation equations and offers easy extensions to more complex gas analysis components through simulation. The mathematical development assumes a sequential process of three components: (I) the collection of an environmental sample, (II) component gas extraction from the sample through the application of gas separation chemistry, and (III) the estimation of radioactivity of component gases.
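The three-component sequential structure (collection, separation, activity estimation) lends itself to simulation-based error propagation, as the abstract notes. The sketch below uses invented nominal values and noise levels, not instrument calibration data:

```python
import random

def simulate_measurement(rng):
    """Simulate one measurement as a sequential three-stage process:
    (I) sample collection, (II) gas separation, (III) activity estimation.
    All numbers are illustrative placeholders."""
    volume = rng.gauss(1.00, 0.02)                    # I: sampled air volume (m3)
    recovery = min(1.0, rng.gauss(0.85, 0.03))        # II: chemical recovery fraction
    counts = rng.gauss(500 * volume * recovery, 15)   # III: detector counts
    # Invert the chain with the nominal calibration constants to estimate
    # the relative concentration from the observed counts.
    return counts / (500 * 1.00 * 0.85)

rng = random.Random(3)
est = [simulate_measurement(rng) for _ in range(20000)]
est_mean = sum(est) / len(est)
sd = (sum((e - est_mean) ** 2 for e in est) / (len(est) - 1)) ** 0.5
rel_err = 100 * sd / est_mean
print(round(est_mean, 3), round(rel_err, 1))  # estimate and relative error (%)
```

Because each stage's uncertainty enters the simulated chain explicitly, adding a more complex component (e.g. a second separation step) only requires one more line, which is the extensibility advantage the paper highlights.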
Fragmentation cross-sections and model uncertainties in Cosmic Ray propagation physics
Tomassetti, Nicola
2015-01-01
Abundances and energy spectra of cosmic ray nuclei are being measured with high accuracy by the AMS experiment. These observations can provide tight constraints on the propagation models of galactic cosmic rays. In view of the release of these data, I present an evaluation of the model uncertainties associated with the cross-sections for secondary production of Li-Be-B nuclei in cosmic rays. I discuss the role of cross-section uncertainties in the calculation of the boron-to-carbon and beryllium-to-boron ratios, as well as their impact on the determination of the cosmic-ray transport parameters.
International Nuclear Information System (INIS)
The report describes how to use the codes: MUP (Monte Carlo Uncertainty Propagation) for uncertainty analysis by Monte Carlo simulation, including correlation analysis, extreme value identification and study of selected ranges of the variable space; CEC-DES (Central Composite Design) for building experimental matrices according to the requirements of Central Composite and Factorial Experimental Designs; and STRADE (Stratified Random Design) for experimental designs based on Latin Hypercube Sampling techniques. Application fields of the codes are probabilistic risk assessment, experimental design, sensitivity analysis and system identification problems
Propagation of nuclear data uncertainties for ELECTRA burn-up calculations
Sjöstrand, H; Duan, J; Gustavsson, C; Koning, A; Pomp, S; Rochman, D; Österlund, M
2013-01-01
The European Lead-Cooled Training Reactor (ELECTRA) has been proposed as a training reactor for fast systems within the Swedish nuclear program. It is a low-power fast reactor cooled by pure liquid lead. In this work, we propagate the uncertainties in Pu-239 transport data to uncertainties in the fuel inventory of ELECTRA during the reactor life using the Total Monte Carlo approach (TMC). Within the TENDL project the nuclear models input parameters were randomized within their uncertainties and 740 Pu-239 nuclear data libraries were generated. These libraries are used as inputs to reactor codes, in our case SERPENT, to perform uncertainty analysis of nuclear reactor inventory during burn-up. The uncertainty in the inventory determines uncertainties in: the long-term radio-toxicity, the decay heat, the evolution of reactivity parameters, gas pressure and volatile fission product content. In this work, a methodology called fast TMC is utilized, which reduces the overall calculation time. The uncertainty in the ...
Analytical Model for Fictitious Crack Propagation in Concrete Beams
DEFF Research Database (Denmark)
Ulfkjær, J. P.; Krenk, Steen; Brincker, Rune
1995-01-01
An analytical model for load-displacement curves of concrete beams is presented. The load-displacement curve is obtained by combining two simple models. The fracture is modeled by a fictitious crack in an elastic layer around the midsection of the beam. Outside the elastic layer the deformations are modeled by beam theory. The state of stress in the elastic layer is assumed to depend bilinearly on local elongation corresponding to a linear softening relation for the fictitious crack. Results from the analytical model are compared with results from a more detailed model based on numerical methods for different beam sizes. The analytical model is shown to be in agreement with the numerical results if the thickness of the elastic layer is taken as half the beam depth. It is shown that the point on the load-displacement curve where the fictitious crack starts to develop and the point where...
Analytical solution to an investment problem under uncertainties with shocks
Cl\\'audia Nunes; Rita Pimentel
2015-01-01
We derive the optimal investment decision in a project where both demand and investment costs are stochastic processes, eventually subject to shocks. We extend the approach used in Dixit and Pindyck (1994), chapter 6.5, to deal with two sources of uncertainty, but assuming that the underlying processes are no longer geometric Brownian diffusions but rather jump diffusion processes. For the class of isoelastic functions that we address in this paper, it is still possible to derive a closed exp...
Analytical Model for Fictitious Crack Propagation in Concrete Beams
DEFF Research Database (Denmark)
Ulfkjær, J. P.; Krenk, S.; Brincker, Rune
An analytical model for load-displacement curves of unreinforced notched and un-notched concrete beams is presented. The load-displacement curve is obtained by combining two simple models. The fracture is modelled by a fictitious crack in an elastic layer around the mid-section of the beam. Outside the elastic layer the deformations are modelled by the Timoshenko beam theory. The state of stress in the elastic layer is assumed to depend bi-linearly on local elongation, corresponding to a linear softening relation for the fictitious crack. For different beam sizes, results from the analytical model ... the point on the load-displacement curve where the fictitious crack starts to develop, and the point where the real crack starts to grow, will always correspond to the same bending moment. Closed-form solutions for the maximum size of the fracture zone and the minimum slope on the load-displacement curve are given ...
Propagation of systematic uncertainty due to data reduction in transmission measurement of iron
International Nuclear Information System (INIS)
A technique based on determinantal inequalities has been developed to estimate the bounds on statistical and systematic uncertainties in neutron cross section measurements. In the measurement of neutron cross sections, correlation is manifested through the process of measurement and through many systematic components such as the geometrical factor, half-life, back scattering, etc. However, the propagation of experimental uncertainties through the reduction of cross section data is itself a complicated procedure and has been attracting attention in recent times. In this paper, the concept of determinantal inequalities is applied to a transmission measurement of the iron cross section, demonstrating how the systematic uncertainty dominates over the statistical one in such data reduction procedures and estimating their individual bounds. (author). 2 refs., 1 tab
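The dominance of the systematic component can be illustrated with a plain first-order error model for a transmission measurement; this is a hedged sketch with illustrative numbers (not the determinantal-inequality technique of the paper), where the transmission is T = counts_out / counts_in and the cross section is sigma = -ln(T) / n, with n the sample areal density.

```python
import math

def transmission_cross_section(counts_in, counts_out, n_areal, dn_areal):
    """Total cross section from a transmission measurement:
    T = counts_out / counts_in,  sigma = -ln(T) / n_areal.
    Statistical part: Poisson counting errors on both count totals.
    Systematic part: uncertainty dn_areal on the sample areal density."""
    T = counts_out / counts_in
    sigma = -math.log(T) / n_areal
    rel_dT = math.sqrt(1.0 / counts_out + 1.0 / counts_in)   # Poisson counts
    d_stat = rel_dT / n_areal            # first-order propagation through ln(T)
    d_syst = sigma * dn_areal / n_areal  # sigma scales as 1/n_areal
    return sigma, d_stat, d_syst

s, d_stat, d_syst = transmission_cross_section(1_000_000, 500_000, 0.2, 0.01)
print(round(s, 3), round(d_stat, 5), round(d_syst, 5))
# with these illustrative numbers the systematic component dominates
```

Even with a million counts, a 5% relative uncertainty on the areal density swamps the counting statistics, which mirrors the paper's conclusion for the iron data.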
Uncertainty propagation in a 3-D thermal code for performance assessment of a nuclear waste disposal
Energy Technology Data Exchange (ETDEWEB)
Dutfoy, A. [Electricite de France (EDF), Research and Development Div., Safety and Reliability Branch, ESF, 92 - Clamart (France); Ritz, J.B. [Electricite de France (EDF), Research and Development Div., Fluid Mechanics and Heat Transfer, MFTT, 78 - Chatou (France)
2001-07-01
Given the very large time scale involved, the performance assessment of a nuclear waste repository requires numerical modelling. Because we are uncertain of the exact values of the input parameters, we have to analyse the impact of these uncertainties on the outcome of the physical models. The EDF Research and Development Division has developed a reliability method to propagate these uncertainties, or variability, through models, which requires far fewer physical simulations than the usual simulation methods. We apply the reliability method MEFISTO to a base case modelling the heat transfers in a virtual disposal at the future site of the French underground research laboratory in the East of France. This study is conducted in collaboration with ANDRA, the French Nuclear Waste Management Agency. With this exercise, we want to evaluate the thermal behaviour of a concept in relation to the variation of physical parameters and their uncertainty. (author)
Myers, Casey A; Laz, Peter J; Shelburne, Kevin B; Davidson, Bradley S
2015-05-01
Uncertainty that arises from measurement error and parameter estimation can significantly affect the interpretation of musculoskeletal simulations; however, these effects are rarely addressed. The objective of this study was to develop an open-source probabilistic musculoskeletal modeling framework to assess how measurement error and parameter uncertainty propagate through a gait simulation. A baseline gait simulation was performed for a male subject using OpenSim for three stages: inverse kinematics, inverse dynamics, and muscle force prediction. A series of Monte Carlo simulations were performed that considered intrarater variability in marker placement, movement artifacts in each phase of gait, variability in body segment parameters, and variability in muscle parameters calculated from cadaveric investigations. Propagation of uncertainty was performed by using the output distributions from one stage as input distributions to subsequent stages. Confidence bounds (5-95%) and sensitivity of outputs to model input parameters were calculated throughout the gait cycle. The combined impact of uncertainty resulted in mean bounds that ranged from 2.7° to 6.4° in joint kinematics, 2.7 to 8.1 N m in joint moments, and 35.8 to 130.8 N in muscle forces. The impact of movement artifact was 1.8 times larger than any other propagated source. Sensitivity to specific body segment parameters and muscle parameters was linked to where in the gait cycle they were calculated. We anticipate that through the increased use of probabilistic tools, researchers will better understand the strengths and limitations of their musculoskeletal simulations and more effectively use simulations to evaluate hypotheses and inform clinical decisions. PMID:25404535
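The staged propagation described above (output samples of one stage reused as input samples to the next, with 5-95% bounds on the result) can be sketched generically; the "angle" and "moment" models below are toy stand-ins with made-up magnitudes, not OpenSim calculations.

```python
import random

def percentile(xs, q):
    """Linear-interpolated percentile of a sample."""
    xs = sorted(xs)
    i = q * (len(xs) - 1)
    k = int(i)
    frac = i - k
    return xs[k] if frac == 0 else xs[k] * (1 - frac) + xs[k + 1] * frac

rng = random.Random(0)
N = 5000

# Stage 1 (stand-in for inverse kinematics): marker-placement error -> joint angle
angles = [30.0 + rng.gauss(0, 2.0) for _ in range(N)]
# Stage 2 (stand-in for inverse dynamics): stage-1 output samples are reused
# as inputs, so upstream uncertainty propagates downstream
moments = [0.5 * a + rng.gauss(0, 1.0) for a in angles]

lo, hi = percentile(moments, 0.05), percentile(moments, 0.95)
print(round(lo, 1), round(hi, 1))  # the 5-95% confidence band of stage 2
```

Sampling the chained stages jointly, rather than summarizing stage 1 by a mean before stage 2, is what lets correlated upstream error show up in the downstream bounds.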
Fuzzy probability based fault tree analysis to propagate and quantify epistemic uncertainty
International Nuclear Information System (INIS)
Highlights: • Fuzzy probability based fault tree analysis is used to evaluate epistemic uncertainty in fuzzy fault tree analysis. • Fuzzy probabilities represent the likelihood of occurrence of all events in a fault tree. • A fuzzy multiplication rule quantifies the epistemic uncertainty of minimal cut sets. • A fuzzy complement rule estimates the epistemic uncertainty of the top event. • The proposed FPFTA has successfully evaluated the U.S. Combustion Engineering RPS. - Abstract: A number of fuzzy fault tree analysis approaches, which integrate fuzzy concepts into the quantitative phase of conventional fault tree analysis, have been proposed to study the reliability of engineering systems. These new approaches apply expert judgment to overcome the limitations of conventional fault tree analysis when basic events do not have probability distributions. Since expert judgment may come with epistemic uncertainty, it is important to quantify the overall uncertainty of the fuzzy fault tree analysis. Monte Carlo simulation is commonly used to quantify the overall uncertainty of conventional fault tree analysis. However, since Monte Carlo simulation is based on probability distributions, this technique is not appropriate for fuzzy fault tree analysis, which is based on fuzzy probabilities. The objective of this study is to develop a fuzzy probability based fault tree analysis (FPFTA) that overcomes this limitation. To demonstrate the applicability of the proposed approach, a case study is performed and its results are compared to the results of a conventional fault tree analysis. The results confirm that the proposed fuzzy probability based fault tree analysis is able to propagate and quantify epistemic uncertainties in fault tree analysis.
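The multiplication and complement rules named in the highlights can be illustrated with alpha-cut interval arithmetic on triangular fuzzy probabilities. This is a minimal sketch of the general idea, with made-up event probabilities, not the paper's exact FPFTA formulation.

```python
def alpha_cut(tri, a):
    """Interval of a triangular fuzzy number (low, mode, high) at level a."""
    lo, m, hi = tri
    return (lo + a * (m - lo), hi - a * (hi - m))

def and_gate(events):
    """Fuzzy multiplication rule: minimal cut set probability (AND of events)."""
    lo, hi = 1.0, 1.0
    for l, h in events:
        lo, hi = lo * l, hi * h
    return lo, hi

def or_gate(cut_sets):
    """Fuzzy complement rule: top event probability 1 - prod(1 - P_cut)."""
    pl, ph = 1.0, 1.0
    for l, h in cut_sets:
        pl, ph = pl * (1 - l), ph * (1 - h)
    return 1 - pl, 1 - ph

# Two basic events with triangular fuzzy probabilities, evaluated at alpha = 0.5
e1 = alpha_cut((0.01, 0.02, 0.04), 0.5)
e2 = alpha_cut((0.02, 0.03, 0.05), 0.5)
cut = and_gate([e1, e2])   # epistemic interval of the minimal cut set
top = or_gate([cut])       # trivial tree with a single cut set
print(cut, top)
```

Repeating the computation over a grid of alpha levels reconstructs the fuzzy membership function of the top event, which is the quantity a probabilistic Monte Carlo loop cannot deliver directly.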
Energy Technology Data Exchange (ETDEWEB)
Heo, Jaeseok, E-mail: jheo@kaeri.re.kr; Kim, Kyung Doo, E-mail: kdkim@kaeri.re.kr
2015-10-15
Highlights: • We developed an interface between an engineering simulation code and statistical analysis software. • Multiple packages of sensitivity analysis, uncertainty quantification, and parameter estimation algorithms are implemented in the framework. • Parallel computing algorithms are also implemented in the framework to solve multiple computational problems simultaneously. - Abstract: This paper introduces a statistical data analysis toolkit, PAPIRUS, designed to perform model calibration, uncertainty propagation, Chi-square linearity testing, and sensitivity analysis for both linear and nonlinear problems. PAPIRUS was developed by implementing multiple packages of methodologies and building an interface between an engineering simulation code and the statistical analysis algorithms. A parallel computing framework is implemented in PAPIRUS with multiple computing resources and proper communication between the server and the clients of each processor. It was shown that even though a large amount of data is considered for the engineering calculation, the distributions of the model parameters and the calculation results can be quantified accurately with significant reductions in computational effort. A general description of PAPIRUS with a graphical user interface is presented in Section 2. Sections 2.1–2.5 present the methodologies of data assimilation, uncertainty propagation, Chi-square linearity testing, and sensitivity analysis implemented in the toolkit, with some results obtained by each module of the software. Parallel computing algorithms adopted in the framework to solve multiple computational problems simultaneously are also summarized in the paper.
Wave-like warp propagation in circumbinary discs I. Analytic theory and numerical simulations
Facchini, Stefano; Lodato, Giuseppe; Price, Daniel J.
2013-01-01
In this paper we analyse the propagation of warps in protostellar circumbinary discs. We use these systems as a test environment in which to study warp propagation in the bending-wave regime, with the addition of an external torque due to the binary gravitational potential. In particular, we want to test the linear regime, for which an analytic theory has been developed. In order to do so, we first compute analytically the steady state shape of an inviscid disc subject to the binary torques. ...
Energy Technology Data Exchange (ETDEWEB)
Dutfoy, A. [Electricite de France R and D Safety and Reliability Branch (EDF), 92 - Clamart (France); Bouton, M. [Electricite de France R and D National Hydraulic Lab. and Environment (EDF), 78 - Chatou (France)
2001-07-01
Given the complexity of the phenomena involved, performance assessment of a nuclear waste disposal requires numerical modelling. Because many of the input parameters of the models are uncertain, analysis of uncertainties and their impact on the probabilistic outcome has become of major importance. This paper presents the EDF Research and Development Division methodology to propagate uncertainties arising from the parameters through the models. This reliability approach provides two important quantitative results: an estimate of the probability that the outcome exceeds some specified threshold level (called the failure event), and a probabilistic sensitivity measure which quantifies the relative importance of each uncertain variable with respect to the probabilistic outcome. Such results could become an integral component of the decision process for nuclear disposal. The reliability method proposed in this paper is applied to a radionuclide transport model. (authors)
Nikolopoulos, Efthymios I.; Polcher, Jan; Anagnostou, Emmanouil N.; Eisner, Stephanie; Fink, Gabriel; Kallos, George
2016-04-01
Precipitation is arguably one of the most important forcing variables that drive terrestrial water cycle processes. The process of precipitation exhibits significant variability in space and time, is associated with different water phases (liquid or solid) and depends on several other factors (aerosols, orography, etc.), which make estimation and modeling of this process a particularly challenging task. As such, precipitation information from different sensors/products is associated with uncertainty. Propagation of this uncertainty into hydrologic simulations can have a considerable impact on the accuracy of the simulated hydrologic variables. Therefore, to make hydrologic predictions more useful, it is important to investigate and assess the impact of precipitation uncertainty on hydrologic simulations in order to quantify it and identify ways to minimize it. In this work we investigate the impact of precipitation uncertainty in hydrologic simulations using land surface models (e.g. ORCHIDEE) and global hydrologic models (e.g. WaterGAP3) for the simulation of several hydrologic variables (soil moisture, ET, runoff) over the Iberian Peninsula. Uncertainty in precipitation is assessed by utilizing various sources of precipitation input, including one reference precipitation dataset (SAFRAN), three widely-used satellite precipitation products (TRMM 3B42v7, CMORPH, PERSIANN) and a state-of-the-art reanalysis product (WFDEI) based on the ECMWF ERA-Interim reanalysis. The comparative analysis uses the SAFRAN-driven simulations as reference and is carried out at different spatial (0.5 deg or regional average) and temporal (daily or seasonal) scales. Furthermore, as an independent verification, simulated discharge is compared against available discharge observations for selected major rivers of the Iberian region. Results allow us to draw conclusions regarding the impact of precipitation uncertainty with respect to i) hydrologic variable of interest, ii
Wijnant, Ysbrand; Spiering, Ruud; Blijderveen, van Maarten; Boer, de André
2006-01-01
Previous research has shown that viscothermal wave propagation in narrow gaps can efficiently be described by means of the low reduced frequency model. For simple geometries and boundary conditions, analytical solutions are available. For example, Beltman [4] gives the acoustic pressure in the gap b
Uncertainty Propagation in a Fundamental Climate Data Record derived from Meteosat Visible Band Data
Rüthrich, Frank; John, Viju; Roebeling, Rob; Wagner, Sebastien; Viticchie, Bartolomeo; Hewison, Tim; Govaerts, Yves; Quast, Ralf; Giering, Ralf; Schulz, Jörg
2016-04-01
The series of Meteosat First Generation (MFG) satellites provides a unique opportunity for the monitoring of climate variability and of possible changes. Six satellites were operationally employed, all equipped with identical MVIRI radiometers. The time series now covers, for some parts of the globe, more than 34 years with a high temporal (30 minutes) and spatial (2.5 x 2.5 km²) resolution for the visible band. However, subtle differences between the radiometers in terms of the silicon photodiodes, sensor spectral ageing and variability due to other sources of uncertainty have limited the thorough exploitation of this unique time series so far. For instance, upper-level wind fields and surface albedo data records could be derived and used for assimilation into Numerical Weather Prediction models for re-analysis and for climate studies, respectively. However, the derivation of aerosol depth with high quality has not been possible so far. In order to enhance the quality of MVIRI reflectances and enable an aerosol data record and an improved surface albedo data record, it is necessary to perform a re-calibration of the MVIRI instruments' visible bands that corrects for the above-mentioned effects and results in an improved Fundamental Climate Data Record (FCDR) of Meteosat/MVIRI radiance data. This re-calibration has to be consistent over the entire period, to consider the ageing of the sensors' spectral response functions, and to add accurate information about the combined uncertainty of the radiances. Therefore the uncertainties from all different sources have to be thoroughly investigated and propagated into the final product. This presentation aims to introduce all sources of uncertainty present in MVIRI visible data and points out the major mechanisms of uncertainty propagation. An outlook will be given on the enhancements of the calibration procedure as it will be carried out at EUMETSAT in the course of the EU Horizon 2020 FIDUCEO project (FIDelity and Uncertainty in Climate data
DEFF Research Database (Denmark)
Abdul Samad, Noor Asma Fazli Bin; Sin, Gürkan; Gernaey, Krist;
2013-01-01
This paper presents the application of uncertainty and sensitivity analysis as part of a systematic model-based process monitoring and control (PAT) system design framework for crystallization processes. For the uncertainty analysis, the Monte Carlo procedure is used to propagate input uncertainty...
Birrell, Jane; Meares, Kevin; Wilkinson, Andrew; Freeston, Mark
2011-11-01
Since its emergence in the early 1990s, a narrow but concentrated body of research has developed examining the role of intolerance of uncertainty (IU) in worry, and yet we still know little about its phenomenology. In an attempt to clarify our understanding of this construct, this paper traces the way in which our understanding and definition of IU have evolved throughout the literature. This paper also aims to further our understanding of IU by exploring the latent variables measured by the Intolerance of Uncertainty Scale (IUS; Freeston, Rheaume, Letarte, Dugas & Ladouceur, 1994). A review of the literature surrounding IU confirmed that the current definitions are categorical and lack specificity. A critical review of existing factor analytic studies was carried out in order to determine the underlying factors measured by the IUS. Systematic searches yielded 9 papers for review. Two factors with 12 consistent items emerged throughout the exploratory studies, and the stability of models containing these two factors was demonstrated in subsequent confirmatory studies. It is proposed that these factors represent (i) desire for predictability and an active engagement in seeking certainty, and (ii) paralysis of cognition and action in the face of uncertainty. It is suggested that these factors may represent approach and avoidance responses to uncertainty. Further research is required to confirm the construct validity of these factors and to determine the stability of this structure within clinical samples.
International Nuclear Information System (INIS)
The present document constitutes my Habilitation thesis report. It reviews my scientific activity over the last twelve years, from my PhD thesis to the work completed as a research engineer at CEA Cadarache. The two main chapters of this document correspond to two different research fields, both referring to the treatment of uncertainty in engineering problems. The first chapter establishes a synthesis of my work on high-frequency wave propagation in random media. It relates more specifically to the study of the statistical fluctuations of acoustic wave travel-times in random and/or turbulent media. The new results mainly concern the introduction of the statistical anisotropy of the velocity field into the analytical expressions of the travel-time statistical moments as functions of those of the velocity field. This work was primarily driven by requirements in geophysics (oil exploration and seismology). The second chapter concerns probabilistic techniques for studying the effect of input variable uncertainties in numerical models. My main applications in this chapter relate to the nuclear engineering domain, which offers a large variety of uncertainty problems to be treated. First of all, a complete synthesis is carried out on the statistical methods of sensitivity analysis and global exploration of numerical models. The construction and use of a meta-model (an inexpensive mathematical function replacing an expensive computer code) are then illustrated by my work on the Gaussian process model (kriging). Two additional topics are finally approached: the estimation of high quantiles of a computer code output, and the analysis of stochastic computer codes. We conclude this report with some perspectives on numerical simulation and the use of predictive models in industry. This context is extremely positive for future research and application developments. (author)
Institute of Scientific and Technical Information of China (English)
Anonymous
2010-01-01
For a structural system with random basic variables as well as fuzzy basic variables, the propagation of uncertainty from the two kinds of basic variables to the response of the structure is investigated. A novel algorithm for obtaining the membership function of fuzzy reliability is presented, based on a saddlepoint approximation (SA) based line sampling method. In the presented method, the value domain of the fuzzy basic variables under a given membership level is first obtained according to their membership functions. In the value domain of the fuzzy basic variables corresponding to the given membership level, bounds on the reliability of the structure response satisfying the safety requirement are obtained by employing the SA based line sampling method in the reduced space of the random variables. In this way the uncertainty of the basic variables is propagated to the safety measure of the structure, and the fuzzy membership function of the reliability is obtained. Compared to the direct Monte Carlo method for propagating the uncertainties of the fuzzy and random basic variables, the presented method can considerably improve computational efficiency with acceptable precision. The presented method also has wider applicability than the transformation method, because it does not restrict the distribution of the variables or require an explicit expression of the performance function, and no approximation of the performance function is made during the computing process. Additionally, the presented method can easily treat performance functions with cross terms of the fuzzy and random variables, which are not suitably approximated by the existing transformation methods. Several examples are provided to illustrate the advantages of the presented method.
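The alpha-cut construction of the reliability membership function can be sketched as follows. The inner estimator here is plain Monte Carlo, a stand-in for the SA based line sampling of the paper, and evaluating only the interval endpoints assumes the reliability is monotonic in the fuzzy variable; the performance function and fuzzy number are invented for illustration.

```python
import random

def reliability(perf, fuzzy_val, n=4000, seed=0):
    """Stand-in estimator: plain Monte Carlo over the random variable.
    (The paper uses a saddlepoint-approximation based line sampling method.)"""
    rng = random.Random(seed)
    return sum(perf(rng.gauss(0, 1), fuzzy_val) > 0 for _ in range(n)) / n

def membership_of_reliability(perf, tri, levels=(0.0, 0.5, 1.0)):
    """Bounds on reliability at each membership level via an alpha-cut sweep.
    Only the interval endpoints are evaluated, which assumes monotonicity
    of the reliability in the fuzzy variable."""
    lo, m, hi = tri
    out = {}
    for a in levels:
        cut = (lo + a * (m - lo), hi - a * (hi - m))   # alpha-cut interval
        rs = [reliability(perf, v) for v in cut]
        out[a] = (min(rs), max(rs))
    return out

# Performance function g(x, f) = f - x: safe when the random load x stays
# below the fuzzy capacity f ~ triangular(1.0, 1.5, 2.0)
g = lambda x, f: f - x
bounds = membership_of_reliability(g, (1.0, 1.5, 2.0))
print(bounds)
```

As the membership level rises from 0 to 1 the reliability interval tightens toward a single value, which is the shape of the fuzzy membership function the paper constructs.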
Uncertainty Propagation Analysis for the Monte Carlo Time-Dependent Simulations
Energy Technology Data Exchange (ETDEWEB)
Shaukata, Nadeem; Shim, Hyung Jin [Seoul National University, Seoul (Korea, Republic of)
2015-10-15
In this paper, a conventional method to control the neutron population for super-critical systems is implemented. Instead of considering cycles, the simulation is divided into time intervals. At the end of each time interval, neutron population control is applied to the banked neutrons. Randomly selected neutrons are discarded until the size of the neutron population matches the initial neutron histories at the beginning of the time simulation. A time-dependent simulation mode has also been implemented in the development version of the SERPENT 2 Monte Carlo code. In this mode, a sequential population control mechanism has been proposed for the modeling of prompt super-critical systems. A Monte Carlo method has been used in the TART code for dynamic criticality calculations. For super-critical systems, the neutron population is allowed to grow over a period of time, and is then uniformly combed to return it to the neutron population at the beginning of the time boundary. In this study, the conventional time-dependent Monte Carlo (TDMC) algorithm is implemented. For super-critical systems, the neutron population grows exponentially in the estimation of the neutron density tally, and the number of neutrons being tracked can exceed the memory of the computer. In order to control this exponential growth at the end of each time boundary, a conventional time cut-off population control strategy is included in TDMC. A scale factor is introduced to tally the desired neutron density at the end of each time boundary. The main purpose of this paper is the quantification of uncertainty propagation in neutron densities at the end of each time boundary for super-critical systems. This uncertainty is caused by the introduction of the scale factor. The effectiveness of TDMC is examined for a one-group infinite homogeneous problem (the rod model) and a two-group infinite homogeneous problem. The desired neutron density is tallied by the introduction of
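The population-control idea (comb the banked neutrons back to the initial population at each time boundary and carry a scale factor so tallies still reflect the true density) can be sketched with a toy super-critical model. This is a schematic stand-in, not SERPENT or TART; the multiplication model and numbers are invented.

```python
import random

def comb_population(neutrons, target, rng):
    """Uniform combing: keep `target` neutrons and return the factor by
    which the true population exceeded the kept sample."""
    factor = len(neutrons) / target
    return rng.sample(neutrons, target), factor

rng = random.Random(7)
pop = list(range(1000))   # initial neutron histories
scale = 1.0
for step in range(5):
    # transport stand-in: each neutron yields ~1.2 neutrons per interval
    pop = [n for n in pop for _ in range(2) if rng.random() < 0.6]
    pop, f = comb_population(pop, 1000, rng)
    scale *= f            # the scale factor restores the true neutron density
print(round(scale, 2))    # close to 1.2**5, i.e. about 2.5
```

The tracked population stays bounded at 1000 histories while the accumulated scale factor carries the exponential growth; the stochastic noise in that factor is the uncertainty source the paper quantifies.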
International Nuclear Information System (INIS)
Statistical approaches to uncertainty quantification and sensitivity analysis are very important in estimating the safety margins for an engineering design application. This paper presents a system analysis and optimization toolkit developed by Korea Atomic Energy Research Institute (KAERI), which includes multiple packages of the sensitivity analysis and uncertainty quantification algorithms. In order to reduce the computing demand, multiple compute resources including multiprocessor computers and a network of workstations are simultaneously used. A Graphical User Interface (GUI) was also developed within the parallel computing framework for users to readily employ the toolkit for an engineering design and optimization problem. The goal of this work is to develop a GUI framework for engineering design and scientific analysis problems by implementing multiple packages of system analysis methods in the parallel computing toolkit. This was done by building an interface between an engineering simulation code and the system analysis software packages. The methods and strategies in the framework were designed to exploit parallel computing resources such as those found in a desktop multiprocessor workstation or a network of workstations. Available approaches in the framework include statistical and mathematical algorithms for use in science and engineering design problems. Currently the toolkit has 6 modules of the system analysis methodologies: deterministic and probabilistic approaches of data assimilation, uncertainty propagation, Chi-square linearity test, sensitivity analysis, and FFTBM
eHabitat - A web service for habitat similarity modeling with uncertainty propagation
Olav Skøien, Jon; Schulz, Michael; Dubois, Gregoire; Heuvelink, Gerard
2013-04-01
We are developing eHabitat, a Web Processing Service (WPS) that can model current and future habitat similarity for point observations, polygons defining an existing or hypothetical protected area, or sets of polygons defining the estimated ranges for one or more species. A range of Web Clients makes it easy to use the WPS with predefined data for predictions of the current or future climatic niche. The WPS is also able to document the propagating uncertainties of the input data to the estimated similarity maps, if such information is available. The presentation will focus on the architecture of the service and the clients, on how uncertainties are handled by the model and on the presentation of uncertain results. The idea behind eHabitat is that one can classify the similarity between a reference geometry (point locations or polygons) and the surroundings based on one or more species distribution models (SDMs) and a set of ecological indicators. The ecological indicators are typically raster bioclimatic data (DEMs, climate data, vegetation maps …) describing important features for the species or habitats of interest. All these data sets have uncertainties, which can usually be described by treating the value of each pixel as a mean with a standard deviation. As the standard deviation will also be pixel based, it can be given as rasters. If standard deviations of the rasters are not available in the input data, these can also be guesstimated by the service to allow end-users to generate uncertainty scenarios. Rasters of standard deviations are used for simulating a set of spatially correlated maps of the input data, which are then used in the SDM. Additionally, the service can do bootstrapping samples from the input data, which is one of the classic methods for assessing the uncertainty of SDMs. The two methods can also be combined, a convenient solution considering that simulation is a computationally much slower process than bootstrapping. Uncertainties in the results
On the uncertainty of stream networks derived from elevation data: the error propagation approach
Directory of Open Access Journals (Sweden)
T. Hengl
2010-01-01
DEM error propagation methodology is extended to the derivation of vector-based objects (stream networks) using geostatistical simulations. First, point-sampled elevations are used to fit a variogram model. Next, 100 DEM realizations are generated using conditional sequential Gaussian simulation; the stream network map is extracted for each of these realizations, and the collection of stream networks is analyzed to quantify the error propagation. At each grid cell, the probability of the occurrence of a stream and the propagated error are estimated. The method is illustrated using two small data sets: Baranja hill (30 m grid cell size; 16 512 pixels; 6367 sampled elevations) and Zlatibor (30 m grid cell size; 15 000 pixels; 2051 sampled elevations). All computations are run in the open source software for statistical computing R: package geoR is used to fit the variogram; package gstat is used to run the sequential Gaussian simulation; streams are extracted using the open source GIS SAGA via the RSAGA library. The resulting stream error map (information entropy of a Bernoulli trial) clearly depicts areas where the extracted stream network is least precise, usually areas of low local relief that are slightly concave. In both cases, significant parts of the study area (17.3% for Baranja hill; 6.2% for Zlatibor) show a high error (H>0.5) of locating streams. By correlating the propagated uncertainty of the derived stream network with various land surface parameters, the sampling of height measurements can be optimized so that delineated streams satisfy a required accuracy level. A remaining issue to be tackled is the computational burden of geostatistical simulations: this framework is at the moment limited to small to moderate data sets with several hundreds of points. Scripts and data sets used in this article are available on-line via the http://www.geomorphometry.org/ website and can be easily adopted
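The per-cell error measure used here, the information entropy of a Bernoulli trial, is straightforward to compute from the stack of extracted stream grids. The sketch below uses a synthetic three-cell example rather than the geoR/gstat/SAGA pipeline of the paper.

```python
import math

def bernoulli_entropy(p):
    """Information entropy (bits) of a Bernoulli trial; zero at p = 0 or 1."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def stream_error_map(realizations):
    """Per-cell stream occurrence probability and propagated error, given
    0/1 stream grids extracted from each simulated DEM realization."""
    n = len(realizations)
    cells = range(len(realizations[0]))
    probs = [sum(r[c] for r in realizations) / n for c in cells]
    return probs, [bernoulli_entropy(p) for p in probs]

# Three cells observed across 100 simulated DEM realizations:
# always a stream, never a stream, and a stream in half the realizations
grids = [[1, 0, 1 if i < 50 else 0] for i in range(100)]
probs, H = stream_error_map(grids)
print(probs, H)  # entropy peaks (1 bit) where p = 0.5
```

Cells with H near 1 bit are those where the realizations disagree most, which is exactly where the paper flags the extracted network as least precise (typically flat, slightly concave terrain).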
A framework for propagation of uncertainties in the Kepler data analysis pipeline
Clarke, Bruce D.; Allen, Christopher; Bryson, Stephen T.; Caldwell, Douglas A.; Chandrasekaran, Hema; Cote, Miles T.; Girouard, Forrest; Jenkins, Jon M.; Klaus, Todd C.; Li, Jie; Middour, Chris; McCauliff, Sean; Quintana, Elisa V.; Tenenbaum, Peter; Twicken, Joseph D.; Wohler, Bill; Wu, Hayley
2010-07-01
The Kepler space telescope is designed to detect Earth-like planets around Sun-like stars using transit photometry by simultaneously observing more than 100,000 stellar targets nearly continuously over a three-and-a-half year period. The 96.4-megapixel focal plane consists of 42 Charge-Coupled Devices (CCD), each containing two 1024 x 1100 pixel arrays. Since cross-correlations between calibrated pixels are introduced by common calibrations performed on each CCD, downstream data processing requires access to the calibrated pixel covariance matrix to properly estimate uncertainties. However, the prohibitively large covariance matrices corresponding to the ~75,000 calibrated pixels per CCD preclude calculating and storing the covariance in standard lock-step fashion. We present a novel framework used to implement standard Propagation of Uncertainties (POU) in the Kepler Science Operations Center (SOC) data processing pipeline. The POU framework captures the variance of the raw pixel data and the kernel of each subsequent calibration transformation, allowing the full covariance matrix of any subset of calibrated pixels to be recalled on the fly at any step in the calibration process. Singular Value Decomposition (SVD) is used to compress and filter the raw uncertainty data as well as any data-dependent kernels. This combination of POU framework and SVD compression allows the downstream consumer access to the full covariance matrix of any subset of the calibrated pixels which is traceable to the pixel-level measurement uncertainties, all without having to store, retrieve, and operate on prohibitively large covariance matrices. We describe the POU framework and SVD compression scheme and its implementation in the Kepler SOC pipeline.
A Framework for Propagation of Uncertainties in the Kepler Data Analysis Pipeline
Clarke, Bruce D.; Allen, Christopher; Bryson, Stephen T.; Caldwell, Douglas A.; Chandrasekaran, Hema; Cote, Miles T.; Girouard, Forrest; Jenkins, Jon M.; Klaus, Todd C.; Li, Jie; Middour, Chris; McCauliff, Sean; Quintana, Elisa V.; Tenenbaum, Peter; Twicken, Joseph D.; Wohler, Bill; Wu, Hayley
2010-01-01
The Kepler space telescope is designed to detect Earth-like planets around Sun-like stars using transit photometry by simultaneously observing 100,000 stellar targets nearly continuously over a three-and-a-half-year period. The 96-megapixel focal plane consists of 42 charge-coupled devices (CCD), each containing two 1024 x 1100 pixel arrays. Cross-correlations between calibrated pixels are introduced by common calibrations performed on each CCD, requiring downstream data products to access the calibrated pixel covariance matrix in order to properly estimate uncertainties. The prohibitively large covariance matrices corresponding to the 75,000 calibrated pixels per CCD preclude calculating and storing the covariance in standard lock-step fashion. We present a novel framework used to implement standard propagation of uncertainties (POU) in the Kepler Science Operations Center (SOC) data processing pipeline. The POU framework captures the variance of the raw pixel data and the kernel of each subsequent calibration transformation, allowing the full covariance matrix of any subset of calibrated pixels to be recalled on the fly at any step in the calibration process. Singular value decomposition (SVD) is used to compress and low-pass filter the raw uncertainty data as well as any data-dependent kernels. The combination of the POU framework and SVD compression provides downstream consumers of the calibrated pixel data access to the full covariance matrix of any subset of the calibrated pixels, traceable to pixel-level measurement uncertainties, without having to store, retrieve, and operate on prohibitively large covariance matrices. We describe the POU framework and SVD compression scheme and their implementation in the Kepler SOC pipeline.
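The core idea of the POU framework — recalling the covariance of a subset of calibrated pixels from per-pixel raw variances and stored calibration kernels, rather than from a stored full covariance matrix — can be sketched as follows. This is a minimal illustration with small dense matrices, not the Kepler SOC implementation, and it omits the SVD compression step:

```python
def matmul(A, B):
    """Naive matrix product for small dense matrices (lists of lists)."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def covariance_of_subset(raw_var, kernels, subset):
    """Recall the covariance of a subset of calibrated pixels on the fly.

    raw_var : per-pixel variances of the raw data (diagonal covariance assumed).
    kernels : list of linear calibration transforms (matrices), applied in order.
    subset  : indices of the calibrated pixels of interest.

    The full covariance after calibration is K diag(raw_var) K^T, with K the
    composed kernel; only the requested rows/columns are materialised.
    """
    n = len(raw_var)
    # Compose the calibration kernels: K = K_m ... K_2 K_1
    K = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for kern in kernels:
        K = matmul(kern, K)
    # C[i][j] = sum_k K[i][k] * raw_var[k] * K[j][k], restricted to the subset
    return [[sum(K[i][k] * raw_var[k] * K[j][k] for k in range(n))
             for j in subset] for i in subset]
```

A smoothing-type kernel immediately shows how common calibrations introduce cross-correlations between previously independent pixels.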
Propagation of Isotopic Bias and Uncertainty to Criticality Safety Analyses of PWR Waste Packages
Energy Technology Data Exchange (ETDEWEB)
Radulescu, Georgeta [ORNL
2010-06-01
predicted spent fuel compositions (i.e., determine the penalty in reactivity due to isotopic composition bias and uncertainty) for use in disposal criticality analysis employing burnup credit. The method used in this calculation to propagate the isotopic bias and bias-uncertainty values to k_eff is the Monte Carlo uncertainty sampling method. The development of this report is consistent with 'Test Plan for: Isotopic Validation for Postclosure Criticality of Commercial Spent Nuclear Fuel'. This calculation report has been developed in support of burnup credit activities for the proposed repository at Yucca Mountain, Nevada, and provides a methodology that can be applied to other criticality safety applications employing burnup credit.
Analytical Algorithms to Quantify the Uncertainty in Remaining Useful Life Prediction
Sankararaman, Shankar; Saxena, Abhinav; Daigle, Matthew; Goebel, Kai
2013-01-01
This paper investigates the use of analytical algorithms to quantify the uncertainty in the remaining useful life (RUL) estimate of components used in aerospace applications. The prediction of RUL is affected by several sources of uncertainty, and it is important to systematically quantify their combined effect by computing the uncertainty in the RUL prediction in order to aid risk assessment, risk mitigation, and decision-making. While sampling-based algorithms have been conventionally used for quantifying the uncertainty in RUL, analytical algorithms are computationally cheaper and are sometimes better suited for online decision-making. While exact analytical algorithms are available only for certain special cases (e.g., linear models with Gaussian variables), effective approximations can be made using the first-order second-moment method (FOSM), the first-order reliability method (FORM), and the inverse first-order reliability method (inverse FORM). These methods can be used not only to calculate the entire probability distribution of RUL but also to obtain probability bounds on RUL. This paper explains these three methods in detail and illustrates them using the state-space model of a lithium-ion battery.
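Of the three approximations named above, FOSM is the simplest to illustrate: the output mean is the model evaluated at the input means, and the output variance combines squared gradient components with the input variances via a first-order Taylor expansion. The sketch below uses a hypothetical RUL model (remaining charge over mean discharge rate), not the paper's battery state-space model:

```python
import math

def fosm(g, mu, sigma, h=1e-6):
    """First-order second-moment (FOSM) approximation of the mean and standard
    deviation of Y = g(X) for independent inputs X_i with mean mu_i and
    standard deviation sigma_i:

        mean(Y) ~ g(mu)
        var(Y)  ~ sum_i (dg/dx_i)^2 * sigma_i^2   (Taylor expansion at mu)

    Gradients are estimated by central finite differences with step h.
    """
    mean = g(mu)
    var = 0.0
    for i, s in enumerate(sigma):
        xp, xm = list(mu), list(mu)
        xp[i] += h
        xm[i] -= h
        grad_i = (g(xp) - g(xm)) / (2 * h)
        var += (grad_i * s) ** 2
    return mean, math.sqrt(var)

# Hypothetical RUL model (for illustration only): remaining charge [Ah]
# divided by mean discharge rate [A] gives hours of remaining life.
rul = lambda x: x[0] / x[1]
mean, std = fosm(rul, mu=[10.0, 2.0], sigma=[0.5, 0.1])
```

FOSM only returns the first two moments; FORM and inverse FORM refine this by also approximating tail probabilities of the RUL distribution.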
Identifying the Uncertainty in Physician Practice Location through Spatial Analytics and Text Mining
Directory of Open Access Journals (Sweden)
Xuan Shi
2016-09-01
In response to the widespread concern about the adequacy, distribution, and disparity of access to a health care workforce, the correct identification of physicians' practice locations is critical for access to public health services. In prior literature, little effort has been made to detect and resolve the uncertainty about whether the address provided by a physician in a survey is a practice address or a home address. This paper introduces how to identify the uncertainty in a physician's practice location through spatial analytics, text mining, and visual examination. While land use and zoning codes, embedded within the parcel datasets, help to differentiate residential areas from other types, spatial analytics may have certain limitations in matching and comparing physician and parcel datasets with different uncertainty issues, which may lead to unforeseen results. Handling and matching the string components between physicians' addresses and the addresses of the parcels could identify the spatial uncertainty and instability and derive a more reasonable relationship between the different datasets. Visual analytics and examination further help to clarify otherwise undetectable patterns. This research will have a broader impact on federal and state initiatives and policies to address both the insufficiency and maldistribution of the health care workforce and improve accessibility to public health services.
Identifying the Uncertainty in Physician Practice Location through Spatial Analytics and Text Mining
Shi, Xuan; Xue, Bowei; Xierali, Imam M.
2016-01-01
In response to the widespread concern about the adequacy, distribution, and disparity of access to a health care workforce, the correct identification of physicians' practice locations is critical for access to public health services. In prior literature, little effort has been made to detect and resolve the uncertainty about whether the address provided by a physician in a survey is a practice address or a home address. This paper introduces how to identify the uncertainty in a physician's practice location through spatial analytics, text mining, and visual examination. While land use and zoning codes, embedded within the parcel datasets, help to differentiate residential areas from other types, spatial analytics may have certain limitations in matching and comparing physician and parcel datasets with different uncertainty issues, which may lead to unforeseen results. Handling and matching the string components between physicians' addresses and the addresses of the parcels could identify the spatial uncertainty and instability and derive a more reasonable relationship between the different datasets. Visual analytics and examination further help to clarify otherwise undetectable patterns. This research will have a broader impact on federal and state initiatives and policies to address both the insufficiency and maldistribution of the health care workforce and improve accessibility to public health services. PMID:27657100
Vecherin, S.; Ketcham, S.; Parker, M.; Picucci, J.
2015-12-01
To make a prediction for the propagation of seismic pulses, one needs to specify the physical properties and subsurface ground structure of the site. This information is frequently unknown or estimated with significant uncertainty. We developed a methodology for the ensemble prediction of the propagation of weak seismic pulses over short ranges. The ranges of interest are 10-100 m, and the pulse bandwidth is up to 200 Hz. Instead of specifying fixed values for the viscoelastic site properties, the methodology operates with probability distribution functions of the inputs. This yields ensemble realizations of the pulse at specified locations, from which mean, median, and maximum likelihood predictions can be made and confidence intervals estimated. Starting with the site's Vs30, the methodology creates an ensemble of plausible vertically stratified Vs profiles for the site. The number and thickness of the layers are modeled using an inhomogeneous Poisson process, and the Vs values in the layers are modeled by a correlated Gaussian process. The Poisson expectation rate and the Vs correlation between adjacent layers take into account layer depth and thickness, and are specific to a site class, as defined by the Federal Emergency Management Agency (FEMA). A high-fidelity three-dimensional thin-layer method (TLM) is used to yield an ensemble of frequency response functions. Comparison with experiments revealed that measured signals are not always within the predicted ensemble. Variance-based global sensitivity analysis has shown that the most significant parameter in the TLM for the prediction of the pulse energy is the shear quality factor, Qs. Strategies for accounting for significant uncertainty in this parameter and improving the accuracy of the ensemble predictions for a specific site are investigated and discussed.
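A minimal sketch of the layered-profile sampler described above might look as follows. For simplicity it uses a homogeneous Poisson rate and an AR(1) correlation between adjacent layers, whereas the methodology above makes both depth- and site-class-dependent; all parameter values are illustrative:

```python
import random

def sample_vs_profile(total_depth, rate, vs_mean, vs_sd, rho, rng):
    """Draw one plausible vertically stratified Vs profile (a simplified sketch
    of the layering model described above; parameter values are illustrative).

    Layer interfaces follow a Poisson process with expectation `rate`
    (interfaces per metre), and layer Vs values follow a correlated Gaussian
    process: vs_next = vs_mean + rho * (vs_prev - vs_mean) + noise.
    """
    # Exponential inter-arrival depths give Poisson-distributed interfaces.
    depths, d = [], 0.0
    while True:
        d += rng.expovariate(rate)
        if d >= total_depth:
            break
        depths.append(d)
    # AR(1)-correlated Vs values across layers (stationary variance vs_sd^2).
    vs = [rng.gauss(vs_mean, vs_sd)]
    for _ in depths:
        innov = rng.gauss(0.0, vs_sd * (1 - rho ** 2) ** 0.5)
        vs.append(vs_mean + rho * (vs[-1] - vs_mean) + innov)
    return depths, vs

# One realization: a 30 m column, ~1 interface per 5 m, Vs ~ 400 +/- 60 m/s.
rng = random.Random(42)
interfaces, vs_values = sample_vs_profile(30.0, 0.2, 400.0, 60.0, 0.6, rng)
```

Repeating the draw many times produces the ensemble of profiles that is then pushed through the TLM to obtain an ensemble of frequency response functions.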
Tripathy, Rohit; Bilionis, Ilias; Gonzalez, Marcial
2016-09-01
Uncertainty quantification (UQ) tasks, such as model calibration, uncertainty propagation, and optimization under uncertainty, typically require several thousand evaluations of the underlying computer codes. To cope with the cost of simulations, one replaces the real response surface with a cheap surrogate based, e.g., on polynomial chaos expansions, neural networks, support vector machines, or Gaussian processes (GP). However, the number of simulations required to learn a generic multivariate response grows exponentially as the input dimension increases. This curse of dimensionality can only be addressed if the response exhibits some special structure that can be discovered and exploited. A wide range of physical responses exhibit a special structure known as an active subspace (AS). An AS is a linear manifold of the stochastic space characterized by maximal response variation. The idea is that one should first identify this low-dimensional manifold, project the high-dimensional input onto it, and then link the projection to the output. If the dimensionality of the AS is low enough, then learning the link function is a much easier problem than the original problem of learning a high-dimensional function. The classic approach to discovering the AS requires gradient information, a fact that severely limits its applicability. Furthermore, and partly because of its reliance on gradients, it is not able to handle noisy observations. The latter is an essential trait if one wants to be able to propagate uncertainty through stochastic simulators, e.g., through molecular dynamics codes. In this work, we develop a probabilistic version of AS which is gradient-free and robust to observational noise. Our approach relies on a novel Gaussian process regression with built-in dimensionality reduction. In particular, the AS is represented as an orthogonal projection matrix that serves as yet another covariance function hyper-parameter to be estimated from the data. To train the
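For contrast with the gradient-free method proposed here, the classic gradient-based active subspace discovery that the abstract refers to can be sketched as follows: eigendecompose the average outer product of gradients and keep the dominant eigenvectors. The ridge function below is a standard toy example, not from the paper:

```python
import math
import random

def dominant_eigvec(C, iters=200):
    """Power iteration: dominant eigenvector of a small symmetric matrix."""
    n = len(C)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(C[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v

def active_subspace_direction(grad_f, dim, n_samples, rng):
    """Classic (gradient-based) active subspace discovery: estimate
    C = E[grad f grad f^T] by Monte Carlo and return its top eigenvector."""
    C = [[0.0] * dim for _ in range(dim)]
    for _ in range(n_samples):
        x = [rng.gauss(0.0, 1.0) for _ in range(dim)]
        g = grad_f(x)
        for i in range(dim):
            for j in range(dim):
                C[i][j] += g[i] * g[j] / n_samples
    return dominant_eigvec(C)

# Ridge function f(x) = exp(0.7 x1 + 0.3 x2): its gradients all point along w,
# so the active subspace is the one-dimensional span of w.
w = [0.7, 0.3]
grad = lambda x: [wi * math.exp(w[0] * x[0] + w[1] * x[1]) for wi in w]
rng = random.Random(0)
v = active_subspace_direction(grad, 2, 500, rng)

wnorm = (w[0] ** 2 + w[1] ** 2) ** 0.5
alignment = abs(v[0] * w[0] + v[1] * w[1]) / wnorm  # |cos| between v and w
```

The gradient requirement is exactly the limitation the abstract points out: for a black-box or stochastic simulator, `grad_f` is unavailable or noisy, which motivates the gradient-free GP formulation.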
Study of Gaussian and Bessel beam propagation using a new analytic approach
Dartora, C. A.; Nobrega, K. Z.
2012-03-01
The main feature of Bessel beams realized in practice is their ability to resist diffractive effects over distances exceeding the usual diffraction length. The theory and experimental demonstration of such waves can be traced back to the seminal work of Durnin and co-workers in 1987. Despite that fact, to the best of our knowledge, the study of the propagation of apertured Bessel beams has found no solution in closed analytic form, and it often leads to the numerical evaluation of diffraction integrals, which can be very awkward. In the context of paraxial optics, wave propagation in lossless media is described by an equation similar to the non-relativistic Schrödinger equation of quantum mechanics, but with the time t of quantum mechanics replaced by the longitudinal coordinate z. Thus, the same mathematical methods can be employed in both cases. Using Bessel functions of the first kind as basis functions in a Hilbert space, here we present a new approach in which the optical wave field can be expanded in a series, allowing one to obtain analytic expressions for the propagation of any given initial field distribution. To demonstrate the robustness of the method, two cases were taken into account: Gaussian and zeroth-order Bessel beam propagation.
Brault, A; Lucor, D
2016-01-01
This work aims at quantifying the effect of inherent uncertainties from cardiac output on the sensitivity of a human compliant arterial network response based on stochastic simulations of a reduced-order pulse wave propagation model. A simple pulsatile output form is utilized to reproduce the most relevant cardiac features with a minimum number of parameters associated with left ventricle dynamics. Another source of critical uncertainty is the spatial heterogeneity of the aortic compliance, which plays a key role in the propagation and damping of pulse waves generated at each cardiac cycle. A continuous representation of the aortic stiffness in the form of a generic random field of prescribed spatial correlation is then considered. Resorting to a stochastic sparse pseudospectral method, we investigate the spatial sensitivity of the pulse pressure and wave reflection magnitude with respect to the different model uncertainties. Results indicate that uncertainties related to the shape and magnitude of th...
International Nuclear Information System (INIS)
This paper presents a 3D uncertainty propagation methodology and its application to the case of a small heterogeneous reactor system (the 'slab' reactor benchmark). Key neutron parameters (keff, reactivity worth, local power, ...) and their corresponding cross-section sensitivities are derived by using the French calculation route APOLLO2 (2D transport lattice code), CRONOS2 (3D diffusion code) and TRIPOLI4 (3D Monte Carlo reference calculations) with consistent JEF2.2 cross-section libraries (pointwise or CEA93 multigroup cross-sections) and adapted perturbation methods (the heuristically based generalized perturbation theory implemented in the framework of the CRONOS2 diffusion method, or the correlation techniques used in Monte Carlo simulations). The investigation of the slab system underlined notable differences between the 2D and 3D computed sensitivity coefficients and, consequently, the a priori uncertainties (when sensitivity coefficients are combined with covariance matrices, the discrepancies rise up to 20% due to thermal and fast flux variations). In addition, the induced local power effect of nuclear data perturbations (JEF-2.2 vs. the Leal-Derrien-Wright-Larson 235U evaluation) was correctly estimated with the standard 3D CRONOS2 depletion calculations. For industrial applications (PWR neutron parameter optimization problems, R&D studies dealing with the design of future fission reactors, ...), the same calculation route could be advantageously applied to infer the target accuracies (knowing the required safety criteria) of future nuclear data evaluations (the JEFF-3 data library, for instance). (author)
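The a priori uncertainty referred to above, obtained by combining sensitivity coefficients with covariance matrices, follows the standard "sandwich rule" of nuclear data uncertainty propagation. A minimal sketch with illustrative two-group numbers (not benchmark values):

```python
def sandwich_uncertainty(S, M):
    """Sandwich rule: the relative variance of an integral parameter (e.g. keff)
    is S^T M S, where S is the vector of cross-section sensitivity coefficients
    and M is the relative covariance matrix of the nuclear data."""
    n = len(S)
    return sum(S[i] * M[i][j] * S[j] for i in range(n) for j in range(n))

# Illustrative two-group example (hypothetical numbers): keff sensitivities to
# two cross sections, and their relative covariance matrix.
S = [0.3, -0.1]
M = [[4e-4, 1e-4],
     [1e-4, 9e-4]]
var = sandwich_uncertainty(S, M)        # relative variance of keff
std_pct = 100.0 * var ** 0.5            # relative standard deviation in %
```

The 2D/3D discrepancy noted in the abstract enters precisely through S: different flux solutions give different sensitivity vectors, and the sandwich rule amplifies that difference into the quoted up-to-20% spread in a priori uncertainty.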
Long-time uncertainty propagation using generalized polynomial chaos and flow map composition
International Nuclear Information System (INIS)
We present an efficient and accurate method for long-time uncertainty propagation in dynamical systems. Uncertain initial conditions and parameters are both addressed. The method approximates the intermediate short-time flow maps by spectral polynomial bases, as in the generalized polynomial chaos (gPC) method, and uses flow map composition to construct the long-time flow map. In contrast to the gPC method, this approach has spectral error convergence for both short and long integration times. The short-time flow map is characterized by small stretching and folding of the associated trajectories and hence can be well represented by a relatively low-degree basis. The composition of these low-degree polynomial bases then accurately describes the uncertainty behavior for long integration times. The key to the method is that the degree of the resulting polynomial approximation increases exponentially in the number of time intervals, while the number of polynomial coefficients either remains constant (for an autonomous system) or increases linearly in the number of time intervals (for a non-autonomous system). The findings are illustrated on several numerical examples including a nonlinear ordinary differential equation (ODE) with an uncertain initial condition, a linear ODE with an uncertain model parameter, and a two-dimensional, non-autonomous double gyre flow
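The composition idea can be illustrated on a scalar ODE with a known flow map. The sketch below uses a truncated Taylor polynomial for the short-time map rather than the spectral (gPC) basis of the paper, and serves only to show that composing low-degree short-time maps reproduces the long-time map accurately:

```python
def short_map_poly(x, dt):
    """Degree-3 polynomial approximation of the short-time flow map of
    dx/dt = -x**2, whose exact map is x -> x / (1 + x*dt). The low-degree
    polynomial stands in for the short-time spectral representation."""
    return x - dt * x ** 2 + dt ** 2 * x ** 3

def long_map(x0, dt, n_steps):
    """Long-time flow map built by composing the short-time map n_steps times;
    the effective polynomial degree grows exponentially with n_steps while
    only the short map's few coefficients are ever stored."""
    x = x0
    for _ in range(n_steps):
        x = short_map_poly(x, dt)
    return x

# Compose 100 short-time maps over [0, 1] and compare with the exact solution
# x(T) = x0 / (1 + x0*T). An uncertain initial condition would simply be
# sampled and pushed through long_map, with no re-integration of the ODE.
x0, T, n = 1.0, 1.0, 100
approx = long_map(x0, T / n, n)
exact = x0 / (1 + x0 * T)
```

The key property the paper exploits is visible here: the composed map is accurate over the whole interval even though each factor is only a low-degree polynomial valid for a short time.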
Energy Technology Data Exchange (ETDEWEB)
Han, Gi Young; Seo, Bo Kyun [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of); Kim, Do Hyun; Shin, Chang Ho; Kim, Song Hyun [Dept. of Nuclear Engineering, Hanyang University, Seoul (Korea, Republic of); Sun, Gwang Min [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2016-06-15
In analyzing residual radiation, researchers generally use a two-step Monte Carlo (MC) simulation. The first step (MC1) simulates neutron transport, and the second step (MC2) transports the decay photons emitted from the activated materials. In this process, the stochastic uncertainty estimated by MC2 appears only in the final result, but it is underestimated because the stochastic error generated in MC1 cannot be directly included in MC2. Hence, estimating the true stochastic uncertainty requires quantifying the degree to which the stochastic error in MC1 propagates. The brute-force technique is a straightforward method to estimate the true uncertainty; however, it is a costly way to obtain reliable results. Another method, called the adjoint-based method, can reduce the computational time needed to evaluate the true uncertainty, but it has its own limitations. To address those limitations, we propose a new strategy to estimate uncertainty propagation without any additional calculations in two-step MC simulations. To verify the proposed method, we applied it to activation benchmark problems and compared the results with those of previous methods. The results show that the proposed method increases applicability and user-friendliness while preserving accuracy in quantifying uncertainty propagation. We expect that the proposed strategy will contribute to efficient and accurate two-step MC calculations.
Directory of Open Access Journals (Sweden)
S.V. Bystrov
2016-05-01
Subject of Research. We present research results for the signal uncertainty problem that naturally arises for the developers of servomechanisms, including the analytical design of serial compensators delivering the required quality indexes for servomechanisms. Method. The problem was solved with the use of the Besekerskiy engineering approach, formulated in 1958. This gave the possibility to reduce the requirements on the input signal composition of servomechanisms to only two quantitative characteristics: maximum speed and maximum acceleration. Information about the input signal's maximum speed and acceleration allows introducing an equivalent harmonic input signal with calculated amplitude and frequency. In combination with the requirements on maximum tracking error, the amplitude and frequency of the equivalent harmonic input make it possible to estimate analytically the amplitude characteristic of the system with respect to error, and then to convert it to the amplitude characteristic of the open-loop system transfer function. While the Besekerskiy approach was previously used mainly with the apparatus of logarithmic characteristics, we use it for the analytical synthesis of serial compensators. Main Results. The proposed technique is used to create analytical representations of the "input–output" and "error–output" polynomial dynamic models of the designed system. In turn, the desired model of the designed system in the "error–output" form of analytical representation of transfer functions is the basis for the design of a serial compensator that delivers the desired placement of state matrix eigenvalues and, consequently, the necessary set of dynamic indexes for the designed system. The given procedure of serial compensator analytical design on the basis of the Besekerskiy engineering approach under conditions of signal uncertainty is illustrated by an example. Practical Relevance. The obtained theoretical results are
WFR-2D: an analytical model for PWAS-generated 2D ultrasonic guided wave propagation
Shen, Yanfeng; Giurgiutiu, Victor
2014-03-01
This paper presents WaveFormRevealer 2-D (WFR-2D), an analytical predictive tool for the simulation of 2-D ultrasonic guided wave propagation and interaction with damage. The design of structural health monitoring (SHM) systems and self-aware smart structures requires the exploration of a wide range of parameters to achieve the best detection and quantification of certain types of damage. Such need for parameter exploration on sensor dimension, location, and guided wave characteristics (mode type, frequency, wavelength, etc.) can be best satisfied with analytical models, which are fast and efficient. The analytical model was constructed based on the exact 2-D Lamb wave solution using Bessel and Hankel functions. Damage effects were inserted in the model by considering the damage as a secondary wave source with complex-valued directivity scattering coefficients containing both amplitude and phase information from the wave-damage interaction. The analytical procedure was coded in MATLAB, and a predictive simulation tool called WaveFormRevealer 2-D was developed. The wave-damage interaction coefficients (WDICs) were extracted from harmonic analysis of a local finite element model (FEM) with artificial non-reflective boundaries (NRB). The WFR-2D analytical simulation results were compared and verified with full-scale multiphysics finite element models and experiments with a scanning laser vibrometer. First, Lamb wave propagation in a pristine aluminum plate was simulated with WFR-2D, compared with finite element results, and verified by experiments. Then, an inhomogeneity was machined into the plate to represent damage. Analytical modeling was carried out and verified by finite element simulation and experiments. The paper finishes with conclusions and suggestions for future work.
Directory of Open Access Journals (Sweden)
Soheil Salahshour
2015-02-01
In this paper, we apply the concept of Caputo's H-differentiability, constructed based on the generalized Hukuhara difference, to solve the fuzzy fractional differential equation (FFDE) with uncertainty. This is in contrast to conventional solutions that either require a quantity of fractional derivatives of the unknown solution at the initial point (Riemann–Liouville) or a solution with increasing length of its support (Hukuhara difference). Then, in order to solve the FFDE analytically, we introduce the fuzzy Laplace transform of the Caputo H-derivative. To the best of our knowledge, there is limited research devoted to analytical methods for solving the FFDE under fuzzy Caputo fractional differentiability. An analytical solution is presented to confirm the capability of the proposed method.
Antoshchenkova, Ekaterina; Imbert, David; Richet, Yann; Bardet, Lise; Duluc, Claire-Marie; Rebour, Vincent; Gailler, Audrey; Hébert, Hélène
2016-04-01
The aim of this study is to assess the tsunamigenic potential of the Azores-Gibraltar Fracture Zone (AGFZ). This work is part of the French project TANDEM (Tsunamis in the Atlantic and English ChaNnel: Definition of the Effects through numerical Modeling; www-tandem.cea.fr); special attention is paid to the French Atlantic coasts. Structurally, the AGFZ region is complex and not well understood. However, many of its faults produce earthquakes with significant vertical slip, of a type that can result in a tsunami. We use the major tsunami event of the AGFZ to obtain a regional estimate of the tsunamigenic potential of this zone. The major reported event for this zone is the 1755 Lisbon event. There are large uncertainties concerning the source location and focal mechanism of this earthquake. Hence, a simple deterministic approach is not sufficient to cover, on the one hand, the whole AGFZ with its geological complexity and, on the other hand, the lack of information concerning the 1755 Lisbon tsunami. The parametric modeling environment Promethée (promethee.irsn.org/doku.php) was coupled to tsunami simulation software based on the shallow water equations in order to propagate uncertainties. Such a statistical point of view allows us to work with multiple hypotheses simultaneously. In our work we introduce the seismic source parameters in the form of distributions, thus producing a database of thousands of tsunami scenarios and tsunami wave height distributions. Exploring this tsunami scenario database, we present preliminary results for France. Tsunami wave heights (within one standard deviation of the mean) can be about 0.5-1 m for the Atlantic coast and approaching 0.3 m for the English Channel.
Hutton, Christopher; Brazier, Richard
2012-06-01
Advances in remote sensing technology, notably in airborne Light Detection And Ranging (LiDAR), have facilitated the acquisition of high-resolution topographic and vegetation datasets over increasingly large areas. Whilst such datasets may provide quantitative information on surface morphology and vegetation structure in riparian zones, existing approaches for processing raw LiDAR data perform poorly in riparian channel environments. A new algorithm for separating vegetation from topography in raw LiDAR data, and the performance of the Elliptical Inverse Distance Weighting (EIDW) procedure for interpolating the remaining ground points, are evaluated using data derived from a semi-arid ephemeral river. The filtering procedure, which first applies a threshold (either slope or elevation) to classify vegetation high points, and second applies a region-growing algorithm from these high points, avoids the classification of high channel banks as vegetation, preserving existing channel morphology for subsequent interpolation (2.90-9.21% calibration error; 4.53-7.44% error in evaluation for the slope threshold). EIDW, which accounts for surface anisotropy by converting the remaining elevation points to streamwise co-ordinates, can outperform isotropic interpolation (IDW) on channel banks; however, it performs less well in isotropic conditions and when the local anisotropy differs from that of the main channel. A key finding of this research is that filtering parameter uncertainty affects the performance of the interpolation procedure; resultant errors may propagate into the Digital Elevation Model (DEM) and subsequently derived products, such as Canopy Height Models (CHMs). Consequently, it is important that this uncertainty is assessed. Understanding and developing methods to deal with such errors is important to inform users of the true quality of laser scanning products, such that they can be used effectively in hydrological applications.
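As background for the EIDW evaluation above, the isotropic base case (IDW) can be sketched as follows; EIDW would additionally stretch the coordinates along the streamwise direction before computing distances, turning circular neighbourhoods into ellipses. The sample points are illustrative:

```python
def idw(points, query, power=2.0):
    """Isotropic inverse distance weighting: estimate elevation at `query`
    from (x, y, z) ground points. Each point's weight is 1 / distance**power,
    so nearby points dominate the estimate."""
    num = den = 0.0
    for x, y, z in points:
        d2 = (x - query[0]) ** 2 + (y - query[1]) ** 2
        if d2 == 0.0:
            return z  # query coincides with a sample point
        w = d2 ** (-power / 2.0)
        num += w * z
        den += w
    return num / den

# Three illustrative ground points (x, y, elevation) and a query location.
pts = [(0.0, 0.0, 10.0), (1.0, 0.0, 12.0), (0.0, 1.0, 14.0)]
z = idw(pts, (0.5, 0.0))
```

Because IDW is a convex combination of the sample elevations, the estimate always stays within the range of the neighbouring points, which is also why anisotropy handling (as in EIDW), and not the weighting itself, is what improves performance on elongated features such as channel banks.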
Korun, M
2001-11-01
Explicit expressions are derived describing the variance of the counting efficiency for a homogeneous cylindrical sample, placed coaxially on the detector's symmetry axis, in terms of the variances of the sample properties: thickness, density, and composition. In the derivation, the emission of gamma-rays parallel to the sample axis is assumed, and the efficiency for an area source is taken to be proportional to the solid angle subtended by the source from the effective point of interaction of the gamma-rays within the detector crystal. For the uncertainties of the mass attenuation coefficients, as well as for the uncertainties of the concentrations of admixtures to the sample matrix, constant relative uncertainties are assumed. PMID:11573802
Sega, Michela; Pennecchi, Francesca; Rinaldi, Sarah; Rolle, Francesca
2016-05-12
A proper evaluation of the uncertainty associated with the quantification of micropollutants in the environment, like Polycyclic Aromatic Hydrocarbons (PAHs), is crucial for the reliability of the measurement results. The present work describes a comparison between the uncertainty evaluation carried out according to the GUM uncertainty framework and the Monte Carlo (MC) method. This comparison was carried out starting from real data sets obtained from the quantification of benzo[a]pyrene (BaP) spiked on filters commonly used for airborne particulate matter sampling. BaP was chosen as the target analyte as it is listed in the current European legislation as a marker of the carcinogenic risk for the whole class of PAHs. The MC method, being useful for nonlinear models and when the resulting output distribution for the measurand is non-symmetric, is particularly suited to cases in which the results for intrinsically positive quantities are very small and the lower limit of a desired coverage interval, obtained according to the GUM uncertainty framework, can be dramatically close to zero, if not even negative. In the case under study, it was observed that the two approaches for the uncertainty evaluation provide different results for BaP masses in samples containing different masses of the analyte, with the MC method giving larger coverage intervals. In addition, in cases of analyte masses close to zero, the GUM uncertainty framework would give an even negative lower limit of the uncertainty coverage interval for the measurand, an unphysical result which is avoided when using the MC method. MC simulations, indeed, can be configured in such a way that only positive values are generated, thus obtaining a coverage interval for the measurand that is always positive. PMID:27114218
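The difference between the two approaches can be sketched on a hypothetical blank-corrected mass measurement (the values are illustrative, not the paper's BaP data): the GUM framework yields a symmetric interval whose lower limit can go negative, while a Monte Carlo evaluation with the physical constraint built into the model (here, a simple truncation at zero) keeps the interval non-negative:

```python
import math
import random

def gum_interval(y, u, k=1.96):
    """GUM uncertainty framework: symmetric coverage interval y +/- k*u."""
    return y - k * u, y + k * u

def mc_interval(model, draw_inputs, n=100_000, level=0.95, rng=None):
    """Monte Carlo evaluation (in the spirit of GUM Supplement 1): propagate
    input draws through the model and take empirical quantiles of the output."""
    rng = rng or random.Random(1)
    ys = sorted(model(draw_inputs(rng)) for _ in range(n))
    lo = ys[int((1 - level) / 2 * n)]
    hi = ys[int((1 + level) / 2 * n)]
    return lo, hi

# Hypothetical analyte mass (in ng) as gross signal minus blank; the measurand
# is intrinsically positive, so the model clamps negative differences to zero.
model = lambda x: max(x[0] - x[1], 0.0)
draw = lambda rng: (rng.gauss(0.012, 0.005), rng.gauss(0.010, 0.005))

g_lo, g_hi = gum_interval(0.002, math.hypot(0.005, 0.005))  # GUM: y=0.002, u=0.00707
m_lo, m_hi = mc_interval(model, draw)
# g_lo is negative (unphysical); m_lo cannot go below zero by construction.
```

Truncation at zero is only one simple way to encode positivity; the essential point, as in the abstract, is that the MC output distribution need not be symmetric, so its coverage interval can respect the physical bound.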
Wave-like warp propagation in circumbinary discs I. Analytic theory and numerical simulations
Facchini, Stefano; Price, Daniel J
2013-01-01
In this paper we analyse the propagation of warps in protostellar circumbinary discs. We use these systems as a test environment in which to study warp propagation in the bending-wave regime, with the addition of an external torque due to the binary gravitational potential. In particular, we want to test the linear regime, for which an analytic theory has been developed. In order to do so, we first compute analytically the steady state shape of an inviscid disc subject to the binary torques. The steady state tilt is a monotonically increasing function of radius. In the absence of viscosity, the disc does not present any twist. Then, we compare the time-dependent evolution of the warped disc calculated via the known linearised equations both with the analytic solutions and with full 3D numerical simulations, which have been performed with the PHANTOM SPH code using 2 million particles. We find a good agreement both in the tilt and in the phase evolution for small inclinations, even at very low viscosities. Mor...
Gronewold, A. D.; Wolpert, R. L.; Reckhow, K. H.
2007-12-01
Most probable number (MPN) and colony-forming-unit (CFU) are two estimates of fecal coliform bacteria concentration commonly used as measures of water quality in United States shellfish harvesting waters. The MPN is the maximum likelihood estimate (or MLE) of the true fecal coliform concentration based on counts of non-sterile tubes in serial dilution of a sample aliquot, indicating bacterial metabolic activity. The CFU is the MLE of the true fecal coliform concentration based on the number of bacteria colonies emerging on a growth plate after inoculation from a sample aliquot. Each estimating procedure has intrinsic variability and is subject to additional uncertainty arising from minor variations in experimental protocol. Several versions of each procedure (using different sized aliquots or different numbers of tubes, for example) are in common use, each with its own levels of probabilistic and experimental error and uncertainty. It has been observed empirically that the MPN procedure is more variable than the CFU procedure, and that MPN estimates are somewhat higher on average than CFU estimates, on split samples from the same water bodies. We construct a probabilistic model that provides a clear theoretical explanation for the observed variability in, and discrepancy between, MPN and CFU measurements. We then explore how this variability and uncertainty might propagate into shellfish harvesting area management decisions through a two-phased modeling strategy. First, we apply our probabilistic model in a simulation-based analysis of future water quality standard violation frequencies under alternative land use scenarios, such as those evaluated under guidelines of the total maximum daily load (TMDL) program. Second, we apply our model to water quality data from shellfish harvesting areas which at present are closed (either conditionally or permanently) to shellfishing, to determine if alternative laboratory analysis procedures might have led to different
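The MPN as a maximum likelihood estimate can be sketched as follows. The tube counts and aliquot volumes below are hypothetical, and the likelihood is the standard Poisson-based one for serial dilutions (a tube is non-sterile if it received at least one organism):

```python
import math

def mpn_loglik(c, dilutions):
    """Log-likelihood of concentration c (organisms/mL), given
    (tubes, positives, aliquot_volume_mL) per dilution level."""
    ll = 0.0
    for tubes, positives, vol in dilutions:
        p_pos = 1.0 - math.exp(-c * vol)      # P(tube shows growth)
        ll += positives * math.log(p_pos) - (tubes - positives) * c * vol
    return ll

def mpn_estimate(dilutions, lo=1e-6, hi=1e4, iters=100):
    """The MPN is the maximiser of the likelihood; golden-section
    search on log-concentration finds it for this unimodal case."""
    g = (math.sqrt(5.0) - 1.0) / 2.0
    a, b = math.log(lo), math.log(hi)
    for _ in range(iters):
        c1, c2 = b - g * (b - a), a + g * (b - a)
        if mpn_loglik(math.exp(c1), dilutions) < mpn_loglik(math.exp(c2), dilutions):
            a = c1
        else:
            b = c2
    return math.exp((a + b) / 2.0)

# hypothetical 3-tube assay with 10, 1 and 0.1 mL aliquots:
# all 3 tubes positive, then 1 of 3, then 0 of 3
table = [(3, 3, 10.0), (3, 1, 1.0), (3, 0, 0.1)]
mpn = mpn_estimate(table)   # about 0.4 organisms per mL
```

Repeating the estimate over simulated tube outcomes at a fixed true concentration is one way to see the intrinsic variability of the MPN procedure that the abstract discusses.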
Ryerson, F. J.; Ezzedine, S. M.; Antoun, T.
2013-12-01
The successful implementation and execution of numerous subsurface energy technologies, such as shale gas extraction, geothermal energy, and underground coal gasification, relies on detailed characterization of the geology and the subsurface properties. For example, spatial variability of subsurface permeability controls multi-phase flow, and hence impacts the prediction of reservoir performance. Subsurface properties can vary significantly over several length scales, making detailed subsurface characterization infeasible, if not impossible. Therefore, in common practice, only sparse measurements are available to image or characterize the entire reservoir. For example, pressure, P, permeability, k, and production rate, Q, measurements are only available at the monitoring and operational wells. Elsewhere, the spatial distribution of k is determined by various deterministic or stochastic interpolation techniques, and P and Q are calculated from the governing forward mass balance equation assuming k is given at all locations. Uncertainty quantification (UQ) drivers, such as PSUADE, are then used to propagate and quantify the uncertainty of quantities of interest using forward solvers. Unfortunately, forward-solver techniques and other interpolation schemes are rarely constrained by the inverse problem itself: given P and Q at observation points, determine the spatially variable map of k. The approach presented here, motivated by fluid imaging for subsurface characterization and monitoring, was developed by progressively solving increasingly complex realistic problems. The essence of this novel approach is that the forward and inverse partial differential equations are themselves the interpolators for P, k and Q, rather than extraneous and sometimes ad hoc schemes. Three cases with different data sparsity are investigated. In the simplest case, a sufficient number of passive pressure data (pre-production pressure gradients) are given. Here, only the inverse hyperbolic
International Nuclear Information System (INIS)
This paper presents an evaluation of the uncertainty associated with the analytical measurement of eighteen polycyclic aromatic compounds (PACs) in ambient air by liquid chromatography with fluorescence detection (HPLC/FD). The study was focused on analyses of PM10, PM2.5 and gas phase fractions. The main analytical uncertainty was estimated for eleven polycyclic aromatic hydrocarbons (PAHs), four nitro polycyclic aromatic hydrocarbons (nitro-PAHs) and two hydroxy polycyclic aromatic hydrocarbons (OH-PAHs), based on the analytical determination, reference material analysis and the extraction step. The main contributions reached 15-30% and came from the extraction of real ambient samples, with those for nitro-PAHs being the highest (20-30%). The range and mean of PAC mass concentrations measured in the gas phase and PM10/PM2.5 particle fractions during a full year are also presented. Concentrations of OH-PAHs were about 2-4 orders of magnitude lower than those of their parent PAHs and comparable to those sparsely reported in the literature. (Author)
Validation of an analytical compressed elastic tube model for acoustic wave propagation
Van Hirtum, A.; Blandin, R.; Pelorson, X.
2015-12-01
Acoustic wave propagation through a compressed elastic tube is a recurrent problem in engineering. Compression of the tube is achieved by pinching it between two parallel bars so that the pinching effort as well as the longitudinal position of pinching can be controlled. A stadium-based geometrical tube model is combined with a plane wave acoustic model in order to estimate acoustic wave propagation through the elastic tube as a function of pinching effort, pinching position, and outlet termination (flanged or unflanged). The model outcome is validated against experimental data obtained in a frequency range from 3.5 kHz up to 10 kHz by displacing an acoustic probe along the tube's centerline. Due to plane wave model assumptions and the decrease of the lowest higher order mode cut-on frequency with increasing pinching effort, the difference between modeled and measured data is analysed in three frequency bands, up to 5 kHz, 8 kHz, and 9.5 kHz, respectively. It is seen that the mean and standard error within each frequency band do not significantly vary with pinching effort, pinching position, or outlet termination. Therefore, it is concluded that the analytical tube model is suitable to approximate the elastic tube geometry when modeling acoustic wave propagation through the pinched elastic tube with either flanged or unflanged termination.
Gosset, Marielle; Casse, Claire; Peugeot, Christophe; Boone, Aaron; Pedinotti, Vanessa
2015-04-01
Global measurement of rainfall offers new opportunities for hydrological monitoring, especially for some of the largest tropical rivers, where the rain gauge network is sparse and radar is not available. As a member of the GPM constellation, the new French-Indian satellite mission Megha-Tropiques (MT), dedicated to the water and energy budget of the tropical atmosphere, contributes to better monitoring of rainfall in the inter-tropical zone. As part of this mission, research is being developed on the use of satellite rainfall products for hydrological research or operational applications such as flood monitoring. A key issue for such applications is how to account for rainfall product biases and uncertainties, and how to propagate them into the end-user models. Another important question is how to choose the best space-time resolution for the rainfall forcing, given that both model performance and rain-product uncertainties are resolution dependent. This paper analyses the potential of satellite rainfall products combined with hydrological modeling to monitor the Niger river floods in the city of Niamey, Niger. A dramatic increase of these floods has been observed in the last decades. The study focuses on the 125000 km2 area in the vicinity of Niamey, where local runoff is responsible for the most extreme floods recorded in recent years. Several rainfall products are tested as forcing to the SURFEX-TRIP hydrological simulations. Differences in terms of rainfall amount, number of rainy days, spatial extension of the rainfall events and frequency distribution of the rain rates are found among the products. Their impacts on the simulated outflow are analyzed. The simulations based on the real-time estimates produce an excess in the discharge. For flood prediction, the problem can be overcome by a prior adjustment of the products - as done here with probability matching - or by analysing the simulated discharge in terms of percentile or anomaly. All tested products exhibit some
A novel stochastic collocation method for uncertainty propagation in complex mechanical systems
Qi, WuChao; Tian, SuMei; Qiu, ZhiPing
2015-02-01
This paper presents a novel stochastic collocation method based on the equivalent weak form of multivariate function integrals to quantify and manage uncertainties in complex mechanical systems. The proposed method, which combines the advantages of the response surface method and the traditional stochastic collocation method, sets integration points only at the guide lines of the response surface. The statistics of an engineering problem with many uncertain parameters are then transformed into a linear combination of the statistics of simple functions. Furthermore, a simple method for determining the weight-factor sets is discussed in detail, and the weight-factor sets of two commonly used probability distribution types are given in table form. Studies of computational accuracy and effort show that the method achieves a good balance between the two. It should be noted that the algorithm is non-gradient and non-intrusive, with strong portability. For validation, three numerical examples, concerning a mathematical function with an analytical expression, the structural design of a straight wing, and the flutter analysis of a composite wing, are used to show the effectiveness of the guided stochastic collocation method.
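In one dimension, the collocation idea, evaluating the model only at a few quadrature points and recombining the results with weights, can be sketched as below. This is a generic Gauss-Hermite version for a single Gaussian input, not the guided response-surface scheme of the paper:

```python
import numpy as np

def collocation_stats(g, mean, std, n_pts=5):
    """Estimate E[g(X)] and Var[g(X)] for X ~ N(mean, std^2) by
    evaluating g only at Gauss-Hermite collocation points and
    recombining with the quadrature weights."""
    # probabilists' Gauss-Hermite rule integrates against exp(-x^2/2)
    nodes, weights = np.polynomial.hermite_e.hermegauss(n_pts)
    weights = weights / np.sqrt(2.0 * np.pi)  # normalise to the N(0,1) pdf
    y = g(mean + std * nodes)
    m1 = float(np.sum(weights * y))
    m2 = float(np.sum(weights * y ** 2))
    return m1, m2 - m1 ** 2

# quadratic response: exact mean 1 and variance 2 under N(0,1)
m, v = collocation_stats(lambda x: x ** 2, 0.0, 1.0)
```

Five collocation points integrate polynomials up to degree nine exactly, so five model evaluations recover the first two moments of this response to machine precision; the method in the paper addresses the harder question of placing such points economically in many dimensions.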
Efficiency of analytical methodologies in uncertainty analysis of seismic core damage frequency
International Nuclear Information System (INIS)
Fault Tree and Event Tree analysis is almost exclusively relied upon in assessments of seismic Core Damage Frequency (CDF). In this approach, the Direct Quantification of Fault trees using Monte Carlo simulation (DQFM) method, simply called the Monte Carlo (MC) method here, and the Binary Decision Diagram (BDD) method were introduced as alternatives to a traditional approximation method, namely the Minimal Cut Set (MCS) method. However, there is still no agreement as to which method should be used in a risk assessment of seismic CDF, especially for uncertainty analysis. The purpose of this study is to examine the efficiency of the three methods in uncertainty analysis as well as in point estimation, so that a proper method can be selected effectively. The results show that the most efficient method in terms of accuracy and computational time is the BDD method. However, it is discussed that the BDD method is not always applicable to PSA models, while the MC method is, in theory. In turn, the MC method was confirmed to agree with the exact solution obtained by the BDD method, but it took a large amount of time, in particular for uncertainty analysis. On the other hand, it was shown that the approximation error of the MCS method may not be as bad in uncertainty analysis as it is in point estimation. Based on these results and previous works, this paper proposes a scheme to select an appropriate analytical method for a seismic PSA study. Throughout this study, the SECOM2-DQFM code was expanded to be able to utilize the BDD method and to conduct uncertainty analysis with both the MC and BDD methods. (author)
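A toy fault tree makes the trade-off between the three quantification routes concrete. The tree, probabilities, and cut sets below are invented: the MCS sum overestimates the top-event probability, exhaustive enumeration plays the role of the exact BDD result, and DQFM samples the basic events directly:

```python
import itertools
import math
import random

# toy fault tree (invented): TOP = (A and B) or (A and C)
p = {"A": 0.3, "B": 0.2, "C": 0.25}            # basic-event probabilities
cut_sets = [("A", "B"), ("A", "C")]            # minimal cut sets

def mcs_approx(p, cut_sets):
    """MCS rare-event approximation: sum of cut-set probabilities."""
    return sum(math.prod(p[e] for e in cs) for cs in cut_sets)

def exact_top(p, cut_sets):
    """Exact top-event probability by enumerating basic-event states
    (the quantity a BDD evaluates without enumeration)."""
    events = sorted(p)
    total = 0.0
    for states in itertools.product((0, 1), repeat=len(events)):
        s = dict(zip(events, states))
        if any(all(s[e] for e in cs) for cs in cut_sets):
            total += math.prod(p[e] if s[e] else 1.0 - p[e] for e in events)
    return total

def dqfm_mc(p, cut_sets, n=50_000, seed=1):
    """DQFM: direct Monte Carlo sampling of the basic events."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        s = {e: rng.random() < p[e] for e in p}
        hits += any(all(s[e] for e in cs) for cs in cut_sets)
    return hits / n

# mcs_approx gives 0.135, exact_top gives 0.12, and dqfm_mc converges to 0.12
```

The MCS overestimate grows with the basic-event probabilities, which is why its error matters more for seismic PSA (where failure probabilities can be large) than for rare-event internal-events models.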
Gudimetla, V S Rao; Holmes, Richard B; Riker, Jim F
2012-12-01
An analytical expression for the log-amplitude correlation function for plane wave propagation through an anisotropic non-Kolmogorov turbulent atmosphere is derived. The closed-form analytic results are based on the Rytov approximation. These results agree well with a wave optics simulation based on the more general Fresnel approximation, as well as with numerical evaluations, for low-to-moderate strengths of turbulence. The new expression reduces correctly to the previously published analytic expressions for plane wave propagation through both anisotropic Kolmogorov turbulence and isotropic non-Kolmogorov turbulence. These results are useful for understanding the potential impact of deviations from the standard isotropic Kolmogorov spectrum.
Groves, Curtis E.; Ilie, Marcel; Shallhorn, Paul A.
2014-01-01
Computational Fluid Dynamics (CFD) is the standard numerical tool used by fluid dynamicists to estimate solutions to many problems in academia, government, and industry. CFD is known to have errors and uncertainties, and there is no universally adopted method to estimate such quantities. This paper describes an approach to estimate CFD uncertainties strictly numerically, using inputs and the Student-t distribution. The approach is compared to an exact analytical solution for fully developed, laminar flow between infinite, stationary plates. It is shown that treating all CFD input parameters as oscillatory uncertainty terms, coupled with the Student-t distribution, can encompass the exact solution.
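A minimal sketch of the Student-t idea, assuming a handful of repeated runs whose inputs were perturbed within their uncertainties (the run values are invented, and the critical values come from standard t tables):

```python
import math
import statistics

# two-sided 95 % Student-t critical values for df = 1..10 (standard tables)
T95 = {1: 12.706, 2: 4.303, 3: 3.182, 4: 2.776, 5: 2.571,
       6: 2.447, 7: 2.365, 8: 2.306, 9: 2.262, 10: 2.228}

def t_uncertainty(samples):
    """95 % uncertainty band for the mean of a few repeated runs whose
    inputs were perturbed within their uncertainties."""
    n = len(samples)
    mean = statistics.mean(samples)
    half = T95[n - 1] * statistics.stdev(samples) / math.sqrt(n)
    return mean - half, mean + half

# hypothetical pressure-drop results (Pa) from five perturbed-input runs
runs = [101.2, 99.8, 100.5, 100.9, 99.6]
lo, hi = t_uncertainty(runs)   # band centred on 100.4 Pa
```

The t distribution widens the band when only a few runs are affordable, which is the usual situation with expensive CFD; the paper's test is whether such a band encompasses the known exact solution.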
Energy Technology Data Exchange (ETDEWEB)
Pal Verma, Mahendra [Instituto de Investigaciones Electricas, Cuernavaca, Morelos (Mexico)
2008-07-01
A procedure was developed to estimate the analytical uncertainty in each parameter of the geochemical analysis of geothermal fluids. The estimation of the uncertainty is based on the results of the geochemical analyses of geothermal fluids (numbered from 0 to 14) obtained within the framework of the comparison program among geochemical laboratories over the last 30 years. The analytical uncertainty was also propagated into the calculation of the parameters of the geothermal fluid in the reservoir, through the uncertainty-interval method and the GUM (Guide to the Expression of Uncertainty in Measurement) method. The application of the methods is illustrated by the pH calculation of the geothermal fluid in the reservoir, considering samples 10 and 11 as separated waters at atmospheric conditions.
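The two propagation routes can be sketched for a simplified pH calculation, pH = -log10 of the hydrogen-ion concentration. The concentration and its uncertainty below are invented, and the full reservoir speciation of the paper is not modelled:

```python
import math

def ph_gum(c, u_c, k=2.0):
    """GUM first-order propagation for pH = -log10(c):
    u(pH) = u(c) / (c ln 10), giving a symmetric coverage interval."""
    ph = -math.log10(c)
    u_ph = u_c / (c * math.log(10.0))
    return ph - k * u_ph, ph + k * u_ph

def ph_interval(c, u_c, k=2.0):
    """Uncertainty-interval method: map the endpoints of the input
    interval through the (monotonically decreasing) function."""
    return -math.log10(c + k * u_c), -math.log10(c - k * u_c)

# invented hydrogen-ion concentration: 1e-6 mol/L with u = 2e-7 mol/L
gum_lo, gum_hi = ph_gum(1e-6, 2e-7)        # symmetric about pH 6
int_lo, int_hi = ph_interval(1e-6, 2e-7)   # asymmetric about pH 6
```

The interval method preserves the asymmetry of the logarithm (the upper pH deviation exceeds the lower one), while the GUM linearization forces a symmetric band; comparing the two is the kind of check the procedure above enables.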
Energy Technology Data Exchange (ETDEWEB)
Vinai, Paolo [Paul Scherrer Institute, Villigen (Switzerland); Ecole Polytechnique Federale de Lausanne, Lausanne (Switzerland); Chalmers University of Technology, Goeteborg (Sweden); Macian-Juan, Rafael [Technische Universitaet Muenchen, Garching (Germany); Chawla, Rakesh [Paul Scherrer Institute, Villigen (Switzerland); Ecole Polytechnique Federale de Lausanne, Lausanne (Switzerland)
2008-07-01
The paper describes the propagation of void fraction uncertainty, as quantified by employing a novel methodology developed at PSI, in the RETRAN-3D simulation of the Peach Bottom turbine trip test. Since the transient considered is characterized by a strong coupling between thermal-hydraulics and neutronics, the accuracy of the void fraction model has a very important influence on the prediction of the power history and, in particular, of the maximum power reached. It has been shown that the objective measures used for the void fraction uncertainty, based on the direct comparison between experimental and predicted values extracted from a database of appropriate separate-effect tests, provide power uncertainty bands that are narrower and more realistic than those based, for example, on expert opinion. The applicability of such an approach to NPP transient best-estimate analysis has thus been demonstrated. (authors)
Mishra, S.; Schwab, Ch.; Šukys, J.
2016-05-01
We consider the very challenging problem of efficient uncertainty quantification for acoustic wave propagation in a highly heterogeneous, possibly layered, random medium, characterized by possibly anisotropic, piecewise log-exponentially distributed Gaussian random fields. A multi-level Monte Carlo finite volume method is proposed, along with a novel, bias-free upscaling technique that allows the input random fields, generated using spectral FFT methods, to be represented efficiently. Combined with a recently developed dynamic load balancing algorithm that scales to massively parallel computing architectures, the proposed method is able to robustly compute uncertainty for highly realistic random subsurface formations that can contain a very high number (millions) of sources of uncertainty. Numerical experiments, in both two and three space dimensions, illustrating the efficiency of the method are presented.
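The multilevel idea itself can be sketched with a scalar toy "solver" whose discretization error decays as 2^-level; the actual finite-volume wave solver, random-field generation, and load balancing of the paper are far beyond this sketch:

```python
import random

def mlmc_mean(simulate, levels, samples_per_level, seed=0):
    """Multilevel Monte Carlo: E[P_L] = E[P_0] + sum_l E[P_l - P_(l-1)],
    estimated with many cheap coarse samples and few expensive fine ones.
    Fine and coarse solves at a level share the same random input, so
    the correction terms have small variance."""
    rng = random.Random(seed)
    total = 0.0
    for level, n in zip(levels, samples_per_level):
        acc = 0.0
        for _ in range(n):
            omega = rng.random()                          # shared randomness
            fine = simulate(level, omega)
            coarse = simulate(level - 1, omega) if level > 0 else 0.0
            acc += fine - coarse
        total += acc / n
    return total

def simulate(level, omega):
    """Toy stand-in for a discretized solver: the 'numerical error'
    term decays like 2^-level and vanishes in expectation."""
    return omega ** 2 + 2.0 ** (-level) * (omega - 0.5)

est = mlmc_mean(simulate, [0, 1, 2, 3], [8000, 4000, 2000, 1000])
# est is close to E[omega^2] = 1/3 for omega ~ U(0,1)
```

Because the level corrections shrink geometrically, most samples can be taken on the cheapest level, which is the source of the cost savings that make millions of uncertainty sources tractable.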
Malkov, M A
2016-01-01
An analytic solution for a Fokker-Planck equation that describes propagation of energetic particles through a scattering medium is obtained. The solution is found in terms of an infinite series of mixed moments of the particle distribution. The spatial dispersion of a particle cloud released at t = 0 evolves through three phases: ballistic (t << Tc), transdiffusive (t ~ Tc) and diffusive (t >> Tc), where Tc is the collision time. The ballistic phase is characterized by a decelerating expansion of the initial point source in the form of a "box" distribution with broadening walls. The next, transdiffusive phase is marked by the box walls having broadened to the size of the box and by a noticeable slowdown of the expansion. Finally, the evolution enters the conventional diffusion phase.
Fast and accurate analytical model to solve inverse problem in SHM using Lamb wave propagation
Poddar, Banibrata; Giurgiutiu, Victor
2016-04-01
Lamb wave propagation is at the center of attention of researchers for structural health monitoring (SHM) of thin-walled structures. This is due to the fact that Lamb wave modes are natural modes of wave propagation in these structures, with long travel distances and without much attenuation. This brings the prospect of monitoring large structures with few sensors/actuators. However, the problem of damage detection and identification is an "inverse problem" in which we do not have the luxury of knowing the exact mathematical model of the system. On top of that, the problem is made more challenging by the confounding factors of statistical variation of the material and geometric properties. Typically, this problem may also be ill-posed. Due to all these complexities, the direct solution of the problem of damage detection and identification in SHM is impossible. Therefore, an indirect method using the solution of the "forward problem" is popular for solving the "inverse problem". This requires a fast forward-problem solver. Due to the complexities involved in the forward problem of scattering of Lamb waves from damage, researchers rely primarily on numerical techniques such as FEM, BEM, etc. But these methods are too slow to be practical for structural health monitoring. We have developed a fast and accurate analytical forward-problem solver for this purpose. This solver, CMEP (complex modes expansion and vector projection), can simulate the scattering of Lamb waves from all types of damage in thin-walled structures quickly and accurately, to assist the inverse-problem solver.
An analytical validation for the attenuation of lateral propagating light in sea ice
Institute of Scientific and Technical Information of China (English)
ZHAO Jinping; LI Tao; EHN Jens; BARBER David
2015-01-01
The attenuation of lateral propagating light (LPL) in sea ice was measured using an artificial light source in the Canadian Arctic during the 2007/2008 winter. The apparent attenuation coefficient μ(λ) for lateral propagating light was obtained from the measured logarithmic relative variation rate. In this study, an analytical solution based on strict optical theory is developed to validate the measured results. There is good consistency between the theoretical solution and the measured data, by which a quite simple but very rigorous relationship among the light source, the measurement geometry, and the measured irradiance is established. The attenuation coefficients acquired by measurement and theory represent the diffusion attenuation, an apparent optical property of the ice, independent of the light source and illumination conditions. The attenuation ability of sea ice is attributed to the microstructure of the ice, such as crystal size, ice density, brine volume, air inclusions, etc. It also includes leakage through both interfaces by directional scattering. It is verified that the measuring approach is operational and accurate for measuring the attenuation of the LPL. The solution from this study does not theoretically establish the connection between the attenuation and the inclusions of sea ice, because of insufficient understanding.
NAJI, Noor Ezzulddin
2011-01-01
Presented is a derivation of an analytical expression for the mode-coherence coefficients of a uniformly distributed wave propagating within different homogeneous media, as in the case of hyperbolic Gaussian beams, and a simple method involving the superposition of two such beams is proposed. The results obtained from this work are applicable to the study and analysis of Hermite-Gaussian beam propagation, especially in problems of radiation-matter interaction, and laser beam propagatio...
Gurdak, Jason J.; Qi, Sharon L.; Geisler, Michael L.
2009-01-01
The U.S. Geological Survey Raster Error Propagation Tool (REPTool) is a custom tool for use with the Environmental Systems Research Institute (ESRI) ArcGIS Desktop application to estimate error propagation and prediction uncertainty in raster processing operations and geospatial modeling. REPTool is designed to introduce concepts of error and uncertainty in geospatial data and modeling and provide users of ArcGIS Desktop a geoprocessing tool and methodology to consider how error affects geospatial model output. Similar to other geoprocessing tools available in ArcGIS Desktop, REPTool can be run from a dialog window, from the ArcMap command line, or from a Python script. REPTool consists of public-domain, Python-based packages that implement Latin Hypercube Sampling within a probabilistic framework to track error propagation in geospatial models and quantitatively estimate the uncertainty of the model output. Users may specify error for each input raster or model coefficient represented in the geospatial model. The error for the input rasters may be specified as either spatially invariant or spatially variable across the spatial domain. Users may specify model output as a distribution of uncertainty for each raster cell. REPTool uses the Relative Variance Contribution method to quantify the relative error contribution from the two primary components in the geospatial model - errors in the model input data and coefficients of the model variables. REPTool is appropriate for many types of geospatial processing operations, modeling applications, and related research questions, including applications that consider spatially invariant or spatially variable error in geospatial data.
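A stripped-down sketch of the underlying idea, using a 1-D "raster", spatially invariant Gaussian error, and an invented moving-average model; REPTool's actual interface and its Relative Variance Contribution bookkeeping are not reproduced here:

```python
import random
from statistics import NormalDist, pstdev

def lhs_normal(n, mu, sd, rng):
    """n Latin Hypercube draws from N(mu, sd^2): one draw per
    equal-probability stratum, returned in shuffled order."""
    nd = NormalDist(mu, sd)
    draws = [nd.inv_cdf((i + rng.random()) / n) for i in range(n)]
    rng.shuffle(draws)
    return draws

def propagate(raster, err_sd, model, n=200, seed=42):
    """Propagate spatially invariant Gaussian error through a raster
    operation; returns the per-cell standard deviation of the output."""
    rng = random.Random(seed)
    errors = [lhs_normal(n, 0.0, err_sd, rng) for _ in raster]
    outputs = []
    for k in range(n):
        perturbed = [cell + errors[i][k] for i, cell in enumerate(raster)]
        outputs.append(model(perturbed))
    return [pstdev(run[i] for run in outputs) for i in range(len(raster))]

def smooth(cells):
    """Hypothetical model: 3-cell moving average on the 1-D 'raster'."""
    return [sum(cells[max(0, i - 1):i + 2]) / len(cells[max(0, i - 1):i + 2])
            for i in range(len(cells))]

sd_map = propagate([10.0, 12.0, 11.0, 13.0], err_sd=1.0, model=smooth)
```

Averaging three independent errors reduces the per-cell output spread to roughly err_sd/sqrt(3) in the interior, which is the kind of uncertainty map REPTool produces for each raster cell.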
Rose, K.; Bauer, J. R.; Baker, D. V.
2015-12-01
As big data computing capabilities are increasingly paired with spatial analytical tools and approaches, there is a need to ensure that the uncertainty associated with the datasets used in these analyses is adequately incorporated and portrayed in results. Often the products of spatial analyses, big data and otherwise, are developed using discontinuous, sparse, and often point-driven data to represent continuous phenomena. Results from these analyses are generally presented without clear explanations of the uncertainty associated with the interpolated values. The Variable Grid Method (VGM) offers users a flexible approach, designed for a variety of analyses in which there is a need to study, evaluate, and analyze spatial trends and patterns while maintaining a connection to, and communicating, the uncertainty in the underlying spatial datasets. The VGM outputs a simultaneous visualization of the spatial data analyses and a quantification of the underlying uncertainties, which can be calculated using sample density, sample variance, interpolation error, or uncertainty derived from multiple simulations. In this presentation we will show how we are utilizing Hadoop to store and perform spatial analysis through the development of custom Spark and MapReduce applications that incorporate ESRI Hadoop libraries. The team will present custom 'big data' geospatial applications that run on the Hadoop cluster and integrate ESRI ArcMap with the team's probabilistic VGM approach. The VGM-Hadoop tool has been specially built as a multi-step MapReduce application running on the Hadoop cluster for the purpose of data reduction. This reduction is accomplished by generating multi-resolution, non-overlapping, attributed topology that is then further processed using ESRI's Geostatistical Analyst to convey a probabilistic model of a chosen study region. Finally, we will share our approach for the implementation of data reduction and topology generation
Gudimetla, V S Rao; Holmes, Richard B; Riker, Jim F
2014-01-01
An analytical expression for the log-amplitude correlation function based on the Rytov approximation is derived for spherical wave propagation through an anisotropic non-Kolmogorov refractive turbulent atmosphere. The expression reduces correctly to the previously published analytic expressions for the case of spherical wave propagation through isotropic Kolmogorov turbulence. These results agree well with a wave-optics simulation based on the more general Fresnel approximation, as well as with numerical evaluations, for low-to-moderate strengths of turbulence. These results are useful for understanding the potential impact of deviations from the standard isotropic Kolmogorov spectrum.
Wave-like warp propagation in circumbinary discs - I. Analytic theory and numerical simulations
Facchini, Stefano; Lodato, Giuseppe; Price, Daniel J.
2013-08-01
In this paper we analyse the propagation of warps in protostellar circumbinary discs. We use these systems as a test environment in which to study warp propagation in the bending-wave regime, with the addition of an external torque due to the binary gravitational potential. In particular, we want to test the linear regime, for which an analytic theory has been developed. In order to do so, we first compute analytically the steady-state shape of an inviscid disc subject to the binary torques. The steady-state tilt is a monotonically increasing function of radius, but misalignment is found at the disc inner edge. In the absence of viscosity, the disc does not present any twist. Then, we compare the time-dependent evolution of the warped disc calculated via the known linearized equations both with the analytic solutions and with full 3D numerical simulations. The simulations have been performed with the PHANTOM smoothed particle hydrodynamics (SPH) code using two million particles. We find a good agreement both in the tilt and in the phase evolution for small inclinations, even at very low viscosities. Moreover, we have verified that the linearized equations are able to reproduce the diffusive behaviour when α > H/R, where α is the disc viscosity parameter. Finally, we have used the 3D simulations to explore the non-linear regime. We observe a strongly non-linear behaviour, which leads to the breaking of the disc. Then, the inner disc starts precessing with its own precessional frequency. This behaviour has already been observed with numerical simulations in accretion discs around spinning black holes. The evolution of circumstellar accretion discs strongly depends on the warp evolution. Therefore, the issue explored in this paper could be of fundamental importance in order to understand the evolution of accretion discs in crowded environments, when the gravitational interaction with other stars is highly likely, and in multiple systems. Moreover, the evolution of
International Nuclear Information System (INIS)
The control of uncertainties in the field of reactor physics and their propagation in best-estimate modeling are a major issue in safety analysis. In this framework, the CEA is developing a methodology to perform multi-physics simulations including uncertainty analysis. The present paper aims to present and apply this methodology to the analysis of an accidental situation such as REA (Rod Ejection Accident). This accident is characterized by a strong interaction between the different areas of reactor physics (neutronics, fuel thermal behaviour and thermal hydraulics). The modeling is performed with the CRONOS2 code. The uncertainty analysis has been conducted with the URANIE platform developed by the CEA: for each identified response of the model (output), and considering a set of key parameters with their uncertainties (input), a surrogate model in the form of a neural network has been produced. The set of neural networks is then used to carry out a sensitivity analysis, which consists of a global variance analysis with the determination of the Sobol indices for all responses. The sensitivity indices are obtained for the input parameters by an approach based on the use of polynomial chaos. The present exercise helped to develop a methodological flow scheme and to consolidate the use of the URANIE tool in the framework of parallel calculations. Finally, the use of polynomial chaos allowed high-order sensitivity indices to be computed, thus highlighting and classifying the influence of the identified uncertainties on each response of the analysis (single and interaction effects). (authors)
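The paper estimates Sobol indices through polynomial chaos on neural-network surrogates; a plainer Monte Carlo pick-freeze estimator conveys the same variance decomposition. The additive two-input surrogate below is invented, and its exact first-order indices are S1 = 0.2 and S2 = 0.8:

```python
import random

def sobol_first_order(model, n_inputs, n=20_000, seed=0):
    """Pick-freeze Monte Carlo estimate of first-order Sobol indices
    S_i = Var(E[Y|X_i]) / Var(Y), for independent U(0,1) inputs."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(n_inputs)] for _ in range(n)]
    B = [[rng.random() for _ in range(n_inputs)] for _ in range(n)]
    yA = [model(x) for x in A]
    mA = sum(yA) / n
    var = sum(y * y for y in yA) / n - mA * mA
    S = []
    for i in range(n_inputs):
        # resample every input except X_i, which is 'frozen' from A
        C = [b[:i] + [a[i]] + b[i + 1:] for a, b in zip(A, B)]
        yC = [model(x) for x in C]
        mC = sum(yC) / n
        cov = sum(ya * yc for ya, yc in zip(yA, yC)) / n - mA * mC
        S.append(cov / var)
    return S

# invented additive surrogate Y = X1 + 2*X2: exact S1 = 0.2, S2 = 0.8
S1, S2 = sobol_first_order(lambda x: x[0] + 2.0 * x[1], 2)
```

Replacing the brute-force sampling with a polynomial-chaos expansion, as the paper does, turns the indices into analytic functions of the expansion coefficients and also yields the interaction (high-order) indices cheaply.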
DEFF Research Database (Denmark)
Diky, Vladimir; Chirico, Robert D.; Muzny, Chris;
property values and expert system for data analysis and generation of recommended property values at the specified conditions along with uncertainties on demand. The most recent extension of TDE covers solvent design and multi-component process stream property calculations with uncertainty analysis....... However, the accuracy of such calculations are generally unknown that often leads to overdesign of the operational units and results in significant additional cost. TDE provides a tool for the analysis of uncertainty of property calculations for multi-component streams. A process stream in TDE can...... variations). Predictions can be compared to the available experimental data, and uncertainties are estimated for all efficiency criteria. Calculations of the properties of multi-component streams including composition at phase equilibria (flash calculations) are at the heart of process simulation engines...
DEFF Research Database (Denmark)
He, Xiulan
parameters and model structures, which are the primary focuses of this PhD research. Parameter uncertainty was analyzed using an optimization tool (PEST: Parameter ESTimation) in combination with a random sampling method (LHS: Latin Hypercube Sampling). Model structure, namely geological architecture...... be compensated by model parameters, e.g. when hydraulic heads are considered. However, geological structure is the primary source of uncertainty with respect to simulations of groundwater age and capture zone. Operational MPS-based software has been available for only about ten years; yet, issues regarding......Groundwater modeling plays an essential role in modern subsurface hydrology research. It is generally recognized that simulations and predictions by groundwater models are associated with uncertainties that originate from various sources. The two major uncertainty sources are related to model...
Nuclear Data Uncertainty Propagation to Reactivity Coefficients of a Sodium Fast Reactor
Herrero, J. J.; Ochoa, R.; Martínez, J. S.; Díez, C. J.; García-Herranz, N.; Cabellos, O.
2014-04-01
The assessment of the uncertainty levels on the design and safety parameters for the innovative European Sodium Fast Reactor (ESFR) is mandatory. Some of these relevant safety quantities are the Doppler and void reactivity coefficients, whose uncertainties are quantified. In addition, the nuclear reaction data whose improvement would most benefit the design accuracy are identified. This work has been performed with the SCALE 6.1 code suite and its multigroup cross-section library based on the ENDF/B-VII.0 evaluation.
Terando, A. J.; Reich, B. J.; Pacifici, K.
2013-12-01
Fire is an important disturbance process in many coupled natural-human systems. Changes in the frequency and severity of fires due to anthropogenic climate change could have significant costs to society and to the plant and animal communities that are adapted to a particular fire regime. Planning for these changes requires a robust model of the relationship between climate and fire that accounts for the multiple sources of uncertainty present when simulating ecological and climatological processes. Here we model how anthropogenic climate change could affect the wildfire regime for a region in the Southeast US whose natural ecosystems depend on frequent, low-intensity fires while humans are at risk from large catastrophic fires. We develop a modeling framework that incorporates three major sources of uncertainty: (1) uncertainty in the ecological drivers of expected monthly area burned, (2) uncertainty in the environmental drivers influencing the probability of an extreme fire event, and (3) structural uncertainty in different downscaled climate models. In addition, we use two policy-relevant emission scenarios (climate stabilization and 'business-as-usual') to characterize the uncertainty in future greenhouse gas forcings. We use a Bayesian framework to incorporate different sources of uncertainty, including simulation of predictive errors and Stochastic Search Variable Selection. Our results suggest that although the mean process remains stationary, the probability of extreme fires declines through time, owing to the persistence of high atmospheric moisture content during the peak fire season that dampens the effect of increasing temperatures. Including multiple sources of uncertainty leads to wide prediction intervals, but is potentially more useful for decision-makers who will require adaptation strategies that are robust to rapid but uncertain climate and ecological change.
Propagation of uncertainty and sensitivity analysis in an integral oil-gas plume model
Wang, Shitao; Iskandarani, Mohamed; Srinivasan, Ashwanth; Thacker, W. Carlisle; Winokur, Justin; Knio, Omar M.
2016-05-01
Polynomial Chaos expansions are used to analyze uncertainties in an integral oil-gas plume model simulating the Deepwater Horizon oil spill. The study focuses on six uncertain input parameters—two entrainment parameters, the gas to oil ratio, two parameters associated with the droplet-size distribution, and the flow rate—that impact the model's estimates of the plume's trap and peel heights, and of its various gas fluxes. The ranges of the uncertain inputs were determined by experimental data. Ensemble calculations were performed to construct polynomial chaos-based surrogates that describe the variations in the outputs due to variations in the uncertain inputs. The surrogates were then used to estimate reliably the statistics of the model outputs, and to perform an analysis of variance. Two experiments were performed to study the impacts of high and low flow rate uncertainties. The analysis shows that in the former case the flow rate is the largest contributor to output uncertainties, whereas in the latter case, with the uncertainty range constrained by a posteriori analyses, the flow rate's contribution becomes negligible. The trap and peel height uncertainties are then mainly due to uncertainties in the 95th percentile of the droplet size and in the entrainment parameters.
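The core of a non-intrusive polynomial-chaos surrogate is a least-squares fit of orthogonal polynomials to ensemble runs, after which output statistics follow from the coefficients alone. The sketch below does this for a single uniformly distributed input with a simple nonlinear response standing in for the plume model (the response function is an assumption for illustration):

```python
import numpy as np
from numpy.polynomial import legendre

# Non-intrusive polynomial-chaos surrogate for one uncertain input,
# fitted by regression; output mean/variance come from the coefficients.

def response(xi):
    # Toy nonlinear "model output" (stand-in for, e.g., trap height).
    return np.exp(0.3 * xi) + 0.5 * xi ** 2

rng = np.random.default_rng(1)
xi = rng.uniform(-1.0, 1.0, 2000)       # standardized uncertain input
y = response(xi)

deg = 6
V = legendre.legvander(xi, deg)         # design matrix of P_k(xi)
c, *_ = np.linalg.lstsq(V, y, rcond=None)

# For xi ~ U(-1, 1): E[P_j P_k] = delta_jk / (2k + 1), so the surrogate's
# mean is c_0 and its variance is a weighted sum of squared coefficients.
norms = 1.0 / (2 * np.arange(deg + 1) + 1)
mean_pce = c[0]
var_pce = np.sum(c[1:] ** 2 * norms[1:])

# Cross-check against plain Monte Carlo on the original model.
mc = response(rng.uniform(-1.0, 1.0, 200_000))
```

With several inputs, the same regression runs over a tensorized basis, and grouping the squared coefficients by input yields the analysis of variance used in the study.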
International Nuclear Information System (INIS)
The prompt fission neutron spectrum (PFNS) uncertainties in the n+239Pu fission reaction are used to study the impact on several fast critical assemblies modeled in the MCNP6.1 code. The newly developed sensitivity capability in MCNP6.1 is used to compute the keff sensitivity coefficients with respect to the PFNS. In comparison, the covariance matrix given in the ENDF/B-VII.1 library is decomposed and randomly sampled realizations of the PFNS are propagated through the criticality calculation, preserving the PFNS covariance matrix. The information gathered from both approaches, including the overall keff uncertainty, is statistically analyzed. Overall, the forward and backward approaches agree as expected. The results from the new method appear to be limited by the process used to evaluate the PFNS; this is not necessarily a flaw of the method itself. Final thoughts and directions for future work are suggested.
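The agreement between the two routes can be illustrated on a linearized toy problem: the backward ("sandwich rule") estimate sᵀCs from sensitivity coefficients should match the spread of forward samples drawn so as to preserve the covariance matrix. All numbers below are synthetic; only the propagation logic mirrors the study.

```python
import numpy as np

# Backward (sensitivities x covariance) vs forward (covariance sampling)
# propagation of nuclear-data uncertainty on a toy linearized k_eff.

rng = np.random.default_rng(2)
n = 5                                    # number of spectrum bins (toy)
A = rng.normal(size=(n, n))
cov = A @ A.T / n                        # a valid covariance matrix
s = rng.normal(size=n)                   # k_eff sensitivity coefficients

# Backward propagation: sigma^2 = s^T C s (the "sandwich rule").
var_sandwich = s @ cov @ s

# Forward propagation: sample perturbations that preserve the covariance.
L = np.linalg.cholesky(cov)
d = L @ rng.normal(size=(n, 100_000))
keff = 1.0 + s @ d                       # linear response around k0 = 1
var_sampled = keff.var()
```

For a truly nonlinear transport calculation the two differ by higher-order terms, which is part of what the statistical comparison in the paper probes.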
Analytical propagation of errors in dynamic SPECT: estimators, degrading factors, bias and noise
International Nuclear Information System (INIS)
Dynamic SPECT is a relatively new technique that may potentially benefit many imaging applications. Though similar to dynamic PET, the accuracy and precision of dynamic SPECT parameter estimates are degraded by factors that differ from those encountered in PET. In this work we formulate a methodology for analytically studying the propagation of errors from dynamic projection data to kinetic parameter estimates. This methodology is used to study the relationships between reconstruction estimators, image degrading factors, bias and statistical noise for the application of dynamic cardiac imaging with 99mTc-teboroxime. Dynamic data were simulated for a torso phantom, and the effects of attenuation, detector response and scatter were successively included to produce several data sets. The data were reconstructed to obtain both weighted and unweighted least squares solutions, and the kinetic rate parameters for a two-compartment model were estimated. The expected values and standard deviations describing the statistical distribution of parameters that would be estimated from noisy data were calculated analytically. The results of this analysis present several interesting implications for dynamic SPECT. Statistically weighted estimators performed only marginally better than unweighted ones, implying that more computationally efficient unweighted estimators may be appropriate. This also suggests that it may be beneficial to focus future research efforts upon regularization methods with beneficial bias-variance trade-offs. Other aspects of the study describe the fundamental limits of the bias-variance trade-off regarding physical degrading factors and their compensation. The results characterize the effects of attenuation, detector response and scatter, and they are intended to guide future research into dynamic SPECT reconstruction and compensation methods. (author)
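The weighted-versus-unweighted comparison has a closed form for linear estimators: Cov_WLS = (XᵀΣ⁻¹X)⁻¹ while Cov_OLS = (XᵀX)⁻¹XᵀΣX(XᵀX)⁻¹, and Gauss-Markov guarantees the weighted one can never be worse. The toy design matrix and noise model below are invented for illustration, not the teboroxime kinetics of the paper.

```python
import numpy as np

# Analytical covariance of weighted vs unweighted least squares on a
# toy two-parameter time-activity model with signal-dependent noise.

t = np.linspace(0.5, 10.0, 12)
X = np.column_stack([np.ones_like(t), np.exp(-0.3 * t)])   # design matrix

# Poisson-like noise: measurement variance grows with the expected signal.
signal = X @ np.array([1.0, 5.0])
Sigma = np.diag(signal)

# Weighted least squares: Cov = (X^T W X)^-1 with W = Sigma^-1.
W = np.linalg.inv(Sigma)
cov_wls = np.linalg.inv(X.T @ W @ X)

# Ordinary least squares: the "sandwich" covariance under the same noise.
XtX_inv = np.linalg.inv(X.T @ X)
cov_ols = XtX_inv @ X.T @ Sigma @ X @ XtX_inv
```

How *much* worse OLS is depends on how strongly the noise varies across measurements; the paper's finding that the gap is marginal in practice is what motivates cheaper unweighted estimators.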
Xu, Yanlong
2015-08-01
The coupled mode theory with coupling of diffraction modes and waveguide modes is usually used on the calculations of transmission and reflection coefficients for electromagnetic waves traveling through periodic sub-wavelength structures. In this paper, I extend this method to derive analytical solutions of high-order dispersion relations for shear horizontal (SH) wave propagation in elastic plates with periodic stubs. In the long wavelength regime, the explicit expression is obtained by this theory and derived specially by employing an effective medium. This indicates that the periodical stubs are equivalent to an effective homogenous layer in the long wavelength. Notably, in the short wavelength regime, high-order diffraction modes in the plate and high-order waveguide modes in the stubs are considered with modes coupling to compute the band structures. Numerical results of the coupled mode theory fit pretty well with the results of the finite element method (FEM). In addition, the band structures' evolution with the height of the stubs and the thickness of the plate shows clearly that the method can predict well the Bragg band gaps, locally resonant band gaps and high-order symmetric and anti-symmetric thickness-twist modes for the periodically structured plates. © 2015 Elsevier B.V.
Calibration and Forward Uncertainty Propagation for Large-eddy Simulations of Engineering Flows
Energy Technology Data Exchange (ETDEWEB)
Templeton, Jeremy Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Blaylock, Myra L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Domino, Stefan P. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hewson, John C. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Kumar, Pritvi Raj [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Ling, Julia [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Najm, Habib N. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Ruiz, Anthony [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Safta, Cosmin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Sargsyan, Khachik [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Stewart, Alessia [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Wagner, Gregory [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2015-09-01
The objective of this work is to investigate the efficacy of using calibration strategies from Uncertainty Quantification (UQ) to determine model coefficients for LES. As the target methods are for engineering LES, uncertainty from numerical aspects of the model must also be quantified. The ultimate goal of this research thread is to generate a cost versus accuracy curve for LES such that the cost could be minimized given an accuracy prescribed by an engineering need. Realization of this goal would enable LES to serve as a predictive simulation tool within the engineering design process.
Singh, A.; Serbin, S. P.; Kingdon, C.; Townsend, P. A.
2013-12-01
A major goal of remote sensing, and imaging spectroscopy in particular, is the development of generalizable algorithms to repeatedly and accurately map ecosystem properties such as canopy chemistry across space and time. Existing methods must therefore be tested across a range of measurement approaches to identify and overcome limits to the consistent retrieval of such properties from spectroscopic imagery. Here we illustrate a general approach for the estimation of key foliar biochemical and morphological traits from spectroscopic imagery derived from the AVIRIS instrument and the propagation of errors from the leaf to the image scale using partial least squares regression (PLSR) techniques. Our method involves the integration of three types of data representing different scales of observation. At the image scale, the images were normalized for atmospheric, illumination and BRDF effects. Spectra from field plot locations were extracted from the 51 AVIRIS images and were averaged when the field plot was larger than a single pixel. At the plot level, the scaling was conducted using multiple replicates (1000) derived from the leaf-level uncertainty estimates to generate plot-level estimates with their associated uncertainties. Leaf-level estimates of foliar traits (%N, %C, %Fiber, %Cellulose, %Lignin, LMA) were scaled to the canopy based on relative species composition of each plot. Image spectra were iteratively split into 50/50 randomized calibration-validation datasets and multiple (500) trait-predictive PLSR models were generated, this time sampling from within the plot-level uncertainty distribution. This allowed the propagation of uncertainty from the leaf-level dependent variables to the plot level, and finally to models built using AVIRIS image spectra. Moreover, this method allows us to generate spatially explicit maps of uncertainty in our sampled traits. Both LMA and %N PLSR models had an R2 greater than 0.8, root mean square errors (RMSEs) for both
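The repeated 50/50 split-and-refit loop can be sketched compactly. Ordinary least squares stands in for PLSR below to keep the example dependency-free (an explicit substitution), and all data are synthetic; the point is how an ensemble of models fitted on jittered, randomly split calibration data yields a per-pixel prediction and a per-pixel uncertainty.

```python
import numpy as np

# Ensemble resampling sketch: 500 models from random 50/50 splits, each
# fit on responses jittered within their plot-level uncertainty.

rng = np.random.default_rng(3)
n_plots, n_bands = 80, 10
X = rng.normal(size=(n_plots, n_bands))            # plot-level spectra
beta_true = rng.normal(size=n_bands)
y = X @ beta_true + rng.normal(scale=0.5, size=n_plots)   # e.g. %N

models = []
for _ in range(500):
    idx = rng.permutation(n_plots)
    cal = idx[: n_plots // 2]                      # random 50/50 split
    # Sample the response within its plot-level uncertainty before fitting.
    y_cal = y[cal] + rng.normal(scale=0.1, size=cal.size)
    coef, *_ = np.linalg.lstsq(X[cal], y_cal, rcond=None)
    models.append(coef)

models = np.array(models)
pixel = rng.normal(size=n_bands)                   # one "image pixel"
preds = models @ pixel
est, unc = preds.mean(), preds.std()               # trait map + uncertainty map
```

Applying `models` to every pixel of an image gives the spatially explicit trait and uncertainty maps the abstract describes.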
Directory of Open Access Journals (Sweden)
Elvis Joacir de França
2006-01-01
Instrumental neutron activation analysis (INAA) is a measurement technique of high metrological level for the determination of chemical elements. In the context of the BIOTA/FAPESP Program, leaves of trees have been evaluated by INAA for biomonitoring of the Atlantic Forest. To assure the comparability of results in environmental studies, a leaf sample of Marlierea tomentosa (Myrtaceae family) showing the lowest concentrations of chemical elements was selected for evaluating the analytical quality of the determination under the most unfavorable conditions. The homogeneity of the chemical concentrations in the sample was confirmed at the 95% confidence level, and INAA presented a repeatability of 2% for the determination of Br, Co, Cs, Fe, K, Na, Rb and Sr, although the uncertainty may have been overestimated. To evaluate the uncertainty due to the variability of chemical concentrations in the sample, the jackknife and bootstrap methods were used to estimate the maximum expected percent standard deviation. The uncertainty budget was considered adequate for reporting chemical concentrations of environmental samples determined by INAA.
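Both resampling estimators mentioned above are a few lines each. The replicate values below are invented for illustration; for the mean, the jackknife standard error in fact reproduces the classical s/√n exactly, while the bootstrap approaches it as the number of resamples grows.

```python
import numpy as np

# Bootstrap and jackknife estimates of the standard deviation of a mean
# concentration (toy replicate data standing in for INAA results).

rng = np.random.default_rng(4)
x = rng.normal(loc=12.0, scale=0.3, size=10)   # replicate results, mg/kg

# Bootstrap: resample the replicates with replacement, many times.
boot = np.array([rng.choice(x, size=x.size, replace=True).mean()
                 for _ in range(5000)])
sd_boot = boot.std(ddof=1)

# Jackknife: leave one replicate out at a time.
n = x.size
loo = np.array([np.delete(x, i).mean() for i in range(n)])
sd_jack = np.sqrt((n - 1) / n * np.sum((loo - loo.mean()) ** 2))

# Classical standard error of the mean, for comparison.
sd_classic = x.std(ddof=1) / np.sqrt(n)
```

For statistics other than the mean (e.g. a maximum expected deviation, as in the study), the jackknife and bootstrap no longer coincide with a textbook formula, which is precisely when they earn their keep.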
Institute of Scientific and Technical Information of China (English)
ZHAO Yan-Zhong; SUN Hua-Yan; ZHENG Yong-Hui
2011-01-01
Based on the generalized diffraction integral formula and the idea that the angle misalignment of the cat-eye optical lens can be transformed into a displacement misalignment, an approximate analytical propagation formula for Gaussian beams through a cat-eye optical lens under large incidence angle conditions is derived. Numerical results show that the diffraction effect of the apertures of the cat-eye optical lens becomes stronger as the incidence angle increases. The results are also compared with those from an angular spectrum diffraction integral and from experiment to illustrate the applicability and validity of our theoretical formula. It is shown that the approximation is good enough for the application of a cat-eye optical lens with a radius of 20 mm and a propagation distance of 100 m, and that the approximation improves as the radius of the cat-eye optical lens and the propagation distance increase.
Gustafsson, Johan; Brolin, Gustav; Cox, Maurice; Ljungberg, Michael; Johansson, Lena; Sjögreen Gleisner, Katarina
2015-11-01
A computer model of a patient-specific clinical 177Lu-DOTATATE therapy dosimetry system is constructed and used for investigating the variability of renal absorbed dose and biologically effective dose (BED) estimates. As patient models, three anthropomorphic computer phantoms coupled to a pharmacokinetic model of 177Lu-DOTATATE are used. Aspects included in the dosimetry-process model are the gamma-camera calibration via measurement of the system sensitivity, selection of imaging time points, generation of mass-density maps from CT, SPECT imaging, volume-of-interest delineation, calculation of absorbed-dose rate via a combination of local energy deposition for electrons and Monte Carlo simulations of photons, curve fitting and integration to absorbed dose and BED. By introducing variabilities in these steps the combined uncertainty in the output quantity is determined. The importance of different sources of uncertainty is assessed by observing the decrease in standard deviation when removing a particular source. The obtained absorbed dose and BED standard deviations are approximately 6% and slightly higher if considering the root mean square error. The most important sources of variability are the compensation for partial volume effects via a recovery coefficient and the gamma-camera calibration via the system sensitivity.
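The source-removal strategy for ranking uncertainty contributions generalizes readily: run the simulation chain with all variability sources active, then rerun with one source frozen at a time and record the drop in output spread. The relative standard deviations below are invented placeholders, not the paper's values; only the ranking logic is illustrated.

```python
import numpy as np

# Importance ranking by source removal: switch off one variability
# source at a time and watch the decrease in output standard deviation.

rng = np.random.default_rng(5)
n = 100_000
sources = {
    "calibration": 0.04,   # relative SD, system-sensitivity calibration
    "recovery":    0.04,   # partial-volume recovery coefficient
    "curve_fit":   0.01,   # time-activity curve fitting
}

def simulate(off=None):
    dose = np.full(n, 1.0)
    for name, sd in sources.items():
        if name != off:                       # freeze the removed source
            dose *= 1.0 + rng.normal(scale=sd, size=n)
    return dose.std() / dose.mean()           # relative SD of the output

base = simulate()
importance = {name: base - simulate(off=name) for name in sources}
```

Because independent relative variances add approximately in quadrature, removing a small source barely moves the total, which is why the calibration and recovery-coefficient steps dominate in such an analysis.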
Alhassan, Erwin; Duan, Junfeng; Gustavsson, Cecilia; Koning, Arjan; Pomp, Stephan; Rochman, Dimitri; Österlund, Michael
2013-01-01
Analyses are carried out to assess the impact of nuclear data uncertainties on keff for the European Lead Cooled Training Reactor (ELECTRA) using the Total Monte Carlo method. A large number of Pu-239 random ENDF-formatted libraries generated using the TALYS-based system were processed into ACE format with the NJOY99.336 code and used as input to the Serpent Monte Carlo neutron transport code to obtain the distribution in keff. The keff distribution obtained was compared with the latest major nuclear data libraries - JEFF-3.1.2, ENDF/B-VII.1 and JENDL-4.0. A method is proposed for the selection of benchmarks for specific applications using the Total Monte Carlo approach. Finally, an accept/reject criterion was investigated based on chi-square values obtained using the Pu-239 Jezebel criticality benchmark. It was observed that nuclear data uncertainties in keff were reduced considerably, from 748 to 443 pcm, by applying a more rigid acceptance criterion for accepting random files.
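The accept/reject step is easy to sketch: each random nuclear-data file yields both a keff for the target system and a chi-square against a benchmark, and rejecting files with poor chi-square narrows the keff distribution. Everything below is synthetic (the perturbation model, the 300 pcm benchmark tolerance, the correlation between systems); only the selection logic mirrors the paper.

```python
import numpy as np

# Accept/reject on random nuclear-data files using a benchmark chi-square.

rng = np.random.default_rng(6)
n_files = 2000
delta = rng.normal(scale=1.0, size=n_files)   # per-file data perturbation

keff = 1.0 + 0.00748 * delta                  # toy response, ~748 pcm spread
benchmark = 1.0 + 0.00750 * delta             # same data drive the benchmark
chi2 = ((benchmark - 1.0) / 0.0030) ** 2      # benchmark k_eff = 1 +/- 300 pcm

accepted = keff[chi2 < 1.0]                   # rigid acceptance criterion
spread_all = keff.std() * 1e5                 # pcm
spread_acc = accepted.std() * 1e5
```

The reduction only works to the extent that the same underlying data uncertainties drive both the benchmark and the target system, which is why benchmark selection is itself part of the proposed method.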
Gates, Robert L
2015-01-01
This work proposes a scheme for significantly reducing the computational complexity of discretized problems involving the non-smooth forward propagation of uncertainty by combining the adaptive hierarchical sparse grid stochastic collocation method (ALSGC) with a hierarchy of successively finer spatial discretizations (e.g. finite elements) of the underlying deterministic problem. To achieve this, we build strongly upon ideas from the Multilevel Monte Carlo method (MLMC), which represents a well-established technique for the reduction of computational complexity in problems affected by both deterministic and stochastic error contributions. The resulting approach is termed the Multilevel Adaptive Sparse Grid Collocation (MLASGC) method. Preliminary results for a low-dimensional, non-smooth parametric ODE problem are promising: the proposed MLASGC method exhibits an error/cost-relation of $\\varepsilon \\sim t^{-0.95}$ and therefore significantly outperforms the single-level ALSGC ($\\varepsilon \\sim t^{-0.65}$) a...
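The multilevel idea borrowed from MLMC can be shown in miniature with two levels: many cheap coarse-discretization samples plus a few coupled coarse/fine correction pairs. The "solver" below is a trapezoidal quadrature of a random integrand standing in for a discretized PDE (an assumption for illustration); the key property is that the level-correction variance is far smaller than the single-level variance.

```python
import numpy as np

# Two-level multilevel Monte Carlo sketch: E[f_fine] is estimated as
# E[f_coarse] (many cheap samples) + E[f_fine - f_coarse] (few pairs).

rng = np.random.default_rng(7)

def solve(a, h):
    # "Discretized" quantity of interest: trapezoid rule for
    # the integral of exp(a*x) over [0, 1] with mesh width h.
    x = np.linspace(0.0, 1.0, int(round(1.0 / h)) + 1)
    y = np.exp(a * x)
    return h * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

coarse = np.array([solve(rng.uniform(0.5, 1.5), 0.25) for _ in range(4000)])

a_corr = rng.uniform(0.5, 1.5, 200)           # coupled pairs: same sample
corr = np.array([solve(a, 0.0125) - solve(a, 0.25) for a in a_corr])

est = coarse.mean() + corr.mean()

# Reference value E_a[(e^a - 1)/a] for a ~ U(0.5, 1.5), by dense quadrature.
aa = np.linspace(0.5, 1.5, 100_001)
truth = np.mean((np.exp(aa) - 1.0) / aa)
```

Because the correction samples have tiny variance, only a handful of expensive fine solves are needed, which is exactly the cost reduction the MLASGC method pushes further with adaptive sparse grids in the stochastic dimension.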
Applied Analytical Methods for Solving Some Problems of Wave Propagation in the Coastal Areas
Gagoshidze, Shalva; Kodua, Manoni
2016-04-01
Analytical methods, easy to apply, are proposed for the solution of the following four classical problems of coastline hydromechanics: 1. Refraction of waves on coast slopes of arbitrary steepness; 2. Wave propagation in tapering water areas; 3. Longitudinal waves in open channels; 4. Long waves on uniform and non-uniform flows of water. The first three of these problems are solved by the direct Galerkin-Kantorovich method with a choice of basis functions which completely satisfy all boundary conditions. This approach leads to new evolutionary equations which can be solved asymptotically by the WKB method. The WKB solution of the first problem enables us to easily determine the three-dimensional field of velocities and to construct the refraction picture of the wave surface near a coast having an arbitrary angle of slope to the horizon, varying from 0° to 180°. This solution, in particular for a vertical cliff, fully agrees with Stoker's particular but difficult solution. Moreover, it is shown for the first time that our Schrödinger-type evolutionary equation leads to the formation of so-called "potential wells" if the angle of the coast slope to the horizon exceeds 45°, while the angle given at infinity (i.e. at a large distance from the shore) between the wave crests and the coastline exceeds 75°. This theoretical result, expressed in terms of elementary functions, is well consistent with experimental observations and with many aerial photographs of waves in the coastal zones of the oceans [1,2]. For the second problem we introduce the notions of "wide" and "narrow" water areas. It is shown that Green's law on the wave height growth holds only for the narrow part of the water area, whereas in the wide part the tapering of the water area leads to an insignificant decrease of the wave height. For the third problem, the bank slopes of trapezoidal channels are assumed to have an arbitrary angle of steepness. So far we have known the
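For reference, Green's law, invoked for the "narrow" part of the tapering water area, is usually quoted in the standard shallow-water form (this is the textbook statement, not the paper's derivation):

```latex
% Green's law: growth of wave height H under slowly varying
% channel width b and water depth h.
\[
  \frac{H_2}{H_1}
  = \left(\frac{b_1}{b_2}\right)^{1/2}
    \left(\frac{h_1}{h_2}\right)^{1/4}
\]
```

A narrowing channel (b₂ < b₁) or shoaling depth (h₂ < h₁) thus amplifies the wave height; the paper's contribution is to show when this scaling actually applies.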
Bilionis, Ilias; Gonzalez, Marcial
2016-01-01
The prohibitive cost of performing Uncertainty Quantification (UQ) tasks with a very large number of input parameters can be addressed if the response exhibits some special structure that can be discovered and exploited. Several physical responses exhibit a special structure known as an active subspace (AS), a linear manifold of the stochastic space characterized by maximal response variation. The idea is that one should first identify this low-dimensional manifold, project the high-dimensional input onto it, and then link the projection to the output. In this work, we develop a probabilistic version of AS which is gradient-free and robust to observational noise. Our approach relies on a novel Gaussian process regression with built-in dimensionality reduction, in which the AS is represented as an orthogonal projection matrix that serves as yet another covariance-function hyper-parameter to be estimated from the data. To train the model, we design a two-step maximum likelihood optimization procedure that ensures the ...
Camici, Stefania; Tito Aronica, Giuseppe; Tarpanelli, Angelica; Moramarco, Tommaso
2013-04-01
Hydraulic models are an essential tool in many fields, e.g. civil engineering, flood hazard and risk assessments, evaluation of flood control measures, etc. Nowadays many models of different complexity are available, both in their mathematical foundations and spatial dimensions, and most of them are comparatively easy to operate due to sophisticated tools for model setup and control. However, the calibration of these models is still underdeveloped in contrast to other models, e.g. hydrological models or models used in ecosystem analysis. This has basically two reasons. First, the lack of relevant data necessary for model calibration: flood events are very rarely monitored, due to the disturbances they inflict and the lack of appropriate measuring equipment. The second reason is related to the choice of suitable performance measures for calibrating the model and evaluating its predictions in a credible and consistent way (and for reducing the uncertainty). This study takes a well-documented flood event of November 2012 in the Paglia river basin (Central Italy). For this area a detailed description of the main channel morphology, obtained from accurate topographical surveys and from a DEM with a spatial resolution of 2 m, and several points within the floodplain areas at which the maximum water level had been measured, were available for the post-event analysis. On the basis of this information, a two-dimensional inertial finite-element hydraulic model was set up and calibrated using different performance measures. The Manning roughness coefficients obtained from the different calibrations were then used for the delineation of inundation maps, also including uncertainty. The water levels of three hydrometric stations and the flooded-area extents, derived by video recording the day after the flood event, were used for the validation of the model.
Mani, Ali; Zangle, Thomas A; Santiago, Juan G
2009-04-01
We develop two models to describe ion transport in variable-height micro- and nanochannels. For the first model, we obtain a one-dimensional (unsteady) partial differential equation governing flow and charge transport through a shallow and wide electrokinetic channel. In this model, the effects of the electric double layer (EDL) on axial transport are taken into account using exact solutions of the Poisson-Boltzmann equation. The second, simpler model, which is approachable analytically, assumes that the EDLs are confined to near-wall regions. Using a characteristics analysis, we show that the latter model captures concentration polarization (CP) effects and provides useful insight into its dynamics. Two distinct CP regimes are identified: CP with propagation, in which enrichment and depletion shocks propagate outward, and CP without propagation, where polarization effects stay local to micro-nanochannel interfaces. The existence of each regime is found to depend on a nanochannel Dukhin number and the mobility of the co-ion nondimensionalized by the electroosmotic mobility. Interestingly, microchannel dimensions and axial diffusion are found to play an insignificant role in determining whether CP propagates. The steady-state condition of propagating CP is shown to be controlled by channel heights, surface chemistry, and co-ion mobility instead of the reservoir condition. Both models are validated against experimental results in Part II of this two-paper series.
Gosset, M.; Roca, R.
2012-04-01
The use of satellite-based rainfall in research or operational hydrological applications is becoming more and more frequent. This is especially true in the Tropics, where ground-based gauge (or radar) networks are generally scarce and degrading. The new French-Indian satellite mission Megha-Tropiques (MT), dedicated to the water and energy budget in the tropical atmosphere, will contribute to better monitoring of rainfall in the inter-tropical zone. As part of this mission, research is being developed on the use of MT rainfall products for hydrological research or operational applications such as flood monitoring. A key issue for such applications is how to account for rainfall-product biases and uncertainties, and how to propagate them into the end-user models. Another important question is how to choose the best space-time resolution for the rainfall forcing, given that both model performance and rain-product uncertainties are resolution dependent. This talk will present ongoing investigations and perspectives on this subject, with examples from the Megha-Tropiques ground-validation sites. Several sensitivity studies have been carried out in the Oueme basin in Benin, West Africa, one of the instrumented basins that will be used for MT direct and hydrological validation.
Directory of Open Access Journals (Sweden)
Ramin Shamshiri
2014-01-01
Wave propagation and heat distribution are both governed by second-order linear constant-coefficient partial differential equations, yet their solutions have very different properties. This study presents a comprehensive comparison between the hyperbolic wave equation and the parabolic heat equation. Issues such as conservation of the wave profile versus averaging, transport of information, finite versus infinite propagation speed, time reversibility versus irreversibility, and propagation of singularities versus instantaneous smoothing are addressed and followed by examples and graphical evidence from computer simulations to support the arguments.
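The contrast drawn above can be reproduced in a few lines. Below, the one-way wave equation is advanced by an exact profile shift on a periodic grid, while the heat equation is advanced by its exact Gaussian-kernel multiplier in Fourier space; the grid sizes and diffusivity are arbitrary choices for illustration.

```python
import numpy as np

# Wave equation transports the profile unchanged; heat equation smooths it.

n = 256
x = np.linspace(0.0, 1.0, n, endpoint=False)   # periodic domain [0, 1)
u0 = np.exp(-200.0 * (x - 0.3) ** 2)           # sharp initial bump

# One-way wave u_t + c u_x = 0: u(x, t) = u0(x - c t), a pure translation.
shift = 40                                     # c*t expressed in grid cells
u_wave = np.roll(u0, shift)

# Heat u_t = nu u_xx: Fourier modes decay as exp(-nu (2 pi k)^2 t).
k = np.fft.fftfreq(n, d=1.0 / n)               # integer wavenumbers
nu_t = 1e-4
u_heat = np.fft.ifft(np.fft.fft(u0)
                     * np.exp(-4.0 * np.pi**2 * k**2 * nu_t)).real
```

The wave solution keeps its peak amplitude exactly (reversible transport), while the heat solution immediately lowers the peak yet conserves the total "mass" (the k = 0 mode) — the averaging-versus-conservation dichotomy the study examines.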
International Nuclear Information System (INIS)
This paper summarizes the results of the dynamic response analysis of the Zion reactor containment building using three different soil-structure interaction (SSI) analytical procedures which are: the substructure method, CLASSI; the equivalent linear finite element approach, ALUSH; and the nonlinear finite element procedure, DYNA3D. Uncertainties in analyzing a soil-structure system due to SSI analysis procedures were investigated. Responses at selected locations in the structure were compared through peak accelerations and response spectra
DEFF Research Database (Denmark)
Jurado-Navas, Antonio
2015-01-01
Recently, a new and generalized statistical model, called Málaga or simply M distribution, has been proposed to characterize the irradiance fluctuations of an unbounded optical wavefront (plane and spherical waves) propagating through a turbulent medium under all irradiance fluctuation conditions...
Analytical approach of laser beam propagation in the hollow polygonal light pipe.
Zhu, Guangzhi; Zhu, Xiao; Zhu, Changhong
2013-08-10
An analytical method for studying the light distribution properties on the output end of a hollow n-sided polygonal light pipe fed by a light source with a Gaussian distribution is developed. The mirror transformation matrices and a special algorithm for removing void virtual images are created to acquire the location and direction vector of each effective virtual image on the entrance plane. The analytical method is verified by Monte Carlo ray tracing. At the same time, four typical cases are discussed. The analytical results indicate that the uniformity of light distribution varies with the structural and optical parameters of the hollow n-sided polygonal light pipe and of the Gaussian light source. The analytical approach will be useful for designing and selecting the hollow n-sided polygonal light pipe, especially for high-power laser beam homogenization techniques.
Uncertainty in soil-structure interaction analysis arising from differences in analytical techniques
International Nuclear Information System (INIS)
This study addresses uncertainties arising from variations in different modeling approaches to soil-structure interaction of massive structures at a nuclear power plant. To perform a comprehensive systems analysis, it is necessary to quantify, for each phase of the traditional analysis procedure, both the realistic seismic response and the uncertainties associated with it. In this study two linear soil-structure interaction techniques were used to analyze the Zion, Illinois nuclear power plant: a direct method using the FLUSH computer program and a substructure approach using the CLASSI family of computer programs. In-structure response from two earthquakes, one real and one synthetic, was compared. Structure configurations from relatively simple to complicated multi-structure cases were analyzed. The resulting variations help quantify uncertainty in structural response due to analysis procedures.
Energy Technology Data Exchange (ETDEWEB)
Segre, S.E. [Rome Univ. 2. Tor Vergata, Rome (Italy). Istituto Nazionale Fisica della Materia, Dipartimento di Fisica
2001-07-01
The known analytic expressions for the evolution of the polarization of electromagnetic waves propagating in a plasma with a uniformly sheared magnetic field are extended to the case where the shear is not constant. Exact analytic expressions are found for the case when the space variations of the medium are such that the magnetic field components and the plasma density satisfy a particular condition (eq. 13), possibly in a convenient reference frame of polarization space (the Poincare space).
Ortman, Robert L.; Carr, Domenic A.; James, Ryan; Long, Daniel; O'Shaughnessy, Matthew R.; Valenta, Christopher R.; Tuell, Grady H.
2016-05-01
We have developed a prototype real-time computer for a bathymetric lidar capable of producing point clouds attributed with total propagated uncertainty (TPU). This real-time computer employs a "mixed-mode" architecture comprised of an FPGA, CPU, and GPU. Noise reduction and ranging are performed in the digitizer's user-programmable FPGA, and coordinates and TPU are calculated on the GPU. A Keysight M9703A digitizer with user-programmable Xilinx Virtex 6 FPGAs digitizes as many as eight channels of lidar data, performs ranging, and delivers the data to the CPU via PCIe. The floating-point-intensive coordinate and TPU calculations are performed on an NVIDIA Tesla K20 GPU. Raw data and computed products are written to an SSD RAID, and an attributed point cloud is displayed to the user. This prototype computer has been tested using 7m-deep waveforms measured at a water tank on the Georgia Tech campus, and with simulated waveforms to a depth of 20m. Preliminary results show the system can compute, store, and display about 20 million points per second.
DEFF Research Database (Denmark)
Rasmussen, Anders Rønne; Sørensen, Mads Peter; Gaididei, Yuri Borisovich;
2008-01-01
A wave equation that governs finite-amplitude acoustic disturbances in a thermoviscous Newtonian fluid, and includes nonlinear terms up to second order, is proposed. In contrast to the model known as the Kuznetsov equation, the proposed nonlinear wave equation preserves the Hamiltonian structure of the fundamental fluid dynamical equations in the non-dissipative limit. An exact traveling front solution is obtained from a generalized traveling wave assumption. This solution is, in an overall sense, equivalent to the Taylor shock solution of the Burgers equation. However, in contrast to the Burgers equation, the model equation considered here is capable of describing waves propagating in opposite directions. Owing to the Hamiltonian structure of the proposed model equation, the front solution is in agreement with the classical Rankine-Hugoniot relations. The exact front solution propagates at supersonic speed...
An Analytic Solution to the Propagation of Cylindrical Blast Waves in a Radiative Gas
Directory of Open Access Journals (Sweden)
B.G Verma
1977-01-01
Full Text Available In this paper, we have obtained a set of non-similarity solutions in closed form for the propagation of a cylindrical blast wave in a radiative gas. An explosion in a gas of constant density and pressure has been considered, assuming the existence of an initial uniform magnetic field in the axial direction. The disturbance is supposed to be headed by a shock surface of variable strength, and the total energy of the wave varies with time.
Accounting for the analytical properties of the quark propagator from Dyson-Schwinger equation
Dorkin, S M; Kampfer, B
2014-01-01
An approach based on combined solutions of the Bethe-Salpeter (BS) and Dyson-Schwinger (DS) equations within the ladder-rainbow approximation in the presence of singularities is proposed to describe the meson spectrum as quark antiquark bound states. We consistently implement into the BS equation the quark propagator functions from the DS equation, with and without pole-like singularities, and show that, by knowing the precise positions of the poles and their residues, one is able to develop reliable methods of obtaining finite interaction BS kernels and to solve the BS equation numerically. We show that, for bound states with masses $M < 1$ GeV, the quark propagator functions are free of such singularities; for $M > 1$ GeV, however, the propagator functions reveal pole-like structures. Consequently, for each type of mesons (unflavored, strange and charmed) we analyze the relevant intervals of $M$ where the pole-like singularities of the corresponding quark propagator influence the solution of the BS equation and develop a framework within which they can be consistently accounted for. The...
Raupach, Rainer; Flohr, Thomas G
2011-04-01
We analyze the signal and noise propagation of differential phase-contrast computed tomography (PCT) compared with conventional attenuation-based computed tomography (CT) from a theoretical point of view. This work focuses on grating-based differential phase-contrast imaging. A mathematical framework is derived that is able to analytically predict the relative performance of both imaging techniques in the sense of the relative contrast-to-noise ratio for the contrast of any two materials. Two fundamentally different properties of PCT compared with CT are identified. First, the noise power spectra show qualitatively different characteristics, implying a resolution-dependent performance ratio. The break-even point is derived analytically as a function of system parameters such as geometry and visibility. A superior performance of PCT compared with CT can only be achieved at a sufficiently high spatial resolution. Second, due to the periodicity of phase information, which is non-ambiguous only in a bounded interval, statistical phase wrapping can occur. This effect causes a collapse of information propagation for low signals, which limits the applicability of phase-contrast imaging at low dose. PMID:21403187
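The statistical phase-wrapping effect described above can be illustrated with a small sketch (not from the paper; the true phase, noise levels, and sample size are arbitrary): once noise pushes phase samples across the ±π boundary, a naive arithmetic mean of the wrapped values becomes biased, while a circular mean does not.

```python
import numpy as np

rng = np.random.default_rng(0)
true_phase = 1.0          # rad (hypothetical signal phase)
n = 100_000

def wrapped_means(sigma):
    noisy = true_phase + rng.normal(0.0, sigma, n)
    wrapped = np.angle(np.exp(1j * noisy))                # wraps into (-pi, pi]
    naive = wrapped.mean()                                # biased once wrapping occurs
    circular = np.angle(np.exp(1j * wrapped).mean())      # circular (resultant) mean
    return naive, circular

naive_lo, circ_lo = wrapped_means(0.1)   # low noise: hardly any wrapping
naive_hi, circ_hi = wrapped_means(1.5)   # high noise: frequent wrapping, naive mean biased
```

At low noise both estimators recover the true phase; at high noise a noticeable fraction of samples wraps past +π to negative values and drags the naive mean well below 1.0, mimicking the collapse of information propagation at low signal levels.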
Energy Technology Data Exchange (ETDEWEB)
Raupach, Rainer; Flohr, Thomas G, E-mail: rainer.raupach@siemens.com [Siemens AG Healthcare Sector, H IM CT R and D PA, Siemensstrasse 1, D-91301 Forchheim (Germany)
2011-04-07
We analyze the signal and noise propagation of differential phase-contrast computed tomography (PCT) compared with conventional attenuation-based computed tomography (CT) from a theoretical point of view. This work focuses on grating-based differential phase-contrast imaging. A mathematical framework is derived that is able to analytically predict the relative performance of both imaging techniques in the sense of the relative contrast-to-noise ratio for the contrast of any two materials. Two fundamentally different properties of PCT compared with CT are identified. First, the noise power spectra show qualitatively different characteristics, implying a resolution-dependent performance ratio. The break-even point is derived analytically as a function of system parameters such as geometry and visibility. A superior performance of PCT compared with CT can only be achieved at a sufficiently high spatial resolution. Second, due to the periodicity of phase information, which is non-ambiguous only in a bounded interval, statistical phase wrapping can occur. This effect causes a collapse of information propagation for low signals, which limits the applicability of phase-contrast imaging at low dose.
International Nuclear Information System (INIS)
A characteristic that sets radioactivity measurements apart from most spectrometries is that the precision of a single determination can be estimated from Poisson statistics. This easily calculated counting uncertainty permits the detection of other sources of uncertainty by comparing observed with a priori precision. A good way to test the many underlying assumptions in radiochemical measurements is to strive for high accuracy. For example, a measurement by instrumental neutron activation analysis (INAA) of gold film thickness in our laboratory revealed the need for pulse pileup correction even at modest dead times. Recently, the International Organization for Standardization (ISO) and other international bodies have formalized the quantitative determination and statement of uncertainty so that the weaknesses of each measurement are exposed for improvement. In the INAA certification measurement of ion-implanted arsenic in silicon (Standard Reference Material 2134), we recently achieved an expanded (95 % confidence) relative uncertainty of 0.38 % for 90 ng of arsenic per sample. A complete quantitative error analysis was performed. This measurement meets the CCQM definition of a primary ratio method. (author)
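The comparison of a priori Poisson precision with observed precision can be sketched in a few lines (the count level and number of replicates here are illustrative, not from the abstract): the predicted scatter of a determination with N counts is √N, and excess observed scatter flags additional error sources.

```python
import numpy as np

rng = np.random.default_rng(42)
mean_counts = 10_000
replicates = rng.poisson(mean_counts, size=200)   # 200 repeated determinations

a_priori = np.sqrt(replicates.mean())             # Poisson prediction: sigma ~ sqrt(N)
observed = replicates.std(ddof=1)                 # empirically observed scatter

# Relative counting uncertainty of a single ~10,000-count determination:
# 1/sqrt(N) = 1 %. If observed >> a_priori, non-counting error sources exist.
rel_unc = 1.0 / np.sqrt(replicates.mean())
```

For purely Poisson data, as simulated here, the observed scatter agrees with the √N prediction within sampling noise; in a real laboratory comparison, a significant excess would prompt a search for effects such as pulse pileup.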
International Nuclear Information System (INIS)
The relativistic dispersion relation of a nearly perpendicularly injected electron cyclotron wave is solved in different regions. The coupling of the O-mode and the X-mode is described by a correct expression qualitatively different from that obtained from the non-relativistic approximation. The damping factor shows that wave absorption is due to two mechanisms: the relativistic O-mode damping and the coupled X-mode damping. Analytic expressions for both damping mechanisms are obtained.
Directory of Open Access Journals (Sweden)
M. W. Rotach
2012-08-01
Full Text Available D-PHASE was a Forecast Demonstration Project of the World Weather Research Programme (WWRP related to the Mesoscale Alpine Programme (MAP. Its goal was to demonstrate the reliability and quality of operational forecasting of orographically influenced (determined) precipitation in the Alps and its consequences on the distribution of run-off characteristics. A special focus was, of course, on heavy-precipitation events.
The D-PHASE Operations Period (DOP ran from June to November 2007, during which an end-to-end forecasting system was operated covering many individual catchments in the Alps, with their water authorities, civil protection organizations or other end users. The forecasting system's core piece was a Visualization Platform where precipitation and flood warnings from some 30 atmospheric and 7 hydrological models (both deterministic and probabilistic) and corresponding model fields were displayed in uniform and comparable formats. Also, meteograms, nowcasting information and end user communication were made available to all the forecasters, users and end users. D-PHASE information was assessed and used by some 50 different groups ranging from atmospheric forecasters to civil protection authorities or water management bodies.
In the present contribution, D-PHASE is briefly presented along with its outstanding scientific results and, in particular, the lessons learnt with respect to uncertainty propagation. A particular focus is on the transfer of ensemble prediction information into the hydrological community and its use with respect to other aspects of societal impact. Objective verification of forecast quality is contrasted to subjective quality assessments during the project (end user workshops, questionnaires), and some general conclusions concerning forecast demonstration projects are drawn.
International Nuclear Information System (INIS)
Starting from the path-integral representation for the electron propagator without fermion loops in QED, we analytically investigate the strong-coupling behavior in an arbitrary background electromagnetic field through a series expansion in powers of 1/e. Contrary to the perturbation theory expansion in e, the new series only contains positive powers of the derivative operator p. Due to infrared singularities in the path integral, the series does not exist beyond the lowest orders, although one can build a systematic expansion in powers of p (not 1/e) which can be calculated up to any order. To handle infinities we regularize using a Pauli-Villars approach. The introduction of fermion loops would not correspond to higher orders in 1/e, so a priori our results are only pertinent to the sector of QED we have chosen. 17 refs., 1 fig
International Nuclear Information System (INIS)
Highlights: ► Response of RC structures to macrocell corrosion of a rebar is studied analytically. ► The problem is solved prior to the onset of microcrack propagation. ► Suitable Love's potential functions are used to study the steel-rust-concrete media. ► The role of crucial factors on the time of onset of concrete cracking is examined. ► The effect of vital factors on the maximum radial stress of concrete is explored. - Abstract: Assessment of the macrocell corrosion which deteriorates reinforced concrete (RC) structures has attracted the attention of many researchers during recent years. In this type of rebar corrosion, the reduction in cross-section of the rebar is significantly accelerated due to the large ratio of the cathode's area to the anode's area. In order to examine the problem, an analytical solution is proposed for prediction of the response of the RC structure from the time of steel depassivation to the stage just prior to the onset of microcrack propagation. To this end, a circular cylindrical RC member under axisymmetric macrocell corrosion of the reinforcement is considered. Both cases of symmetric and asymmetric rebar corrosion along the length of the anode zone are studied. According to the experimentally observed data, corrosion products are modeled as a thin layer with a nonlinear stress–strain relation. The exact expressions of the elastic fields associated with the steel and concrete media are obtained using Love's potential function. By imposing the boundary conditions, the resulting set of nonlinear equations is solved in each time step by Newton's method. The effects of the key parameters which have a dominating role in the time of the onset of concrete cracking and the maximum radial stress field of the concrete have been examined.
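The abstract states that the nonlinear equations are solved at each time step by Newton's method. A generic sketch of that solver is below; the two-equation toy system is purely hypothetical and stands in for the paper's actual rust-layer equations, which are not given here.

```python
import numpy as np

def newton(f, jac, x0, tol=1e-10, max_iter=50):
    """Newton's method for a nonlinear system f(x) = 0 with Jacobian jac(x)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        step = np.linalg.solve(jac(x), f(x))   # solve J * step = f
        x = x - step
        if np.linalg.norm(step) < tol:
            break
    return x

# Hypothetical toy system (NOT the paper's corrosion equations):
# intersection of the unit circle with the line x = y.
f = lambda x: np.array([x[0] ** 2 + x[1] ** 2 - 1.0, x[0] - x[1]])
jac = lambda x: np.array([[2 * x[0], 2 * x[1]], [1.0, -1.0]])
root = newton(f, jac, [1.0, 0.5])   # converges to (1/sqrt(2), 1/sqrt(2))
```

In the paper's setting, f would collect the boundary-condition residuals at a given time step and the previous step's solution would serve as the starting guess x0.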
Rouze, Ned C; Palmeri, Mark L; Nightingale, Kathryn R
2015-08-01
Recent measurements of shear wave propagation in viscoelastic materials have been analyzed by constructing the two-dimensional Fourier transform (2D-FT) of the spatial-temporal shear wave signal and using an analysis procedure derived under the assumption that the wave is described as a plane wave, or as the asymptotic form of a wave expanding radially from a cylindrically symmetric source. This study presents an exact, analytic expression for the 2D-FT description of shear wave propagation in viscoelastic materials following asymmetric Gaussian excitations and uses this expression to evaluate the bias in 2D-FT measurements obtained using the plane or cylindrical wave assumptions. A wide range of biases are observed depending on specific values of frequency, aspect ratio R of the source asymmetry, and material properties. These biases can be reduced significantly by weighting the shear wave signal in the spatial domain to correct for the geometric spreading of the shear wavefront using a factor of x^p. The optimal weighting power p is found to be near the theoretical value of 0.5 for the case of a cylindrical source with R = 1, and decreases for asymmetric sources with R > 1.
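The x^p spatial weighting can be illustrated with a synthetic example (the wave speed, geometry, and Gaussian pulse below are invented for illustration, not taken from the study): a cylindrically spreading pulse decays as 1/√x, and multiplying by x^0.5 before the 2D-FT restores a spatially uniform amplitude.

```python
import numpy as np

c = 2.0                                   # hypothetical shear wave speed (m/s)
x = np.linspace(0.005, 0.02, 64)          # lateral positions (m)
t = np.linspace(0.0, 0.02, 256)           # slow time (s)
X, T = np.meshgrid(x, t, indexing="ij")

# Synthetic cylindrically spreading pulse: amplitude falls off as 1/sqrt(x)
signal = np.exp(-((T - X / c) / 1e-3) ** 2) / np.sqrt(X)

p = 0.5                                   # theoretical weighting power for R = 1
weighted = signal * x[:, None] ** p       # compensates geometric spreading
spectrum = np.fft.fft2(weighted)          # 2D-FT used for phase velocity analysis
```

After weighting, the per-position peak amplitudes are flat across x, so the 2D-FT energy is no longer skewed toward positions near the source.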
Directory of Open Access Journals (Sweden)
Sergey F Pravdin
Full Text Available We develop a numerical approach based on our recent analytical model of fiber structure in the left ventricle of the human heart. A special curvilinear coordinate system is proposed to analytically include realistic ventricular shape and myofiber directions. With this anatomical model, electrophysiological simulations can be performed on a rectangular coordinate grid. We apply our method to study the effect of fiber rotation and electrical anisotropy of cardiac tissue (i.e., the ratio of the conductivity coefficients along and across the myocardial fibers) on wave propagation using the ten Tusscher-Panfilov (2006) ionic model for human ventricular cells. We show that fiber rotation increases the speed of cardiac activation and attenuates the effects of anisotropy. Our results show that the fiber rotation in the heart is an important factor underlying cardiac excitation. We also study scroll wave dynamics in our model and show the drift of a scroll wave filament whose velocity depends non-monotonically on the fiber rotation angle; the period of scroll wave rotation decreases with an increase of the fiber rotation angle; an increase in anisotropy may cause the breakup of a scroll wave, similar to the mother rotor mechanism of ventricular fibrillation.
Valier-Brasier, Tony; Conoir, Jean-Marc; Coulouvrat, François; Thomas, Jean-Louis
2015-10-01
Sound propagation in dilute suspensions of small spheres is studied using two models: a hydrodynamic model based on the coupled phase equations and an acoustic model based on the ECAH (ECAH: Epstein-Carhart-Allegra-Hawley) multiple scattering theory. The aim is to compare both models through the study of three fundamental kinds of particles: rigid particles, elastic spheres, and viscous droplets. The hydrodynamic model is based on a Rayleigh-Plesset-like equation generalized to elastic spheres and viscous droplets. The hydrodynamic forces for elastic spheres are introduced by analogy with those of droplets. The ECAH theory is also modified in order to take into account the velocity of rigid particles. Analytical calculations performed for long wavelength, low dilution, and weak absorption in the ambient fluid show that both models are strictly equivalent for the three kinds of particles studied. The analytical calculations show that dilatational and translational mechanisms are modeled in the same way by both models. The effective parameters of dilute suspensions are also calculated. PMID:26520342
Karakoylu, E.; Franz, B.
2016-01-01
This is a first attempt at quantifying uncertainties in ocean remote sensing reflectance satellite measurements. It is based on 1000 iterations of a Monte Carlo simulation. The data source is a SeaWiFS 4-day composite from 2003. The uncertainty is for remote sensing reflectance (Rrs) at 443 nm.
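The 1000-iteration Monte Carlo idea can be sketched generically (the forward model, input names, and all numerical values below are made-up stand-ins, not the SeaWiFS processing chain): perturb the uncertain inputs, re-evaluate the model, and take the spread of the outputs as the uncertainty.

```python
import numpy as np

rng = np.random.default_rng(1)
n_iter = 1000                            # matches the abstract's iteration count

# Hypothetical stand-in forward model for Rrs at 443 nm: the true retrieval
# chain is far more complex; this ratio is for illustration only.
def rrs_443(lw, f0):
    return lw / f0                       # water-leaving radiance / solar irradiance

lw_nominal, lw_sigma = 1.2, 0.05         # hypothetical input and 1-sigma uncertainty
f0_nominal, f0_sigma = 190.0, 1.0        # hypothetical input and 1-sigma uncertainty

samples = rrs_443(rng.normal(lw_nominal, lw_sigma, n_iter),
                  rng.normal(f0_nominal, f0_sigma, n_iter))
rrs_mean, rrs_unc = samples.mean(), samples.std(ddof=1)
```

The sample standard deviation agrees with the first-order propagation result (relative uncertainties added in quadrature) because the stand-in model is nearly linear over the perturbation range.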
Post van der Burg, Max; Cullinane Thomas, Catherine; Holcombe, Tracy R.; Nelson, Richard D.
2016-01-01
The Landscape Conservation Cooperatives (LCCs) are a network of partnerships throughout North America that are tasked with integrating science and management to support more effective delivery of conservation at a landscape scale. In order to achieve this integration, some LCCs have adopted the approach of providing their partners with better scientific information in an effort to facilitate more effective and coordinated conservation decisions. Taking this approach has led many LCCs to begin funding research to provide the information for improved decision making. To ensure that funding goes to research projects with the highest likelihood of leading to more integrated broad scale conservation, some LCCs have also developed approaches for prioritizing which information needs will be of most benefit to their partnerships. We describe two case studies in which decision analytic tools were used to quantitatively assess the relative importance of information for decisions made by partners in the Plains and Prairie Potholes LCC. The results of the case studies point toward a few valuable lessons in terms of using these tools with LCCs. Decision analytic tools tend to help shift focus away from research oriented discussions and toward discussions about how information is used in making better decisions. However, many technical experts do not have enough knowledge about decision making contexts to fully inform the latter type of discussion. When assessed in the right decision context, however, decision analyses can point out where uncertainties actually affect optimal decisions and where they do not. This helps technical experts understand that not all research is valuable in improving decision making. But perhaps most importantly, our results suggest that decision analytic tools may be more useful for LCCs as a way of developing integrated objectives for coordinating partner decisions across the landscape, rather than simply ranking research priorities.
Directory of Open Access Journals (Sweden)
B. Scherllin-Pirscher
2011-05-01
Full Text Available Due to the measurement principle of the radio occultation (RO) technique, RO data are highly suitable for climate studies. Single RO profiles can be used to build climatological fields of different atmospheric parameters like bending angle, refractivity, density, pressure, geopotential height, and temperature. RO climatologies are affected by random (statistical) errors, sampling errors, and systematic errors, yielding a total climatological error. Based on empirical error estimates, we provide a simple analytical error model for these error components, which accounts for vertical, latitudinal, and seasonal variations. The vertical structure of each error component is modeled as constant around the tropopause region. Above this region the error increases exponentially; below, the increase follows an inverse height power-law. The statistical error strongly depends on the number of measurements. It is found to be the smallest error component for monthly mean 10° zonal mean climatologies with more than 600 measurements per bin. Owing to the small atmospheric variability, the sampling error is found to be smallest at low latitudes equatorward of 40°. Beyond 40°, this error increases roughly linearly, with a stronger increase in hemispheric winter than in hemispheric summer. The sampling error model accounts for this hemispheric asymmetry. However, we recommend subtracting the sampling error when using RO climatologies for climate research, since the residual sampling error remaining after such subtraction is estimated to be 50 % of the sampling error for bending angle and 30 % or less for the other atmospheric parameters. The systematic error accounts for potential residual biases in the measurements as well as in the retrieval process and generally dominates the total climatological error. Overall the total error in monthly means is estimated to be smaller than 0.07 % in refractivity and 0.15 K in temperature at low to mid latitudes, increasing towards...
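The described vertical structure of an error component can be written as a small piecewise function. All coefficients below (tropopause-region bounds, scale height, power-law exponent, baseline error) are illustrative placeholders, not the paper's fitted values.

```python
import numpy as np

def error_model(z, z_bot=10.0, z_top=15.0, s0=0.1, h_scale=8.0, p=1.5):
    """Vertical error profile per the abstract's structure: constant s0
    between z_bot and z_top (km), exponential growth above, inverse height
    power-law increase below. Coefficients here are hypothetical."""
    z = np.asarray(z, dtype=float)
    above = s0 * np.exp((z - z_top) / h_scale)   # exponential increase aloft
    below = s0 * (z_bot / z) ** p                # grows as z decreases
    return np.where(z > z_top, above, np.where(z < z_bot, below, s0))
```

With these placeholder coefficients, the error is flat in the 10–15 km band, roughly e-folds every 8 km above it, and rises steeply toward the surface.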
Luridiana, Valentina; Aggarwal, Kanti; Bautista, Manuel; Bergemann, Maria; Delahaye, Franck; del Zanna, Giulio; Ferland, Gary; Lind, Karin; Manchado, Arturo; Mendoza, Claudio; Delgado, Adal Mesa; Díaz, Manuel Núñez; Shaw, Richard A; Wesson, Roger
2011-01-01
This workshop brought together scientists (including atomic physicists, theoretical astrophysicists and astronomers) concerned with the completeness and accuracy of atomic data for astrophysical applications. The topics covered in the workshop included the evaluation of uncertainties in atomic data, the propagation of such uncertainties in chemical abundances, and the feedback between observations and calculations. On a different level, we also discussed communication issues such as how to ensure that atomic data are correctly understood and used, and which forum is the best one for a fluid interaction between all communities involved in the production and use of atomic data. This paper reports on the discussions held during the workshop and introduces AstroAtom, a blog created as a platform for timely and open discussions on the needs and concerns over atomic data, and their effects on astronomical research. The complete proceedings will be published on http://astroatom.wordpress.com/.
International Nuclear Information System (INIS)
A new methodology, referred to as manufacturing and technological parameters uncertainty quantification (MTUQ), is under development at Paul Scherrer Institut (PSI). Based on uncertainty and global sensitivity analysis methods, MTUQ aims at advancing the state of the art for the treatment of geometrical/material uncertainties in light water reactor computations, using the MCNPX Monte Carlo neutron transport code. The development is currently focused primarily on criticality safety evaluations (CSE). In that context, the key components are a dedicated modular interface with the MCNPX code and a user-friendly interface to model functional relationships between system variables. A unique feature is an automatic capability to parameterize variables belonging to so-called “repeated structures” so as to allow for perturbations of each individual element of a given system modelled with MCNPX. Concerning the statistical analysis capabilities, these are currently implemented through an interface with the ROOT platform to handle the random sampling design. This paper presents the current status of the MTUQ methodology development and a first assessment of an ongoing Organisation for Economic Co-operation and Development/Nuclear Energy Agency benchmark dedicated to uncertainty analyses for CSE. The presented results illustrate the overall capabilities of MTUQ and underline its relevance in predicting more realistic results compared to a methodology previously applied at PSI for this particular benchmark. (author)
International Nuclear Information System (INIS)
Leaks in pressurized tubes generate acoustic waves that propagate through the walls of these tubes and can be captured by accelerometers or by acoustic emission sensors. Knowledge of how these walls vibrate, or, put another way, how these acoustic waves propagate in this material, is fundamental to the detection and localization of the leak source. In this work an analytic model was implemented, through the equations of motion of a cylindrical shell, with the objective of understanding the behavior of the tube surface excited by a point source. Since the cylindrical surface is closed in the circumferential direction, waves that are beginning their trajectory will meet others that have already completed the turn around the cylindrical shell, in the clockwise as well as the counter-clockwise direction, generating constructive and destructive interference. After enough propagation time, peaks and valleys form on the shell surface, which can be visualized through a graphic representation of the analytic solution. The theoretical results were confirmed by measurements performed on an experimental setup composed of a steel tube terminated in sand boxes, simulating the condition of an infinite tube. To determine the location of the point source on the surface, an inverse solution process was adopted: given the signals from sensors placed on the tube surface, the theoretical model is used to determine where the source that generated them must be. (author)
Koch, Michael
Measurement uncertainty is one of the key issues in quality assurance. It became increasingly important for analytical chemistry laboratories with accreditation to ISO/IEC 17025. The uncertainty of a measurement is the most important criterion for the decision whether a measurement result is fit for purpose. It also helps in deciding whether a specification limit is exceeded or not. Estimation of measurement uncertainty is often not trivial. Several strategies have been developed for this purpose and will be described briefly in this chapter. In addition, the different possibilities for taking the uncertainty into account in compliance assessment are explained.
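One common way to take uncertainty into account in compliance assessment can be sketched as a simple three-way decision rule (this particular rule and the coverage factor k = 2 are a generic illustration, not the chapter's specific recommendation):

```python
def compliance(result, u_std, limit, k=2.0):
    """Classify a result against an upper specification limit using the
    expanded uncertainty U = k * u_std (illustrative decision rule)."""
    U = k * u_std
    if result + U <= limit:
        return "compliant"          # even the upper bound stays below the limit
    if result - U > limit:
        return "non-compliant"      # even the lower bound exceeds the limit
    return "inconclusive"           # the limit lies inside the uncertainty interval
```

For a limit of 10.0 and a standard uncertainty of 0.3, a result of 9.0 is clearly compliant, 11.0 is clearly non-compliant, and 10.2 cannot be decided either way without further measurement or an agreed decision rule.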
Mendoza, Beltran M.A.; Heijungs, R.; Guinée, J.B.; Tukker, A.
2016-01-01
Purpose Despite efforts to treat uncertainty due to methodological choices in life cycle assessment (LCA) such as standardization, one-at-a-time (OAT) sensitivity analysis, and analytical and statistical methods, no method exists that propagates this source of uncertainty for all relevant processes...
Energy Technology Data Exchange (ETDEWEB)
Holland, Michael K. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); O'Rourke, Patrick E. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)
2016-05-04
An SRNL H-Canyon Test Bed performance evaluation project was completed jointly by SRNL and LANL on a prototype monochromatic energy dispersive x-ray fluorescence instrument, the hiRX. A series of uncertainty propagations were generated based upon plutonium and uranium measurements performed using the alpha-prototype hiRX instrument. Data reduction and uncertainty modeling provided in this report were performed by the SRNL authors. Observations and lessons learned from this evaluation were also used to predict the expected uncertainties that should be achievable at multiple plutonium and uranium concentration levels provided instrument hardware and software upgrades being recommended by LANL and SRNL are performed.
Energy Technology Data Exchange (ETDEWEB)
Morales Prieto, M.; Ortega Saiz, P.
2011-07-01
Analysis of the analytical uncertainties of the process-simulation methodology for obtaining the final isotopic inventory of spent fuel; the ARIANE experiment explores the burnup simulation part.
Welton, Nicky J; Ades, A E
2005-01-01
Markov transition models are frequently used to model disease progression. The authors show how the solution to Kolmogorov's forward equations can be exploited to map between transition rates and probabilities from probability data in multistate models. They provide a uniform, Bayesian treatment of estimation and propagation of uncertainty of transition rates and probabilities when 1) observations are available on all transitions and exact time at risk in each state (fully observed data) and 2) observations are on initial state and final state after a fixed interval of time but not on the sequence of transitions (partially observed data). The authors show how underlying transition rates can be recovered from partially observed data using Markov chain Monte Carlo methods in WinBUGS, and they suggest diagnostics to investigate inconsistencies between evidence from different starting states. An illustrative example for a 3-state model is given, which shows how the methods extend to more complex Markov models using the software WBDiff to compute solutions. Finally, the authors illustrate how to statistically combine data from multiple sources, including partially observed data at several follow-up times and also how to calibrate a Markov model to be consistent with data from one specific study. PMID:16282214
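The mapping the authors exploit, from a transition-rate matrix to transition probabilities via the solution of Kolmogorov's forward equations, is P(t) = exp(Qt). A minimal 3-state sketch is below; the disease states and rate values are hypothetical, and the full Bayesian MCMC treatment of the paper is not reproduced, only the deterministic rate-to-probability map.

```python
import numpy as np

def matrix_exp(A, terms=40):
    """Matrix exponential by truncated Taylor series (adequate for small ||A||)."""
    out, term = np.eye(A.shape[0]), np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k      # next Taylor term: A^k / k!
        out = out + term
    return out

# Hypothetical 3-state progression model: well -> ill -> dead (rates per unit time)
Q = np.array([[-0.15,  0.10,  0.05],
              [ 0.00, -0.20,  0.20],
              [ 0.00,  0.00,  0.00]])   # generator: rows sum to zero, 'dead' absorbing

P = matrix_exp(Q * 1.0)                 # transition probabilities over one time unit
```

Each row of P is a proper probability distribution, and the absorbing third state maps to itself with probability one; in the paper's setting, this map is evaluated inside the MCMC so that partially observed interval data constrain the underlying rates.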
Directory of Open Access Journals (Sweden)
Cinzia Caliendo
2015-01-01
Full Text Available The propagation of the fundamental symmetric Lamb mode S0 along wz-BN/AlN thin composite plates suitable for telecommunication and sensing applications is studied. The investigation of the acoustic field profile across the plate thickness revealed the presence of modes having longitudinal polarization, the Anisimkin Jr. plate modes (AMs), travelling at a phase velocity close to that of the wz-BN longitudinal bulk acoustic wave propagating in the same direction. The study of the S0 mode phase velocity and coupling coefficient (K2) dispersion curves, for different electrical boundary conditions, has shown that eight different coupling configurations are allowable that exhibit a K2 as high as about 4% and very high phase velocity (up to about 16,700 m/s). The effect of the thickness and material type of the metal floating electrode on the K2 dispersion curves has also been investigated, specifically addressing the design of an enhanced coupling device. The gravimetric sensitivity of the BN/AlN-based acoustic waveguides was then calculated for both the AMs and elliptically polarized S0 modes; the AM-based sensor velocity and attenuation shifts due to the viscosity of a surrounding liquid were theoretically predicted. The performed investigation suggests that wz-BN/AlN is a very promising substrate material for developing GHz-band devices with enhanced electroacoustic coupling efficiency, suitable for application in the telecommunications and sensing fields.
DEFF Research Database (Denmark)
Heydorn, Kaj; Anglov, Thomas
2002-01-01
Methods recommended by the International Standardization Organisation and Eurachem are not satisfactory for the correct estimation of calibration uncertainty. A novel approach is introduced and tested on actual calibration data for the determination of Pb by ICP-AES. The improved calibration... uncertainty was verified from independent measurements of the same sample by demonstrating statistical control of analytical results and the absence of bias. The proposed method takes into account uncertainties of the measurement, as well as of the amount of calibrant. It is applicable to all types...
Directory of Open Access Journals (Sweden)
Arika Ligmann-Zielinska
Full Text Available Agent-based models (ABMs have been widely used to study socioecological systems. They are useful for studying such systems because of their ability to incorporate micro-level behaviors among interacting agents, and to understand emergent phenomena due to these interactions. However, ABMs are inherently stochastic and require proper handling of uncertainty. We propose a simulation framework based on quantitative uncertainty and sensitivity analyses to build parsimonious ABMs that serve two purposes: exploration of the outcome space to simulate low-probability but high-consequence events that may have significant policy implications, and explanation of model behavior to describe the system with higher accuracy. The proposed framework is applied to the problem of modeling farmland conservation resulting in land use change. We employ output variance decomposition based on quasi-random sampling of the input space and perform three computational experiments. First, we perform uncertainty analysis to improve model legitimacy, where the distribution of results informs us about the expected value that can be validated against independent data, and provides information on the variance around this mean as well as the extreme results. In our last two computational experiments, we employ sensitivity analysis to produce two simpler versions of the ABM. First, input space is reduced only to inputs that produced the variance of the initial ABM, resulting in a model with output distribution similar to the initial model. Second, we refine the value of the most influential input, producing a model that maintains the mean of the output of initial ABM but with less spread. These simplifications can be used to (1) efficiently explore model outcomes, including outliers that may be important considerations in the design of robust policies, and (2) conduct explanatory analysis that exposes the smallest number of inputs influencing the steady state of the modeled system.
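The output variance decomposition used here can be sketched with a pick-freeze estimator of first-order Sobol indices on a toy model; the farmland ABM itself is far more complex and uses quasi-random sampling, so everything below (the linear test model, sample sizes, the Saltelli-style estimator) is an illustrative assumption:

```python
import numpy as np

def first_order_sobol(model, n_inputs, n_samples, rng):
    """Pick-freeze estimate of first-order Sobol indices S_i = V[E[Y|Xi]]/V[Y]."""
    A = rng.standard_normal((n_samples, n_inputs))
    B = rng.standard_normal((n_samples, n_inputs))
    yA, yB = model(A), model(B)
    var_y = np.concatenate([yA, yB]).var()
    indices = []
    for i in range(n_inputs):
        ABi = A.copy()
        ABi[:, i] = B[:, i]           # "freeze" all inputs except column i
        # Saltelli (2010) estimator of the first-order effect of input i
        indices.append(np.mean(yB * (model(ABi) - yA)) / var_y)
    return np.array(indices)

# Toy additive model Y = X1 + 2*X2 with unit-variance inputs:
# analytic indices are S1 = 1/5 and S2 = 4/5.
model = lambda X: X[:, 0] + 2.0 * X[:, 1]
rng = np.random.default_rng(42)
S = first_order_sobol(model, 2, 200_000, rng)
```

For an additive model like this the indices sum to one; a shortfall from one in a real ABM signals interaction effects that first-order indices alone cannot attribute.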
Cui, Linyan; Xue, Bindang; Zhou, Fugen
2013-11-01
The effects of moderate-to-strong non-Kolmogorov turbulence on the angle of arrival (AOA) fluctuations for plane and spherical waves are investigated in detail both analytically and numerically. New analytical expressions for the variance of AOA fluctuations are derived for moderate-to-strong non-Kolmogorov turbulence. The new expressions cover a wider range of non-Kolmogorov turbulence strength and reduce correctly to previously published analytic expressions for the cases of plane and spherical wave propagation through both weak non-Kolmogorov turbulence and moderate-to-strong Kolmogorov turbulence. The final results indicate that, as turbulence strength becomes greater, the expressions developed with the Rytov theory deviate from those given in this work. This deviation becomes greater with stronger turbulence, up to moderate-to-strong turbulence strengths. Furthermore, the general spectral power law has a significant influence on the variance of AOA fluctuations in non-Kolmogorov turbulence. These results are useful for understanding the potential impact of deviations from the standard Kolmogorov spectrum.
Hu, Huayu
2015-01-01
Nonperturbative calculation of QED processes in a strong electromagnetic field, such as those provided by strong laser facilities at present and in the near future, generally resorts to the Furry picture, using analytical solutions of the particle dynamical equation, such as the Klein-Gordon equation and the Dirac equation. However, only for limited field configurations, such as a plane-wave field, can the equations be solved analytically. Studies have shown significant interest in QED processes in a strong field composed of two counter-propagating laser waves, but exact solutions in such a field are out of reach. In this paper, inspired by the structure of the solutions in a plane-wave field, we develop a new method and obtain the analytical solution of the Klein-Gordon equation, and equivalently the action function of the solution of the Dirac equation, in this field, under a largest-dynamical-parameter condition that there exists an inertial frame in which the particl...
Borecka, Marta; Białk-Bielińska, Anna; Siedlewicz, Grzegorz; Kornowska, Kinga; Kumirska, Jolanta; Stepnowski, Piotr; Pazdro, Ksenia
2013-08-23
Although an uncertainty estimate should be a necessary component of an analytical result, the presentation of measurements together with their uncertainties is still a serious problem, especially in the monitoring of pharmaceuticals in the environment. Here we discuss the estimation of expanded uncertainty in analytical procedures for determining residues of twelve pharmaceuticals in seawater using solid-phase extraction (SPE) with H2O-Philic BAKERBOND speed disks and liquid chromatography coupled with tandem mass spectrometry (LC-MS/MS). Matrix effects, extraction efficiency and the absolute recovery of the developed analytical method were determined. A validation was performed to establish the method's linearity, precision, accuracy, and limits of detection (LODs) and quantification (LOQs). The expanded uncertainty of the data obtained was estimated according to the Guide to the Expression of Uncertainty in Measurement and the ISO 17025:2005 standard. We applied our method to the analysis of drugs in seawater samples from the coastal area of the southern Baltic Sea. As a result, a new approach (concerning uncertainty estimation as well as the development of the analytical method) to the analysis of pharmaceutical residues in environmental samples is presented. The information given here should facilitate the introduction of uncertainty estimation in chromatographic measurements on a much greater scale than is currently the case. PMID:23885670
Plósz, Benedek Gy; De Clercq, Jeriffa; Nopens, Ingmar; Benedetti, Lorenzo; Vanrolleghem, Peter A
2011-01-01
In WWTP models, the accurate assessment of solids inventory in bioreactors equipped with solid-liquid separators, mostly described using one-dimensional (1-D) secondary settling tank (SST) models, is the most fundamental requirement of any calibration procedure. Scientific knowledge on characterising particulate organics in wastewater and on bacteria growth is well-established, whereas 1-D SST models and their impact on biomass concentration predictions are still poorly understood. A rigorous assessment of two 1-D SST models is thus presented: one based on hyperbolic (the widely used Takács model) and one based on parabolic (the more recently presented Plósz model) partial differential equations. The former model, using numerical approximation to yield realistic behaviour, is currently the most widely used by wastewater treatment process modellers. The latter is a convection-dispersion model that is solved in a numerically sound way. First, the explicit dispersion in the convection-dispersion model and the numerical dispersion for both SST models are calculated. Second, simulation results of effluent suspended solids concentration (XTSS,Eff), sludge recirculation stream (XTSS,RAS) and sludge blanket height (SBH) are used to demonstrate the distinct behaviour of the models. A thorough scenario analysis is carried out using SST feed flow rate, solids concentration, and overflow rate as degrees of freedom, spanning a broad loading spectrum. A comparison between the measurements and the simulation results demonstrates a considerably improved 1-D model realism using the convection-dispersion model in terms of SBH, XTSS,RAS and XTSS,Eff. Third, to assess the propagation of uncertainty derived from settler model structure to the biokinetic model, the impact of the SST model as sub-model in a plant-wide model on the general model performance is evaluated. A long-term simulation of a bulking event is conducted that spans temperature evolution throughout a summer
Casse, C.; Gosset, M.; Peugeot, C.; Boone, A.; Pedinotti, V.
2013-12-01
The use of satellite-based rainfall in research or operational hydrological applications is becoming more and more frequent. This is especially true in the tropics, where ground-based gauge (or radar) networks are generally scarce and often degrading. A member of the GPM constellation, the new French-Indian satellite mission Megha-Tropiques (MT), dedicated to the water and energy budget in the tropical atmosphere, contributes to a better monitoring of rainfall in the inter-tropical zone. As part of this mission, research is developed on the use of MT rainfall products for hydrological research or operational applications such as flood monitoring. A key issue for such applications is how to account for rainfall product biases and uncertainties, and how to propagate them into the end-user models. Another important question is how to choose the best space-time resolution for the rainfall forcing, given that both model performance and rain-product uncertainties are resolution dependent. This talk will present ongoing investigations and perspectives on this subject, with examples from the Megha-Tropiques ground-validation sites in West Africa. The CNRM model Surfex-ISBA/TRIP has been set up to simulate the hydrological behavior of the Niger River. This modeling set-up is being used to study the predictability of Niger flood events in the city of Niamey and the performance of satellite rainfall products as forcing for such predictions. One of the interesting features of the Niger outflow in Niamey is its double peak: a first peak attributed to 'local' rainfall falling in small- to medium-size basins situated in the region of Niamey, and a second peak linked to the rainfall falling in the upper part of the river and slowly propagating through the river towards Niamey. The performance of rainfall products is found to differ between the wetter/upper part of the basin and the local/Sahelian areas. Both academic tests with artificially biased or 'perturbed' rain fields and actual
Energy Technology Data Exchange (ETDEWEB)
Romero-Garcia, V [Instituto de Ciencia de Materiales de Madrid, Consejo Superior de Investigaciones Cientificas (Spain); Sanchez-Perez, J V [Centro de Tecnologias Fisicas: Acustica, Materiales y Astrofisica, Universidad Politecnica de Valencia (Spain); Garcia-Raffi, L M, E-mail: virogar1@gmail.com [Instituto Universitario de Matematica Pura y Aplicada, Universidad Politecnica de Valencia (Spain)
2011-07-06
The use of sonic crystals (SCs) as environmental noise barriers has certain advantages, from both the acoustical and the constructive points of view, over conventional barriers. However, the interaction between SCs and the ground has not been studied yet. In this work we report a semi-analytical model, based on multiple scattering theory and on the method of images, to study this interaction, considering the ground as a finite-impedance surface. The results obtained here show that this model could be used to design more effective noise barriers based on SCs, because the excess attenuation of the ground could be modelled in order to improve the attenuation properties of the array of scatterers. The results are compared with experimental data and numerical predictions, and good agreement is found between them.
Energy Technology Data Exchange (ETDEWEB)
Vršnak, B.; Žic, T.; Dumbović, M. [Hvar Observatory, Faculty of Geodesy, University of Zagreb, Kačićeva 26, HR-10000 Zagreb (Croatia); Temmer, M.; Möstl, C.; Veronig, A. M. [Kanzelhöhe Observatory—IGAM, Institute of Physics, University of Graz, Universitätsplatz 5, A-8010 Graz (Austria); Taktakishvili, A.; Mays, M. L. [NASA Goddard Space Flight Center, Greenbelt, MD 20771 (United States); Odstrčil, D., E-mail: bvrsnak@geof.hr, E-mail: tzic@geof.hr, E-mail: mdumbovic@geof.hr, E-mail: manuela.temmer@uni-graz.at, E-mail: christian.moestl@uni-graz.at, E-mail: astrid.veronig@uni-graz.at, E-mail: aleksandre.taktakishvili-1@nasa.gov, E-mail: m.leila.mays@nasa.gov, E-mail: dusan.odstrcil@nasa.gov [George Mason University, Fairfax, VA 22030 (United States)
2014-08-01
Real-time forecasting of the arrival of coronal mass ejections (CMEs) at Earth, based on remote solar observations, is one of the central issues of space-weather research. In this paper, we compare arrival-time predictions calculated applying the numerical "WSA-ENLIL+Cone model" and the analytical "drag-based model" (DBM). Both models use coronagraphic observations of CMEs as input data, thus providing an early space-weather forecast two to four days before the arrival of the disturbance at the Earth, depending on the CME speed. It is shown that both methods give very similar results if the drag parameter Γ = 0.1 is used in DBM in combination with a background solar-wind speed of w = 400 km s⁻¹. For this combination, the mean difference between arrival times calculated by ENLIL and DBM is 0.09 ± 9.0 hr, with a mean absolute difference of 7.1 hr. Comparing the observed arrivals (O) with the calculated ones (C) gives O – C = –0.3 ± 16.9 hr for ENLIL and, analogously, O – C = +1.1 ± 19.1 hr for DBM. Applying Γ = 0.2 with w = 450 km s⁻¹ in DBM, one finds O – C = –1.7 ± 18.3 hr, with a mean absolute difference of 14.8 hr, similar to that for ENLIL, 14.1 hr. Finally, we demonstrate that the prediction accuracy significantly degrades with increasing solar activity.
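What makes the DBM forecast analytically cheap is that the drag equation of motion, a = -Γ(v - w)|v - w|, has a closed-form solution for v0 > w. A sketch with illustrative parameter values (the take-off speed, distance and bracketing interval below are assumptions, not the fitted values from the study):

```python
import numpy as np
from scipy.optimize import brentq

AU = 1.496e8          # km
R_SUN = 6.957e5       # km
gamma = 0.1e-7        # drag parameter in km^-1 (the paper's Gamma = 0.1, in units of 1e-7 km^-1)
w = 400.0             # background solar-wind speed, km/s
v0 = 1000.0           # CME take-off speed, km/s (illustrative)
r0 = 20.0 * R_SUN     # take-off heliocentric distance, km (illustrative)

def speed(t):
    # closed-form DBM speed for v0 > w
    return w + (v0 - w) / (1.0 + gamma * (v0 - w) * t)

def distance(t):
    # closed-form DBM heliocentric distance
    return r0 + w * t + np.log(1.0 + gamma * (v0 - w) * t) / gamma

# transit time to 1 AU, found by root-finding on r(t) = 1 AU
t_arrival = brentq(lambda t: distance(t) - AU, 1e4, 1e6)   # seconds
v_arrival = speed(t_arrival)
```

With these numbers the transit time lands in the two-to-four-day window quoted in the abstract, and the arrival speed has decayed from v0 toward the ambient wind speed w, as the drag formulation requires.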
Evaluation of Measurement Uncertainty in Neutron Activation Analysis using Research Reactor
Energy Technology Data Exchange (ETDEWEB)
Chung, Y. S.; Moon, J. H.; Sun, G. M.; Kim, S. H.; Baek, S. Y.; Lim, J. M.; Lee, Y. N.; Kim, H. R
2007-02-15
This report summarizes the general and technical requirements, methods, and results of the measurement uncertainty assessment that should be performed to maintain quality assurance and traceability in the NAA technique using the HANARO research reactor. It will be used as basic information to effectively support accredited analytical services in the future. For the assessment of measurement uncertainty, environmental certified reference materials are analysed and the results evaluated using the ISO-GUM and Monte Carlo simulation (MCS) methods. First, the standard uncertainty of the predominant parameters in NAA is evaluated for the measured element values; the combined uncertainty is then calculated by applying the rule of uncertainty propagation. In addition, the contribution of each standard uncertainty to the combined uncertainty is estimated, and ways to minimize them are reviewed.
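For a purely multiplicative measurement model, the rule of uncertainty propagation mentioned above reduces to adding relative standard uncertainties in quadrature, and the result can be cross-checked by Monte Carlo, mirroring the report's ISO-GUM versus MCS comparison. All numerical values below are invented for illustration:

```python
import numpy as np

# NAA-style comparator model: C = (A_s / A_ref) * C_ref * (m_ref / m_s)
# (count rates A, reference concentration C_ref, masses m); illustrative values.
values = {"A_s": 5.2e4, "A_ref": 4.8e4, "C_ref": 10.0, "m_ref": 0.100, "m_s": 0.105}
rel_u  = {"A_s": 0.010, "A_ref": 0.010, "C_ref": 0.015, "m_ref": 0.001, "m_s": 0.001}

C = values["A_s"] / values["A_ref"] * values["C_ref"] * values["m_ref"] / values["m_s"]
# GUM first-order propagation: relative uncertainties add in quadrature
u_rel = np.sqrt(sum(u ** 2 for u in rel_u.values()))
u_C = C * u_rel

# Monte Carlo check of the same budget
rng = np.random.default_rng(0)
draws = {k: rng.normal(v, v * rel_u[k], 100_000) for k, v in values.items()}
C_mc = draws["A_s"] / draws["A_ref"] * draws["C_ref"] * draws["m_ref"] / draws["m_s"]
```

The quadrature sum also shows at a glance which parameter dominates the combined uncertainty (here the 1.5% reference-concentration term), which is exactly the contribution analysis the report describes.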
Error propagation in polarimetric demodulation
Ramos, A Asensio
2008-01-01
The polarization analysis of light is typically carried out using modulation schemes. Light of unknown polarization state is passed through a set of known modulation optics, and a detector measures the total intensity passing through the system. The modulation optics is modified several times and, with the aid of these several measurements, the unknown polarization state of the light can be inferred. How to find the optimal demodulation process has been investigated in the past. However, since the modulation matrix has to be measured for a given instrument and the optical elements can present problems of repeatability, some uncertainty is present in the elements of the modulation matrix and/or covariances between these elements. We analyze this issue in detail, presenting analytical formulae for calculating the covariance matrix produced by the propagation of such uncertainties onto the demodulation matrix, the inferred Stokes parameters and the efficiency of the modulation process. We demonstrate...
Barbara Mickowska; Anna Sadowska-Rociek; Ewa Cieślik
2013-01-01
The aim of this study was to assess the importance of validation and uncertainty estimation for the results of amino acid analysis using ion-exchange chromatography with post-column derivatization. The method was validated, and the components of standard uncertainty were identified and quantified to recognize the major contributions to the uncertainty of the analysis. The estimated relative expanded uncertainty (k=2, P=95%) ranged from 9.03% to 12.68%. Quantification of the u...
Leśniewska, Barbara; Kisielewska, Katarzyna; Wiater, Józefa; Godlewska-Żyłkiewicz, Beata
2016-01-01
A new fast method for the determination of mobile zinc fractions in soil is proposed in this work. The three-stage modified BCR procedure used for fractionation of zinc in soil was accelerated by using ultrasound. The working parameters of the ultrasound probe, sonication power and time, were optimized so that the analyte content of soil extracts obtained by ultrasound-assisted sequential extraction (USE) was consistent with that obtained by the conventional modified Community Bureau of Reference (BCR) procedure. The zinc content of the extracts was determined by flame atomic absorption spectrometry. The developed USE procedure shortened the total extraction time from 48 h to 27 min in comparison with the conventional modified BCR procedure. The method was fully validated, and the uncertainty budget was evaluated. The trueness and reproducibility of the developed method were confirmed by analysis of the certified reference material of lake sediment BCR-701. The applicability of the procedure for fast, low-cost and reliable determination of the mobile zinc fraction in soil, which may be useful for assessing anthropogenic impacts on natural resources and for environmental monitoring purposes, was proved by analysis of different types of soil collected from Podlaskie Province (Poland). PMID:26666658
International Nuclear Information System (INIS)
More than 150 researchers and engineers from universities and industry met to discuss new methodologies for assessing uncertainty. About 20 papers were presented; the main topics were methods for studying the propagation of uncertainties, sensitivity analysis, nuclear data covariances and multi-parameter optimisation. This report gathers the contributions of CEA researchers and engineers
Development of the Calculation Module for Uncertainty of Internal Dose Coefficients
International Nuclear Information System (INIS)
Because the ICRP (International Commission on Radiological Protection) provides the coefficients as point values without uncertainties, it is important to understand the sources of uncertainty in their derivation. When internal dose coefficients are calculated, numerous factors are involved, such as transfer rates in biokinetic models, absorption rates and deposition in the respiratory tract model, fractional absorption in the alimentary tract model, absorbed fractions (AF), nuclide information and organ mass. Each of these factors has its own uncertainty, which propagates into the uncertainty of the internal dose coefficients. Since the calculation of internal dose coefficients is somewhat complicated, it is difficult to propagate each uncertainty analytically. In this study, we developed a calculation module for the uncertainty of internal dose coefficients; the module was developed, and the calculations performed, in MATLAB. In this module, the uncertainties of the various factors used to calculate the internal dose coefficient can be considered using the Monte Carlo sampling method. After developing the module, we calculated the internal dose coefficient for inhalation of 90Sr with its uncertainty and obtained the distribution and percentile values. It is expected that this study will contribute greatly to uncertainty research in internal dosimetry. In the future, we will update the module to consider more uncertainties
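The Monte Carlo sampling idea can be sketched by treating the dose coefficient as a product of uncertain factors, each lognormal with its own geometric standard deviation (GSD); this is a Python illustration of the approach (the report's module is written in MATLAB, and every number below is invented):

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical multiplicative factors: biokinetic transfer, absorbed
# fraction, inverse organ mass; medians and GSDs are illustrative only.
medians = {"transfer": 2.0e-2, "absorbed_fraction": 0.30, "per_mass": 1.0 / 0.31}
gsd     = {"transfer": 1.8,    "absorbed_fraction": 1.3,  "per_mass": 1.1}

n = 200_000
coeff = np.ones(n)
for k in medians:
    coeff *= rng.lognormal(np.log(medians[k]), np.log(gsd[k]), n)

p5, p50, p95 = np.percentile(coeff, [5, 50, 95])
# Analytic check: a product of independent lognormals is lognormal with
# ln(GSD_c)^2 = sum_i ln(GSD_i)^2
gsd_c = np.exp(np.sqrt(sum(np.log(g) ** 2 for g in gsd.values())))
```

The percentile outputs (p5, p50, p95) are the kind of distributional summary the study reports for the 90Sr inhalation coefficient, and the analytic combined GSD provides a cheap sanity check on the sampled spread.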
Efficient Quantification of Uncertainties in Complex Computer Code Results Project
National Aeronautics and Space Administration — Propagation of parameter uncertainties through large computer models can be very resource intensive. Frameworks and tools for uncertainty quantification are...
Conroy, Charlie; White, Martin
2008-01-01
The stellar masses, mean ages, metallicities, and star formation histories of galaxies are now commonly estimated via stellar population synthesis (SPS) techniques. SPS relies on stellar evolution calculations from the main sequence to stellar death, stellar spectral libraries, phenomenological dust models, and stellar initial mass functions (IMFs). The present work is the first in a series that explores the impact of uncertainties in key phases of stellar evolution and the IMF on the derived physical properties of galaxies and the expected luminosity evolution for a passively evolving set of stars. A Monte-Carlo Markov-Chain approach is taken to fit near-UV through near-IR photometry of a representative sample of low- and high-redshift galaxies with this new SPS model. Significant results include the following: 1) including uncertainties in stellar evolution, stellar masses at z~0 carry errors of ~0.3 dex at 95% CL with little dependence on luminosity or color, while at z~2, the masses of bright red galaxies...
Rundel, R. D.; Butler, D. M.; Stolarski, R. S.
1978-01-01
The paper discusses the development of a concise stratospheric model which uses iteration to obtain coupling between interacting species. The one-dimensional, steady-state, diurnally-averaged model generates diffusion equations with appropriate sources and sinks for species odd oxygen, H2O, H2, CO, N2O, odd nitrogen, CH4, CH3Cl, CCl4, CF2Cl2, CFCl3, and odd chlorine. The model evaluates steady-state perturbations caused by injections of chlorine and NO(x) and may be used to predict ozone depletion. The model is used in a Monte Carlo study of the propagation of reaction-rate imprecisions by calculating an ozone perturbation caused by the addition of chlorine. Since the model is sensitive to only 10 of the more than 50 reaction rates considered, only about 1000 Monte Carlo cases are required to span the space of possible results.
Directory of Open Access Journals (Sweden)
S. Bönisch
2004-02-01
Full Text Available The objectives of this work were to use indicator kriging to spatialize soil properties expressed by categorical attributes, to generate a representation accompanied by a spatial measure of uncertainty, and to model uncertainty propagation in map algebra by means of Boolean procedures. The attributes studied were exchangeable potassium (K) and aluminum (Al) contents, sum of bases (SB), cation exchange capacity (CEC), base saturation (V), texture (Tx), and classes of relief (RC), effective soil depth, internal drainage, and stoniness and/or rockiness, extracted from 222 pedologic profiles and 219 extra samples of soils of the State of Santa Catarina, Brazil. The spatialized uncertainties evidenced the spatial variability of the data, which was related to the origin of the samples and the behavior of each attribute. The attributes SB, V, K, and RC presented a higher degree of uncertainty than Tx and CEC, and uncertainty increased when categorical representations were integrated.
Energy Technology Data Exchange (ETDEWEB)
Andres, T.H
2002-05-01
This guide applies to the estimation of uncertainty in quantities calculated by scientific, analysis and design computer programs that fall within the scope of AECL's software quality assurance (SQA) manual. The guide weaves together rational approaches from the SQA manual and three other diverse sources: (a) the CSAU (Code Scaling, Applicability, and Uncertainty) evaluation methodology; (b) the ISO Guide for the Expression of Uncertainty in Measurement; and (c) the SVA (Systems Variability Analysis) method of risk analysis. This report describes the manner by which random and systematic uncertainties in calculated quantities can be estimated and expressed. Random uncertainty in model output can be attributed to uncertainties of inputs. The propagation of these uncertainties through a computer model can be represented in a variety of ways, including exact calculations, series approximations and Monte Carlo methods. Systematic uncertainties emerge from the development of the computer model itself, through simplifications and conservatisms, for example. These must be estimated and combined with random uncertainties to determine the combined uncertainty in a model output. This report also addresses the method by which uncertainties should be employed in code validation, in order to determine whether experiments and simulations agree, and whether or not a code satisfies the required tolerance for its application. (author)
Verification of uncertainty budgets
DEFF Research Database (Denmark)
Heydorn, Kaj; Madsen, B.S.
2005-01-01
The quality of analytical results is expressed by their uncertainty, as it is estimated on the basis of an uncertainty budget; little effort is, however, often spent on ascertaining the quality of the uncertainty budget. The uncertainty budget is based on circumstantial or historical data, and therefore it is essential that the applicability of the overall uncertainty budget to actual measurement results be verified on the basis of current experimental data. This should be carried out by replicate analysis of samples taken in accordance with the definition of the measurand, but representing the full range of matrices and concentrations for which the budget is assumed to be valid. In this way the assumptions made in the uncertainty budget can be experimentally verified, both as regards sources of variability that are assumed negligible, and dominant uncertainty components. Agreement between...
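One common way to confront replicate results with a stated uncertainty budget is a chi-square comparison of the observed dispersion with the budgeted standard uncertainty; the sketch below illustrates that idea under assumed data (it is not necessarily the authors' exact verification procedure, and the replicate values and budget are invented):

```python
import numpy as np
from scipy.stats import chi2

def budget_verified(replicates, u_budget, alpha=0.05):
    """Check whether replicate dispersion is consistent with a budgeted u.

    Computes T = sum((x_i - xbar)^2) / u^2 and compares it with the
    central (1 - alpha) interval of a chi-square with n-1 degrees of
    freedom; one way to demonstrate 'statistical control' of results.
    """
    x = np.asarray(replicates, dtype=float)
    n = x.size
    T = np.sum((x - x.mean()) ** 2) / u_budget ** 2
    lo, hi = chi2.ppf([alpha / 2, 1 - alpha / 2], df=n - 1)
    return lo <= T <= hi

# Five hypothetical replicate results with a budgeted u = 0.05 (same units)
ok = budget_verified([10.02, 9.95, 10.08, 9.98, 10.04], 0.05)
```

A T value below the lower quantile is also informative: it suggests the budget overstates the variability, i.e. a dominant component was over-estimated.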
Samaniego, L. E.; Kumar, R.; Schaefer, D.; Huang, S.; Yang, T.; Mishra, V.; Eisner, S.; Vetter, T.; Pechlivanidis, I.; Liersch, S.; Flörke, M.; Krysanova, V.
2015-12-01
Droughts are creeping hydro-meteorological events that bring societies and natural systems to their limits, inducing considerable socio-economic losses. It is currently hypothesized that climate change will exacerbate current trends, leading to more severe and extended droughts, as well as longer-than-normal recovery periods. Current assessments, however, lack a consistent framework to deal with compatible initial conditions for the impact models and a set of standardized historical and future forcings. The ISI-MIP project provides a unique opportunity to understand the propagation of model and forcing uncertainty into century-long time series of drought characteristics using an ensemble of model predictions across a broad range of climate scenarios and regions. In the present study, we analyze this issue using the hydrologic simulations carried out with HYPE, mHM, SWIM, VIC, and WaterGAP3 in seven large continental river basins: Amazon, Blue Nile, Ganges, Niger, Mississippi, Rhine, and Yellow. All models are calibrated against observed streamflow during the period 1971-2001 using the same forcings, based on the WATCH data sets. These constrained models were then forced with bias-corrected outputs of five CMIP-5 GCMs under four RCP scenarios (i.e. 2.6, 4.5, 6.0, and 8.5 W/m2) for the period 1971-2099. A non-parametric kernel density approach is used to estimate the temporal evolution of a monthly runoff index based on simulated streamflow. Hydrologic simulations corresponding to each GCM during the historic period 1981-2010 serve as reference for the estimation of the basin-specific monthly probability distribution functions. GCM-specific reference pdfs are then used to recast the future hydrologic model outputs from the different RCP scenarios. Based on these results, drought severity and duration are investigated during three periods: 1) 2006-2035, 2) 2036-2065 and 3) 2070-2099. Two main hypotheses are investigated: 1) model predictive uncertainty of drought indices among different hydrologic
Energy Technology Data Exchange (ETDEWEB)
Barrado, A. I.; Garcia, S.; Perez, R. M.
2013-06-01
This paper presents an evaluation of the uncertainty associated with the analytical measurement of eighteen polycyclic aromatic compounds (PACs) in ambient air by liquid chromatography with fluorescence detection (HPLC/FD). The study focused on analyses of the PM{sub 10}, PM{sub 2.5} and gas phase fractions. The main analytical uncertainty was estimated for eleven polycyclic aromatic hydrocarbons (PAHs), four nitro polycyclic aromatic hydrocarbons (nitro-PAHs) and two hydroxy polycyclic aromatic hydrocarbons (OH-PAHs), based on the analytical determination, reference material analysis and extraction step. The main contributions reached 15-30% and came from the extraction of real ambient samples, with those for nitro-PAHs being the highest (20-30%). Ranges and mean values of PAC mass concentrations measured in the gas phase and PM{sub 10}/PM{sub 2.5} particle fractions during a full year are also presented. Concentrations of OH-PAHs were about 2-4 orders of magnitude lower than those of their parent PAHs and comparable to those sparsely reported in the literature. (Author) 7 refs.
Bursik, Marcus; Jones, Matthew; Carn, Simon; Dean, Ken; Patra, Abani; Pavolonis, Michael; Pitman, E. Bruce; Singh, Tarunraj; Singla, Puneet; Webley, Peter; Bjornsson, Halldor; Ripepe, Maurizio
2012-12-01
Data on source conditions for the 14 April 2010 paroxysmal phase of the Eyjafjallajökull eruption, Iceland, have been used as inputs to a trajectory-based eruption column model, bent. This model has in turn been adapted to generate output suitable as input to the volcanic ash transport and dispersal model, puff, which was used to propagate the paroxysmal ash cloud toward and over Europe over the following days. Some of the source parameters, specifically vent radius, vent source velocity, mean grain size of ejecta, and standard deviation of ejecta grain size have been assigned probability distributions based on our lack of knowledge of exact conditions at the source. These probability distributions for the input variables have been sampled in a Monte Carlo fashion using a technique that yields what we herein call the polynomial chaos quadrature weighted estimate (PCQWE) of output parameters from the ash transport and dispersal model. The advantage of PCQWE over Monte Carlo is that since it intelligently samples the input parameter space, fewer model runs are needed to yield estimates of moments and probabilities for the output variables. At each of these sample points for the input variables, a model run is performed. Output moments and probabilities are then computed by properly summing the weighted values of the output parameters of interest. Use of a computational eruption column model coupled with known weather conditions as given by radiosonde data gathered near the vent allows us to estimate that the initial mass eruption rate on 14 April 2010 may have been as high as 10^8 kg/s and was almost certainly above 10^7 kg/s. This estimate is consistent with the probabilistic envelope computed by PCQWE for the downwind plume. The results furthermore show that statistical moments and probabilities can be computed in a reasonable time by using 9^4 = 6,561 PCQWE model runs as opposed to millions of model runs that might be required by standard Monte Carlo techniques. The
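The PCQWE machinery itself is not reproduced here, but the idea that a quadrature rule extracts output moments from very few model runs can be illustrated with a minimal Gauss-Hermite sketch. The single Gaussian input and the stand-in `model` are assumptions for illustration, not the paper's setup:

```python
import math

# One uncertain input X ~ N(mu, sigma); 'model' stands in for a full
# ash transport-and-dispersal run. Both are illustrative assumptions.
mu, sigma = 0.0, 0.25
model = math.exp

# 3-point Gauss-Hermite rule (physicists' convention): exact for
# polynomial integrands up to degree 5, so three "model runs" already
# give accurate output moments.
nodes = [-math.sqrt(1.5), 0.0, math.sqrt(1.5)]
weights = [math.sqrt(math.pi) / 6.0,
           2.0 * math.sqrt(math.pi) / 3.0,
           math.sqrt(math.pi) / 6.0]

runs = [model(mu + math.sqrt(2.0) * sigma * t) for t in nodes]
mean = sum(w * v for w, v in zip(weights, runs)) / math.sqrt(math.pi)
second = sum(w * v * v for w, v in zip(weights, runs)) / math.sqrt(math.pi)
variance = second - mean ** 2
```

With several inputs, a tensor grid of such nodes gives the kind of run count quoted above (9 nodes per dimension in 4 dimensions is 9^4 = 6,561 runs), still far below typical Monte Carlo sample sizes.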
Facets of Uncertainty in Digital Elevation and Slope Modeling
Institute of Scientific and Technical Information of China (English)
ZHANG Jingxiong; LI Deren
2005-01-01
This paper investigates the differences that result from applying different approaches to uncertainty modeling and reports an experiment examining error estimation and propagation in elevation and slope, with the latter derived from the former. It is confirmed that significant differences exist between uncertainty descriptors, and that propagation of uncertainty to end products is immensely affected by the specification of source uncertainty.
Energy Technology Data Exchange (ETDEWEB)
Thomas, R.E.
1982-03-01
An evaluation is made of the suitability of analytical and statistical sampling methods for making uncertainty analyses. The adjoint method is found to be well-suited for obtaining sensitivity coefficients for computer programs involving large numbers of equations and input parameters. For this purpose the Latin Hypercube Sampling method is found to be inferior to conventional experimental designs. The Latin hypercube method can be used to estimate output probability density functions, but requires supplementary rank transformations followed by stepwise regression to obtain uncertainty information on individual input parameters. A simple Cork and Bottle problem is used to illustrate the efficiency of the adjoint method relative to certain statistical sampling methods. For linear models of the form Ax=b it is shown that a complete adjoint sensitivity analysis can be made without formulating and solving the adjoint problem. This can be done either by using a special type of statistical sampling or by reformulating the primal problem and using suitable linear programming software.
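The observation that a linear model Ax=b admits a complete adjoint sensitivity analysis with a single extra solve can be shown on a small example. The 2x2 system and the response functional below are hypothetical:

```python
# 2x2 linear model A x = b with a scalar response r = c . x
# (matrix and vectors are illustrative)
A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
c = [1.0, 1.0]

def solve2(M, rhs):
    """Cramer's rule for a 2x2 system M x = rhs."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(rhs[0] * M[1][1] - rhs[1] * M[0][1]) / det,
            (M[0][0] * rhs[1] - M[1][0] * rhs[0]) / det]

def transpose(M):
    return [[M[0][0], M[1][0]], [M[0][1], M[1][1]]]

# Adjoint approach: ONE solve of A^T lam = c yields every sensitivity
# dr/db_i = lam_i, however many inputs b has.
lam = solve2(transpose(A), c)

# Check against a forward perturbation of each input in turn
x0 = solve2(A, b)
r0 = c[0] * x0[0] + c[1] * x0[1]
eps = 1e-6
fd = []
for i in range(2):
    bp = list(b)
    bp[i] += eps
    xp = solve2(A, bp)
    fd.append((c[0] * xp[0] + c[1] * xp[1] - r0) / eps)
```

For a model with thousands of inputs, the forward loop would need thousands of solves while the adjoint route still needs only one.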
Leśniewska, Barbara; Kisielewska, Katarzyna; Wiater, Józefa; Godlewska-Żyłkiewicz, Beata
2015-01-01
A new fast method for the determination of mobile zinc fractions in soil is proposed in this work. The three-stage modified BCR procedure used for fractionation of zinc in soil was accelerated by using ultrasound. The working parameters of the ultrasound probe, the power and the time of sonication, were optimized so that the analyte content of soil extracts obtained by ultrasound-assisted sequential extraction (USE) was consistent with that obtained by the conventional modified Community Burea...
On the IR behaviour of the Landau-gauge ghost propagator
Boucaud, Ph; Le Yaouanc, A; Micheli, J; Pène, O; Rodríguez-Quintero, J
2008-01-01
We examine analytically the ghost propagator Dyson-Schwinger Equation (DSE) in the deep IR regime and prove that a finite ghost dressing function at vanishing momentum is an alternative solution (solution II) to the usually assumed divergent one (solution I). We furthermore find that the Slavnov-Taylor identities discriminate between these two classes of solutions and strongly support the solution II. The latter turns out to be also preferred by lattice simulations within numerical uncertainties.
Calculating uncertainties of safeguards indices: error propagation
International Nuclear Information System (INIS)
Statistical methods play an important role in making inferences about MUF (material unaccounted for), shipper-receiver differences, operator-inspector differences, and other safeguards indices. This session considers the sources and types of measurement errors and treats a specific example to illustrate how the variance of MUF is calculated for a model plant
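A minimal sketch of how the variance of MUF follows from first-order error propagation over the material balance, assuming independent measurement errors. All figures are invented for illustration:

```python
import math

# Material balance: MUF = beginning inventory + receipts
#                         - shipments - ending inventory.
# Measured values (kg), standard uncertainties, and signs are invented.
terms = {
    "beginning_inventory": (100.0, 0.5, +1),
    "receipts":            (250.0, 1.2, +1),
    "shipments":           (240.0, 1.0, -1),
    "ending_inventory":    (108.0, 0.6, -1),
}

muf = sum(sign * value for value, _, sign in terms.values())

# First-order propagation with independent errors: variances add, and
# the +/-1 sensitivity coefficients square to 1.
var_muf = sum(u ** 2 for _, u, _ in terms.values())
sigma_muf = math.sqrt(var_muf)
```

In practice the balance terms share calibration errors, so covariance terms would be added to the variance sum; the independent case above is the simplest instance.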
Fuzzy Uncertainty Evaluation for Fault Tree Analysis
Energy Technology Data Exchange (ETDEWEB)
Kim, Ki Beom; Shim, Hyung Jin [Seoul National University, Seoul (Korea, Republic of); Jae, Moo Sung [Hanyang University, Seoul (Korea, Republic of)
2015-05-15
This traditional probabilistic approach can calculate relatively accurate results. However, it requires a long time because of the repetitive computation of the MC method. In addition, when data for statistical analysis are insufficient or some events are mainly caused by human error, the probabilistic approach may not be possible, because the uncertainties of such events are difficult to express with probability distributions. In order to reduce the computation time and to quantify the uncertainties of top events when such basic events exist, fuzzy uncertainty propagation based on fuzzy set theory can be applied. In this paper, we develop a fuzzy uncertainty propagation code and apply it to the fault tree of the core damage accident following a large loss of coolant accident (LLOCA). The code is implemented and tested on the fault tree of a radiation release accident. We apply it to the fault tree of the core damage accident after the LLOCA in three cases and compare the results with those computed by probabilistic uncertainty propagation using the MC method. The results obtained by fuzzy uncertainty propagation can be calculated in relatively short time and cover the results obtained by probabilistic uncertainty propagation.
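A minimal sketch of fuzzy (alpha-cut) uncertainty propagation through a single AND gate, assuming triangular fuzzy probabilities and independent events. The numbers are illustrative and unrelated to the LLOCA fault tree:

```python
# Triangular fuzzy failure probabilities (low, mode, high) for two basic
# events feeding an AND gate -- illustrative numbers only.
A = (1e-3, 2e-3, 4e-3)
B = (5e-4, 1e-3, 3e-3)

def alpha_cut(tri, alpha):
    """Interval of membership >= alpha for a triangular fuzzy number."""
    lo, mode, hi = tri
    return (lo + alpha * (mode - lo), hi - alpha * (hi - mode))

def and_gate(cut_a, cut_b):
    # Independent events: the top-event interval is the product interval
    return (cut_a[0] * cut_b[0], cut_a[1] * cut_b[1])

# Membership function of the top event at a few alpha levels;
# alpha = 1 recovers the point estimate, alpha = 0 the full support.
top = {a: and_gate(alpha_cut(A, a), alpha_cut(B, a))
       for a in (0.0, 0.5, 1.0)}
```

Each alpha level needs only interval arithmetic over the tree, which is why the fuzzy route avoids the repetitive sampling of the MC method.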
Flood modelling: Parameterisation and inflow uncertainty
Mukolwe, M.M.; Di Baldassarre, G.; Werner, M.; Solomatine, D.P.
2014-01-01
This paper presents an analysis of uncertainty in hydraulic modelling of floods, focusing on the inaccuracy caused by inflow errors and parameter uncertainty. In particular, the study develops a method to propagate the uncertainty induced by, firstly, application of a stage–discharge rating curve an
Bartley, David; Lidén, Göran
2008-08-01
The reporting of measurement uncertainty has recently undergone a major harmonization, whereby characteristics of a measurement method obtained during its establishment and application are combined component-wise; the sometimes-pesky systematic error, for example, is included. A bias component of uncertainty can often be established simply as the uncertainty in the bias. However, beyond arriving at a value for uncertainty, meaning can sometimes be given to this uncertainty in terms of prediction confidence in uncertainty-based intervals covering what is to be measured. To this end, a link between the concepts of accuracy and uncertainty is established through a simple yet accurate approximation to a random variable known as the non-central Student's t-distribution. "Without a measureless and perpetual uncertainty, the drama of human life would be destroyed." (Winston Churchill)
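The component-wise combination described above can be sketched as a simple uncertainty budget in which the uncertainty of the bias enters on the same footing as the random components. All component values are hypothetical:

```python
import math

# Hypothetical relative standard uncertainty components for one method
u_random = 0.030   # repeatability (random error)
u_cal    = 0.015   # calibration
u_bias   = 0.020   # uncertainty of the (corrected) bias -- the bias
                   # component is combined like any other

u_combined = math.sqrt(u_random ** 2 + u_cal ** 2 + u_bias ** 2)

# Expanded uncertainty with a normal-approximation coverage factor;
# the non-central-t treatment in the paper refines this step when a
# residual bias remains.
k = 2.0
U_expanded = k * u_combined
```

The root-sum-square step assumes independent components; correlated components would add covariance terms.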
Time-Varying Uncertainty in Shock and Vibration Applications Using the Impulse Response
Directory of Open Access Journals (Sweden)
J.B. Weathers
2012-01-01
Full Text Available Design of mechanical systems often necessitates the use of dynamic simulations to calculate the displacements (and their derivatives) of the bodies in a system as a function of time in response to dynamic inputs. These types of simulations are especially prevalent in the shock and vibration community, where simulations associated with models having complex inputs are routine. If the forcing functions as well as the parameters used in these simulations are subject to uncertainties, then these uncertainties will propagate through the models, resulting in uncertainties in the outputs of interest. The uncertainty analysis procedure for these kinds of time-varying problems can be challenging, and in many instances explicit data reduction equations (DREs), i.e., analytical formulas, are not available because the outputs of interest are obtained from complex simulation software, e.g., FEA programs. Moreover, uncertainty propagation in systems modeled using nonlinear differential equations can prove difficult to analyze. However, if (1) the uncertainties propagate through the models in a linear manner, obeying the principle of superposition, then the complexity of the problem can be significantly simplified. If, in addition, (2) the uncertainty in the model parameters does not change during the simulation and the manner in which the outputs of interest respond to small perturbations in the external input forces does not depend on when the perturbations are applied, then the number of calculations required can be greatly reduced. Conditions (1) and (2) characterize a Linear Time Invariant (LTI) uncertainty model. This paper explains one possible approach to obtaining the uncertainty results based on these assumptions.
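Condition (1), superposition, is what makes the impulse-response route tractable: the response to a perturbed forcing is the nominal response plus the response to the perturbation alone. A minimal discrete-time sketch, with an invented impulse response and forcing:

```python
import math

# Illustrative discrete impulse response of a damped oscillator (LTI)
dt = 0.01
h = [math.exp(-0.5 * i * dt) * math.sin(5.0 * i * dt) * dt
     for i in range(200)]

def convolve(h, f):
    """Discrete convolution y = h * f, truncated to len(f) samples."""
    return [sum(h[k] * f[n - k] for k in range(min(n + 1, len(h))))
            for n in range(len(f))]

f = [1.0 if 10 <= i < 20 else 0.0 for i in range(200)]  # nominal forcing
df = [0.05 * v for v in f]                              # forcing perturbation

# Superposition: convolving once with the perturbation gives the output
# perturbation directly, without re-running the full simulation.
y_direct = convolve(h, [a + b for a, b in zip(f, df)])
y_superposed = [a + b for a, b in zip(convolve(h, f), convolve(h, df))]
```

Because the two routes agree for an LTI system, output uncertainty can be built from one impulse-response characterization plus one convolution per uncertain input component.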
Validity of Parametrized Quark Propagator
Institute of Scientific and Technical Information of China (English)
ZHU Ji-Zhen; ZHOU Li-Juan; MA Wei-Xing
2005-01-01
Based on an extensive study of the Dyson-Schwinger equations for a fully dressed quark propagator in the "rainbow" approximation, a parametrized fully dressed quark propagator is proposed in this paper. The parametrized propagator describes a confining quark propagator in the hadron, since it is analytic everywhere in the complex p2-plane and has no Lehmann representation. The validity of the new propagator is discussed by comparing its predictions for the self-energy functions Af(p2), Bf(p2) and the effective mass Mf(p2) of a quark with flavor f to the corresponding theoretical results produced by the Dyson-Schwinger equations. Our comparison shows that the parametrized quark propagator is a good approximation to the fully dressed quark propagator given by the solutions of the Dyson-Schwinger equations in the rainbow approximation and is convenient to use in theoretical calculations.
Directory of Open Access Journals (Sweden)
N. Eckert
2008-10-01
Full Text Available For snow avalanches, passive defense structures are generally designed by considering high return period events. In this paper, taking inspiration from other natural hazards, an alternative method based on the maximization of the economic benefit of the defense structure is proposed. A general Bayesian framework is described first. Special attention is given to the problem of taking the poor local information into account in the decision-making process. Therefore, simplifying assumptions are made. The avalanche hazard is represented by a Peak Over Threshold (POT) model. The influence of the dam is quantified in terms of runout distance reduction with a simple relation derived from small-scale experiments using granular media. The costs corresponding to dam construction and the damage to the element at risk are roughly evaluated for each dam height-hazard value pair, with damage evaluation corresponding to the maximal expected loss. Both the classical and the Bayesian risk functions can then be computed analytically. The results are illustrated with a case study from the French avalanche database. A sensitivity analysis is performed and modelling assumptions are discussed in addition to possible further developments.
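The benefit-maximization idea can be sketched as an expected-cost minimization over dam height. The exponential overtopping model and every cost figure below are assumptions for illustration, not the paper's POT-based formulation:

```python
import math

# Toy expected-cost minimization over dam height h (metres); the
# overtopping model and all cost figures are invented.
C_PER_M = 10_000.0     # construction cost per metre of dam
DAMAGE = 1_000_000.0   # loss per damaging avalanche
RATE = 0.5             # avalanches per year exceeding the threshold
YEARS = 50             # amortization horizon

def p_overtop(h):
    # Chance a given avalanche runs out past a dam of height h (toy model)
    return math.exp(-0.4 * h)

def total_cost(h):
    expected_damage = DAMAGE * RATE * YEARS * p_overtop(h)
    return C_PER_M * h + expected_damage

# The optimal height balances marginal construction cost against the
# marginal reduction in expected damage.
best_h = min(range(0, 31), key=total_cost)
```

The Bayesian version of the paper replaces the fixed hazard parameters with posterior distributions, so `total_cost` becomes an expectation over the parameter uncertainty as well.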
Estimation of sedimentary proxy records together with associated uncertainty
Directory of Open Access Journals (Sweden)
B. Goswami
2014-06-01
Full Text Available Sedimentary proxy records constitute a significant portion of the recorded evidence that allows us to investigate paleoclimatic conditions and variability. However, uncertainties in the dating of proxy archives limit our ability to fix the timing of past events and interpret proxy record inter-comparisons. While there are various age-modeling approaches to improve the estimation of the age-depth relations of archives, relatively little focus has been given to the propagation of the age (and radiocarbon calibration) uncertainties into the final proxy record. We present a generic Bayesian framework to estimate proxy records along with their associated uncertainty, starting with the radiometric age-depth and proxy-depth measurements, and a radiometric calibration curve if required. We provide analytical expressions for the posterior proxy probability distributions at any given calendar age, from which the expected proxy values and their uncertainty can be estimated. We illustrate our method using two synthetic datasets and then use it to construct the proxy records for groundwater inflow and surface erosion from Lonar lake in central India. Our analysis reveals interrelations between the uncertainty of the proxy record over time and the variance of proxy along the depth of the archive. For the Lonar lake proxies, we show that, rather than the age uncertainties, it is the proxy variance combined with calibration uncertainty that accounts for most of the final uncertainty. We represent the proxy records as probability distributions on a precise, error-free time scale that makes further time series analyses and inter-comparison of proxies relatively simpler and clearer. Our approach provides a coherent understanding of age uncertainties within sedimentary proxy records that involve radiometric dating. It can potentially be used within existing age modeling structures to bring forth a reliable and consistent framework for proxy record estimation.
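As a rough stand-in for the paper's analytical Bayesian posterior, the following Monte Carlo sketch propagates age-model uncertainty into the proxy value at a fixed calendar age. The archive, ages, and proxy values are invented:

```python
import random

random.seed(2)
# Toy archive: proxy measured at five depths, with a radiometric age
# model age(depth) ~ Normal(mu, sd) -- all values invented.
proxy = [1.0, 1.4, 0.9, 1.8, 1.2]
age_mu = [0.0, 100.0, 200.0, 300.0, 400.0]   # calendar years
age_sd = 15.0

def proxy_at(target_age, ages):
    """Linearly interpolate the proxy onto one age-model realisation."""
    for i in range(len(ages) - 1):
        if ages[i] <= target_age <= ages[i + 1]:
            w = (target_age - ages[i]) / (ages[i + 1] - ages[i])
            return proxy[i] * (1 - w) + proxy[i + 1] * w
    return None

# Monte Carlo over age-model realisations gives the distribution of the
# proxy value at a fixed calendar age (here 250 yr).
samples = []
for _ in range(3000):
    ages = sorted(random.gauss(m, age_sd) for m in age_mu)  # keep monotone
    value = proxy_at(250.0, ages)
    if value is not None:
        samples.append(value)
mean_proxy = sum(samples) / len(samples)
```

The paper's framework replaces this sampling with closed-form posterior expressions, and would also include the proxy-measurement variance that the sketch omits.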
Lindley, Dennis V
2013-01-01
Praise for the First Edition ""...a reference for everyone who is interested in knowing and handling uncertainty.""-Journal of Applied Statistics The critically acclaimed First Edition of Understanding Uncertainty provided a study of uncertainty addressed to scholars in all fields, showing that uncertainty could be measured by probability, and that probability obeyed three basic rules that enabled uncertainty to be handled sensibly in everyday life. These ideas were extended to embrace the scientific method and to show how decisions, containing an uncertain element, could be rationally made.
The Uncertainty of Measurement Results
International Nuclear Information System (INIS)
Factors affecting the uncertainty of measurement are explained, basic statistical formulae given, and the theoretical concept explained in the context of pesticide formulation analysis. Practical guidance is provided on how to determine individual uncertainty components within an analytical procedure. An extended and comprehensive table containing the relevant mathematical/statistical expressions elucidates the relevant underlying principles. Appendix I provides a practical elaborated example on measurement uncertainty estimation, above all utilizing experimental repeatability and reproducibility laboratory data. (author)
Quantum dynamics via a time propagator in Wigner's phase space
DEFF Research Database (Denmark)
Grønager, Michael; Henriksen, Niels Engholm
1995-01-01
We derive an expression for a short-time phase space propagator. We use it in a new propagation scheme and demonstrate that it works for a Morse potential. The propagation scheme is used to propagate classical distributions which do not obey the Heisenberg uncertainty principle. It is shown...
An Uncertainty-Aware Approach for Exploratory Microblog Retrieval.
Liu, Mengchen; Liu, Shixia; Zhu, Xizhou; Liao, Qinying; Wei, Furu; Pan, Shimei
2016-01-01
Although there has been a great deal of interest in analyzing customer opinions and breaking news in microblogs, progress has been hampered by the lack of an effective mechanism to discover and retrieve data of interest from microblogs. To address this problem, we have developed an uncertainty-aware visual analytics approach to retrieve salient posts, users, and hashtags. We extend an existing ranking technique to compute a multifaceted retrieval result: the mutual reinforcement rank of a graph node, the uncertainty of each rank, and the propagation of uncertainty among different graph nodes. To illustrate the three facets, we have also designed a composite visualization with three visual components: a graph visualization, an uncertainty glyph, and a flow map. The graph visualization with glyphs, the flow map, and the uncertainty analysis together enable analysts to effectively find the most uncertain results and interactively refine them. We have applied our approach to several Twitter datasets. Qualitative evaluation and two real-world case studies demonstrate the promise of our approach for retrieving high-quality microblog data. PMID:26529705
Impact of discharge data uncertainty on nutrient load uncertainty
Westerberg, Ida; Gustavsson, Hanna; Sonesten, Lars
2016-04-01
Uncertainty in the rating-curve model of the stage-discharge relationship leads to uncertainty in discharge time series. These uncertainties in turn affect many other analyses based on discharge data, such as nutrient load estimations. It is important to understand how large the impact of discharge data uncertainty is on such analyses, since they are often used as the basis to take important environmental management decisions. In the Baltic Sea basin, nutrient load estimates from river mouths are a central information basis for managing and reducing eutrophication in the Baltic Sea. In this study we investigated rating curve uncertainty and its propagation to discharge data uncertainty and thereafter to uncertainty in the load of phosphorous and nitrogen for twelve Swedish river mouths. We estimated rating curve uncertainty using the Voting Point method, which accounts for random and epistemic errors in the stage-discharge relation and allows drawing multiple rating-curve realisations consistent with the total uncertainty. We sampled 40,000 rating curves, and for each sampled curve we calculated a discharge time series from 15-minute water level data for the period 2005-2014. Each discharge time series was then aggregated to daily scale and used to calculate the load of phosphorous and nitrogen from linearly interpolated monthly water samples, following the currently used methodology for load estimation. Finally the yearly load estimates were calculated and we thus obtained distributions with 40,000 load realisations per year - one for each rating curve. We analysed how the rating curve uncertainty propagated to the discharge time series at different temporal resolutions, and its impact on the yearly load estimates. Two shorter periods of daily water quality sampling around the spring flood peak allowed a comparison of load uncertainty magnitudes resulting from discharge data with those resulting from the monthly water quality sampling.
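The sampling scheme described above can be sketched in miniature: draw rating-curve realisations, convert stage to discharge, and accumulate a load per realisation. The power-law curve, parameter spreads, and constant concentration below are simplifying assumptions, not the Voting Point method:

```python
import math
import random

random.seed(1)
# Daily stage record (m) for one year -- synthetic, for illustration
stages = [0.8 + 0.4 * abs(math.sin(0.05 * i)) for i in range(365)]
CONC = 0.05          # nutrient concentration, kg/m3 (held constant here)

# Power-law rating curve Q = a * (H - h0)^b with uncertain a and b;
# the spreads stand in for a full rating-curve uncertainty analysis.
loads = []
for _ in range(2000):                      # one sampled curve per realisation
    a = random.gauss(10.0, 1.0)
    b = random.gauss(1.6, 0.1)
    q = [a * max(h - 0.2, 0.0) ** b for h in stages]       # m3/s
    loads.append(sum(qi * CONC * 86400.0 for qi in q))     # kg/year

loads.sort()
median = loads[len(loads) // 2]
p05 = loads[int(0.05 * len(loads))]
p95 = loads[int(0.95 * len(loads))]
```

The resulting percentile spread per year is the quantity compared in the study against the uncertainty contributed by the monthly water quality sampling.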
Institute of Scientific and Technical Information of China (English)
孙可明; 张树翠
2016-01-01
Shale gas is deposited in shale reservoirs, whose bedding structures differ from those of conventional heterogeneous reservoirs. This also gives shale reservoirs different crack propagation laws in hydraulic fracturing. In order to investigate the crack propagation laws of shale reservoir hydraulic fracturing, complex variable functions and conformal transformation are adopted here to deduce solutions for the crack-tip stress concentration. Fracture propagation criteria are put forward for the case in which a hydraulic fracture perpendicular to the minimum in-situ stress meets oblique bedding, by considering the heterogeneity and strength anisotropy of shale reservoirs and comparing the fluid pressure that satisfies fracture propagation in each direction. In order to show how difficult it is for the hydraulic fracture to turn into the bedding, critical strength ratios of bedding and rock mass are defined for the moments when the fracture just initiates in bedding and propagates along bedding, respectively. Based on these two critical strength ratios, the range of bedding strength within which a fracture initiates in bedding and propagates along bedding can be obtained. The analytical results show the following: the critical strength ratio of bedding crack initiation increases with decreasing bedding strike angle and dip angle and with increasing minimum principal stress and rock strength. If the strike angle is less than 35.26°, the critical strength ratio of bedding crack initiation increases as the maximum principal stress decreases and the intermediate principal stress increases; if the strike angle is greater than 35.26°, it decreases under the same conditions. The critical strength ratio of bedding crack propagation increases when the bedding strike angle, dip angle and in-situ stress difference decrease and the rock strength increases. Only when the critical conditions of bedding crack initiation and
On Uncertainty of Compton Backscattering Process
Mo, X H
2013-01-01
The uncertainty of the Compton backscattering process is studied by virtue of analytical formulas, and the effects of variant energy spread and energy drift on the systematic uncertainty estimate are also studied with a Monte Carlo sampling technique. These quantitative conclusions are especially important for understanding the uncertainty of the beam energy measurement system.
Extended Forward Sensitivity Analysis for Uncertainty Quantification
Energy Technology Data Exchange (ETDEWEB)
Haihua Zhao; Vincent A. Mousseau
2011-09-01
Verification and validation (V&V) are playing increasingly important roles in quantifying uncertainties and realizing high-fidelity simulations in engineering system analyses, such as transients in a complex nuclear reactor system. Traditional V&V in reactor system analysis has focused more on the validation part or has not differentiated verification from validation. The traditional approach to uncertainty quantification is based on a 'black box' approach: the simulation tool is treated as an unknown signal generator, a distribution of inputs according to assumed probability density functions is sent in, and the distribution of the outputs is measured and correlated back to the original input distribution. The 'black box' method mixes numerical errors with all other uncertainties. It is also not efficient for performing sensitivity analysis. Contrary to the 'black box' method, a more efficient sensitivity approach can take advantage of intimate knowledge of the simulation code. In these types of approaches, equations for the propagation of uncertainty are constructed and the sensitivities are directly solved for as variables in the simulation. This paper presents forward sensitivity analysis as a method to aid uncertainty quantification. By including the time step and potentially the spatial step as special sensitivity parameters, the forward sensitivity method is extended as one method to quantify numerical errors. Note that by integrating local truncation errors over the whole system through the forward sensitivity analysis process, the generated time-step and spatial-step sensitivity information reflects global numerical errors. The discretization errors can be systematically compared against uncertainties due to other physical parameters. This extension makes the forward sensitivity method a much more powerful tool for uncertainty quantification. By knowing the relative sensitivity of time and space steps with other interested physical
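The forward sensitivity idea, augmenting the simulation with equations for the derivatives of the solution with respect to a parameter, can be shown on a scalar ODE. The decay model and step sizes are illustrative:

```python
import math

# Forward sensitivity for dy/dt = -k*y: augment the state with
# s = dy/dk, which satisfies ds/dt = -y - k*s (differentiate the ODE
# with respect to k and swap the order of derivatives).
k = 2.0
y, s = 1.0, 0.0                     # y(0) = 1, and dy(0)/dk = 0
dt, steps = 1.0e-4, 10_000          # explicit Euler to t = 1
for _ in range(steps):
    y, s = y + dt * (-k * y), s + dt * (-y - k * s)

# Analytic check: y(1) = e^{-k} and dy/dk at t = 1 is -e^{-k}
exact_y = math.exp(-k)
exact_s = -math.exp(-k)
```

Treating the time step itself as a sensitivity parameter, as the paper proposes, follows the same pattern: the augmented equations then expose how much of the output change is purely discretization error.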
Characterizing Epistemic Uncertainty for Launch Vehicle Designs
Novack, Steven D.; Rogers, Jim; Hark, Frank; Al Hassan, Mohammad
2016-01-01
NASA Probabilistic Risk Assessment (PRA) has the task of estimating the aleatory (randomness) and epistemic (lack of knowledge) uncertainty of launch vehicle loss of mission and crew risk and communicating the results. Launch vehicles are complex engineered systems designed with sophisticated subsystems that are built to work together to accomplish mission success. Some of these systems or subsystems are in the form of heritage equipment, while some have never been previously launched. For these cases, characterizing the epistemic uncertainty is of foremost importance, and it is anticipated that the epistemic uncertainty of a modified launch vehicle design versus a design of well understood heritage equipment would be greater. For reasons that will be discussed, standard uncertainty propagation methods using Monte Carlo simulation produce counterintuitive results and significantly underestimate epistemic uncertainty for launch vehicle models. Furthermore, standard PRA methods such as Uncertainty-Importance analyses used to identify components that are significant contributors to uncertainty are rendered obsolete, since sensitivity to uncertainty changes is not reflected in propagation of uncertainty using Monte Carlo methods. This paper provides a basis for the uncertainty underestimation for complex systems and especially, due to nuances of launch vehicle logic, for launch vehicles. It then suggests several alternative methods for estimating uncertainty and provides examples of estimation results. Lastly, the paper shows how to implement an Uncertainty-Importance analysis using one alternative approach, describes the results, and suggests ways to reduce epistemic uncertainty by focusing on additional data or testing of selected components.
A monomial chaos approach for efficient uncertainty quantification on nonlinear problems
Witteveen, J.A.S.; Bijl, H.
2008-01-01
A monomial chaos approach is presented for efficient uncertainty quantification in nonlinear computational problems. Propagating uncertainty through nonlinear equations can be computationally intensive for existing uncertainty quantification methods. It usually results in a set of nonlinear equation
A Monomial Chaos Approach for Efficient Uncertainty Quantification in Computational Fluid Dynamics
Witteveen, J.A.S.; Bijl, H.
2006-01-01
A monomial chaos approach is proposed for efficient uncertainty quantification in nonlinear computational problems. Propagating uncertainty through nonlinear equations can still be computationally intensive for existing uncertainty quantification methods. It usually results in a set of nonlinear equ
Solomatine, Dimitri
2016-04-01
When speaking about model uncertainty, many authors implicitly mean data uncertainty (mainly in parameters or inputs), which is described probabilistically by distributions. Often, however, it is necessary to look into the residual uncertainty as well. It is hence reasonable to classify the main approaches to uncertainty analysis with respect to the two main types of model uncertainty that can be distinguished: A. The residual uncertainty of models. In this case the model parameters and/or model inputs are considered to be fixed (deterministic), i.e. the model is considered to be optimal (calibrated) and deterministic. Model error is considered as the manifestation of uncertainty. If there is enough past data about the model errors (i.e. their uncertainty), it is possible to build a statistical or machine learning model of uncertainty trained on this data. The following methods can be mentioned: (a) the quantile regression (QR) method of Koenker and Bassett, in which linear regression is used to build predictive models for distribution quantiles [1]; (b) a more recent approach that takes into account the input variables influencing such uncertainty and uses more advanced (non-linear) machine learning methods (neural networks, model trees etc.) - the UNEEC method [2,3,7]; (c) the even more recent DUBRAUE method (Dynamic Uncertainty Model By Regression on Absolute Error), an autoregressive model of model residuals (it corrects the model residual first and then carries out the uncertainty prediction with an autoregressive statistical model) [5]. B. The data uncertainty (parametric and/or input) - in this case we study the propagation of uncertainty (typically represented probabilistically) from parameters or inputs to the model outputs. In the case of simple functions representing models, analytical approaches can be used, or approximation methods (e.g., the first-order second moment method). However, for real complex non-linear models implemented in software there is no other choice except using
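Method (a), quantile regression, reduces in its simplest constant-predictor form to minimizing the pinball (tilted absolute) loss over past residuals. A toy sketch with invented residuals:

```python
# Hypothetical model residuals collected on past data
residuals = [-0.5, 0.1, 0.3, -0.2, 0.8, 0.4, -0.1, 0.6, 0.2, 0.0]
TAU = 0.9                     # target quantile (90%)

def pinball(q, data, tau):
    """Tilted absolute (pinball) loss; its minimizer over constants is
    the empirical tau-quantile of the data."""
    return sum(tau * (x - q) if x >= q else (1.0 - tau) * (q - x)
               for x in data)

# Full quantile regression makes q a linear function of input variables;
# a grid search over constant predictors shows the mechanism.
grid = [i / 100.0 for i in range(-100, 101)]
q_hat = min(grid, key=lambda q: pinball(q, residuals, TAU))
```

Fitting one such predictor per quantile level yields the uncertainty bands that methods like UNEEC generalize with non-linear learners.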
Liu, Baoding
2015-01-01
When no samples are available to estimate a probability distribution, we have to invite some domain experts to evaluate the belief degree that each event will happen. Perhaps some people think that the belief degree should be modeled by subjective probability or fuzzy set theory. However, it is usually inappropriate because both of them may lead to counterintuitive results in this case. In order to rationally deal with belief degrees, uncertainty theory was founded in 2007 and subsequently studied by many researchers. Nowadays, uncertainty theory has become a branch of axiomatic mathematics for modeling belief degrees. This is an introductory textbook on uncertainty theory, uncertain programming, uncertain statistics, uncertain risk analysis, uncertain reliability analysis, uncertain set, uncertain logic, uncertain inference, uncertain process, uncertain calculus, and uncertain differential equation. This textbook also shows applications of uncertainty theory to scheduling, logistics, networks, data mining, c...
Solomatine, Dimitri
2016-04-01
When speaking about model uncertainty, many authors implicitly mean data uncertainty (mainly in parameters or inputs), which is described probabilistically by distributions. Often, however, it is useful to look into the residual uncertainty as well. It is hence reasonable to classify the main approaches to uncertainty analysis with respect to the two main types of model uncertainty that can be distinguished: A. The residual uncertainty of models. In this case the model parameters and/or model inputs are considered to be fixed (deterministic), i.e. the model is considered to be optimal (calibrated) and deterministic. Model error is considered as the manifestation of uncertainty. If there is enough past data about the model errors (i.e. its uncertainty), it is possible to build a statistical or machine learning model of uncertainty trained on these data. The following methods can be mentioned: (a) the quantile regression (QR) method of Koenker and Bassett, in which linear regression is used to build predictive models for distribution quantiles [1]; (b) a more recent approach that takes into account the input variables influencing such uncertainty and uses more advanced non-linear machine learning methods (neural networks, model trees, etc.): the UNEEC method [2,3,7]; (c) the even more recent DUBRAUE method (Dynamic Uncertainty Model By Regression on Absolute Error), an autoregressive model of model residuals (it corrects the model residual first and then carries out the uncertainty prediction by an autoregressive statistical model) [5]. B. The data uncertainty (parametric and/or input). In this case we study the propagation of uncertainty (typically represented probabilistically) from parameters or inputs to the model outputs. For simple functions representing models, analytical approaches can be used, or approximation methods (e.g., the first-order second-moment method). However, for real complex non-linear models implemented in software there is no other choice except using...
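The contrast drawn in point B between analytical first-order approximation and sampling can be sketched in a few lines. The one-parameter model below is a hypothetical illustration, not one from the cited papers; it compares a first-order second-moment (Taylor) estimate with a Monte Carlo check.

```python
import numpy as np

# Illustrative nonlinear model (an assumption, not from the paper)
def model(x):
    return np.exp(0.5 * x) + x**2

# First-order second-moment (FOSM): sigma_y ~= |df/dx| * sigma_x
x0, sigma_x = 1.0, 0.05
h = 1e-6
dfdx = (model(x0 + h) - model(x0 - h)) / (2 * h)   # central difference
sigma_y_fosm = abs(dfdx) * sigma_x

# Monte Carlo check -- the fallback for complex models in software
rng = np.random.default_rng(0)
samples = model(rng.normal(x0, sigma_x, 200_000))
sigma_y_mc = samples.std()

print(f"FOSM: {sigma_y_fosm:.4f}  Monte Carlo: {sigma_y_mc:.4f}")
```

For a mildly nonlinear model and a small input spread, the two estimates agree closely; the analytical route needs only derivative information rather than many model runs.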
Climate model uncertainty vs. conceptual geological uncertainty in hydrological modeling
Directory of Open Access Journals (Sweden)
T. O. Sonnenborg
2015-04-01
Projections of climate change impact are associated with a cascade of uncertainties including CO2 emission scenario, climate model, downscaling and impact model. The relative importance of the individual uncertainty sources is expected to depend on several factors, including the quantity that is projected. In the present study the impacts of climate model uncertainty and geological model uncertainty on hydraulic head, stream flow, travel time and capture zones are evaluated. Six versions of a physically based and distributed hydrological model, each containing a unique interpretation of the geological structure of the model area, are forced by 11 climate model projections. Each projection of future climate is the result of a GCM-RCM model combination (from the ENSEMBLES project) forced by the same CO2 scenario (A1B). The changes from the reference period (1991–2010) to the future period (2081–2100) in projected hydrological variables are evaluated and the effects of geological model and climate model uncertainties are quantified. The results show that uncertainty propagation is context dependent. While the geological conceptualization is the dominating uncertainty source for projection of travel time and capture zones, the uncertainty on the climate models is more important for groundwater hydraulic heads and stream flow.
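The partition of projection variance between the two uncertainty sources can be illustrated with a toy ensemble. The 6 x 11 design mirrors the study, but the change values below are synthetic, chosen so that the geology signal dominates.

```python
import numpy as np

# Synthetic projected changes (e.g., in travel time) for 6 geological
# models x 11 climate projections; an additive-effects toy ensemble.
geo_effect = np.array([-3.0, -2.0, -1.0, 1.0, 2.0, 3.0])[:, None]
clim_effect = np.linspace(-0.5, 0.5, 11)[None, :]
change = geo_effect + clim_effect

# ANOVA-style partition: spread of row means (geology) vs column means (climate)
var_geo = change.mean(axis=1).var()
var_clim = change.mean(axis=0).var()
share = var_geo / (var_geo + var_clim)
print(f"geology share of variance: {share:.2f}")
```

With real ensembles the two effects also interact, so a full variance decomposition needs an interaction term; the sketch keeps only the main effects.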
Uncertainty of empirical correlation equations
Feistel, R.; Lovell-Smith, J. W.; Saunders, P.; Seitz, S.
2016-08-01
The International Association for the Properties of Water and Steam (IAPWS) has published a set of empirical reference equations of state, forming the basis of the 2010 Thermodynamic Equation of Seawater (TEOS-10), from which all thermodynamic properties of seawater, ice, and humid air can be derived in a thermodynamically consistent manner. For each of the equations of state, the parameters have been found by simultaneously fitting equations for a range of different derived quantities using large sets of measurements of these quantities. In some cases, uncertainties in these fitted equations have been assigned based on the uncertainties of the measurement results. However, because uncertainties in the parameter values have not been determined, it is not possible to estimate the uncertainty in many of the useful quantities that can be calculated using the parameters. In this paper we demonstrate how the method of generalised least squares (GLS), in which the covariance of the input data is propagated into the values calculated by the fitted equation, and in particular into the covariance matrix of the fitted parameters, can be applied to one of the TEOS-10 equations of state, namely IAPWS-95 for fluid pure water. Using the calculated parameter covariance matrix, we provide some preliminary estimates of the uncertainties in derived quantities, namely the second and third virial coefficients for water. We recommend further investigation of the GLS method for use as a standard method for calculating and propagating the uncertainties of values computed from empirical equations.
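A minimal sketch of the GLS machinery described above, applied to an assumed quadratic model rather than IAPWS-95: the covariance of the input data is propagated into the covariance matrix of the fitted parameters, and from there into a derived quantity.

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0, 1, 20)
X = np.vander(x, 3)                        # quadratic design matrix (assumed model)
beta_true = np.array([1.0, -2.0, 0.5])
Sigma_y = np.diag(np.full(20, 0.01**2))    # input-data covariance (assumed)
y = X @ beta_true + rng.multivariate_normal(np.zeros(20), Sigma_y)

# GLS fit: beta = (X^T W X)^{-1} X^T W y with W = Sigma_y^{-1}
W = np.linalg.inv(Sigma_y)
cov_beta = np.linalg.inv(X.T @ W @ X)      # parameter covariance matrix
beta = cov_beta @ X.T @ W @ y

# Propagate cov_beta into a derived quantity q = g^T beta (here: y at x = 0.5)
g = np.array([0.25, 0.5, 1.0])
q, u_q = g @ beta, np.sqrt(g @ cov_beta @ g)
print(f"q = {q:.3f} +/- {u_q:.4f}")
```

The same pattern scales to the nonlinear equations of state: there `g` becomes the gradient of the derived quantity (e.g., a virial coefficient) with respect to the fitted parameters.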
Extended Forward Sensitivity Analysis for Uncertainty Quantification
Energy Technology Data Exchange (ETDEWEB)
Haihua Zhao; Vincent A. Mousseau
2008-09-01
This report presents the forward sensitivity analysis method as a means for quantification of uncertainty in system analysis. The traditional approach to uncertainty quantification is based on a “black box” approach: the simulation tool is treated as an unknown signal generator, a distribution of inputs according to assumed probability density functions is sent in, and the distribution of the outputs is measured and correlated back to the original input distribution. This approach requires a large number of simulation runs and therefore has a high computational cost. Contrary to the “black box” method, a more efficient sensitivity approach can take advantage of intimate knowledge of the simulation code. In this approach, equations for the propagation of uncertainty are constructed and the sensitivity is solved for as variables in the same simulation. This “glass box” method can generate sensitivity information similar to that of the “black box” approach with only a couple of runs to cover a large uncertainty region. Because only a small number of runs is required, those runs can be done with high accuracy in space and time, ensuring that the uncertainty of the physical model is being measured and not simply the numerical error caused by coarse discretization. In the forward sensitivity method, the model is differentiated with respect to each parameter to yield an additional system of the same size as the original one, the solution of which is the solution sensitivity. The sensitivity of any output variable can then be directly obtained from these sensitivities by applying the chain rule of differentiation. We extend the forward sensitivity method to include time and spatial steps as special parameters so that the numerical errors can be quantified against other physical parameters. This extension makes the forward sensitivity method a much more powerful tool for uncertainty analysis. By knowing the relative sensitivity of time and space steps with other...
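The "glass box" idea can be shown on a toy decay equation (an illustrative stand-in, not the report's system code): the model is differentiated with respect to its parameter, the sensitivity equation is integrated in the same run, and the result is checked against the "black box" finite-difference route.

```python
def integrate(k, dt=1e-4, t_end=1.0):
    """Explicit Euler for dy/dt = -k*y together with its forward
    sensitivity s = dy/dk, which obeys ds/dt = -y - k*s ("glass box")."""
    y, s = 1.0, 0.0
    for _ in range(int(t_end / dt)):
        y, s = y + dt * (-k * y), s + dt * (-y - k * s)
    return y, s

k = 2.0
y, s_forward = integrate(k)

# "Black box" check: finite-difference sensitivity needs two extra full runs
eps = 1e-5
s_fd = (integrate(k + eps)[0] - integrate(k - eps)[0]) / (2 * eps)
print(f"forward: {s_forward:.5f}  finite difference: {s_fd:.5f}")
```

The sensitivity system has the same size as the original one, so the extra cost per parameter is one auxiliary equation rather than one or two complete reruns of the model.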
Whitepaper on Uncertainty Quantification for MPACT
Energy Technology Data Exchange (ETDEWEB)
Williams, Mark L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
2015-12-17
The MPACT code provides the ability to perform high-fidelity deterministic calculations to obtain a wide variety of detailed results for very complex reactor core models. However MPACT currently does not have the capability to propagate the effects of input data uncertainties to provide uncertainties in the calculated results. This white paper discusses a potential method for MPACT uncertainty quantification (UQ) based on stochastic sampling.
Courtney, H; Kirkland, J; Viguerie, P
1997-01-01
At the heart of the traditional approach to strategy lies the assumption that by applying a set of powerful analytic tools, executives can predict the future of any business accurately enough to allow them to choose a clear strategic direction. But what happens when the environment is so uncertain that no amount of analysis will allow us to predict the future? What makes for a good strategy in highly uncertain business environments? The authors, consultants at McKinsey & Company, argue that uncertainty requires a new way of thinking about strategy. All too often, they say, executives take a binary view: either they underestimate uncertainty to come up with the forecasts required by their companies' planning or capital-budgeting processes, or they overestimate it, abandon all analysis, and go with their gut instinct. The authors outline a new approach that begins by making a crucial distinction among four discrete levels of uncertainty that any company might face. They then explain how a set of generic strategies--shaping the market, adapting to it, or reserving the right to play at a later time--can be used in each of the four levels. And they illustrate how these strategies can be implemented through a combination of three basic types of actions: big bets, options, and no-regrets moves. The framework can help managers determine which analytic tools can inform decision making under uncertainty--and which cannot. At a broader level, it offers executives a discipline for thinking rigorously and systematically about uncertainty and its implications for strategy.
Orbital State Uncertainty Realism
Horwood, J.; Poore, A. B.
2012-09-01
Fundamental to the success of the space situational awareness (SSA) mission is the rigorous inclusion of uncertainty in the space surveillance network. The proper characterization of uncertainty in the orbital state of a space object is a common requirement of many SSA functions, including tracking and data association, resolution of uncorrelated tracks (UCTs), conjunction analysis and probability of collision, sensor resource management, and anomaly detection. While tracking environments such as air and missile defense make extensive use of Gaussian and local linearity assumptions within algorithms for uncertainty management, space surveillance is inherently different due to long time gaps between updates, high misdetection rates, nonlinear and non-conservative dynamics, and non-Gaussian phenomena. The latter implies that "covariance realism" is not always sufficient. SSA also requires "uncertainty realism": the proper characterization of the state, the covariance, and all non-zero higher-order cumulants. In other words, a proper characterization of a space object's full state probability density function (PDF) is required. In order to provide a more statistically rigorous treatment of uncertainty in the space surveillance tracking environment and to better support the aforementioned SSA functions, a new class of multivariate PDFs is formulated which more accurately characterizes the uncertainty of a space object's state or orbit. The new distribution contains a parameter set controlling the higher-order cumulants which gives the level sets a distinctive "banana" or "boomerang" shape and degenerates to a Gaussian in a suitable limit. Using the new class of PDFs within the general Bayesian nonlinear filter, the resulting filter prediction step (i.e., uncertainty propagation) is shown to have the same computational cost as the traditional unscented Kalman filter, with the former able to maintain a proper characterization of the uncertainty for up to ten...
Duerdoth, Ian
2009-01-01
The subject of uncertainties (sometimes called errors) is traditionally taught (to first-year science undergraduates) towards the end of a course on statistics that defines probability as the limit of many trials, and discusses probability distribution functions and the Gaussian distribution. We show how to introduce students to the concepts of…
Ferrarese, Giorgio
2011-01-01
Lectures: A. Jeffrey: Lectures on nonlinear wave propagation.- Y. Choquet-Bruhat: Ondes asymptotiques.- G. Boillat: Urti.- Seminars: D. Graffi: Sulla teoria dell'ottica non-lineare.- G. Grioli: Sulla propagazione del calore nei mezzi continui.- T. Manacorda: Onde nei solidi con vincoli interni.- T. Ruggeri: "Entropy principle" and main field for a non linear covariant system.- B. Straughan: Singular surfaces in dipolar materials and possible consequences for continuum mechanics
Directory of Open Access Journals (Sweden)
D. D. Lucas
2004-10-01
A study of the current significant uncertainties in dimethylsulfide (DMS) gas-phase chemistry provides insight into additional research needed to decrease these uncertainties. The DMS oxidation cycle in the remote marine boundary layer is simulated using a diurnally varying box model with 56 uncertain chemical and physical parameters. Two analytical methods (direct integration and probabilistic collocation) are used to determine the most influential parameters (sensitivity analysis) and sources of uncertainty (uncertainty analysis) affecting the concentrations of DMS, SO_{2}, methanesulfonic acid (MSA), and H_{2}SO_{4}. The key parameters identified by the sensitivity analysis are associated with DMS emissions, mixing into and out of the boundary layer, heterogeneous removal of soluble sulfur-containing compounds, and the DMS+OH addition and abstraction reactions. MSA and H_{2}SO_{4} are also sensitive to the rate constants of numerous other reactions, which limits the effectiveness of mechanism reduction techniques. Propagating the parameter uncertainties through the model leads to concentrations that are uncertain by factors of 1.8 to 3.0. The main sources of uncertainty are DMS emissions and heterogeneous scavenging. Uncertain chemical rate constants, however, collectively account for up to 50–60% of the net uncertainties in MSA and H_{2}SO_{4}. The concentration uncertainties are also calculated at different temperatures, where they vary mainly due to temperature-dependent chemistry. With changing temperature, the uncertainties of DMS and SO_{2} remain steady, while the uncertainties of MSA and H_{2}SO_{4} vary by factors of 2 to 4.
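Uncertainty factors like the "1.8 to 3.0" quoted above behave as geometric standard deviations. For a hypothetical multiplicative model with lognormal parameters (the parameters and their spreads below are invented for illustration), the analytical combination rule can be checked against Monte Carlo:

```python
import numpy as np

# Toy multiplicative model y = E * k / L (emission, rate, loss) -- an
# illustrative stand-in for a box-model response, with lognormal parameters.
GSD = {"E": 1.5, "k": 1.2, "L": 1.4}           # geometric std. devs (assumed)
exponents = {"E": 1.0, "k": 1.0, "L": -1.0}

# Analytical combination: (ln GSD_y)^2 = sum a_i^2 * (ln GSD_i)^2
ln2 = sum(a**2 * np.log(GSD[p]) ** 2 for p, a in exponents.items())
gsd_y = np.exp(np.sqrt(ln2))

# Monte Carlo check
rng = np.random.default_rng(3)
y = np.ones(100_000)
for p, a in exponents.items():
    y *= rng.lognormal(0.0, np.log(GSD[p]), 100_000) ** a
gsd_mc = np.exp(np.log(y).std())
print(f"analytic GSD: {gsd_y:.3f}  Monte Carlo: {gsd_mc:.3f}")
```

For a genuinely multiplicative model the two agree; real chemistry models are only locally multiplicative, which is why the paper resorts to sampling.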
Energy Technology Data Exchange (ETDEWEB)
Wendelberger, James G. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-08-10
These are slides from a presentation made by a researcher from Los Alamos National Laboratory. The following topics are covered: sources of error for NDA gamma measurements, precision and accuracy are two important characteristics of measurements, four items processed in a material balance area during the inventory time period, inventory difference and propagation of variance, sum in quadrature, and overview of the ID/POV process.
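The "sum in quadrature" item in the outline is the familiar rule that independent uncertainty components add in variance, not in magnitude; the component values below are illustrative.

```python
import math

# Independent uncertainty components of a measurement (illustrative values)
components = {"calibration": 0.12, "counting": 0.30, "geometry": 0.16}

# Sum in quadrature: independent errors add in variance, not magnitude
u_total = math.sqrt(sum(u**2 for u in components.values()))
print(f"combined uncertainty: {u_total:.3f}")   # -> 0.361, < the 0.58 plain sum
```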
Statistically qualified neuro-analytic failure detection method and system
Vilim, Richard B.; Garcia, Humberto E.; Chen, Frederick W.
2002-03-02
An apparatus and method for monitoring a process involve development and application of a statistically qualified neuro-analytic (SQNA) model to accurately and reliably identify process change. The development of the SQNA model is accomplished in two stages: deterministic model adaptation, and stochastic modification of the deterministic model adaptation. Deterministic model adaptation involves formulating an analytic model of the process representing known process characteristics, augmenting the analytic model with a neural network that captures unknown process characteristics, and training the resulting neuro-analytic model by adjusting the neural network weights according to a unique scaled equation-error minimization technique. Stochastic model modification involves qualifying any remaining uncertainty in the trained neuro-analytic model by formulating a likelihood function, given an error propagation equation, for computing the probability that the neuro-analytic model generates the measured process output. Preferably, the developed SQNA model is validated using known sequential probability ratio tests and applied to the process as an on-line monitoring system. Illustrative of the method and apparatus, the method is applied to a peristaltic pump system.
Deterministic uncertainty analysis
International Nuclear Information System (INIS)
This paper presents a deterministic uncertainty analysis (DUA) method for calculating uncertainties that has the potential to significantly reduce the number of computer runs compared to conventional statistical analysis. The method is based upon the availability of derivative and sensitivity data such as that calculated using the well-known direct or adjoint sensitivity analysis techniques. Formation of response surfaces using derivative data and the propagation of input probability distributions are discussed relative to their role in the DUA method. A sample problem that models the flow of water through a borehole is used as a basis to compare the cumulative distribution function of the flow rate as calculated by the standard statistical methods and the DUA method. Propagation of uncertainties by the DUA method is compared for ten cases in which the number of reference model runs was varied from one to ten. The DUA method gives a more accurate representation of the true cumulative distribution of the flow rate based upon as few as two model executions compared to fifty model executions using a statistical approach. 16 refs., 4 figs., 5 tabs
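The DUA idea of replacing repeated model executions with a derivative-based response surface can be sketched as follows; `expensive_model` is a hypothetical stand-in, not the borehole model of the paper.

```python
import numpy as np

# Stand-in for an expensive model (the real thing would be a simulator)
def expensive_model(x):
    return 100.0 * x[0] / (1.0 + x[1])

x0 = np.array([1.0, 2.0])                     # reference point
y0 = expensive_model(x0)

# Gradient from direct/adjoint sensitivities (here: finite differences)
h = 1e-6
grad = np.array([(expensive_model(x0 + h * e) - y0) / h for e in np.eye(2)])

# Linear response surface y ~ y0 + grad.(x - x0): the CDF comes from
# thousands of cheap surrogate evaluations, after only 1 + 2 model runs.
rng = np.random.default_rng(4)
xs = rng.normal(x0, [0.05, 0.1], size=(50_000, 2))
ys = y0 + (xs - x0) @ grad
print(f"mean {ys.mean():.2f}, std {ys.std():.3f}")
```

The accuracy of the surrogate CDF hinges on how linear the model is over the input spread, which is the trade-off the paper's one-to-ten-run comparison explores.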
Clyde, Merlise; George, Edward I.
2004-01-01
The evolution of Bayesian approaches for model uncertainty over the past decade has been remarkable. Catalyzed by advances in methods and technology for posterior computation, the scope of these methods has widened substantially. Major thrusts of these developments have included new methods for semiautomatic prior specification and posterior exploration. To illustrate key aspects of this evolution, the highlights of some of these developments are described.
A new algorithm for five-hole probe calibration, data reduction, and uncertainty analysis
Reichert, Bruce A.; Wendt, Bruce J.
1994-01-01
A new algorithm for five-hole probe calibration and data reduction using a non-nulling method is developed. The significant features of the algorithm are: (1) two components of the unit vector in the flow direction replace pitch and yaw angles as flow direction variables; and (2) symmetry rules are developed that greatly simplify Taylor's series representations of the calibration data. In data reduction, four pressure coefficients allow total pressure, static pressure, and flow direction to be calculated directly. The new algorithm's simplicity permits an analytical treatment of the propagation of uncertainty in five-hole probe measurement. The objectives of the uncertainty analysis are to quantify uncertainty of five-hole results (e.g., total pressure, static pressure, and flow direction) and determine the dependence of the result uncertainty on the uncertainty of all underlying experimental and calibration measurands. This study outlines a general procedure that other researchers may use to determine five-hole probe result uncertainty and provides guidance to improve measurement technique. The new algorithm is applied to calibrate and reduce data from a rake of five-hole probes. Here, ten individual probes are mounted on a single probe shaft and used simultaneously. Use of this probe is made practical by the simplicity afforded by this algorithm.
Measurement uncertainty of isotopologue fractions in fluxomics determined via mass spectrometry.
Guerrasio, R; Haberhauer-Troyer, C; Steiger, M; Sauer, M; Mattanovich, D; Koellensperger, G; Hann, S
2013-06-01
Metabolic flux analysis implies mass isotopomer distribution analysis and determination of mass isotopologue fractions (IFs) of proteinogenic amino acids of cell cultures. In this work, for the first time, this type of analysis is comprehensively investigated in terms of measurement uncertainty by calculating and comparing budgets for different mass spectrometric techniques. The calculations addressed amino acids of Pichia pastoris grown on 10% uniformly (13)C labeled glucose. Typically, such experiments revealed an enrichment of (13)C by at least one order of magnitude in all proteinogenic amino acids. Liquid chromatography-time-of-flight mass spectrometry (LC-TOFMS), liquid chromatography-tandem mass spectrometry (LC-MS/MS) and gas chromatography-mass spectrometry (GC-MS) analyses were performed. The samples were diluted to fit the linear dynamic range of the mass spectrometers used (10 μM amino acid concentration). The total combined uncertainties of IFs as well as the major uncertainty contributions affecting the IFs were determined for phenylalanine, which was selected as an exemplary model compound. A bottom-up uncertainty propagation was performed according to Quantifying Uncertainty in Analytical Measurement and using the Monte Carlo method, by considering all factors leading to an IF, i.e., the process of measurement and the addition of (13)C-glucose. Excellent relative expanded uncertainties (k = 1) of 0.32, 0.75, and 0.96% were obtained for an IF value of 0.7 by LC-MS/MS, GC-MS, and LC-TOFMS, respectively. The major source of uncertainty, with a relative contribution of 20-80% of the total uncertainty, was attributed to the signal intensity (absolute counts) uncertainty calculated according to Poisson counting statistics, regardless of which of the mass spectrometry platforms was used. Uncertainty due to measurement repeatability was of importance in LC-MS/MS, showing a relative contribution up to 47% of the total uncertainty, whereas for GC-MS and LC...
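The dominant Poisson-counting contribution can be reproduced for a hypothetical pair of ion counts giving an isotopologue fraction of 0.7, using first-order propagation of var(n) = n; the counts are invented for illustration.

```python
import math

# Isotopologue fraction from two ion counts (illustrative counts)
a, b = 70_000, 30_000            # counts for the two isotopologue signals
IF = a / (a + b)                 # 0.7, like the model value in the abstract

# Poisson counting statistics: var(a) = a, var(b) = b; first-order
# propagation through IF = a/(a+b) gives u^2 = a*b/(a+b)^3
u_IF = math.sqrt(a * b / (a + b) ** 3)
print(f"IF = {IF:.3f} +/- {u_IF:.5f} ({100 * u_IF / IF:.2f}% relative)")
```

The relative uncertainty shrinks with the square root of the total counts, which is why dilution to fit the linear dynamic range directly affects the budget.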
International Nuclear Information System (INIS)
The main part of this thesis consists of 15 published papers, in which the numerical Beam Propagation Method (BPM) is investigated, verified and used in a number of applications. In the introduction a derivation of the nonlinear Schroedinger equation is presented to connect the beginning of the soliton papers with Maxwell's equations including a nonlinear polarization. This thesis focuses on the wide use of the BPM for numerical simulations of propagating light and particle beams through different types of structures such as waveguides, fibers, tapers, Y-junctions, laser arrays and crystalline solids. We verify the BPM in the above listed problems against other numerical methods, for example the finite-element method, perturbation methods and Runge-Kutta integration. Further, the BPM is shown to be a simple and effective way to numerically set up the Green's function in matrix form for periodic structures. The Green's function matrix can then be diagonalized with matrix methods yielding the eigensolutions of the structure. The BPM inherent transverse periodicity can be untied, if desired, by for example including an absorptive refractive index at the computational window edges. The interaction of two first-order soliton pulses is strongly dependent on the phase relationship between the individual solitons. When optical phase shift keying is used in coherent one-carrier wavelength communication, the fiber attenuation will suppress or delay the nonlinear instability. (orig.)
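A minimal split-step Fourier BPM of the kind verified in the thesis, here applied to the normalized nonlinear Schroedinger equation with a fundamental-soliton input; the grid and step sizes are illustrative choices.

```python
import numpy as np

# Split-step Fourier BPM for the normalized nonlinear Schroedinger equation
# i*A_z + 0.5*A_xx + |A|^2*A = 0; grid and step sizes are illustrative.
N, L, dz, steps = 256, 40.0, 0.005, 1000
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
kx = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
half_lin = np.exp(-0.25j * kx**2 * dz)        # half diffraction step (Fourier space)

A = 1.0 / np.cosh(x)                          # fundamental soliton: shape-invariant
for _ in range(steps):
    A = np.fft.ifft(half_lin * np.fft.fft(A))       # half linear step
    A = A * np.exp(1j * np.abs(A) ** 2 * dz)        # full nonlinear phase step
    A = np.fft.ifft(half_lin * np.fft.fft(A))       # half linear step

power = (np.abs(A) ** 2).sum() * (L / N)      # L2 norm, conserved by the scheme
print(f"power {power:.4f}, peak |A| {np.abs(A).max():.4f}")
```

The FFT-based linear step is unitary and the nonlinear step is a pure phase, so the scheme conserves power exactly; the soliton keeps its sech profile, which makes this a convenient self-check before tackling tapers or junctions.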
Pole solutions for flame front propagation
Kupervasser, Oleg
2015-01-01
This book deals with solving mathematically the unsteady flame propagation equations. New original mathematical methods for solving complex non-linear equations and investigating their properties are presented. Pole solutions for flame front propagation are developed. Premixed flames and filtration combustion have remarkable properties: the complex nonlinear integro-differential equations for these problems have exact analytical solutions described by the motion of poles in a complex plane. Instead of complex equations, a finite set of ordinary differential equations is applied. These solutions help to investigate analytically and numerically properties of the flame front propagation equations.
DEFF Research Database (Denmark)
Nguyen, Daniel Xuyen
This paper presents a model of trade that explains why firms wait to export and why many exporters fail. Firms face uncertain demands that are only realized after the firm enters the destination. The model retools the timing of uncertainty resolution found in productivity heterogeneity models...... the high rate of exit seen in the first years of exporting. Finally, when faced with multiple countries in which to export, some firms will choose to export sequentially in order to slowly learn more about their chances for success in untested markets....
Uncertainty Relations for General Unitary Operators
Bagchi, Shrobona; Pati, Arun Kumar
2015-01-01
We derive several uncertainty relations for two arbitrary unitary operators acting on physical states of any Hilbert space (finite or infinite dimensional). We show that our bounds are tighter in various cases than the ones existing in the current literature. With regard to the minimum uncertainty state for the cases of both the finite as well as the infinite dimensional unitary operators, we derive the minimum uncertainty state equation by the analytic method. As an application of this, we f...
Bayesian MARS for uncertainty quantification in stochastic transport problems
International Nuclear Information System (INIS)
We present a method for estimating solutions to partial differential equations with uncertain parameters using a modification of the Bayesian Multivariate Adaptive Regression Splines (BMARS) emulator. The BMARS algorithm uses Markov chain Monte Carlo (MCMC) to construct a basis function composed of polynomial spline functions, for which derivatives and integrals are straightforward to compute. We use these calculations and a modification of the curve-fitting BMARS algorithm to search for a basis function (response surface) which, in combination with its derivatives/integrals, satisfies a governing differential equation and specified boundary condition. We further show that this fit can be improved by enforcing a conservation or other physics-based constraint. Our results indicate that estimates to solutions of simple first order partial differential equations (without uncertainty) can be efficiently computed with very little regression error. We then extend the method to estimate uncertainties in the solution to a pure absorber transport problem in a medium with uncertain cross-section. We describe and compare two strategies for propagating the uncertain cross-section through the BMARS algorithm; the results from each method are in close agreement with analytical results. We discuss the scalability of the algorithm to parallel architectures and the applicability of the two strategies to larger problems with more degrees of uncertainty. (author)
Constrained quantities in uncertainty quantification. Ambiguity and tips to follow
International Nuclear Information System (INIS)
The nuclear community relies heavily on computer codes and numerical tools. The results of such computations can only be trusted if they are augmented by proper sensitivity and uncertainty (S and U) studies. This paper presents some aspects of S and U analysis when constrained quantities are involved, such as the fission spectrum or the isotopic distribution of elements. A consistent theory is given for the derivation and interpretation of constrained sensitivities as well as the corresponding covariance matrix normalization procedures. It is shown that if the covariance matrix violates the “generic zero column and row sum” condition, normalizing it is equivalent to constraining the sensitivities, but since both can be done in many ways different sensitivity coefficients and uncertainties can be derived. This makes results ambiguous, underlining the need for proper covariance data. It is also highlighted that the use of constrained sensitivity coefficients derived with a constraining procedure that is not idempotent can lead to biased results in uncertainty propagation. The presented theory is demonstrated on an analytical case and a numerical example involving the fission spectrum, both confirming the main conclusions of this research. (author)
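The "generic zero column and row sum" condition and an idempotent normalization can be demonstrated directly; the projection used below is one common choice among the "many ways" the abstract mentions, shown here on a random covariance matrix.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 5
B = rng.normal(size=(n, n))
C = B @ B.T                        # a covariance matrix that violates the
                                   # zero row/column-sum condition

# One idempotent normalization: project out the constrained direction with
# P = I - (1/n) * ones. This is an illustrative choice, not the only one.
P = np.eye(n) - np.ones((n, n)) / n
C_norm = P @ C @ P.T

print("row sums:", np.round(C_norm.sum(axis=1), 12))
```

Because P is idempotent (P @ P == P), re-applying the normalization changes nothing, which is exactly the property the paper identifies as necessary to avoid biased uncertainty propagation.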
Uncertainties in land use data
Directory of Open Access Journals (Sweden)
G. Castilla
2007-11-01
This paper deals with the description and assessment of uncertainties in land use data derived from Remote Sensing observations, in the context of hydrological studies. Land use is a categorical regionalised variable reporting the main socio-economic role each location has, where the role is inferred from the pattern of occupation of land. The properties of this pattern that are relevant to hydrological processes have to be known with some accuracy in order to obtain reliable results; hence, uncertainty in land use data may lead to uncertainty in model predictions. There are two main uncertainties surrounding land use data: positional and categorical. The first one is briefly addressed and the second one is explored in more depth, including the factors that influence it. We (1) argue that the conventional method used to assess categorical uncertainty, the confusion matrix, is insufficient to propagate uncertainty through distributed hydrologic models; (2) report some alternative methods to tackle this and other insufficiencies; (3) stress the role of metadata as a more reliable means to assess the degree of distrust with which these data should be used; and (4) suggest some practical recommendations.
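For reference, the confusion-matrix accuracy measures the paper criticizes are computed as follows; the class counts are invented. The aggregate rates illustrate the limitation: they say nothing about where errors occur in space.

```python
import numpy as np

# Confusion matrix for 3 land-use classes (rows: reference, cols: mapped);
# the counts are illustrative.
cm = np.array([[50,  5,  2],
               [ 4, 60,  8],
               [ 1,  9, 40]])

overall_accuracy = np.trace(cm) / cm.sum()
producers = np.diag(cm) / cm.sum(axis=1)   # per-class view of omission errors
users = np.diag(cm) / cm.sum(axis=0)       # per-class view of commission errors
print(f"overall accuracy: {overall_accuracy:.3f}")
# These are spatially aggregated rates only -- they cannot drive per-pixel
# uncertainty propagation through a distributed hydrological model.
```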
Assonov, Sergey; Groening, Manfred; Fajgelj, Ales
2016-04-01
The worldwide metrological comparability of stable isotope measurement results is presently achieved by linking them to the conventional delta scales. Delta scales are realized by scale-defining reference materials, most of them supplied by the IAEA (examples are VSMOW2 and SLAP2). In fact, these reference materials are artefacts, characterized by a network of laboratories using the current best measurement practice. In reality any measurement result is linked to the scale via the reference materials (RMs) in use. Any RM is traceable to the highest-level RMs which define the scale; this is valid not only for international RMs (mostly secondary RMs like IAEA-CH-7, NBS22) but for any lab standard calibrated by users. This is the basis of measurement traceability. The traceability scheme allows addressing both the comparability and the long-term compatibility of measurement results. Correspondingly, the uncertainty of any measurement result has to be propagated up to the scale level. The uncertainty evaluation should include (i) the uncertainty of the RMs in use; (ii) the analytical uncertainty of the materials used in calibration runs performed at the user laboratory; (iii) the reproducibility of results obtained on the sample material; and (iv) the uncertainty of corrections applied (memory, drift, etc.). Besides these, there may be other uncertainty components to be considered. The presentation will illustrate the metrological concepts involved (comparability, traceability, etc.) and give a generic scheme for the uncertainty evaluation.
Energy Technology Data Exchange (ETDEWEB)
Davis, C B
1987-08-01
The uncertainties of calculations of loss-of-feedwater transients at Davis-Besse Unit 1 were determined to address concerns of the US Nuclear Regulatory Commission relative to the effectiveness of feed and bleed cooling. Davis-Besse Unit 1 is a pressurized water reactor of the raised-loop Babcock and Wilcox design. A detailed, quality-assured RELAP5/MOD2 model of Davis-Besse was developed at the Idaho National Engineering Laboratory. The model was used to perform an analysis of the loss-of-feedwater transient that occurred at Davis-Besse on June 9, 1985. A loss-of-feedwater transient followed by feed and bleed cooling was also calculated. The evaluation of uncertainty was based on the comparisons of calculations and data, comparisons of different calculations of the same transient, sensitivity calculations, and the propagation of the estimated uncertainty in initial and boundary conditions to the final calculated results.
Uncertainty quantification theory, implementation, and applications
Smith, Ralph C
2014-01-01
The field of uncertainty quantification is evolving rapidly because of increasing emphasis on models that require quantified uncertainties for large-scale applications, novel algorithm development, and new computational architectures that facilitate implementation of these algorithms. Uncertainty Quantification: Theory, Implementation, and Applications provides readers with the basic concepts, theory, and algorithms necessary to quantify input and response uncertainties for simulation models arising in a broad range of disciplines. The book begins with a detailed discussion of applications where uncertainty quantification is critical for both scientific understanding and policy. It then covers concepts from probability and statistics, parameter selection techniques, frequentist and Bayesian model calibration, propagation of uncertainties, quantification of model discrepancy, surrogate model construction, and local and global sensitivity analysis. The author maintains a complementary web page where readers ca...
Uncertainty Quantification with Applications to Engineering Problems
DEFF Research Database (Denmark)
Bigoni, Daniele
The systematic quantification of the uncertainties affecting dynamical systems and the characterization of the uncertainty of their outcomes is critical for engineering design and analysis, where risks must be reduced as much as possible. Uncertainties stem naturally from our limitations in measurements, predictions and manufacturing, and we can say that any dynamical system used in engineering is subject to some of these uncertainties. The first part of this work presents an overview of the mathematical framework used in Uncertainty Quantification (UQ) analysis and introduces the spectral tensor-train (STT) decomposition, a novel high-order method for the effective propagation of uncertainties which aims at providing an exponential convergence rate while tackling the curse of dimensionality. The curse of dimensionality is a problem that afflicts many methods based on meta-models, for which...
Quantifying Mixed Uncertainties in Cyber Attacker Payoffs
Energy Technology Data Exchange (ETDEWEB)
Chatterjee, Samrat; Halappanavar, Mahantesh; Tipireddy, Ramakrishna; Oster, Matthew R.; Saha, Sudip
2015-04-15
Representation and propagation of uncertainty in cyber attacker payoffs is a key aspect of security games. Past research has primarily focused on representing the defender’s beliefs about attacker payoffs as point utility estimates. More recently, within the physical security domain, attacker payoff uncertainties have been represented as Uniform and Gaussian probability distributions, and intervals. Within cyber-settings, continuous probability distributions may still be appropriate for addressing statistical (aleatory) uncertainties where the defender may assume that the attacker’s payoffs differ over time. However, systematic (epistemic) uncertainties may exist, where the defender may not have sufficient knowledge or there is insufficient information about the attacker’s payoff generation mechanism. Such epistemic uncertainties are more suitably represented as probability boxes with intervals. In this study, we explore the mathematical treatment of such mixed payoff uncertainties.
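A minimal illustration of mixing the two uncertainty types: the aleatory part is a sampled payoff distribution, while the epistemic part is an offset known only as an interval, yielding interval bounds on the expected payoff (a degenerate probability box). All numbers below are hypothetical and the sketch is far simpler than the treatment in the study:

```python
import random

def expected_payoff_bounds(base_samples, offset_interval):
    """Bound the expected payoff when an epistemic offset is known only
    as an interval [lo, hi]: the mean shifts by at least lo, at most hi."""
    lo, hi = offset_interval
    mean_base = sum(base_samples) / len(base_samples)
    return (mean_base + lo, mean_base + hi)

random.seed(0)
# Aleatory part: sampled attacker payoffs; epistemic part: an unknown bias
# bounded in [-1.5, 0.5] (invented interval).
samples = [random.gauss(10.0, 2.0) for _ in range(10_000)]
bounds = expected_payoff_bounds(samples, (-1.5, 0.5))
```

The width of the resulting interval reflects the epistemic component alone; only more information about the payoff generation mechanism, not more samples, can narrow it.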
Propagation Mechanism Analysis Before the Break Point Inside Tunnels
Guan, Ke; Zhong, Zhangdui; Bo, Ai; Briso Rodriguez, Cesar
2011-01-01
There is no unanimous consensus yet on the propagation mechanism before the break point inside tunnels. Some deem that the propagation mechanism follows the free-space model, while others argue that it should be described by the multimode waveguide model. This paper first analyzes the propagation loss under the two mechanisms. Then, by jointly using propagation theory and three-dimensional solid geometry, a generic analytical model for the boundary between the free-space mechanism and the...
Using Nuclear Theory, Data and Uncertainties in Monte Carlo Transport Applications
Energy Technology Data Exchange (ETDEWEB)
Rising, Michael Evan [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2015-11-03
These are slides for a presentation on using nuclear theory, data and uncertainties in Monte Carlo transport applications. The following topics are covered: nuclear data (experimental data versus theoretical models, data evaluation and uncertainty quantification), fission multiplicity models (fixed source applications, criticality calculations), uncertainties and their impact (integral quantities, sensitivity analysis, uncertainty propagation).
Energy Technology Data Exchange (ETDEWEB)
NONE
1996-06-01
Literature on uncertainty assessment for risk-analytical purposes has been compiled. Databases Inspec, Compendex, Energy Science and Technology, Chemical Abstracts, Chemical Safety Newsbase, HSEline and MathSci were searched. Roughly 80 references have been selected from these databases and divided according to the following uncertainty classes: 1. Statistical uncertainty; 2. Data uncertainty; 3. Presumption uncertainty; 4. Uncertainty of consequence models; 5. Cognitive uncertainty. (EG)
Experimental uncertainty estimation and statistics for data having interval uncertainty.
Energy Technology Data Exchange (ETDEWEB)
Kreinovich, Vladik (Applied Biomathematics, Setauket, New York); Oberkampf, William Louis (Applied Biomathematics, Setauket, New York); Ginzburg, Lev (Applied Biomathematics, Setauket, New York); Ferson, Scott (Applied Biomathematics, Setauket, New York); Hajagos, Janos (Applied Biomathematics, Setauket, New York)
2007-05-01
This report addresses the characterization of measurements that include epistemic uncertainties in the form of intervals. It reviews the application of basic descriptive statistics to data sets which contain intervals rather than exclusively point estimates. It describes algorithms to compute various means, the median and other percentiles, variance, interquartile range, moments, confidence limits, and other important statistics and summarizes the computability of these statistics as a function of sample size and characteristics of the intervals in the data (degree of overlap, size and regularity of widths, etc.). It also reviews the prospects for analyzing such data sets with the methods of inferential statistics such as outlier detection and regressions. The report explores the tradeoff between measurement precision and sample size in statistical results that are sensitive to both. It also argues that an approach based on interval statistics could be a reasonable alternative to current standard methods for evaluating, expressing and propagating measurement uncertainties.
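As a small illustration of descriptive statistics on interval data: the sample mean of a set of intervals is itself an interval, obtained from the endpoint means. This is only a sketch of the easiest case, not the report's algorithms; other statistics such as the variance of interval data are substantially harder to bound:

```python
def interval_mean(intervals):
    """Mean of an interval data set: the set of means attainable when each
    measurement may lie anywhere within its interval is itself an interval."""
    lows = [lo for lo, hi in intervals]
    highs = [hi for lo, hi in intervals]
    n = len(intervals)
    return (sum(lows) / n, sum(highs) / n)

# Hypothetical measurements reported as intervals rather than points:
data = [(1.0, 1.2), (0.9, 1.5), (1.1, 1.3)]
m = interval_mean(data)
```

Note how the width of the mean interval depends on the widths of the inputs but not on their degree of overlap, unlike the variance and percentile statistics the report analyzes.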
Estimating uncertainties in complex joint inverse problems
Afonso, Juan Carlos
2016-04-01
In addition to uncertainties related to the forward and statistical models, I will also address other uncertainties associated with data and uncertainty propagation.
Milton, Graeme W
2016-01-01
The theory of inhomogeneous analytic materials is developed. These are materials where the coefficients entering the equations involve analytic functions. Three types of analytic materials are identified. The first two types involve an integer $p$. If $p$ takes its maximum value then we have a complete analytic material. Otherwise it is an incomplete analytic material of rank $p$. For two-dimensional materials further progress can be made in the identification of analytic materials by using the well-known fact that a $90^\circ$ rotation applied to a divergence-free field in a simply connected domain yields a curl-free field, and this can then be expressed as the gradient of a potential. Other exact results for the fields in inhomogeneous media are reviewed. Also reviewed is the subject of metamaterials, as these materials provide a way of realizing desirable coefficients in the equations.
Bruce, William J; Maxwell, E A; Sneddon, I N
1963-01-01
Analytic Trigonometry details the fundamental concepts and underlying principle of analytic geometry. The title aims to address the shortcomings in the instruction of trigonometry by considering basic theories of learning and pedagogy. The text first covers the essential elements from elementary algebra, plane geometry, and analytic geometry. Next, the selection tackles the trigonometric functions of angles in general, basic identities, and solutions of equations. The text also deals with the trigonometric functions of real numbers. The fifth chapter details the inverse trigonometric functions
Premixed flame propagation in vertical tubes
Kazakov, Kirill A
2015-01-01
Analytical treatment of premixed flame propagation in vertical tubes with smooth walls is given. Using the on-shell flame description, equations describing quasi-steady flame with a small but finite front thickness are obtained and solved numerically. It is found that near the limits of inflammability, solutions describing upward flame propagation come in pairs having close propagation speeds, and that the effect of gravity is to reverse the burnt gas velocity profile generated by the flame. On the basis of these results, a theory of partial flame propagation driven by the gravitational field is developed. A complete explanation is given of the intricate observed behavior of limit flames, including dependence of the inflammability range on the size of the combustion domain, the large distances of partial flame propagation, and the progression of flame extinction. The role of the finite front-thickness effects is discussed in detail. Also, various mechanisms governing flame acceleration in smooth tubes are ide...
Intense electron beam propagation into vacuum
International Nuclear Information System (INIS)
The authors have performed experimental and theoretical studies of the propagation of an intense electron beam (1 MeV, 27 kA, 30 ns) into a long evacuated drift tube. In one case the beam propagates because an applied axial magnetic field immerses the entire system. In the second case a localized source of ions for charge neutralization enables the beam to propagate. In the case of a magnetically confined beam, experimental results for current propagation as a function of uniform applied magnetic field (0-1.2 Tesla) are presented for various drift tube diameters, cathode geometries, and anode aperture sizes. An analytic model of laminar beam flow is presented which predicts the space-charge-limited current of a solid intense relativistic electron beam (IREB) propagating in a grounded drift tube as a function of tube and diode sizes and applied magnetic field. Comparisons between the experimental and theoretical results are discussed
NLO error propagation exercise: statistical results
International Nuclear Information System (INIS)
Error propagation is the extrapolation and cumulation of uncertainty (variance) over total amounts of special nuclear material, for example, uranium or 235U, that are present in a defined location at a given time. The uncertainty results from the inevitable inexactness of individual measurements of weight, uranium concentration, 235U enrichment, etc. The extrapolated and cumulated uncertainty leads directly to quantified limits of error on inventory differences (LEIDs) for such material. The NLO error propagation exercise was planned as a field demonstration of the utilization of statistical error propagation methodology at the Feed Materials Production Center in Fernald, Ohio from April 1 to July 1, 1983 in a single material balance area formed specially for the exercise. Major elements of the error propagation methodology were: variance approximation by Taylor series expansion; variance cumulation by uncorrelated primary error sources as suggested by Jaech; random effects ANOVA model estimation of variance effects (systematic error); provision for inclusion of process variance in addition to measurement variance; and exclusion of static material. The methodology was applied to material balance area transactions from the indicated time period through a FORTRAN computer code developed specifically for this purpose on the NLO HP-3000 computer. This paper contains a complete description of the error propagation methodology and a full summary of the numerical results of applying the methodology in the field demonstration. The error propagation LEIDs did encompass the actual uranium and 235U inventory differences. Further, one can see that error propagation actually provides guidance for reducing inventory differences and LEIDs in future time periods
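The Taylor-series step can be sketched for a product-form mass balance term (net weight x uranium concentration x 235U enrichment): to first order, the squared relative standard deviations of independent factors add. The numeric values below are hypothetical, not taken from the exercise:

```python
import math

def product_relative_variance(rel_sds):
    """First-order Taylor-series approximation: for a product of independent
    factors, the squared relative standard deviations add."""
    return sum(r * r for r in rel_sds)

# Hypothetical item: net weight x uranium concentration x 235U enrichment,
# with relative standard deviations for each measured factor.
rel_sd = [0.001, 0.005, 0.002]
rel_var = product_relative_variance(rel_sd)

# A two-sigma limit of error (relative), in the spirit of an LEID bound:
limit_of_error = 2.0 * math.sqrt(rel_var)
```

Cumulation over many transactions then proceeds by summing variances of uncorrelated error sources, with systematic effects handled separately (e.g. via the ANOVA estimates mentioned above).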
Methodology for characterizing modeling and discretization uncertainties in computational simulation
Energy Technology Data Exchange (ETDEWEB)
ALVIN,KENNETH F.; OBERKAMPF,WILLIAM L.; RUTHERFORD,BRIAN M.; DIEGERT,KATHLEEN V.
2000-03-01
This research effort focuses on methodology for quantifying the effects of model uncertainty and discretization error on computational modeling and simulation. The work is directed towards developing methodologies which treat model form assumptions within an overall framework for uncertainty quantification, for the purpose of developing estimates of total prediction uncertainty. The present effort consists of work in three areas: framework development for sources of uncertainty and error in the modeling and simulation process which impact model structure; model uncertainty assessment and propagation through Bayesian inference methods; and discretization error estimation within the context of non-deterministic analysis.
Angular Operators Violating the Heisenberg Uncertainty Principle
Pereira, Tiago
2008-01-01
The description of a quantum system in terms of angle variables may violate the Heisenberg uncertainty principle. The familiar case is the azimuthal angle $\phi$ and its canonical momentum $L_z$. Although this problem was foreseen almost a century ago, up to the present day there are no criteria to precisely characterize the violation. In this paper, we present a theorem which provides necessary and sufficient conditions for the violation of the Heisenberg uncertainty principle. We illustrate our results with analytical examples.
Pulse Wave Propagation in the Arterial Tree
van de Vosse, Frans N.; Stergiopulos, Nikos
2011-01-01
The beating heart creates blood pressure and flow pulsations that propagate as waves through the arterial tree that are reflected at transitions in arterial geometry and elasticity. Waves carry information about the matter in which they propagate. Therefore, modeling of arterial wave propagation extends our knowledge about the functioning of the cardiovascular system and provides a means to diagnose disorders and predict the outcome of medical interventions. In this review we focus on the physical and mathematical modeling of pulse wave propagation, based on general fluid dynamical principles. In addition we present potential applications in cardiovascular research and clinical practice. Models of short- and long-term adaptation of the arterial system and methods that deal with uncertainties in personalized model parameters and boundary conditions are briefly discussed, as they are believed to be major topics for further study and will boost the significance of arterial pulse wave modeling even more.
A surrogate-based uncertainty quantification with quantifiable errors
Energy Technology Data Exchange (ETDEWEB)
Bang, Y.; Abdel-Khalik, H. S. [North Carolina State Univ., Raleigh, NC 27695 (United States)
2012-07-01
Surrogate models are often employed to reduce the computational cost required to complete uncertainty quantification, where one is interested in propagating input parameters uncertainties throughout a complex engineering model to estimate responses uncertainties. An improved surrogate construction approach is introduced here which places a premium on reducing the associated computational cost. Unlike existing methods where the surrogate is constructed first, then employed to propagate uncertainties, the new approach combines both sensitivity and uncertainty information to render further reduction in the computational cost. Mathematically, the reduction is described by a range finding algorithm that identifies a subspace in the parameters space, whereby parameters uncertainties orthogonal to the subspace contribute negligible amount to the propagated uncertainties. Moreover, the error resulting from the reduction can be upper-bounded. The new approach is demonstrated using a realistic nuclear assembly model and compared to existing methods in terms of computational cost and accuracy of uncertainties. Although we believe the algorithm is general, it will be applied here for linear-based surrogates and Gaussian parameters uncertainties. The generalization to nonlinear models will be detailed in a separate article. (authors)
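The range-finding idea can be sketched in the simplest linear setting: for a rank-deficient sensitivity map, parameter perturbations orthogonal to the identified subspace propagate essentially nothing to the responses. This toy rank-1 example (all values invented) illustrates only the principle, not the authors' algorithm or its error bound:

```python
import math

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

# Hypothetical rank-1 linear surrogate: y = a * (b . x). Its influential
# subspace in parameter space is spanned by b alone.
b = [3.0, 1.0, -2.0, 0.5]
norm_b = math.sqrt(dot(b, b))
b_hat = [bi / norm_b for bi in b]

def response(x, a=2.0):
    return a * dot(b, x)

# Take a parameter perturbation and remove its component along b_hat,
# leaving only the part orthogonal to the identified subspace:
x = [1.0, -1.0, 0.5, 2.0]
coeff = dot(b_hat, x)
x_orth = [xi - coeff * bh for xi, bh in zip(x, b_hat)]

propagated = response(x_orth)  # contributes (numerically) nothing
```

In practice the subspace is found from many responses at once (e.g. via a singular value decomposition of the sensitivity matrix), and the truncation error is what the authors upper-bound.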
Analytical design of soliton molecules in fibers
Moubissi, A.-B.; Nse Biyoghe, S.; Mback, C. B. L.; Ekogo, T. B.; Ben-Bolie, G. H.; Kofane, T. C.; Tchofo Dinda, P.
2016-09-01
We present an analytical method for designing fiber systems for a highly stable propagation of soliton molecules. This analytical design uses the variational equations of the soliton molecule to determine the parameters of the most suitable fiber system for any desired soliton, thus reducing dramatically the cost of the whole procedure of design, for both the appropriate fiber system and the desired soliton molecule.
High-level waste qualification: Managing uncertainty
International Nuclear Information System (INIS)
A vitrification facility is being developed by the U.S. Department of Energy (DOE) at the West Valley Demonstration Plant (WVDP) near Buffalo, New York, where approximately 300 canisters of high-level nuclear waste glass will be produced. To assure that the produced waste form is acceptable, uncertainty must be managed. Statistical issues arise due to sampling, waste variations, processing uncertainties, and analytical variations. This paper presents elements of a strategy to characterize and manage the uncertainties associated with demonstrating that an acceptable waste form product is achieved. Specific examples are provided within the context of statistical work performed by Pacific Northwest Laboratory (PNL)
French, N.; L. Gabrielli
2005-01-01
Valuation is often said to be “an art not a science” but this relates to the techniques employed to calculate value not to the underlying concept itself. Valuation is the process of estimating price in the market place. Yet, such an estimation will be affected by uncertainties. Uncertainty in the comparable information available; uncertainty in the current and future market conditions and uncertainty in the specific inputs for the subject property. These input uncertainties will translate int...
Onatski, Alexei; Williams, Noah
2003-01-01
Recently there has been much interest in studying monetary policy under model uncertainty. We develop methods to analyze different sources of uncertainty in one coherent structure useful for policy decisions. We show how to estimate the size of the uncertainty based on time series data, and incorporate this uncertainty in policy optimization. We propose two different approaches to modeling model uncertainty. The first is model error modeling, which imposes additional structure on the errors o...
Institute of Scientific and Technical Information of China (English)
Yue-ping XU; Harriette HOLZHAUER; Martijn J.BOOIJ; Hong-yue SUN
2008-01-01
For river basin management, the reliability of the rating curves mainly depends on the accuracy and time period of the observed discharge and water level data. In the Elbe decision support system (DSS), the rating curves are combined with the HEC-6 model to investigate the effects of river engineering measures on the Elbe River system. In such situations, the uncertainty originating from the HEC-6 model is of significant importance for the reliability of the rating curves and the corresponding DSS results. This paper proposes a two-step approach to analyze the uncertainty in the rating curves and propagate it into the Elbe DSS: an analytic method and Latin Hypercube simulation. Via this approach the uncertainty and sensitivity of model outputs to input parameters are successfully investigated. The results show that the proposed approach is very efficient in investigating the effect of uncertainty and can play an important role in improving decision-making under uncertainty.
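The Latin Hypercube step can be sketched as follows: each input dimension is divided into equal-probability strata and each stratum is sampled exactly once, which covers the input space more evenly than plain random sampling. This is a basic stratified sampler on the unit hypercube, not the Elbe DSS implementation:

```python
import random

def latin_hypercube(n_samples, n_dims, seed=42):
    """Basic Latin Hypercube sampling on [0, 1]^d: each dimension is split
    into n_samples equal strata and each stratum is hit exactly once."""
    rng = random.Random(seed)
    samples = [[0.0] * n_dims for _ in range(n_samples)]
    for d in range(n_dims):
        strata = list(range(n_samples))
        rng.shuffle(strata)  # independent stratum permutation per dimension
        for i, s in enumerate(strata):
            samples[i][d] = (s + rng.random()) / n_samples
    return samples

pts = latin_hypercube(10, 2)
```

Samples on the unit hypercube would then be mapped through the inverse CDFs of the actual input parameter distributions before running the model.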
Federal Laboratory Consortium — NETL’s analytical laboratories in Pittsburgh, PA, and Albany, OR, give researchers access to the equipment they need to thoroughly study the properties of materials...
Modelling delay propagation within an airport network
Pyrgiotis, N.; Malone, K.M.; Odoni, A.
2013-01-01
We describe an analytical queuing and network decomposition model developed to study the complex phenomenon of the propagation of delays within a large network of major airports. The Approximate Network Delays (AND) model computes the delays due to local congestion at individual airports and capture
Radio Channel Modelling Using Stochastic Propagation Graphs
DEFF Research Database (Denmark)
Pedersen, Troels; Fleury, Bernard Henri
2007-01-01
We develop a closed-form analytical expression for the transfer matrix of the propagation graph. It is shown by simulation that the impulse response and the delay-power spectrum of the graph exhibit exponentially decaying power as a result of the recursive scattering structure of the graph. The impulse...
Nijhof, Marten Jozef Johannes
2010-01-01
In this work, the accuracy, efficiency and range of applicability of various (approximate) models for viscothermal wave propagation are investigated. Models for viscothermal wave propagation describe the wave behavior of fluids including viscous and thermal effects. Cases where viscothermal effects a
Uncertainties in radiation flow experiments
Fryer, C. L.; Dodd, E.; Even, W.; Fontes, C. J.; Greeff, C.; Hungerford, A.; Kline, J.; Mussack, K.; Tregillis, I.; Workman, J. B.; Benstead, J.; Guymer, T. M.; Moore, A. S.; Morton, J.
2016-03-01
Although the fundamental physics behind radiation and matter flow is understood, many uncertainties remain in the exact behavior of macroscopic fluids in systems ranging from pure turbulence to coupled radiation hydrodynamics. Laboratory experiments play an important role in studying this physics to allow scientists to test their macroscopic models of these phenomena. However, because the fundamental physics is well understood, precision experiments are required to validate existing codes already tested by a suite of analytic, manufactured and convergence solutions. To conduct such high-precision experiments requires a detailed understanding of the experimental errors and the nature of their uncertainties on the observed diagnostics. In this paper, we study the uncertainties plaguing many radiation-flow experiments, focusing on those using a hohlraum (dynamic or laser-driven) source and a foam-density target. This study focuses on the effect these uncertainties have on the breakout time of the radiation front. We find that, even if the errors in the initial conditions and numerical methods are Gaussian, the errors in the breakout time are asymmetric, leading to a systematic bias in the observed data. We must understand these systematics to produce the high-precision experimental results needed to study this physics.
On the diffusive propagation of warps in thin accretion discs
LODATO G; Price, D.
2010-01-01
In this paper we revisit the issue of the propagation of warps in thin and viscous accretion discs. In this regime warps are known to propagate diffusively, with a diffusion coefficient approximately inversely proportional to the disc viscosity. Previous numerical investigations of this problem (Lodato & Pringle 2007) did not find good agreement between the numerical results and the predictions of the analytic theories of warp propagation, both in the linear and in the non-linear case. Here,...
On the worst case uncertainty and its evaluation
Fabbiano, L.; Giaquinto, N.; Savino, M.; Vacca, G.
2016-02-01
The paper is a review of the worst case uncertainty (WCU) concept, neglected in the Guide to the Expression of Uncertainty in Measurement (GUM), but necessary for a correct uncertainty assessment in a number of practical cases involving distributions with compact support. First, it is highlighted that knowledge of the WCU is necessary to choose a sensible coverage factor, associated with a sensible coverage probability: the Maximum Acceptable Coverage Factor (MACF) is introduced as a convenient index to guide this choice. Second, propagation rules for the worst-case uncertainty are provided in matrix and scalar form. It is highlighted that when WCU propagation cannot be computed, the Monte Carlo approach is the only way to obtain a correct expanded uncertainty assessment, in contrast to what can be inferred from the GUM. Third, examples of applications of the formulae to ordinary instruments and measurements are given. An example taken from the GUM is also discussed, underlining some inconsistencies in it.
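The scalar propagation rule can be sketched as follows: the WCU of a linearized output is the sum of absolute sensitivities times interval half-widths (all inputs simultaneously at their worst extremes), while the GUM combined standard uncertainty is a root sum of squares; their ratio bounds the coverage factor. The sensitivities and half-widths below are invented; for uniformly distributed inputs the standard deviation is the half-width divided by sqrt(3):

```python
import math

def worst_case_uncertainty(sensitivities, half_widths):
    """Worst-case uncertainty of a linearized y = f(x): every input sits
    at the extreme of its interval that pushes y the same way."""
    return sum(abs(c) * a for c, a in zip(sensitivities, half_widths))

def rss_uncertainty(sensitivities, std_uncertainties):
    """GUM-style combined standard uncertainty for uncorrelated inputs."""
    return math.sqrt(sum((c * u) ** 2
                         for c, u in zip(sensitivities, std_uncertainties)))

c = [1.0, -2.0, 0.5]                 # sensitivity coefficients (invented)
a = [0.1, 0.05, 0.2]                 # interval half-widths (invented)
u = [ai / math.sqrt(3) for ai in a]  # std. dev. of uniform inputs

wcu = worst_case_uncertainty(c, a)
u_c = rss_uncertainty(c, u)
macf = wcu / u_c  # coverage factors above this cannot be exceeded physically
```

An expanded uncertainty quoted with a coverage factor larger than this ratio would exceed the physically attainable worst case, which is the inconsistency the MACF guards against.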
Asymptotic analysis of outwardly propagating spherical flames
Institute of Scientific and Technical Information of China (English)
Yun-Chao Wu; Zheng Chen
2012-01-01
Asymptotic analysis is conducted for outwardly propagating spherical flames with large activation energy. The spherical flame structure consists of the preheat zone, reaction zone, and equilibrium zone. Analytical solutions are separately obtained in these three zones and then asymptotically matched. In the asymptotic analysis, we derive a correlation describing the spherical flame temperature and propagation speed changing with the flame radius. This correlation is compared with previous results derived in the limit of infinite activation energy. Based on this correlation, the properties of spherical flame propagation are investigated and the effects of Lewis number on spherical flame propagation speed and extinction stretch rate are assessed. Moreover, the accuracy and performance of different models used in the spherical flame method are examined. It is found that in order to get accurate laminar flame speed and Markstein length, non-linear models should be used.
Ferroukhi, H.; Leray, O.; Hursin, M.; Vasiliev, A.; Perret, G.; Pautz, A.
2014-04-01
At the Paul Scherrer Institut (PSI), a methodology for nuclear data uncertainty propagation in CASMO-5M (C5M) assembly calculations is under development. This paper presents a preliminary application of this methodology to C5M decay heat calculations. Applying a stochastic sampling method, nuclear decay data uncertainties are first propagated for the cooling phase only. Thereafter, the uncertainty propagation is enlarged to gradually account for cross-section as well as fission yield uncertainties during the depletion phase. On that basis, assembly heat load uncertainties as well as total uncertainty for the entire pool are quantified for cooling times up to one year. The relative contributions from the various types of nuclear data uncertainties are in this context also estimated.
Uncertainty estimation of ultrasonic thickness measurement
International Nuclear Information System (INIS)
The most important factor that should be taken into consideration when selecting an ultrasonic thickness measurement technique is its reliability. Only when the uncertainty of a measurement result is known can it be judged whether the result is adequate for the intended purpose. The objective of this study is to model the ultrasonic thickness measurement function, to identify the most contributing input uncertainty components, and to estimate the uncertainty of the ultrasonic thickness measurement results. We assumed that five error sources significantly contribute to the final error: calibration velocity, transit time, zero offset, measurement repeatability, and resolution. By applying the propagation of uncertainty law to the model function, a combined uncertainty of the ultrasonic thickness measurement was obtained. In this study the modeling function of ultrasonic thickness measurement was derived. By using this model, the estimation of the uncertainty of the final output result was found to be reliable. It was also found that the most contributing input uncertainty components are calibration velocity, transit time linearity and zero offset. (author)
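A sketch of such a model function and the propagation-of-uncertainty law, assuming the standard pulse-echo relation d = v(t - t0)/2 with uncorrelated inputs; the velocity and timing numbers below are hypothetical, roughly representative of steel, and the zero offset is folded into the timing uncertainty for brevity:

```python
import math

def thickness(v, t, zero_offset=0.0):
    """Pulse-echo thickness model function: d = v * (t - t0) / 2."""
    return v * (t - zero_offset) / 2.0

def thickness_uncertainty(v, t, u_v, u_t):
    """Propagation-of-uncertainty law applied to d = v*t/2 with
    uncorrelated inputs: u_d^2 = (dd/dv * u_v)^2 + (dd/dt * u_t)^2."""
    dd_dv = t / 2.0   # sensitivity to calibration velocity
    dd_dt = v / 2.0   # sensitivity to transit time
    return math.sqrt((dd_dv * u_v) ** 2 + (dd_dt * u_t) ** 2)

# Hypothetical steel measurement: v = 5900 m/s, t = 3.39 us -> d ~ 10 mm
d = thickness(5900.0, 3.39e-6)
u_d = thickness_uncertainty(5900.0, 3.39e-6, u_v=30.0, u_t=5e-9)
```

Comparing the two squared terms immediately shows which input dominates the combined uncertainty, which is how the contributing components ranked above can be identified.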
Dealing with uncertainties - communication between disciplines
Overbeek, Bernadet; Bessembinder, Janette
2013-04-01
Climate adaptation research inevitably involves uncertainty issues - whether people are building a model, using climate scenarios, or evaluating policy processes. However, do they know which uncertainties are relevant in their field of work? And which uncertainties exist in the data from other disciplines that they use (e.g. climate data, land use, hydrological data), and how do they propagate? From experiences in Dutch research programmes on climate change we know that disciplines often deal differently with uncertainties. This complicates communication between disciplines and also with the various users of data and information on climate change and its impacts. In October 2012 an autumn school was organized within the Knowledge for Climate Research Programme in the Netherlands, with the central theme of dealing with and communicating uncertainties in climate and socio-economic scenarios, in impact models, and in the decision-making process. The lectures and discussions contributed to the development of a common frame of reference (CFR) for dealing with uncertainties. The common frame contains the following: 1. Common definitions (typology of uncertainties, robustness); 2. Common understanding (why do we consider it important to take uncertainties into account) and aspects on which we disagree (how far should scientists go in communication?); 3. Documents that are considered important by all participants; 4. Do's and don'ts in dealing with and communicating uncertainties (e.g. know your audience, check how your figures are interpreted); 5. Recommendations for further actions (e.g. the need for a platform to exchange experiences). The CFR is meant to help researchers in climate adaptation to work together and communicate together on climate change (better interaction between disciplines). It is also meant to help researchers explain to others (e.g. decision makers) why and when researchers agree and when and why they disagree
Reducing uncertainties in a wind-tunnel experiment using Bayesian updating
Boon, D.J.; Dwight, R.P.; Sterenborg, J.J.H.M.; Bijl, H.
2012-01-01
We perform a fully stochastic analysis of an experiment in aerodynamics. Given estimated uncertainties on the principal input parameters of the experiment, including uncertainties on the shape of the model, we apply uncertainty propagation methods to a suitable CFD model of the experimental setup. T
Spain, Barry; Ulam, S; Stark, M
1960-01-01
Analytical Quadrics focuses on the analytical geometry of three dimensions. The book first discusses the theory of the plane, sphere, cone, cylinder, straight line, and central quadrics in their standard forms. The idea of the plane at infinity is introduced through the homogeneous Cartesian coordinates and applied to the nature of the intersection of three planes and to the circular sections of quadrics. The text also focuses on the paraboloid, including polar properties, center of a section, axes of plane section, and generators of the hyperbolic paraboloid. The book also touches on homogeneous coordi
Groves, Curtis Edward
2014-01-01
Spacecraft thermal protection systems are at risk of being damaged due to airflow produced from Environmental Control Systems. There are inherent uncertainties and errors associated with using Computational Fluid Dynamics to predict the airflow field around a spacecraft from the Environmental Control System. This paper describes an approach to quantify the uncertainty in using Computational Fluid Dynamics to predict airflow speeds around an encapsulated spacecraft without the use of test data. Quantifying the uncertainty in analytical predictions is imperative to the success of any simulation-based product. The method could provide an alternative to the traditional "validation by test only" mentality. This method could be extended to other disciplines and has potential to provide uncertainty for any numerical simulation, thus lowering the cost of performing these verifications while increasing the confidence in those predictions. Spacecraft requirements can include a maximum airflow speed to protect delicate instruments during ground processing. Computational Fluid Dynamics can be used to verify these requirements; however, the model must be validated by test data. This research includes the following three objectives and methods. Objective one is to develop, model, and perform a Computational Fluid Dynamics analysis of three (3) generic, non-proprietary, environmental control systems and spacecraft configurations. Several commercially available and open source solvers have the capability to model the turbulent, highly three-dimensional, incompressible flow regime. The proposed method uses FLUENT, STARCCM+, and OPENFOAM. Objective two is to perform an uncertainty analysis of the Computational Fluid Dynamics model using the methodology found in "Comprehensive Approach to Verification and Validation of Computational Fluid Dynamics Simulations". This method requires three separate grids and solutions, which quantify the error bars around Computational Fluid Dynamics predictions
Pappas, Marjorie L.
1995-01-01
Discusses analytical searching, a process that enables searchers of electronic resources to develop a planned strategy by combining words or phrases with Boolean operators. Defines simple and complex searching, and describes search strategies developed with Boolean logic and truncation. Provides guidelines for teaching students analytical…
On reasoning in networks with qualitative uncertainty
Parsons, Simon; Mamdani, E. H.
2013-01-01
In this paper some initial work towards a new approach to qualitative reasoning under uncertainty is presented. This method is not only applicable to qualitative probabilistic reasoning, as is the case with other methods, but also allows the qualitative propagation within networks of values based upon possibility theory and Dempster-Shafer evidence theory. The method is applied to two simple networks from which a large class of directed graphs may be constructed. The results of this analysis ...
Pore Velocity Estimation Uncertainties
Devary, J. L.; Doctor, P. G.
1982-08-01
Geostatistical data analysis techniques were used to stochastically model the spatial variability of groundwater pore velocity in a potential waste repository site. Kriging algorithms were applied to Hanford Reservation data to estimate hydraulic conductivities, hydraulic head gradients, and pore velocities. A first-order Taylor series expansion for pore velocity was used to statistically combine hydraulic conductivity, hydraulic head gradient, and effective porosity surfaces and uncertainties to characterize the pore velocity uncertainty. Use of these techniques permits the estimation of pore velocity uncertainties when pore velocity measurements do not exist. Large pore velocity estimation uncertainties were found to be located in the region where the hydraulic head gradient relative uncertainty was maximal.
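The first-order Taylor combination described in this record can be sketched for the Darcy pore velocity v = K·i/n (hydraulic conductivity times head gradient over effective porosity). The numeric values below are purely illustrative, not Hanford Reservation data.

```python
import numpy as np

# First-order Taylor (delta-method) propagation for Darcy pore velocity
# v = K * i / n, assuming independent input uncertainties. All input
# values and standard deviations here are hypothetical.

def pore_velocity_uncertainty(K, i, n, sK, si, sn):
    v = K * i / n
    # Partial derivatives of v with respect to each input
    dK, di, dn = i / n, K / n, -K * i / n**2
    # Independent inputs: variances combine via squared sensitivities
    sv = np.sqrt((dK * sK)**2 + (di * si)**2 + (dn * sn)**2)
    return v, sv

v, sv = pore_velocity_uncertainty(K=1e-4, i=0.01, n=0.2,
                                  sK=3e-5, si=0.002, sn=0.03)
print(f"v = {v:.3e} +/- {sv:.3e} m/s")
```

The same squared-sensitivity pattern underlies the analytical LCA propagation discussed elsewhere in this collection; correlated inputs would additionally require the covariance cross-terms.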
Constraint Propagation as Information Maximization
Abdallah, A Nait
2012-01-01
Dana Scott used the partial order among partial functions for his mathematical model of recursively defined functions. He interpreted the partial order as one of information content. In this paper we elaborate on Scott's suggestion of regarding computation as a process of information maximization by applying it to the solution of constraint satisfaction problems. Here the method of constraint propagation can be interpreted as decreasing uncertainty about the solution -- that is, as gain in information about the solution. As illustrative example we choose numerical constraint satisfaction problems to be solved by interval constraints. To facilitate this approach to constraint solving we formulate constraint satisfaction problems as formulas in predicate logic. This necessitates extending the usual semantics for predicate logic so that meaning is assigned not only to sentences but also to formulas with free variables.
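The view of constraint propagation as monotone information gain can be sketched with a minimal interval-narrowing example. The single sum constraint and the starting domains are hand-picked for illustration.

```python
# Minimal interval-constraint propagation sketch: each narrowing step
# shrinks the domains (a gain in information about the solution) and never
# discards a feasible value. The constraint enforced is x + y = z.

def narrow_sum(x, y, z):
    """Enforce x + y = z by hull consistency on interval domains (lo, hi)."""
    zl, zh = max(z[0], x[0] + y[0]), min(z[1], x[1] + y[1])
    xl, xh = max(x[0], zl - y[1]), min(x[1], zh - y[0])
    yl, yh = max(y[0], zl - x[1]), min(y[1], zh - x[0])
    return (xl, xh), (yl, yh), (zl, zh)

x, y, z = (0.0, 10.0), (2.0, 3.0), (5.0, 6.0)
# Iterate to a fixed point: repeated propagation only adds information.
for _ in range(5):
    x, y, z = narrow_sum(x, y, z)
print(x, y, z)  # x narrows from (0, 10) to (2.0, 4.0)
```

The fixed point is the information maximum reachable by this propagator alone; stronger inference (e.g. splitting) would be needed to narrow further.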
Uncertainty of Doppler reactivity worth due to uncertainties of JENDL-3.2 resonance parameters
Energy Technology Data Exchange (ETDEWEB)
Zukeran, Atsushi [Hitachi Ltd., Hitachi, Ibaraki (Japan). Power and Industrial System R and D Div.; Hanaki, Hiroshi; Nakagawa, Tuneo; Shibata, Keiichi; Ishikawa, Makoto
1998-03-01
An analytical formula for the Resonance Self-shielding Factor (f-factor) is derived from the resonance integral (J-function) based on the NR approximation, and an analytical expression for the Doppler reactivity worth (ρ) is also obtained by using the result. Uncertainties of the f-factor and the Doppler reactivity worth are evaluated on the basis of sensitivity coefficients to the resonance parameters. The uncertainty of the Doppler reactivity worth at 487°K is about 4% for the PNC Large Fast Breeder Reactor. (author)
Towards "Propagation = Logic + Control"
Brand, Sebastian; Yap, Roland H. C.
2006-01-01
Constraint propagation algorithms implement logical inference. For efficiency, it is essential to control whether and in what order basic inference steps are taken. We provide a high-level framework that clearly differentiates between information needed for controlling propagation versus that needed for the logical semantics of complex constraints composed from primitive ones. We argue for the appropriateness of our controlled propagation framework by showing that it c...
Uncertainty and Cognitive Control
Directory of Open Access Journals (Sweden)
Faisal Mushtaq
2011-10-01
A growing trend of neuroimaging, behavioural and computational research has investigated the topic of outcome uncertainty in decision-making. Although evidence to date indicates that humans are very effective in learning to adapt to uncertain situations, the nature of the specific cognitive processes involved in the adaptation to uncertainty is still a matter of debate. In this article, we reviewed evidence suggesting that cognitive control processes are at the heart of uncertainty in decision-making contexts. Available evidence suggests that: (1) there is a strong conceptual overlap between the constructs of uncertainty and cognitive control; (2) there is a remarkable overlap between the neural networks associated with uncertainty and the brain networks subserving cognitive control; (3) the perception and estimation of uncertainty might play a key role in monitoring processes and the evaluation of the need for control; (4) potential interactions between uncertainty and cognitive control might play a significant role in several affective disorders.
Directory of Open Access Journals (Sweden)
M. Hajek
2006-04-01
The propagation of ultra wide band (UWB) signals through walls is analyzed. For these propagation studies, it is necessary to consider not only propagation at a single frequency but in the whole band. The UWB radar output signal is formed by both the transmitter and the antenna. The effects of antenna receiving and transmitting responses for various antenna types (such as small and aperture antennas) are studied in the frequency as well as the time domain. Moreover, UWB radar output signals can be substantially affected by electromagnetic wave propagation through walls and multipath effects.
Hopkins, Philip F; Bundy, Kevin; Khochfar, Sadegh; Bosch, Frank van den; Somerville, Rachel S; Wetzel, Andrew; Keres, Dusan; Hernquist, Lars; Stewart, Kyle; Younger, Joshua D; Genel, Shy; Ma, Chung-Pei
2010-01-01
Different methodologies lead to order-of-magnitude variations in predicted galaxy merger rates. We examine and quantify the dominant uncertainties. Different halo merger rates and subhalo 'destruction' rates agree to within a factor ~2 given proper care in definitions. If however (sub)halo masses are not appropriately defined or are under-resolved, the major merger rate can be dramatically suppressed. The dominant differences in galaxy merger rates owe to baryonic physics. Hydrodynamic simulations without feedback and older models that do not agree with the observed galaxy mass function propagate factor ~5 bias in the resulting merger rates. However, if the model matches the galaxy mass function, properties of central galaxies are sufficiently converged to give small differences in merger rates. But variations in baryonic physics of satellites have the most dramatic effect. The known problem of satellite 'over-quenching' in most semi-analytic models (SAMs), whereby SAM satellites are too efficiently stripped ...
Predictability of Acoustic Propagation in Shallow Water Using Parabolic Approximation Models
Cederberg, Robert John
Accuracy of relative intensity, interference wavelength and horizontal wavenumber predictions from parabolic approximation models of shallow water, low frequency (less than 100 Hz) acoustic propagation problems is examined. The investigation is directed toward environmental parameter values corresponding generally to those near the site of a recent New Jersey shelf experiment. First, typical parameter uncertainties in the environment of the experiment site are used to determine effects of parameter sensitivities on the accuracy of two -layer isospeed model predictions. Also, analytic expressions for rates of change of wavenumbers with respect to parameters are used to compute wavenumber and interference wavelength changes caused by parameter variations corresponding to the uncertainties. It is found that channel depth variations cause the largest change in intensity, while water sound speed variations have the greatest effect on wavenumbers. Variability of the parameter sensitivities in regions about the base parameter sets is also examined, with rates of change generally staying of the same order of magnitude throughout the regions considered. Effects of input parameter uncertainties and basic approximations in depth and range dependent PE models are also investigated. Parameter uncertainties are found to impose the greatest limitations on prediction accuracy, with sediment sound speed uncertainties causing the largest restrictions. Models of bottom sound speed profiles that take into account sediment consolidation are developed, and their effects on propagation model predictions in range dependent environments are examined. Using borehole density data and Biot-Stoll theory, a functional form for porosity in a homogeneous consolidated sediment is derived. Corresponding sound speed profiles for different sediment types are then constructed. Predictions from models consisting of consolidated sediment layers throughout the bottom are compared with results from cases
Uncertainty in peak cooling load calculations
Energy Technology Data Exchange (ETDEWEB)
Dominguez-Munoz, Fernando; Cejudo-Lopez, Jose M.; Carrillo-Andres, Antonio [Grupo de Energetica, ETS Ingenieros Industriales, Universidad de Malaga, Calle Dr. Ortiz Ramos, 29071 Malaga (Spain)
2010-07-15
Peak cooling loads are usually calculated at early stages of the building project, when large uncertainties affect the input data. Uncertainties arise from a variety of sources like the lack of information, random components and the approximate nature of the building mathematical model. Unfortunately, these uncertainties are normally large enough to make the result of the calculation very dependent on starting assumptions about the value of input data. HVAC engineers deal with uncertainties through worst-case scenarios and/or safety factors. In this paper, a new approach is proposed based on stochastic simulation methods. Uncertainty bands are applied to the input data and propagated through the building model in order to determine their impact on the peak cooling load. The result of this calculation is a probability distribution that quantifies the whole range of possible peak loads and the probability of each interval. The stochastic solution is compared with the conventional one, and a global sensitivity analysis is undertaken to identify the most important uncertainties. (author)
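The stochastic approach described above can be sketched as follows. The one-line load model and the input distributions are deliberately simplified assumptions for illustration, not the authors' building model.

```python
import numpy as np

# Monte Carlo peak-load sketch: sample uncertain inputs, push each sample
# through a (toy) steady-state load model, and read the result off as a
# distribution rather than a single worst-case number.

rng = np.random.default_rng(0)
N = 100_000
U = rng.normal(0.5, 0.1, N)            # wall U-value, W/(m^2 K), assumed
area = 200.0                           # envelope area, m^2, fixed
dT = rng.triangular(8, 12, 16, N)      # design temperature difference, K
internal = rng.uniform(2000, 4000, N)  # internal gains, W

peak = U * area * dT + internal        # toy peak cooling load, W
print(f"median = {np.median(peak):,.0f} W, "
      f"95th percentile = {np.percentile(peak, 95):,.0f} W")
```

Instead of one deterministic number, the designer can now pick a percentile matching the accepted risk, which is the paper's alternative to blanket safety factors.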
International Nuclear Information System (INIS)
This book comprises nineteen chapters covering: an introduction to analytical chemistry; experimental error and statistics; chemical equilibrium and solubility; gravimetric analysis and the mechanism of precipitation; range and calculation of results; general principles of volumetric analysis; sedimentation methods, their types and titration curves; acid-base equilibria; acid-base titration curves; complex and firing reactions; an introduction to electroanalytical chemistry; electrodes and potentiometry; electrolysis and conductometry; voltammetry and polarography; spectrophotometry; atomic spectrometry; solvent extraction; chromatography; and experiments.
Hollow Gaussian Schell-model beam and its propagation
Wang, Li-Gang
2007-01-01
In this paper, we present a new model, hollow Gaussian-Schell model beams (HGSMBs), to describe the practical dark hollow beams. An analytical propagation formula for HGSMBs passing through a paraxial first-order optical system is derived based on the theory of coherence. Based on the derived formula, an application example showing the influence of spatial coherence on the propagation of beams is illustrated. It is found that the beam propagating properties of HGSMBs will be greatly affected by their spatial coherence. Our model provides a very convenient way for analyzing the propagation properties of partially coherent dark hollow beams.
When to carry out analytic continuation?
Zuo, J M
1998-01-01
This paper discusses analytic continuation in thermal field theory using the theory of $\eta-\xi$ spacetime. Taking a simple model as an example, the $2\times 2$ matrix real-time propagator is solved from the equation obtained through continuation of the equation for the imaginary-time propagator. The geometry of the $\eta-\xi$ spacetime plays an important role in the discussion.
Uncertainty in the environmental modelling process – A framework and guidance
Refsgaard, J. C.; Van Der Sluijs, J P; Hojberg, A.L.; Vanrolleghem, P.
2007-01-01
A terminology and typology of uncertainty is presented together with a framework for the modelling process, its interaction with the broader water management process and the role of uncertainty at different stages in the modelling processes. Brief reviews have been made of 14 different (partly complementary) methods commonly used in uncertainty assessment and characterisation: data uncertainty engine (DUE), error propagation equations, expert elicitation, extended peer review, inverse modelli...
Institute of Scientific and Technical Information of China (English)
路远发; 朱家平; 汪群英
2011-01-01
The "continuous propagation model of uncertainty" is a new two-error standard-curve regression theory based on York's algorithm. This paper briefly introduces the basic idea of the theory, its mathematical model, and a computer program designed according to the model. The program can perform simple regression for straight-line, quadratic, cubic, exponential and logarithmic standard curves; Y-single-error (relative or absolute) regression for straight lines; and x, y two-error (relative or absolute) regression for straight lines. Moreover, the detection limit can be calculated from the regression curve for a given maximum allowed error. The program is complete in function, friendly in interface and convenient in use, and is an essential tool for chemical testing laboratories.
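The two-error regression idea can be sketched with an effective-variance iteration: weight each point by the combined variance along the current slope and refit. This is a common simplification of the full York algorithm, not the program described in the record, and the data are made up.

```python
import numpy as np

# Effective-variance straight-line fit with errors in both x and y:
# iterate W_i = 1 / (sy_i^2 + b^2 * sx_i^2) and refit a weighted line.
# A simplified stand-in for York's algorithm, for illustration only.

def fit_xy_errors(x, y, sx, sy, iters=50):
    b = np.polyfit(x, y, 1)[0]          # ordinary fit as starting slope
    for _ in range(iters):
        W = 1.0 / (sy**2 + b**2 * sx**2)
        xm, ym = np.average(x, weights=W), np.average(y, weights=W)
        b = np.sum(W * (x - xm) * (y - ym)) / np.sum(W * (x - xm)**2)
    a = ym - b * xm
    return a, b

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])   # hypothetical calibration points
y = np.array([0.1, 1.9, 4.1, 5.9, 8.1])
sx = np.full(5, 0.1)                       # x measurement errors
sy = np.full(5, 0.2)                       # y measurement errors
a, b = fit_xy_errors(x, y, sx, sy)
print(f"intercept = {a:.3f}, slope = {b:.3f}")
```

With uniform errors the weights are equal and the result coincides with ordinary least squares; heteroscedastic errors would shift the fit toward the better-measured points.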
Premixed flame propagation in vertical tubes
Kazakov, Kirill A.
2016-04-01
Analytical treatment of the premixed flame propagation in vertical tubes with smooth walls is given. Using the on-shell flame description, equations for a quasi-steady flame with a small but finite front thickness are obtained and solved numerically. It is found that near the limits of inflammability, solutions describing upward flame propagation come in pairs having close propagation speeds and that the effect of gravity is to reverse the burnt gas velocity profile generated by the flame. On the basis of these results, a theory of partial flame propagation driven by a strong gravitational field is developed. A complete explanation is given of the intricate observed behavior of limit flames, including dependence of the inflammability range on the size of the combustion domain, the large distances of partial flame propagation, and the progression of flame extinction. The role of the finite front-thickness effects is discussed in detail. Also, various mechanisms governing flame acceleration in smooth tubes are identified. Acceleration of methane-air flames in open tubes is shown to be a combined effect of the hydrostatic pressure difference produced by the ambient cold air and the difference of dynamic gas pressure at the tube ends. On the other hand, a strong spontaneous acceleration of the fast methane-oxygen flames at the initial stage of their evolution in open-closed tubes is conditioned by metastability of the quasi-steady propagation regimes. An extensive comparison of the obtained results with the experimental data is made.
Sustainable Process Design under uncertainty analysis: targeting environmental indicators
DEFF Research Database (Denmark)
L. Gargalo, Carina; Gani, Rafiqul
2015-01-01
This study focuses on uncertainty analysis of environmental indicators used to support sustainable process design efforts. To this end, the Life Cycle Assessment methodology is extended with a comprehensive uncertainty analysis to propagate the uncertainties in input LCA data to the environmental...... extended LCA procedure is flexible and generic and can handle various sources of uncertainties in environmental impact analysis. This is expected to contribute to more reliable calculation of impact categories and robust sustainable process design....... from algae biomass is used as a case study. The results indicate there are considerable uncertainties in the calculated environmental indicators as revealed by CDFs. The underlying sources of these uncertainties are indeed the significant variation in the databases used for the LCA analysis. The...
A generalized photon propagator
Itin, Yakov
2007-01-01
A covariant, gauge-independent derivation of the generalized dispersion relation of electromagnetic waves in a medium with a local and linear constitutive law is presented. A generalized photon propagator is derived. For the Maxwell constitutive tensor, the standard light cone structure and the standard Feynman propagator are reinstated.
NASA propagation information center
Smith, Ernest K.; Flock, Warren L.
1990-07-01
The NASA Propagation Information Center became formally operational in July 1988. It is located in the Department of Electrical and Computer Engineering of the University of Colorado at Boulder. The center serves several purposes: a communications medium between the propagation program and the outside world, a mechanism for internal communication within the program, and an aid to management.
Measurement uncertainty in pharmaceutical analysis and its application
Institute of Scientific and Technical Information of China (English)
Marcus Augusto Lyrio Traple; Alessandro Morais Saviano; Fabiane Lacerda Francisco; Felipe Rebello Lourençon
2014-01-01
The measurement uncertainty provides complete information about an analytical result. This is very important because several decisions of compliance or non-compliance are based on analytical results in pharmaceutical industries. The aim of this work was to evaluate and discuss the estimation of uncertainty in pharmaceutical analysis. The uncertainty is a useful tool in the assessment of compliance or non-compliance of in-process and final pharmaceutical products as well as in the assessment of pharmaceutical equivalence and stability study of drug products.
Positrons from dark matter annihilation in the galactic halo: uncertainties
Fornengo, N; Lineros, R; Donato, F; Salati, P
2007-01-01
Indirect detection signals from dark matter annihilation are studied in the positron channel. We discuss in detail the positron propagation inside the galactic medium: we present novel solutions of the diffusion and propagation equations and we focus on the determination of the astrophysical uncertainties which affect the positron dark matter signal. We show that, especially in the low energy tail of the positron spectra at Earth, the uncertainty is sizeable, and we quantify the effect. Comparisons of our predictions with currently available and foreseen experimental data are presented.
Measurement uncertainty of lactase-containing tablets analyzed with FTIR.
Paakkunainen, Maaret; Kohonen, Jarno; Reinikainen, Satu-Pia
2014-01-01
Uncertainty is one of the most critical aspects in the determination of measurement reliability. In order to ensure accurate measurements, results need to be traceable and uncertainty measurable. In this study, the homogeneity of FTIR samples is determined with a combination of variographic and multivariate approaches. An approach for the estimation of uncertainty within an individual sample, as well as within repeated samples, is introduced. FTIR samples containing two commercial pharmaceutical lactase products (LactaNON and Lactrase) are used as an example of the procedure. The results showed that the approach is suitable for the purpose. The sample pellets were quite homogeneous, since the total uncertainty of each pellet varied between 1.5% and 2.5%. The heterogeneity within a tablet strip was found to be dominant, as 15-20 tablets have to be analyzed in order to achieve this uncertainty level. Uncertainty arising from the FTIR instrument was also assessed; the uncertainty estimates are computed directly from FTIR spectra without any concentration information on the analyte.
Measurement Uncertainty and Probability
Willink, Robin
2013-02-01
Part I. Principles: 1. Introduction; 2. Foundational ideas in measurement; 3. Components of error or uncertainty; 4. Foundational ideas in probability and statistics; 5. The randomization of systematic errors; 6. Beyond the standard confidence interval; Part II. Evaluation of Uncertainty: 7. Final preparation; 8. Evaluation using the linear approximation; 9. Evaluation without the linear approximations; 10. Uncertainty information fit for purpose; Part III. Related Topics: 11. Measurement of vectors and functions; 12. Why take part in a measurement comparison?; 13. Other philosophies; 14. An assessment of objective Bayesian methods; 15. A guide to the expression of uncertainty in measurement; 16. Measurement near a limit - an insoluble problem?; References; Index.
Uncertainty in artificial intelligence
Kanal, LN
1986-01-01
How to deal with uncertainty is a subject of much controversy in Artificial Intelligence. This volume brings together a wide range of perspectives on uncertainty, many of the contributors being the principal proponents in the controversy. Some of the notable issues which emerge from these papers revolve around an interval-based calculus of uncertainty, the Dempster-Shafer Theory, and probability as the best numeric model for uncertainty. There remain strong dissenting opinions not only about probability but even about the utility of any numeric method in this context.
The uncertainty of modeled soil carbon stock change for Finland
Lehtonen, Aleksi; Heikkinen, Juha
2013-04-01
Countries should report soil carbon stock changes of forests under the Kyoto Protocol. Under the Kyoto Protocol one can omit reporting of a carbon pool by verifying that the pool is not a source of carbon, which is especially tempting for the soil pool. However, verifying that the soils of a nation are not a source of carbon in a given year seems to be nearly impossible. The Yasso07 model was parametrized against various decomposition data using an MCMC method. Soil carbon changes in Finland between 1972 and 2011 were simulated with the Yasso07 model using litter input data derived from the National Forest Inventory (NFI) and fellings time series. The uncertainties of biomass models, litter turnover rates, NFI sampling and the Yasso07 model were propagated with Monte Carlo simulations. Due to the biomass estimation methods, the uncertainties of the various litter input sources (e.g. living trees, natural mortality and fellings) correlate strongly with each other. We show how the original covariance matrices can be analytically combined, greatly reducing the number of simulated components. While doing the simulations we found that proper handling of correlations may be even more essential than accurate estimates of standard errors. As a preliminary result, we found that both Southern and Northern Finland were soil carbon sinks, with coefficients of variation (CV) varying from 10% to 25% when the model was driven with long-term constant weather data. When we applied annual weather data, soils were both sinks and sources of carbon and CVs varied from 10% to 90%. This implies that the success of soil carbon sink verification depends on the weather data applied with the models. Due to this fact the IPCC should provide clear guidance on the weather data to be applied with soil carbon models and on soil carbon sink verification. In the UNFCCC reporting, carbon sinks of forest biomass have typically been averaged over five years; a similar period for soil model weather data would be logical.
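Why correlated input uncertainties matter in such Monte Carlo propagation can be illustrated with a correlated draw via a Cholesky factor. The two "litter input" components and their covariance are made-up numbers, not the Finnish inventory data.

```python
import numpy as np

# Correlated Monte Carlo sketch: when input uncertainties share a common
# source (here, a common biomass model), the covariance term changes the
# spread of the summed output. Illustrative numbers only.

rng = np.random.default_rng(1)
mean = np.array([100.0, 20.0])       # e.g. litter from living trees, mortality
cov = np.array([[25.0, 12.0],        # positive covariance from a shared
                [12.0,  9.0]])       # biomass estimation method
L = np.linalg.cholesky(cov)
samples = mean + rng.standard_normal((50_000, 2)) @ L.T
total = samples.sum(axis=1)

# Variance of the sum includes the covariance term: 25 + 9 + 2*12 = 58,
# versus 34 if the inputs were (wrongly) sampled independently.
print(f"simulated var = {total.var():.1f}, analytical var = 58.0")
```

Ignoring the off-diagonal terms would understate the output variance by roughly 40% in this toy case, which is the paper's point about handling correlations properly.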
Uncertainty in hydrological signatures
McMillan, Hilary; Westerberg, Ida
2015-04-01
Information that summarises the hydrological behaviour or flow regime of a catchment is essential for comparing responses of different catchments to understand catchment organisation and similarity, and for many other modelling and water-management applications. Such information types derived as an index value from observed data are known as hydrological signatures, and can include descriptors of high flows (e.g. mean annual flood), low flows (e.g. mean annual low flow, recession shape), the flow variability, flow duration curve, and runoff ratio. Because the hydrological signatures are calculated from observed data such as rainfall and flow records, they are affected by uncertainty in those data. Subjective choices in the method used to calculate the signatures create a further source of uncertainty. Uncertainties in the signatures may affect our ability to compare different locations, to detect changes, or to compare future water resource management scenarios. The aim of this study was to contribute to the hydrological community's awareness and knowledge of data uncertainty in hydrological signatures, including typical sources, magnitude and methods for its assessment. We proposed a generally applicable method to calculate these uncertainties based on Monte Carlo sampling and demonstrated it for a variety of commonly used signatures. The study was made for two data rich catchments, the 50 km2 Mahurangi catchment in New Zealand and the 135 km2 Brue catchment in the UK. For rainfall data the uncertainty sources included point measurement uncertainty, the number of gauges used in calculation of the catchment spatial average, and uncertainties relating to lack of quality control. For flow data the uncertainty sources included uncertainties in stage/discharge measurement and in the approximation of the true stage-discharge relation by a rating curve. The resulting uncertainties were compared across the different signatures and catchments, to quantify uncertainty
PROPAGATING COLLECTIVE EXCITATIONS IN MOLTEN SALTS
Directory of Open Access Journals (Sweden)
T.Bryk
2003-01-01
Longitudinal as well as transverse dynamics of a molten salt beyond the hydrodynamic region are studied within the generalized collective modes (GCM) approach. An analytical three-variable model is applied to the treatment of the coupled long- and short-time charge fluctuations. Dispersion laws of propagating kinetic collective excitations such as optic phonon-like modes, heat and shear waves are obtained and analyzed for the case of molten NaCl within the eight-variable GCM scheme, combining analytical methods and molecular dynamics simulations.
Spatial Uncertainty Model for Visual Features Using a Kinect™ Sensor
Directory of Open Access Journals (Sweden)
Jae-Han Park
2012-06-01
This study proposes a mathematical uncertainty model for the spatial measurement of visual features using Kinect™ sensors. This model can provide qualitative and quantitative analysis for the utilization of Kinect™ sensors as 3D perception sensors. In order to achieve this objective, we derived the propagation relationship of the uncertainties between the disparity image space and the real Cartesian space with the mapping function between the two spaces. Using this propagation relationship, we obtained the mathematical model for the covariance matrix of the measurement error, which represents the uncertainty for spatial position of visual features from Kinect™ sensors. In order to derive the quantitative model of spatial uncertainty for visual features, we estimated the covariance matrix in the disparity image space using collected visual feature data. Further, we computed the spatial uncertainty information by applying the covariance matrix in the disparity image space and the calibrated sensor parameters to the proposed mathematical model. This spatial uncertainty model was verified by comparing the uncertainty ellipsoids for spatial covariance matrices and the distribution of scattered matching visual features. We expect that this spatial uncertainty model and its analyses will be useful in various Kinect™ sensor applications.
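The covariance propagation from disparity space to Cartesian space can be sketched with the first-order J·Σ·Jᵀ construction for a stereo back-projection model. The camera parameters and noise levels are assumed values for illustration, not calibrated Kinect™ parameters.

```python
import numpy as np

# First-order propagation of (u, v, d) measurement covariance to (x, y, z)
# via the Jacobian of the back-projection z = f*b/d. Illustrative sketch.

f, b, cx, cy = 580.0, 0.075, 320.0, 240.0   # focal length (px), baseline (m)

def backproject(u, v, d):
    z = f * b / d
    return np.array([(u - cx) * z / f, (v - cy) * z / f, z])

def jacobian(u, v, d):
    z = f * b / d
    return np.array([
        [z / f, 0.0,   -(u - cx) * z / (f * d)],
        [0.0,   z / f, -(v - cy) * z / (f * d)],
        [0.0,   0.0,   -z / d],
    ])

sigma_uvd = np.diag([0.5**2, 0.5**2, 0.3**2])   # pixel/disparity noise
J = jacobian(400.0, 300.0, 20.0)
sigma_xyz = J @ sigma_uvd @ J.T                  # first-order covariance
print(np.sqrt(np.diag(sigma_xyz)))               # per-axis std dev (m)
```

Because z grows as 1/d, the depth variance term scales with z²/d, which reproduces the familiar blow-up of depth uncertainty for distant features.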
Uncertainty and filtering of hidden Markov models in discrete time
Cohen, Samuel N.
2016-01-01
We consider the problem of filtering an unseen Markov chain from noisy observations, in the presence of uncertainty of the parameters of the processes involved. Using the theory of nonlinear expectations, we describe the uncertainty in terms of a penalty function, which can be propagated forward in time alongside the filter, while maintaining dynamic consistency. We then study the approximation of this penalty, and give a simple numerical example.
Uncertainty in flood risk mapping
Gonçalves, Luisa M. S.; Fonte, Cidália C.; Gomes, Ricardo
2014-05-01
A flood refers to a sharp increase of water level or volume in rivers and seas caused by sudden rainstorms or melting ice due to natural factors. In this paper, the flooding of riverside urban areas caused by sudden rainstorms will be studied. In this context, flooding occurs when the water runs above the level of the minor river bed and enters the major river bed. The level of the major bed determines the magnitude and risk of the flooding. The prediction of the flooding extent is usually deterministic, and corresponds to the expected limit of the flooded area. However, there are many sources of uncertainty in the process of obtaining these limits, which influence the obtained flood maps used for watershed management or as instruments for territorial and emergency planning. In addition, small variations in the delineation of the flooded area can be translated into erroneous risk prediction. Therefore, maps that reflect the uncertainty associated with the flood modeling process have started to be developed, associating a degree of likelihood with the boundaries of the flooded areas. In this paper an approach is presented that enables the influence of the parameters uncertainty to be evaluated, dependent on the type of Land Cover Map (LCM) and Digital Elevation Model (DEM), on the estimated values of the peak flow and the delineation of flooded areas (different peak flows correspond to different flood areas). The approach requires modeling the DEM uncertainty and its propagation to the catchment delineation. The results obtained in this step enable a catchment with fuzzy geographical extent to be generated, where a degree of possibility of belonging to the basin is assigned to each elementary spatial unit. Since the fuzzy basin may be considered as a fuzzy set, the fuzzy area of the basin may be computed, generating a fuzzy number. The catchment peak flow is then evaluated using fuzzy arithmetic. With this methodology a fuzzy number is obtained for the peak flow
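The fuzzy peak-flow step described above can be illustrated with alpha-cut interval arithmetic. The rational-method formula Q = C·i·A and all numbers are hypothetical stand-ins for the paper's hydrological model.

```python
import numpy as np

def tri_alpha_cut(a, m, b, alpha):
    """Interval of a triangular fuzzy number (a, m, b) at level alpha."""
    return a + alpha * (m - a), b - alpha * (b - m)

def fuzzy_peak_flow(area_tri, c=0.6, intensity=50.0, levels=11):
    """Propagate a fuzzy catchment area A through Q = C * i * A using
    interval arithmetic at each alpha-cut. The map is monotone in A, so
    propagating the interval endpoints suffices."""
    cuts = {}
    for alpha in np.linspace(0.0, 1.0, levels):
        lo, hi = tri_alpha_cut(*area_tri, alpha)
        cuts[round(float(alpha), 2)] = (c * intensity * lo, c * intensity * hi)
    return cuts

# Fuzzy catchment area in km^2 (toy triangular number: min, core, max).
q = fuzzy_peak_flow((9.0, 10.0, 11.5))
# q[1.0] is the crisp core; q[0.0] is the widest (support) interval.
```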
Applicability of Parametrized Form of Fully Dressed Quark Propagator
Institute of Scientific and Technical Information of China (English)
[Anonymous]
2006-01-01
According to an extensive study of the Dyson-Schwinger equations for a fully dressed quark propagator in the "rainbow" approximation with an effective gluon propagator, a parametrized fully dressed confining quark propagator is suggested in this paper. The parametrized quark propagator describes a confined quark propagating in a hadron, is analytic everywhere in the complex p^2-plane and has no Lehmann representation. The vector and scalar self-energy functions [1 - A_f(p^2)] and [B_f(p^2) - m_f], the dynamically running effective quark mass M_f(p^2), and the structure of non-local as well as local quark vacuum condensates are predicted by use of the parametrized quark propagator. The results are compatible with other theoretical calculations.
Photon propagation in slowly varying electromagnetic fields
Karbstein, Felix
2016-01-01
We study the effective theory of soft photons in slowly varying electromagnetic background fields at one-loop order in QED. This is of relevance for the study of all-optical signatures of quantum vacuum nonlinearity in realistic electromagnetic background fields as provided by high-intensity lasers. The central result derived in this article is a new analytical expression for the photon polarization tensor in two linearly polarized counter-propagating pulsed Gaussian laser beams. As we treat ...
Space Propagation of Instabilities in Zakharov Equations
Metivier, Guy
2007-01-01
In this paper we study an initial boundary value problem for Zakharov's equations, describing the space propagation of a laser beam entering a plasma. We prove a strong instability result and show that the mathematical problem is ill-posed in Sobolev spaces. We also show that it is well posed in spaces of analytic functions. Several consequences for the physical consistency of the model are discussed.
Wave Propagation in Bimodular Geomaterials
Kuznetsova, Maria; Pasternak, Elena; Dyskin, Arcady; Pelinovsky, Efim
2016-04-01
Observations and laboratory experiments show that fragmented or layered geomaterials have the mechanical response dependent on the sign of the load. The most adequate model accounting for this effect is the theory of bimodular (bilinear) elasticity - a hyperelastic model with different elastic moduli for tension and compression. For most of geo- and structural materials (cohesionless soils, rocks, concrete, etc.) the difference between elastic moduli is such that their modulus in compression is considerably higher than that in tension. This feature has a profound effect on oscillations [1]; however, its effect on wave propagation has not been comprehensively investigated. It is believed that incorporation of bilinear elastic constitutive equations within theory of wave dynamics will bring a deeper insight to the study of mechanical behaviour of many geomaterials. The aim of this paper is to construct a mathematical model and develop analytical methods and numerical algorithms for analysing wave propagation in bimodular materials. Geophysical and exploration applications and applications in structural engineering are envisaged. The FEM modelling of wave propagation in a 1D semi-infinite bimodular material has been performed with the use of Marlow potential [2]. In the case of the initial load expressed by a harmonic pulse loading strong dependence on the pulse sign is observed: when tension is applied before compression, the phenomenon of disappearance of negative (compressive) strains takes place. References 1. Dyskin, A., Pasternak, E., & Pelinovsky, E. (2012). Periodic motions and resonances of impact oscillators. Journal of Sound and Vibration, 331(12), 2856-2873. 2. Marlow, R. S. (2008). A Second-Invariant Extension of the Marlow Model: Representing Tension and Compression Data Exactly. In ABAQUS Users' Conference.
Van Nooyen, R.R.P.; Hrachowitz, M.; Kolechkina, A.G.
2014-01-01
Even without uncertainty about the model structure or parameters, the output of a hydrological model run still contains several sources of uncertainty. These are: measurement errors affecting the input, the transition from continuous time and space to discrete time and space, which causes loss of in
Capel, H.W.; Cramer, J.S.; Estevez-Uscanga, O.
1995-01-01
'Uncertainty and chance' is a subject with a broad span, in that there is no academic discipline or walk of life that is not beset by uncertainty and chance. In this book a range of approaches is represented by authors from varied disciplines: natural sciences, mathematics, social sciences and medic
Photovoltaic System Modeling. Uncertainty and Sensitivity Analyses
Energy Technology Data Exchange (ETDEWEB)
Hansen, Clifford W. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Martin, Curtis E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2015-08-01
We report an uncertainty and sensitivity analysis for modeling AC energy from photovoltaic systems. Output from a PV system is predicted by a sequence of models. We quantify uncertainty in the output of each model using empirical distributions of each model's residuals. We propagate uncertainty through the sequence of models by sampling these distributions to obtain an empirical distribution of a PV system's output. We consider models that: (1) translate measured global horizontal, direct and global diffuse irradiance to plane-of-array irradiance; (2) estimate effective irradiance; (3) predict cell temperature; (4) estimate DC voltage, current and power; (5) reduce DC power for losses due to inefficient maximum power point tracking or mismatch among modules; and (6) convert DC to AC power. Our analysis considers a notional PV system comprising an array of First Solar FS-387 modules and a 250 kW AC inverter; we use measured irradiance and weather at Albuquerque, NM. We found the uncertainty in PV system output to be relatively small, on the order of 1% for daily energy. We found the uncertainty in the models for POA irradiance and effective irradiance to be the dominant contributors to uncertainty in predicted daily energy. Our analysis indicates that efforts to reduce the uncertainty in PV system output predictions may yield the greatest improvements by focusing on the POA and effective irradiance models.
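The residual-sampling scheme described in the abstract can be sketched with a toy two-step model chain; the models, residual spreads, and all numbers below are invented stand-ins, not the report's actual models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-step model chain: irradiance -> plane-of-array -> AC power.
def poa_model(ghi):
    return 1.1 * ghi           # toy plane-of-array irradiance model

def power_model(poa):
    return 0.18 * poa          # toy AC power response

# Empirical residuals for each model (stand-ins for measured residuals).
residuals_poa = rng.normal(0.0, 20.0, size=500)   # W/m^2
residuals_pwr = rng.normal(0.0, 3.0, size=500)    # W

def propagate(ghi, n_samples=10_000):
    """Sample each model's empirical residual distribution and push the
    perturbed outputs through the chain, yielding an empirical
    distribution of the final output."""
    poa = poa_model(ghi) + rng.choice(residuals_poa, n_samples)
    pac = power_model(poa) + rng.choice(residuals_pwr, n_samples)
    return pac

samples = propagate(800.0)
```

The spread of `samples` is the propagated uncertainty; comparing it against runs where only one residual distribution is sampled gives the per-model sensitivity.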
Physical Uncertainty Bounds (PUB)
Energy Technology Data Exchange (ETDEWEB)
Vaughan, Diane Elizabeth [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Preston, Dean L. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2015-03-19
This paper introduces and motivates the need for a new methodology for determining upper bounds on the uncertainties in simulations of engineered systems due to limited fidelity in the composite continuum-level physics models needed to simulate the systems. We show that traditional uncertainty quantification methods provide, at best, a lower bound on this uncertainty. We propose to obtain bounds on the simulation uncertainties by first determining bounds on the physical quantities or processes relevant to system performance. By bounding these physics processes, as opposed to carrying out statistical analyses of the parameter sets of specific physics models or simply switching out the available physics models, one can obtain upper bounds on the uncertainties in simulated quantities of interest.
Nanoparticles: Uncertainty Risk Analysis
DEFF Research Database (Denmark)
Grieger, Khara Deanne; Hansen, Steffen Foss; Baun, Anders
2012-01-01
approaches. To date, there have been a number of different approaches to assess uncertainty of environmental risks in general, and some have also been proposed in the case of nanoparticles and nanomaterials. In recent years, others have also proposed that broader assessments of uncertainty are also needed......Scientific uncertainty plays a major role in assessing the potential environmental risks of nanoparticles. Moreover, there is uncertainty within fundamental data and information regarding the potential environmental and health risks of nanoparticles, hampering risk assessments based on standard...... in order to handle the complex potential risks of nanoparticles, including more descriptive characterizations of uncertainty. Some of these approaches are presented and discussed herein, in which the potential strengths and limitations of these approaches are identified along with further challenges...
Measurement uncertainty and probability
Willink, Robin
2013-01-01
A measurement result is incomplete without a statement of its 'uncertainty' or 'margin of error'. But what does this statement actually tell us? By examining the practical meaning of probability, this book discusses what is meant by a '95 percent interval of measurement uncertainty', and how such an interval can be calculated. The book argues that the concept of an unknown 'target value' is essential if probability is to be used as a tool for evaluating measurement uncertainty. It uses statistical concepts, such as a conditional confidence interval, to present 'extended' classical methods for evaluating measurement uncertainty. The use of the Monte Carlo principle for the simulation of experiments is described. Useful for researchers and graduate students, the book also discusses other philosophies relating to the evaluation of measurement uncertainty. It employs clear notation and language to avoid the confusion that exists in this controversial field of science.
Economic uncertainty and econophysics
Schinckus, Christophe
2009-10-01
The objective of this paper is to provide a methodological link between econophysics and economics. I study a key notion of both fields, uncertainty, and the ways of thinking about it developed by the two disciplines. After presenting the main economic theories of uncertainty (provided by Knight, Keynes and Hayek), I show how this notion is paradoxically excluded from the economic field. In economics, uncertainty is totally reduced by an a priori Gaussian framework; econophysics, in contrast, does not use a priori models because it works directly on data. Uncertainty is then not shaped by a specific model, and is partially and temporally reduced as models improve. This way of thinking about uncertainty has echoes in the economic literature. By presenting econophysics as a Knightian method, and a complementary approach to a Hayekian framework, this paper shows that econophysics can be methodologically justified from an economic point of view.
Complex-Mass Definition and the Structure of Unstable Particle’s Propagator
Directory of Open Access Journals (Sweden)
Vladimir Kuksa
2015-01-01
Full Text Available The propagators of unstable particles are considered in the framework of the convolution representation. The spectral function is found for the special case in which the propagator of a scalar unstable particle has the Breit-Wigner form. The expressions for the dressed propagators of unstable vector and spinor fields are derived in an analytical way for this case. We obtain the propagators in modified Breit-Wigner forms which correspond to the complex-mass definition.
Conclusions on measurement uncertainty in microbiology.
Forster, Lynne I
2009-01-01
Since its first issue in 1999, testing laboratories wishing to comply with all the requirements of ISO/IEC 17025 have been collecting data for estimating uncertainty of measurement for quantitative determinations. In the microbiological field of testing, some debate has arisen as to whether uncertainty needs to be estimated for each method performed in the laboratory for each type of sample matrix tested. Queries also arise concerning the estimation of uncertainty when plate/membrane filter colony counts are below recommended method counting range limits. A selection of water samples (with low to high contamination) was tested in replicate, with the associated uncertainty of measurement being estimated from the analytical results obtained. The analyses performed on the water samples included total coliforms, fecal coliforms, fecal streptococci by membrane filtration, and heterotrophic plate counts by the pour plate technique. For those samples where plate/membrane filter colony counts were ≥20, uncertainty estimates at a 95% confidence level were very similar for the methods, being estimated as 0.13, 0.14, 0.14, and 0.12, respectively. For those samples where plate/membrane filter colony counts were <20, uncertainty values for each sample showed close agreement with published confidence limits established using a Poisson distribution approach.
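The replicate-based estimate can be sketched with the common duplicate log10-count approach (in the spirit of ISO/TS 19036); the duplicate counts below are made up for illustration.

```python
import math

# Duplicate log10 colony counts (A, B) for a set of water samples
# (illustrative values, not the paper's data).
duplicates = [
    (2.31, 2.26), (3.05, 3.12), (1.85, 1.79),
    (2.64, 2.70), (3.40, 3.33), (2.02, 2.08),
]

# Reproducibility standard deviation from paired differences:
#   s = sqrt( sum((a - b)^2) / (2 n) )
n = len(duplicates)
s = math.sqrt(sum((a - b) ** 2 for a, b in duplicates) / (2 * n))

# Expanded uncertainty at ~95% confidence (coverage factor k = 2),
# in log10 units, comparable to the 0.12-0.14 figures quoted above.
U = 2 * s
```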
Assessing uncertainties in surface water security: An empirical multimodel approach
Rodrigues, Dulce B. B.; Gupta, Hoshin V.; Mendiondo, Eduardo M.; Oliveira, Paulo Tarso S.
2015-11-01
Various uncertainties are involved in the representation of processes that characterize interactions among societal needs, ecosystem functioning, and hydrological conditions. Here we develop an empirical uncertainty assessment of water security indicators that characterize scarcity and vulnerability, based on a multimodel and resampling framework. We consider several uncertainty sources including those related to (i) observed streamflow data; (ii) hydrological model structure; (iii) residual analysis; (iv) the method for defining Environmental Flow Requirement; (v) the definition of critical conditions for water provision; and (vi) the critical demand imposed by human activities. We estimate the overall hydrological model uncertainty by means of a residual bootstrap resampling approach, and by uncertainty propagation through different methodological arrangements applied to a 291 km2 agricultural basin within the Cantareira water supply system in Brazil. Together, the two-component hydrograph residual analysis and the block bootstrap resampling approach result in a more accurate and precise estimate of the uncertainty (95% confidence intervals) in the simulated time series. We then compare the uncertainty estimates associated with water security indicators using a multimodel framework and the uncertainty estimates provided by each model uncertainty estimation approach. The range of values obtained for the water security indicators suggests that the models/methods are robust and perform well in a range of plausible situations. The method is general and can be easily extended, thereby forming the basis for meaningful support to end-users facing water resource challenges by enabling them to incorporate a viable uncertainty analysis into a robust decision-making process.
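The block bootstrap step can be sketched as follows; the block length, residual series, and the constant "simulated flow" are toy assumptions, not the study's hydrological setup.

```python
import numpy as np

rng = np.random.default_rng(42)

def block_bootstrap(residuals, block_len, n_boot):
    """Resample a residual series in contiguous blocks (preserving
    short-range autocorrelation), returning n_boot resampled series."""
    n = len(residuals)
    n_blocks = int(np.ceil(n / block_len))
    out = np.empty((n_boot, n))
    for i in range(n_boot):
        starts = rng.integers(0, n - block_len + 1, size=n_blocks)
        series = np.concatenate([residuals[s:s + block_len] for s in starts])
        out[i] = series[:n]
    return out

# Toy daily residuals (observed minus simulated streamflow) and a toy
# constant simulated flow.
residuals = rng.normal(0.0, 1.0, size=365)
sim = np.full(365, 10.0)

# 95% confidence band for the simulated series from bootstrapped residuals.
boot = sim + block_bootstrap(residuals, block_len=10, n_boot=1000)
lower, upper = np.percentile(boot, [2.5, 97.5], axis=0)
```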
Javed, A.; Kamphues, E.; Hartuc, T.; Pecnik, R.; Van Buijtenen, J.P.
2015-01-01
The compressor impellers for mass-produced turbochargers are generally die-cast and machined to their final configuration. Manufacturing uncertainties are inherently introduced as stochastic dimensional deviations in the impeller geometry. These deviations eventually propagate into the compressor
Uncertainty in wind climate parameters and their influence on wind turbine fatigue loads
DEFF Research Database (Denmark)
Toft, Henrik Stensgaard; Svenningsen, Lasse; Sørensen, John Dalsgaard;
2016-01-01
Highlights • Probabilistic framework for reliability assessment of site specific wind turbines. • Uncertainty in wind climate parameters propagated to structural loads directly. • Sensitivity analysis to estimate wind climate parameters influence on reliability.
Working fluid selection for organic Rankine cycles - Impact of uncertainty of fluid properties
DEFF Research Database (Denmark)
Frutiger, Jerome; Andreasen, Jesper Graa; Liu, Wei;
2016-01-01
This study presents a generic methodology to select working fluids for ORC (Organic Rankine Cycles) taking into account property uncertainties of the working fluids. A Monte Carlo procedure is described as a tool to propagate the influence of the input uncertainty of the fluid parameters on the ORC...... of process models and constraints 2) selection of property models, i.e. the Peng-Robinson equation of state 3) screening of 1965 possible working fluid candidates including identification of optimal process parameters based on Monte Carlo sampling 4) propagating uncertainty of fluid parameters to the ORC net power output....... The net power outputs of all the feasible working fluids were ranked including their uncertainties. The method could propagate and quantify the input property uncertainty of the fluid property parameters to the ORC model, giving an additional dimension to the fluid selection process. In the given analysis...
Optimal Universal Uncertainty Relations
Li, Tao; Xiao, Yunlong; Ma, Teng; Fei, Shao-Ming; Jing, Naihuan; Li-Jost, Xianqing; Wang, Zhi-Xi
2016-01-01
We study universal uncertainty relations and present a method called joint probability distribution diagram to improve the majorization bounds constructed independently in [Phys. Rev. Lett. 111, 230401 (2013)] and [J. Phys. A. 46, 272002 (2013)]. The results give rise to state independent uncertainty relations satisfied by any nonnegative Schur-concave functions. On the other hand, a remarkable recent result of entropic uncertainty relation is the direct-sum majorization relation. In this paper, we illustrate our bounds by showing how they provide a complement to that in [Phys. Rev. A. 89, 052115 (2014)]. PMID:27775010
Indian Academy of Sciences (India)
Rituparna Chutia; Supahi Mahanta; D Datta
2014-04-01
The parameters associated with an environmental dispersion model may include different kinds of variability, imprecision and uncertainty. More often than not, available information is interpreted in a probabilistic sense. Probability theory is a well-established theory for measuring such variability. However, not all available information, data or model parameters affected by variability, imprecision and uncertainty can be handled by traditional probability theory. Uncertainty or imprecision may occur due to incomplete information or data, measurement error, or data obtained from expert judgement or subjective interpretation of available data or information. Thus, model parameter data may be affected by subjective uncertainty. Traditional probability theory is inappropriate to represent subjective uncertainty. Possibility theory is used as a tool to describe parameters with insufficient knowledge. Based on the polynomial chaos expansion, the stochastic response surface method has been utilized in this article for the uncertainty propagation of an atmospheric dispersion model under consideration of both probabilistic and possibilistic information. The proposed method has been demonstrated through a hypothetical case study of atmospheric dispersion.
Farrance, Ian; Frenkel, Robert
2014-02-01
The Guide to the Expression of Uncertainty in Measurement (usually referred to as the GUM) provides the basic framework for evaluating uncertainty in measurement. The GUM however does not always provide clearly identifiable procedures suitable for medical laboratory applications, particularly when internal quality control (IQC) is used to derive most of the uncertainty estimates. The GUM modelling approach requires advanced mathematical skills for many of its procedures, but Monte Carlo simulation (MCS) can be used as an alternative for many medical laboratory applications. In particular, calculations for determining how uncertainties in the input quantities to a functional relationship propagate through to the output can be accomplished using a readily available spreadsheet such as Microsoft Excel. The MCS procedure uses algorithmically generated pseudo-random numbers which are then forced to follow a prescribed probability distribution. When IQC data provide the uncertainty estimates the normal (Gaussian) distribution is generally considered appropriate, but MCS is by no means restricted to this particular case. With input variations simulated by random numbers, the functional relationship then provides the corresponding variations in the output in a manner which also provides its probability distribution. The MCS procedure thus provides output uncertainty estimates without the need for the differential equations associated with GUM modelling. The aim of this article is to demonstrate the ease with which Microsoft Excel (or a similar spreadsheet) can be used to provide an uncertainty estimate for measurands derived through a functional relationship. In addition, we also consider the relatively common situation where an empirically derived formula includes one or more 'constants', each of which has an empirically derived numerical value. Such empirically derived 'constants' must also have associated uncertainties which propagate through the functional relationship
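The spreadsheet MCS procedure described above translates directly into a few lines of code: sample each input from its prescribed distribution, push the samples through the functional relationship, and summarize the output distribution. The measurand y = ax/(b + x) and all input values and uncertainties below are hypothetical, chosen only to show the mechanics.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000

# Hypothetical functional relationship for a derived measurand,
# y = (a * x) / (b + x), with IQC-derived normal (Gaussian)
# uncertainties on each input (values are illustrative, not from
# any real assay).
a = rng.normal(2.50, 0.05, N)   # empirically derived 'constant' a
b = rng.normal(1.20, 0.03, N)   # empirically derived 'constant' b
x = rng.normal(4.00, 0.10, N)   # measured input quantity

y = (a * x) / (b + x)

mean = y.mean()
u = y.std(ddof=1)               # standard uncertainty of the output
lo, hi = np.percentile(y, [2.5, 97.5])
```

No partial derivatives are needed: the empirical spread of `y` plays the role that the GUM's combined standard uncertainty plays in the modelling approach.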
Assessment of errors and uncertainty patterns in GIA modeling
DEFF Research Database (Denmark)
Barletta, Valentina Roberta; Spada, G.
2012-01-01
, such as time-evolving shorelines and paleo coastlines. In this study we quantify these uncertainties and their propagation in GIA response using a Monte Carlo approach to obtain spatio-temporal patterns of GIA errors. A direct application is the error estimates in ice mass balance in Antarctica and Greenland...
Quantification of Modelling Uncertainties in Turbulent Flow Simulations
Edeling, W.N.
2015-01-01
The goal of this thesis is to make predictive simulations with Reynolds-Averaged Navier-Stokes (RANS) turbulence models, i.e. simulations with a systematic treatment of model and data uncertainties and their propagation through a computational model to produce predictions of quantities of interest w
Uncertainty in biodiversity science, policy and management: a conceptual overview
Directory of Open Access Journals (Sweden)
Yrjö Haila
2014-10-01
Full Text Available The protection of biodiversity is a complex societal, political and ultimately practical imperative of current global society. The imperative builds upon scientific knowledge of human dependence on the life-support systems of the Earth. This paper aims at introducing the main types of uncertainty inherent in biodiversity science, policy and management, as an introduction to a companion paper summarizing practical experiences of scientists and scholars (Haila et al. 2014). Uncertainty is a cluster concept: the actual nature of uncertainty is inherently context-bound. We use semantic space as a conceptual device to identify key dimensions of uncertainty in the context of biodiversity protection; these relate to [i] data; [ii] proxies; [iii] concepts; [iv] policy and management; and [v] normative goals. Semantic space offers an analytic perspective for drawing critical distinctions between types of uncertainty, identifying fruitful resonances that help to cope with the uncertainties, and building up collaboration between different specialists to support mutual social learning.
A Web tool for calculating k0-NAA uncertainties
International Nuclear Information System (INIS)
The calculation of uncertainty budgets is becoming a standard step in reporting analytical results. This gives rise to the need for simple, easily accessed tools to calculate uncertainty budgets. An example of such a tool is the Excel spreadsheet approach of Robouch et al. An internet application which calculates uncertainty budgets for k0-NAA is presented. The Web application has built-in 'Literature' values for standard isotopes and accepts as inputs fixed information such as the thermal to epithermal neutron flux ratio, as well as experiment-specific data such as the mass of the sample. The application calculates and displays intermediate uncertainties as well as the final combined uncertainty of the element concentration in the sample. The interface only requires access to a standard browser and is thus easily accessible to researchers and laboratories. This may facilitate and standardize the calculation of k0-NAA uncertainty budgets. (author)
DEFF Research Database (Denmark)
Christensen, Hanne Bjerre; Poulsen, Mette Erecius; Pedersen, Mikael
2003-01-01
The estimation of uncertainty of an analytical result has become important in analytical chemistry. It is especially difficult to determine uncertainties for multiresidue methods, e.g. for pesticides in fruit and vegetables, as the varieties of pesticide/commodity combinations are many....... In the present study, recommendations from the International Organisation for Standardisation's (ISO) Guide to the Expression of Uncertainty and the EURACHEM/CITAC guide Quantifying Uncertainty in Analytical Measurements were followed to estimate the expanded uncertainties for 153 pesticides in fruit...
Institute of Scientific and Technical Information of China (English)
LI Dian-qing; ZHANG Sheng-kun
2004-01-01
The classical probability theory cannot effectively quantify the parameter uncertainty in probability of detection. Furthermore, the conventional data analytic method and expert judgment method fail to handle the problem of model uncertainty updating with the information from nondestructive inspection. To overcome these disadvantages, a Bayesian approach was proposed to quantify the parameter uncertainty in probability of detection. Furthermore, the formulae of the multiplication factors to measure the statistical uncertainties in the probability of detection following the Weibull distribution were derived. A Bayesian updating method was applied to compute the posterior probabilities of model weights and the posterior probability density functions of distribution parameters of probability of detection. A total probability model method was proposed to analyze the problem of multi-layered model uncertainty updating. This method was then applied to the problem of multi-layered corrosion model uncertainty updating for ship structures. The results indicate that the proposed method is very effective in analyzing the problem of multi-layered model uncertainty updating.
Tsunami generation by ocean floor rupture front propagation: Hamiltonian description
Directory of Open Access Journals (Sweden)
V. I. Pavlov
2009-02-01
Full Text Available The Hamiltonian method is applied to the problem of tsunami generation caused by a propagating rupture front and deformation of the ocean floor. The method establishes an alternative framework for analyzing the tsunami generation process and produces analytical expressions for the power and directivity of tsunami radiation (in the far field) for two illustrative cases, with constant and gradually varying speeds of rupture front propagation.
Propagation characteristics of electromagnetic waves along a dense plasma filament
Energy Technology Data Exchange (ETDEWEB)
Nowakowska, H.; Zakrzewski, Z. [Institute of Fluid-Flow Machinery, Polish Academy of Sciences, Gdansk (Poland); Moisan, M. [Departement de Physique, Universite de Montreal, Montreal, PQ (Canada)
2001-05-21
The characteristics of electromagnetic waves propagating along dense plasma filaments, as encountered in atmospheric pressure discharges, are examined in the microwave frequency range; they turn out to be surface waves. Results of numerical calculations of the dependence of the phase and attenuation coefficients on the plasma parameters are presented. In the limit of large electron densities, this guided wave is akin to a Sommerfeld wave and the propagation can be described in an analytical form. (author)
Introduction to uncertainty quantification
Sullivan, T J
2015-01-01
Uncertainty quantification is a topic of increasing practical importance at the intersection of applied mathematics, statistics, computation, and numerous application areas in science and engineering. This text provides a framework in which the main objectives of the field of uncertainty quantification are defined, and an overview of the range of mathematical methods by which they can be achieved. Complete with exercises throughout, the book will equip readers with both theoretical understanding and practical experience of the key mathematical and algorithmic tools underlying the treatment of uncertainty in modern applied mathematics. Students and readers alike are encouraged to apply the mathematical methods discussed in this book to their own favourite problems to understand their strengths and weaknesses, also making the text suitable as a self-study. This text is designed as an introduction to uncertainty quantification for senior undergraduate and graduate students with a mathematical or statistical back...
International Nuclear Information System (INIS)
This report presents an oil project valuation under uncertainty by means of two well-known financial techniques: the Capital Asset Pricing Model (CAPM) and the Black-Scholes option pricing formula. CAPM gives a linear positive relationship between expected rate of return and risk, but does not take into consideration the aspect of flexibility, which is crucial for an irreversible investment such as an oil project. Introduction of investment decision flexibility by using real options can increase the oil project value substantially. Some simple tests of the importance of stock market uncertainty for oil investments are performed. Uncertainty in stock returns is correlated with aggregate product market uncertainty, according to Pindyck (1991). The results of the tests are not satisfactory due to the short data series, but introducing two other explanatory variables, the interest rate and Gross Domestic Product, improves the situation. 36 refs., 18 figs., 6 tabs
Evaluating prediction uncertainty
Energy Technology Data Exchange (ETDEWEB)
McKay, M.D. [Los Alamos National Lab., NM (United States)
1995-03-01
The probability distribution of a model prediction is presented as a proper basis for evaluating the uncertainty in a model prediction that arises from uncertainty in input values. Determination of important model inputs and subsets of inputs is made through comparison of the prediction distribution with conditional prediction probability distributions. Replicated Latin hypercube sampling and variance ratios are used in estimation of the distributions and in construction of importance indicators. The assumption of a linear relation between model output and inputs is not necessary for the indicators to be effective. A sequential methodology which includes an independent validation step is applied in two analysis applications to select subsets of input variables which are the dominant causes of uncertainty in the model predictions. Comparison with results from methods which assume linearity shows how those methods may fail. Finally, suggestions for treating structural uncertainty for submodels are presented.
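The variance-ratio importance indicators described above can be illustrated with a toy model. Everything below (the model, the Latin hypercube construction, the bin count) is an illustrative assumption, not taken from the report; the ratio Var(E[y|x_i])/Var(y) is estimated by binning each input:

```python
import numpy as np

def lhs(n_samples, n_dims, rng):
    """Latin hypercube sample on [0,1]^d: one point per stratum in each dimension."""
    u = np.empty((n_samples, n_dims))
    for j in range(n_dims):
        u[:, j] = (rng.permutation(n_samples) + rng.random(n_samples)) / n_samples
    return u

def variance_ratio(x, y, n_bins=20):
    """Estimate Var(E[y|x]) / Var(y) by binning x (a crude correlation ratio)."""
    bins = np.quantile(x, np.linspace(0, 1, n_bins + 1))
    idx = np.clip(np.digitize(x, bins[1:-1]), 0, n_bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(n_bins)])
    counts = np.array([(idx == b).sum() for b in range(n_bins)])
    between = np.sum(counts * (cond_means - y.mean()) ** 2) / len(y)
    return between / y.var()

rng = np.random.default_rng(0)
x = lhs(20_000, 3, rng)
# Toy model: input 0 dominates, input 2 is almost inert.
y = 4.0 * x[:, 0] + 1.0 * x[:, 1] ** 2 + 0.1 * x[:, 2]
ratios = [variance_ratio(x[:, j], y) for j in range(3)]
```

The indicator correctly ranks input 0 as the dominant cause of prediction uncertainty even though the model is mildly nonlinear in input 1.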
Commonplaces and social uncertainty
DEFF Research Database (Denmark)
Lassen, Inger
2008-01-01
This article explores the concept of uncertainty in four focus group discussions about genetically modified food. In the discussions, members of the general public interact with food biotechnology scientists while negotiating their attitudes towards genetic engineering. Their discussions offer...
Trajectories without quantum uncertainties
Polzik, Eugene S
2014-01-01
Common knowledge suggests that trajectories of particles in quantum mechanics always have quantum uncertainties. These quantum uncertainties, set by the Heisenberg uncertainty principle, limit the precision of measurements of fields and forces, and ultimately give rise to the standard quantum limit in metrology. With the rapid development of measurement sensitivity, these limits have been approached in various types of measurements, including measurements of fields and acceleration. Here we show that a quantum trajectory of one system measured relative to another "reference system" with an effective negative mass can be quantum-uncertainty-free. The method crucially relies on the generation of an Einstein-Podolsky-Rosen entangled state of two objects, one of which has an effective negative mass. From a practical perspective, these ideas open the way towards force and acceleration measurements at new levels of sensitivity, far below the standard quantum limit.
Uncertainty, rationality, and agency
Hoek, Wiebe van der
2006-01-01
Goes across 'classical' borderlines of disciplines. Unifies logic, game theory, and epistemics and studies them in an agent setting. Combines classical and novel approaches to uncertainty, rationality, and agency.
Communicating spatial uncertainty to non-experts using R
Luzzi, Damiano; Sawicka, Kasia; Heuvelink, Gerard; de Bruin, Sytze
2016-04-01
Effective visualisation methods are important for the efficient use of uncertainty information by various groups of users. Uncertainty propagation analysis is often used with spatial environmental models to quantify the uncertainty within the information. A challenge arises when trying to effectively communicate the uncertainty information to non-experts (non-statisticians) in a wide range of cases. Due to the growing popularity and applicability of the open source programming language R, we undertook a project to develop an R package that facilitates uncertainty propagation analysis in spatial environmental modelling. The package implements Monte Carlo algorithms for uncertainty propagation, the output of which is represented by an ensemble of model outputs (i.e. a sample from a probability distribution). Numerous visualisation methods exist that aim to present such spatial uncertainty information statically, dynamically and interactively. To provide the most universal visualisation tools for non-experts, we conducted a survey of a group of 20 university students and assessed the effectiveness of selected static and interactive methods for visualising uncertainty in spatial variables such as DEM and land cover. The static methods included adjacent maps and glyphs for continuous variables. Both allow for displaying maps with information about the ensemble mean, variance/standard deviation and prediction intervals. Adjacent maps were also used for categorical data, displaying maps of the most probable class, as well as its associated probability. The interactive methods included a graphical user interface, which in addition to displaying the previously mentioned variables also allowed for comparison of joint uncertainties at multiple locations. The survey indicated that users could understand the basics of the uncertainty information displayed in the static maps, with the interactive interface allowing for more in-depth information. Subsequently, the R...
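The package described above is written in R; purely as a language-neutral illustration of the Monte Carlo ensemble and the per-cell summaries (mean, standard deviation, prediction interval) that feed the adjacent-map and glyph displays, a minimal Python sketch with a made-up two-by-two "DEM" and a toy slope model might look like:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical DEM tile (metres) with spatially uncorrelated elevation error,
# sd = 2 m. A real analysis would typically use a spatially correlated error model.
dem_mean = np.array([[100.0, 102.0], [101.0, 105.0]])
dem_sd = 2.0
n_runs = 5000

# Monte Carlo ensemble: sample an input realisation, run the (toy) model on it.
ensemble = np.empty((n_runs,) + dem_mean.shape)
for r in range(n_runs):
    dem_r = dem_mean + dem_sd * rng.standard_normal(dem_mean.shape)
    ensemble[r] = np.gradient(dem_r)[0]  # toy "slope" model output

# Per-cell summaries for adjacent-map / glyph visualisations.
out_mean = ensemble.mean(axis=0)
out_sd = ensemble.std(axis=0)
pi_low, pi_high = np.percentile(ensemble, [5, 95], axis=0)
```

The ensemble is exactly the "sample from a probability distribution" mentioned above; the three summary layers are what the static visualisation methods display side by side.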
Sustainability and uncertainty
DEFF Research Database (Denmark)
Jensen, Karsten Klint
2007-01-01
The widely used concept of sustainability is seldom precisely defined, and its clarification involves making up one's mind about a range of difficult questions. One line of research (bottom-up) takes sustaining a system over time as its starting point and then infers prescriptions from this requi...... are decisions under uncertainty. There might be different judgments on likelihoods; but even given some set of probabilities, there might be disagreement on the right level of precaution in face of the uncertainty....
Uncertainty in Environmental Economics
Robert S. Pindyck
2006-01-01
In a world of certainty, the design of environmental policy is relatively straightforward, and boils down to maximizing the present value of the flow of social benefits minus costs. But the real world is one of considerable uncertainty -- over the physical and ecological impact of pollution, over the economic costs and benefits of reducing it, and over the discount rates that should be used to compute present values. The implications of uncertainty are complicated by the fact that most enviro...
Conundrums with uncertainty factors.
Cooke, Roger
2010-03-01
The practice of uncertainty factors as applied to noncancer endpoints in the IRIS database harkens back to traditional safety factors. In the era before risk quantification, these were used to build in a "margin of safety." As risk quantification takes hold, the safety factor methods yield to quantitative risk calculations to guarantee safety. Many authors believe that uncertainty factors can be given a probabilistic interpretation as ratios of response rates, and that the reference values computed according to the IRIS methodology can thus be converted to random variables whose distributions can be computed with Monte Carlo methods, based on the distributions of the uncertainty factors. Recent proposals from the National Research Council echo this view. Based on probabilistic arguments, several authors claim that the current practice of uncertainty factors is overprotective. When interpreted probabilistically, uncertainty factors entail very strong assumptions on the underlying response rates. For example, the factor for extrapolating from animal to human is the same whether the dosage is chronic or subchronic. Together with independence assumptions, these assumptions entail that the covariance matrix of the logged response rates is singular. In other words, the accumulated assumptions entail a log-linear dependence between the response rates. This in turn means that any uncertainty analysis based on these assumptions is ill-conditioned; it effectively computes uncertainty conditional on a set of zero probability. The practice of uncertainty factors is due for a thorough review. Two directions are briefly sketched, one based on standard regression models, and one based on nonparametric continuous Bayesian belief nets. PMID:20030767
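The Monte Carlo computation that the abstract discusses (and criticizes) is easy to reproduce in outline: treat each uncertainty factor as a lognormal random variable and multiply. The factor names, medians and geometric standard deviations below are invented for illustration; the closed-form check uses the fact that a product of independent lognormals is lognormal:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Hypothetical lognormal uncertainty factors (median, geometric sd):
# animal-to-human, human variability, subchronic-to-chronic.
factors = {"interspecies": (3.0, 2.0), "intraspecies": (3.0, 2.0), "duration": (2.0, 1.5)}

total = np.ones(n)
for median, gsd in factors.values():
    total *= rng.lognormal(mean=np.log(median), sigma=np.log(gsd), size=n)

# Medians multiply and log-variances add, so the combined GSD is
# exp(sqrt(sum(ln(gsd_i)^2))) under the independence assumption.
median_mc = np.exp(np.mean(np.log(total)))
gsd_mc = np.exp(np.std(np.log(total)))
gsd_theory = np.exp(np.sqrt(sum(np.log(g) ** 2 for _, g in factors.values())))
```

Note that this sketch embodies precisely the independence assumptions the abstract argues are too strong; it illustrates the computation, not an endorsement of it.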
Uncertainty calculations made easier
Energy Technology Data Exchange (ETDEWEB)
Hogenbirk, A.
1994-07-01
The results are presented of a neutron cross section sensitivity/uncertainty analysis performed in a complicated 2D model of the NET shielding blanket design inside the ITER torus design, surrounded by the cryostat/biological shield as planned for ITER. The calculations were performed with a code system developed at ECN Petten, with which sensitivity/uncertainty calculations become relatively simple. In order to check the deterministic neutron transport calculations (performed with DORT), calculations were also performed with the Monte Carlo code MCNP. Care was taken to model the 2.0 cm wide gaps between two blanket segments, as the neutron flux behind the vacuum vessel is largely determined by neutrons streaming through these gaps. The resulting neutron flux spectra are in excellent agreement up to the end of the cryostat. It is noted that at this position the attenuation of the neutron flux is about 11 orders of magnitude. The uncertainty in the energy-integrated flux at the beginning of the vacuum vessel and at the beginning of the cryostat was determined in the calculations. The uncertainty appears to be strongly dependent on the exact geometry: if the gaps are filled with stainless steel, the neutron spectrum changes strongly, which results in an uncertainty of 70% in the energy-integrated flux at the beginning of the cryostat in the no-gap geometry, compared to an uncertainty of only 5% in the gap geometry. Therefore, it is essential to take into account the exact geometry in sensitivity/uncertainty calculations. Furthermore, this study shows that an improvement of the covariance data is urgently needed in order to obtain reliable estimates of the uncertainties in response parameters in neutron transport calculations. (orig./GL).
Including uncertainty in hazard analysis through fuzzy measures
International Nuclear Information System (INIS)
This paper presents a method for capturing the uncertainty expressed by a Hazard Analysis (HA) expert team when estimating the frequencies and consequences of accident sequences, and provides a sound mathematical framework for propagating this uncertainty to the risk estimates for these accident sequences. The uncertainty is readily expressed as distributions that can visually aid the analyst in determining the extent and source of risk uncertainty in HA accident sequences. The results can also be expressed as single statistics of the distribution, in a manner analogous to expressing a probabilistic distribution as a point-value statistic such as a mean or median. The study discussed here used data collected during the elicitation portion of an HA on a high-level waste transfer process to demonstrate the techniques for capturing uncertainty. These data came from observations of the uncertainty that HA team members expressed in assigning frequencies and consequences to accident sequences during an actual HA. This uncertainty was captured and manipulated using ideas from possibility theory. The result of this study is a practical method for displaying and assessing the uncertainty in the HA team estimates of the frequency and consequences of accident sequences. This uncertainty provides potentially valuable information about accident sequences that typically is lost in the HA process.
Uncertainty analysis in the applications of nuclear probabilistic risk assessment
International Nuclear Information System (INIS)
The aim of this thesis is to propose an approach to model parameter and model uncertainties affecting the results of risk indicators used in the applications of nuclear Probabilistic Risk Assessment (PRA). After studying the limitations of the traditional probabilistic approach to representing uncertainty in a PRA model, a new approach based on the Dempster-Shafer theory has been proposed. The uncertainty analysis process of the proposed approach consists of five main steps. The first step models input parameter uncertainties by belief and plausibility functions according to the data of the PRA model. The second step involves the propagation of parameter uncertainties through the risk model to determine the uncertainties associated with output risk indicators. Model uncertainty is then taken into account in the third step by considering possible alternative risk models. The fourth step is intended firstly to provide decision makers with the information needed for decision making under uncertainty (parametric and model), and secondly to identify the input parameters that have significant uncertainty contributions to the result. The final step allows the process to be continued in a loop by studying the updating of belief functions given new data. The proposed methodology was implemented on a real but simplified application of a PRA model. (author)
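The belief/plausibility machinery underlying the first two steps can be sketched on a toy three-element frame with invented evidence masses (Dempster's combination rule with the usual conflict renormalisation); none of the numbers come from the thesis:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule: combine two mass functions keyed by frozenset focal elements."""
    combined, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

def belief(m, hypothesis):
    """Bel(H): total mass committed to subsets of H."""
    return sum(v for s, v in m.items() if s <= hypothesis)

def plausibility(m, hypothesis):
    """Pl(H): total mass not contradicting H."""
    return sum(v for s, v in m.items() if s & hypothesis)

# Hypothetical elicitation about a parameter's range over the frame {low, mid, high}.
LOW, MID, HIGH = frozenset({"low"}), frozenset({"mid"}), frozenset({"high"})
ALL = LOW | MID | HIGH
m1 = {LOW: 0.6, ALL: 0.4}        # source 1: mostly "low", rest unassigned
m2 = {LOW | MID: 0.7, ALL: 0.3}  # source 2: "low or mid"
m = dempster_combine(m1, m2)
```

The [Bel, Pl] interval is what distinguishes this representation from a single probability: here Bel({low}) = 0.6 while Pl({low}) = 1.0, leaving the remaining mass uncommitted.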
Uncertainties in the simulation of groundwater recharge at different scales
Directory of Open Access Journals (Sweden)
H. Bogena
2005-01-01
Digital spatial data always imply some kind of uncertainty. The source of this uncertainty can be found in their compilation as well as in the conceptual design, which causes a more or less exact abstraction of the real world, depending on the scale under consideration. Within the framework of hydrological modelling, in which numerous data sets from diverse sources of uneven quality are combined, the various uncertainties accumulate. In this study, the GROWA model is taken as an example to examine the effects of different types of uncertainties on the calculated groundwater recharge. Distributed input errors are determined for the parameters slope and aspect using a Monte Carlo approach. Land cover classification uncertainties are analysed by using the conditional probabilities of a remote sensing classification procedure. The uncertainties of data ensembles at different scales and study areas are discussed. The present uncertainty analysis showed that the Gaussian error propagation method is a useful technique for analysing the influence of input data on the simulated groundwater recharge. The uncertainties involved in the land use classification procedure and the digital elevation model can be significant in some parts of the study area. However, for the specific model used in this study it was shown that the precipitation uncertainties have the greatest impact on the total groundwater recharge error.
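The Gaussian error propagation method mentioned above is the first-order Taylor formula sigma_y^2 = sum_i (df/dx_i)^2 * sigma_i^2 for independent inputs. A generic sketch with numerical derivatives, applied to an invented water-balance toy (not the GROWA model; all values in mm/yr are made up):

```python
import numpy as np

def gaussian_error_propagation(f, x0, sigmas, eps=1e-6):
    """First-order (Gaussian) error propagation with central-difference derivatives:
    sigma_y^2 = sum_i (df/dx_i)^2 * sigma_i^2, assuming independent inputs."""
    x0 = np.asarray(x0, dtype=float)
    grads = np.empty_like(x0)
    for i in range(x0.size):
        h = eps * max(1.0, abs(x0[i]))
        xp, xm = x0.copy(), x0.copy()
        xp[i] += h
        xm[i] -= h
        grads[i] = (f(xp) - f(xm)) / (2 * h)
    return float(np.sqrt(np.sum((grads * np.asarray(sigmas)) ** 2)))

# Toy recharge balance: recharge = precipitation - evapotranspiration - runoff.
f = lambda x: x[0] - x[1] - x[2]
sigma = gaussian_error_propagation(f, [800.0, 450.0, 120.0], [80.0, 40.0, 30.0])
```

Consistent with the study's finding, the largest input standard deviation (precipitation here) dominates the propagated recharge error whenever the sensitivities are of comparable size.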
Generalized uncertainty principles
Machluf, Ronny
2008-01-01
The phenomenon in the essence of classical uncertainty principles has been well known since the thirties of the last century. We introduce a new phenomenon which is in the essence of a new notion that we introduce: "Generalized Uncertainty Principles". We show the relation between classical uncertainty principles and generalized uncertainty principles. We generalize the "Landau-Pollak-Slepian" uncertainty principle. Our generalization relates the following two quantities and two scaling parameters: 1) the weighted time spreading $\int_{-\infty}^\infty |f(x)|^2 w_1(x)\,dx$ ($w_1(x)$ is a non-negative function); 2) the weighted frequency spreading $\int_{-\infty}^\infty |\hat{f}(\omega)|^2 w_2(\omega)\,d\omega$; 3) the time weight scale $a$, ${w_1}_a(x)=w_1(xa^{-1})$; and 4) the frequency weight scale $b$, ${w_2}_b(\omega)=w_2(\omega b^{-1})$. A "Generalized Uncertainty Principle" is an inequality that summarizes the constraints on the relations between the two spreading quantities and two scaling parameters. For any two reason...
Directory of Open Access Journals (Sweden)
Tommasi J.
2010-10-01
In the [eV;MeV] energy range, modelling of neutron-induced reactions is based on nuclear reaction models having parameters. Estimation of covariances on cross sections or on nuclear reaction model parameters is a recurrent puzzle in nuclear data evaluation. Major breakthroughs were asked for by nuclear reactor physicists to assess proper uncertainties to be used in applications. In this paper, mathematical methods developed in the CONRAD code [2] will be presented to explain the treatment of all types of uncertainties, including experimental ones (statistical and systematic), and to propagate them to nuclear reaction model parameters or cross sections. The marginalization procedure will thus be exposed, using analytical or Monte Carlo solutions. Furthermore, one major drawback found by reactor physicists is the fact that integral or analytical experiments (reactor mock-ups or simple integral experiments, e.g. ICSBEP, …) were not taken into account sufficiently soon in the evaluation process to remove discrepancies. In this paper, we will describe a mathematical framework to take this kind of information into account properly.
Hierarchical Affinity Propagation
Givoni, Inmar; Frey, Brendan J
2012-01-01
Affinity propagation is an exemplar-based clustering algorithm that finds a set of data-points that best exemplify the data, and associates each datapoint with one exemplar. We extend affinity propagation in a principled way to solve the hierarchical clustering problem, which arises in a variety of domains including biology, sensor networks and decision making in operational research. We derive an inference algorithm that operates by propagating information up and down the hierarchy, and is efficient despite the high-order potentials required for the graphical model formulation. We demonstrate that our method outperforms greedy techniques that cluster one layer at a time. We show that on an artificial dataset designed to mimic the HIV-strain mutation dynamics, our method outperforms related methods. For real HIV sequences, where the ground truth is not available, we show our method achieves better results, in terms of the underlying objective function, and show the results correspond meaningfully to geographi...
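For orientation, the flat (non-hierarchical) algorithm that the paper extends can be sketched directly from the responsibility/availability update equations. The similarity matrix, damping factor and self-preference below are illustrative choices; this is plain message passing, without the hierarchical potentials of the paper:

```python
import numpy as np

def affinity_propagation(S, damping=0.7, iters=200):
    """Flat affinity propagation on a similarity matrix S; returns an exemplar
    index for each point via damped responsibility/availability messages."""
    n = S.shape[0]
    R = np.zeros((n, n))  # responsibilities r(i,k)
    A = np.zeros((n, n))  # availabilities a(i,k)
    for _ in range(iters):
        # r(i,k) = s(i,k) - max_{k' != k} [a(i,k') + s(i,k')]
        AS = A + S
        top = np.argmax(AS, axis=1)
        first = AS[np.arange(n), top]
        AS[np.arange(n), top] = -np.inf
        second = AS.max(axis=1)
        R_new = S - first[:, None]
        R_new[np.arange(n), top] = S[np.arange(n), top] - second
        R = damping * R + (1 - damping) * R_new
        # a(i,k) = min(0, r(k,k) + sum_{i' not in {i,k}} max(0, r(i',k)))
        Rp = np.maximum(R, 0)
        np.fill_diagonal(Rp, np.diag(R))
        col = Rp.sum(axis=0)
        A_new = np.minimum(0.0, col[None, :] - Rp)
        np.fill_diagonal(A_new, col - np.diag(R))
        A = damping * A + (1 - damping) * A_new
    return np.argmax(A + R, axis=1)  # each point's chosen exemplar

# Two well-separated toy clusters; similarity = negative squared distance.
X = np.array([[0.0, 0.0], [0.0, 0.1], [0.1, 0.0],
              [5.0, 5.0], [5.0, 5.1], [5.1, 5.0]])
S = -((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
np.fill_diagonal(S, -1.0)  # self-preference (assumed, not tuned)
labels = affinity_propagation(S)
```

The diagonal preference controls how many exemplars emerge; the hierarchical extension discussed in the paper chains such layers rather than clustering one layer at a time.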
On Uncertainty Quantification in Particle Accelerators Modelling
Adelmann, Andreas
2015-01-01
Using a cyclotron-based model problem, we demonstrate for the first time the applicability and usefulness of an uncertainty quantification (UQ) approach in order to construct surrogate models for quantities such as emittance and energy spread, but also the halo parameter, and construct a global sensitivity analysis together with error propagation and $L_{2}$ error analysis. The model problem is selected in a way that it represents a template for general high-intensity particle accelerator modelling tasks. The presented physics problem has to be seen as hypothetical, with the aim of demonstrating the usefulness and applicability of the presented UQ approach, not of solving a particular problem. The proposed UQ approach is based on sparse polynomial chaos expansions and relies on a small number of high-fidelity particle accelerator simulations. Within this UQ framework, the identification of the most important uncertainty sources is achieved by performing a global sensitivity analysis via computing the so-called Sobol' ...
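The target quantity of such a global sensitivity analysis, the first-order Sobol' index S_i = Var(E[y|x_i])/Var(y), can be checked by brute-force Monte Carlo. The sketch below uses a pick-freeze estimator on an invented additive model, not the sparse polynomial chaos surrogates of the paper:

```python
import numpy as np

def sobol_first_order(f, d, n=200_000, seed=0):
    """Pick-freeze Monte Carlo estimate of first-order Sobol' indices
    S_i = Var(E[y|x_i]) / Var(y) for f with independent U(0,1) inputs."""
    rng = np.random.default_rng(seed)
    a = rng.random((n, d))
    b = rng.random((n, d))
    ya = f(a)
    var = ya.var()
    indices = []
    for i in range(d):
        ab = b.copy()
        ab[:, i] = a[:, i]  # freeze coordinate i, resample all the others
        indices.append(np.mean(ya * (f(ab) - f(b))) / var)
    return np.array(indices)

# Additive toy model with known indices: S_i = c_i^2 / sum_j c_j^2.
coef = np.array([4.0, 2.0, 1.0])
f = lambda x: x @ coef
sobol = sobol_first_order(f, 3)
```

Polynomial chaos methods obtain the same indices analytically from the expansion coefficients with far fewer model evaluations, which is why they suit expensive accelerator simulations.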
Spatio-temporal propagation of cascading overload failures
Zhao, Jichang; Sanhedrai, Hillel; Cohen, Reuven; Havlin, Shlomo
2015-01-01
Different from the direct contact in epidemics spread, overload failures propagate through hidden functional dependencies. Many studies focused on the critical conditions and catastrophic consequences of cascading failures. However, to understand the network vulnerability and mitigate the cascading overload failures, the knowledge of how the failures propagate in time and space is essential but still missing. Here we study the spatio-temporal propagation behavior of cascading overload failures analytically and numerically. The cascading overload failures are found to spread radially from the center of the initial failure with an approximately constant velocity. The propagation velocity decreases with increasing tolerance, and can be well predicted by our theoretical framework with one single correction for all the tolerance values. This propagation velocity is found similar in various model networks and real network structures. Our findings may help to predict and mitigate the dynamics of cascading overload f...
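A cartoon of radial overload propagation can be built on a square lattice: unit loads, capacity 1 + tol, and equal shedding of a failed node's load onto its surviving neighbours. The redistribution rule and parameters are invented for illustration and far simpler than the models of the paper; the point is the roughly constant front velocity at low tolerance and the arrest of the cascade at high tolerance:

```python
import numpy as np

def cascade(L=21, tol=0.2):
    """Overload cascade on an L x L lattice: each node carries unit load with
    capacity 1 + tol; a failed node sheds its load equally onto alive
    neighbours. Returns the front radius (Chebyshev) after each wave."""
    load = np.ones((L, L))
    alive = np.ones((L, L), dtype=bool)
    c = L // 2
    alive[c, c] = False          # initial failure at the centre
    frontier = [(c, c)]
    radii = []
    while frontier:
        for i, j in frontier:    # redistribute this wave's loads
            nbrs = [(i + di, j + dj)
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                    if 0 <= i + di < L and 0 <= j + dj < L and alive[i + di, j + dj]]
            if not nbrs:
                continue
            for ni, nj in nbrs:
                load[ni, nj] += load[i, j] / len(nbrs)
        over = alive & (load > 1.0 + tol)
        frontier = list(zip(*np.nonzero(over)))
        alive[over] = False
        if frontier:
            radii.append(max(max(abs(i - c), abs(j - c)) for i, j in frontier))
    return radii, alive

radii, alive = cascade(L=21, tol=0.2)        # low tolerance: full collapse
radii_hi, alive_hi = cascade(L=21, tol=3.0)  # high tolerance: cascade stops at once
```

In the low-tolerance run the front radius grows monotonically wave by wave, a crude analogue of the constant-velocity radial spreading reported in the paper; raising the tolerance arrests the cascade, matching the reported decrease of propagation velocity with tolerance.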
Compensation of On-call and Fixed-term Employment: the Role of Uncertainty
de Graaf-Zijl, Marloes
2005-01-01
In this paper I analyse the use and compensation of fixed-term and on-call employment contracts in the Netherlands. I use an analytical framework in which wage differentials result from two types of uncertainty. Quantity uncertainty originates from imperfect foresight in future product demand. I argue that workers who take over part of the quantity uncertainty from the employer get higher payments. Quality uncertainty on the other hand originates from the fact that employers are ex-ante unabl...
David, P
2013-01-01
Propagation of Waves focuses on wave propagation around the earth, which is influenced by its curvature, by surface irregularities, and by passage through atmospheric layers that may be refracting, absorbing, or ionized. This book begins by outlining the behavior of waves in the various media and at their interfaces, which simplifies the basic phenomena, such as absorption, refraction, reflection, and interference. Applications to the case of the terrestrial sphere are also discussed as a natural generalization. Following the deliberation on the diffraction of the "ground" wave around the ear...
Temporal scaling in information propagation
Junming Huang; Chao Li; Wen-Qiang Wang; Hua-Wei Shen; Guojie Li; Xue-Qi Cheng
2014-01-01
For the study of information propagation, one fundamental problem is uncovering universal laws governing the dynamics of information propagation. This problem, from the microscopic perspective, is formulated as estimating the propagation probability that a piece of information propagates from one individual to another. Such a propagation probability generally depends on two major classes of factors: the intrinsic attractiveness of information and the interactions between individuals. Despite ...
Uncertainty in Seismic Capacity of Masonry Buildings
Directory of Open Access Journals (Sweden)
Nicola Augenti
2012-07-01
Seismic assessment of masonry structures is plagued by both inherent randomness and model uncertainty. The former is referred to as aleatory uncertainty, the latter as epistemic uncertainty because it depends on the knowledge level. Pioneering studies on reinforced concrete buildings have revealed a significant influence of modeling parameters on seismic vulnerability. However, confidence in mechanical properties of existing masonry buildings is much lower than in the case of reinforcing steel and concrete. This paper is aimed at assessing whether and how uncertainty propagates from material properties to seismic capacity of an entire masonry structure. A typical two-story unreinforced masonry building is analyzed. Based on previous statistical characterization of mechanical properties of existing masonry types, the following random variables have been considered in this study: unit weight, uniaxial compressive strength, shear strength at zero confining stress, Young’s modulus, shear modulus, and available ductility in shear. Probability density functions were implemented to generate a significant number of realizations and static pushover analysis of the case-study building was performed for each vector of realizations, load combination and lateral load pattern. Analysis results show a large dispersion in displacement capacity and lower dispersion in spectral acceleration capacity. This can directly affect decision-making because both design and retrofit solutions depend on seismic capacity predictions. Therefore, engineering judgment should always be used when assessing structural safety of existing masonry constructions against design earthquakes, based on a series of seismic analyses under uncertain parameters.
Working fluid selection for organic Rankine cycles - Impact of uncertainty of fluid properties
DEFF Research Database (Denmark)
Frutiger, Jerome; Andreasen, Jesper Graa; Liu, Wei;
2016-01-01
This study presents a generic methodology to select working fluids for ORC (Organic Rankine Cycles)taking into account property uncertainties of the working fluids. A Monte Carlo procedure is described as a tool to propagate the influence of the input uncertainty of the fluid parameters on the ORC...
UNC32/33, Covariance Matrices from ENDF/B-5 Resonance Parameter Uncertainties
International Nuclear Information System (INIS)
1 - Description of program or function: The programs UNC 32/33 read uncertainty information from cross-section libraries in the ENDF/B-V format (auto-correlations) and convert these uncertainty data to a group structure selected by the user. In the conversion procedure a weighting neutron spectrum is needed. The converted cross-section uncertainty data can be used in adjustment programs and to calculate the uncertainty in calculated reaction rates. 2 - Method of solution: Straightforward application of uncertainty propagation relations. 3 - Restrictions on the complexity of the problem: None detected for the ENDF/B-V dosimetry file
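The "straightforward application of uncertainty propagation relations" is the sandwich rule (dR/R)^2 = s^T C s, where s holds the relative sensitivities of a reaction rate to the group cross sections and C is the relative covariance matrix. A toy four-group version with an invented flux weighting and a block-correlated covariance (not actual ENDF/B-V data):

```python
import numpy as np

# Hypothetical 4-group data: cross sections (barn), weighting flux, and a
# relative covariance matrix (fully correlated 5% within two energy blocks).
xs = np.array([1.2, 0.9, 0.4, 0.1])
flux = np.array([0.1, 0.3, 0.4, 0.2])
block = np.array([0, 0, 1, 1])
rel_cov = 0.05 ** 2 * (block[:, None] == block[None, :])

# Reaction rate R = sum_g phi_g * sigma_g; relative sensitivities s_g = phi_g sigma_g / R.
R = flux @ xs
s = flux * xs / R

# Sandwich rule: (dR/R)^2 = s^T C s.
rel_var = s @ rel_cov @ s
rel_unc = np.sqrt(rel_var)
```

With full correlation inside each block the group contributions add linearly within a block and in quadrature across blocks, which is why the result lies between 5% (one fully correlated block) and 5%/sqrt(2).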
Managing uncertainty in multiple-criteria decision making related to sustainability assessment
DEFF Research Database (Denmark)
Dorini, Gianluca Fabio; Kapelan, Zoran; Azapagic, Adisa
2011-01-01
on a case study which compares the sustainability of two options for electricity generation: coal versus biomass. Different models have been used to quantify their sustainability performance for a number of economic, environmental and social criteria. Three cases are considered with respect to uncertainty......: (1) no uncertainty, (2) uncertainty in data/models and (3) uncertainty in models and decision-makers’ preferences. The results show how characterising and propagating uncertainty can help increase the effectiveness of multi-criteria decision making processes and lead to more informed decisions....
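Case (3), uncertainty in decision-makers' preferences, can be sketched by sampling criterion weights and counting how often one option outranks the other under a weighted sum. The scores, the Dirichlet weight prior and the criteria below are invented, not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 50_000

# Hypothetical normalised scores (higher is better) on
# economic, environmental and social criteria.
scores = {"coal": np.array([0.8, 0.3, 0.5]),
          "biomass": np.array([0.6, 0.7, 0.6])}

# Uncertain preferences: sample weight vectors uniformly over the simplex.
w = rng.dirichlet([1.0, 1.0, 1.0], size=n)

total_coal = w @ scores["coal"]
total_biomass = w @ scores["biomass"]
p_biomass_preferred = np.mean(total_biomass > total_coal)
```

Instead of a single ranking, the output is a degree of confidence in the ranking, the same kind of probabilistic comparison statement used when propagating uncertainty through scenario comparisons.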
Propagating Synchrony in Feed-Forward Networks
Directory of Open Access Journals (Sweden)
Sven eJahnke
2013-11-01
Coordinated patterns of precisely timed action potentials (spikes) emerge in a variety of neural circuits, but their dynamical origin is still not well understood. One hypothesis states that synchronous activity propagating through feed-forward chains of groups of neurons (synfire chains) may dynamically generate such spike patterns. Additionally, synfire chains offer the possibility of enabling reliable signal transmission. So far, mostly densely connected chains, often with all-to-all connectivity between groups, have been studied theoretically and computationally. Yet, such prominent feed-forward structures have not been observed experimentally. Here we analytically and numerically investigate under which conditions diluted feed-forward chains may exhibit synchrony propagation. In addition to conventional linear input summation, we study the impact of nonlinear, non-additive summation accounting for the effect of fast dendritic spikes. The non-linearities promote synchronous inputs to generate precisely timed spikes. We identify how non-additive coupling relaxes the conditions on connectivity such that it enables synchrony propagation at connectivities substantially lower than required for linearly coupled chains. Although the analytical treatment is based on a simple leaky integrate-and-fire neuron model, we show how to generalize our methods to biologically more detailed neuron models and verify our results by numerical simulations with, e.g., Hodgkin-Huxley-type neurons.
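A drastically simplified, binary-threshold caricature of synchrony propagation (a mean-field iteration of the fraction of each group firing, not the leaky integrate-and-fire or Hodgkin-Huxley simulations of the paper; all numbers invented) already exhibits the dilution effect:

```python
import math

def wave_survival(a0, n_groups, w, n, theta):
    """Mean-field synfire iteration: a_{t+1} = P[Binomial(n, w * a_t) >= theta],
    the chance that a neuron in the next group receives at least theta
    synchronous inputs (w = connection probability, n = group size,
    theta = spike threshold in units of synchronous inputs)."""
    a = a0
    history = [a]
    for _ in range(n_groups):
        p = w * a
        a = sum(math.comb(n, k) * p ** k * (1 - p) ** (n - k)
                for k in range(theta, n + 1))
        history.append(a)
    return history

# Dense chain: the synchronous volley survives; diluted chain: it dies out.
dense = wave_survival(a0=0.8, n_groups=20, w=0.6, n=100, theta=30)
diluted = wave_survival(a0=0.8, n_groups=20, w=0.2, n=100, theta=30)
```

Non-additive dendritic amplification effectively lowers theta for synchronous inputs, which in this caricature shifts the survival boundary to smaller connection probabilities w, the relaxation of connectivity conditions that the paper quantifies.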
Network planning under uncertainties
Ho, Kwok Shing; Cheung, Kwok Wai
2008-11-01
One of the main focuses of network planning is the optimization of the network resources required to build a network under a certain traffic demand projection. Traditionally, the inputs to this type of network planning problem are treated as deterministic. In reality, varying traffic requirements and fluctuations in network resources can cause uncertainties in the decision models. Failure to include these uncertainties in the network design process can severely affect the feasibility and economics of the network. Therefore, it is essential to find a solution that is insensitive to the uncertain conditions during the network planning process. As early as the 1960s, a network planning problem with varying traffic requirements over time had been studied. Up to now, this kind of network planning problem is still being actively researched, especially for VPN network design. Another kind of network planning problem under uncertainty that has been studied actively in the past decade addresses fluctuations in network resources. One such hotly pursued research topic is survivable network planning. It considers the design of a network under the uncertainties brought by fluctuations in topology, to meet the requirement that the network remains intact up to a certain number of faults occurring anywhere in the network. Recently, the authors proposed a new planning methodology called Generalized Survivable Network that tackles the network design problem under both varying traffic requirements and fluctuations of topology. Although all the above network planning problems handle various kinds of uncertainties, it is hard to find a generic framework under more general uncertainty conditions that allows a more systematic way to solve the problems. With a unified framework, the seemingly diverse models and algorithms can be intimately related and possibly more insights and improvements can be brought out for solving the problem. This motivates us to seek a...
Interpreting uncertainty terms.
Holtgraves, Thomas
2014-08-01
Uncertainty terms (e.g., some, possible, good, etc.) are words that do not have a fixed referent and hence are relatively ambiguous. A model is proposed that specifies how, from the hearer's perspective, recognition of facework as a potential motive for the use of an uncertainty term results in a calibration of the intended meaning of that term. Four experiments are reported that examine the impact of face threat, and the variables that affect it (e.g., power), on the manner in which a variety of uncertainty terms (probability terms, quantifiers, frequency terms, etc.) are interpreted. Overall, the results demonstrate that increased face threat in a situation will result in a more negative interpretation of an utterance containing an uncertainty term. That the interpretation of so many different types of uncertainty terms is affected in the same way suggests the operation of a fundamental principle of language use, one with important implications for the communication of risk, subjective experience, and so on.
Urrutxua, Hodei; Sanjurjo-Rivo, Manuel; Peláez, Jesús
2016-01-01
In the year 2000 an in-house orbital propagator called DROMO (Peláez et al. in Celest Mech Dyn Astron 97:131-150, 2007. doi: 10.1007/s10569-006-9056-3) was developed by the Space Dynamics Group of the Technical University of Madrid, based on a set of redundant variables including Euler-Rodrigues parameters. An original deduction of the DROMO propagator is carried out, underlining its close relation with the ideal frame concept introduced by Hansen (Abh der Math-Phys Cl der Kon Sachs Ges der Wissensch 5:41-218, 1857). Based on the very same concept, Deprit (J Res Natl Bur Stand Sect B Math Sci 79B(1-2):1-15, 1975) proposed a formulation for orbit propagation. In this paper, similarities and differences with the theory developed by Deprit are analyzed. Simultaneously, some improvements are introduced into the formulation that lead to a more compact and better performing propagator. Also, the long-term effect of the oblateness of the primary is studied in terms of DROMO variables, and new numerical results are presented to evaluate the performance of the method.
Neutrino Propagation Through Matter
Naumov, V. A. (Lab. of Theoretical Physics, Irkutsk State University, Irkutsk, Russia); Perrone, L. (Istituto Nazionale di Fisica Nucleare, Sezione di Firenze, Firenze, Italy)
2015-01-01
We discuss a simple approach to solve the transport equation for high-energy neutrinos in media of any thickness. We present illustrative results obtained with some specific models for the initial spectra of muon neutrinos and antineutrinos propagating through a normal cold medium.
Neutrino Propagation Through Matter
Naumov, V A
1999-01-01
We discuss a simple approach to solve the transport equation for high-energy neutrinos in media of any thickness. We present illustrative results obtained with some specific models for the initial spectra of muon neutrinos and antineutrinos propagating through a normal cold medium.
Nessel, James
2013-01-01
NASA Glenn Research Center has been involved in the characterization of atmospheric effects on space communications links operating at Ka-band and above for the past 20 years. This presentation reports out on the most recent activities of propagation characterization that NASA is currently involved in.
Fuzzy-probabilistic calculations of water-balance uncertainty
Energy Technology Data Exchange (ETDEWEB)
Faybishenko, B.
2009-10-01
Hydrogeological systems are often characterized by imprecise, vague, inconsistent, incomplete, or subjective information, which may limit the application of conventional stochastic methods in predicting hydrogeologic conditions and the associated uncertainty. Instead, predictions and uncertainty analysis can be made using uncertain input parameters expressed as probability boxes, intervals, and fuzzy numbers. The objective of this paper is to present the theory for, and a case study as an application of, the fuzzy-probabilistic approach, combining probability and possibility theory for simulating soil water balance and assessing the associated uncertainty in the components of a simple water-balance equation. The application of this approach is demonstrated using calculations with the RAMAS Risk Calc code to assess the propagation of uncertainty in calculating potential evapotranspiration, actual evapotranspiration, and infiltration in a case study at the Hanford site, Washington, USA. Propagation of uncertainty into the results of water-balance calculations was evaluated by changing the types of uncertainty models incorporated into the various input parameters. The results of these fuzzy-probabilistic calculations are compared to the conventional Monte Carlo simulation approach and to estimates from field observations at the Hanford site.
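As a minimal illustration of the interval arm of such fuzzy-probabilistic calculations, the sketch below propagates interval-valued inputs through a one-term water-balance relation. The variable names and bounds are illustrative assumptions, not Hanford-site data or RAMAS Risk Calc output.

```python
# Minimal sketch: propagate interval-valued inputs through a simple
# water-balance relation I = P - AET (all values in mm/yr).
# The intervals below are assumed for illustration only.

def interval_sub(a, b):
    """Subtract interval b from interval a: [a_lo - b_hi, a_hi - b_lo]."""
    return (a[0] - b[1], a[1] - b[0])

P = (160.0, 200.0)    # precipitation interval (assumed)
AET = (120.0, 180.0)  # actual evapotranspiration interval (assumed)

I = interval_sub(P, AET)  # infiltration interval
print(I)  # (-20.0, 80.0): the input uncertainty allows zero or negative recharge
```

The widening of the output interval relative to the inputs is the interval-arithmetic analogue of the uncertainty propagation the abstract describes; fuzzy numbers extend this by attaching a membership level to each such interval.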
Instability Versus Equilibrium Propagation of Laser Beam in Plasma
Lushnikov, Pavel M.; Rose, Harvey A.
2003-01-01
We obtain, for the first time, an analytic theory of the forward stimulated Brillouin scattering instability of a spatially and temporally incoherent laser beam, that controls the transition between statistical equilibrium and non-equilibrium (unstable) self-focusing regimes of beam propagation. The stability boundary may be used as a comprehensive guide for inertial confinement fusion designs. Well into the stable regime, an analytic expression for the angular diffusion coefficient is obtain...
Sources of uncertainties in modelling black carbon at the global scale
Vignati, E.; Karl, M.; Krol, M.; Wilson, J. (School of Physics and Astronomy, University of Birmingham, Birmingham, United Kingdom); Stier, P.; Cavalli, F.
2010-01-01
Our understanding of the global black carbon (BC) cycle is essentially qualitative due to uncertainties in our knowledge of its properties. This work investigates two sources of uncertainty in modelling black carbon: those due to the use of different schemes for BC ageing and its removal rate in the global Transport-Chemistry model TM5, and those due to the uncertainties in the definition and quantification of the observations, which propagate through to both the emission inventories, and the...
Institute of Scientific and Technical Information of China (English)
扈庆; 李显芳; 张继蓉; 印成; 李亚莹
2013-01-01
Measurement uncertainty refers to the random uncertainty inherent in the measurement process; a measurement result is complete only when both the measured value and its uncertainty are reported. This paper describes the microwave digestion of municipal wastewater treatment plant sludge samples; the zinc concentration in the digested samples was measured by flame atomic absorption spectrometry (FAAS). The measurement uncertainty was evaluated following the guide to the evaluation and expression of uncertainty in measurement. The sources of uncertainty in the measurement were analyzed, and every uncertainty component was evaluated and quantified. The combined standard uncertainty and the expanded uncertainty were then calculated. The result of the measurement of zinc in the sludge sample was expressed as (465±11) mg/kg, k=2.
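The combination step reported above follows the standard GUM pattern: sum the standard uncertainty components in quadrature, then multiply by a coverage factor k = 2 to obtain the expanded uncertainty. A sketch with assumed component values (not those of the paper):

```python
import math

# GUM-style combination: standard uncertainty components u_i are summed in
# quadrature and expanded with coverage factor k = 2.
# The component values below are illustrative assumptions, not the paper's.

components = [4.0, 3.0, 2.0]  # u_i in mg/kg (assumed)
u_c = math.sqrt(sum(u ** 2 for u in components))  # combined standard uncertainty
U = 2 * u_c                                       # expanded uncertainty, k = 2

print(round(u_c, 2), round(U, 1))  # 5.39 10.8
```

With k = 2, the reported ±11 mg/kg corresponds to a combined standard uncertainty of 5.5 mg/kg.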
Strategies for Application of Isotopic Uncertainties in Burnup Credit
Energy Technology Data Exchange (ETDEWEB)
Gauld, I.C.
2002-12-23
Uncertainties in the predicted isotopic concentrations in spent nuclear fuel represent one of the largest sources of overall uncertainty in criticality calculations that use burnup credit. The methods used to propagate the uncertainties in the calculated nuclide concentrations to the uncertainty in the predicted neutron multiplication factor (k{sub eff}) of the system can have a significant effect on the uncertainty in the safety margin in criticality calculations and ultimately affect the potential capacity of spent fuel transport and storage casks employing burnup credit. Methods that can provide a more accurate and realistic estimate of the uncertainty may enable increased spent fuel cask capacity and fewer casks needing to be transported, thereby reducing the regulatory burden on licensees while maintaining safety for transporting spent fuel. This report surveys several different best-estimate strategies for considering the effects of nuclide uncertainties in burnup-credit analyses. The potential benefits of these strategies are illustrated for a prototypical burnup-credit cask design. The subcritical margin estimated using best-estimate methods is discussed in comparison to the margin estimated using conventional bounding methods of uncertainty propagation. To quantify the comparison, each of the strategies for estimating uncertainty has been applied to a common database of spent fuel isotopic assay measurements for pressurized light-water reactor fuels and to predicted nuclide concentrations obtained using the current version of the SCALE code system. The experimental database applied in this study has been significantly expanded to include new high-enrichment and high-burnup spent fuel assay data recently published for a wide range of important burnup-credit actinides and fission products. Expanded rare-earth fission-product measurements performed at the Khlopin Radium Institute in Russia contain the only known publicly-available measurement for {sup 103
Measurement uncertainty relations
Energy Technology Data Exchange (ETDEWEB)
Busch, Paul, E-mail: paul.busch@york.ac.uk [Department of Mathematics, University of York, York (United Kingdom); Lahti, Pekka, E-mail: pekka.lahti@utu.fi [Turku Centre for Quantum Physics, Department of Physics and Astronomy, University of Turku, FI-20014 Turku (Finland); Werner, Reinhard F., E-mail: reinhard.werner@itp.uni-hannover.de [Institut für Theoretische Physik, Leibniz Universität, Hannover (Germany)
2014-04-15
Measurement uncertainty relations are quantitative bounds on the errors in an approximate joint measurement of two observables. They can be seen as a generalization of the error/disturbance tradeoff first discussed heuristically by Heisenberg. Here we prove such relations for the case of two canonically conjugate observables like position and momentum, and establish a close connection with the more familiar preparation uncertainty relations constraining the sharpness of the distributions of the two observables in the same state. Both sets of relations are generalized to means of order α rather than the usual quadratic means, and we show that the optimal constants are the same for preparation and for measurement uncertainty. The constants are determined numerically and compared with some bounds in the literature. In both cases, the near-saturation of the inequalities entails that the state (resp. observable) is uniformly close to a minimizing one.
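For orientation, the quadratic-mean case can be summarized as below. The notation (Δ for preparation spreads, ε for measurement errors) is an assumed shorthand for the abstract's statement that the optimal constants for preparation and measurement uncertainty coincide:

```latex
% Preparation uncertainty: spreads of Q and P in a single state \rho
\Delta_\rho(Q)\,\Delta_\rho(P) \;\ge\; \frac{\hbar}{2}

% Measurement uncertainty: errors of an approximate joint measurement of Q
% and P, with the same optimal constant in the quadratic (order-2) case
\varepsilon(Q)\,\varepsilon(P) \;\ge\; \frac{\hbar}{2}
```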
SAGD optimization under uncertainty
Energy Technology Data Exchange (ETDEWEB)
Gossuin, J.; Naccache, P. [Schlumberger SIS, Abingdon (United Kingdom); Bailley, W.; Couet, B. [Schlumberger-Doll Research, Cambridge, MA, (United States)
2011-07-01
In the heavy oil industry, the steam assisted gravity drainage process is often used to enhance oil recovery but this is a costly method and ways to make it more efficient are needed. Multiple methods have been developed to optimize the SAGD process but none of them explicitly considered uncertainty. This paper presents an optimization method in the presence of reservoir uncertainty. This process was tested on an SAGD model where three equi-probable geological models are possible. Preparatory steps were first performed to identify key variables and the optimization model was then proposed. The method was shown to be successful in handling a significant number of uncertainties, optimizing the SAGD process and preventing premature steam channels that can choke production. The optimization method presented herein was successfully applied to an SAGD process and was shown to provide better strategies than sensitivity analysis while handling more complex problems.
Treatment of precipitation uncertainty in rainfall-runoff modelling: a fuzzy set approach
Maskey, Shreedhar; Guinot, Vincent; Price, Roland K.
2004-09-01
The uncertainty in forecasted precipitation remains a major source of uncertainty in real time flood forecasting. Precipitation uncertainty consists of uncertainty in (i) the magnitude, (ii) temporal distribution, and (iii) spatial distribution of the precipitation. This paper presents a methodology for propagating the precipitation uncertainty through a deterministic rainfall-runoff-routing model for flood forecasting. It uses fuzzy set theory combined with genetic algorithms. The uncertainty due to the unknown temporal distribution of the precipitation is achieved by disaggregation of the precipitation into subperiods. The methodology based on fuzzy set theory is particularly useful where a probabilistic forecast of precipitation is not available. A catchment model of the Klodzko valley (Poland) built with HEC-1 and HEC-HMS was used for the application. The results showed that the output uncertainty due to the uncertain temporal distribution of precipitation can be significantly dominant over the uncertainty due to the uncertain quantity of precipitation.
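A hedged sketch of the alpha-cut mechanics underlying such fuzzy propagation, using a triangular fuzzy precipitation depth pushed through a crisp runoff coefficient as a stand-in (the paper's actual model is an HEC-1/HEC-HMS catchment model; all values below are assumptions):

```python
# Alpha-cut propagation of a triangular fuzzy number through a monotone
# relation Q = c * P. At each membership level alpha, the fuzzy input is an
# interval, and the output interval is obtained by evaluating the model at
# its endpoints. Values are illustrative assumptions.

def alpha_cut(tri, alpha):
    """Interval of a triangular fuzzy number (lo, mode, hi) at level alpha."""
    lo, m, hi = tri
    return (lo + alpha * (m - lo), hi - alpha * (hi - m))

P = (20.0, 30.0, 45.0)  # fuzzy precipitation depth in mm (assumed)
c = 0.6                 # crisp runoff coefficient (assumed)

for alpha in (0.0, 0.5, 1.0):
    p_lo, p_hi = alpha_cut(P, alpha)
    print(alpha, (c * p_lo, c * p_hi))  # output interval narrows as alpha -> 1
```

At alpha = 1 the interval collapses to the modal value, while alpha = 0 gives the full support; stacking these intervals reconstructs the fuzzy output that the paper propagates through the rainfall-runoff-routing chain.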
Serenity in political uncertainty.
Doumit, Rita; Afifi, Rema A; Devon, Holli A
2015-01-01
College students are often faced with academic and personal stressors that threaten their well-being. Added to that may be political and environmental stressors such as acts of violence on the streets, interruptions in schooling, car bombings, targeted religious intimidations, financial hardship, and uncertainty of obtaining a job after graduation. Research on how college students adapt to the latter stressors is limited. The aims of this study were (1) to investigate the associations between stress, uncertainty, resilience, social support, withdrawal coping, and well-being for Lebanese youth during their first year of college and (2) to determine whether these variables predicted well-being. A sample of 293 first-year students enrolled in a private university in Lebanon completed a self-reported questionnaire in the classroom setting. The mean age of sample participants was 18.1 years, with nearly an equal percentage of males and females (53.2% vs 46.8%), who lived with their family (92.5%), and whose family reported high income levels (68.4%). Multiple regression analyses revealed that best determinants of well-being are resilience, uncertainty, social support, and gender that accounted for 54.1% of the variance. Despite living in an environment of frequent violence and political uncertainty, Lebanese youth in this study have a strong sense of well-being and are able to go on with their lives. This research adds to our understanding on how adolescents can adapt to stressors of frequent violence and political uncertainty. Further research is recommended to understand the mechanisms through which young people cope with political uncertainty and violence. PMID:25658930
Uncertainty and validation. Effect of user interpretation on uncertainty estimates
International Nuclear Information System (INIS)
Uncertainty in the predictions of environmental transfer models arises from, among other sources, the adequacy of the conceptual model, the approximations made in coding the conceptual model, the quality of the input data, the uncertainty in parameter values, and the assumptions made by the user. In recent years, efforts to quantify the confidence that can be placed in predictions have been increasing, but have concentrated on a statistical propagation of the influence of parameter uncertainties on the calculational results. The primary objective of this Working Group of BIOMOVS II was to test the user's influence on model predictions on a more systematic basis than has been done before. The main goals were as follows: to compare differences between predictions from different people all using the same model and the same scenario description with the statistical uncertainties calculated by the model; to investigate the main reasons for different interpretations by users; and to create a better awareness of the potential influence of the user on the modeling results. Terrestrial food chain models driven by deposition of radionuclides from the atmosphere were used. Three codes were obtained and run with three scenarios by a maximum of 10 users. A number of conclusions can be drawn, some of which are general and independent of the type of models and processes studied, while others are restricted to the few processes that were addressed directly: For any set of predictions, the variation in best estimates was greater than one order of magnitude. Often the range increased from deposition to pasture to milk, probably due to additional transfer processes. The 95% confidence intervals about the predictions calculated from the parameter distributions prepared by the participants did not always overlap the observations; similarly, sometimes the confidence intervals on the predictions did not overlap. Often the 95% confidence intervals of individual predictions were smaller than the
Sensitivity and uncertainty analysis
Cacuci, Dan G; Navon, Ionel Michael
2005-01-01
As computer-assisted modeling and analysis of physical processes have continued to grow and diversify, sensitivity and uncertainty analyses have become indispensable scientific tools. Sensitivity and Uncertainty Analysis. Volume I: Theory focused on the mathematical underpinnings of two important methods for such analyses: the Adjoint Sensitivity Analysis Procedure and the Global Adjoint Sensitivity Analysis Procedure. This volume concentrates on the practical aspects of performing these analyses for large-scale systems. The applications addressed include two-phase flow problems, a radiative c
Weighted Uncertainty Relations
Xiao, Yunlong; Jing, Naihuan; Li-Jost, Xianqing; Fei, Shao-Ming
2016-03-01
Recently, Maccone and Pati have given two stronger uncertainty relations based on the sum of variances; one of them is nontrivial when the quantum state is not an eigenstate of the sum of the observables. We derive a family of weighted uncertainty relations to provide an optimal lower bound for all situations and remove the restriction on the quantum state. A generalization to multi-observable cases is also given, and an optimal lower bound for the weighted sum of the variances is obtained in the general quantum setting.
Uncertainty in artificial intelligence
Levitt, TS; Lemmer, JF; Shachter, RD
1990-01-01
Clearly illustrated in this volume is the current relationship between Uncertainty and AI. It has been said that research in AI revolves around five basic questions asked relative to some particular domain: What knowledge is required? How can this knowledge be acquired? How can it be represented in a system? How should this knowledge be manipulated in order to provide intelligent behavior? How can the behavior be explained? In this volume, all of these questions are addressed. From the perspective of the relationship of uncertainty to the basic questions of AI, the book divides naturally i
Song, I.; Rathbun, A. P.; Saffer, D. M.
2011-12-01
Transient fluid flow through rock is governed by two hydraulic properties: permeability (k) and the specific storage (Ss), which are often determined by the pulse-transient technique when k is extremely low (e.g. k flow through the sample to the downstream end. The rock properties, k and Ss, can be determined by time-based recording of only one variable, the pressure change in each reservoir. Experimental error during data acquisition propagates through the data reduction process, leading to uncertainty in experimental results. In addition, unlike steady-state systems, the pressure-time curves are influenced by the compressive storage of the reservoirs and by both the dimensions and properties of the sample. Thus, uncertainty in k and Ss may arise from errors in measurement of sample dimensions, fluid pressure, or reservoir storages. In this study, the uncertainty in sample dimensions is considered to be negligible, and reasonable error ranges in pressure and system storage measurements are considered. We first calculated the pressure errors (P) induced by the difference between assumed, or experimentally measured, values of k and Ss and their true values. Based on this result, the sensitivity coefficient (∂k/∂P and ∂Ss/∂P) is theoretically ~10 in percentage terms, i.e. a 1% error in the pulse on average during a test cycle produces ~10% uncertainty in k and Ss. The sensitivity coefficient may become larger when the ratio of sample storage to upstream reservoir storage is extremely small. We also examined the sensitivity of experimental error in measuring the storage capacity of the system reservoirs to uncertainty in the resulting values of k and Ss. Because the reservoirs are typically small for tight rock samples and irregular in shape due to the combination of tubing, fittings, valves, and pressure transducers, this uncertainty should be much greater than that of pressure. Our analysis reveals that a 20% error in measurement of upstream storage causes ~5-15% uncertainty in
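The ~10x amplification quoted above can be checked on the back of an envelope with an idealized single-reservoir pulse decay, P(t) = P0 * exp(-c*k*t), used here as an assumed stand-in for the full two-reservoir solution. Inverting that relation for k from one pressure sample amplifies a relative pressure error by roughly 1/|ln(P/P0)|:

```python
import math

# Sensitivity sketch: how a small relative pressure error maps to a relative
# error in permeability k when fitting an idealized exponential pulse decay
# P(t) = P0 * exp(-c*k*t). The decay ratio is an illustrative assumption.

P0 = 1.0
ratio = 0.90                        # assumed observed decay P/P0 over the cycle
k_true = -math.log(ratio)           # in units where c*t = 1
k_bad = -math.log(ratio * 1.01)     # same sample with a +1% pressure error

amplification = abs(k_bad - k_true) / k_true / 0.01
print(round(amplification, 1))  # roughly 10: a 1% pressure error -> ~10% in k
```

The amplification grows as the observed decay shrinks, consistent with the abstract's note that the sensitivity coefficient becomes larger when the sample-to-reservoir storage ratio is very small.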
Photon propagation in slowly varying electromagnetic fields
Karbstein, Felix
2016-01-01
We study the effective theory of soft photons in slowly varying electromagnetic background fields at one-loop order in QED. This is of relevance for the study of all-optical signatures of quantum vacuum nonlinearity in realistic electromagnetic background fields as provided by high-intensity lasers. The central result derived in this article is a new analytical expression for the photon polarization tensor in two linearly polarized counter-propagating pulsed Gaussian laser beams. As we treat the peak field strengths of both laser beams as free parameters this field configuration can be considered as interpolating between the limiting cases of a purely right- or left-moving laser beam (if one of the peak field strengths is set to zero) and the standing-wave type scenario with two counter-propagating beams of equal strength.
Atomic Uncertainties and their Effects on Astrophysical Diagnostics
Sutherland, Robert; Loch, Stuart; Foster, Adam; Smith, Randall
2015-05-01
The astrophysics and laboratory plasma modeling communities have been requesting meaningful uncertainties on atomic data for some time. These would allow them to determine the uncertainties due to the atomic data on a range of plasma diagnostic quantities and to explain some important discrepancies. In recent years there has been much talk, although relatively little progress, on this for theoretical cross-section calculations. We present here a method of generating ``baseline'' uncertainties on atomic data, for use in astrophysical modeling. The uncertainty data were used in a modified version of the APEC spectral emission code to carry these uncertainties on fundamental atomic data through to uncertainties in astrophysical diagnostics, such as fractional abundances and emissivities, providing uncertainties on line ratios. We use a Monte Carlo method to propagate the uncertainties through to the emissivities, which was tested using a variety of distribution functions. As an illustration of the usefulness of the method, we show results for oxygen and compare with an existing line ratio diagnostic that has a currently debated discrepancy.
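The Monte Carlo step can be sketched as follows: sample multiplicative (lognormal) uncertainties on two line emissivities and propagate them to the line ratio. The 20% and 30% uncertainty levels, the base emissivities, and the seed are assumptions for illustration, not APEC data.

```python
import random
import statistics

# Monte Carlo propagation of assumed lognormal uncertainties on two line
# emissivities (base values 4.0 and 2.0, relative 1-sigma of 20% and 30%)
# through to their ratio.

rng = random.Random(0)
n = 20000
ratio = [
    (4.0 * rng.lognormvariate(0.0, 0.20)) / (2.0 * rng.lognormvariate(0.0, 0.30))
    for _ in range(n)
]

med = statistics.median(ratio)
spread = statistics.stdev(ratio)
print(round(med, 2), round(spread, 2))  # median near 2, with a broad spread
```

The resulting sample of ratios is exactly the kind of distribution from which an uncertainty band on a diagnostic line ratio can be read off.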
Simulation of excitation and propagation of pico-second ultrasound
Energy Technology Data Exchange (ETDEWEB)
Yang, Seung Yong; Kim, No Kyu [Dept. of Mechanical Engineering, Korea University of Technology and Education, Chunan (Korea, Republic of)
2014-12-15
This paper presents an analytic and numerical simulation of the generation and propagation of pico-second ultrasound with nano-scale wavelength, enabling the production of bulk waves in thin films. An analytic model of laser-matter interaction and elasto-dynamic wave propagation is introduced to calculate the elastic strain pulse in microstructures. The model includes the laser-pulse absorption on the material surface, the transfer of photon energy to the elastic energy of phonons, and acoustic wave propagation, to formulate the governing equations of ultra-short ultrasound. The excitation and propagation of acoustic pulses produced by ultra-short laser pulses are numerically simulated for an aluminum substrate using the finite-difference method and compared with the analytical solution. Furthermore, Fourier analysis was performed to investigate the frequency spectrum of the simulated elastic wave pulse. It is concluded that a pico-second bulk wave with a very high frequency of up to hundreds of gigahertz is successfully generated in metals using a 100-fs laser pulse and that it can propagate in the thickness direction for film thicknesses less than 100 nm.
DEFF Research Database (Denmark)
Greasley, David; Madsen, Jakob B.
2006-01-01
A severe collapse of fixed capital formation distinguished the onset of the Great Depression from other investment downturns between the world wars. Using a model estimated for the years 1890-2000, we show that the expected profitability of capital measured by Tobin's q, and the uncertainty...
International Nuclear Information System (INIS)
The notion of 'risk' is discussed in its social and technological contexts, leading to an investigation of the terms factuality, hypotheticality, uncertainty, and vagueness, and to the problems of acceptance and acceptability especially in the context of political decision finding. (DG)
Uncertainty In Quantum Computation
Kak, Subhash
2002-01-01
We examine the effect of previous history on starting a computation on a quantum computer. Specifically, we assume that the quantum register has some unknown state on it, and it is required that this state be cleared and replaced by a specific superposition state without any phase uncertainty, as needed by quantum algorithms. We show that, in general, this task is computationally impossible.
Proportional Representation with Uncertainty
Francesco De Sinopoli; Giovanna Iannantuoni; Elena Manzoni; Carlos Pimienta
2014-01-01
We introduce a model with strategic voting in a parliamentary election with proportional representation and uncertainty about voters’ preferences. In any equilibrium of the model, most voters only vote for those parties whose positions are extreme. In the resulting parliament, a consensus government forms and the policy maximizing the sum of utilities of the members of the government is implemented.
Risk, Uncertainty, and Entrepreneurship
DEFF Research Database (Denmark)
Koudstaal, martin; Sloof, Randolph; Van Praag, Mirjam
2015-01-01
Theory predicts that entrepreneurs have distinct attitudes toward risk and uncertainty, but empirical evidence is mixed. To better understand these mixed results, we perform a large “lab-in-the-field” experiment comparing entrepreneurs to managers (a suitable comparison group) and employees (n = ...
Institute of Scientific and Technical Information of China (English)
范梦璇
2015-01-01
Employee change-related uncertainty is a condition that arises because, in the current continually changing business environment, organizations also have to change; the changes include strategic direction, structure, and staffing levels that help a company stay competitive (Armenakis & Bedeian, 1999). However, these
Cettolin, E.; Riedl, A.M.
2013-01-01
An important element for the public support of policies is their perceived justice. At the same time most policy choices have uncertain outcomes. We report the results of a first experiment investigating just allocations of resources when some recipients are exposed to uncertainty. Although, under c
Vegetative propagation of jojoba
Energy Technology Data Exchange (ETDEWEB)
Low, C.B.; Hackett, W.P.
1981-03-01
Development of jojoba as an economically viable crop requires improved methods of propagation and culture. Rooting experiments were performed on cutting material collected from wild jojoba plants. A striking seasonal fluctuation in rooting potential was found. Jojoba plants can be successfully propagated from stem cuttings made during spring, summer, and, to some extent, fall. Variability among jojoba plants may also play a role in rooting potential, although it is not as important as season. In general, the use of auxin (4,000 ppm indolebutyric acid) on jojoba cuttings during periods of high rooting potential promotes adventitious root formation, but during periods of low rooting potential it has no effect or is even slightly inhibitory. In the greenhouse, cutting-grown plants apparently reached reproductive maturity sooner than those grown from seed. If this observation holds true for plants transplanted into the field, earlier fruit production by cutting-grown plants would mean an earlier return on initial planting and maintenance costs.
Sensitivity and uncertainty analysis of the PATHWAY radionuclide transport model
International Nuclear Information System (INIS)
Procedures were developed for the uncertainty and sensitivity analysis of a dynamic model of radionuclide transport through human food chains. Uncertainty in model predictions was estimated by propagation of parameter uncertainties using a Monte Carlo simulation technique. Sensitivity of model predictions to individual parameters was investigated using the partial correlation coefficient of each parameter with model output. Random values produced for the uncertainty analysis were used in the correlation analysis for sensitivity. These procedures were applied to the PATHWAY model which predicts concentrations of radionuclides in foods grown in Nevada and Utah and exposed to fallout during the period of atmospheric nuclear weapons testing in Nevada. Concentrations and time-integrated concentrations of iodine-131, cesium-136, and cesium-137 in milk and other foods were investigated. 9 figs., 13 tabs
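The two-step procedure described above can be sketched with a toy model: propagate parameter uncertainty by Monte Carlo, then rank parameter importance by a (partial) correlation of each parameter with the output. The model y = a * exp(-b) and its parameter distributions are assumptions standing in for the PATHWAY food-chain model.

```python
import math
import random
import statistics

# Monte Carlo uncertainty propagation plus partial-correlation sensitivity
# ranking for an assumed two-parameter toy model y = a * exp(-b).

def pearson(u, v):
    mu, mv = statistics.fmean(u), statistics.fmean(v)
    num = sum((x - mu) * (z - mv) for x, z in zip(u, v))
    den = math.sqrt(sum((x - mu) ** 2 for x in u) * sum((z - mv) ** 2 for z in v))
    return num / den

rng = random.Random(42)
a = [rng.lognormvariate(0.0, 0.3) for _ in range(5000)]  # transfer factor (assumed)
b = [rng.gauss(1.0, 0.1) for _ in range(5000)]           # loss-rate parameter (assumed)
y = [ai * math.exp(-bi) for ai, bi in zip(a, b)]         # model output samples

r_ya, r_yb, r_ab = pearson(y, a), pearson(y, b), pearson(a, b)
# partial correlation of y with a, controlling for b
pcc_a = (r_ya - r_yb * r_ab) / math.sqrt((1 - r_yb ** 2) * (1 - r_ab ** 2))
print(round(pcc_a, 2))  # close to 1: the output is dominated by parameter a
```

Reusing the same random samples for both the uncertainty estimate (the spread of y) and the correlation-based sensitivity ranking mirrors the procedure the abstract describes.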
A Stochastic Nonlinear Water Wave Model for Efficient Uncertainty Quantification
Bigoni, Daniele; Eskilsson, Claes
2014-01-01
A major challenge in next-generation industrial applications is to improve numerical analysis by quantifying uncertainties in predictions. In this work we present a stochastic formulation of a fully nonlinear and dispersive potential flow water wave model for the probabilistic description of the evolution of waves. This model is discretized using the Stochastic Collocation Method (SCM), which provides an approximate surrogate of the model. This can be used to accurately and efficiently estimate the probability distribution of the unknown time-dependent stochastic solution after the forward propagation of uncertainties. We revisit experimental benchmarks often used for the validation of deterministic water wave models. We do this using a fully nonlinear and dispersive model and show how uncertainty in the model input can influence the model output. Based on numerical experiments and assumed uncertainties in boundary data, our analysis reveals that some of the known discrepancies from deterministic simulation in compa...
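The collocation idea can be shown in miniature: evaluate a deterministic model at a few Gauss-Hermite nodes of a Gaussian random input and combine the results with quadrature weights to estimate output statistics. The 3-point rule and the toy "model" below are illustrative assumptions, not the paper's wave solver.

```python
import math

# Stochastic collocation sketch: estimate E[model(X)] for X ~ N(0, 1) using
# a 3-point probabilists' Gauss-Hermite quadrature rule.

nodes = [-math.sqrt(3.0), 0.0, math.sqrt(3.0)]
weights = [1.0 / 6.0, 2.0 / 3.0, 1.0 / 6.0]

def model(xi):
    """Toy deterministic solver: output amplitude for input perturbation xi."""
    return math.exp(0.3 * xi)

mean_scm = sum(w * model(x) for w, x in zip(weights, nodes))
mean_exact = math.exp(0.3 ** 2 / 2.0)  # E[exp(0.3 X)] for X ~ N(0,1)
print(round(mean_scm, 4), round(mean_exact, 4))  # agree to ~1e-4
```

Only three deterministic solves are needed here, which is the efficiency argument for SCM over brute-force Monte Carlo when the stochastic dimension is small.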
Infrared finite electron propagator
International Nuclear Information System (INIS)
We investigate the properties of a dressed electron which reduces, in a particular class of gauges, to the usual fermion. A one-loop calculation of the propagator is presented. We show explicitly that an infrared finite, multiplicative, mass shell renormalization is possible for this dressed electron, or, equivalently, for the usual fermion in the above-mentioned gauges. The results are in complete accord with previous conjectures. copyright 1997 The American Physical Society
Propagation of tides in the Mandovi and Zuari estuaries
Digital Repository Service at National Institute of Oceanography (India)
Shetye, S.R.
An analytic model for tidal propagation in a channel whose cross-sectional area decreases exponentially with distance from the mouth, and which has an influx of riverine freshwater at its head, is formulated. The model solution mimics the observed features (a)-(c) cited...
Late time tail of wave propagation on curved spacetime
Ching, E S C; Suen, W M; Young, K; Ching, E S C; Leung, P T; Suen, W M; Young, K
1994-01-01
The late time behavior of waves propagating on a general curved spacetime is studied. The late time tail is not necessarily an inverse power of time. Our work extends, places in context, and provides understanding for the known results for the Schwarzschild spacetime. Analytic and numerical results are in excellent agreement.
Seismic wave propagation in fractured media: A discontinuous Galerkin approach
De Basabe, Jonás D.
2011-01-01
We formulate and implement a discontinuous Galerkin method for elastic wave propagation that allows for discontinuities in the displacement field to simulate fractures or faults using the linear-slip model. We show numerical results using a 2D model with one linear-slip discontinuity and different frequencies. The results show a good agreement with analytic solutions. © 2011 Society of Exploration Geophysicists.
International Nuclear Information System (INIS)
The Analytic Hierarchy Process (AHP) has been used to help determine the importance of components and phenomena in thermal-hydraulic safety analyses of nuclear reactors. The AHP results are based, in part, on expert opinion. Therefore, it is prudent to evaluate the uncertainty of the AHP ranks of importance. Prior applications have addressed uncertainty with experimental data comparisons and bounding sensitivity calculations. These methods work well when a sufficient experimental data base exists to justify the comparisons. However, in the case of limited or no experimental data, the size of the uncertainty is normally made conservatively large. Accordingly, the author has taken another approach, that of performing a statistically based uncertainty analysis. The new work is based on prior evaluations of the importance of components and phenomena in the thermal-hydraulic safety analysis of the Advanced Neutron Source Reactor (ANSR), a new facility now in the design phase. The uncertainty during large-break loss-of-coolant and decay heat removal scenarios is estimated by assigning a probability distribution function (pdf) to the potential error in the initial expert estimates of pair-wise importance between the components. Using a Monte Carlo sampling technique, the error pdfs are propagated through the AHP software solutions to determine a pdf of uncertainty in the system-wide importance of each component. To enhance the generality of the results, a study of one other problem having a different number of elements is reported, as are the effects of a larger assumed pdf error in the expert ranks. Validation of the Monte Carlo sample size and repeatability are also documented.
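The Monte Carlo procedure described above (perturb the expert pair-wise judgments, re-solve the AHP eigenvector problem, and collect the resulting importance weights) can be sketched as follows. The pairwise matrix, the lognormal error model, and the 20% spread are illustrative assumptions, not the report's actual inputs.

```python
import numpy as np

def ahp_weights(A):
    """Principal-eigenvector priority weights of a pairwise comparison matrix."""
    vals, vecs = np.linalg.eig(A)
    v = np.abs(np.real(vecs[:, np.argmax(np.real(vals))]))
    return v / v.sum()

def mc_ahp(A, rel_sigma=0.2, n_samples=2000, seed=0):
    """Assign a lognormal error pdf to each pairwise judgment and propagate it
    to the AHP weights by Monte Carlo sampling."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    iu = np.triu_indices(n, k=1)
    samples = np.empty((n_samples, n))
    for s in range(n_samples):
        P = np.ones_like(A)
        noise = rng.lognormal(mean=0.0, sigma=rel_sigma, size=len(iu[0]))
        P[iu] = A[iu] * noise
        P[(iu[1], iu[0])] = 1.0 / P[iu]   # keep the matrix reciprocal
        samples[s] = ahp_weights(P)
    return samples.mean(axis=0), samples.std(axis=0)

# Hypothetical 3x3 expert judgment matrix
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
mean_w, std_w = mc_ahp(A)
```

The standard deviations `std_w` play the role of the report's pdf of uncertainty in each component's system-wide importance.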
Energy Technology Data Exchange (ETDEWEB)
Maas, Axel [University of Graz, Institute of Physics, Graz (Austria)
2015-03-01
Two popular perspectives on the non-perturbative domain of Yang-Mills theories are either in terms of the gluons themselves or in terms of collective gluonic excitations, i.e. topological excitations. If both views are correct, then they are only two different representations of the same underlying physics. One possibility to investigate this connection is by the determination of gluon correlation functions in topological background fields, as created by the smearing of lattice configurations. This is performed here for the minimal Landau gauge gluon propagator, ghost propagator, and running coupling, both in momentum and position space for SU(2) Yang-Mills theory. The results show that the salient low-momentum features of the propagators are qualitatively retained under smearing at sufficiently small momenta, in agreement with an equivalence of both perspectives. However, the mid-momentum behavior is significantly affected. These results are also relevant for the construction of truncations in functional methods, as they provide hints on necessary properties to be retained in truncations. (orig.)
Propagating waves along spicules
Okamoto, Takenori J
2011-01-01
Alfv\\'enic waves are thought to play an important role in coronal heating and acceleration of solar wind. Here we investigated the statistical properties of Alfv\\'enic waves along spicules (jets that protrude into the corona) in a polar coronal hole using high cadence observations of the Solar Optical Telescope (SOT) onboard \\emph{Hinode}. We developed a technique for the automated detection of spicules and high-frequency waves. We detected 89 spicules, and found: (1) a mix of upward propagating, downward propagating, as well as standing waves (occurrence rates of 59%, 21%, and 20%, respectively). (2) The phase speed gradually increases with height. (3) Upward waves dominant at lower altitudes, standing waves at higher altitudes. (4) Standing waves dominant in the early and late phases of each spicule, while upward waves were dominant in the middle phase. (5) In some spicules, we find waves propagating upward (from the bottom) and downward (from the top) to form a standing wave in the middle of the spicule. (...
DEFF Research Database (Denmark)
Abdul Samad, Noor Asma Fazli Bin; Sin, Gürkan
2013-01-01
The objective of this study is to test and validate a Process Analytical Technology (PAT) system design on a potassium dichromate crystallization process in the presence of input uncertainties using uncertainty and sensitivity analysis. To this end a systematic framework for managing uncertaintie...
Nonradiative limitations to plasmon propagation in chains of metallic nanoparticles
Brandstetter-Kunc, Adam; Downing, Charles A; Weinmann, Dietmar; Jalabert, Rodolfo A
2016-01-01
We investigate the collective plasmonic modes in a chain of metallic nanoparticles that are coupled by near-field interactions. The size- and momentum-dependent nonradiative Landau damping and radiative decay rates are calculated analytically within an open quantum system approach. These decay rates determine the excitation propagation along the chain. In particular, the behavior of the radiative decay rate as a function of the plasmon wavelength leads to a transition from an exponential decay of the collective excitation for short distances to an algebraic decay for large distances. Importantly, we show that the exponential decay is of a purely nonradiative origin. Our transparent model enables us to provide analytical expressions for the polarization-dependent plasmon excitation profile along the chain and for the associated propagation length. Our theoretical analysis constitutes an important step in the quest for the optimal conditions for plasmonic propagation in nanoparticle chains.
Uncertainty in mapping urban air quality using crowdsourcing techniques
Schneider, Philipp; Castell, Nuria; Lahoz, William; Bartonova, Alena
2016-04-01
Small and low-cost sensors measuring various air pollutants have become available in recent years owing to advances in sensor technology. Such sensors have significant potential for improving high-resolution mapping of air quality in the urban environment as they can be deployed in comparatively large numbers and therefore are able to provide information at unprecedented spatial detail. However, such sensor devices are subject to significant and currently little understood uncertainties that affect their usability. Not only do these devices exhibit random errors and biases of occasionally substantial magnitudes, but these errors may also shift over time. In addition, there often tends to be significant inter-sensor variability even when supposedly identical sensors from the same manufacturer are used. We need to quantify accurately these uncertainties to make proper use of the information they provide. Furthermore, when making use of the data and producing derived products such as maps, the measurement uncertainties that propagate throughout the analysis need to be clearly communicated to the scientific and non-scientific users of the map products. Based on recent experiences within the EU-funded projects CITI-SENSE and hackAIR we discuss the uncertainties along the entire processing chain when using crowdsourcing techniques for mapping urban air quality. Starting with the uncertainties exhibited by the sensors themselves, we present ways of quantifying the error characteristics of a network of low-cost microsensors and show suitable statistical metrics for summarizing them. Subsequently, we briefly present a data-fusion-based method for mapping air quality in the urban environment and illustrate how we propagate the uncertainties of the individual sensors throughout the mapping system, resulting in detailed maps that document the pixel-level uncertainty for each concentration field. Finally, we present methods for communicating the resulting spatial uncertainty
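The per-sensor error characteristics described above (bias, random error, and inter-sensor variability against a collocated reference) can be summarized with a few statistical metrics. The sensor offsets and noise levels below are synthetic, and the choice of metrics is a generic sketch rather than the exact CITI-SENSE procedure.

```python
import numpy as np

def sensor_error_stats(sensor_readings, reference):
    """Per-sensor bias and RMSE against a reference instrument, plus the
    inter-sensor spread at each time step."""
    sensor_readings = np.asarray(sensor_readings, dtype=float)  # (n_sensors, n_times)
    reference = np.asarray(reference, dtype=float)              # (n_times,)
    err = sensor_readings - reference
    bias = err.mean(axis=1)
    rmse = np.sqrt((err ** 2).mean(axis=1))
    inter_sensor_sd = sensor_readings.std(axis=0, ddof=1)
    return bias, rmse, inter_sensor_sd

# Synthetic example: three "identical" sensors with different offsets and noise
rng = np.random.default_rng(1)
truth = 20 + 5 * np.sin(np.linspace(0, 6, 200))
readings = np.stack([truth + off + rng.normal(0, sd, truth.size)
                     for off, sd in [(0.5, 1.0), (-1.0, 1.5), (2.0, 0.8)]])
bias, rmse, spread = sensor_error_stats(readings, truth)
```

The recovered biases approximate the injected offsets, illustrating how supposedly identical sensors can be characterized individually before their uncertainties are propagated into a map product.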
Effective propagation in a perturbed periodic structure
International Nuclear Information System (INIS)
In a recent paper [D. Torrent, A. Hakansson, F. Cervera, and J. Sanchez-Dehesa, Phys. Rev. Lett. 96, 204302 (2006)], the authors inspected the effective parameters of a cluster containing an ensemble of scatterers with a periodic or a weakly disordered arrangement. A small amount of disorder is shown to have a small influence on the characteristics of the acoustic wave propagation with respect to the periodic case. In this Brief Report, we inspect further the effect of a deviation of the scatterer distribution from the periodic one. The quasicrystalline approximation is shown to be an efficient tool to quantify this effect. An analytical formula for the effective wave number is obtained for a one-dimensional acoustic medium and is compared with the Berryman result in the low-frequency limit. Direct numerical calculations show good agreement with the analytical predictions.
On the cosmological propagation of high energy particles in magnetic fields
International Nuclear Information System (INIS)
In the present work the connection between high energy particles and cosmic magnetic fields is explored. Particularly, the focus lies on the propagation of ultra-high energy cosmic rays (UHECRs) and very-high energy gamma rays (VHEGRs) over cosmological distances, under the influence of cosmic magnetic fields. The first part of this work concerns the propagation of UHECRs in the magnetized cosmic web, which was studied both analytically and numerically. A parametrization for the suppression of the UHECR flux at energies around 10^18 eV due to diffusion in extragalactic magnetic fields was found, making it possible to set an upper limit on the energy at which this magnetic horizon effect sets in, which is
MO-E-BRE-01: Determination, Minimization and Communication of Uncertainties in Radiation Therapy
Energy Technology Data Exchange (ETDEWEB)
Van Dyk, J; Palta, J; Bortfeld, T; Mijnheer, B [Western University, London, ON (Canada)
2014-06-15
Medical Physicists have a general understanding of uncertainties in the radiation treatment process, both with respect to dosimetry and geometry. However, there is a desire to be more quantitative about uncertainty estimation. A recent International Atomic Energy Agency (IAEA) report (about to be published) recommends that we should be as “accurate as reasonably achievable, technical and biological factors being taken into account”. Thus, a single recommendation as a goal for accuracy in radiation therapy is an oversimplification. That report also suggests that individual clinics should determine their own level of uncertainties for their specific treatment protocols. The question is “how do we implement this in clinical practice”? AAPM Monograph 35 (2011 AAPM Summer School) addressed many specific aspects of uncertainties in each of the steps of a course of radiation treatment. The intent of this symposium is: (1) to review uncertainty considerations in the entire radiation treatment process including uncertainty determination for each step and uncertainty propagation for the total process, (2) to consider aspects of robust optimization which optimizes treatment plans while protecting them against uncertainties, and (3) to describe various methods of displaying uncertainties and communicating uncertainties to the relevant professionals. While the theoretical and research aspects will also be described, the emphasis will be on the practical considerations for the medical physicist in clinical practice. Learning Objectives: To review uncertainty determination in the overall radiation treatment process. To consider uncertainty modeling and uncertainty propagation. To highlight the basic ideas and clinical potential of robust optimization procedures to generate optimal treatment plans that are not severely affected by uncertainties. To describe methods of uncertainty communication and display.
Investment choice under uncertainty: A review essay
Directory of Open Access Journals (Sweden)
Trifunović Dejan
2005-01-01
An investment opportunity whose return is perfectly predictable hardly exists at all. Instead, an investor makes decisions under conditions of uncertainty. The theory of expected utility is the main analytical tool for describing choice under uncertainty. Critics of the theory contend that individuals have bounded rationality and that the theory of expected utility is not correct. When agents are faced with risky decisions they behave differently, conditional on their attitude towards risk: they can be risk loving, risk averse, or risk neutral. In order to make an investment decision it is necessary to compare probability distribution functions of returns. Investment decision making is much simpler if one uses expected values and variances instead of probability distribution functions.
Enhanced phase mixing of Alfv\\'en waves propagating in stratified and divergent coronal structures
Smith, P. D.; Tsiklauri, D.; Ruderman, M. S.
2007-01-01
Corrected analytical solutions describing the enhanced phase mixing of Alfvén waves propagating in divergent stratified coronal structures are presented. These show that the enhanced phase mixing mechanism can dissipate Alfvén waves at heights less than half those predicted by the previous analytical solutions. The enhanced phase mixing of 0.1 Hz harmonic Alfvén waves propagating in strongly divergent, H_b = 5 Mm, stratified coronal structures, H_rho = 50 Mm, can fulfill 100% of an active regio...
Dynamical quarks effects on the gluon propagation and chiral symmetry restoration
Bashir, A; Rodríguez-Quintero, J
2014-01-01
We exploit recent lattice results for the infrared gluon propagator with light dynamical quarks and solve the gap equation for the quark propagator. The order parameters for chiral symmetry breaking and confinement (intimately tied to the analytic properties of the QCD Schwinger functions) are then studied.
Wood, Alexander
2004-01-01
deterministic methods for Hg TMDL decision support, one that is fully compatible with an adaptive management approach. This alternative approach uses empirical data and informed judgment to provide a scientific and technical basis for helping National Pollutant Discharge Elimination System (NPDES) permit holders make management decisions. An Hg-offset system would be an option if a wastewater-treatment plant could not achieve NPDES permit requirements for HgT reduction. We develop a probabilistic decision-analytical model consisting of three submodels for HgT loading, MeHg, and cost mitigation within a Bayesian network that integrates information of varying rigor and detail into a simple model of a complex system. Hg processes are identified and quantified by using a combination of historical data, statistical models, and expert judgment. Such an integrated approach to uncertainty analysis allows easy updating of prediction and inference when observations of model variables are made. We demonstrate our approach with data from the Cache Creek watershed (a subbasin of the Sacramento River watershed). The empirical models used to generate the needed probability distributions are based on the same empirical models currently being used by the Central Valley Regional Water Quality Control Board Cache Creek Hg TMDL working group. The significant difference is that input uncertainty and error are explicitly included in the model and propagated throughout its algorithms. This work demonstrates how to integrate uncertainty into the complex and highly uncertain Hg TMDL decision-making process. The various sources of uncertainty are propagated as decision risk that allows decision-makers to simultaneously consider uncertainties in remediation/implementation costs while attempting to meet environmental/ecologic targets. We must note that this research is ongoing. As more data are collected, the HgT and cost-mitigation submodels are updated and the uncer
Handbook of management under uncertainty
2001-01-01
A mere few years ago it would have seemed odd to propose a handbook on the treatment of management problems within a sphere of uncertainty. Even today, on the threshold of the third millennium, this statement may provoke a certain wariness. In fact, to resort to exact or random data, that is probable data, is quite normal and convenient, as we then best know where we are going, where we are proposing to go if all occurs as it is conceived and hoped for. To treat uncertain information, to accept a new principle and criteria determined from it, without being sure of oneself and confiding only in the will to better understand objects and phenomena, constitutes a compromise with a new form of understanding the behaviour of current beings that goes even further than simple rationality. Economic science, and particularly the use of its elements of configuration in the world of management, has imbued several generations with an analytical spirit that has given rise to the elaboration of theories widely accept...
Uncertainty Principle Respects Locality
Wang, Dongsheng
2013-01-01
The notion of nonlocality implicitly implies that there might be some kind of spooky action at a distance in nature; however, the validity of quantum mechanics has been well tested up to now. In this work it is argued that the notion of nonlocality is physically improper and that the basic principle of locality in nature is well respected by quantum mechanics, namely, through the uncertainty principle. We show that the quantum bound on the Clauser, Horne, Shimony, and Holt (CHSH) inequality can be recovered from the uncertainty relation in a multipartite setting, and that the same bound exists classically, which indicates that nonlocality does not capture the essence of the quantum and does not properly distinguish quantum mechanics from classical mechanics. We further argue that the super-quantum correlation demonstrated by the nonlocal box is not physically comparable with the quantum one; as a result, the physical foundation for the existence of nonlocality is falsified. The origin of the quantum structure of nature still remains to be exp...
Propagation of Airy Gaussian vortex beams in uniaxial crystals
Weihao, Yu; Ruihuang, Zhao; Fu, Deng; Jiayao, Huang; Chidao, Chen; Xiangbo, Yang; Yanping, Zhao; Dongmei, Deng
2016-04-01
The propagation dynamics of Airy Gaussian vortex beams in uniaxial crystals orthogonal to the optical axis has been investigated analytically and numerically. The propagation expression of the beams has been obtained. The propagation features of the Airy Gaussian vortex beams are shown with changes of the distribution factor and of the ratio of the extraordinary refractive index to the ordinary refractive index. The correlations between this ratio and the maximum intensity value during propagation, and the distance at which that maximum appears, have been investigated. Project supported by the National Natural Science Foundation of China (Grant Nos. 11374108, 11374107, 10904041, and 11547212), the Foundation of Cultivating Outstanding Young Scholars of Guangdong Province, China, the CAS Key Laboratory of Geospace Environment, University of Science and Technology of China, the National Training Program of Innovation and Entrepreneurship for Undergraduates (Grant No. 2015093), and the Science and Technology Projects of Guangdong Province, China (Grant No. 2013B031800011).
Commwarrior worm propagation model for smart phone networks
Institute of Scientific and Technical Information of China (English)
XIA Wei; LI Zhao-hui; CHEN Zeng-qiang; YUAN Zhu-zhi
2008-01-01
The Commwarrior worm is capable of spreading through both Bluetooth and the multimedia messaging service (MMS) in smart phone networks. According to the propagation characteristics of Bluetooth and MMS, we built the susceptible-exposed-infected-recovered-dormancy (SEIRD) model for the Bluetooth and MMS hybrid spread mode and performed the stability analysis. The simulation results show good correlation with our theoretical analysis and demonstrate the effectiveness of this dynamic propagation model. On the basis of the SEIRD model, we further discuss at length the influence of propagation parameters such as the user gathering density in groups, the moving velocity of smart phones, the time for the worm to replicate itself, and other interrelated parameters on the propagation of the virus. On the basis of these analytical and simulation results, some feasible control strategies are proposed to restrain the spread of mobile worms such as Commwarrior on smart phone networks.
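A generic compartment model of the S-E-I-R-D (dormancy) type can be integrated with a simple forward-Euler step. The couplings and rate constants below are illustrative assumptions, not the exact equations of the cited SEIRD model.

```python
def seird_step(state, beta, sigma, gamma, delta, rho, dt):
    """One forward-Euler step of a generic S-E-I-R-D compartment model.
    Rate names and couplings are illustrative, not the paper's equations."""
    S, E, I, R, D = state
    N = S + E + I + R + D
    new_exposed   = beta * S * I / N   # contact-driven infection (Bluetooth/MMS)
    new_infected  = sigma * E          # exposed devices become actively infected
    new_recovered = gamma * I          # devices patched or cleaned
    new_dormant   = delta * I          # worm goes dormant (e.g., phone switched off)
    reactivated   = rho * D            # dormant devices re-enter the infected pool
    dS = -new_exposed
    dE = new_exposed - new_infected
    dI = new_infected - new_recovered - new_dormant + reactivated
    dR = new_recovered
    dD = new_dormant - reactivated
    return tuple(x + dt * dx for x, dx in zip(state, (dS, dE, dI, dR, dD)))

# 10 infected phones seeded into a population of 10,000; integrate to t = 200
state = (9990.0, 0.0, 10.0, 0.0, 0.0)
for _ in range(2000):
    state = seird_step(state, beta=0.4, sigma=0.3, gamma=0.05,
                       delta=0.02, rho=0.01, dt=0.1)
```

Because every outflow from one compartment is an inflow to another, the total population is conserved at each step, which is a useful sanity check on any such model.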
Uncertainty in artificial intelligence
Shachter, RD; Henrion, M; Lemmer, JF
1990-01-01
This volume, like its predecessors, reflects the cutting edge of research on the automation of reasoning under uncertainty. A more pragmatic emphasis is evident, for although some papers address fundamental issues, the majority address practical issues. Topics include the relations between alternative formalisms (including possibilistic reasoning), Dempster-Shafer belief functions, non-monotonic reasoning, Bayesian and decision theoretic schemes, and new inference techniques for belief nets. New techniques are applied to important problems in medicine, vision, robotics, and natural language und
Cost uncertainty for different levels of technology maturity
International Nuclear Information System (INIS)
It is difficult at best to apply a single methodology for estimating cost uncertainties related to technologies of differing maturity. While highly mature technologies may have significant performance and manufacturing cost data available, less well developed technologies may be defined in only conceptual terms. Regardless of the degree of technical maturity, often a cost estimate relating to application of the technology may be required to justify continued funding for development. Yet, a cost estimate without its associated uncertainty lacks the information required to assess the economic risk. For this reason, it is important for the developer to provide some type of uncertainty along with a cost estimate. This study demonstrates how different methodologies for estimating uncertainties can be applied to cost estimates for technologies of different maturities. For a less well developed technology an uncertainty analysis of the cost estimate can be based on a sensitivity analysis; whereas, an uncertainty analysis of the cost estimate for a well developed technology can be based on an error propagation technique from classical statistics. It was decided to demonstrate these uncertainty estimation techniques with (1) an investigation of the additional cost of remediation due to beyond baseline, nearly complete, waste heel retrieval from underground storage tanks (USTs) at Hanford; and (2) the cost related to the use of crystalline silicotitanate (CST) rather than the baseline CS100 ion exchange resin for cesium separation from UST waste at Hanford.
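The classical error propagation technique mentioned for mature technologies is first-order Taylor expansion: for independent inputs, sigma_f^2 ≈ sum_i (df/dx_i)^2 * sigma_i^2. A minimal numerical sketch follows; the cost model and the numbers in it are hypothetical, not taken from the Hanford estimates.

```python
import math

def propagate_uncertainty(f, x, sigma, h=1e-6):
    """First-order (Taylor series) error propagation for independent inputs,
    with partial derivatives estimated by central differences."""
    var = 0.0
    for i, (xi, si) in enumerate(zip(x, sigma)):
        step = h * max(abs(xi), 1.0)
        xp, xm = list(x), list(x)
        xp[i] += step
        xm[i] -= step
        dfdx = (f(xp) - f(xm)) / (2 * step)
        var += (dfdx * si) ** 2
    return math.sqrt(var)

# Hypothetical cost model: unit_cost * volume + fixed_cost
cost = lambda p: p[0] * p[1] + p[2]
sigma_cost = propagate_uncertainty(cost,
                                   x=[12.0, 300.0, 5000.0],      # nominal inputs
                                   sigma=[1.0, 20.0, 500.0])     # 1-sigma inputs
```

For this bilinear model the partials are exact (300, 12, and 1), so the combined 1-sigma cost uncertainty is sqrt(300^2 + 240^2 + 500^2) ≈ 630.6 in the same cost units.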
Calibration Under Uncertainty.
Energy Technology Data Exchange (ETDEWEB)
Swiler, Laura Painton; Trucano, Timothy Guy
2005-03-01
This report is a white paper summarizing the literature and different approaches to the problem of calibrating computer model parameters in the face of model uncertainty. Model calibration is often formulated as finding the parameters that minimize the squared difference between the model-computed data (the predicted data) and the actual experimental data. This approach does not allow for explicit treatment of uncertainty or error in the model itself: the model is considered the "true" deterministic representation of reality. While this approach does have utility, it is far from an accurate mathematical treatment of the true model calibration problem in which both the computed data and experimental data have error bars. This year, we examined methods to perform calibration accounting for the error in both the computer model and the data, as well as improving our understanding of its meaning for model predictability. We call this approach Calibration under Uncertainty (CUU). This talk presents our current thinking on CUU. We outline some current approaches in the literature, and discuss the Bayesian approach to CUU in detail.
Uncertainty Quantification of Calculated Temperatures for the AGR-1 Experiment
Energy Technology Data Exchange (ETDEWEB)
Binh T. Pham; Jeffrey J. Einerson; Grant L. Hawkes
2013-03-01
This report documents an effort to quantify the uncertainty of the calculated temperature data for the first Advanced Gas Reactor (AGR-1) fuel irradiation experiment conducted in the INL’s Advanced Test Reactor (ATR) in support of the Next Generation Nuclear Plant (NGNP) R&D program. Recognizing uncertainties inherent in physics and thermal simulations of the AGR-1 test, the results of the numerical simulations can be used in combination with the statistical analysis methods to improve qualification of measured data. Additionally, the temperature simulation data for AGR tests can be used for validation of the fuel transport and fuel performance simulation models. The crucial roles of the calculated fuel temperatures in ensuring achievement of the AGR experimental program objectives require accurate determination of the model temperature uncertainties. The report is organized into three chapters. Chapter 1 introduces the AGR Fuel Development and Qualification program and provides overviews of AGR-1 measured data, AGR-1 test configuration and test procedure, and thermal simulation. Chapter 2 describes the uncertainty quantification procedure for temperature simulation data of the AGR-1 experiment, namely, (i) identify and quantify uncertainty sources; (ii) perform sensitivity analysis for several thermal test conditions; (iii) use uncertainty propagation to quantify overall response temperature uncertainty. A set of issues associated with modeling uncertainties resulting from the expert assessments are identified. This also includes the experimental design to estimate the main effects and interactions of the important thermal model parameters. Chapter 3 presents the overall uncertainty results for the six AGR-1 capsules. This includes uncertainties for the daily volume-average and peak fuel temperatures, daily average temperatures at TC locations, and time-average volume-average and time-average peak fuel temperatures.
Uncertainty Quantification of Calculated Temperatures for the AGR-1 Experiment
Energy Technology Data Exchange (ETDEWEB)
Binh T. Pham; Jeffrey J. Einerson; Grant L. Hawkes
2012-04-01
This report documents an effort to quantify the uncertainty of the calculated temperature data for the first Advanced Gas Reactor (AGR-1) fuel irradiation experiment conducted in the INL's Advanced Test Reactor (ATR) in support of the Next Generation Nuclear Plant (NGNP) R&D program. Recognizing uncertainties inherent in physics and thermal simulations of the AGR-1 test, the results of the numerical simulations can be used in combination with the statistical analysis methods to improve qualification of measured data. Additionally, the temperature simulation data for AGR tests can be used for validation of the fuel transport and fuel performance simulation models. The crucial roles of the calculated fuel temperatures in ensuring achievement of the AGR experimental program objectives require accurate determination of the model temperature uncertainties. The report is organized into three chapters. Chapter 1 introduces the AGR Fuel Development and Qualification program and provides overviews of AGR-1 measured data, AGR-1 test configuration and test procedure, and thermal simulation. Chapter 2 describes the uncertainty quantification procedure for temperature simulation data of the AGR-1 experiment, namely, (i) identify and quantify uncertainty sources; (ii) perform sensitivity analysis for several thermal test conditions; (iii) use uncertainty propagation to quantify overall response temperature uncertainty. A set of issues associated with modeling uncertainties resulting from the expert assessments are identified. This also includes the experimental design to estimate the main effects and interactions of the important thermal model parameters. Chapter 3 presents the overall uncertainty results for the six AGR-1 capsules. This includes uncertainties for the daily volume-average and peak fuel temperatures, daily average temperatures at TC locations, and time-average volume-average and time-average peak fuel temperatures.
Model and parameter uncertainty in IDF relationships under climate change
Chandra, Rupa; Saha, Ujjwal; Mujumdar, P. P.
2015-05-01
Quantifying the distributional behavior of extreme events is crucial in hydrologic design. Intensity Duration Frequency (IDF) relationships are used extensively in engineering, especially in urban hydrology, to obtain the return level of an extreme rainfall event for a specified return period and duration. Major sources of uncertainty in IDF relationships are insufficient quantity and quality of data, leading to parameter uncertainty in the distribution fitted to the data, and uncertainty as a result of using multiple GCMs. It is important to study these uncertainties and propagate them to the future for an accurate assessment of future return levels. The objective of this study is to quantify the uncertainties arising from the parameters of the distribution fitted to data and from the multiple GCM models using a Bayesian approach. The posterior distribution of parameters is obtained from Bayes' rule, and the parameters are transformed to obtain return levels for a specified return period. The Markov Chain Monte Carlo (MCMC) method using the Metropolis-Hastings algorithm is used to obtain the posterior distribution of parameters. Twenty-six CMIP5 GCMs along with four RCP scenarios are considered for studying the effects of climate change and to obtain projected IDF relationships for the case study of Bangalore city in India. GCM uncertainty due to the use of multiple GCMs is treated using the Reliability Ensemble Averaging (REA) technique along with the parameter uncertainty. Scale invariance theory is employed for obtaining short-duration return levels from daily data. It is observed that the uncertainty in short-duration rainfall return levels is high when compared to the longer durations. Further, it is observed that parameter uncertainty is large compared to the model uncertainty.
Temporal scaling in information propagation
Huang, Junming; Li, Chao; Wang, Wen-Qiang; Shen, Hua-Wei; Li, Guojie; Cheng, Xue-Qi
2014-06-01
For the study of information propagation, one fundamental problem is uncovering universal laws governing the dynamics of information propagation. This problem, from the microscopic perspective, is formulated as estimating the propagation probability that a piece of information propagates from one individual to another. Such a propagation probability generally depends on two major classes of factors: the intrinsic attractiveness of information and the interactions between individuals. Despite the fact that the temporal effect of attractiveness is widely studied, temporal laws underlying individual interactions remain unclear, causing inaccurate prediction of information propagation on evolving social networks. In this report, we empirically study the dynamics of information propagation, using the dataset from a population-scale social media website. We discover a temporal scaling in information propagation: the probability a message propagates between two individuals decays with the length of time latency since their latest interaction, obeying a power-law rule. Leveraging the scaling law, we further propose a temporal model to estimate future propagation probabilities between individuals, reducing the error rate of information propagation prediction from 6.7% to 2.6% and improving viral marketing with 9.7% incremental customers.
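The reported power-law decay can be illustrated directly. The constants `p0` and `gamma` below are assumed placeholders, not the fitted values from the study:

```python
# Power-law decay of the propagation probability with time latency t (days)
# since the latest interaction: p(t) = p0 * t**(-gamma). The constants are
# illustrative placeholders, not the fitted values from the paper.
p0, gamma = 0.08, 0.6

def propagation_probability(t_days):
    return p0 * t_days ** (-gamma)

recent = propagation_probability(1)    # right after an interaction
stale = propagation_probability(30)    # a month of latency
print(recent, stale)
```

The scaling law implies that a month-old tie is nearly an order of magnitude less likely to relay a message than a fresh one, which is what makes latency a useful feature for propagation prediction.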
Fazzari, D M
2001-01-01
This report presents the results of an evaluation of the Total Measurement Uncertainty (TMU) for the Canberra manufactured Segmented Gamma Scanner Assay System (SGSAS) as employed at the Hanford Plutonium Finishing Plant (PFP). In this document, TMU embodies the combined uncertainties due to all of the individual random and systematic sources of measurement uncertainty. It includes uncertainties arising from corrections and factors applied to the analysis of transuranic waste to compensate for inhomogeneities and interferences from the waste matrix and radioactive components. These include uncertainty components for any assumptions contained in the calibration of the system or computation of the data. Uncertainties are propagated at 1 sigma. The final total measurement uncertainty value is reported at the 95% confidence level. The SGSAS is a gamma assay system that is used to assay plutonium and uranium waste. The SGSAS system can be used in a stand-alone mode to perform the NDA characterization of a containe...
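The quadrature combination at 1 sigma, followed by expansion to the 95% confidence level, can be sketched as follows; the component names and numbers are assumptions for illustration, not the actual SGSAS uncertainty budget:

```python
import math

# Illustrative 1-sigma uncertainty components, in percent of the assay value
# (names and numbers are assumptions, not the SGSAS budget).
random_components = [2.5, 1.8]           # e.g. counting statistics, positioning
systematic_components = [3.0, 1.2, 0.9]  # e.g. calibration, matrix correction

# All random and systematic components are combined in quadrature at 1 sigma,
u_combined = math.sqrt(sum(c * c for c in random_components + systematic_components))

# then expanded with a coverage factor k = 2 for the 95% confidence level.
U_95 = 2.0 * u_combined
print(u_combined, U_95)
```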
A structured analysis of uncertainty surrounding modeled impacts of groundwater-extraction rules
Guillaume, Joseph H. A.; Qureshi, M. Ejaz; Jakeman, Anthony J.
2012-08-01
Integrating economic and groundwater models for groundwater management can help improve understanding of the trade-offs between conflicting socioeconomic and biophysical objectives. However, there is significant uncertainty in most strategic decision-making situations, including in the models constructed to represent them. If not addressed, this uncertainty may be used to challenge the legitimacy of the models and of decisions made using them. In this context, a preliminary uncertainty analysis was conducted of a dynamic coupled economic-groundwater model aimed at assessing groundwater extraction rules. The analysis demonstrates how a variety of uncertainties in such a model can be addressed. A number of methods are used, including propagation of scenarios and bounds on parameters, multiple models, block bootstrap time-series sampling and robust linear regression for model calibration. These methods are described within the context of a theoretical uncertainty management framework, using a set of fundamental uncertainty management tasks and an uncertainty typology.
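The block bootstrap mentioned above can be sketched as follows, with a synthetic autocorrelated series standing in for the model's time-series inputs (block length and sample counts are arbitrary choices for illustration):

```python
import random

random.seed(42)
# Synthetic autocorrelated series standing in for observed time-series data.
series = [10 + 0.5 * i + random.gauss(0, 2) for i in range(60)]

def block_bootstrap(data, block_len, n_samples):
    """Resample contiguous blocks to preserve short-range autocorrelation."""
    n = len(data)
    starts = range(n - block_len + 1)
    resamples = []
    for _ in range(n_samples):
        sample = []
        while len(sample) < n:
            s = random.choice(starts)
            sample.extend(data[s:s + block_len])
        resamples.append(sample[:n])
    return resamples

means = sorted(sum(s) / len(s)
               for s in block_bootstrap(series, block_len=6, n_samples=500))
print(means[12], means[487])  # rough 95% interval on the series mean
```

Resampling blocks rather than individual points is what lets the bootstrap respect serial dependence, which an ordinary i.i.d. bootstrap would destroy.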
Institute of Scientific and Technical Information of China (English)
王晖; 刘大有; et al.
1994-01-01
In this paper we consider the problem of sequential processing and present a sequential model based on the back-propagation algorithm. This model is intended to deal with intrinsically sequential problems, such as word recognition, speech recognition, and natural language understanding. It can be used to train a network to learn a sequence of input patterns, in a fixed order or a random order. Besides, the model is open- and partial-associative, characterized as "recognizing while accumulating", which, as we argue, is oriented toward the mental cognition process.
Rockower, Edward B.
1985-01-01
A number of laser propagation codes have been assessed as to their suitability for modeling Army High Energy Laser (HEL) weapons used in an anti-sensor mode. We identify a number of areas in which systems-analysis HEL codes are deficient. Most notably, available HEL scaling-law codes model the laser aperture as circular, possibly with a fixed (e.g. 10%) obscuration. However, most HELs have rectangular apertures with up to 30% obscuration. We present a beam-quality/aperture shape scaling rela...
Energy Technology Data Exchange (ETDEWEB)
Harper, F.T.; Young, M.L.; Miller, L.A. [Sandia National Labs., Albuquerque, NM (United States); Hora, S.C. [Univ. of Hawaii, Hilo, HI (United States); Lui, C.H. [Nuclear Regulatory Commission, Washington, DC (United States); Goossens, L.H.J.; Cooke, R.M. [Delft Univ. of Technology (Netherlands); Paesler-Sauer, J. [Research Center, Karlsruhe (Germany); Helton, J.C. [and others
1995-01-01
The development of two new probabilistic accident consequence codes, MACCS and COSYMA, was completed in 1990. These codes estimate the risks presented by nuclear installations based on postulated frequencies and magnitudes of potential accidents. In 1991, the US Nuclear Regulatory Commission (NRC) and the Commission of the European Communities (CEC) began a joint uncertainty analysis of the two codes. The ultimate objective of the joint effort was to develop credible and traceable uncertainty distributions for the input variables of the codes. Expert elicitation was identified as the best technology available for developing a library of uncertainty distributions for the selected consequence parameters. The study was formulated jointly and was limited to the current code models and to physical quantities that could be measured in experiments. Experts developed their distributions independently. To validate the distributions generated for the wet deposition input variables, samples were taken from these distributions and propagated through the wet deposition code model. Resulting distributions closely replicated the aggregated elicited wet deposition distributions. To validate the distributions generated for the dispersion code input variables, samples from the distributions were propagated through the Gaussian plume model (GPM) implemented in the MACCS and COSYMA codes. Project teams from the NRC and CEC cooperated successfully to develop and implement a unified process for the elaboration of uncertainty distributions on consequence code input parameters. Formal expert judgment elicitation proved valuable for synthesizing the best available information. Distributions on measurable atmospheric dispersion and deposition parameters were successfully elicited from experts involved in the many phenomenological areas of consequence analysis. This volume is the first of a three-volume document describing the project.
Propagation speed of gamma radiation in brass
Energy Technology Data Exchange (ETDEWEB)
Cavalcante, Jose T.P.D.; Silva, Paulo R.J.; Saitovich, Henrique
2009-07-01
The propagation speed (PS) of visible light, represented by a short frequency range in the large frame of electromagnetic radiation (ER) frequencies, was measured in air during the last century using a great number of different methods, with high-precision results being achieved. Presently, a well-accepted value with very small uncertainty is c = 299,792.458 km/s (c referring to the Latin word celeritas: 'speed, swiftness'). When propagating in denser material media (MM), the value is always lower than the air value, with the density of the propagating MM playing an important role. Until now, such studies of propagation speeds, refractive indexes, and dispersion were especially related to visible light, or to ER in wavelength ranges close to it, and to transparent MM. A first incursion into this subject dealing with γ-rays was performed using an electronic coincidence counting system, when the value of its PS in air was measured as c_γ(air) = 298,300.15 km/s; the method went on with later electronic improvements, always in air. To perform such measurements, the availability of a γ-radiation source in which two γ-rays are emitted simultaneously in opposite directions, as applied in the present case, is essential to the feasibility of the experiment, since no reflection techniques can be used. Such a suitable source is the positron emitter ²²Na placed in a thin-walled metal container in which the positrons are stopped and annihilated upon reacting with the electrons of the medium, originating, as is very well established from momentum/energy conservation laws, two gamma rays of energy 511 keV each, both emitted simultaneously in opposite directions. All the previous experiments used photomultiplier detectors coupled to NaI(Tl) crystal scintillators, which have a good energy resolution but a deficient time resolution for such purposes
Arnold, Dan; Demyanov, Vasily; Christie, Mike; Bakay, Alexander; Gopa, Konstantin
2016-10-01
Assessing the change in uncertainty in reservoir production forecasts over field lifetime is rarely undertaken because of the complexity of joining together the individual workflows. This becomes particularly important in complex fields such as naturally fractured reservoirs. The impact of this problem has been identified in previous studies, and many solutions have been proposed but never implemented on complex reservoir problems due to the computational cost of quantifying uncertainty and optimising the reservoir development, specifically knowing how many and what kind of simulations to run. This paper demonstrates a workflow that propagates uncertainty throughout field lifetime, and into the decision-making process, by a combination of a metric-based approach, multi-objective optimisation and Bayesian estimation of uncertainty. The workflow propagates uncertainty estimates from appraisal into initial development optimisation, then updates uncertainty through history matching and finally propagates it into late-life optimisation. The combination of techniques applied, namely the metric approach and multi-objective optimisation, helps evaluate development options under uncertainty. This was achieved with a significantly reduced number of flow simulations, such that the combined workflow is computationally feasible to run for a real-field problem. This workflow is applied to two synthetic naturally fractured reservoir (NFR) case studies in appraisal, field development, history matching and mid-life EOR stages. The first is a simple sector model, while the second is a more complex full-field example based on a real-life analogue. This study infers geological uncertainty from an ensemble of models based on a Brazilian carbonate outcrop, which are propagated through the field lifetime, before and after the start of production, with the inclusion of production data significantly collapsing the spread of P10-P90 in reservoir forecasts. The workflow links uncertainty
Uncertainty analysis of the CPA and a quadrupolar CPA equation of state - With emphasis on CO2
DEFF Research Database (Denmark)
Bjørner, Martin G.; Sin, Gürkan; Kontogeorgis, Georgios M.
2016-01-01
of correlation between the adjustable parameters. This results in significant propagated errors for certain output properties. To reduce the uncertainty in the adjustable model parameters the heat of vaporization was included as additional correlation data. This resulted in parameter distributions which followed...... estimation and the bootstrap method for parameter estimation. The uncertainties in the parameters estimated from the bootstrap method are propagated to physical property and vapor liquid equilibrium predictions using Monte Carlo simulations. The results indicate that both the pure compound parameter...... uncertainty and the propagated uncertainty are negligible for the modeling approaches which employ three adjustable parameters. For modeling approaches with more than three adjustable parameters, however, there may be significant uncertainties in the pure compound parameters, as well as a high degree...
Warranted Uncertainty and Students' Knowledge and Use of Drugs
Sieber, Joan E.; And Others
1978-01-01
The ability to recognize questions warranting uncertainty was effectively developed in 167 fourth to sixth graders. A three-year followup evaluation measuring transfer of these skills on hard and soft drug use confirmed that students had developed a skeptical analytic attitude, allowing them to ignore peer and parental dogma about drugs.…
Uncertainty in magnetic activity indices
Institute of Scientific and Technical Information of China (English)
2008-01-01
Magnetic activity indices are widely used in theoretical studies of solar-terrestrial coupling and space weather prediction. However, the indices suffer from various uncertainties, which limit their application and can even lead to incorrect conclusions. In this paper we analyze the three most popular indices: Kp, AE and Dst. Three categories of uncertainty in magnetic indices are discussed: "data uncertainty" originating from inadequate data processing, "station uncertainty" caused by incomplete station coverage, and "physical uncertainty" stemming from unclear physical mechanisms. A comparison between magnetic disturbances and related indices indicates that the residual Sq will cause an uncertainty of 1–2 in the K measurement, the uncertainty in saturated AE is as much as 50%, and the uncertainty in the Dst index caused by the partial ring currents is about a half of the partial ring current.
Uncertainties in parton distribution functions
Energy Technology Data Exchange (ETDEWEB)
Martin, A.D. [Deptartment of Physics, University of Durham, Durham DH1 3LE (United Kingdom); Roberts, R.G. [Rutherford Appleton Laboratory, Chilton, Didcot, Oxon OX11 0QX (United Kingdom); Stirling, W.J. [Department of Physics, University of Durham, Durham DH1 3LE (United Kingdom); Department of Mathematical Sciences, University of Durham, Durham DH1 3LE (United Kingdom); Thorne, R.S. [Jesus College, University of Oxford, Oxford OX1 3DW (United Kingdom)
2000-05-01
We discuss the uncertainty in the predictions for hard scattering cross sections at hadron colliders due to uncertainties in the input parton distributions, using W production at the LHC as an example. (author)
Leblanc, Thierry; Sica, Robert J.; van Gijsel, Joanna A. E.; Godin-Beekmann, Sophie; Haefele, Alexander; Trickl, Thomas; Payen, Guillaume; Liberti, Gianluigi
2016-08-01
A standardized approach for the definition, propagation, and reporting of uncertainty in the ozone differential absorption lidar data products contributing to the Network for the Detection of Atmospheric Composition Change (NDACC) database is proposed. One essential aspect of the proposed approach is the propagation in parallel of all independent uncertainty components through the data processing chain before they are combined together to form the ozone combined standard uncertainty. The independent uncertainty components contributing to the overall budget include random noise associated with signal detection, uncertainty due to saturation correction, background noise extraction, the absorption cross sections of O3, NO2, SO2, and O2, the molecular extinction cross sections, and the number densities of the air, NO2, and SO2. The expressions for the individual uncertainty components and their step-by-step propagation through the ozone differential absorption lidar (DIAL) processing chain are derived in detail. All sources of uncertainty except detection noise imply correlated terms in the vertical dimension, which requires knowledge of the covariance matrix when the lidar signal is vertically filtered. In addition, the covariance terms must be taken into account if the same detection hardware is shared by the lidar receiver channels at the absorbed and non-absorbed wavelengths. The ozone uncertainty budget is presented as much as possible in a generic form (i.e., as a function of instrument performance and wavelength) so that all NDACC ozone DIAL investigators across the network can estimate, for their own instrument and in a straightforward manner, the expected impact of each reviewed uncertainty component. In addition, two actual examples of full uncertainty budgets are provided, using nighttime measurements from the tropospheric ozone DIAL located at the Jet Propulsion Laboratory (JPL) Table Mountain Facility, California, and nighttime measurements from the JPL
Wave propagation in spatially modulated tubes.
Ziepke, A; Martens, S; Engel, H
2016-09-01
We investigate wave propagation in rotationally symmetric tubes with a periodic spatial modulation of cross section. Using an asymptotic perturbation analysis, the governing quasi-two-dimensional reaction-diffusion equation can be reduced to a one-dimensional reaction-diffusion-advection equation. Assuming a weak perturbation by the advection term and using a projection method, in a second step, an equation of motion for traveling waves within such tubes can be derived. Both methods properly predict the nonlinear dependence of the propagation velocity on the ratio of the modulation period of the geometry to the intrinsic width of the front, or pulse. As a main feature, we observe finite intervals of propagation failure of waves induced by the tube's modulation and derive an analytically tractable condition for their occurrence. For the highly diffusive limit, using the Fick-Jacobs approach, we show that wave velocities within modulated tubes are governed by an effective diffusion coefficient. Furthermore, we discuss the effects of a single bottleneck on the period of pulse trains. We observe period changes by integer fractions dependent on the bottleneck width and the period of the entering pulse train. PMID:27608990
Quaternionic Multilayer Perceptron with Local Analyticity
Directory of Open Access Journals (Sweden)
Nobuyuki Matsui
2012-11-01
A multilayered perceptron-type neural network is presented and analyzed in this paper. All neuronal parameters such as input, output, action potential and connection weight are encoded by quaternions, which are a class of hypercomplex numbers. A local analyticity condition is imposed on the activation function used in updating the neurons' states in order to construct a learning algorithm for this network. An error back-propagation algorithm is introduced for modifying the connection weights of the network.
DEFF Research Database (Denmark)
Nielsen, Jesper Ellerbæk; Thorndahl, Søren Liedtke; Rasmussen, Michael R.
2011-01-01
Distributed weather radar precipitation measurements are used as rainfall input for an urban drainage model to simulate the runoff from a small catchment in Denmark. It is demonstrated how the Generalized Likelihood Uncertainty Estimation (GLUE) methodology can be implemented and used to estimate...... the uncertainty of the weather radar rainfall input. The main finding of this work is that the input uncertainty propagates through the urban drainage model with significant effects on the model result. The GLUE methodology is in general a usable way to explore this uncertainty, although the exact width......
S-parameter uncertainty computations
DEFF Research Database (Denmark)
Vidkjær, Jens
1993-01-01
A method for computing uncertainties of measured s-parameters is presented. Unlike the specification software provided with network analyzers, the new method is capable of calculating the uncertainties of arbitrary s-parameter sets and instrument settings.
Legal uncertainty and contractual innovation
Yaron Leitner
2005-01-01
Although innovative contracts are important for economic growth, when firms face uncertainty as to whether contracts will be enforced, they may choose not to innovate. Legal uncertainty can arise if a judge interprets the terms of a contract in a way that is antithetical to the intentions of the parties to the contract. Or sometimes a judge may understand the contract but overrule it for other reasons. How does legal uncertainty affect firms’ decisions to innovate? In “Legal Uncertainty and C...
Strategy for addressing composition uncertainties in a Hanford high-level waste vitrification plant
Energy Technology Data Exchange (ETDEWEB)
Bryan, M.F.; Piepel, G.F.
1996-03-01
Various requirements will be imposed on the feed material and glass produced by the high-level waste (HLW) vitrification plant at the Hanford Site. A statistical process/product control system will be used to control the melter feed composition and to check and document product quality. Two general types of uncertainty are important in HLW vitrification process/product control: model uncertainty and composition uncertainty. Model uncertainty is discussed by Hrma, Piepel, et al. (1994). Composition uncertainty includes the uncertainties inherent in estimates of feed composition and other process measurements. Because feed composition is a multivariate quantity, multivariate estimates of composition uncertainty (i.e., covariance matrices) are required. Three components of composition uncertainty will play a role in estimating and checking batch and glass attributes: batch-to-batch variability, within-batch uncertainty, and analytical uncertainty. This document reviews the techniques to be used in estimating and updating composition uncertainties and in combining these composition uncertainties with model uncertainty to yield estimates of (univariate) uncertainties associated with estimates of batch and glass properties.
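The multivariate propagation can be sketched for a linear property model, where the covariance matrix bundles the batch-to-batch, within-batch, and analytical contributions to composition uncertainty. The model coefficients and all matrix entries below are assumed for illustration only:

```python
# Propagating a multivariate composition uncertainty (covariance matrix) to a
# glass-property estimate. Property model and numbers are illustrative:
# P = sum_i g_i * x_i, a linear blend of component mass fractions.

g = [1.2, -0.4, 0.7]            # property model coefficients (assumed)

# Composition covariance combining batch-to-batch, within-batch and
# analytical contributions (assumed values, units of mass-fraction^2):
Sigma = [[4e-4, 1e-4, 0.0],
         [1e-4, 9e-4, 2e-4],
         [0.0,  2e-4, 1e-4]]

# var(P) = g^T Sigma g
var_P = sum(g[i] * Sigma[i][j] * g[j] for i in range(3) for j in range(3))
print(var_P)
```

Note the off-diagonal terms: correlated composition estimates can reduce (or inflate) the propagated property variance relative to treating the components as independent.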
Sensor Localization using Generalized Belief Propagation in Networks with Loops
Savic, Vladimir; Zazo, Santiago
2009-01-01
Belief propagation (BP), also called the "sum-product algorithm", is one of the best-known algorithms for inference in graphical models, with applications in statistical physics, artificial intelligence, computer vision, etc. Furthermore, recent research in distributed sensor network localization showed that BP is an efficient way to obtain sensor locations as well as appropriate uncertainty estimates. However, BP convergence is not guaranteed in a network with loops. In this paper, we propose localization using generalized belief prop...
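On a loop-free chain, the sum-product computation that BP performs is exact and fits in a few lines. The potentials below are illustrative toy values, not drawn from the localization setting:

```python
# Minimal sum-product sketch on a 3-node chain x1 - x2 - x3 of binary
# variables, the simplest setting where BP is exact (no loops).
phi = [[1.0, 0.5],
       [0.5, 1.0]]                             # pairwise potential favouring agreement
unary = [[0.9, 0.1], [0.5, 0.5], [0.5, 0.5]]   # evidence on x1 only

# message from x1 to x2: m12(x2) = sum_x1 unary1(x1) * phi(x1, x2)
m12 = [sum(unary[0][a] * phi[a][b] for a in (0, 1)) for b in (0, 1)]
# message from x3 to x2
m32 = [sum(unary[2][a] * phi[a][b] for a in (0, 1)) for b in (0, 1)]

# belief at x2 is the product of incoming messages and its unary term
belief = [unary[1][b] * m12[b] * m32[b] for b in (0, 1)]
z = sum(belief)
belief = [b / z for b in belief]
print(belief)
```

The evidence at x1 pulls the belief at x2 toward state 0; on graphs with loops, iterating the same message updates gives only an approximation, which is the convergence issue the paper addresses.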
Analysis of Uncertainty in Dynamic Processes Development of Banks Functioning
Directory of Open Access Journals (Sweden)
Aleksei V. Korovyakovskii
2013-01-01
The paper offers an approach to estimating the measure of uncertainty in the dynamic processes of bank functioning, using statistical data on indicators of different banking operations. To calculate the measure of uncertainty, the phase images of the relevant sets of statistical data are considered. It is shown that the form of the phase image of the studied sets of statistical data can serve as a basis for estimating the measure of uncertainty in the dynamic processes of bank functioning. A set of analytical characteristics is offered to formalize the definition of the form of the phase image of the studied sets of statistical data. It is shown that the offered analytical characteristics account for the inequality of changes in the values of the studied sets of statistical data, which is one of the ways uncertainty manifests itself in the development of dynamic processes. Invariant estimates of the measure of uncertainty in the dynamic processes of bank functioning were obtained, accounting for significant differences in the absolute values of the same indicators across different banks. Examples of calculating the measure of uncertainty in the dynamic processes of the functioning of specific banks are given.
Impact of dissolution on the uncertainty of spent fuel analysis
International Nuclear Information System (INIS)
One of the objectives of the French Alternative Energies and Atomic Energy Commission in the Marcoule Centre is to accurately quantify the composition of nuclear spent fuel, i.e. to determine the concentration of each isotope with a suitable measurement uncertainty. These analysis results are essential for the validation of calculation codes used for the simulation of fuel behaviour in nuclear reactors and for nuclear material accountancy. The experimental steps are, first, the reception of a piece of spent fuel rod at the laboratory of dissolution studies, and then the dissolution, in a hot cell, of a sample of the spent fuel rod received. Several steps are necessary to obtain a complete dissolution. Taking these process steps into account, and not only those of analysis, in the evaluation of measurement uncertainties is new, and is described in this paper. The uncertainty estimation incorporating the process has been developed following the approach proposed by the Guide to the Expression of Uncertainty in Measurement (GUM). The mathematical model of measurement was established by examining the dissolution process step by step. The law of propagation of uncertainty was applied to this model. A point-by-point examination of each step of the process permitted the identification of all sources of uncertainty considered in this propagation for each input variable. The measurement process presented here encompasses both the dissolution process and the analysis. The contents of this document show the importance of taking the process into account in order to attach a more reliable uncertainty assessment to the result for a concentration or for the isotope ratio of two isotopes in spent fuel. (author)
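The GUM law of propagation applied above can be sketched with a hypothetical measurement model for a dissolved-fuel concentration, using numerical partial derivatives. The model form and all values are illustrative assumptions, not the laboratory's actual model:

```python
import math

# Hypothetical measurement model for the concentration of a dissolved-fuel
# solution: c = m_isotope / (m_solution / rho). All values are illustrative.
def model(m_iso, m_sol, rho):
    return m_iso / (m_sol / rho)        # grams per litre

x = {"m_iso": 0.250, "m_sol": 1.95, "rho": 1.30}    # best estimates
u = {"m_iso": 0.002, "m_sol": 0.010, "rho": 0.005}  # standard uncertainties

# GUM law of propagation for independent inputs, with numerical partials:
# u_c^2 = sum_i (df/dx_i)^2 * u_i^2
var_c = 0.0
for k in x:
    h = 1e-6 * x[k]
    xp, xm = dict(x), dict(x)
    xp[k] += h
    xm[k] -= h
    dfdx = (model(**xp) - model(**xm)) / (2 * h)
    var_c += (dfdx * u[k]) ** 2

u_c = math.sqrt(var_c)
print(u_c)
```

Extending the dictionary with one entry per dissolution-process step (weighings, dilutions, aliquoting) is exactly how the paper's step-by-step examination enters the budget.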
Estimation of uncertainty for fatigue growth rate at cryogenic temperatures
Nyilas, Arman; Weiss, Klaus P.; Urbach, Elisabeth; Marcinek, Dawid J.
2014-01-01
Fatigue crack growth rate (FCGR) measurement data for high-strength austenitic alloys in a cryogenic environment suffer in general from a high degree of data scatter, in particular in the ΔK regime below 25 MPa√m. Using standard mathematical smoothing techniques ultimately forces a linear relationship in the stage II regime (crack propagation rate versus ΔK) in a double-log field, called the Paris law. However, the bandwidth of uncertainty relies somewhat arbitrarily upon the researcher's interpretation. The present paper deals with the use of the uncertainty concept on FCGR data as given by the GUM (Guide to the Expression of Uncertainty in Measurement), which since 1993 has been the recommended procedure to avoid subjective estimation of error bands. Within this context, the lack of a true value leads to evaluating the best estimate by a statistical method, using the crack propagation law as a mathematical measurement model equation and identifying all input parameters. Each parameter necessary for the measurement technique was processed using the Gaussian distribution law, with partial differentiation of the terms to estimate the sensitivity coefficients. The combined standard uncertainty determined for each term with its computed sensitivity coefficient finally resulted in the measurement uncertainty of the FCGR test result. The described uncertainty procedure has been applied within the framework of ITER on a recent FCGR measurement for high-strength and high-toughness Type 316LN material tested at 7 K using a standard ASTM proportional compact tension specimen. The determined values of the Paris law constants, such as C0 and the exponent m, as best estimates along with their uncertainty values may serve as a realistic basis for the life expectancy of cyclically loaded members.
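The sensitivity-coefficient step can be sketched for the Paris law model. The constants and uncertainties below are illustrative placeholders, not the reported 316LN values:

```python
import math

# Paris law da/dN = C0 * dK**m with hypothetical best estimates and standard
# uncertainties (illustrative values, not the reported ITER test results).
C0, m, dK = 2.0e-12, 3.2, 30.0          # units: m/cycle and MPa*sqrt(m)
u_C0, u_m, u_dK = 0.3e-12, 0.1, 0.6

dadN = C0 * dK ** m

# Sensitivity coefficients from partial differentiation of the model
s_C0 = dK ** m                  # d(da/dN)/dC0
s_m = dadN * math.log(dK)       # d(da/dN)/dm
s_dK = dadN * m / dK            # d(da/dN)/d(dK)

# Combined standard uncertainty for independent inputs
u_dadN = math.sqrt((s_C0 * u_C0) ** 2 + (s_m * u_m) ** 2 + (s_dK * u_dK) ** 2)
print(dadN, u_dadN)
```

With these assumed inputs the exponent term dominates the budget, reflecting how strongly m leverages the growth rate through log(ΔK).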
The Uncertainties of Risk Management
DEFF Research Database (Denmark)
Vinnari, Eija; Skærbæk, Peter
2014-01-01
. These include uncertainties relating to legal aspects of risk management solutions, in particular the issue concerning which types of document are considered legally valid; uncertainties relating to the definition and operationalisation of risk management; and uncertainties relating to the resources available...
Cascading rainfall uncertainty into flood inundation impact models
Souvignet, Maxime; Freer, Jim E.; de Almeida, Gustavo A. M.; Coxon, Gemma; Neal, Jeffrey C.; Champion, Adrian J.; Cloke, Hannah L.; Bates, Paul D.
2014-05-01
Observed and numerical weather prediction (NWP) simulated precipitation products typically show differences in their spatial and temporal distribution. These differences can considerably influence the ability to predict hydrological responses. For flood inundation impact studies, as in forecast situations, an atmospheric-hydrologic-hydraulic model chain is needed to quantify the extent of flood risk. Uncertainties cascaded through the model chain are seldom explored, and more importantly, how potential input uncertainties propagate through this cascade, and how best to approach this, is still poorly understood. This requires a combination of modelling capabilities: the non-linear transformation of rainfall to river flow using rainfall-runoff models, and finally the hydraulic flood wave propagation based on the runoff predictions. Improving the characterisation of uncertainty in each component, and of what is important to include, is essential for quantifying impacts and understanding flood risk for different return periods. In this paper, we propose to address this issue by i) exploring the effects of errors in rainfall on inundation predictive capacity within an uncertainty framework by testing inundation uncertainty against different comparable meteorological conditions (i.e. using different rainfall products) and ii) testing different techniques to cascade uncertainties (e.g. bootstrapping, PPU envelope) within the GLUE (generalised likelihood uncertainty estimation) framework. Our method cascades rainfall uncertainties into multiple rainfall-runoff model structures using the Framework for Understanding Structural Errors (FUSE). The resultant prediction uncertainties in upstream discharge provide uncertain boundary conditions that are cascaded into a simplified shallow water hydraulic model (LISFLOOD-FP). Rainfall data captured by three different measurement techniques - rain gauges, gridded radar data and numerical weather prediction (NWP) models - are evaluated
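The GLUE step of such a cascade can be sketched in miniature: sample rainfall-runoff parameters, keep only "behavioural" sets whose efficiency exceeds a threshold, and pass the surviving discharge simulations downstream as uncertain boundary conditions. The toy linear-reservoir model below is a hypothetical stand-in for FUSE, and the parameter range and threshold are illustrative.

```python
import random

def runoff(rain, k):
    """Toy linear-reservoir rainfall-runoff model (placeholder for FUSE):
    each step releases a fraction k of the current storage as discharge."""
    store, flow = 0.0, []
    for r in rain:
        store += r
        q = k * store
        store -= q
        flow.append(q)
    return flow

def glue_bounds(rain, obs, n_samples=500, threshold=0.5):
    """GLUE-style sampling: retain behavioural parameter sets whose
    Nash-Sutcliffe efficiency exceeds the threshold; the returned
    ensemble of discharge series bounds the prediction uncertainty."""
    random.seed(1)
    mean_obs = sum(obs) / len(obs)
    var_obs = sum((o - mean_obs) ** 2 for o in obs)
    ensemble = []
    for _ in range(n_samples):
        k = random.uniform(0.05, 0.95)
        sim = runoff(rain, k)
        sse = sum((s - o) ** 2 for s, o in zip(sim, obs))
        if 1.0 - sse / var_obs > threshold:   # Nash-Sutcliffe efficiency
            ensemble.append(sim)
    return ensemble
```

In the paper's setting, each retained series would feed the hydraulic model (LISFLOOD-FP) as an uncertain upstream boundary condition.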
UncertWeb: chaining web services accounting for uncertainty
Cornford, Dan; Jones, Richard; Bastin, Lucy; Williams, Matthew; Pebesma, Edzer; Nativi, Stefano
2010-05-01
The development of interoperable services that permit access to data and processes, typically using web service based standards, opens up the possibility of increasingly complex chains of data and processes, which might be discovered and composed in increasingly automatic ways. This concept, sometimes referred to as the "Model Web", offers the promise of integrated (Earth) system models, with pluggable web service based components which can be discovered, composed and evaluated dynamically. A significant issue with such service chains, indeed in any composite model composed of coupled components, is that in all interesting (non-linear) cases the effect of uncertainties on inputs, or on components within the chain, will have complex, potentially unexpected effects on the outputs. Within the FP7 UncertWeb project we will be developing a mechanism and an accompanying set of tools to enable rigorous uncertainty management in web based service chains involving both data and processes. The project will exploit and extend the UncertML candidate standard to flexibly propagate uncertainty through service chains, including looking at mechanisms to develop uncertainty-enabled profiles of existing Open Geospatial Consortium services. To facilitate the use of such services we will develop tools to address the definition of input uncertainties (elicitation), manage the uncertainty propagation (emulation), undertake uncertainty and sensitivity analysis and visualise the output uncertainty. In this talk we will outline the challenges of the UncertWeb project, illustrating them with a prototype service chain we have created for correcting station-level pressure to sea-level pressure, which accounts for the various uncertainties involved. In particular we will discuss some of the challenges of chaining Open Geospatial Consortium services using the Business Process Execution Language. We will also address the issue of computational cost and communication bandwidth requirements for
An analytic method for sensitivity analysis of complex systems
Zhu, Yueying; Li, Wei; Cai, Xu
2016-01-01
Sensitivity analysis is concerned with understanding how the model output depends on uncertainties (variances) in inputs, and then identifying which inputs are important in contributing to the prediction imprecision. Uncertainty determination in the output is the most crucial step in sensitivity analysis. In the present paper, an analytic expression, which can exactly evaluate the uncertainty in the output as a function of the output's derivatives and the inputs' central moments, is first deduced for general multivariate models with a given relationship between output and inputs, in terms of Taylor series expansion. A $\gamma$-order relative uncertainty for the output, denoted by $\mathrm{R^{\gamma}_v}$, is introduced to quantify the contributions of input uncertainty of different orders. On this basis, it is shown that the widely used approximation considering the first-order contribution from the variance of an input variable can satisfactorily express the output uncertainty only when the input variance is very small or the inpu...
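The first-order approximation the abstract refers to can be checked numerically, a minimal sketch under assumed values: for y = f(x), the leading Taylor term gives Var[y] ≈ f′(μ)²·Var[x], which a Monte Carlo estimate reproduces when the input variance is small.

```python
import math
import random

def first_order_variance(dfdx_at_mu, var_x):
    """First-order Taylor approximation: Var[f(X)] ~ f'(mu)^2 * Var[X]."""
    return dfdx_at_mu ** 2 * var_x

def mc_variance(f, mu, sigma, n=100_000, seed=0):
    """Monte Carlo estimate of Var[f(X)] for X ~ N(mu, sigma^2)."""
    rng = random.Random(seed)
    ys = [f(rng.gauss(mu, sigma)) for _ in range(n)]
    mean = sum(ys) / n
    return sum((y - mean) ** 2 for y in ys) / (n - 1)

# For the nonlinear model y = exp(x) with a small input variance,
# the first-order term closely matches the Monte Carlo result;
# the agreement degrades as sigma grows, which is the regime where
# higher-order central-moment contributions become necessary.
approx = first_order_variance(math.exp(1.0), 0.01 ** 2)
exact = mc_variance(math.exp, 1.0, 0.01)
```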
Estimation of sedimentary proxy records together with associated uncertainty
Goswami, B.; Heitzig, J.; Rehfeld, K.; Marwan, N.; Anoop, A.; Prasad, S.; Kurths, J.
2014-01-01
Sedimentary proxy records constitute a significant portion of the recorded evidence that allows us to investigate paleoclimatic conditions and variability. However, uncertainties in the dating of proxy archives limit our ability to fix the timing of past events and interpret proxy record intercomparisons. While there are various age-modeling approaches to improve the estimation of the age–depth relations of archives, relatively little focus has been placed on the propagation...
Risk, Uncertainty and Entrepreneurship
DEFF Research Database (Denmark)
Koudstaal, Martin; Sloof, Randolph; Van Praag, Mirjam
Theory predicts that entrepreneurs have distinct attitudes towards risk and uncertainty, but empirical evidence is mixed. To better understand the unique behavioral characteristics of entrepreneurs and the causes of these mixed results, we perform a large ‘lab-in-the-field’ experiment comparing...... entrepreneurs to managers – a suitable comparison group – and employees (n = 2288). The results indicate that entrepreneurs perceive themselves as less risk averse than managers and employees, in line with common wisdom. However, when using experimental incentivized measures, the differences are subtler...
Principles of Uncertainty
Kadane, Joseph B
2011-01-01
An intuitive and mathematical introduction to subjective probability and Bayesian statistics. An accessible, comprehensive guide to the theory of Bayesian statistics, Principles of Uncertainty presents the subjective Bayesian approach, which has played a pivotal role in game theory, economics, and the recent boom in Markov Chain Monte Carlo methods. Both rigorous and friendly, the book contains: Introductory chapters examining each new concept or assumption Just-in-time mathematics -- the presentation of ideas just before they are applied Summary and exercises at the end of each chapter Discus
Collective Uncertainty Entanglement Test
Rudnicki, Łukasz; Życzkowski, Karol
2011-01-01
For a given pure state of a composite quantum system we analyze the product of its projections onto a set of locally orthogonal separable pure states. We derive a bound for this product analogous to the entropic uncertainty relations. For bipartite systems the bound is saturated for maximally entangled states, and it allows us to construct a family of entanglement measures, which we shall call collectibility. As these quantities are experimentally accessible, the approach advocated contributes to the task of experimental quantification of quantum entanglement, while for a three-qubit system it is capable of identifying genuine three-party entanglement.
Hwang, Rong-Jen; Rogers, Craig; Beltran, Jada; Razatos, Gerasimos; Avery, Jason
2016-06-01
Reporting a measurement uncertainty helps to determine the limitations of the method of analysis and aids in laboratory accreditation. This laboratory has conducted a study to estimate a reasonable uncertainty for the mass concentration of vaporous ethanol, in g/210 L, measured by the Intoxilyzer(®) 8000 breath analyzer. The uncertainty sources used were: gas chromatograph (GC) calibration adjustment, GC analytical, certified reference material, Intoxilyzer(®) 8000 calibration adjustment and Intoxilyzer(®) 8000 analytical. Standard uncertainties attributed to these sources were calculated and separated into proportional and constant standard uncertainties. The combined proportional and constant standard uncertainties were further combined into an expanded uncertainty, expressed as both a percentage and a unit. To prevent any underreporting of the expanded uncertainty, 0.10 g/210 L was chosen as the defining point for expressing it. For the Intoxilyzer(®) 8000, for all vaporous ethanol results at or above 0.10 g/210 L the expanded uncertainty will be reported as ±3.6% at a confidence level of 95% (k = 2); for vaporous ethanol results below 0.10 g/210 L, the expanded uncertainty will be reported as ±0.0036 g/210 L at a confidence level of 95% (k = 2).
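The reporting scheme described above can be sketched as follows; the proportional and constant component values in the example call are hypothetical, not the laboratory's actual budget. Only the 0.10 g/210 L breakpoint and the k = 2 coverage factor come from the abstract.

```python
import math

def expanded_uncertainty(result, u_prop_pct, u_const, k=2.0, breakpoint=0.10):
    """Combine a proportional standard uncertainty (percent of result)
    with a constant one (absolute, g/210 L) in quadrature, expand with
    coverage factor k, and report as a percentage at or above the
    breakpoint but as an absolute value below it, so the reported
    uncertainty is never understated for low results."""
    u_prop = u_prop_pct / 100.0 * result
    u_c = math.sqrt(u_prop ** 2 + u_const ** 2)   # combined standard uncertainty
    U = k * u_c                                   # expanded uncertainty
    if result >= breakpoint:
        return ("percent", 100.0 * U / result)
    return ("absolute", U)

# Hypothetical components: 1.5% proportional, 0.001 g/210 L constant.
high = expanded_uncertainty(0.15, 1.5, 0.001)   # reported as a percentage
low = expanded_uncertainty(0.08, 1.5, 0.001)    # reported as g/210 L
```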
Light Front Boson Model Propagation
Institute of Scientific and Technical Information of China (English)
Jorge Henrique Sales; Alfredo Takashi Suzuki
2011-01-01
Abstract: The scope and aim of this work is to describe the two-body interaction mediated by a particle (either the scalar or the gauge boson) within the light-front formulation. To do this, first of all we point out the importance of propagators and Green functions in Quantum Mechanics. Then we project the covariant quantum propagator onto the light-front time to get the propagator for scalar particles in these coordinates. This operator propagates the wave function from x+ = 0 to x+ > 0. It corresponds to the definition of the time-ordering operation in the light-front time x+. We calculate the light-front Green's function for two interacting bosons propagating forward in x+. We also show how to write down the light-front Green's function from the Feynman propagator and finally make a generalization to N bosons.
Haji Hosseinloo, Ashkan; Turitsyn, Konstantin
2016-05-01
Vibratory energy harvesters as potential replacements for conventional batteries are not as robust as batteries. Their performance can drastically deteriorate in the presence of uncertainty in their parameters. Parametric uncertainty is inevitable in any physical device, mainly due to manufacturing tolerances, defects, and environmental effects such as temperature and humidity. Hence, uncertainty propagation analysis and optimization under uncertainty seem indispensable in any energy harvester design. Here we propose a new modeling philosophy for optimization under uncertainty: optimization for the worst-case scenario (minimum power) rather than for the ensemble expectation of the power. The proposed optimization philosophy is practically very useful when there is a minimum requirement on the harvested power. We formulate the problems of uncertainty propagation and optimization under uncertainty in a generic and architecture-independent fashion, and then apply them to a single-degree-of-freedom linear piezoelectric energy harvester with uncertainty in its different parameters. The simulation results show that there is a significant improvement in the worst-case power of the designed harvester compared to that of a naively optimized (deterministically optimized) harvester. For instance, for a 10% uncertainty in the natural frequency of the harvester (in terms of its standard deviation) this improvement is about 570%.
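The min-max idea can be illustrated with a toy model, a sketch only, not the paper's piezoelectric formulation: a normalized single-degree-of-freedom power curve, an uncertain natural frequency in a ±10% interval, and a grid search that compares the deterministic optimum against the worst-case (min-max) optimum of a damping-like design parameter.

```python
def power(omega, omega_n, zeta):
    """Normalized average power of a linear SDOF harvester under harmonic
    excitation (illustrative toy model, not the paper's exact expression)."""
    r = omega / omega_n
    return zeta * r ** 2 / ((1 - r ** 2) ** 2 + (2 * zeta * r) ** 2)

def worst_case(zeta, omega, omega_n_nominal, spread=0.1, n=41):
    """Minimum power over a +/-10% interval of uncertain natural frequency."""
    lo, hi = omega_n_nominal * (1 - spread), omega_n_nominal * (1 + spread)
    return min(power(omega, lo + i * (hi - lo) / (n - 1), zeta)
               for i in range(n))

def optimize(omega=1.0, omega_n=1.0):
    """Grid search over the design parameter: deterministic optimum
    (nominal frequency only) versus robust min-max optimum."""
    zetas = [0.01 + 0.01 * i for i in range(100)]
    zeta_det = max(zetas, key=lambda z: power(omega, omega_n, z))
    zeta_rob = max(zetas, key=lambda z: worst_case(z, omega, omega_n))
    return zeta_det, zeta_rob
```

The deterministic design exploits the resonance peak with light damping, while the robust design accepts a lower peak in exchange for a far better guaranteed (worst-case) power under detuning.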
Biedermann, Eric; Jauriqui, Leanne; Aldrin, John C.; Mayes, Alexander; Williams, Tom; Mazdiyasni, Siamack
2016-02-01
Resonant Ultrasound Spectroscopy (RUS) is a nondestructive evaluation (NDE) method which can be used for material characterization, defect detection, process control and life monitoring for critical components in gas turbine engines, aircraft and other systems. Accurate forward and inverse modeling for RUS requires a proper accounting of the propagation of uncertainty due to the model and measurement sources. A process for quantifying the propagation of uncertainty to RUS frequency results for models and measurements was developed. Epistemic and aleatory sources of uncertainty were identified for forward model parameters, forward model material property and geometry inputs, inverse model parameters, and physical RUS measurements. RUS model parametric studies were then conducted for simple geometric samples to determine the sensitivity of RUS frequencies and model inversion results to the various sources of uncertainty. The results of these parametric studies were used to calculate uncertainty bounds associated with each source. Uncertainty bounds were then compared to assess the relative impact of the various sources of uncertainty, and mitigations were identified. The elastic material property inputs to forward models, such as Young's modulus, were found to be the most significant source of uncertainty in these studies. The end result of this work was the development of an uncertainty quantification process that can be adapted to a broad range of components and materials.
Gauge engineering and propagators
Maas, Axel
2016-01-01
Beyond perturbation theory, gauge fixing becomes more involved due to the Gribov-Singer ambiguity: the appearance of additional gauge copies requires defining a procedure for handling them. For the case of Landau gauge, the structure and properties of these additional gauge copies will be investigated. Based on these properties, gauge conditions are constructed to account for these gauge copies. The dependence of the propagators on the choice of these complete gauge fixings will then be investigated using lattice gauge theory for Yang-Mills theory. It is found that the implications for the infrared, and to some extent the mid-momentum behavior, can be substantial. Going beyond the Yang-Mills case, it turns out that the influence of matter can generally not be neglected. This will be briefly discussed for various types of matter.
Uncertainty of the beam energy measurement in the e+e- collision using Compton backscattering
Mo, Xiao-Hu
2014-10-01
The beam energy is measured in the e+e- collision by using Compton backscattering. The uncertainty of this measurement process is studied by means of analytical formulas, and the specific effects of varying energy spread and energy drift on the systematic uncertainty estimate are also studied with the Monte Carlo sampling technique. These quantitative conclusions are especially important for understanding the uncertainty of the beam energy measurement system.
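For reference, the abstract does not reproduce the underlying kinematics; the standard Compton-edge relations for a head-on collision of an ultrarelativistic beam (with $\omega_0$ the laser photon energy, $\omega_{\max}$ the backscattered-photon edge energy, and $m_e$ the electron mass, in natural units) are:

```latex
% Compton edge for head-on backscattering (ultrarelativistic beam):
\omega_{\max} \simeq \frac{4\,\omega_0 E_b^2}{m_e^2 + 4\,\omega_0 E_b}
% Inverting for the beam energy measured from the edge position:
E_b = \frac{\omega_{\max}}{2}\left(1 + \sqrt{1 + \frac{m_e^2}{\omega_0\,\omega_{\max}}}\right)
```

Propagating the uncertainty of the measured edge position $\omega_{\max}$ through the second relation is the analytical route to the beam-energy uncertainty that the abstract describes.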
Oil price uncertainty in Canada
Energy Technology Data Exchange (ETDEWEB)
Elder, John [Department of Finance and Real Estate, 1272 Campus Delivery, Colorado State University, Fort Collins, CO 80523 (United States); Serletis, Apostolos [Department of Economics, University of Calgary, Calgary, Alberta (Canada)
2009-11-15
Bernanke [Bernanke, Ben S. Irreversibility, uncertainty, and cyclical investment. Quarterly Journal of Economics 98 (1983), 85-106.] shows how uncertainty about energy prices may induce optimizing firms to postpone investment decisions, thereby leading to a decline in aggregate output. Elder and Serletis [Elder, John and Serletis, Apostolos. Oil price uncertainty.] find empirical evidence that uncertainty about oil prices has tended to depress investment in the United States. In this paper we assess the robustness of these results by investigating the effects of oil price uncertainty in Canada. Our results are remarkably similar to existing results for the United States, offering additional evidence that uncertainty about oil prices may provide another explanation for why the sharp oil price declines of 1985 failed to produce rapid output growth. Impulse-response analysis suggests that uncertainty about oil prices may tend to reinforce the negative response of output to positive oil shocks. (author)