Climate change decision-making: Model & parameter uncertainties explored
Energy Technology Data Exchange (ETDEWEB)
Dowlatabadi, H.; Kandlikar, M.; Linville, C.
1995-12-31
A critical aspect of climate change decision-making is the uncertainty in current understanding of the socioeconomic, climatic and biogeochemical processes involved. Decision-making processes are much better informed if these uncertainties are characterized and their implications understood. Quantitative analysis of these uncertainties serves to inform decision makers about the likely outcome of policy initiatives, and helps set priorities for research so that outcome ambiguities faced by the decision-makers are reduced. A family of integrated assessment models of climate change has been developed at Carnegie Mellon. These models are distinguished from other integrated assessment efforts in that they were designed from the outset to characterize and propagate parameter, model, value, and decision-rule uncertainties. The most recent of these models is ICAM 2.1. This model includes representation of the processes of demographics, economic activity, emissions, atmospheric chemistry, climate and sea level change, impacts from these changes, and policies for emissions mitigation and adaptation to change. The model has over 800 objects, of which about one half are used to represent uncertainty. In this paper we show that, when considering parameter uncertainties, the relative contribution of climatic uncertainties is most important, followed by uncertainties in damage calculations, economic uncertainties and direct aerosol forcing uncertainties. When considering model structure uncertainties, we find that the choice of policy is often dominated by the model structure choice rather than by parameter uncertainties.
Wells, J. R.; Kim, J. B.
2011-12-01
Parameters in dynamic global vegetation models (DGVMs) are thought to be weakly constrained and can be a significant source of errors and uncertainties. DGVMs use between 5 and 26 plant functional types (PFTs) to represent the average plant life form in each simulated plot, and each PFT typically has a dozen or more parameters that define the way it uses resources and responds to the simulated growing environment. Sensitivity analysis explores how varying parameters affects the output, but does not fully explore the parameter solution space. The solution space for DGVM parameter values is thought to be complex and non-linear, and multiple sets of acceptable parameters may exist. In published studies, PFT parameters are estimated from the published literature, and often a parameter value is estimated from a single published value. Further, the parameters are "tuned" using somewhat arbitrary, "trial-and-error" methods. BIOMAP is a new DGVM created by fusing the MAPSS biogeography model with Biome-BGC. It represents the vegetation of North America using 26 PFTs. We are using simulated annealing, a global search method, to systematically and objectively explore the solution space for the BIOMAP PFT and system parameters important for plant water use. We defined the boundaries of the solution space by obtaining maximum and minimum values from the published literature and, where those were not available, using +/-20% of current values. We used stratified random sampling to select a set of grid cells representing the vegetation of the conterminous USA. The simulated annealing algorithm is applied to the parameters for spin-up and a transient run during the historical period 1961-1990. A set of parameter values is considered acceptable if the associated simulation run produces a modern potential vegetation distribution map that is as accurate as one produced by trial-and-error calibration. We expect to confirm that the solution space is non-linear and complex, and that
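The global search strategy described in this record can be sketched generically. In the sketch below, the cost function, bounds, and cooling schedule are illustrative stand-ins and not BIOMAP's actual calibration target; a real application would replace the toy cost with a map-accuracy mismatch score.

```python
import math
import random

random.seed(0)  # fixed seed so the illustration is reproducible

def simulated_annealing(cost, bounds, n_iter=5000, t0=1.0, cooling=0.999):
    """Minimize `cost` over a box-bounded parameter space by simulated annealing."""
    x = [random.uniform(lo, hi) for lo, hi in bounds]  # random start inside bounds
    cur = best = cost(x)
    best_x = x[:]
    t = t0
    for _ in range(n_iter):
        # Perturb one randomly chosen parameter, clipping to its bounds.
        i = random.randrange(len(x))
        lo, hi = bounds[i]
        cand = x[:]
        cand[i] = min(hi, max(lo, cand[i] + random.gauss(0.0, 0.1 * (hi - lo))))
        c = cost(cand)
        # Always accept improvements; accept worse moves with Boltzmann probability.
        if c < cur or random.random() < math.exp((cur - c) / t):
            x, cur = cand, c
            if cur < best:
                best, best_x = cur, x[:]
        t *= cooling  # geometric cooling schedule
    return best_x, best

# Toy cost standing in for a model-vs-observation mismatch: distance from a
# hypothetical "true" optimum at (0.3, 0.7).
params, score = simulated_annealing(
    lambda p: (p[0] - 0.3) ** 2 + (p[1] - 0.7) ** 2,
    bounds=[(0.0, 1.0), (0.0, 1.0)],
)
```

The acceptance of occasional uphill moves is what lets the search escape local minima in a complex, non-linear solution space, which a pure trial-and-error descent cannot do.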
Exploring uncertainty of Amazon dieback in a perturbed parameter Earth system ensemble.
Boulton, Chris A; Booth, Ben B B; Good, Peter
2017-12-01
The future of the Amazon rainforest is unknown due to uncertainties in projected climate change and the response of the forest to this change (forest resiliency). Here, we explore the effect of some uncertainties in climate and land surface processes on the future of the forest, using a perturbed physics ensemble of HadCM3C. This is the first time Amazon forest changes are presented using an ensemble exploring both land vegetation processes and physical climate feedbacks in a fully coupled modelling framework. Under three different emissions scenarios, we measure the change in the forest coverage by the end of the 21st century (the transient response) and make a novel adaptation to a previously used method known as "dry-season resilience" to predict the long-term committed response of the forest, should the state of the climate remain constant past 2100. Our analysis of this ensemble suggests that there will be a high chance of greater forest loss on longer timescales than is realized by 2100, especially for mid-range and low emissions scenarios. In both the transient and predicted committed responses, there is an increasing uncertainty in the outcome of the forest as the strength of the emissions scenarios increases. It is important to note, however, that very few of the simulations produce future forest loss of the magnitude previously shown under the standard model configuration. We find that low optimum temperatures for photosynthesis and a high minimum leaf area index needed for the forest to compete for space appear to be precursors for dieback. We then decompose the uncertainty into that associated with future climate change and that associated with forest resiliency, finding that it is important to reduce the uncertainty in both of these if we are to better determine the Amazon's outcome. © 2017 John Wiley & Sons Ltd.
Dealing with exploration uncertainties
International Nuclear Information System (INIS)
Capen, E.
1992-01-01
Exploration for oil and gas should fulfill the most adventurous in their quest for excitement and surprise. This paper tries to cover that tall order. The authors will touch on the magnitude of the uncertainty (which is far greater than in most other businesses), the effects of not knowing target sizes very well, how to build uncertainty into analyses naturally, how to tie reserves and chance estimates to economics, and how to look at the portfolio effect of an exploration program. With no apologies, the authors will be using a different language for some readers - the language of uncertainty, which means probability and statistics. These tools allow one to combine largely subjective exploration information with the more analytical data from the engineering and economic side.
Failure probability under parameter uncertainty.
Gerrard, R; Tsanakas, A
2011-05-01
In many problems of risk analysis, failure is equivalent to the event of a random risk factor exceeding a given threshold. Failure probabilities can be controlled if a decision-maker is able to set the threshold at an appropriate level. This abstract situation applies, for example, to environmental risks with infrastructure controls; to supply chain risks with inventory controls; and to insurance solvency risks with capital controls. However, uncertainty around the distribution of the risk factor implies that parameter error will be present and the measures taken to control failure probabilities may not be effective. We show that parameter uncertainty increases the probability (understood as expected frequency) of failures. For a large class of loss distributions, arising from increasing transformations of location-scale families (including the log-normal, Weibull, and Pareto distributions), the article shows that failure probabilities can be exactly calculated, as they are independent of the true (but unknown) parameters. Hence it is possible to obtain an explicit measure of the effect of parameter uncertainty on failure probability. Failure probability can be controlled in two different ways: (1) by reducing the nominal required failure probability, depending on the size of the available data set, and (2) by modifying the distribution itself that is used to calculate the risk control. Approach (1) corresponds to a frequentist/regulatory view of probability, while approach (2) is consistent with a Bayesian/personalistic view. We furthermore show that the two approaches are consistent in achieving the required failure probability. Finally, we briefly discuss the effects of data pooling and its systemic risk implications. © 2010 Society for Risk Analysis.
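The central effect the record describes — plug-in parameter estimates producing failures more often than the nominal level — can be reproduced with a small Monte Carlo experiment on a log-normal risk factor. The sample size, nominal level, and repetition count below are arbitrary illustration choices, not values from the article.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(0)
nd = NormalDist()
z99 = nd.inv_cdf(0.99)                     # nominal failure probability: 1%
mu, sigma, n, reps = 0.0, 1.0, 30, 5000    # true (unknown) log-normal parameters

exceedances = []
for _ in range(reps):
    logs = rng.normal(mu, sigma, n)        # log of n observed losses
    m, s = logs.mean(), logs.std(ddof=1)   # estimated parameters
    t = m + s * z99                        # plug-in log-threshold from the sample
    # True probability of exceeding that threshold under the actual parameters.
    exceedances.append(1.0 - nd.cdf((t - mu) / sigma))

realized = float(np.mean(exceedances))     # expected frequency of failure
```

Averaged over repeated samples, `realized` comes out above the nominal 1%: estimation error in the threshold systematically inflates the expected failure frequency, which is the motivation for the two corrective approaches discussed in the abstract.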
Energy Technology Data Exchange (ETDEWEB)
Denman, Matthew R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Brooks, Dusty Marie [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2015-08-01
Sandia National Laboratories (SNL) has conducted an uncertainty analysis (UA) on the Fukushima Daiichi Unit 1 (1F1) accident progression with the MELCOR code. Volume I of the 1F1 UA discusses the physical modeling details and time history results of the UA. Volume II of the 1F1 UA discusses the statistical viewpoint. The model used was developed for a previous accident reconstruction investigation jointly sponsored by the US Department of Energy (DOE) and Nuclear Regulatory Commission (NRC). The goal of this work was to perform a focused evaluation of uncertainty in core damage progression behavior and its effect on key figures-of-merit (e.g., hydrogen production, fraction of intact fuel, vessel lower head failure) and in doing so assess the applicability of traditional sensitivity analysis techniques.
Uncertainties of Molecular Structural Parameters
International Nuclear Information System (INIS)
Császár, Attila G.
2014-01-01
The most fundamental property of a molecule is its three-dimensional (3D) structure formed by its constituent atoms (see, e.g., the perfectly regular hexagon associated with benzene). It is generally accepted that knowledge of the detailed structure of a molecule is a prerequisite to determine most of its other properties. What nowadays is a seemingly simple concept, namely that molecules have a structure, was introduced into chemistry in the 19th century. Naturally, the word changed its meaning over the years. Elemental analysis, simple structural formulae, two-dimensional and then 3D structures mark the development of the concept to its modern meaning. When quantum physics and quantum chemistry emerged in the 1920s, the simple concept associating structure with a three-dimensional object seemingly gained firm support. Nevertheless, what seems self-explanatory today is in fact not so straightforward to justify within quantum mechanics. In quantum chemistry the concept of an equilibrium structure of a molecule is tied to the Born-Oppenheimer approximation, but beyond the adiabatic separation of the motions of the nuclei and the electrons the meaning of a structure is still slightly obscured. Putting the conceptual difficulties aside, there are several experimental, empirical, and theoretical techniques to determine structures of molecules. One particular problem, strongly related to the question of uncertainties of “measured” or “computed” structural parameters, is that all the different techniques correspond to different structure definitions and thus yield different structural parameters. Experiments probing the structure of molecules rely on a number of structure definitions, to name just a few: r_0, r_g, r_a, r_s, r_m, etc., and one should also consider the temperature dependence of most of these structural parameters which differ from each other in the way the rovibrational motions of the molecules are treated and how the averaging is
Habitable zone dependence on stellar parameter uncertainties
International Nuclear Information System (INIS)
Kane, Stephen R.
2014-01-01
An important property of exoplanetary systems is the extent of the Habitable Zone (HZ), defined as that region where water can exist in a liquid state on the surface of a planet with sufficient atmospheric pressure. Both ground- and space-based observations have revealed a plethora of confirmed exoplanets and exoplanetary candidates, most notably from the Kepler mission using the transit detection technique. Many of these detected planets lie within the predicted HZ of their host star. However, as is the case with the derived properties of the planets themselves, the HZ boundaries depend on how well we understand the host star. Here we quantify the dependence of HZ boundary uncertainties on the parameter uncertainties of the host star. We examine the distribution of stellar parameter uncertainties from confirmed exoplanet hosts and Kepler candidate hosts and translate these into HZ boundary uncertainties. We apply this to several known systems with an HZ planet to determine the uncertainty in their HZ status.
Habitable zone dependence on stellar parameter uncertainties
Energy Technology Data Exchange (ETDEWEB)
Kane, Stephen R., E-mail: skane@sfsu.edu [Department of Physics and Astronomy, San Francisco State University, 1600 Holloway Avenue, San Francisco, CA 94132 (United States)
2014-02-20
An important property of exoplanetary systems is the extent of the Habitable Zone (HZ), defined as that region where water can exist in a liquid state on the surface of a planet with sufficient atmospheric pressure. Both ground- and space-based observations have revealed a plethora of confirmed exoplanets and exoplanetary candidates, most notably from the Kepler mission using the transit detection technique. Many of these detected planets lie within the predicted HZ of their host star. However, as is the case with the derived properties of the planets themselves, the HZ boundaries depend on how well we understand the host star. Here we quantify the dependence of HZ boundary uncertainties on the parameter uncertainties of the host star. We examine the distribution of stellar parameter uncertainties from confirmed exoplanet hosts and Kepler candidate hosts and translate these into HZ boundary uncertainties. We apply this to several known systems with an HZ planet to determine the uncertainty in their HZ status.
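The propagation step in such analyses can be illustrated with the usual inverse-square relation between stellar luminosity and the orbital distance at which a given boundary flux is received. The boundary fluxes and stellar values below are assumptions for illustration, not numbers from the paper.

```python
import math

def hz_distance(lum, s_eff):
    """Orbital distance (AU) at which a star of luminosity `lum` (solar units)
    delivers the boundary flux `s_eff` (in units of the solar constant):
    d = sqrt(L / S_eff)."""
    return math.sqrt(lum / s_eff)

# Illustrative inner/outer boundary fluxes and an assumed stellar luminosity
# with a 1-sigma uncertainty.
s_inner, s_outer = 1.1, 0.35
lum, dlum = 0.8, 0.2

inner = hz_distance(lum, s_inner)
outer = hz_distance(lum, s_outer)
# First-order propagation of the luminosity uncertainty into each boundary:
rel_unc = 0.5 * dlum / lum   # sigma_d / d = (1/2) * sigma_L / L
```

Because the distance scales as the square root of luminosity, a fractional luminosity error maps to half that fraction in each HZ boundary, so a planet near a boundary can slip in or out of the HZ within the stellar error bars.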
Schlegel, N.; Seroussi, H. L.; Boening, C.; Larour, E. Y.; Limonadi, D.; Schodlok, M.; Watkins, M. M.
2017-12-01
The Jet Propulsion Laboratory-University of California at Irvine Ice Sheet System Model (ISSM) is a thermo-mechanical 2D/3D parallelized finite element software used to physically model the continental-scale flow of ice at high resolutions. Embedded into ISSM are uncertainty quantification (UQ) tools, based on the Design Analysis Kit for Optimization and Terascale Applications (DAKOTA) software. ISSM-DAKOTA offers various UQ methods for the investigation of how errors in model input impact uncertainty in simulation results. We utilize these tools to regionally sample model input and key parameters, based on specified bounds of uncertainty, and run a suite of continental-scale 100-year ISSM forward simulations of the Antarctic Ice Sheet. Resulting diagnostics (e.g., spread in local mass flux and regional mass balance) inform our conclusions about which parameters and/or forcings have the greatest impact on century-scale model simulations of ice sheet evolution. The results allow us to prioritize the key datasets and measurements that are critical for the minimization of ice sheet model uncertainty. Overall, we find that Antarctica's total sea level contribution is strongly affected by grounding line retreat, which is driven by the magnitude of ice shelf basal melt rates and by errors in bedrock topography. In addition, results suggest that after 100 years of simulation, Thwaites glacier is the most significant source of model uncertainty, and its drainage basin has the largest potential for future sea level contribution. This work is performed at and supported by the California Institute of Technology's Jet Propulsion Laboratory. Supercomputing time is also supported through a contract with the National Aeronautics and Space Administration's Cryosphere program.
Robustness of dynamic systems with parameter uncertainties
Balemi, S; Truöl, W
1992-01-01
Robust Control is one of the fastest growing and promising areas of research today. In many practical systems there exist uncertainties which have to be considered in the analysis and design of control systems. In the last decade methods were developed for dealing with dynamic systems with unstructured uncertainties, such as H∞- and ℓ1-optimal control. For systems with parameter uncertainties, the seminal paper of V. L. Kharitonov has triggered a large amount of very promising research. An international workshop dealing with all aspects of robust control was successfully organized by S. P. Bhattacharyya and L. H. Keel in San Antonio, Texas, USA in March 1991. We organized the second international workshop in this area in Ascona, Switzerland in April 1992. However, this second workshop was restricted to robust control of dynamic systems with parameter uncertainties, with the objective to concentrate on some aspects of robust control. This book contains a collection of papers presented at the International W...
Parameter Uncertainty for Repository Thermal Analysis
Energy Technology Data Exchange (ETDEWEB)
Hardin, Ernest [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hadgu, Teklu [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Greenberg, Harris [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Dupont, Mark [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)
2015-10-01
This report is one follow-on to a study of reference geologic disposal design concepts (Hardin et al. 2011a). Based on an analysis of maximum temperatures, that study concluded that certain disposal concepts would require extended decay storage prior to emplacement, or the use of small waste packages, or both. The study used nominal values for thermal properties of host geologic media and engineered materials, demonstrating the need for uncertainty analysis to support the conclusions. This report is a first step that identifies the input parameters of the maximum temperature calculation, surveys published data on measured values, uses an analytical approach to determine which parameters are most important, and performs an example sensitivity analysis. Using results from this first step, temperature calculations planned for FY12 can focus on only the important parameters, and can use the uncertainty ranges reported here. The survey of published information on thermal properties of geologic media and engineered materials is intended to be sufficient for use in generic calculations to evaluate the feasibility of reference disposal concepts. A full compendium of literature data is beyond the scope of this report. The term “uncertainty” is used here to represent both measurement uncertainty and spatial variability, or variability across host geologic units. For the most important parameters (e.g., buffer thermal conductivity) the extent of literature data surveyed samples these different forms of uncertainty and variability. Finally, this report is intended to be one chapter or section of a larger FY12 deliverable summarizing all the work on design concepts and thermal load management for geologic disposal (M3FT-12SN0804032, due 15Aug2012).
On economic resolution and uncertainty in hydrocarbon exploration assessment
International Nuclear Information System (INIS)
Lerche, I.
1998-01-01
When assessment of parameters of a decision tree for a hydrocarbon exploration project can lie within estimated ranges, it is shown that the ensemble average expected value has two sorts of uncertainties: one is due to the expected value of each realization of the decision tree being different from the average; the second is due to the intrinsic variance of each decision tree. The total standard error of the average expected value combines both sorts. The use of additional statistical measures, such as standard error, volatility, and cumulative probability of making a profit, provides insight into the selection process, leading to a more appropriate decision. In addition, the use of relative contributions and relative importance for the uncertainty measures guides one to a better determination of those parameters that dominantly influence the total ensemble uncertainty. In this way one can concentrate resources on efforts to minimize the uncertainty ranges of such dominant parameters. A numerical illustration is provided to indicate how such calculations can be performed simply with a hand calculator. (author)
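The two sorts of uncertainty combine as a between-realization variance plus the mean intrinsic variance. A hand-calculator-scale sketch with two hypothetical realizations of a two-branch tree (all numbers invented for illustration, not taken from the paper):

```python
# Two realizations of a two-branch exploration decision tree: success with
# probability p yields value v; failure costs c (values are hypothetical).
trees = [
    {"p": 0.30, "v": 120.0, "c": 15.0},
    {"p": 0.20, "v": 100.0, "c": 20.0},
]

evs, variances = [], []
for t in trees:
    p, v, c = t["p"], t["v"], t["c"]
    ev = p * v - (1 - p) * c                            # EV of this realization
    var = p * (v - ev) ** 2 + (1 - p) * (-c - ev) ** 2  # intrinsic variance
    evs.append(ev)
    variances.append(var)

n = len(trees)
ensemble_ev = sum(evs) / n                              # ensemble average EV
between = sum((e - ensemble_ev) ** 2 for e in evs) / n  # realizations differ
within = sum(variances) / n                             # mean intrinsic variance
total_se = (between + within) ** 0.5                    # total standard error
```

The `between` term captures that each realization's EV differs from the ensemble average, and the `within` term captures each tree's own outcome variance; their sum gives the total standard error the abstract refers to.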
Neglect Of Parameter Estimation Uncertainty Can Significantly Overestimate Structural Reliability
Directory of Open Access Journals (Sweden)
Rózsás Árpád
2015-12-01
Parameter estimation uncertainty is often neglected in reliability studies, i.e. point estimates of distribution parameters are used for representative fractiles and in probabilistic models. A numerical example examines the effect of this uncertainty on structural reliability using Bayesian statistics. The study reveals that neglecting parameter estimation uncertainty might lead to an order-of-magnitude underestimation of the failure probability.
Uncertainty of Modal Parameters Estimated by ARMA Models
DEFF Research Database (Denmark)
Jensen, Jakob Laigaard; Brincker, Rune; Rytter, Anders
In this paper the uncertainties of identified modal parameters such as eigenfrequencies and damping ratios are assessed. From the measured response of dynamically excited structures the modal parameters may be identified and provide important structural knowledge. However the uncertainty of the param...
Radiotherapy Dose Fractionation under Parameter Uncertainty
International Nuclear Information System (INIS)
Davison, Matt; Kim, Daero; Keller, Harald
2011-01-01
In radiotherapy, radiation is directed to damage a tumor while avoiding surrounding healthy tissue. Tradeoffs ensue because the dose cannot be exactly shaped to the tumor. It is particularly important to ensure that sensitive biological structures near the tumor are not damaged more than a certain amount. Biological tissue is known to have a nonlinear response to incident radiation. The linear quadratic dose response model, which requires the specification of two clinically and experimentally observed response coefficients, is commonly used to model this effect. This model yields an optimization problem giving two different types of optimal dose sequences (fractionation schedules). Which fractionation schedule is preferred depends on the response coefficients. These coefficients are not precisely known and may differ from patient to patient. Because of this, not only the expected outcomes but also the uncertainty around these outcomes are important, and it might not be prudent to select the strategy with the best expected outcome.
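The dependence of the preferred schedule on the response coefficients can be seen with the standard biologically effective dose (BED) form of the linear-quadratic model. The schedules and α/β values below are textbook-style illustrations, not the paper's actual optimization problem.

```python
def bed(n_fractions, dose_per_fraction, alpha_beta):
    """Biologically effective dose (Gy) under the linear-quadratic model:
    BED = n * d * (1 + d / (alpha/beta))."""
    return n_fractions * dose_per_fraction * (1.0 + dose_per_fraction / alpha_beta)

# Two schedules with the same physical dose (60 Gy): 30 x 2 Gy vs 20 x 3 Gy.
tumour = 10.0        # assumed alpha/beta for a typical tumour (Gy)
late_tissue = 3.0    # assumed alpha/beta for late-responding normal tissue (Gy)

# Larger fractions raise tumour BED a little but normal-tissue BED a lot, so the
# preferred schedule hinges on the (uncertain) response coefficients.
tumour_gain = bed(20, 3.0, tumour) - bed(30, 2.0, tumour)
tissue_cost = bed(20, 3.0, late_tissue) - bed(30, 2.0, late_tissue)
```

Shifting either α/β value within its patient-to-patient uncertainty range changes this trade-off, which is why the abstract argues against optimizing on expected outcome alone.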
Uncertainty of Modal Parameters Estimated by ARMA Models
DEFF Research Database (Denmark)
Jensen, Jacob Laigaard; Brincker, Rune; Rytter, Anders
1990-01-01
In this paper the uncertainties of identified modal parameters such as eigenfrequencies and damping ratios are assessed. From the measured response of dynamically excited structures the modal parameters may be identified and provide important structural knowledge. However the uncertainty of the parameters...... by simulation study of a lightly damped single degree of freedom system. Identification by ARMA models has been chosen as the system identification method. It is concluded that both the sampling interval and number of sampled points may play a significant role with respect to the statistical errors. Furthermore......, it is shown that the model errors may also contribute significantly to the uncertainty....
Variability and Uncertainties of Key Hydrochemical Parameters for SKB Sites
Energy Technology Data Exchange (ETDEWEB)
Bath, Adrian [Intellisci Ltd, Willoughby on the Wolds, Loughborough (United Kingdom); Hermansson, Hans-Peter [Studsvik Nuclear AB, Nykoeping (Sweden)
2006-12-15
being able to characterise them thermodynamically. Geochemical modelling with the MEDUSA program and the HYDRA thermodynamic database was used to construct a set of Eh/pH diagrams for the iron and sulphur system in Forsmark groundwaters. Geochemical modelling with the PHREEQCI program was used for two purposes connected with uncertainties in key hydrochemical parameters: (i) to adjust pH to compensate for CO{sub 2} outgassing, on the basis of an assumption that in situ groundwater should be at equilibrium with calcite, and (ii) to evaluate the hypothetical Eh on the basis of assumed control by the Fe{sup 3+}/Fe{sup 2+}, Fe(OH){sub 3}/Fe{sup 2+} and SO{sub 4}{sup 2-}/HS{sup -} redox couples, so as to assess evidence for control and buffering of redox and for reactivity of other redox-sensitive parameters. These calculations were carried out with reported groundwater data from the Forsmark and Simpevarp sites and also from the Aespoe HRL. It is emphasised that the purpose of these calculations is to explore and illustrate the theoretical basis of geochemical interpretations, and to understand what the assumptions, simplifications and uncertainties are in interpreting hydrochemical data, especially redox and pH. Deviations of {+-}10 mV are attributable to minor differences in thermodynamic data and other model inputs. Some of the conclusions from geochemical modelling are: (i) pH data, when adjusted to compensate for CO{sub 2} outgassing, are typically 0.2 to 0.4 pH units lower than the measured values, which suggests one aspect of uncertainty in measured pH values. (ii) Most measured pH/Eh points for Forsmark are located close to the HS{sup -}/SO{sub 4}{sup 2-} line in an Eh/pH diagram, suggesting that the couple HS{sup -}/SO{sub 4}{sup 2-} controls Eh at normal SO{sub 4}{sup 2-} concentrations (above about 0.5 mM and around 5 mM). (iii) Eh values calculated from the couples SO{sub 4}{sup 2-}/HS{sup -} and Fe(OH){sub 3}/Fe{sup 2+} are rather close to the measured Eh in most cases.
In contrast, the Eh calculated from the Fe{sup 3+}/Fe{sup 2
Effect of uncertainty parameters on graphene sheets Young's modulus prediction
International Nuclear Information System (INIS)
Sahlaoui, Habib; Sidhom Habib; Guedri, Mohamed
2013-01-01
Software based on the molecular structural mechanics approach (MSMA) and using the finite element method (FEM) has been developed to predict the Young's modulus of graphene sheets. The results obtained have been compared to results available in the literature, and good agreement is found when the same values of the uncertainty parameters are used. The sensitivity of the models to their uncertainty parameters has been investigated using a stochastic finite element method (SFEM). The values of the uncertainty parameters, such as the molecular mechanics force field constants k_r and k_θ, the thickness (t) of a graphene sheet and the length (L_B) of a carbon-carbon bond, have been collected from the literature. Strong sensitivities of 91% to the thickness and of 21% to the stretching force constant (k_r) are found. These results explain the large spread among predicted Young's modulus values for graphene sheets and their disagreement with experimental results.
Incorporating model parameter uncertainty into inverse treatment planning
International Nuclear Information System (INIS)
Lian Jun; Xing Lei
2004-01-01
Radiobiological treatment planning depends not only on the accuracy of the models describing the dose-response relation of different tumors and normal tissues but also on the accuracy of the tissue-specific radiobiological parameters in these models. Whereas the general formalism remains the same, different sets of model parameters lead to different solutions and thus critically determine the final plan. Here we describe an inverse planning formalism that includes model parameter uncertainties. This is made possible by using a statistical analysis-based framework developed by our group. In this formalism, the uncertainties of model parameters, such as the parameter a that describes the tissue-specific effect in the equivalent uniform dose (EUD) model, are expressed by a probability density function and are included in the dose optimization process. We found that the final solution strongly depends on the distribution functions of the model parameters. Considering that currently available models for computing biological effects of radiation are simplistic, and the clinical data used to derive the models are sparse and of questionable quality, the proposed technique provides an effective tool to minimize, in a statistical sense, the effect caused by these uncertainties. With the incorporation of the uncertainties, the technique has the potential to maximally utilize the available radiobiological knowledge for better IMRT treatment
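A minimal sketch of the idea, assuming the generalized-mean form of EUD and a normal density over the tissue parameter a; the voxel doses and the distribution below are invented for illustration, not taken from the paper.

```python
import numpy as np

def eud(doses, a):
    """Equivalent uniform dose: generalized mean of voxel doses with exponent a."""
    d = np.asarray(doses, dtype=float)
    return float(np.mean(d ** a) ** (1.0 / a))

rng = np.random.default_rng(1)
doses = [60.0, 62.0, 58.0, 40.0]          # illustrative voxel doses (Gy)
a_samples = rng.normal(-10.0, 2.0, 1000)  # uncertain 'a' for a tumour (a < 0)

# Propagate the parameter density into a distribution of EUD values; an
# optimizer can then score a plan on this distribution instead of on a single
# point estimate of a.
euds = [eud(doses, a) for a in a_samples]
mean_eud = float(np.mean(euds))
```

For strongly negative a, the cold spot (40 Gy voxel) dominates the EUD, so the spread of `euds` directly reflects how parameter uncertainty translates into uncertainty about plan quality.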
Exploring Heterogeneous Multicore Architectures for Advanced Embedded Uncertainty Quantification.
Energy Technology Data Exchange (ETDEWEB)
Phipps, Eric T.; Edwards, Harold C.; Hu, Jonathan J.
2014-09-01
We explore rearrangements of classical uncertainty quantification methods with the aim of achieving higher aggregate performance for uncertainty quantification calculations on emerging multicore and manycore architectures. We show that a rearrangement of the stochastic Galerkin method leads to improved performance and scalability on several computational architectures, whereby uncertainty information is propagated at the lowest levels of the simulation code, improving memory access patterns, exposing new dimensions of fine-grained parallelism, and reducing communication. We also develop a general framework for implementing such rearrangements for a diverse set of uncertainty quantification algorithms as well as the computational simulation codes to which they are applied.
Uncertainty in dual permeability model parameters for structured soils
Arora, B.; Mohanty, B. P.; McGuire, J. T.
2012-01-01
Successful application of dual permeability models (DPM) to predict contaminant transport is contingent upon measured or inversely estimated soil hydraulic and solute transport parameters. The difficulty of uniquely identifying parameters for the additional macropore and matrix-macropore interface regions, and questions about the requisite experimental data for DPMs, have not been resolved to date. Therefore, this study quantifies uncertainty in dual permeability model parameters of experimental soil columns with different macropore distributions (single macropore, and low- and high-density multiple macropores). Uncertainty evaluation is conducted using adaptive Markov chain Monte Carlo (AMCMC) and conventional Metropolis-Hastings (MH) algorithms while assuming 10 out of 17 parameters to be uncertain or random. Results indicate that AMCMC resolves parameter correlations and exhibits fast convergence for all DPM parameters, while MH displays large posterior correlations for various parameters. This study demonstrates that the choice of parameter sampling algorithm is paramount in obtaining unique DPM parameters when information on the covariance structure is lacking; otherwise additional information on parameter correlations must be supplied to resolve the problem of equifinality of DPM parameters. This study also highlights the placement and significance of the matrix-macropore interface in flow experiments of soil columns with different macropore densities. Histograms for certain soil hydraulic parameters display tri-modal characteristics, implying that macropores drain first, followed by the interface region and then by pores of the matrix domain in drainage experiments. Results indicate that the hydraulic properties and behavior of the matrix-macropore interface are not only a function of the saturated hydraulic conductivity of the macropore-matrix interface (Ksa) and macropore tortuosity (lf) but also of other parameters of the matrix and macropore domains.
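The Metropolis-Hastings step at the core of both samplers can be sketched as a random walk; the two-parameter Gaussian "posterior" below is a toy stand-in for the actual DPM likelihood, which would require running the flow model for each proposed parameter set.

```python
import numpy as np

def metropolis(log_post, x0, step, n_samples, seed=2):
    """Random-walk Metropolis-Hastings: propose a Gaussian jump, accept with
    probability min(1, ratio of posterior densities)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    lp = log_post(x)
    chain = np.empty((n_samples, x.size))
    for k in range(n_samples):
        cand = x + rng.normal(0.0, step, size=x.size)
        lp_cand = log_post(cand)
        if np.log(rng.uniform()) < lp_cand - lp:   # accept/reject in log space
            x, lp = cand, lp_cand
        chain[k] = x                               # repeat current state on reject
    return chain

# Toy posterior: independent standard normals for two "hydraulic parameters".
chain = metropolis(lambda p: -0.5 * float(p @ p), x0=[3.0, -3.0], step=0.8,
                   n_samples=5000)
```

The adaptive variant (AMCMC) differs mainly in tuning the proposal covariance from the chain history, which is what lets it resolve the parameter correlations the plain random walk struggles with.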
Parameter Uncertainty on AGCM-simulated Tropical Cyclones
He, F.
2015-12-01
This work studies parameter uncertainty in tropical cyclone (TC) simulations in Atmospheric General Circulation Models (AGCMs) using the Reed-Jablonowski TC test case, implemented in the Community Atmosphere Model (CAM). It examines the impact of 24 parameters across the physical parameterization schemes that represent convection, turbulence, precipitation and cloud processes in AGCMs. A one-at-a-time (OAT) sensitivity analysis first quantifies their relative importance for TC simulations and identifies the key parameters for six TC characteristics: intensity, precipitation, longwave cloud radiative forcing (LWCF), shortwave cloud radiative forcing (SWCF), cloud liquid water path (LWP) and ice water path (IWP). Then, 8 physical parameters are chosen and perturbed using the Latin hypercube sampling (LHS) method. The comparison between the OAT and LHS ensemble runs shows that the simulated TC intensity is mainly affected by the parcel fractional mass entrainment rate in the Zhang-McFarlane (ZM) deep convection scheme. The nonlinear interactive effect among different physical parameters is negligible for simulated TC intensity. In contrast, this nonlinear interactive effect plays a significant role in the other simulated TC characteristics (precipitation, LWCF, SWCF, LWP and IWP) and greatly enlarges their simulated uncertainties. The statistical emulator Extended Multivariate Adaptive Regression Splines (EMARS) is applied to characterize the response functions for the nonlinear effects. Last, we find that the intensity uncertainty caused by physical parameters is comparable in degree to the uncertainty caused by model structure (e.g., grid) and initial conditions (e.g., sea surface temperature, atmospheric moisture). These findings suggest the importance of using the perturbed physics ensemble (PPE) method to revisit tropical cyclone prediction under climate change scenarios.
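The two experimental designs contrasted above differ in how they fill parameter space: OAT perturbs one parameter at a time around a default, while LHS stratifies every parameter jointly. A minimal sketch of both designs follows; the two parameter names, bounds, and defaults are hypothetical stand-ins, not the actual CAM tuning values.

```python
import numpy as np

rng = np.random.default_rng(42)

def latin_hypercube(n_samples, bounds):
    """Stratified LHS: exactly one sample per equal-probability stratum,
    per parameter, with strata paired at random across parameters."""
    d = len(bounds)
    ranks = rng.permuted(np.tile(np.arange(n_samples), (d, 1)), axis=1).T
    u = (ranks + rng.random((n_samples, d))) / n_samples   # jitter in stratum
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    return lo + u * (hi - lo)

def one_at_a_time(defaults, bounds):
    """OAT design: each parameter pushed to its bounds, others at default."""
    runs = []
    for i, (lo, hi) in enumerate(bounds):
        for v in (lo, hi):
            x = list(defaults)
            x[i] = v
            runs.append(x)
    return np.array(runs)

# Hypothetical bounds for two convection-scheme parameters.
bounds = [(0.5e-3, 2.0e-3), (0.05, 0.6)]
lhs = latin_hypercube(100, bounds)
oat = one_at_a_time([1.0e-3, 0.1], bounds)
```

OAT cannot expose interactions between parameters, which is why the abstract's nonlinear interactive effects only appear in the LHS ensemble.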
Impulsive control of permanent magnet synchronous motors with parameters uncertainties
International Nuclear Information System (INIS)
Li Dong; Zhang Xiaohong; Wang Shilong; Yan Dan; Wang Hui
2008-01-01
The permanent magnet synchronous motor (PMSM) may exhibit chaotic behaviour for uncertain values of its parameters or under certain working conditions, which threatens the secure and stable operation of motor-driven systems. It is therefore important to study methods of controlling or suppressing chaos in PMSMs. In this paper, the robust stability of the PMSM with parameter uncertainties is investigated. After the uncertain matrices representing the variable system parameters are formulated through matrix analysis, a novel asymptotic stability criterion is established. Illustrative examples are also given to show the effectiveness of the obtained results.
Model uncertainty in financial markets : Long run risk and parameter uncertainty
de Roode, F.A.
2014-01-01
Uncertainty surrounding key parameters of financial markets, such as the inflation and equity risk premium, constitutes a major risk for institutional investors with long investment horizons. Hedging the investors' inflation exposure can be challenging due to the lack of domestic inflation-linked
On the EU approach for DEMO architecture exploration and dealing with uncertainties
Energy Technology Data Exchange (ETDEWEB)
Coleman, M., E-mail: matti.coleman@euro-fusion.org [EUROfusion Consortium, Boltzmannstraße 2, 85748 Garching (Germany); CCFE Fusion Association, Culham Science Centre, Abingdon, Oxfordshire OX14 3DB (United Kingdom); Maviglia, F.; Bachmann, C. [EUROfusion Consortium, Boltzmannstraße 2, 85748 Garching (Germany); Anthony, J. [CCFE Fusion Association, Culham Science Centre, Abingdon, Oxfordshire OX14 3DB (United Kingdom); Federici, G. [EUROfusion Consortium, Boltzmannstraße 2, 85748 Garching (Germany); Shannon, M. [EUROfusion Consortium, Boltzmannstraße 2, 85748 Garching (Germany); CCFE Fusion Association, Culham Science Centre, Abingdon, Oxfordshire OX14 3DB (United Kingdom); Wenninger, R. [EUROfusion Consortium, Boltzmannstraße 2, 85748 Garching (Germany); Max-Planck-Institut für Plasmaphysik, 85748 Garching (Germany)
2016-11-01
Highlights: • The issue of epistemic uncertainties in the DEMO design basis is described. • An approach to tackle uncertainty by investigating plant architectures is proposed. • The first wall heat load uncertainty is addressed following the proposed approach. - Abstract: One of the difficulties inherent in designing a future fusion reactor is dealing with uncertainty. As the major step between ITER and the commercial exploitation of nuclear fusion energy, DEMO will have to address many challenges – the natures of which are still not fully known. Unlike fission reactors, fusion reactors suffer from the intrinsic complexity of the tokamak (numerous interdependent system parameters) and from the dependence of plasma physics on scale – prohibiting design exploration founded on incremental progression and small-scale experimentation. For DEMO, this means that significant technical uncertainties will exist for some time to come, and a systems engineering design exploration approach must be developed to explore the reactor architecture when faced with these uncertainties. Important uncertainties in the context of fusion reactor design are discussed and a strategy for dealing with these is presented, treating the uncertainty in the first wall loads as an example.
Uncertainty of Doppler reactivity worth due to uncertainties of JENDL-3.2 resonance parameters
Energy Technology Data Exchange (ETDEWEB)
Zukeran, Atsushi [Hitachi Ltd., Hitachi, Ibaraki (Japan). Power and Industrial System R and D Div.; Hanaki, Hiroshi; Nakagawa, Tuneo; Shibata, Keiichi; Ishikawa, Makoto
1998-03-01
An analytical formula for the Resonance Self-shielding Factor (f-factor) is derived from the resonance integral (J-function) based on the NR approximation, and an analytical expression for the Doppler reactivity worth (ρ) is also obtained using this result. Uncertainties of the f-factor and the Doppler reactivity worth are evaluated on the basis of sensitivity coefficients to the resonance parameters. The uncertainty of the Doppler reactivity worth at 487 K is about 4% for the PNC Large Fast Breeder Reactor. (author)
Statistical approach for uncertainty quantification of experimental modal model parameters
DEFF Research Database (Denmark)
Luczak, M.; Peeters, B.; Kahsin, M.
2014-01-01
Composite materials are widely used in the manufacture of aerospace and wind energy structural components. These load-carrying structures are subjected to dynamic time-varying loading conditions. Robust structural dynamics identification procedures impose tight constraints on the quality of modal models. This paper aims at a systematic approach for uncertainty quantification of the parameters of modal models estimated from experimentally obtained data. Statistical analysis of modal parameters is implemented to derive an assessment of the entire modal model uncertainty measure. Investigated structures represent different complexity levels, ranging from coupon through sub-component up to fully assembled aerospace and wind energy structural components made of composite materials. The proposed method is demonstrated on two application cases of a small and a large wind turbine blade.
Monte Carlo parameter studies and uncertainty analyses with MCNP5
International Nuclear Information System (INIS)
Brown, F. B.; Sweezy, J. E.; Hayes, R.
2004-01-01
A software tool called mcnp_pstudy has been developed to automate the setup, execution, and collection of results from a series of MCNP5 Monte Carlo calculations. This tool provides a convenient means of performing parameter studies, total uncertainty analyses, parallel job execution on clusters, stochastic geometry modeling, and other types of calculations where a series of MCNP5 jobs must be performed with varying problem input specifications. (authors)
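At its core, a tool of this kind expands a templated input deck over a grid of parameter values and launches one job per combination. The miniature sketch below shows only that expansion step; the template fields and values are invented for illustration and are not MCNP input syntax.

```python
import itertools

# Hypothetical one-line "input deck" with substitutable parameters.
TEMPLATE = "radius={radius} density={density}"

def expand_cases(**params):
    """Cartesian product of parameter value lists -> (case_id, input_text)
    pairs, one per MCNP-style job to be run."""
    names = sorted(params)
    cases = []
    for i, values in enumerate(itertools.product(*(params[n] for n in names))):
        subs = dict(zip(names, values))
        cases.append((f"case{i:03d}", TEMPLATE.format(**subs)))
    return cases

# 3 radii x 2 densities -> 6 independent calculations.
cases = expand_cases(radius=[1.0, 1.5, 2.0], density=[18.0, 18.7])
```

Each generated case would then be written to its own input file and submitted to the cluster; collecting tallies back from the output files is the mirror image of this step.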
Uncertainty analysis of flexible rotors considering fuzzy parameters and fuzzy-random parameters
Directory of Open Access Journals (Sweden)
Fabian Andres Lara-Molina
The components of flexible rotors are subjected to uncertainties. The main sources of uncertainties include the variation of mechanical properties. This contribution aims at analyzing the dynamics of flexible rotors under uncertain parameters modeled as fuzzy and fuzzy random variables. The uncertainty analysis encompasses the modeling of uncertain parameters and the numerical simulation of the corresponding flexible rotor model by using an approach based on fuzzy dynamic analysis. The numerical simulation is accomplished by mapping the fuzzy parameters of the deterministic flexible rotor model. Thereby, the flexible rotor is modeled by using both the Fuzzy Finite Element Method and the Fuzzy Stochastic Finite Element Method. Numerical simulations illustrate the methodology conveyed in terms of orbits and frequency response functions subject to uncertain parameters.
Parameter uncertainty in CGE Modeling of the environmental impacts of economic policies
Energy Technology Data Exchange (ETDEWEB)
Abler, D.G.; Shortle, J.S. [Agricultural Economics, Pennsylvania State University, University Park, PA (United States); Rodriguez, A.G. [University of Costa Rica, San Jose (Costa Rica)
1999-07-01
This study explores the role of parameter uncertainty in Computable General Equilibrium (CGE) modeling of the environmental impacts of macroeconomic and sectoral policies, using Costa Rica as a case study. A CGE model is constructed which includes eight environmental indicators covering deforestation, pesticides, overfishing, hazardous wastes, inorganic wastes, organic wastes, greenhouse gases, and air pollution. The parameters are treated as random variables drawn from prespecified distributions. Evaluation of each policy option consists of a Monte Carlo experiment. The impacts of the policy options on the environmental indicators are relatively robust to different parameter values, in spite of the wide range of parameter values employed. 33 refs.
Bilcke, Joke; Beutels, Philippe; Brisson, Marc; Jit, Mark
2011-01-01
Accounting for uncertainty is now a standard part of decision-analytic modeling and is recommended by many health technology agencies and published guidelines. However, the scope of such analyses is often limited, even though techniques have been developed for presenting the effects of methodological, structural, and parameter uncertainty on model results. To help bring these techniques into mainstream use, the authors present a step-by-step guide that offers an integrated approach to account for different kinds of uncertainty in the same model, along with a checklist for assessing the way in which uncertainty has been incorporated. The guide also addresses special situations such as when a source of uncertainty is difficult to parameterize, resources are limited for an ideal exploration of uncertainty, or evidence to inform the model is not available or not reliable. Techniques for identifying the sources of uncertainty that influence results most are also described. Besides guiding analysts, the guide and checklist may be useful to decision makers who need to assess how well uncertainty has been accounted for in a decision-analytic model before using the results to make a decision.
Parameter uncertainty in simulations of extreme precipitation and attribution studies.
Timmermans, B.; Collins, W. D.; O'Brien, T. A.; Risser, M. D.
2017-12-01
The attribution of extreme weather events, such as heavy rainfall, to anthropogenic influence involves the analysis of their probability in simulations of climate. However, the climate models used, such as the Community Atmosphere Model (CAM), employ approximate physics that gives rise to "parameter uncertainty"—uncertainty about the most accurate or optimal values of numerical parameters within the model. In particular, approximate parameterisations for convective processes are well known to be influential in the simulation of precipitation extremes. Towards examining the impact of this source of uncertainty on attribution studies, we investigate the importance of components—through their associated tuning parameters—of parameterisations relating to deep and shallow convection, and cloud and aerosol microphysics in CAM. We hypothesise that, as numerical resolution is increased, the change in the proportion of variance induced by perturbed parameters associated with the respective components is consistent with the decreasing applicability of the underlying hydrostatic assumptions. For example, the relative influence of deep convection should diminish as resolution approaches that at which convection can be resolved numerically (~10 km). We quantify the relationship between the relative proportion of variance induced and numerical resolution by conducting computer experiments that examine precipitation extremes over the contiguous U.S. In order to mitigate the enormous computational burden of running ensembles of long climate simulations, we use variable-resolution CAM and employ both extreme value theory and surrogate modelling techniques ("emulators"). We discuss the implications of the relationship between parameterised convective processes and resolution both in the context of attribution studies and progression towards models that fully resolve convection.
Parameter uncertainty effects on variance-based sensitivity analysis
International Nuclear Information System (INIS)
Yu, W.; Harris, T.J.
2009-01-01
In the past several years there has been considerable commercial and academic interest in methods for variance-based sensitivity analysis. The industrial focus is motivated by the importance of attributing variance contributions to input factors. A more complete understanding of these relationships enables companies to achieve goals related to quality, safety and asset utilization. In a number of applications, it is possible to distinguish between two types of input variables: regressive variables and model parameters. Regressive variables are those that can be influenced by process design or by a control strategy. With model parameters, there are typically no opportunities to directly influence their variability. In this paper, we propose a new method to perform sensitivity analysis through a partitioning of the input variables into these two groupings: regressive variables and model parameters. A sequential analysis is proposed, in which a sensitivity analysis is first performed with respect to the regressive variables. In the second step, the uncertainty effects arising from the model parameters are included. This strategy can be quite useful in understanding process variability and in developing strategies to reduce overall variability. When this method is used for nonlinear models which are linear in the parameters, analytical solutions can be utilized. In the more general case of models that are nonlinear in both the regressive variables and the parameters, either first-order approximations can be used, or numerically intensive methods must be used.
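The sequential two-step analysis described above can be illustrated by Monte Carlo on a toy model that is linear in its parameters: first compute the output variance with the parameters fixed at their means, then recompute it with parameter uncertainty included. The model form, distributions, and sample size below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model, linear in the parameters: y = a*x1 + b*x2**2.
# x1, x2 are regressive variables; a, b are uncertain model parameters.
def model(x1, x2, a, b):
    return a * x1 + b * x2**2

n = 200_000
x1 = rng.normal(0.0, 1.0, n)   # regressive-variable variability
x2 = rng.normal(0.0, 1.0, n)
a = rng.normal(2.0, 0.5, n)    # model-parameter uncertainty
b = rng.normal(1.0, 0.3, n)

# Step 1: variance from the regressive variables alone (parameters at means).
var_regressive = model(x1, x2, 2.0, 1.0).var()
# Step 2: total variance once parameter uncertainty is included.
var_total = model(x1, x2, a, b).var()
param_share = 1.0 - var_regressive / var_total   # extra share from parameters
```

The difference between the two variances is the contribution attributable to the model parameters, which is the quantity the sequential strategy isolates.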
Exploring the implication of climate process uncertainties within the Earth System Framework
Booth, B.; Lambert, F. H.; McNeal, D.; Harris, G.; Sexton, D.; Boulton, C.; Murphy, J.
2011-12-01
Uncertainties in the magnitude of future climate change have been a focus of a great deal of research. Much of the work with General Circulation Models has focused on the atmospheric response to changes in atmospheric composition, while other processes remain outside these frameworks. Here we introduce an ensemble of new simulations, based on an Earth System configuration of HadCM3C, designed to explore uncertainties in both physical (atmospheric, oceanic and aerosol physics) and carbon cycle processes, using perturbed parameter approaches previously used to explore atmospheric uncertainty. Framed in the context of the climate response to future changes in emissions, the resultant future projections represent significantly broader uncertainty than existing concentration-driven GCM assessments. The systematic nature of the ensemble design enables interactions between components to be explored. For example, we show how metrics of physical processes (such as climate sensitivity) are also influenced by carbon cycle parameters. The suggestion from this work is that carbon cycle processes contribute as much to uncertainty in future climate projections as the atmospheric feedbacks more conventionally explored. The broad range of climate responses explored within these ensembles, rather than representing a reason for inaction, provides information on lower-likelihood but high-impact changes. For example, while the majority of these simulations suggest that future Amazon forest extent is resilient to the projected climate changes, a small number simulate dramatic forest dieback. This ensemble represents a framework to examine these risks, breaking them down into physical processes (such as ocean temperature drivers of rainfall change) and vegetation processes (where uncertainties point towards requirements for new observational constraints).
International Nuclear Information System (INIS)
Greenspan, E.
1982-01-01
This chapter presents the mathematical basis for sensitivity functions, discusses their physical meaning and the information they contain, and clarifies a number of issues concerning their application, including the definition of group sensitivities, the selection of sensitivity functions to be included in the analysis, and limitations of sensitivity theory. Examines the theoretical foundation; criticality reset sensitivities; group sensitivities and uncertainties; selection of sensitivities included in the analysis; and other uses and limitations of sensitivity functions. Gives the theoretical formulation of sensitivity functions pertaining to "as-built" designs for performance parameters of the form of ratios of linear flux functionals (such as reaction-rate ratios), linear adjoint functionals, bilinear functions (such as reactivity worth ratios), and for reactor reactivity. Offers a consistent procedure for reducing energy-dependent or fine-group sensitivities and uncertainties to broad-group sensitivities and uncertainties. Provides illustrations of sensitivity functions as well as references to available compilations of such functions and of total sensitivities. Indicates limitations of sensitivity theory originating from the fact that this theory is based on a first-order perturbation theory.
Conjunctive use Management under Uncertainty in Aquifer Parameters
Directory of Open Access Journals (Sweden)
Mahmoud Mohammad Rezapour Tabari
2010-01-01
Conjunctive use operation policies play a vital role in the sustainability of water resources and their optimal allocation. To be realistic, the conditions of the real water resource system should be considered in the simulation and derivation of its operating rules. In this research, combined fuzzy logic and a direct search optimization technique are used to account for the uncertainty associated with parameters affecting groundwater table level fluctuations. These parameters include the specific yield and the inflow recharge to and outflow discharge from the aquifer, which are typically uncertain. A membership function is determined for each parameter using hydrogeologic and piezometric data. For each membership value (α-level cut), the corresponding intervals are determined. These intervals are considered as constraints on the membership value of the groundwater table level fluctuations in the optimization model. The process is repeated for the other α-level cuts to obtain the fuzzy number. For the uncertainty influencing the water demands, a conjunctive use model with water resources constraints is developed. Using this model, the priorities of the different zones and their optimal allocations are determined. The results show that the better the real conditions are reflected in the conjunctive use model, the more reliably the system can handle the water demands. The results also indicate that the proposed model yields more reliable allocations than static conventional models, and that it performs more desirably and practically in allocating supplies to water demands, as it duly includes the opinions of the decision-makers involved.
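The α-level-cut machinery used above can be sketched compactly: each fuzzy parameter is cut at a membership level to give an interval, and the response bounds over that interval become constraints in the optimization. The triangular fuzzy number, the response function, and the brute-force bound search below are illustrative assumptions, not the paper's actual aquifer model.

```python
# Minimal sketch of alpha-cut propagation for a triangular fuzzy parameter,
# standing in for a fuzzy aquifer property such as specific yield.
def alpha_cut(tri, alpha):
    """Interval of a triangular fuzzy number (low, mode, high) at level alpha."""
    low, mode, high = tri
    return (low + alpha * (mode - low), high - alpha * (high - mode))

def propagate(f, tri, alpha, n_grid=101):
    """Bounds of f over the alpha-cut interval (brute-force grid search)."""
    lo, hi = alpha_cut(tri, alpha)
    vals = [f(lo + (hi - lo) * k / (n_grid - 1)) for k in range(n_grid)]
    return min(vals), max(vals)

specific_yield = (0.10, 0.15, 0.25)     # hypothetical fuzzy parameter
# Water-table rise for a given recharge depth per unit area: dh = R / Sy.
head_rise = lambda sy: 0.05 / sy
lo, hi = propagate(head_rise, specific_yield, alpha=0.5)
```

Repeating this over several α levels (0, 0.25, ..., 1) stacks the output intervals into the fuzzy number for the groundwater table fluctuation, exactly as the abstract describes.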
Hysteresis and uncertainty in soil water-retention curve parameters
Likos, William J.; Lu, Ning; Godt, Jonathan W.
2014-01-01
Accurate estimates of soil hydraulic parameters representing wetting and drying paths are required for predicting hydraulic and mechanical responses in a large number of applications. A comprehensive suite of laboratory experiments was conducted to measure hysteretic soil-water characteristic curves (SWCCs) representing a wide range of soil types. Results were used to quantitatively assess differences and uncertainty in three simplifications frequently adopted to estimate wetting-path SWCC parameters from more easily measured drying curves. They are the following: (1) αw=2αd, (2) nw=nd, and (3) θws=θds, where α, n, and θs are fitting parameters entering van Genuchten’s commonly adopted SWCC model, and the superscripts w and d indicate wetting and drying paths, respectively. The average ratio αw/αd for the data set was 2.24±1.25. Nominally cohesive soils had a lower αw/αd ratio (1.73±0.94) than nominally cohesionless soils (3.14±1.27). The average nw/nd ratio was 1.01±0.11 with no significant dependency on soil type, thus confirming the nw=nd simplification for a wider range of soil types than previously available. Water content at zero suction during wetting (θws) was consistently less than during drying (θds) owing to air entrapment. The θws/θds ratio averaged 0.85±0.10 and was comparable for nominally cohesive (0.87±0.11) and cohesionless (0.81±0.08) soils. Regression statistics are provided to quantitatively account for uncertainty in estimating hysteretic retention curves. Practical consequences are demonstrated for two case studies.
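The three simplifications assessed in the study translate directly into code: a wetting-path van Genuchten curve can be built from a drying-path fit using αw = 2αd, nw = nd, and a reduced saturated water content for air entrapment. In the sketch below the drying parameters are invented for illustration; the 0.85 factor on θs is the study's reported average θws/θds ratio, used here as a point estimate.

```python
# Sketch: estimate a wetting-path soil-water retention curve from a
# drying-path van Genuchten fit, using the simplifications assessed above.
def van_genuchten(psi, theta_r, theta_s, alpha, n):
    """Volumetric water content at suction psi (van Genuchten, m = 1 - 1/n)."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * psi) ** n) ** m

drying = dict(theta_r=0.05, theta_s=0.40, alpha=0.02, n=1.8)  # example fit
wetting = dict(drying,
               alpha=2 * drying["alpha"],        # alpha_w = 2 * alpha_d
               theta_s=0.85 * drying["theta_s"]) # theta_s^w ~ 0.85 theta_s^d

theta_dry = van_genuchten(100.0, **drying)   # drying path at psi = 100
theta_wet = van_genuchten(100.0, **wetting)  # estimated wetting path
```

At any given suction the estimated wetting curve holds less water than the drying curve, which is the hysteresis the regression statistics in the study quantify.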
International Nuclear Information System (INIS)
Ahn, Kwang Il; Yang, Joon Eon
2003-01-01
In the risk and reliability analysis of complex technological systems, the primary concern of formal uncertainty analysis is to understand why uncertainties arise, and to evaluate how they impact the results of the analysis. In recent times, many uncertainty analyses have focused on the parameters of risk and reliability analysis models, whose values are uncertain in an aleatory or an epistemic way. As the field of parametric uncertainty analysis matures, however, more attention is being paid to the explicit treatment of uncertainties that are addressed in the predictive model itself, as well as the accuracy of the predictive model. The essential steps for evaluating the impacts of these model uncertainties in the presence of parameter uncertainties are to determine rigorously the various sources of uncertainty to be addressed in the underlying model itself and in turn in the model parameters, based on our state of knowledge and relevant evidence. Answering clearly the question of how to characterize and treat explicitly the foregoing different sources of uncertainty is particularly important for practical aspects such as risk and reliability optimization of systems, as well as for more transparent risk information and decision-making under various uncertainties. The main purpose of this paper is to provide practical guidance for quantitatively treating various model uncertainties that would often be encountered in the risk and reliability modeling process of complex technological systems.
Uncertainty In Measuring Noise Parameters Of a Communication Receiver
International Nuclear Information System (INIS)
Korcz, Karol; Palczynska, Beata; Spiralski, Ludwik
2005-01-01
The paper presents a method of assessing the uncertainty in measuring the usable sensitivity Es of a communication receiver. The influence of the partial uncertainties of measuring the noise factor F and the energy pass band Δf of the receiver on the combined standard uncertainty level is analyzed. A method to assess the uncertainty in measuring the noise factor on the basis of the systematic component of uncertainty, assuming that the main source of measurement uncertainty is the hardware of the measuring system, is proposed. The assessment of the uncertainty in measuring the pass band of the receiver is determined under the assumption that the input quantities of the measurement equation are not correlated; they are successive, discrete values of the spectral power density of the noise at the output of the receiver. The results of the analyses of the particular uncertainty components of measuring the sensitivity, carried out for a typical communication receiver, are presented.
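When the input quantities are uncorrelated, as assumed above, the combined standard uncertainty follows the usual GUM-style root-sum-of-squares of sensitivity-weighted components. The sketch below shows that combination rule; the sensitivity coefficients and component values are illustrative, not the paper's measured figures.

```python
import math

def combined_standard_uncertainty(components):
    """GUM-style root-sum-of-squares for uncorrelated inputs.
    components: iterable of (sensitivity_coefficient, standard_uncertainty)."""
    return math.sqrt(sum((c * u) ** 2 for c, u in components))

# Hypothetical contributions to the usable-sensitivity uncertainty, in dB.
u_c = combined_standard_uncertainty([
    (1.0, 0.4),   # noise-factor (F) contribution
    (0.5, 0.6),   # energy pass band (delta f) contribution
])
```

If the inputs were correlated, cross terms with the covariance would have to be added, which is precisely why the paper's no-correlation assumption simplifies the pass-band assessment.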
Separating the contributions of variability and parameter uncertainty in probability distributions
International Nuclear Information System (INIS)
Sankararaman, S.; Mahadevan, S.
2013-01-01
This paper proposes a computational methodology to quantify the individual contributions of variability and distribution parameter uncertainty to the overall uncertainty in a random variable. Even if the distribution type is assumed to be known, sparse or imprecise data lead to uncertainty about the distribution parameters. If uncertain distribution parameters are represented using probability distributions, then the random variable can be represented using a family of probability distributions. The family-of-distributions concept has been used to obtain qualitative, graphical inference of the contributions of natural variability and distribution parameter uncertainty. The proposed methodology provides quantitative estimates of the contributions of the two types of uncertainty. Using variance-based global sensitivity analysis, the contributions of variability and distribution parameter uncertainty to the overall uncertainty are computed. The proposed method is developed at two different levels: first, at the level of a variable whose distribution parameters are uncertain, and second, at the level of a model output whose inputs have uncertain distribution parameters.
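The decomposition underlying this separation is the law of total variance: Var(X) = E[Var(X|θ)] + Var(E[X|θ]), where the first term is natural variability and the second is distribution-parameter uncertainty. A nested Monte Carlo makes this concrete; the specific distributions below (a normal variable with an uncertain mean) are illustrative assumptions, not the paper's example.

```python
import numpy as np

rng = np.random.default_rng(7)

# X ~ Normal(mu, 1) with an uncertain mean mu ~ Normal(0, 0.5).
# Outer loop: sample the distribution parameter; inner loop: sample X.
n_outer, n_inner = 2000, 200
mu = rng.normal(0.0, 0.5, n_outer)                    # parameter uncertainty
x = rng.normal(mu[:, None], 1.0, (n_outer, n_inner))  # natural variability

within = x.var(axis=1).mean()   # E[Var(X | mu)] -> variability contribution
between = x.mean(axis=1).var()  # Var(E[X | mu]) -> parameter contribution
total = x.var()                 # overall uncertainty
```

For equal-sized inner samples the two pieces sum exactly to the total sample variance, giving the quantitative split the methodology formalizes with variance-based sensitivity indices.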
International Nuclear Information System (INIS)
Chungcharoen, E.
1997-01-01
A model was developed to help determine the future development of hydrocarbon reserves. The uncertainties of geological parameters were incorporated into the model in an effort to provide an assessment of the distribution of total hydrocarbon discoveries that are expected to be recovered as a result of exploration activity. The economic parameters were also incorporated into the model in an effort to determine the economic worth of multiple-well exploration activity. The first part of this study included the geological parameters in the initial field size distribution and the number-of-fields distribution. Dry hole data were also considered to reflect the exploration risk. The distribution of total hydrocarbon discoveries for a selected number of exploratory wells was determined. The second part of the study included economic parameters such as the price of oil and gas and the cost of exploration, development and production. The distributions of the number of discoveries and of total hydrocarbon discoveries were combined to produce a probability distribution of the net present value of a proposed exploration program. The offshore Nova Scotia Shelf basin was chosen for testing the methodology. Several scenarios involving changes in economic parameters were shown. This methodology could help in determining future development programs for hydrocarbon reserves. It can also help governments in policy-making decisions regarding tax and royalty regimes for exploration programs.
Quantifying Key Climate Parameter Uncertainties Using an Earth System Model with a Dynamic 3D Ocean
Olson, R.; Sriver, R. L.; Goes, M. P.; Urban, N.; Matthews, D.; Haran, M.; Keller, K.
2011-12-01
Climate projections hinge critically on uncertain climate model parameters such as climate sensitivity, vertical ocean diffusivity and anthropogenic sulfate aerosol forcings. Climate sensitivity is defined as the equilibrium global mean temperature response to a doubling of atmospheric CO2 concentrations. Vertical ocean diffusivity parameterizes sub-grid-scale ocean vertical mixing processes. These parameters are typically estimated using Earth System Models of Intermediate Complexity (EMICs) that lack a full 3D representation of the oceans, thereby neglecting the effects of mixing on ocean dynamics and meridional overturning. We improve on these studies by employing an EMIC with a dynamic 3D ocean model to estimate these parameters. We carry out historical climate simulations with the University of Victoria Earth System Climate Model (UVic ESCM), varying parameters that affect climate sensitivity, vertical ocean mixing, and the effects of anthropogenic sulfate aerosols. We use a Bayesian approach whereby the likelihood of each parameter combination depends on how well the model simulates surface air temperature and upper ocean heat content. We use a Gaussian process emulator to interpolate the model output to an arbitrary parameter setting. We use a Markov chain Monte Carlo method to estimate the posterior probability distribution function (pdf) of these parameters. We explore the sensitivity of the results to prior assumptions about the parameters. In addition, we estimate the relative skill of different observations to constrain the parameters. We quantify the uncertainty in parameter estimates stemming from climate variability, model and observational errors. We explore the sensitivity of key decision-relevant climate projections to these parameters. We find that climate sensitivity and vertical ocean diffusivity estimates are consistent with previously published results. The climate sensitivity pdf is strongly affected by the prior assumptions, and by the scaling
Improving weather predictability by including land-surface model parameter uncertainty
Orth, Rene; Dutra, Emanuel; Pappenberger, Florian
2016-04-01
The land surface forms an important component of Earth system models and interacts nonlinearly with other parts such as the ocean and atmosphere. To capture the complex and heterogeneous hydrology of the land surface, land surface models include a large number of parameters that affect the coupling to other components of the Earth system model. Focusing on ECMWF's land-surface model HTESSEL, we present in this study a comprehensive parameter sensitivity evaluation using multiple observational datasets in Europe. We select 6 poorly constrained effective parameters (surface runoff effective depth, skin conductivity, minimum stomatal resistance, maximum interception, soil moisture stress function shape, total soil depth) and explore the sensitivity of model outputs such as soil moisture, evapotranspiration and runoff to these parameters using uncoupled simulations and coupled seasonal forecasts. Additionally, we investigate the possibility of constructing ensembles from the multiple land surface parameters. In the uncoupled runs we find that minimum stomatal resistance and total soil depth have the most influence on model performance. Forecast skill scores are moreover sensitive to the same parameters as HTESSEL performance in the uncoupled analysis. We demonstrate the robustness of our findings by comparing multiple best-performing parameter sets and multiple randomly chosen parameter sets. We find better temperature and precipitation forecast skill with the best-performing parameter perturbations, demonstrating representativeness of model performance across uncoupled (and hence less computationally demanding) and coupled settings. Finally, we construct ensemble forecasts from ensemble members derived with different best-performing parameterizations of HTESSEL. This incorporation of parameter uncertainty in the ensemble generation yields an increase in forecast skill, even beyond the skill of the default system.
Model-based verification method for solving the parameter uncertainty in the train control system
International Nuclear Information System (INIS)
Cheng, Ruijun; Zhou, Jin; Chen, Dewang; Song, Yongduan
2016-01-01
This paper presents a parameter analysis method to solve the parameter uncertainty problem for hybrid systems and to explore the correlation of key parameters in distributed control systems. To improve the reusability of the control model, the proposed approach supports obtaining the constraint sets of all uncertain parameters in the abstract linear hybrid automata (LHA) model while satisfying the safety requirements of the train control system. Then, in order to mitigate the state space explosion problem, an online verification method is proposed to monitor the operating status of high-speed trains, reflecting the real-time property of the train control system. Furthermore, we construct LHA formal models of the train tracking model and the movement authority (MA) generation process as cases to illustrate the effectiveness and efficiency of the proposed method. In the first case, we obtain the constraint sets of uncertain parameters that avoid collision between trains. In the second case, the correlation of the position report cycle and the MA generation cycle is analyzed under both normal and abnormal conditions influenced by the packet-loss factor. Finally, considering the stochastic characterization of time distributions and the real-time feature of the moving block control system, the transient probabilities of the wireless communication process are obtained by stochastic time Petri nets. - Highlights: • We solve the parameter uncertainty problem by using a model-based method. • We acquire the parameter constraint sets by verifying linear hybrid automata models. • Online verification algorithms are designed to monitor high-speed trains. • We analyze the correlation of key parameters and uncritical parameters. • The transient probabilities are obtained by using reliability analysis.
Uncertainty estimation of core safety parameters using cross-correlations of covariance matrix
International Nuclear Information System (INIS)
Yamamoto, A.; Yasue, Y.; Endo, T.; Kodama, Y.; Ohoka, Y.; Tatsumi, M.
2012-01-01
An uncertainty estimation method is proposed for core safety parameters for which measurement values are not obtained. We empirically observe correlations among the prediction errors of core safety parameters, e.g., a correlation between the control rod worth and the relative power of the assembly at the corresponding position. Correlations of uncertainties among core safety parameters are theoretically estimated using the covariance of cross sections and the sensitivity coefficients of the core parameters. The estimated correlations among core safety parameters are verified through direct Monte Carlo sampling. Once the correlation of uncertainties among core safety parameters is known, we can estimate the uncertainty of a safety parameter for which no measurement value is available. Furthermore, the correlations can also be used to reduce the uncertainties of core safety parameters. (authors)
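The theoretical estimation of correlated uncertainties from cross-section covariances and sensitivity coefficients follows the standard "sandwich" rule C = S M Sᵀ. A minimal sketch in Python; the sensitivity and covariance values below are illustrative, not taken from the paper:

```python
import numpy as np

# Hypothetical sensitivity coefficients of two core parameters
# (e.g. control rod worth, assembly relative power) to three cross sections.
S = np.array([[0.8, 0.1, 0.3],
              [0.6, 0.2, 0.4]])
# Assumed relative covariance matrix of those cross sections.
M = np.array([[4.0, 1.0, 0.0],
              [1.0, 2.0, 0.5],
              [0.0, 0.5, 1.0]]) * 1e-4

C = S @ M @ S.T                         # covariance of the two core parameters
sigma = np.sqrt(np.diag(C))             # their individual uncertainties
corr = C[0, 1] / (sigma[0] * sigma[1])  # cross-correlation between them

# In the ideal limit, knowing parameter 0 exactly reduces the uncertainty of
# the unmeasured parameter 1 by the factor sqrt(1 - corr**2).
sigma1_reduced = sigma[1] * np.sqrt(1.0 - corr ** 2)
```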
Determination of a PWR key neutron parameters uncertainties and conformity studies applications
International Nuclear Information System (INIS)
Bernard, D.
2002-01-01
The aim of this thesis was to evaluate uncertainties of key neutron parameters of slab reactors. The uncertainty sources have several origins: technological, for fabrication parameters, and physical, for nuclear data. First, each contribution to the uncertainty is calculated; finally, an uncertainty factor is associated with each key slab parameter, such as reactivity, isothermal reactivity coefficient, control rod efficiency, power form factor before irradiation, and lifetime. These uncertainty factors were computed by Generalized Perturbation Theory at step 0 and by direct calculations for irradiation problems. One application to neutronics conformity concerned the adjustment of target precisions for fabrication and nuclear data. Both statistical (uncertainties) and deterministic (deviations) approaches were studied. The uncertainties of the key neutron parameters of the slab were thereby reduced, and the nuclear performance was optimised. (author)
Booth, B.; Collins, M.; Harris, G.; Chris, H.; Jones, C.
2007-12-01
A number of recent studies have highlighted the risk of abrupt dieback of the Amazon Rain Forest as the result of climate changes over the next century. The 2005 Amazon drought brought wider acceptance of the idea that climate drivers will play a significant role in future rain forest stability, yet that stability is still subject to a considerable degree of uncertainty. We present a study which seeks to explore some of the underlying uncertainties, both in the climate drivers of dieback and in the terrestrial land surface formulation used in GCMs. We adopt a perturbed physics approach which forms part of a wider project covered in an accompanying abstract submitted to the multi-model ensembles session. We first couple the same interactive land surface model to a number of different versions of the Hadley Centre atmosphere-ocean model that exhibit a wide range of different physical climate responses in the future. The rainforest extent is shown to collapse in all model cases, but the timing of the collapse is dependent on the magnitude of the climate drivers. In the second part, we explore uncertainties in the terrestrial land surface model using the perturbed physics ensemble approach, perturbing uncertain parameters which have an important role in the vegetation and soil response. Contrasting the two approaches enables a greater understanding of the relative importance of climatic and land surface model uncertainties in Amazon dieback.
Inherent uncertainties in meteorological parameters for wind turbine design
Doran, J. C.
1982-01-01
Major difficulties associated with meteorological measurements, such as the inability to duplicate experimental conditions from one day to the next, are discussed. This lack of consistency is compounded by the stochastic nature of many of the meteorological variables of interest. Moreover, simple relationships derived in one location may be significantly altered by topographical or synoptic differences encountered at another. The effect of such factors is a degree of inherent uncertainty if an attempt is made to describe the atmosphere in terms of universal laws. Some of these uncertainties and their causes are examined, examples are presented, and some implications for wind turbine design are suggested.
International Nuclear Information System (INIS)
Hofer, E.; Hoffman, F.O.
1987-02-01
The uncertainty analysis of model predictions has to discriminate between two fundamentally different types of uncertainty. The presence of stochastic variability (Type 1 uncertainty) necessitates the use of a probabilistic model instead of the much simpler deterministic one. Lack of knowledge (Type 2 uncertainty), however, applies to deterministic as well as to probabilistic model predictions and often dominates over uncertainties of Type 1. The term "probability" is interpreted differently in the probabilistic analysis of either type of uncertainty. After these distinctions have been explained, the discussion centers on the propagation of parameter uncertainties through the model, the derivation of quantitative uncertainty statements for model predictions, and the presentation and interpretation of the results of a Type 2 uncertainty analysis. Various alternative approaches are compared for a very simple deterministic model.
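The separation of the two uncertainty types is commonly implemented as a nested (two-loop) Monte Carlo: an outer loop samples the lack-of-knowledge (Type 2) parameters, and an inner loop samples the stochastic variability (Type 1). A minimal sketch with assumed, purely illustrative distributions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Outer loop: Type 2 (lack-of-knowledge) uncertainty in a model parameter.
# Inner loop: Type 1 (stochastic variability) across the population.
n_outer, n_inner = 200, 500
pop_means = []
for _ in range(n_outer):
    lam = rng.lognormal(mean=0.0, sigma=0.3)          # uncertain parameter
    doses = rng.exponential(scale=lam, size=n_inner)  # stochastic variability
    pop_means.append(doses.mean())

pop_means = np.array(pop_means)
# The spread of pop_means expresses Type 2 uncertainty about a quantity
# (the population mean) that a deterministic model would state as one number.
lo, hi = np.percentile(pop_means, [5, 95])
```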
Exploring Uncertainty Perception as a Driver of Design Activity
DEFF Research Database (Denmark)
Cash, Philip; Kreye, Melanie
2018-01-01
, and representation action. We bring together prior works on uncertainty perception in the design and management literatures to derive three contributions. First, we describe how uncertainty perception is associated with activity progression, linking all three core actions. Second, we identify characteristic patterns...
Gomez, Jose Alfonso; Owens, Phillip N.; Koiter, Alex J.; Lobb, David
2016-04-01
One of the major sources of uncertainty in attributing sediment sources in fingerprinting studies is the uncertainty in determining the concentrations of the elements used in the mixing model due to the variability of the concentrations of these elements in the source materials (e.g., Kraushaar et al., 2015). The uncertainty in determining the "true" concentration of a given element in each one of the source areas depends on several factors, among them the spatial variability of that element, the sampling procedure and sampling density. Researchers have limited control over these factors, and usually sampling density tends to be sparse, limited by time and the resources available. Monte Carlo analysis has been used regularly in fingerprinting studies to explore the probable solutions within the measured variability of the elements in the source areas, providing an appraisal of the probability of the different solutions (e.g., Collins et al., 2012). This problem can be considered analogous to the propagation of uncertainty in hydrologic models due to uncertainty in the determination of the values of the model parameters, and there are many examples of Monte Carlo analysis of this uncertainty (e.g., Freeze, 1980; Gómez et al., 2001). Some of these model analyses rely on the simulation of "virtual" situations that were calibrated from parameter values found in the literature, with the purpose of providing insight about the response of the model to different configurations of input parameters. This approach - evaluating the answer for a "virtual" problem whose solution could be known in advance - might be useful in evaluating the propagation of uncertainty in mixing models in sediment fingerprinting studies. In this communication, we present the preliminary results of an on-going study evaluating the effect of variability of element concentrations in source materials, sampling density, and the number of elements included in the mixing models. For this study a virtual
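A Monte Carlo exploration of mixing-model solutions of the kind described can be sketched as follows. The two-source, two-tracer setup and all concentration values are hypothetical, chosen only to illustrate how source variability propagates into the apportionment:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical tracer means and standard deviations for two source areas
# (rows: sources, columns: tracers), and a measured sediment mixture.
src_mean = np.array([[10.0, 50.0],
                     [30.0, 20.0]])
src_sd = np.array([[2.0, 5.0],
                   [3.0, 4.0]])
mixture = np.array([18.0, 38.0])

props = []
for _ in range(5000):
    s = rng.normal(src_mean, src_sd)   # resample source concentrations
    # Least-squares solution of mixture = p*s0 + (1-p)*s1, clipped to [0, 1].
    d = s[0] - s[1]
    p = np.dot(mixture - s[1], d) / np.dot(d, d)
    props.append(np.clip(p, 0.0, 1.0))

props = np.array(props)
p_med = np.median(props)   # central estimate of the source-0 contribution
```

The spread of `props` is the appraisal of solution probability that Monte Carlo analysis provides in fingerprinting studies.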
Chowdhury, S.; Sharma, A.
2005-12-01
present. SIMEX is based on the theory that the trend in alternate parameters can be extrapolated back to the notional error-free zone. We illustrate the utility of SIMEX in a synthetic rainfall-runoff modelling scenario and in an application studying the dependence of uncertain distributed sea surface temperature anomalies on an indicator of the El Nino Southern Oscillation, the Southern Oscillation Index (SOI). The errors in rainfall data and their effect are explored using the Sacramento rainfall-runoff model. The rainfall uncertainty is assumed to be multiplicative and temporally invariant. The model used to relate the sea surface temperature anomalies (SSTA) to the SOI is assumed to be of a linear form. The nature of the uncertainty in the SSTA is additive and varies with time. The SIMEX framework allows assessment of the relationship between the error-free inputs and the response. Cook, J.R., Stefanski, L. A., Simulation-Extrapolation Estimation in Parametric Measurement Error Models, Journal of the American Statistical Association, 89 (428), 1314-1328, 1994.
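The SIMEX idea, adding successively larger known measurement error (scaled by λ) and extrapolating the parameter trend back to the notional error-free case at λ = -1, can be sketched for a simple linear errors-in-variables model. All data below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(3)

# True relation y = 2*x + noise, but x is observed with additive error.
n = 2000
x_true = rng.normal(0.0, 1.0, n)
y = 2.0 * x_true + rng.normal(0.0, 0.1, n)
sigma_u = 0.5                                   # known measurement-error std
x_obs = x_true + rng.normal(0.0, sigma_u, n)

# SIMEX: add extra error of variance lam*sigma_u**2 and track the fitted slope.
lams = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
slopes = []
for lam in lams:
    extra = np.sqrt(lam) * sigma_u
    b = np.mean([np.polyfit(x_obs + rng.normal(0.0, extra, n), y, 1)[0]
                 for _ in range(20)])
    slopes.append(b)

# Extrapolate the trend back to lam = -1 (the notional error-free zone).
coef = np.polyfit(lams, slopes, 2)
slope_simex = np.polyval(coef, -1.0)
```

The naive slope (`slopes[0]`) is attenuated toward zero by the measurement error; the extrapolated `slope_simex` recovers most of the bias.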
Parameter sensitivity and uncertainty of the forest carbon flux model FORUG : a Monte Carlo analysis
Energy Technology Data Exchange (ETDEWEB)
Verbeeck, H.; Samson, R.; Lemeur, R. [Ghent Univ., Ghent (Belgium). Laboratory of Plant Ecology; Verdonck, F. [Ghent Univ., Ghent (Belgium). Dept. of Applied Mathematics, Biometrics and Process Control
2006-06-15
The FORUG model is a multi-layer process-based model that simulates carbon dioxide (CO{sub 2}) and water exchange between forest stands and the atmosphere. The main model outputs are net ecosystem exchange (NEE), total ecosystem respiration (TER), gross primary production (GPP) and evapotranspiration. This study used a sensitivity analysis to identify the parameters contributing to NEE uncertainty in the FORUG model. The aim was to determine whether it is necessary to estimate the uncertainty of all parameters of a model to determine overall output uncertainty. Data used in the study were the meteorological and flux data of beech trees in Hesse. The Monte Carlo method, in combination with multiple linear regression, was used to rank parameters by sensitivity and uncertainty. Simulations were run in which parameters were assigned probability distributions and the effect of variance in the parameters on the output distribution was assessed. The uncertainty of the output for NEE was estimated. Based on the arbitrary uncertainty of 10 key parameters, a standard deviation of 0.88 Mg C per year was found for NEE, equal to 24 per cent of its mean value. The sensitivity analysis showed that the overall output uncertainty of the FORUG model could be determined by accounting for only a few key parameters, which were identified as corresponding to critical parameters in the literature. It was concluded that the 10 most important parameters determined more than 90 per cent of the output uncertainty. High-ranking parameters included those governing soil respiration, photosynthesis, and crown architecture. It was concluded that the Monte Carlo technique is a useful tool for ranking the uncertainty of parameters of process-based forest flux models. 48 refs., 2 tabs., 2 figs.
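The Monte Carlo ranking via multiple linear regression can be sketched with standardized regression coefficients (SRC) on a toy model; the model form and coefficients below are illustrative stand-ins, not FORUG:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy flux model: the output depends strongly on p0 and barely on p2.
def model(p):
    return 5.0 * p[:, 0] + 1.0 * p[:, 1] + 0.1 * p[:, 2]

n = 1000
params = rng.normal(0.0, 1.0, size=(n, 3))   # sampled parameter distributions
out = model(params) + rng.normal(0.0, 0.1, n)

# Standardized regression coefficients rank parameter contributions to the
# output variance; |SRC| close to 1 marks a key parameter.
X = (params - params.mean(axis=0)) / params.std(axis=0)
y = (out - out.mean()) / out.std()
src, *_ = np.linalg.lstsq(X, y, rcond=None)
ranking = np.argsort(-np.abs(src))           # most influential first
```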
Uncertainty avoidance and the exploration-exploitation trade-off
Broekhuizen, Thijs; Giarratana, Marco S.; Torres, Anna
2017-01-01
Purpose - This study aims to investigate how a firm's uncertainty avoidance - as indicated by the headquarters' national culture - impacts firm performance by affecting exploratory (product innovation) and exploitative (brand trademark protection) activities. It aims to show that firms characterized
Nuclear data adjustment methodology utilizing resonance parameter sensitivities and uncertainties
International Nuclear Information System (INIS)
Broadhead, B.L.
1983-01-01
This work presents the development and demonstration of a Nuclear Data Adjustment Method that allows inclusion of both energy and spatial self-shielding in the adjustment procedure. The resulting adjustments are for the basic parameters (i.e. resonance parameters) in the resonance regions and for the group cross sections elsewhere. The majority of this development effort concerns the production of resonance parameter sensitivity information, which provides the linkage between the responses of interest and the basic parameters. The resonance parameter sensitivity methodology developed herein usually provides accurate results when compared to direct recalculations using existing and well-known cross section processing codes. However, it has been shown in several cases that self-shielded cross sections can be very non-linear functions of the basic parameters. For this reason caution must be used in any study which assumes that a linear relationship exists between a given self-shielded group cross section and its corresponding basic data parameters. The study also has pointed out the need for more approximate techniques which will allow the required sensitivity information to be obtained in a more cost-effective manner.
A Tool for Parameter-space Explorations
Murase, Yohsuke; Uchitane, Takeshi; Ito, Nobuyasu
A software tool for managing simulation jobs and results, named "OACIS", is presented. It controls a large number of simulation jobs executed on various remote servers, keeps the results in an organized way, and manages the analyses on these results. The software has a web-browser front end, and users can easily submit various jobs to appropriate remote hosts from a web browser. After these jobs are finished, all the result files are automatically downloaded from the computational hosts and stored in a traceable way, together with logs of the date, host, and elapsed time of the jobs. Some visualization functions are also provided so that users can easily grasp an overview of the results distributed in a high-dimensional parameter space. Thus, OACIS is especially beneficial for complex simulation models with many parameters, for which extensive parameter searches are required. Using the API of OACIS, it is easy to write code that automates parameter selection depending on previous simulation results. A few examples of automated parameter selection are also demonstrated.
Exploring cosmic origins with CORE: Cosmological parameters
Di Valentino, E.; Brinckmann, T.; Gerbino, M.; Poulin, V.; Bouchet, F. R.; Lesgourgues, J.; Melchiorri, A.; Chluba, J.; Clesse, S.; Delabrouille, J.; Dvorkin, C.; Forastieri, F.; Galli, S.; Hooper, D. C.; Lattanzi, M.; Martins, C. J. A. P.; Salvati, L.; Cabass, G.; Caputo, A.; Giusarma, E.; Hivon, E.; Natoli, P.; Pagano, L.; Paradiso, S.; Rubiño-Martin, J. A.; Achúcarro, A.; Ade, P.; Allison, R.; Arroja, F.; Ashdown, M.; Ballardini, M.; Banday, A. J.; Banerji, R.; Bartolo, N.; Bartlett, J. G.; Basak, S.; Baumann, D.; de Bernardis, P.; Bersanelli, M.; Bonaldi, A.; Bonato, M.; Borrill, J.; Boulanger, F.; Bucher, M.; Burigana, C.; Buzzelli, A.; Cai, Z.-Y.; Calvo, M.; Carvalho, C. S.; Castellano, G.; Challinor, A.; Charles, I.; Colantoni, I.; Coppolecchia, A.; Crook, M.; D'Alessandro, G.; De Petris, M.; De Zotti, G.; Diego, J. M.; Errard, J.; Feeney, S.; Fernandez-Cobos, R.; Ferraro, S.; Finelli, F.; de Gasperis, G.; Génova-Santos, R. T.; González-Nuevo, J.; Grandis, S.; Greenslade, J.; Hagstotz, S.; Hanany, S.; Handley, W.; Hazra, D. K.; Hernández-Monteagudo, C.; Hervias-Caimapo, C.; Hills, M.; Kiiveri, K.; Kisner, T.; Kitching, T.; Kunz, M.; Kurki-Suonio, H.; Lamagna, L.; Lasenby, A.; Lewis, A.; Liguori, M.; Lindholm, V.; Lopez-Caniego, M.; Luzzi, G.; Maffei, B.; Martin, S.; Martinez-Gonzalez, E.; Masi, S.; Matarrese, S.; McCarthy, D.; Melin, J.-B.; Mohr, J. J.; Molinari, D.; Monfardini, A.; Negrello, M.; Notari, A.; Paiella, A.; Paoletti, D.; Patanchon, G.; Piacentini, F.; Piat, M.; Pisano, G.; Polastri, L.; Polenta, G.; Pollo, A.; Quartin, M.; Remazeilles, M.; Roman, M.; Ringeval, C.; Tartari, A.; Tomasi, M.; Tramonte, D.; Trappe, N.; Trombetti, T.; Tucker, C.; Väliviita, J.; van de Weygaert, R.; Van Tent, B.; Vennin, V.; Vermeulen, G.; Vielva, P.; Vittorio, N.; Young, K.; Zannoni, M.
2018-04-01
We forecast the main cosmological parameter constraints achievable with the CORE space mission which is dedicated to mapping the polarisation of the Cosmic Microwave Background (CMB). CORE was recently submitted in response to ESA's fifth call for medium-sized mission proposals (M5). Here we report the results from our pre-submission study of the impact of various instrumental options, in particular the telescope size and sensitivity level, and review the great, transformative potential of the mission as proposed. Specifically, we assess the impact on a broad range of fundamental parameters of our Universe as a function of the expected CMB characteristics, with other papers in the series focusing on controlling astrophysical and instrumental residual systematics. In this paper, we assume that only a few central CORE frequency channels are usable for our purpose, all others being devoted to the cleaning of astrophysical contaminants. On the theoretical side, we assume ΛCDM as our general framework and quantify the improvement provided by CORE over the current constraints from the Planck 2015 release. We also study the joint sensitivity of CORE and of future Baryon Acoustic Oscillation and Large Scale Structure experiments like DESI and Euclid. Specific constraints on the physics of inflation are presented in another paper of the series. In addition to the six parameters of the base ΛCDM, which describe the matter content of a spatially flat universe with adiabatic and scalar primordial fluctuations from inflation, we derive the precision achievable on parameters like those describing curvature, neutrino physics, extra light relics, primordial helium abundance, dark matter annihilation, recombination physics, variation of fundamental constants, dark energy, modified gravity, reionization and cosmic birefringence. In addition to assessing the improvement on the precision of individual parameters, we also forecast the post-CORE overall reduction of the allowed
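Parameter forecasts of this kind typically rest on a Fisher-matrix analysis: marginalized errors follow from inverting F_ij = Σ_k (∂μ_k/∂θ_i)(∂μ_k/∂θ_j)/σ_k². A minimal numerical sketch with hypothetical derivatives and noise levels (not CORE numbers):

```python
import numpy as np

# Hypothetical derivatives of three observables with respect to two
# parameters, and the observables' noise levels (all values illustrative).
derivs = np.array([[1.0, 0.5],
                   [0.2, 1.0],
                   [0.7, 0.3]])
sigma = np.array([0.05, 0.10, 0.08])

F = derivs.T @ np.diag(1.0 / sigma ** 2) @ derivs   # Fisher matrix
cov = np.linalg.inv(F)                              # forecast covariance
marg_err = np.sqrt(np.diag(cov))       # marginalized 1-sigma errors
cond_err = 1.0 / np.sqrt(np.diag(F))   # errors with the other parameter fixed
```

The marginalized error is never smaller than the conditional one; the gap between the two reflects parameter degeneracies, which joint datasets (e.g. CMB plus BAO) are designed to break.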
International Nuclear Information System (INIS)
Hüser, Dorothee; Thomsen-Schmidt, Peter; Hüser, Jonathan; Rief, Sebastian; Seewig, Jörg
2016-01-01
Roughness parameters that characterize contacting surfaces with regard to friction and wear are commonly stated without uncertainties, or with an uncertainty taking into account only a very limited number of aspects, such as the repeatability or reproducibility (homogeneity) of the specimen. This makes it difficult to discriminate between different values of single roughness parameters. Therefore uncertainty assessment methods are required that take all relevant aspects into account. In the literature this is rarely done, and examples specific to parameters used in friction and wear are not yet given. We propose a procedure to derive the uncertainty from a single profile, employing a statistical method based on the statistical moments of the amplitude distribution and the autocorrelation length of the profile. To show the possibilities and the limitations of this method, we compare the uncertainty derived from a single profile with that derived from a high-statistics experiment. (paper)
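The single-profile idea, deriving an uncertainty from amplitude moments and the autocorrelation length, can be sketched as follows. The profile is synthetic, and the final variance-of-variance formula is a simplified stand-in for the paper's method, assuming near-Gaussian amplitudes:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic correlated profile: a moving-average filter gives the white
# noise a finite autocorrelation length.
z = np.convolve(rng.normal(0.0, 1.0, 5000), np.ones(20) / 20, mode='valid')
z -= z.mean()

rq = np.sqrt(np.mean(z ** 2))   # rms roughness (second amplitude moment)

# Autocorrelation length: first lag where the normalized autocorrelation
# drops below 1/e.
ac = np.correlate(z, z, mode='full')[len(z) - 1:] / np.arange(len(z), 0, -1)
ac /= ac[0]
lcorr = int(np.argmax(ac < 1.0 / np.e))

# Effective number of independent samples, and the resulting Rq uncertainty
# from the variance-of-variance argument for Gaussian amplitudes.
n_eff = len(z) / (2 * lcorr)
u_rq = rq / np.sqrt(2 * n_eff)
```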
Plessis, S.; McDougall, D.; Mandt, K.; Greathouse, T.; Luspay-Kuti, A.
2015-11-01
Bimolecular diffusion coefficients are important parameters used by atmospheric models to calculate altitude profiles of minor constituents in an atmosphere. Unfortunately, laboratory measurements of these coefficients were never conducted at temperature conditions relevant to the atmosphere of Titan. Here we conduct a detailed uncertainty analysis of the bimolecular diffusion coefficient parameters as applied to Titan's upper atmosphere to provide a better understanding of the impact of uncertainty for this parameter on models. Because temperature and pressure conditions are much lower than the laboratory conditions in which bimolecular diffusion parameters were measured, we apply a Bayesian framework, a problem-agnostic framework, to determine parameter estimates and associated uncertainties. We solve the Bayesian calibration problem using the open-source QUESO library which also performs a propagation of uncertainties in the calibrated parameters to temperature and pressure conditions observed in Titan's upper atmosphere. Our results show that, after propagating uncertainty through the Massman model, the uncertainty in molecular diffusion is highly correlated to temperature and we observe no noticeable correlation with pressure. We propagate the calibrated molecular diffusion estimate and associated uncertainty to obtain an estimate with uncertainty due to bimolecular diffusion for the methane molar fraction as a function of altitude. Results show that the uncertainty in methane abundance due to molecular diffusion is in general small compared to eddy diffusion and the chemical kinetics description. However, methane abundance is most sensitive to uncertainty in molecular diffusion above 1200 km where the errors are nontrivial and could have important implications for scientific research based on diffusion models in this altitude range.
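The calibrate-then-propagate workflow can be sketched with a hand-rolled Metropolis sampler standing in for the QUESO machinery; the power-law diffusion model, data points, noise level, and temperatures below are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(10)

# Calibrate the exponent s in a hypothetical model D(T) = (T/300 K)**s
# against noisy laboratory "measurements" at warm temperatures.
T_lab = np.array([300.0, 400.0, 500.0])
D_obs = (T_lab / 300.0) ** 1.75 * (1.0 + rng.normal(0.0, 0.05, T_lab.size))

def log_post(s):
    if not 1.0 < s < 2.5:                      # flat prior on s
        return -np.inf
    pred = (T_lab / 300.0) ** s
    return -0.5 * np.sum(((D_obs - pred) / (0.05 * pred)) ** 2)

# Random-walk Metropolis sampling of the posterior.
s, lp, chain = 1.5, log_post(1.5), []
for _ in range(20000):
    s_new = s + rng.normal(0.0, 0.05)
    lp_new = log_post(s_new)
    if np.log(rng.uniform()) < lp_new - lp:
        s, lp = s_new, lp_new
    chain.append(s)
chain = np.array(chain[5000:])                 # discard burn-in

# Propagate the calibrated uncertainty to a colder, Titan-like temperature.
D_150 = (150.0 / 300.0) ** chain
```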
Propagation of uncertainties from basic data to key parameters of nuclear reactors
International Nuclear Information System (INIS)
Kodeli, I.
2010-01-01
The author reports the development of a software package (SUSD3D) and of libraries of nuclear data covariance matrices to assess the sensitivities of reactor parameters, notably radiation transport parameters such as reactivity coefficients or neutron and gamma-ray fluxes, with respect to basic nuclear data, together with the corresponding uncertainties. The application to fusion and fission reactors is reported.
Uncertainties in the Item Parameter Estimates and Robust Automated Test Assembly
Veldkamp, Bernard P.; Matteucci, Mariagiulia; de Jong, Martijn G.
2013-01-01
Item response theory parameters have to be estimated, and because of the estimation process, they do have uncertainty in them. In most large-scale testing programs, the parameters are stored in item banks, and automated test assembly algorithms are applied to assemble operational test forms. These algorithms treat item parameters as fixed values,…
Chen, Zhuowei; Shi, Liangsheng; Ye, Ming; Zhu, Yan; Yang, Jinzhong
2018-06-01
Nitrogen reactive transport modeling is subject to uncertainty in model parameters, structures, and scenarios. Using a new variance-based global sensitivity analysis method, this paper identifies important parameters for nitrogen reactive transport with simultaneous consideration of these three uncertainties. A combination of three scenarios of soil temperature and two scenarios of soil moisture creates a total of six scenarios. Four alternative models describing the effect of soil temperature and moisture content are used to evaluate the reduction functions used for calculating actual reaction rates. The results show that for the nitrogen reactive transport problem, parameter importance varies substantially among different models and scenarios. The denitrification and nitrification processes are sensitive to soil moisture content status rather than to the moisture function parameter. The nitrification process becomes more important at low moisture content and low temperature. However, the changing importance of nitrification activity with respect to temperature change depends strongly on the selected model. Model averaging is suggested to assess the nitrification (or denitrification) contribution by reducing the possible model error. Whether or not biochemical heterogeneity is introduced, a fairly consistent parameter importance ranking is obtained in this study: the optimal denitrification rate (Kden) is the most important parameter; the reference temperature (Tr) is more important than the temperature coefficient (Q10); the empirical constant in the moisture response function (m) is the least important. The vertical distribution of soil moisture, but not temperature, plays the predominant role in controlling nitrogen reaction. This study provides insight into nitrogen reactive transport modeling and demonstrates an effective strategy for selecting the important parameters when future temperature and soil moisture carry uncertainties or when modelers face multiple ways of establishing nitrogen
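A variance-based first-order sensitivity index, S_i = Var(E[Y|X_i]) / Var(Y), combined with model averaging over alternative reduction functions, can be sketched as follows. The two temperature reduction functions, parameter ranges, and rates are hypothetical stand-ins for the models compared in such studies:

```python
import numpy as np

rng = np.random.default_rng(6)

# Two hypothetical temperature reduction-function models for a reaction rate.
def model_a(k, t):
    return k * np.exp(0.07 * (t - 20.0))       # exponential form

def model_b(k, t):
    return k * 2.0 ** ((t - 20.0) / 10.0)      # Q10-style form

n = 20000
k = rng.uniform(0.5, 1.5, n)    # optimal rate parameter (Kden-like)
t = rng.uniform(5.0, 30.0, n)   # soil temperature scenario
y = 0.5 * model_a(k, t) + 0.5 * model_b(k, t)  # simple model averaging

def first_order(x, y, bins=50):
    """Estimate S_i = Var(E[Y|X_i]) / Var(Y) by quantile binning."""
    edges = np.quantile(x, np.linspace(0.0, 1.0, bins + 1)[1:-1])
    idx = np.digitize(x, edges)
    cond = np.array([y[idx == b].mean() for b in range(bins)])
    return cond.var() / y.var()

s_k, s_t = first_order(k, y), first_order(t, y)
```

With these illustrative ranges, temperature dominates the output variance, mirroring the kind of scenario-versus-parameter comparison described above.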
Uncertainties of the Yn Parameters of the Hage-Cifarelli Formalism
Energy Technology Data Exchange (ETDEWEB)
Smith-Nelson, Mark A. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Burr, Thomas Lee [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Hutchinson, Jesson D. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Cutler, Theresa Elizabeth [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2015-08-14
One method for determining the physical parameters of a multiplying system is summarized by Cifarelli [1]. In this methodology the single, double and triple rates are determined from what is commonly referred to as Feynman histograms. This paper will examine two methods for estimating the uncertainty in the parameters used in inferring these rates. These methods will be compared with simulated data in order to determine which one best approximates the sample uncertainty.
Sensitivity coefficients of reactor parameters in fast critical assemblies and uncertainty analysis
International Nuclear Information System (INIS)
Aoyama, Takafumi; Suzuki, Takayuki; Takeda, Toshikazu; Hasegawa, Akira; Kikuchi, Yasuyuki.
1986-02-01
Sensitivity coefficients of reactor parameters in several fast critical assemblies to various cross sections were calculated in 16 groups by means of the SAGEP code, based on generalized perturbation theory. The sensitivity coefficients were tabulated and the differences among them were discussed. Furthermore, the uncertainties of the calculated reactor parameters due to cross section uncertainty were estimated using the sensitivity coefficients and cross section covariance data. (author)
Exploring uncertainty in the Earth Sciences - the potential field perspective
Saltus, R. W.; Blakely, R. J.
2013-12-01
Interpretation of gravity and magnetic anomalies is mathematically non-unique because multiple theoretical solutions are possible. The mathematical label of 'non-uniqueness' can lead to the erroneous impression that no single interpretation is better in a geologic sense than any other. The purpose of this talk is to present a practical perspective on the theoretical non-uniqueness of potential field interpretation in geology. There are multiple ways to approach and constrain potential field studies to produce significant, robust, and definitive results. For example, a smooth, bell-shaped gravity profile, in theory, could be caused by an infinite set of physical density bodies, ranging from a deep, compact, circular source to a shallow, smoothly varying, inverted bell-shaped source. In practice, however, we can use independent geologic or geophysical information to limit the range of possible source densities and rule out many of the theoretical solutions. We can further reduce the theoretical uncertainty by careful attention to subtle anomaly details. For example, short-wavelength anomalies are a well-known and theoretically established characteristic of shallow geologic sources. The 'non-uniqueness' of potential field studies is closely related to the more general topic of scientific uncertainty in the Earth sciences and beyond. Nearly all results in the Earth sciences are subject to significant uncertainty because problems are generally addressed with incomplete and imprecise data. The increasing need to combine results from multiple disciplines into integrated solutions in order to address complex global issues requires special attention to the appreciation and communication of uncertainty in geologic interpretation.
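The non-uniqueness described above has an exact classical illustration: by the shell theorem, a uniform sphere and a point mass of equal mass at the same center produce identical gravity outside the sphere, so surface data alone cannot distinguish the compact from the distributed source. A numerical check (source depth, mass, and radius are hypothetical):

```python
import numpy as np

G = 6.674e-11
x = np.linspace(-2000.0, 2000.0, 81)   # surface profile coordinates (m)
depth, mass = 500.0, 1.0e9             # hypothetical buried source

# Vertical gravity anomaly of a point mass at (0, 0, depth):
g_point = G * mass * depth / (x ** 2 + depth ** 2) ** 1.5

# Same mass spread uniformly through a 200 m sphere at the same center,
# integrated by Monte Carlo over points rejection-sampled in the ball:
rng = np.random.default_rng(7)
pts = rng.uniform(-1.0, 1.0, (100000, 3))
pts = pts[(pts ** 2).sum(axis=1) <= 1.0] * 200.0
dm = mass / len(pts)
r2 = ((x[:, None] - pts[None, :, 0]) ** 2
      + pts[None, :, 1] ** 2
      + (depth + pts[None, :, 2]) ** 2)
g_sphere = (G * dm * (depth + pts[None, :, 2]) / r2 ** 1.5).sum(axis=1)

# The two anomalies agree to within Monte Carlo error: surface gravity
# alone cannot distinguish the two density models.
max_rel_diff = np.max(np.abs(g_sphere - g_point) / g_point)
```

Independent constraints (drilling, seismic data, anomaly wavelength content) are what break such ambiguities in practice, as the abstract argues.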
Degeling, Koen; IJzerman, Maarten J.; Koopman, Miriam; Koffijberg, Hendrik
2017-01-01
Background Parametric distributions based on individual patient data can be used to represent both stochastic and parameter uncertainty. Although general guidance is available on how parameter uncertainty should be accounted for in probabilistic sensitivity analysis, there is no comprehensive guidance on reflecting parameter uncertainty in the (correlated) parameters of distributions used to represent stochastic uncertainty in patient-level models. This study aims to provide this guidance by ...
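One common way to reflect parameter uncertainty in the (correlated) parameters of a distribution fitted to individual patient data is to bootstrap the fit, which yields correlated parameter draws for probabilistic sensitivity analysis. A minimal sketch on synthetic patient-level data; the lognormal choice and sample sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(8)

# Hypothetical individual-patient time-to-event data (months).
data = rng.weibull(1.5, 300) * 12.0

# Stochastic uncertainty: sampling events from the fitted distribution.
# Parameter uncertainty: bootstrapping the fit gives correlated draws of
# the distribution's parameters.
def fit_lognormal(x):
    logs = np.log(x)
    return logs.mean(), logs.std()

boot = np.array([fit_lognormal(rng.choice(data, size=len(data)))
                 for _ in range(1000)])
mu_sd, sigma_sd = boot.std(axis=0)        # parameter-level uncertainty
corr = np.corrcoef(boot.T)[0, 1]          # correlation between mu and sigma
```

In a patient-level model, each probabilistic-sensitivity-analysis iteration would draw one (mu, sigma) pair from `boot` and then simulate patients from that distribution, keeping the two uncertainty levels distinct.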
Effect of Uncertainty Parameters in Blowdown and Reflood Models for OPR1000 LBLOCA Analysis
Energy Technology Data Exchange (ETDEWEB)
Huh, Byung Gil; Jin, Chang Yong; Seul, Kwangwon; Hwang, Taesuk [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)
2014-05-15
KINS (Korea Institute of Nuclear Safety) has also performed an audit calculation with the KINS Realistic Evaluation Methodology (KINS-REM) to confirm the validity of the licensee's calculation. In the BEPU method, it is very important to quantify the code and model uncertainty. This is reflected in the following requirement for BE calculations in Regulatory Guide 1.157: 'the code and models used are acceptable and applicable to the specific facility over the intended operating range and must quantify the uncertainty in the specific application'. In general, the uncertainty of a model or code should be obtained through comparison with data from relevant integral- and separate-effect tests at different scales. However, it is not easy to determine these kinds of uncertainty because of the difficulty of evaluating the various experiments accurately. Therefore, expert judgment has been used in many cases, even with the limitation that the resulting uncertainty ranges of important parameters can be wide and inaccurate. In the KINS-REM, six heat transfer parameters in the blowdown phase have been used to consider the uncertainty of the models. Recently, the MARS-KS code was modified to consider the uncertainty of five heat transfer parameters in the reflood phase. Accordingly, the uncertainty ranges for the parameters of the reflood models must be determined and the effect of these ranges evaluated. In this study, a large break LOCA (LBLOCA) analysis for OPR1000 was performed to identify the effect of uncertainty parameters in the blowdown and reflood models.
Parameters-related uncertainty in modeling sugar cane yield with an agro-Land Surface Model
Valade, A.; Ciais, P.; Vuichard, N.; Viovy, N.; Ruget, F.; Gabrielle, B.
2012-12-01
Agro-Land Surface Models (agro-LSM) have been developed from the coupling of specific crop models and large-scale generic vegetation models. They aim to account for the spatial distribution and variability of energy, water and carbon fluxes within the soil-vegetation-atmosphere continuum, with particular emphasis on how crop phenology and agricultural management practices influence the turbulent fluxes exchanged with the atmosphere and the underlying water and carbon pools. Part of the uncertainty in these models is related to the many parameters included in the models' equations. In this study, we quantify the parameter-based uncertainty in the simulation of sugar cane biomass production with the agro-LSM ORCHIDEE-STICS, using a multi-regional approach with data from sites in Australia, La Reunion and Brazil. First, the main sources of uncertainty for the output variables NPP, GPP, and sensible heat flux (SH) are determined through a screening of the main parameters of the model on a multi-site basis, leading to the selection of a subset of the most sensitive parameters, which cause most of the uncertainty. In a second step, a sensitivity analysis is carried out on the parameters selected from the screening analysis at a regional scale. For this, a Monte Carlo sampling method associated with the calculation of Partial Rank Correlation Coefficients is used. First, we quantify the sensitivity of the output variables to individual input parameters on a regional scale for two regions of intensive sugar cane cultivation in Australia and Brazil. Then, we quantify the overall uncertainty in the simulation's outputs propagated from the uncertainty in the input parameters. Seven parameters are identified by the screening procedure as driving most of the uncertainty in the agro-LSM ORCHIDEE-STICS model output at all sites. These parameters control photosynthesis (optimal temperature of photosynthesis, optimal carboxylation rate), radiation interception (extinction coefficient), root
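The Monte-Carlo-plus-PRCC step described above can be sketched with NumPy alone. The three-parameter toy model below (and its coefficients) is an illustrative assumption, not part of the ORCHIDEE-STICS study; it only shows the mechanics of rank-transforming, regressing out the other parameters, and correlating residuals.

```python
import numpy as np

def prcc(X, y):
    """Partial rank correlation of each column of X with output y.

    Rank-transform everything, then for each parameter correlate the
    residuals after regressing out the remaining parameters.
    """
    n, k = X.shape
    R = np.argsort(np.argsort(X, axis=0), axis=0).astype(float)  # column ranks
    ry = np.argsort(np.argsort(y)).astype(float)                 # output ranks
    out = np.empty(k)
    for j in range(k):
        others = np.column_stack([np.ones(n), np.delete(R, j, axis=1)])
        # Residuals of parameter j and of the output w.r.t. the other parameters
        res_x = R[:, j] - others @ np.linalg.lstsq(others, R[:, j], rcond=None)[0]
        res_y = ry - others @ np.linalg.lstsq(others, ry, rcond=None)[0]
        out[j] = np.corrcoef(res_x, res_y)[0, 1]
    return out

rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 3))                               # three hypothetical parameters
y = 5 * X[:, 0] - 2 * X[:, 1] + 0.1 * rng.normal(size=500)   # X[:, 2] is inert
coef = prcc(X, y)
```

With this toy model the first coefficient comes out strongly positive, the second strongly negative, and the inert parameter's coefficient stays near zero, which is the pattern a screening step looks for.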
Uncertainty estimation of core safety parameters using cross-correlations of covariance matrix
International Nuclear Information System (INIS)
Yamamoto, Akio; Yasue, Yoshihiro; Endo, Tomohiro; Kodama, Yasuhiro; Ohoka, Yasunori; Tatsumi, Masahiro
2013-01-01
An uncertainty reduction method is proposed for core safety parameters for which measurement values are not obtained. We empirically recognize that correlations exist among the prediction errors of core safety parameters, e.g., between the control rod worth and the assembly relative power at the corresponding position. Correlations of errors among core safety parameters are theoretically estimated using the covariance of cross sections and the sensitivity coefficients of core parameters. The estimated correlations are verified through direct Monte Carlo sampling. Once the correlation of errors among core safety parameters is known, we can estimate the uncertainty of a safety parameter for which no measurement value is available. (author)
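The variance reduction obtainable from a known error correlation can be illustrated with a small bivariate-normal sketch; the covariance numbers below are hypothetical, not taken from the paper.

```python
import numpy as np

# Hypothetical 2x2 covariance of prediction errors for two core parameters:
# x = assembly relative power (measured), y = control rod worth (not measured).
var_x, var_y, rho = 0.04, 0.09, 0.8
cov_xy = rho * np.sqrt(var_x * var_y)
cov = np.array([[var_x, cov_xy],
                [cov_xy, var_y]])

# Conditioning the unmeasured parameter on the measurement reduces its
# variance by the factor (1 - rho**2) in the bivariate normal case.
var_y_given_x = var_y - cov[0, 1] ** 2 / var_x
reduction = 1 - var_y_given_x / var_y   # equals rho**2
```

With a correlation of 0.8, knowledge of the measured parameter removes 64% of the unmeasured parameter's error variance, which is the kind of gain the proposed method exploits.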
Pande, S.; Arkesteijn, L.; Savenije, H.H.G.; Bastidas, L.A.
2014-01-01
This paper presents evidence that model prediction uncertainty does not necessarily rise with parameter dimensionality (the number of parameters). Here by prediction we mean future simulation of a variable of interest conditioned on certain future values of input variables. We utilize a relationship
The Effects of Uncertainty in Speed-Flow Curve Parameters on a Large-Scale Model
DEFF Research Database (Denmark)
Manzo, Stefano; Nielsen, Otto Anker; Prato, Carlo Giacomo
2014-01-01
-delay functions express travel time as a function of traffic flows and the theoretical capacity of the modeled facility. The U.S. Bureau of Public Roads (BPR) formula is one of the most extensively applied volume delay functions in practice. This study investigated uncertainty in the BPR parameters. Initially......-stage Danish national transport model. The results clearly highlight the importance to modeling purposes of taking into account BPR formula parameter uncertainty, expressed as a distribution of values rather than assumed point values. Indeed, the model output demonstrates a noticeable sensitivity to parameter...
DEFF Research Database (Denmark)
Minsley, B. J.; Christensen, Nikolaj Kruse; Christensen, Steen
Model structure, or the spatial arrangement of subsurface lithological units, is fundamental to the hydrological behavior of Earth systems. Knowledge of geological model structure is critically important in order to make informed hydrological predictions and management decisions. Model structure...... is never perfectly known, however, and incorrect assumptions can be a significant source of error when making model predictions. We describe a systematic approach for quantifying model structural uncertainty that is based on the integration of sparse borehole observations and large-scale airborne...... electromagnetic (AEM) data. Our estimates of model structural uncertainty follow a Bayesian framework that accounts for both the uncertainties in geophysical parameter estimates given AEM data, and the uncertainties in the relationship between lithology and geophysical parameters. Using geostatistical sequential...
International Nuclear Information System (INIS)
Küng, Alain; Meli, Felix; Nicolet, Anaïs; Thalmann, Rudolf
2014-01-01
Tactile ultra-precise coordinate measuring machines (CMMs) are very attractive for accurately measuring optical components with high slopes, such as aspheres. The METAS µ-CMM, which exhibits a single point measurement repeatability of a few nanometres, is routinely used for measurement services of microparts, including optical lenses. However, estimating the measurement uncertainty is very demanding. Because of the many combined influencing factors, an analytic determination of the uncertainty of parameters that are obtained by numerical fitting of the measured surface points is almost impossible. The application of numerical simulation (Monte Carlo methods) using a parametric fitting algorithm coupled with a virtual CMM based on a realistic model of the machine errors offers an ideal solution to this complex problem: to each measurement data point, a simulated measurement variation calculated from the numerical model of the METAS µ-CMM is added. Repeated several hundred times, these virtual measurements deliver the statistical data for calculating the probability density function, and thus the measurement uncertainty, for each parameter. Additionally, possible cross-correlations between parameters can be analyzed. This method can be applied for the calibration and uncertainty estimation of any parameter of the equation representing a geometric element. In this article, we present the numerical simulation model of the METAS µ-CMM and the application of a Monte Carlo method for the uncertainty estimation of measured asphere parameters. (paper)
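A minimal "virtual CMM" Monte Carlo loop of the kind described can be sketched as follows. The circle-radius fit, the 50 nm point error and the sample counts are illustrative assumptions standing in for the METAS µ-CMM error model and the asphere parameter fit.

```python
import numpy as np

rng = np.random.default_rng(1)
true_r = 5.0
theta = np.linspace(0, np.pi, 50)                 # measured profile points
x, y = true_r * np.cos(theta), true_r * np.sin(theta)

def fit_radius(xs, ys):
    # Radius of a circle centred at the origin, fitted by least squares;
    # a stand-in for the parametric asphere fit.
    return np.mean(np.hypot(xs, ys))

sigma_machine = 50e-6   # 50 nm simulated single-point error, in mm (assumed)
radii = []
for _ in range(300):    # virtual measurements: perturb each point, refit
    r = fit_radius(x + rng.normal(0, sigma_machine, x.size),
                   y + rng.normal(0, sigma_machine, y.size))
    radii.append(r)
u_r = np.std(radii)     # Monte Carlo standard uncertainty of the radius
```

The spread of the refitted radii directly yields the parameter's standard uncertainty, here roughly the single-point error divided by the square root of the number of probed points.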
International Nuclear Information System (INIS)
Mateus, M; Carrilho, J Dias; Da Silva, M Gameiro
2015-01-01
The present study deals with the influence of the sampling parameters on the uncertainty of the equivalent noise level in environmental noise measurements. The study was carried out by testing different sampling strategies through resampling trials on continuous noise monitoring files obtained previously at an urban location in the city of Coimbra, Portugal. In short-term measurements, both the duration of the sampling episodes and their number influence the uncertainty of the result. This influence is higher for the time periods in which sound levels vary most, such as the night period. In this period, if both parameters (duration and number of sampling episodes) are not carefully selected, the uncertainty can reach values high enough to cause a loss of precision in the measurements. Using the obtained data, the influence of the sampling parameters on the uncertainty of the long-term noise indicator, calculated according to the method proposed in Draft 1st CD ISO 1996-2:2012, was investigated. It was verified that this method allows a general methodology to be defined in which the parameters are set once the precision level is fixed. For the three reference periods defined for environmental noise (day, evening and night), it was possible to derive a two-variable power law representing the uncertainty of the determined values as a function of the two sampling parameters: duration of the sampling episodes and number of episodes
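The resampling-trial idea can be sketched like this; the synthetic 8-hour night record and its level distribution are assumptions, not the Coimbra data. Note that equivalent levels are averaged energetically, not arithmetically.

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic continuous night record: one L_Aeq,1s value per second over
# 8 hours, with fluctuating levels (values and distribution are assumed).
full = 45 + 8 * rng.standard_normal(8 * 3600)

def leq(levels_db):
    # Energetic (logarithmic) average of sound levels in dB.
    return 10 * np.log10(np.mean(10 ** (levels_db / 10)))

def sampled_leq(n_episodes, dur_s):
    # Estimate Leq from n_episodes randomly placed episodes of dur_s seconds.
    starts = rng.integers(0, full.size - dur_s, n_episodes)
    chunks = np.concatenate([full[s:s + dur_s] for s in starts])
    return leq(chunks)

ref = leq(full)
# Resampling trials: the spread of the estimate shrinks as episodes
# become longer and more numerous.
few = np.std([sampled_leq(3, 60) for _ in range(200)])
many = np.std([sampled_leq(12, 300) for _ in range(200)])
```

Comparing `few` and `many` reproduces the paper's qualitative finding: too few, too short episodes in a highly variable period inflate the uncertainty of the result.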
Arpaia, P; Lucariello, G; Spiezia, G
2007-01-01
At the European Centre for Nuclear Research (CERN), within the new Large Hadron Collider (LHC) project, measurements of magnetic flux with an uncertainty of 10 ppm over a few decades of Hz for several minutes are required. With this aim, a new Fast Digital Integrator (FDI) has been developed in cooperation with the University of Sannio, Italy [1]. This paper deals with the final design tuning for achieving the target uncertainty by means of experimental statistical parameter design.
Energy Technology Data Exchange (ETDEWEB)
Kumar, Vikas, E-mail: vikas.kumar@urv.cat [Department of Chemical Engineering, Rovira i Virgili University, Tarragona 43007 (Spain); Barros, Felipe P.J. de [Sonny Astani Department of Civil and Environmental Engineering, University of Southern California, Los Angeles 90089, CA (United States); Schuhmacher, Marta [Department of Chemical Engineering, Rovira i Virgili University, Tarragona 43007 (Spain); Fernàndez-Garcia, Daniel; Sanchez-Vila, Xavier [Hydrogeology Group, Department of Geotechnical Engineering and Geosciences, University Politècnica de Catalunya-BarcelonaTech, Barcelona 08034 (Spain)
2013-12-15
Highlights: • Dynamic parametric interaction in daily dose prediction under uncertainty. • Importance of temporal dynamics associated with the dose. • Different dose experienced by different population cohorts as a function of time. • Relevance of uncertainty reduction in the input parameters shows temporal dynamism. -- Abstract: We study the time-dependent interaction between hydrogeological and exposure parameters in daily dose predictions due to the exposure of humans to groundwater contamination. Dose predictions are treated stochastically to account for an incomplete hydrogeological and geochemical field characterization and an incomplete knowledge of the physiological response. We use a nested Monte Carlo framework to account for uncertainty and variability arising from both hydrogeological and exposure variables. Our interest is in the temporal dynamics of the total dose and their effects on parametric uncertainty reduction. We illustrate the approach with an HCH (lindane) pollution problem at the Ebro River, Spain. The temporal distribution of lindane in the river water can have a strong impact on the evaluation of risk. The total dose displays a non-linear effect on different population cohorts, indicating the need to account for population variability. We then expand the concept of Comparative Information Yield Curves developed earlier (see de Barros et al. [29]) to evaluate parametric uncertainty reduction under temporally variable exposure dose. Results show that the importance of parametric uncertainty reduction varies according to the temporal dynamics of the lindane plume. The approach could be used for any chemical to aid decision makers in better allocating resources towards reducing uncertainty.
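A nested Monte Carlo loop of the kind used here separates epistemic uncertainty (outer loop) from inter-individual variability (inner loop). The dose model and all distributions below are illustrative placeholders, not the paper's Ebro River parameterization.

```python
import numpy as np

rng = np.random.default_rng(3)

def daily_dose(conc, intake_rate, body_weight):
    # Generic ingestion dose model (illustrative, not the paper's model).
    return conc * intake_rate / body_weight

n_outer, n_inner = 200, 500
doses = np.empty((n_outer, n_inner))
for i in range(n_outer):
    # Outer loop: epistemic uncertainty in the hydrogeological input,
    # here a lognormal contaminant concentration in river water (assumed).
    conc = rng.lognormal(mean=0.0, sigma=0.5)
    # Inner loop: variability across the exposed population cohort.
    intake = rng.normal(2.0, 0.3, n_inner).clip(min=0.5)     # L/day
    weight = rng.normal(70.0, 12.0, n_inner).clip(min=30.0)  # kg
    doses[i] = daily_dose(conc, intake, weight)

# Spread across outer realisations = uncertainty in the cohort statistic;
# spread within a row = variability among individuals.
median_per_realisation = np.median(doses, axis=1)
```

Keeping the two loops separate is what allows uncertainty in a population statistic (such as the cohort median dose) to be reported apart from the variability between individuals.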
Energy Technology Data Exchange (ETDEWEB)
Voegl, E
1970-07-01
This study aims to acquaint the oil geologist, reservoir engineer, and manager with modern methods of appraising geological/technical projects and decision problems under uncertainty. Uncertainty attaches to any appraisal of investment projects whose income lies in the future. The greater that uncertainty, the less important the appraisal methods proper become, while the computational procedures concerning uncertainty gain in significance. The tools of risk determination, i.e., mathematical statistics and probability theory, are briefly discussed, and some of the most common methods of quantifying uncertainty are explained. The best-known methods of decision making under multivalent or uncertain expectations, such as conditional and sensitivity analyses, the minimax and minimax-risk rules, and preference theory, are set forth. Risk is defined, and the most common methods of genuine risk determination in exploration and exploitation are discussed. Practical examples illustrate the solution of decision problems under uncertainty, and examples of genuine risk determination are furnished. (29 refs.)
Zhang, Xuesong
2011-11-01
Estimating the uncertainty of hydrologic forecasting is valuable to water resources and other relevant decision-making processes. Recently, Bayesian Neural Networks (BNNs) have proven to be powerful tools for quantifying the uncertainty of streamflow forecasting. In this study, we propose a Markov Chain Monte Carlo (MCMC) framework (BNN-PIS) to incorporate the uncertainties associated with parameters, inputs, and structures into BNNs. This framework allows the structure of the neural networks to change by removing or adding connections between neurons, and enables scaling of input data by using rainfall multipliers. The results show that the new BNNs outperform BNNs that only consider uncertainties associated with parameters and model structures. Critical evaluation of the posterior distributions of the neural network weights, the number of effective connections, the rainfall multipliers, and the hyper-parameters shows that the assumptions held in our BNNs are not well supported. Further understanding of the characteristics of, and interactions among, different uncertainty sources is expected to enhance the application of neural networks for uncertainty analysis of hydrologic forecasting. © 2011 Elsevier B.V.
Exploring entropic uncertainty relation in the Heisenberg XX model with inhomogeneous magnetic field
Huang, Ai-Jun; Wang, Dong; Wang, Jia-Ming; Shi, Jia-Dong; Sun, Wen-Yang; Ye, Liu
2017-08-01
In this work, we investigate the quantum-memory-assisted entropic uncertainty relation in a two-qubit Heisenberg XX model with an inhomogeneous magnetic field. It is found that a larger coupling strength J between the two spin-chain qubits can effectively reduce the entropic uncertainty. Besides, we examine how the inhomogeneous field influences the uncertainty, and find out that when the inhomogeneous field parameter b1. Intriguingly, the entropic uncertainty can shrink to zero when the coupling coefficients are relatively large, while the entropic uncertainty only reduces to 1 with increasing homogeneous magnetic field. Additionally, we examine the purity of the state and the Bell non-locality, and find that the entropic uncertainty is anticorrelated with both the purity and the Bell non-locality of the evolution state.
Li, Wei Bo; Greiter, Matthias; Oeh, Uwe; Hoeschen, Christoph
2011-12-01
The reliability of biokinetic models is essential in internal dose assessments and radiation risk analysis for the public, occupational workers, and patients exposed to radionuclides. In this paper, a method for assessing the reliability of biokinetic models by means of uncertainty and sensitivity analysis was developed. The paper is divided into two parts. In the first part of the study published here, the uncertainty sources of the model parameters for zirconium (Zr), developed by the International Commission on Radiological Protection (ICRP), were identified and analyzed. Furthermore, the uncertainty of the biokinetic experimental measurement performed at the Helmholtz Zentrum München-German Research Center for Environmental Health (HMGU) for developing a new biokinetic model of Zr was analyzed according to the Guide to the Expression of Uncertainty in Measurement, published by the International Organization for Standardization. The confidence interval and distribution of model parameters of the ICRP and HMGU Zr biokinetic models were evaluated. As a result of computer biokinetic modelings, the mean, standard uncertainty, and confidence interval of model prediction calculated based on the model parameter uncertainty were presented and compared to the plasma clearance and urinary excretion measured after intravenous administration. It was shown that for the most important compartment, the plasma, the uncertainty evaluated for the HMGU model was much smaller than that for the ICRP model; that phenomenon was observed for other organs and tissues as well. The uncertainty of the integral of the radioactivity of Zr up to 50 y calculated by the HMGU model after ingestion by adult members of the public was shown to be smaller by a factor of two than that of the ICRP model. It was also shown that the distribution type of the model parameter strongly influences the model prediction, and the correlation of the model input parameters affects the model prediction to a
Influence of resonance parameters' correlations on the resonance integral uncertainty; 55Mn case
International Nuclear Information System (INIS)
Zerovnik, Gasper; Trkov, Andrej; Capote, Roberto; Rochman, Dimitri
2011-01-01
For nuclides with a large number of resonances, the covariance matrix of the resonance parameters can become very large and expensive to process in terms of computation time. By converting the covariance matrix of resonance parameters into covariance matrices of background cross-sections in a more or less coarse group structure, a considerable amount of computer time and memory can be saved. The question is how important the information discarded in this process is. First, the uncertainty of the 55Mn resonance integral was estimated in the narrow resonance approximation for different levels of self-shielding using the Bondarenko method, by randomly sampling resonance parameters according to their covariance matrices from two different 55Mn evaluations: one from the Nuclear Research and Consultancy Group (NRG), with large uncertainties but no correlations between resonances, the other from Oak Ridge National Laboratory, with smaller uncertainties but a full covariance matrix. We found that if all (or at least a significant part of) the resonance parameters are correlated, the resonance integral uncertainty depends greatly on the level of self-shielding. Second, it was shown that the commonly used 640-group SAND-II representation cannot describe the increase of the resonance integral uncertainty. A much finer energy mesh for the background covariance matrix would have to be used to take the resonance structure into account explicitly, but then the objective of a more compact data representation is lost.
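The effect of parameter correlations on an integral quantity's uncertainty can be reproduced with a small correlated-sampling sketch. The three "resonance contributions" and their uncertainties below are invented numbers, not 55Mn data; the point is only that a full correlation matrix changes the propagated uncertainty of the sum.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical resonance integral modelled as a sum of per-resonance
# contributions that are linear in the sampled parameters (illustrative).
mean = np.array([10.0, 6.0, 4.0])    # contributions of 3 resonances
sigma = np.array([0.5, 0.4, 0.3])    # their standard uncertainties

def ri_uncertainty(corr):
    # Sample parameters from a multivariate normal with the given
    # correlation matrix and return the spread of the summed integral.
    cov = corr * np.outer(sigma, sigma)
    samples = rng.multivariate_normal(mean, cov, size=20000)
    return samples.sum(axis=1).std()

uncorr = ri_uncertainty(np.eye(3))          # independent parameters
full = ri_uncertainty(np.ones((3, 3)))      # fully correlated parameters
```

For independent parameters the uncertainties add in quadrature (here sqrt(0.5) ≈ 0.71), whereas full correlation makes them add linearly (1.2), illustrating why discarding correlations distorts the resonance integral uncertainty.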
Lutchen, K R
1990-08-01
A sensitivity analysis based on weighted least-squares regression is presented to evaluate alternative methods for fitting lumped-parameter models to respiratory impedance data. The goal is to maintain parameter accuracy simultaneously with practical experiment design. The analysis focuses on predicting parameter uncertainties using a linearized approximation for joint confidence regions. Applications are with four-element parallel and viscoelastic models for 0.125- to 4-Hz data and a six-element model with separate tissue and airway properties for input and transfer impedance data from 2 to 64 Hz. The criterion function form was evaluated by comparing parameter uncertainties when data are fit as magnitude and phase, dynamic resistance and compliance, or real and imaginary parts of input impedance. The proper choice of weighting can make all three criterion variables comparable. For the six-element model, parameter uncertainties were predicted when both input impedance and transfer impedance are acquired and fit simultaneously. A fit to both data sets from 4 to 64 Hz could reduce parameter estimate uncertainties considerably from those achievable by fitting either alone. For the four-element models, use of an independent, but noisy, measure of static compliance was assessed as a constraint on model parameters. This may allow acceptable parameter uncertainties for a minimum frequency of 0.275-0.375 Hz rather than 0.125 Hz, reducing the data acquisition requirement from a 16-s breath-holding period to one of 5.33 to 8 s. These results are approximations, and the impact of using the linearized approximation for the confidence regions is discussed.
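The linearized parameter-uncertainty prediction, cov(p) ≈ s²(JᵀWJ)⁻¹ with J the Jacobian at the fit optimum, can be sketched for a simple one-resistance, one-compliance impedance element. The model, frequency grid and noise level are illustrative assumptions, not the paper's four- or six-element fits.

```python
import numpy as np

# Illustrative R-C impedance element: Z(ω) = R + 1/(jωC),
# fit as real and imaginary parts over 0.125-4 Hz.
f = np.linspace(0.125, 4.0, 30)            # Hz
w = 2 * np.pi * f
R_true, C_true = 2.0, 0.05
Z = R_true + 1 / (1j * w * C_true)

# Jacobian of the stacked [Re Z; Im Z] data w.r.t. (R, C)
dZ_dR = np.ones_like(w) + 0j
dZ_dC = -1 / (1j * w * C_true ** 2)
J = np.column_stack([np.concatenate([dZ_dR.real, dZ_dR.imag]),
                     np.concatenate([dZ_dC.real, dZ_dC.imag])])
W = np.eye(J.shape[0])                     # equal weighting of Re and Im parts
s2 = 0.01 ** 2                             # assumed measurement variance
cov_p = s2 * np.linalg.inv(J.T @ W @ J)    # linearized parameter covariance
u_R, u_C = np.sqrt(np.diag(cov_p))         # predicted standard uncertainties
```

Changing `W` here is exactly the "choice of weighting" the abstract discusses: it redistributes how measurement noise at each frequency maps into uncertainty in each parameter.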
Uncertainty characterization of HOAPS 3.3 latent heat-flux-related parameters
Liman, Julian; Schröder, Marc; Fennig, Karsten; Andersson, Axel; Hollmann, Rainer
2018-03-01
Latent heat flux (LHF) is one of the main contributors to the global energy budget. As the density of in situ LHF measurements over the global oceans is generally poor, the potential of remotely sensed LHF for meteorological applications is enormous. However, to date none of the available satellite products have included estimates of systematic, random, and sampling uncertainties, all of which are essential for assessing their quality. Here, the challenge is taken on by matching LHF-related pixel-level data of the Hamburg Ocean Atmosphere Parameters and Fluxes from Satellite (HOAPS) climatology (version 3.3) to in situ measurements originating from a high-quality data archive of buoys and selected ships. Assuming the ground reference to be bias-free, this allows for deriving instantaneous systematic uncertainties as a function of four atmospheric predictor variables. The approach is regionally independent and therefore overcomes the issue of sparse in situ data densities over large oceanic areas. Likewise, random uncertainties are derived, which include not only a retrieval component but also contributions from in situ measurement noise and the collocation procedure. A recently published random uncertainty decomposition approach is applied to isolate the random retrieval uncertainty of all LHF-related HOAPS parameters. It makes use of two combinations of independent data triplets of both satellite and in situ data, which are analysed in terms of their pairwise variances of differences. Instantaneous uncertainties are finally aggregated, allowing for uncertainty characterizations on monthly to multi-annual timescales. Results show that systematic LHF uncertainties range between 15 and 50 W m-2 with a global mean of 25 W m-2. Local maxima are mainly found over the subtropical ocean basins as well as along the western boundary currents. Investigations indicate that contributions from qa (U) to the overall LHF uncertainty are on the order of 60 % (25 %). From an
Modelling pesticide leaching under climate change: parameter vs. climate input uncertainty
Directory of Open Access Journals (Sweden)
K. Steffens
2014-02-01
Assessing climate change impacts on pesticide leaching requires careful consideration of different sources of uncertainty. We investigated the uncertainty related to climate scenario input and its importance relative to the parameter uncertainty of the pesticide leaching model. The pesticide fate model MACRO was calibrated against a comprehensive one-year field data set for a well-structured clay soil in south-western Sweden. We obtained an ensemble of 56 acceptable parameter sets that represented the parameter uncertainty. Nine different climate model projections of the regional climate model RCA3 were available, driven by different combinations of global climate models (GCMs), greenhouse gas emission scenarios and initial states of the GCM. The future time series of weather data used to drive the MACRO model were generated by scaling a reference climate data set (1970–1999) for an important agricultural production area in south-western Sweden, based on monthly change factors for 2070–2099. Thirty-year simulations were performed for different combinations of pesticide properties and application seasons. Our analysis showed that both the magnitude and the direction of the predicted change in pesticide leaching from present to future depended strongly on the particular climate scenario. The effect of parameter uncertainty was of major importance for simulating absolute pesticide losses, whereas the climate uncertainty was relatively more important for predictions of changes in pesticide losses from present to future. The climate uncertainty should be accounted for by applying an ensemble of different climate scenarios. The aggregated ensemble prediction based on both acceptable parameterizations and different climate scenarios has the potential to provide robust probabilistic estimates of future pesticide losses.
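The monthly change-factor (delta-change) scaling of a reference weather series can be sketched as follows; the synthetic precipitation record and the factors are assumptions, not RCA3 output.

```python
import numpy as np

rng = np.random.default_rng(7)
# Toy year of daily reference precipitation: 12 months x 30 days.
months = np.repeat(np.arange(12), 30)                  # month index per day
ref_precip = rng.gamma(2.0, 2.0, months.size)          # mm/day, synthetic

# Assumed monthly change factors for the future period (not RCA3 values):
# drier early months, wetter late months.
change = np.linspace(0.8, 1.3, 12)

# Delta-change scaling: future series = reference * factor of its month.
future_precip = ref_precip * change[months]
```

Each climate projection supplies its own set of factors, so running the calibrated parameter ensemble under every scaled series is what produces the combined parameter-plus-climate uncertainty the study analyses.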
COMMUNICATING THE PARAMETER UNCERTAINTY IN THE IQWIG EFFICIENCY FRONTIER TO DECISION-MAKERS
Stollenwerk, Björn; Lhachimi, Stefan K; Briggs, Andrew; Fenwick, Elisabeth; Caro, Jaime J; Siebert, Uwe; Danner, Marion; Gerber-Grote, Andreas
2015-01-01
The Institute for Quality and Efficiency in Health Care (IQWiG) developed—in a consultation process with an international expert panel—the efficiency frontier (EF) approach to satisfy a range of legal requirements for economic evaluation in Germany's statutory health insurance system. The EF approach is distinctly different from other health economic approaches. Here, we evaluate established tools for assessing and communicating parameter uncertainty in terms of their applicability to the EF approach. Among these are tools that perform the following: (i) graphically display overall uncertainty within the IQWiG EF (scatter plots, confidence bands, and contour plots) and (ii) communicate the uncertainty around the reimbursable price. We found that, within the EF approach, most established plots were not always easy to interpret. Hence, we propose the use of price reimbursement acceptability curves—a modification of the well-known cost-effectiveness acceptability curves. Furthermore, it emerges that the net monetary benefit allows an intuitive interpretation of parameter uncertainty within the EF approach. This research closes a gap for handling uncertainty in the economic evaluation approach of the IQWiG methods when using the EF. However, the precise consequences of uncertainty when determining prices are yet to be defined. © 2014 The Authors. Health Economics published by John Wiley & Sons Ltd. PMID:24590819
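The proposed price reimbursement acceptability curves are a modification of cost-effectiveness acceptability curves, whose basic computation, the probability of a positive net monetary benefit across a range of willingness-to-pay thresholds, can be sketched as follows; the probabilistic sensitivity analysis samples below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(6)
# Hypothetical PSA samples for one intervention vs. a comparator:
# incremental effects (QALYs) and incremental costs (currency units).
d_effect = rng.normal(0.4, 0.25, 5000)
d_cost = rng.normal(6000.0, 2500.0, 5000)

def acceptability(wtp):
    """Probability that the net monetary benefit is positive at a given
    willingness-to-pay threshold (the acceptability-curve ordinate)."""
    nmb = wtp * d_effect - d_cost
    return float(np.mean(nmb > 0))

curve = {wtp: acceptability(wtp) for wtp in (0, 15000, 30000, 60000)}
```

The curve rises with the threshold, giving decision-makers an intuitive summary of parameter uncertainty, which is the property the authors carry over into their price reimbursement variant.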
Evaluation of thermal-hydraulic parameter uncertainties in a TRIGA research reactor
International Nuclear Information System (INIS)
Mesquita, Amir Z.; Costa, Antonio C.L.; Ladeira, Luiz C.D.; Rezende, Hugo C.; Palma, Daniel A.P.
2015-01-01
Experimental studies have been performed in the TRIGA research nuclear reactor of CDTN/CNEN to determine its thermal-hydraulic parameters. Fuel-to-coolant heat transfer patterns must be evaluated as a function of reactor power in order to assess the thermal-hydraulic performance of the core. The heat generated by nuclear fission in the reactor core is transferred from the fuel elements to the cooling system through the fuel-cladding (gap) and cladding-coolant interfaces. As the reactor core power increases, the heat transfer regime from the fuel cladding to the coolant changes from single-phase natural convection to subcooled nucleate boiling. This paper presents the uncertainty analysis of the results of the thermal-hydraulic experiments performed. The methodology used to evaluate the propagation of uncertainty in the results was based on the pioneering article of Kline and McClintock, with the propagation of uncertainties based on the specification of uncertainties in the various primary measurements. The uncertainty in the thermal-hydraulic parameters of the CDTN TRIGA fuel element is determined, basically, by the uncertainty of the reactor's thermal power. (author)
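The Kline–McClintock propagation referred to above is a root-sum-square of first-order sensitivity terms. The heat-flux example and all numbers below are illustrative assumptions, not the CDTN TRIGA measurements.

```python
import numpy as np

def kline_mcclintock(partials, uncertainties):
    """Root-sum-square propagation: u_R = sqrt(sum((dR/dx_i * u_i)**2))."""
    partials = np.asarray(partials, dtype=float)
    uncertainties = np.asarray(uncertainties, dtype=float)
    return float(np.sqrt(np.sum((partials * uncertainties) ** 2)))

# Illustrative coolant heat-flux result q = m_dot * cp * dT / A,
# propagating the uncertainty of each primary measurement (SI units, assumed).
m_dot, cp, dT, A = 1.2, 4186.0, 8.0, 0.5
q = m_dot * cp * dT / A
partials = [cp * dT / A,        # dq/dm_dot
            m_dot * dT / A,     # dq/dcp
            m_dot * cp / A,     # dq/ddT
            -q / A]             # dq/dA
u = [0.02, 10.0, 0.2, 0.002]    # assumed primary uncertainties
u_q = kline_mcclintock(partials, u)
rel = u_q / q                   # relative uncertainty of the result
```

Inspecting the individual terms shows which primary measurement dominates the propagated uncertainty, which is how such an analysis identifies the thermal power measurement as the dominant contributor.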
Liguori, M
2008-01-01
We study the impact of cosmological parameter uncertainties on estimates of the primordial non-Gaussianity parameter f_NL in local and equilateral models of non-Gaussianity. We show that propagating these errors increases the f_NL relative uncertainty by 16% for WMAP and 5% for Planck in the local case, whereas for equilateral configurations the correction terms are 14% and 4%, respectively. If we assume for local f_NL a central value of order 60, according to recent WMAP 5-year estimates, we obtain for Planck a final correction \Delta f_NL = 3. Although not dramatic, this correction is at the level of the expected estimator uncertainty for Planck, and should therefore be taken into account when quoting the significance of a possible future detection. In current estimates of f_NL the cosmological parameters are held fixed at their best-fit values. We finally note that the impact of uncertainties in the cosmological parameters on the final f_NL error bar would become totally negligible if the parameters were allowed to vary...
Identifying the effects of parameter uncertainty on the reliability of riverbank stability modelling
Samadi, A.; Amiri-Tokaldany, E.; Darby, S. E.
2009-05-01
Bank retreat is a key process in fluvial dynamics, affecting a wide range of physical, ecological and socioeconomic issues in the fluvial environment. To predict the undesirable effects of bank retreat and to inform effective measures to prevent it, a wide range of bank stability models has been presented in the literature. These models typically express bank stability by defining a factor of safety as the ratio of the driving and resisting forces acting on the incipient failure block. These forces are affected by a range of controlling factors that include the bank profile (bank height and angle), the geotechnical properties of the bank materials, and the hydrological status of the riverbanks. In this paper we evaluate the extent to which uncertainties in the parameterization of these controlling factors feed through to influence the reliability of the resulting bank stability estimate. This is achieved by employing a simple model of riverbank stability with respect to planar failure (the most common type of bank stability model) in a series of sensitivity tests and Monte Carlo analyses to identify, for each model parameter, the range of values that induces significant changes in the simulated factor of safety. These identified parameter value ranges are compared to empirically derived parameter uncertainties to determine whether they are likely to confound the reliability of the resulting bank stability calculations. Our results show that parameter uncertainties are typically high enough that the likelihood of generating unreliable predictions is very high (> ~80% for predictions requiring a precision of < ±15%). Because parameter uncertainties derive primarily from the natural variability of the parameters, rather than from measurement errors, much more careful attention should be paid to field sampling strategies, so that the parameter uncertainties and the consequent prediction unreliabilities can be quantified more
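A Monte Carlo factor-of-safety analysis of this kind can be sketched with a simplified planar-failure model; the block geometry and the soil-property distributions below are invented for illustration and do not reproduce the authors' bank stability model.

```python
import numpy as np

rng = np.random.default_rng(5)

def factor_of_safety(c, phi_deg, gamma, H, beta_deg):
    """Simplified planar-failure FoS: resisting over driving forces for a
    bank block sliding on a plane inclined at beta (per unit bank width)."""
    beta = np.radians(beta_deg)
    phi = np.radians(phi_deg)
    W = 0.5 * gamma * H ** 2 / np.tan(beta)   # block weight
    L = H / np.sin(beta)                      # failure-plane length
    driving = W * np.sin(beta)
    resisting = c * L + W * np.cos(beta) * np.tan(phi)
    return resisting / driving

# Monte Carlo over uncertain geotechnical parameters (all values assumed):
n = 10000
c = rng.normal(8.0, 2.0, n).clip(min=0.5)     # cohesion, kPa
phi = rng.normal(30.0, 4.0, n)                # friction angle, degrees
fs = factor_of_safety(c, phi, gamma=18.0, H=3.0, beta_deg=60.0)

# Fraction of realisations deviating more than 15% from the mean FoS,
# i.e. the chance a single deterministic prediction is "unreliable".
p_unreliable = np.mean(np.abs(fs / fs.mean() - 1) > 0.15)
```

Even with modest input spreads, a large share of realisations miss the ±15% precision band, which is the kind of result behind the abstract's reliability figures.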
International Nuclear Information System (INIS)
Yamaguchi, Tetsuji; Minase, Naofumi; Iida, Yoshihisa; Tanaka, Tadao; Nakayama, Shinichi
2005-01-01
This paper describes the current status of our data acquisition for quantifying uncertainties associated with parameters used in safety assessments of groundwater scenarios for the geological disposal of radioactive wastes. First, the sources of uncertainty and the resulting priorities in data acquisition are briefly described. Then, the current status of data acquisition for quantifying the uncertainties in assessing solubility, diffusivity in the bentonite buffer and distribution coefficients on rocks is introduced. The uncertainty in the solubility estimation is quantified from the uncertainty associated with the thermodynamic data and that in estimating the groundwater chemistry. The uncertainty associated with the diffusivity in the bentonite buffer is composed of variations in relevant factors such as the porosity of the buffer, the montmorillonite content, the chemical composition of the pore water and the temperature. Uncertainties in factors such as the specific surface area of the rock, pH, ionic strength and carbonate concentration in groundwater compose the uncertainty of the distribution coefficients of radionuclides on rocks. Based on these investigations, problems to be solved in future studies are summarized. (author)
Energy Technology Data Exchange (ETDEWEB)
Cardoni, Jeffrey N.; Kalinich, Donald A.
2014-02-01
Sandia National Laboratories (SNL) plans to conduct uncertainty analyses (UA) on the Fukushima Daiichi Unit 1 (1F1) plant with the MELCOR code. The model to be used was developed for a previous accident-reconstruction investigation jointly sponsored by the US Department of Energy (DOE) and the Nuclear Regulatory Commission (NRC). However, that study only examined a handful of model inputs and boundary conditions, and the predictions yielded only fair agreement with plant data and current release estimates. The goal of this uncertainty study is to perform a focused evaluation of uncertainty in core melt progression behavior and its effect on key figures-of-merit (e.g., hydrogen production, vessel lower head failure, etc.). In preparation for the SNL Fukushima UA work, a scoping study has been completed to identify important core melt progression parameters for the uncertainty analysis. The study also lays out a preliminary UA methodology.
International Nuclear Information System (INIS)
Campolina, Daniel de Almeida Magalhães
2015-01-01
There is an uncertainty in all the components that comprise the model of a nuclear system. Assessing the impact of uncertainties in the simulation of fissionable material systems is essential for realistic calculations, which have been replacing conservative model calculations as computational power increases. The propagation of uncertainty in a simulation using a Monte Carlo code by sampling the input parameters is recent because of the huge computational effort required. By analyzing the uncertainty propagated to the effective neutron multiplication factor (k_eff), the effects of the sample size, computational uncertainty and the efficiency of a random number generator in representing the distributions that characterize physical uncertainty in a light water reactor were investigated. A program entitled GBsample was implemented to enable the application of the random sampling method, which requires an automated process and robust statistical tools. The program was based on the black box model, and the MCNPX code was used with parallel processing for the calculation of particle transport. The uncertainties considered were taken from a benchmark experiment in which the effect on k_eff due to physical uncertainties is assessed through a conservative method. In this work the script GBsample automates the sampling-based method, uses multiprocessing and assures the necessary robustness. The possibility was found of improving the efficiency of the random sampling method by selecting distributions obtained from a random number generator so as to obtain a better representation of uncertainty figures. After convergence of the method is achieved, the best number of components to be sampled was determined in order to reduce the variance of the propagated uncertainty without increasing computational time. It was also observed that if the sampling method is used to calculate the effect on k_eff due to physical uncertainties reported by
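The sampling-based propagation scheme can be sketched as follows, with a toy analytic stand-in for the transport code (the work above used MCNPX as a black box); the distributions and the k_eff expression are illustrative assumptions only:

```python
import numpy as np

rng = np.random.default_rng(1)

def keff_model(sigma_f, sigma_c, nu):
    """Hypothetical stand-in for a black-box transport code:
    a toy expression, NOT a real reactor model."""
    return nu * sigma_f / (sigma_f + sigma_c)

N = 5_000
# Sample physically uncertain inputs (illustrative distributions)
sigma_f = rng.normal(0.05, 0.002, N)   # fission cross section
sigma_c = rng.normal(0.01, 0.001, N)   # capture cross section
nu      = rng.normal(2.43, 0.01, N)    # neutrons per fission

# Propagate: one "code run" per sampled input set
keff = keff_model(sigma_f, sigma_c, nu)
print(f"k_eff = {keff.mean():.4f} +/- {keff.std(ddof=1):.4f}")
```

In the real workflow each evaluation is an expensive code run, which is why sample size and sampling efficiency matter so much.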
Sun, Wei; Li, Shiyong
2014-08-01
This paper presents an unobservable single-server queueing system with three types of uncertainty, where the service rate, waiting cost or service quality is a random variable that may take any of n (n > 2) values. The information about the realised values of the parameters is only known to the server. We are concerned with the server's behaviour: revealing or concealing the information to customers. The n-value assumption and the server's behaviour enable us to consider various pricing strategies. In this paper, we analyse the effect of information and uncertainty on profits and compare the profits under different pricing strategies. Moreover, regarding the parameter variability reflected by the number n of each parameter's possible values, we observe the effect of varying n on all types of profits and find that revealing the parameter information benefits the server increasingly as n grows.
A Procedure for Characterizing the Range of Input Uncertainty Parameters by the Use of FFTBM
International Nuclear Information System (INIS)
Petruzzi, A.; Kovtonyuk, A.; Raucci, M.; De Luca, D.; Veronese, F.; D'Auria, F.
2013-01-01
In recent years various methodologies have been proposed to evaluate the uncertainty of Best Estimate (BE) code predictions. The method most used at the industrial level is based upon the selection of input uncertain parameters, on assigning related ranges of variation and Probability Distribution Functions (PDFs), and on performing a suitable number of code runs to get the combined effect of the variations on the results. A procedure to characterize the variation ranges of the input uncertain parameters is proposed in the paper in place of the usual approach based (mostly) on engineering judgment. The procedure is based on the use of the Fast Fourier Transform Based Method (FFTBM), already part of the Uncertainty Method based on the Accuracy Extrapolation (UMAE) and extensively used in several international frameworks. The FFTBM was originally developed to answer questions like 'For how long should improvements be added to the system thermal-hydraulic code model? How many simplifications can be introduced, and how can an objective comparison be conducted?'. The method, easy to understand, convenient to use and user independent, clearly indicates when a simulation needs to be improved. The procedure developed for characterizing the range of input uncertainty parameters involves the following main aspects: a) One single input parameter shall not be 'responsible' for the entire error |exp-calc|, except in exceptional situations to be evaluated case by case; b) The initial guesses for the Max and Min of the variation ranges are to be based on the usual (adopted) expertise; c) More than one experiment can be used per NPP and per scenario. Highly influential parameters are expected to be the same. The bounding ranges should be considered for the NPP uncertainty analysis; d) A database of suitable uncertainty input parameters can be created per NPP and per transient scenario. (authors)
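The FFTBM's core figure of merit is an average amplitude comparing the spectrum of the code-experiment error to the spectrum of the experimental signal; the sketch below assumes that standard definition, with invented signal shapes for illustration:

```python
import numpy as np

def fftbm_average_amplitude(exp_sig, calc_sig):
    """Average amplitude AA = sum|F(calc - exp)| / sum|F(exp)|.
    Smaller AA means better code-experiment agreement."""
    err_spec = np.abs(np.fft.rfft(calc_sig - exp_sig))
    exp_spec = np.abs(np.fft.rfft(exp_sig))
    return err_spec.sum() / exp_spec.sum()

t = np.linspace(0.0, 100.0, 512)
exp_sig = 10.0 + 5.0 * np.exp(-t / 30.0)       # "experimental" transient trend
good = exp_sig + 0.1 * np.sin(0.5 * t)         # small calculation discrepancy
poor = exp_sig + 1.5 * np.sin(0.5 * t)         # large calculation discrepancy

print(fftbm_average_amplitude(exp_sig, good),
      fftbm_average_amplitude(exp_sig, poor))
```

In the proposed procedure, a parameter's variation range would be narrowed or widened according to how much of this amplitude its perturbation alone can explain.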
Visual exploration of parameter influence on phylogenetic trees.
Hess, Martin; Bremm, Sebastian; Weissgraeber, Stephanie; Hamacher, Kay; Goesele, Michael; Wiemeyer, Josef; von Landesberger, Tatiana
2014-01-01
Evolutionary relationships between organisms are frequently derived as phylogenetic trees inferred from multiple sequence alignments (MSAs). The MSA parameter space is exponentially large, so tens of thousands of potential trees can emerge for each dataset. A proposed visual-analytics approach can reveal the parameters' impact on the trees. Given input trees created with different parameter settings, it hierarchically clusters the trees according to their structural similarity. The most important clusters of similar trees are shown together with their parameters. This view offers interactive parameter exploration and automatic identification of relevant parameters. Biologists applied this approach to real data of 16S ribosomal RNA and protein sequences of ion channels. It revealed which parameters affected the tree structures. This led to a more reliable selection of the best trees.
Optimum design of forging process parameters and preform shape under uncertainties
International Nuclear Information System (INIS)
Repalle, Jalaja; Grandhi, Ramana V.
2004-01-01
Forging is a highly complex non-linear process that is vulnerable to various uncertainties, such as variations in billet geometry, die temperature, material properties, workpiece and forging equipment positional errors and process parameters. A combination of these uncertainties could induce heavy manufacturing losses through premature die failure, final part geometric distortion and production risk. Identifying the sources of uncertainties, quantifying and controlling them will reduce risk in the manufacturing environment, which will minimize the overall cost of production. In this paper, various uncertainties that affect forging tool life and preform design are identified, and their cumulative effect on the forging process is evaluated. Since the forging process simulation is computationally intensive, the response surface approach is used to reduce time by establishing a relationship between the system performance and the critical process design parameters. Variability in system performance due to randomness in the parameters is computed by applying Monte Carlo Simulations (MCS) on generated Response Surface Models (RSM). Finally, a Robust Methodology is developed to optimize forging process parameters and preform shape. The developed method is demonstrated by applying it to an axisymmetric H-cross section disk forging to improve the product quality and robustness.
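The RSM-plus-MCS loop described above can be sketched as follows; the "simulation", its two inputs and the quadratic surrogate form are hypothetical stand-ins for the expensive forging model:

```python
import numpy as np

rng = np.random.default_rng(7)

def forging_sim(T_die, friction):
    """Stand-in for an expensive forging simulation (hypothetical response)."""
    return 100.0 - 0.05 * (T_die - 300.0)**2 / 100.0 \
        + 40.0 * friction * (1.0 - friction)

# 1) Small design of experiments on the expensive model
T = rng.uniform(250.0, 350.0, 30)
mu = rng.uniform(0.1, 0.4, 30)
y = forging_sim(T, mu)

# 2) Fit a quadratic response surface by least squares
A = np.column_stack([np.ones_like(T), T, mu, T**2, mu**2, T * mu])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# 3) Monte Carlo on the cheap surrogate instead of the full simulation
Ts = rng.normal(300.0, 10.0, 20_000)
mus = rng.normal(0.25, 0.03, 20_000)
A_mc = np.column_stack([np.ones_like(Ts), Ts, mus, Ts**2, mus**2, Ts * mus])
perf = A_mc @ coef

print(f"performance: {perf.mean():.1f} +/- {perf.std(ddof=1):.2f}")
```

The surrogate makes tens of thousands of Monte Carlo evaluations affordable where the full simulation would allow only dozens.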
Synchronization of chaotic systems with parameter uncertainties via variable structure control
International Nuclear Information System (INIS)
Etemadi, Shahram; Alasty, Aria; Salarieh, Hassan
2006-01-01
The Letter introduces a robust control design method to synchronize a pair of different uncertain chaotic systems. The technique is based on sliding-mode and variable structure control theories. A comparison of the proposed method with previous works is performed in simulations. It is shown that the proposed controller, while yielding a faster response, is able to overcome random uncertainties in all model parameters.
Synchronization of chaotic systems with parameter uncertainties via variable structure control
Energy Technology Data Exchange (ETDEWEB)
Etemadi, Shahram [Centre of Excellence in Design, Robotics and Automation (CEDRA), School of Mechanical Engineering, Sharif University of Technology, Tehran (Iran, Islamic Republic of); Alasty, Aria [Centre of Excellence in Design, Robotics and Automation (CEDRA), School of Mechanical Engineering, Sharif University of Technology, Tehran (Iran, Islamic Republic of)]. E-mail: aalasti@sharif.edu; Salarieh, Hassan [Centre of Excellence in Design, Robotics and Automation (CEDRA), School of Mechanical Engineering, Sharif University of Technology, Tehran (Iran, Islamic Republic of)
2006-08-28
The Letter introduces a robust control design method to synchronize a pair of different uncertain chaotic systems. The technique is based on sliding-mode and variable structure control theories. A comparison of the proposed method with previous works is performed in simulations. It is shown that the proposed controller, while yielding a faster response, is able to overcome random uncertainties in all model parameters.
Parameter estimation techniques and uncertainty in ground water flow model predictions
International Nuclear Information System (INIS)
Zimmerman, D.A.; Davis, P.A.
1990-01-01
Quantification of uncertainty in predictions of nuclear waste repository performance is a requirement of Nuclear Regulatory Commission regulations governing the licensing of proposed geologic repositories for high-level radioactive waste disposal. One of the major uncertainties in these predictions is in estimating the ground-water travel time of radionuclides migrating from the repository to the accessible environment. The cause of much of this uncertainty has been attributed to a lack of knowledge about the hydrogeologic properties that control the movement of radionuclides through the aquifers. A major reason for this lack of knowledge is the paucity of data that is typically available for characterizing complex ground-water flow systems. Because of this, considerable effort has been put into developing parameter estimation techniques that infer property values in regions where no measurements exist. Currently, no single technique has been shown to be superior or even consistently conservative with respect to predictions of ground-water travel time. This work was undertaken to compare a number of parameter estimation techniques and to evaluate how differences in the parameter estimates and the estimation errors are reflected in the behavior of the flow model predictions. That is, we wished to determine to what degree uncertainties in flow model predictions may be affected simply by the choice of parameter estimation technique used. 3 refs., 2 figs
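How the choice of estimation technique feeds into travel-time predictions can be illustrated with a Darcy-law toy model; both "techniques" are represented simply as lognormal conductivity estimates of differing spread, and every value below is assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Darcy travel time t = L * n / (K * i): path length L (m), porosity n,
# hydraulic conductivity K (m/s), hydraulic gradient i (all illustrative).
L_path, n_por, grad = 5_000.0, 0.2, 0.01

# Two hypothetical estimation techniques yield the same median K but
# different estimation spreads (sigma of the lognormal)
K_a = rng.lognormal(mean=np.log(1e-5), sigma=0.5, size=10_000)
K_b = rng.lognormal(mean=np.log(1e-5), sigma=1.0, size=10_000)

t_a = L_path * n_por / (K_a * grad)
t_b = L_path * n_por / (K_b * grad)

yr = 365.25 * 24 * 3600
print(f"5th-percentile travel time: A {np.percentile(t_a, 5)/yr:.0f} yr, "
      f"B {np.percentile(t_b, 5)/yr:.0f} yr")
```

Even with identical median conductivity, the wider-spread technique gives a markedly shorter (more conservative) lower-tail travel time, which is exactly the kind of technique-dependence the study examines.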
Nonlinear parameter estimation in inviscid compressible flows in presence of uncertainties
International Nuclear Information System (INIS)
Jemcov, A.; Mathur, S.
2004-01-01
The focus of this paper is on the formulation and solution of inverse problems of parameter estimation using algorithmic differentiation. The inverse problem formulated here seeks to determine the input parameters that minimize a least squares functional with respect to certain target data. The formulation allows for uncertainty in the target data by considering the least squares functional in a stochastic basis described by the covariance of the target data. Furthermore, to allow for robust design, the formulation also accounts for uncertainties in the input parameters. This is achieved using the method of propagation of uncertainties using the directional derivatives of the output parameters with respect to unknown parameters. The required derivatives are calculated simultaneously with the solution using generic programming exploiting the template and operator overloading features of the C++ language. The methodology described here is general and applicable to any numerical solution procedure for any set of governing equations but for the purpose of this paper we consider a finite volume solution of the compressible Euler equations. In particular, we illustrate the method for the case of supersonic flow in a duct with a wedge. The parameter to be determined is the inlet Mach number and the target data is the axial component of velocity at the exit of the duct. (author)
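The template/operator-overloading idea used in the paper's C++ implementation can be mimicked in a few lines of Python with dual numbers, computing an output and its derivative with respect to an input parameter in a single pass; the "solver" here is a hypothetical smooth function, not the compressible Euler equations:

```python
class Dual:
    """Minimal forward-mode AD value: val + der * eps, with eps**2 = 0."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__
    def __truediv__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val / o.val,
                    (self.der * o.val - self.val * o.der) / o.val**2)

# Toy "solver output": exit velocity as a smooth function of inlet Mach number
def exit_velocity(mach):
    return 300.0 * mach / (1.0 + 0.2 * mach * mach)

m = Dual(2.0, 1.0)    # seed derivative d(mach)/d(mach) = 1
u = exit_velocity(m)
print(u.val, u.der)   # output value and d(u)/d(mach), obtained together
```

This is the same mechanism that lets the inverse problem get the directional derivatives needed for both the least-squares minimization and the propagation of input uncertainties.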
Directory of Open Access Journals (Sweden)
Leonetti Davide
2018-01-01
Full Text Available Estimating and reducing uncertainty in fatigue test data analysis is a relevant task when assessing the reliability of a structural connection with respect to fatigue. Several statistical models have been proposed in the literature with the aim of representing the stress range vs. endurance trend of fatigue test data under constant amplitude loading and the scatter in the finite and infinite life regions. In order to estimate the safety level of the connection, the uncertainty related to the amount of information available also needs to be estimated using methods provided by statistical theory. Bayesian analysis is employed to reduce the uncertainty due to the often small amount of test data by introducing prior information on the parameters of the statistical model. In this work, the inference of fatigue test data belonging to cover-plated steel beams is presented. The uncertainty is estimated by making use of Bayesian and frequentist methods. The 5% quantile of the fatigue life is estimated by taking into account the uncertainty related to the sample size, for both a dataset containing few samples and one containing more data. The S-N curves resulting from the application of the employed methods are compared, and the effect of the reduction of uncertainty in the infinite life region is quantified.
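A frequentist sketch of how sample size drives the uncertainty of the 5% fatigue-life quantile; the normal model for log life, the data and the bootstrap below are illustrative assumptions, not the paper's Bayesian machinery:

```python
import numpy as np

rng = np.random.default_rng(11)

def q5_bootstrap(log_life, n_boot=2000):
    """Point estimate and bootstrap spread of the 5% quantile of log fatigue life,
    assuming log life is normally distributed."""
    est = log_life.mean() - 1.645 * log_life.std(ddof=1)
    boots = np.empty(n_boot)
    for i in range(n_boot):
        s = rng.choice(log_life, size=log_life.size, replace=True)
        boots[i] = s.mean() - 1.645 * s.std(ddof=1)
    return est, boots.std()

small = rng.normal(6.0, 0.25, 8)     # few specimens (log10 cycles, invented)
large = rng.normal(6.0, 0.25, 80)    # larger dataset, same population
est_s, sd_s = q5_bootstrap(small)
est_l, sd_l = q5_bootstrap(large)
print(f"n=8:  q5(logN) = {est_s:.2f} +/- {sd_s:.2f}")
print(f"n=80: q5(logN) = {est_l:.2f} +/- {sd_l:.2f}")
```

The much larger spread for the small dataset is the sample-size uncertainty that the Bayesian prior is used to reduce.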
International Nuclear Information System (INIS)
Bernard, D.
2001-12-01
The aim of this thesis was to evaluate uncertainties in key neutronic parameters of slab reactors. These uncertainties have many origins: technological origins for fabrication parameters and physical origins for nuclear data. First, each contribution to the uncertainties is calculated and, finally, an uncertainty factor is associated with each key slab parameter, such as reactivity, isothermal reactivity coefficient, control rod efficiency, power form factor before irradiation, and lifetime. These uncertainty factors were computed by Generalized Perturbation Theory at step 0 and by direct calculations for irradiation problems. One application of neutronic conformity concerned the adjustment of precision targets for fabrication and nuclear data. Statistical (uncertainties) and deterministic (deviations) approaches were studied. The uncertainties in the key slab parameters were thereby reduced and nuclear performance optimized. (author)
Network optimization including gas lift and network parameters under subsurface uncertainty
Energy Technology Data Exchange (ETDEWEB)
Schulze-Riegert, R.; Baffoe, J.; Pajonk, O. [SPT Group GmbH, Hamburg (Germany); Badalov, H.; Huseynov, S. [Technische Univ. Clausthal, Clausthal-Zellerfeld (Germany). ITE; Trick, M. [SPT Group, Calgary, AB (Canada)
2013-08-01
Optimization of oil and gas field production systems poses a great challenge to field development due to complex and multiple interactions between various operational design parameters and subsurface uncertainties. Conventional analytical methods are capable of finding local optima based on single deterministic models. They are less applicable for efficiently generating alternative design scenarios in a multi-objective context. Practical implementations of robust optimization workflows integrate the evaluation of alternative design scenarios and multiple realizations of subsurface uncertainty descriptions. Production or economic performance indicators such as NPV (Net Present Value) are linked to a risk-weighted objective function definition to guide the optimization processes. This work focuses on an integrated workflow using a reservoir-network simulator coupled to an optimization framework. The work will investigate the impact of design parameters while considering the physics of the reservoir, wells, and surface facilities. Subsurface uncertainties are described by well parameters such as inflow performance. Experimental design methods are used to investigate parameter sensitivities and interactions. Optimization methods are used to find optimal design parameter combinations which improve key performance indicators of the production network system. The proposed workflow will be applied to a representative oil reservoir coupled to a network which is modelled by an integrated reservoir-network simulator. Gas-lift will be included as an explicit measure to improve production. An objective function will be formulated for the net present value of the integrated system including production revenue and facility costs. Facility and gas lift design parameters are tuned to maximize NPV. Well inflow performance uncertainties are introduced with an impact on gas lift performance. Resulting variances on NPV are identified as a risk measure for the optimized system design. A
Hsu, Wei-Ting; Loh, Chin-Hsiung; Chao, Shu-Hsien
2015-03-01
The stochastic subspace identification method (SSI) has been proven to be an efficient algorithm for the identification of linear time-invariant systems using multivariate measurements. Generally, the modal parameters estimated through SSI may be afflicted with statistical uncertainty, e.g. undefined measurement noises, non-stationary excitation, a finite number of data samples, etc. Therefore, the identified results are subject to variance errors. Accordingly, the concept of the stabilization diagram can help users to identify the correct model, i.e. through removing the spurious modes. Modal parameters are estimated at successive model orders, where the physical modes of the system are extracted and separated from the spurious modes. Besides, an uncertainty computation scheme was derived for the calculation of uncertainty bounds for modal parameters at a given model order. The uncertainty bounds of damping ratios are particularly interesting, as damping ratios are difficult to estimate. In this paper, an automated stochastic subspace identification algorithm is addressed. First, the identification of modal parameters through covariance-driven stochastic subspace identification from output-only measurements is discussed. A systematic investigation of the criteria for the stabilization diagram is presented. Secondly, an automated algorithm for post-processing the stabilization diagram is demonstrated. Finally, the computation of uncertainty bounds for each mode at all model orders in the stabilization diagram is utilized to determine system natural frequencies and damping ratios. A demonstration of this study on the system identification of a three-span steel bridge under operational conditions is presented. It is shown that the proposed operational procedure for automated covariance-driven stochastic subspace identification can enhance robustness and reliability in structural health monitoring.
Energy Technology Data Exchange (ETDEWEB)
Kock, A.
1996-05-01
The objectives of this research are: (1) to calculate and compare off site doses from atmospheric tritium releases at the Savannah River Site using monthly versus 5 year meteorological data and annual source terms, including additional seasonal and site specific parameters not included in present annual assessments; and (2) to calculate the range of the above dose estimates based on distributions in model parameters given by uncertainty estimates found in the literature. Consideration will be given to the sensitivity of parameters given in former studies.
International Nuclear Information System (INIS)
Kock, A.
1996-05-01
The objectives of this research are: (1) to calculate and compare off site doses from atmospheric tritium releases at the Savannah River Site using monthly versus 5 year meteorological data and annual source terms, including additional seasonal and site specific parameters not included in present annual assessments; and (2) to calculate the range of the above dose estimates based on distributions in model parameters given by uncertainty estimates found in the literature. Consideration will be given to the sensitivity of parameters given in former studies
Uncertainties on hydrocarbon exploration assessments in both the absence and presence of optioning
International Nuclear Information System (INIS)
Lerche, I.
1998-01-01
For hydrocarbon exploration opportunities a decision tree evaluation including variance in expected value leads to an extra uncertainty on the quality and worth of expected values as a decision device, due to both intrinsic uncertainties in success probability, assessed gains and assessed costs, and to the fact that the expected value is not one of the realizable outcomes. This paper shows how these uncertainty factors can be properly taken into account to provide a revised assessment of worth. In addition, a similar sense of logic prevails when options are considered for an opportunity. The uncertainty and success probability for an optional opportunity are also assessed in terms of the volatility of the maximum option worth. (author)
International Nuclear Information System (INIS)
Chiba, Go; Tsuji, Masashi; Narabayashi, Tadashi
2014-01-01
In order to properly quantify fission reactor neutronics parameter uncertainties, we have to use covariance data and sensitivity profiles consistently. In the present paper, we establish two consistent methodologies for uncertainty quantification: a self-shielded cross section-based consistent methodology and an infinitely-diluted cross section-based consistent methodology. With these methodologies and the covariance data of uranium-238 nuclear data given in JENDL-3.3, we quantify uncertainties of infinite neutron multiplication factors of light water reactor and fast reactor fuel cells. While an inconsistent methodology gives results which depend on the energy group structure of neutron flux and neutron-nuclide reaction cross section representation, both the consistent methodologies give fair results with no such dependences.
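Once a sensitivity profile and a covariance matrix are expressed on the same energy-group structure, the parameter uncertainty follows from the usual "sandwich rule"; the 3-group numbers below are invented for illustration and are not the JENDL-3.3 data:

```python
import numpy as np

# Illustrative 3-group relative sensitivity profile of k_inf to a cross section
# (assumed values, not actual JENDL-3.3 / U-238 data)
s = np.array([-0.08, -0.15, -0.04])

# Assumed relative covariance matrix of that cross section (3 energy groups)
C = np.array([[0.0004, 0.0002, 0.0000],
              [0.0002, 0.0009, 0.0001],
              [0.0000, 0.0001, 0.0016]])

# Sandwich rule: relative variance of k_inf = s^T C s
var_k = s @ C @ s
print(f"relative k_inf uncertainty = {np.sqrt(var_k):.4%}")
```

Consistency in the sense of the paper means that s and C must refer to the same cross-section representation (self-shielded or infinitely diluted); mixing the two is what introduces the group-structure dependence.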
Directory of Open Access Journals (Sweden)
GO CHIBA
2014-06-01
Full Text Available In order to properly quantify fission reactor neutronics parameter uncertainties, we have to use covariance data and sensitivity profiles consistently. In the present paper, we establish two consistent methodologies for uncertainty quantification: a self-shielded cross section-based consistent methodology and an infinitely-diluted cross section-based consistent methodology. With these methodologies and the covariance data of uranium-238 nuclear data given in JENDL-3.3, we quantify uncertainties of infinite neutron multiplication factors of light water reactor and fast reactor fuel cells. While an inconsistent methodology gives results which depend on the energy group structure of neutron flux and neutron-nuclide reaction cross section representation, both the consistent methodologies give fair results with no such dependences.
Indian Academy of Sciences (India)
To reflect this uncertainty in the climate scenarios, the use of AOGCMs that explicitly simulate the carbon cycle and chemistry of all the substances are needed. The Hadley Centre has developed a version of the climate model that allows the effect of climate change on the carbon cycle and its feedback into climate, to be ...
International Nuclear Information System (INIS)
Silva, T.A. da
1988-01-01
The comparison between the uncertainty methods recommended by the International Atomic Energy Agency (IAEA) and the International Committee for Weights and Measures (CIPM) is shown for the calibration of clinical dosimeters in a Secondary Standard Dosimetry Laboratory (SSDL). (C.G.C.) [pt
International Nuclear Information System (INIS)
Martorell, S.; Martón, I.; Villamizar, M.; Sánchez, A.I.; Carlos, S.
2014-01-01
This paper presents an approach, and an example of its application, for the evaluation of the risk impact of changes to Completion Times within the Licensing Basis of a Nuclear Power Plant, based on the use of Probabilistic Risk Assessment and addressing the identification, treatment and analysis of uncertainties in an integrated manner. It allows full development of a three-tiered approach (Tiers 1-3) following the principles of risk-informed decision-making accounting for uncertainties as proposed by many regulators. The Completion Time is the maximum outage time a safety-related equipment is allowed to be down, e.g. for corrective maintenance, which is established within the Limiting Conditions for Operation included in the Technical Specifications for operation of a Nuclear Power Plant. The case study focuses on a Completion Time change for the Accumulators System of a Nuclear Power Plant using a level 1 PRA, and considers several sources of model and parameter uncertainties. The results obtained show that the risk impact of the proposed CT change, including both types of epistemic uncertainties, is small compared with the current safety goals of concern to Tier 1. However, as concerns Tiers 2 and 3, the results show how the use of some traditional and uncertainty importance measures helps in identifying highly risky configurations that should be avoided in NPP technical specifications no matter the duration of the CT (Tier 2), and other configurations that could take part in a configuration risk management program (Tier 3). - Highlights: • New approach for evaluation of risk impact of changes to Completion Times. • Integrated treatment and analysis of model and parameter uncertainties. • PSA based application to support risk-informed decision-making. • Measures of importance for identification of risky configurations. • Management of important safety issues to accomplish safety goals
Energy Technology Data Exchange (ETDEWEB)
WHEELER, TIMOTHY A.; WYSS, GREGORY D.; HARPER, FREDERICK T.
2000-11-01
Uncertainty distributions for specific parameters of the Cassini General Purpose Heat Source Radioisotope Thermoelectric Generator (GPHS-RTG) Final Safety Analysis Report consequence risk analysis were revised and updated. The revisions and updates were done for all consequence parameters for which relevant information exists from the joint project on Probabilistic Accident Consequence Uncertainty Analysis by the United States Nuclear Regulatory Commission and the Commission of European Communities.
Calculation uncertainty of distribution-like parameters in NPP of PAKS
International Nuclear Information System (INIS)
Szecsenyi, Zsolt; Korpas, Layos
2000-01-01
From the reactor-physics point of view, there were two important events at the Paks Nuclear Power Plant this year: Russian-type profiled assemblies were loaded into Paks Unit 3, and a new limitation system was introduced on the same unit. Both events required a number of problems to be solved. One of these problems was the determination of the uncertainty of the quantities of the new limitation, considering the fabrication uncertainties of the profiled assembly. The purpose of the uncertainty determination is to guarantee, at the 99.9% level, the avoidance of fuel failure. In this paper the principles of the determination of calculation accuracy, the applied methods and the obtained results are presented for distribution-like parameters. A few elements of the method have been presented at earlier symposia, so in this paper the whole method is just outlined. For example, the GPT method was presented in the following paper: Uncertainty analysis of pin-wise power distribution of a WWER-440 assembly considering fabrication uncertainties. Finally, in the summary of this paper, additional intrinsic opportunities of the method are presented. (Authors)
Are subject-specific musculoskeletal models robust to the uncertainties in parameter identification?
Directory of Open Access Journals (Sweden)
Giordano Valente
Full Text Available Subject-specific musculoskeletal modeling can be applied to study musculoskeletal disorders, allowing inclusion of personalized anatomy and properties. Independent of the tools used for model creation, there are unavoidable uncertainties associated with parameter identification, whose effect on model predictions is still not fully understood. The aim of the present study was to analyze the sensitivity of subject-specific model predictions (i.e., joint angles, joint moments, muscle and joint contact forces) during walking to the uncertainties in the identification of body landmark positions, maximum muscle tension and musculotendon geometry. To this aim, we created an MRI-based musculoskeletal model of the lower limbs, defined as a 7-segment, 10-degree-of-freedom articulated linkage, actuated by 84 musculotendon units. We then performed a Monte Carlo probabilistic analysis perturbing model parameters according to their uncertainty, and solving a typical inverse dynamics and static optimization problem using 500 models that included the different sets of perturbed variable values. Model creation and gait simulations were performed using freely available software that we developed to standardize the process of model creation, integrate with OpenSim and create probabilistic simulations of movement. The uncertainties in input variables had a moderate effect on model predictions, as muscle and joint contact forces showed a maximum standard deviation of 0.3 times body weight and a maximum range of 2.1 times body weight. In addition, the output variables significantly correlated with few input variables (up to 7 out of 312) across the gait cycle, including the geometry definition of larger muscles and the maximum muscle tension in limited gait portions. Although we found subject-specific models not to be markedly sensitive to parameter identification, researchers should be aware of the model precision in relation to the intended application. In fact, force
The sensitivity of flowline models of tidewater glaciers to parameter uncertainty
Directory of Open Access Journals (Sweden)
E. M. Enderlin
2013-10-01
Full Text Available Depth-integrated (1-D) flowline models have been widely used to simulate fast-flowing tidewater glaciers and predict change, because the continuous grounding-line tracking, high horizontal resolution, and physically based calving criterion that are essential to realistic modeling of tidewater glaciers can easily be incorporated into the models while maintaining high computational efficiency. As with all models, the values for parameters describing ice rheology and basal friction must be assumed and/or tuned based on observations. For prognostic studies, these parameters are typically tuned so that the glacier matches observed thickness and speeds at an initial state, to which a perturbation is applied. While it is well known that ice flow models are sensitive to these parameters, the sensitivity of tidewater glacier models has not been systematically investigated. Here we investigate the sensitivity of such flowline models of outlet glacier dynamics to uncertainty in three key parameters that influence a glacier's resistive stress components. We find that, within typical observational uncertainty, similar initial (i.e., steady-state) glacier configurations can be produced with substantially different combinations of parameter values, leading to differing transient responses after a perturbation is applied. In cases where the glacier is initially grounded near flotation across a basal over-deepening, as typically observed for rapidly changing glaciers, these differences can be dramatic owing to the threshold of stability imposed by the flotation criterion. The simulated transient response is particularly sensitive to the parameterization of ice rheology: differences in ice temperature of ~ 2 °C can determine whether the glaciers thin to flotation and retreat unstably or remain grounded on a marine shoal. Due to the highly non-linear dependence of tidewater glaciers on model parameters, we recommend that their predictions are accompanied by
Quantification of parameter uncertainty for robust control of shape memory alloy bending actuators
International Nuclear Information System (INIS)
Crews, John H; McMahan, Jerry A; Smith, Ralph C; Hannen, Jennifer C
2013-01-01
In this paper, we employ Bayesian parameter estimation techniques to derive gains for robust control of smart materials. Specifically, we demonstrate the feasibility of utilizing parameter uncertainty estimation provided by Markov chain Monte Carlo (MCMC) methods to determine controller gains for a shape memory alloy bending actuator. We treat the parameters in the equations governing the actuator’s temperature dynamics as uncertain and use the MCMC method to construct the probability densities for these parameters. The densities are then used to derive parameter bounds for robust control algorithms. For illustrative purposes, we construct a sliding mode controller based on the homogenized energy model and experimentally compare its performance to a proportional-integral controller. While sliding mode control is used here, the techniques described in this paper provide a useful starting point for many robust control algorithms. (paper)
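The workflow this abstract describes, turning MCMC-derived parameter densities into bounds for a robust controller, can be sketched as follows. The first-order temperature model, parameter values, and noise level below are illustrative assumptions, not the paper's homogenized energy model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy first-order temperature dynamics: dT/dt = -h*(T - T_amb) + u,
# with h the uncertain heat-transfer parameter (assumed, for illustration)
def simulate(h, u=1.0, T_amb=20.0, T0=20.0, dt=0.1, n=100):
    T = np.empty(n)
    T[0] = T0
    for k in range(1, n):
        T[k] = T[k - 1] + dt * (-h * (T[k - 1] - T_amb) + u)
    return T

# Synthetic noisy "measurements" generated from a known true parameter
h_true = 0.5
data = simulate(h_true) + rng.normal(0.0, 0.05, 100)

def log_post(h, sigma=0.05):
    if h <= 0:
        return -np.inf                     # flat prior on h > 0
    r = data - simulate(h)
    return -0.5 * np.sum(r ** 2) / sigma ** 2

# Random-walk Metropolis sampling of the posterior density of h
h, lp = 0.3, log_post(0.3)
chain = []
for _ in range(5000):
    h_prop = h + rng.normal(0.0, 0.02)
    lp_prop = log_post(h_prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        h, lp = h_prop, lp_prop
    chain.append(h)
chain = np.array(chain[1000:])             # discard burn-in

# Parameter bounds for the robust controller: 95% credible interval
lo_b, hi_b = np.quantile(chain, [0.025, 0.975])
print(f"h bounds: [{lo_b:.3f}, {hi_b:.3f}]")
```

A robust controller (sliding mode, in the paper's case) would then be designed to remain stable for any parameter value inside these bounds.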
International Nuclear Information System (INIS)
Miller, G.; Martz, H.; Bertelli, L.; Melo, D.
2008-01-01
A simplified biokinetic model for 137Cs has six parameters representing transfer of material to and from various compartments. Using a Bayesian analysis, the joint probability distribution of these six parameters is determined empirically for two cases with a substantial amount of bioassay data. The distribution is found to be multivariate log-normal. Correlations between different parameters are obtained. The method utilises a fairly large number of pre-determined forward biokinetic calculations, whose results are stored in interpolation tables. Four different methods to sample the multidimensional parameter space with a limited number of samples are investigated: random sampling, stratified sampling, Latin hypercube sampling with a uniform distribution of parameters, and importance sampling using a lognormal distribution that approximates the posterior distribution. The importance sampling method gives a much smaller sampling uncertainty. No sampling-method-dependent differences are perceptible among the uniform distribution methods. (authors)
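The sampling comparison reported here can be illustrated with a minimal numpy sketch contrasting plain random sampling with Latin hypercube sampling. The two-parameter toy "forward calculation" is an assumption for illustration, not the biokinetic model itself.

```python
import numpy as np

rng = np.random.default_rng(1)

def model(p):
    # stand-in "forward biokinetic calculation" on two transfer-rate
    # parameters (an assumption for illustration only)
    return np.exp(-p[:, 0]) + 0.5 * np.exp(-p[:, 1])

def random_sample(n, d):
    return rng.uniform(size=(n, d))

def latin_hypercube(n, d):
    # one point per equal-probability stratum in each dimension,
    # with strata paired at random across dimensions
    u = (rng.uniform(size=(n, d)) + np.arange(n)[:, None]) / n
    for j in range(d):
        rng.shuffle(u[:, j])
    return u

def sampling_sd(sampler, n=50, d=2, reps=200):
    # spread of the estimated mean over repeated sampling designs
    means = [model(sampler(n, d)).mean() for _ in range(reps)]
    return float(np.std(means))

sd_rand = sampling_sd(random_sample)
sd_lhs = sampling_sd(latin_hypercube)
print(f"random: {sd_rand:.2e}, LHS: {sd_lhs:.2e}")
```

Stratifying each dimension removes most of the variance for this smooth, nearly additive response, mirroring the abstract's finding that the choice of sampling design changes the sampling uncertainty.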
Sutton, Jonathan E.; Guo, Wei; Katsoulakis, Markos A.; Vlachos, Dionisios G.
2016-04-01
Kinetic models based on first principles are becoming commonplace in heterogeneous catalysis because of their ability to interpret experimental data, identify the rate-controlling step, guide experiments and predict novel materials. To overcome the tremendous computational cost of estimating parameters of complex networks on metal catalysts, approximate quantum mechanical calculations are employed that render models potentially inaccurate. Here, by introducing correlative global sensitivity analysis and uncertainty quantification, we show that neglecting correlations in the energies of species and reactions can lead to an incorrect identification of influential parameters and key reaction intermediates and reactions. We rationalize why models often underpredict reaction rates and show that, despite the uncertainty being large, the method can, in conjunction with experimental data, identify influential missing reaction pathways and provide insights into the catalyst active site and the kinetic reliability of a model. The method is demonstrated in ethanol steam reforming for hydrogen production for fuel cells.
Characterization of uncertainty and sensitivity of model parameters is an essential and often overlooked facet of hydrological modeling. This paper introduces an algorithm called MOESHA that combines input parameter sensitivity analyses with a genetic algorithm calibration routin...
Monte Carlo Simulation of Influence of Input Parameters Uncertainty on Output Data
International Nuclear Information System (INIS)
Sobek, Lukas
2010-01-01
Input parameters of a complex system in probabilistic simulation are treated by means of probability density functions (PDFs). The results of the simulation then also have a probabilistic character. Monte Carlo simulation is widely used to obtain predictions concerning the probability of risk. The Monte Carlo method was performed to calculate histograms of the PDF of the release rate given uncertainty in the distribution coefficients of the radionuclides 135Cs and 235U.
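The propagation described above can be sketched in a few lines: sample an uncertain distribution coefficient Kd, push each sample through a simple sorption-limited release model, and histogram the output. The model form and all numbers below are invented for illustration and are not those of the cited 135Cs/235U study.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical illustration: a sorption-limited release rate with the
# distribution coefficient Kd treated as a log-normal input
n = 100_000
kd = rng.lognormal(mean=np.log(0.1), sigma=1.0, size=n)   # m^3/kg

rho_b, theta = 2000.0, 0.3      # bulk density (kg/m^3), porosity (assumed)
retardation = 1.0 + (rho_b / theta) * kd
release_rate = 1.0 / retardation    # relative rate; more sorption, slower

# The Monte Carlo output is summarised as a histogram of its PDF
hist, edges = np.histogram(release_rate, bins=50, density=True)
print(f"median relative release rate: {np.median(release_rate):.2e}")
```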
Uncertainty reevaluation of T/H parameters of HANARO core design
Energy Technology Data Exchange (ETDEWEB)
Kim, Hark Rho; Park, Cheol; Kim, Heo Nil; Chae, Hee Taek
1999-03-01
The HANARO core was designed by a statistical thermal design method generally applied to power plant design. However, reevaluation of the core thermal margin, reflecting design changes as well as experience gained through commissioning and operation, is necessary for safe operation of the reactor. To this end, the data for T/H design parameters were revised and their uncertainties were reevaluated. (Author). 30 refs., 7 figs.
Parameter sensitivity and uncertainty analysis for a storm surge and wave model
Directory of Open Access Journals (Sweden)
L. A. Bastidas
2016-09-01
Full Text Available Development and simulation of synthetic hurricane tracks is a common methodology used to estimate hurricane hazards in the absence of empirical coastal surge and wave observations. Such methods typically rely on numerical models to translate stochastically generated hurricane wind and pressure forcing into coastal surge and wave estimates. The model output uncertainty associated with selection of appropriate model parameters must therefore be addressed. The computational overburden of probabilistic surge hazard estimates is exacerbated by the high dimensionality of numerical surge and wave models. We present a model parameter sensitivity analysis of the Delft3D model for the simulation of hazards posed by Hurricane Bob (1991) utilizing three theoretical wind distributions (NWS23, modified Rankine, and Holland). The sensitive model parameters (of 11 total considered) include wind drag, the depth-induced breaking γB, and the bottom roughness. Several parameters show no sensitivity (threshold depth, eddy viscosity, wave triad parameters, and depth-induced breaking αB) and can therefore be excluded to reduce the computational overburden of probabilistic surge hazard estimates. The sensitive model parameters also demonstrate a large number of interactions between parameters and a nonlinear model response. While model outputs showed sensitivity to several parameters, the ability of these parameters to act as tuning parameters for calibration is somewhat limited, as proper model calibration is strongly reliant on accurate wind and pressure forcing data. A comparison of the model performance with forcings from the different wind models is also presented.
Mockler, E. M.; Chun, K. P.; Sapriza-Azuri, G.; Bruen, M.; Wheater, H. S.
2016-11-01
Predictions of river flow dynamics provide vital information for many aspects of water management including water resource planning, climate adaptation, and flood and drought assessments. Many of the subjective choices that modellers make including model and criteria selection can have a significant impact on the magnitude and distribution of the output uncertainty. Hydrological modellers are tasked with understanding and minimising the uncertainty surrounding streamflow predictions before communicating the overall uncertainty to decision makers. Parameter uncertainty in conceptual rainfall-runoff models has been widely investigated, and model structural uncertainty and forcing data have been receiving increasing attention. This study aimed to assess uncertainties in streamflow predictions due to forcing data and the identification of behavioural parameter sets in 31 Irish catchments. By combining stochastic rainfall ensembles and multiple parameter sets for three conceptual rainfall-runoff models, an analysis of variance model was used to decompose the total uncertainty in streamflow simulations into contributions from (i) forcing data, (ii) identification of model parameters and (iii) interactions between the two. The analysis illustrates that, for our subjective choices, hydrological model selection had a greater contribution to overall uncertainty, while performance criteria selection influenced the relative intra-annual uncertainties in streamflow predictions. Uncertainties in streamflow predictions due to the method of determining parameters were relatively lower for wetter catchments, and more evenly distributed throughout the year when the Nash-Sutcliffe Efficiency of logarithmic values of flow (lnNSE) was the evaluation criterion.
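The variance decomposition this abstract applies — splitting total streamflow uncertainty into forcing, parameter, and interaction contributions — can be sketched for a hypothetical crossed design. All effect sizes below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical crossed design: one simulated streamflow value for each
# combination of forcing ensemble member (factor A) and behavioural
# parameter set (factor B)
nA, nB = 10, 20
a = rng.normal(0.0, 1.0, nA)                 # forcing-data effects
b = rng.normal(0.0, 0.5, nB)                 # parameter-set effects
y = a[:, None] + b[None, :] + rng.normal(0.0, 0.2, (nA, nB))

# Classical two-way ANOVA sums of squares
grand = y.mean()
ss_A = nB * np.sum((y.mean(axis=1) - grand) ** 2)
ss_B = nA * np.sum((y.mean(axis=0) - grand) ** 2)
ss_tot = np.sum((y - grand) ** 2)
ss_int = ss_tot - ss_A - ss_B                # interaction + residual

for name, ss in (("forcing", ss_A), ("parameters", ss_B),
                 ("interaction", ss_int)):
    print(f"{name}: {ss / ss_tot:.1%} of total uncertainty")
```

In the study itself the same decomposition is repeated per month and per catchment, which is how the intra-annual and wet/dry patterns emerge.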
International Nuclear Information System (INIS)
Sakai, Ryutaro; Munakata, Masahiro; Ohoka, Masao; Kameya, Hiroshi
2009-11-01
In the safety assessment for geological disposal of radioactive waste, it is important to develop a methodology for long-term estimation of regional groundwater flow, from data acquisition through numerical analyses. The uncertainties associated with estimation of regional groundwater flow include those concerning parameters and those concerning hydrogeological evolution. Parameter uncertainties include measurement errors and heterogeneity. The authors discussed the uncertainties of hydraulic conductivity as a significant parameter for regional groundwater flow analysis. This study suggests that hydraulic conductivities of rock mass are controlled by rock characteristics such as fractures and porosity and by test conditions such as hydraulic gradient, water quality and water temperature, and that variations of more than a factor of ten in hydraulic conductivity arise from differences in test conditions (e.g. hydraulic gradient) or in rock type (e.g. fractures, porosity). In addition, this study demonstrated that confining pressure changes caused by uplift and subsidence, and changes of hydraulic gradient under the long-term evolution of the hydrogeological environment, could produce variations of more than a factor of ten in hydraulic conductivity. It was also shown that the effect of water quality change on hydraulic conductivity is not negligible and that the replacement of fresh water and saline water caused by sea level change could change current hydraulic conductivities by a factor of 0.6 in the case of the Horonobe site. (author)
Directory of Open Access Journals (Sweden)
Koen Degeling
2017-12-01
Full Text Available Abstract Background Parametric distributions based on individual patient data can be used to represent both stochastic and parameter uncertainty. Although general guidance is available on how parameter uncertainty should be accounted for in probabilistic sensitivity analysis, there is no comprehensive guidance on reflecting parameter uncertainty in the (correlated) parameters of distributions used to represent stochastic uncertainty in patient-level models. This study aims to provide this guidance by proposing appropriate methods and illustrating the impact of this uncertainty on modeling outcomes. Methods Two approaches, 1) using non-parametric bootstrapping and 2) using multivariate Normal distributions, were applied in a simulation and case study. The approaches were compared based on point-estimates and distributions of time-to-event and health economic outcomes. To assess sample size impact on the uncertainty in these outcomes, sample size was varied in the simulation study and subgroup analyses were performed for the case-study. Results Accounting for parameter uncertainty in distributions that reflect stochastic uncertainty substantially increased the uncertainty surrounding health economic outcomes, illustrated by larger confidence ellipses surrounding the cost-effectiveness point-estimates and different cost-effectiveness acceptability curves. Although both approaches performed similarly for larger sample sizes (i.e. n = 500), the second approach was more sensitive to extreme values for small sample sizes (i.e. n = 25), yielding infeasible modeling outcomes. Conclusions Modelers should be aware that parameter uncertainty in distributions used to describe stochastic uncertainty needs to be reflected in probabilistic sensitivity analysis, as it could substantially impact the total amount of uncertainty surrounding health economic outcomes. If feasible, the bootstrap approach is recommended to account for this uncertainty.
Degeling, Koen; IJzerman, Maarten J; Koopman, Miriam; Koffijberg, Hendrik
2017-12-15
Parametric distributions based on individual patient data can be used to represent both stochastic and parameter uncertainty. Although general guidance is available on how parameter uncertainty should be accounted for in probabilistic sensitivity analysis, there is no comprehensive guidance on reflecting parameter uncertainty in the (correlated) parameters of distributions used to represent stochastic uncertainty in patient-level models. This study aims to provide this guidance by proposing appropriate methods and illustrating the impact of this uncertainty on modeling outcomes. Two approaches, 1) using non-parametric bootstrapping and 2) using multivariate Normal distributions, were applied in a simulation and case study. The approaches were compared based on point-estimates and distributions of time-to-event and health economic outcomes. To assess sample size impact on the uncertainty in these outcomes, sample size was varied in the simulation study and subgroup analyses were performed for the case-study. Accounting for parameter uncertainty in distributions that reflect stochastic uncertainty substantially increased the uncertainty surrounding health economic outcomes, illustrated by larger confidence ellipses surrounding the cost-effectiveness point-estimates and different cost-effectiveness acceptability curves. Although both approaches performed similarly for larger sample sizes (i.e. n = 500), the second approach was more sensitive to extreme values for small sample sizes (i.e. n = 25), yielding infeasible modeling outcomes. Modelers should be aware that parameter uncertainty in distributions used to describe stochastic uncertainty needs to be reflected in probabilistic sensitivity analysis, as it could substantially impact the total amount of uncertainty surrounding health economic outcomes. If feasible, the bootstrap approach is recommended to account for this uncertainty.
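A minimal sketch of the recommended bootstrap approach, assuming exponentially distributed time-to-event data so the maximum-likelihood fit has a closed form (the study itself considers general parametric survival distributions). The data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical individual-patient time-to-event data (months)
times = rng.exponential(scale=12.0, size=100)

# Non-parametric bootstrap: refit the parametric survival distribution to
# resampled patients, carrying parameter uncertainty into the PSA
boot_scales = []
for _ in range(2000):
    resample = rng.choice(times, size=times.size, replace=True)
    boot_scales.append(resample.mean())   # MLE of the exponential scale
boot_scales = np.array(boot_scales)

# Each PSA iteration would draw one bootstrapped parameter set, then
# simulate patient-level (stochastic) outcomes from that distribution
lo, hi = np.quantile(boot_scales, [0.025, 0.975])
print(f"mean survival {times.mean():.1f}, 95% CI ({lo:.1f}, {hi:.1f})")
```

Because each bootstrap replicate refits every (possibly correlated) distribution parameter jointly, correlations between parameters are preserved without assuming multivariate normality.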
Wang, S.; Huang, G. H.; Baetz, B. W.; Huang, W.
2015-11-01
This paper presents a polynomial chaos ensemble hydrologic prediction system (PCEHPS) for an efficient and robust uncertainty assessment of model parameters and predictions, in which possibilistic reasoning is infused into probabilistic parameter inference with simultaneous consideration of randomness and fuzziness. The PCEHPS is developed through a two-stage factorial polynomial chaos expansion (PCE) framework, which consists of an ensemble of PCEs to approximate the behavior of the hydrologic model, significantly speeding up the exhaustive sampling of the parameter space. Multiple hypothesis testing is then conducted to construct an ensemble of reduced-dimensionality PCEs with only the most influential terms, which is meaningful for achieving uncertainty reduction and further acceleration of parameter inference. The PCEHPS is applied to the Xiangxi River watershed in China to demonstrate its validity and applicability. A detailed comparison between the HYMOD hydrologic model, the ensemble of PCEs, and the ensemble of reduced PCEs is performed in terms of accuracy and efficiency. Results reveal temporal and spatial variations in parameter sensitivities due to the dynamic behavior of hydrologic systems, and the effects (magnitude and direction) of parametric interactions depending on different hydrological metrics. The case study demonstrates that the PCEHPS is capable not only of capturing both expert knowledge and probabilistic information in the calibration process, but also of achieving a more than 10-fold speed-up relative to the hydrologic model without compromising the predictive accuracy.
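The core idea of replacing an expensive simulator with a polynomial chaos surrogate can be sketched in one dimension with numpy's probabilists' Hermite polynomials. The toy "simulator" and the Gaussian parameter are assumptions; the PCEHPS itself builds an ensemble of multivariate PCEs around HYMOD.

```python
import numpy as np
from numpy.polynomial import hermite_e as He

rng = np.random.default_rng(6)

def model(theta):
    # stand-in for an expensive hydrologic simulator with one
    # uncertain parameter (an assumption for illustration)
    return np.sin(theta) + 0.3 * theta ** 2

# Fit a degree-4 chaos expansion in probabilists' Hermite polynomials,
# which are orthogonal with respect to theta ~ N(0, 1)
theta_train = rng.normal(size=200)
coeffs = He.hermefit(theta_train, model(theta_train), deg=4)

# The cheap surrogate then replaces the simulator when exhaustively
# sampling the parameter space
theta_mc = rng.normal(size=100_000)
approx = He.hermeval(theta_mc, coeffs)
exact = model(theta_mc)
err = np.max(np.abs(approx - exact)[np.abs(theta_mc) < 2.0])
print(f"max surrogate error for |theta| < 2: {err:.3f}")
```

Evaluating the fitted polynomial costs microseconds per sample, which is where the reported order-of-magnitude speed-up over the full model comes from.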
Sensitivity of Asteroid Impact Risk to Uncertainty in Asteroid Properties and Entry Parameters
Wheeler, Lorien; Mathias, Donovan; Dotson, Jessie L.; NASA Asteroid Threat Assessment Project
2017-10-01
A central challenge in assessing the threat posed by asteroids striking Earth is the large amount of uncertainty inherent throughout all aspects of the problem. Many asteroid properties are not well characterized and can range widely from strong, dense, monolithic irons to loosely bound, highly porous rubble piles. Even for an object of known properties, the specific entry velocity, angle, and impact location can swing the potential consequence from no damage to causing millions of casualties. Due to the extreme rarity of large asteroid strikes, there are also large uncertainties in how different types of asteroids will interact with the atmosphere during entry, how readily they may break up or ablate, and how much surface damage will be caused by the resulting airbursts or impacts. In this work, we use our Probabilistic Asteroid Impact Risk (PAIR) model to investigate the sensitivity of asteroid impact damage to uncertainties in key asteroid properties, entry parameters, or modeling assumptions. The PAIR model combines physics-based analytic models of asteroid entry and damage in a probabilistic Monte Carlo framework to assess the risk posed by a wide range of potential impacts. The model samples from uncertainty distributions of asteroid properties and entry parameters to generate millions of specific impact cases, and models the atmospheric entry and damage for each case, including blast overpressure, thermal radiation, tsunami inundation, and global effects. To assess the risk sensitivity, we alternately fix and vary the different input parameters and compare the effect on the resulting range of damage produced. The goal of these studies is to help guide future efforts in asteroid characterization and model refinement by determining which properties most significantly affect the potential risk.
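The fix-and-vary sensitivity strategy described above can be sketched with a deliberately crude damage proxy. The power-law proxy and all parameter ranges below are invented for illustration and are not the PAIR model's physics.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200_000

def damage_proxy(diameter, density, velocity):
    # Hypothetical power-law damage proxy: damage footprint scales
    # with impact energy ~ mass * velocity^2
    energy = density * diameter ** 3 * velocity ** 2
    return energy ** (1.0 / 3.0)

def sample(fix=None):
    d = rng.uniform(50.0, 300.0, n)          # diameter, m (assumed range)
    rho = rng.uniform(1500.0, 5000.0, n)     # density, kg/m^3 (assumed)
    v = rng.uniform(12e3, 25e3, n)           # entry velocity, m/s (assumed)
    if fix == "density":
        rho = np.full(n, 3000.0)
    if fix == "velocity":
        v = np.full(n, 18e3)
    return damage_proxy(d, rho, v)

# Alternately fix and vary inputs, comparing the spread of outcomes
base_sd = np.std(sample())
reductions = {name: 1.0 - np.std(sample(fix=name)) / base_sd
              for name in ("density", "velocity")}
for name, r in reductions.items():
    print(f"fixing {name} shrinks the damage spread by {r:.1%}")
```

Whichever fixed input shrinks the output spread most is the property whose better characterization would most reduce the risk uncertainty.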
Masterlark, Timothy; Donovan, Theodore; Feigl, Kurt L.; Haney, Matt; Thurber, Clifford H.; Tung, Sui
2016-01-01
The eruption cycle of a volcano is controlled in part by the upward migration of magma. The characteristics of the magma flux produce a deformation signature at the Earth's surface. Inverse analyses use geodetic data to estimate strategic controlling parameters that describe the position and pressurization of a magma chamber at depth. The specific distribution of material properties controls how observed surface deformation translates to source parameter estimates. Seismic tomography models describe the spatial distributions of material properties that are necessary for accurate models of volcano deformation. This study investigates how uncertainties in seismic tomography models propagate into variations in the estimates of volcano deformation source parameters inverted from geodetic data. We conduct finite element model-based nonlinear inverse analyses of interferometric synthetic aperture radar (InSAR) data for Okmok volcano, Alaska, as an example. We then analyze the estimated parameters and their uncertainties to characterize the magma chamber. Analyses are performed separately for models simulating a pressurized chamber embedded in a homogeneous domain as well as for a domain having a heterogeneous distribution of material properties according to seismic tomography. The estimated depth of the source is sensitive to the distribution of material properties. The estimated depths for the homogeneous and heterogeneous domains are 2666 ± 42 and 3527 ± 56 m below mean sea level, respectively (99% confidence). A Monte Carlo analysis indicates that uncertainties of the seismic tomography cannot account for this discrepancy at the 99% confidence level. Accounting for the spatial distribution of elastic properties according to seismic tomography significantly improves the fit of the deformation model predictions and significantly influences estimates for parameters that describe the location of a pressurized magma chamber.
Uncertainty analyses of the calibrated parameter values of a water quality model
Rode, M.; Suhr, U.; Lindenschmidt, K.-E.
2003-04-01
For river basin management, water quality models are increasingly used for the analysis and evaluation of different management measures. However, substantial uncertainties exist in parameter values depending on the available calibration data. In this paper an uncertainty analysis for a water quality model is presented, which considers the impact of the available model calibration data and the variance of input variables. The investigation was conducted based on four extensive flow-time-related longitudinal surveys in the River Elbe in the years 1996 to 1999 with varying discharges and seasonal conditions. For the model calculations the deterministic model QSIM of the BfG (Germany) was used. QSIM is a one-dimensional water quality model and uses standard algorithms for hydrodynamics and phytoplankton dynamics in running waters, e.g. Michaelis-Menten/Monod kinetics, which are used in a wide range of models. The multi-objective calibration of the model was carried out with the nonlinear parameter estimator PEST. The results show that for individual flow-time-related measuring surveys very good agreement between model calculations and measured values can be obtained. If these parameters are applied to deviating boundary conditions, substantial errors in model calculation can occur. These uncertainties can be decreased with an increased calibration database. More reliable model parameters can be identified, which supply reasonable results for broader boundary conditions. Extending the application of the parameter set to a wider range of water quality conditions leads to a slight reduction of the model precision for the specific water quality situation. Moreover, the investigations show that highly variable water quality variables like the algal biomass always permit a lower forecast accuracy than variables with lower coefficients of variation, such as nitrate.
He, L; Huang, G H; Lu, H W
2010-04-15
Solving groundwater remediation optimization problems based on proxy simulators can usually yield optimal solutions differing from the "true" ones of the problem. This study presents a new stochastic optimization model under modeling uncertainty and parameter certainty (SOMUM) and the associated solution method for simultaneously addressing modeling uncertainty associated with simulator residuals and optimizing groundwater remediation processes. This is a new attempt different from the previous modeling efforts. The previous ones focused on addressing uncertainty in physical parameters (i.e. soil porosity) while this one aims to deal with uncertainty in mathematical simulator (arising from model residuals). Compared to the existing modeling approaches (i.e. only parameter uncertainty is considered), the model has the advantages of providing mean-variance analysis for contaminant concentrations, mitigating the effects of modeling uncertainties on optimal remediation strategies, offering confidence level of optimal remediation strategies to system designers, and reducing computational cost in optimization processes.
Energy Technology Data Exchange (ETDEWEB)
He, L., E-mail: li.he@ryerson.ca [Department of Civil Engineering, Faculty of Engineering, Architecture and Science, Ryerson University, 350 Victoria Street, Toronto, Ontario, M5B 2K3 (Canada); Huang, G.H. [Environmental Systems Engineering Program, Faculty of Engineering, University of Regina, Regina, Saskatchewan, S4S 0A2 (Canada); College of Urban Environmental Sciences, Peking University, Beijing 100871 (China); Lu, H.W. [Environmental Systems Engineering Program, Faculty of Engineering, University of Regina, Regina, Saskatchewan, S4S 0A2 (Canada)
2010-04-15
Solving groundwater remediation optimization problems based on proxy simulators can usually yield optimal solutions differing from the 'true' ones of the problem. This study presents a new stochastic optimization model under modeling uncertainty and parameter certainty (SOMUM) and the associated solution method for simultaneously addressing modeling uncertainty associated with simulator residuals and optimizing groundwater remediation processes. This is a new attempt different from the previous modeling efforts. The previous ones focused on addressing uncertainty in physical parameters (i.e. soil porosity) while this one aims to deal with uncertainty in mathematical simulator (arising from model residuals). Compared to the existing modeling approaches (i.e. only parameter uncertainty is considered), the model has the advantages of providing mean-variance analysis for contaminant concentrations, mitigating the effects of modeling uncertainties on optimal remediation strategies, offering confidence level of optimal remediation strategies to system designers, and reducing computational cost in optimization processes.
Williams, Q.
2018-05-01
The thermal conductivity of iron alloys at high pressures and temperatures is a critical parameter in governing (a) the present-day heat flow out of Earth's core, (b) the inferred age of Earth's inner core, and (c) the thermal evolution of Earth's core and lowermost mantle. It is, however, one of the least well-constrained important geophysical parameters, with current estimates for end-member iron under core-mantle boundary conditions varying by about a factor of 6. Here, the current state of calculations, measurements, and inferences that constrain thermal conductivity at core conditions are reviewed. The applicability of the Wiedemann-Franz law, commonly used to convert electrical resistivity data to thermal conductivity data, is probed: Here, whether the constant of proportionality, the Lorenz number, is constant at extreme conditions is of vital importance. Electron-electron inelastic scattering and increases in Fermi-liquid-like behavior may cause uncertainties in thermal conductivities derived from both first-principles-associated calculations and electrical conductivity measurements. Additional uncertainties include the role of alloying constituents and local magnetic moments of iron in modulating the thermal conductivity. Thus, uncertainties in thermal conductivity remain pervasive, and hence a broad range of core heat flows and inner core ages appear to remain plausible.
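The Wiedemann-Franz conversion discussed above is simple to state: k = L0 T / rho_e, where rho_e is the electrical resistivity and L0 the Lorenz number. The resistivity values below are round illustrative numbers chosen to span a plausible spread, not recommended core data.

```python
# Wiedemann-Franz conversion of electrical resistivity to thermal
# conductivity: k = L0 * T / rho_e. Numbers are illustrative only.
L0 = 2.44e-8          # W ohm / K^2, Sommerfeld value of the Lorenz number
T = 4000.0            # K, order of magnitude for the outer core
resistivities = (5e-7, 1.5e-6)        # ohm m, an assumed spread
ks = [L0 * T / r for r in resistivities]
for r, k in zip(resistivities, ks):
    print(f"rho_e = {r:.1e} ohm m  ->  k = {k:.0f} W/m/K")
```

A factor-of-3 spread in assumed resistivity maps directly into a factor-of-3 spread in conductivity, which is how disagreements of the magnitude cited above arise; it also shows why any pressure-temperature dependence of the Lorenz number matters so much.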
International Nuclear Information System (INIS)
Heo, Jaeseok; Kim, Kyung Doo
2015-01-01
Highlights: • We developed an interface between an engineering simulation code and statistical analysis software. • Multiple packages of the sensitivity analysis, uncertainty quantification, and parameter estimation algorithms are implemented in the framework. • Parallel computing algorithms are also implemented in the framework to solve multiple computational problems simultaneously. - Abstract: This paper introduces a statistical data analysis toolkit, PAPIRUS, designed to perform the model calibration, uncertainty propagation, Chi-square linearity test, and sensitivity analysis for both linear and nonlinear problems. The PAPIRUS was developed by implementing multiple packages of methodologies, and building an interface between an engineering simulation code and the statistical analysis algorithms. A parallel computing framework is implemented in the PAPIRUS with multiple computing resources and proper communications between the server and the clients of each processor. It was shown that even though a large amount of data is considered for the engineering calculation, the distributions of the model parameters and the calculation results can be quantified accurately with significant reductions in computational effort. A general description about the PAPIRUS with a graphical user interface is presented in Section 2. Sections 2.1–2.5 present the methodologies of data assimilation, uncertainty propagation, Chi-square linearity test, and sensitivity analysis implemented in the toolkit with some results obtained by each module of the software. Parallel computing algorithms adopted in the framework to solve multiple computational problems simultaneously are also summarized in the paper.
Energy Technology Data Exchange (ETDEWEB)
Heo, Jaeseok, E-mail: jheo@kaeri.re.kr; Kim, Kyung Doo, E-mail: kdkim@kaeri.re.kr
2015-10-15
Highlights: • We developed an interface between an engineering simulation code and statistical analysis software. • Multiple packages of the sensitivity analysis, uncertainty quantification, and parameter estimation algorithms are implemented in the framework. • Parallel computing algorithms are also implemented in the framework to solve multiple computational problems simultaneously. - Abstract: This paper introduces a statistical data analysis toolkit, PAPIRUS, designed to perform the model calibration, uncertainty propagation, Chi-square linearity test, and sensitivity analysis for both linear and nonlinear problems. The PAPIRUS was developed by implementing multiple packages of methodologies, and building an interface between an engineering simulation code and the statistical analysis algorithms. A parallel computing framework is implemented in the PAPIRUS with multiple computing resources and proper communications between the server and the clients of each processor. It was shown that even though a large amount of data is considered for the engineering calculation, the distributions of the model parameters and the calculation results can be quantified accurately with significant reductions in computational effort. A general description about the PAPIRUS with a graphical user interface is presented in Section 2. Sections 2.1–2.5 present the methodologies of data assimilation, uncertainty propagation, Chi-square linearity test, and sensitivity analysis implemented in the toolkit with some results obtained by each module of the software. Parallel computing algorithms adopted in the framework to solve multiple computational problems simultaneously are also summarized in the paper.
The effect of uncertainty of reactor parameters obtained using k0-NAA on result of analysis
International Nuclear Information System (INIS)
Sasajima, Fumio
2006-01-01
Neutron activation analysis using the k0 method is a useful method allowing convenient and accurate simultaneous analysis of multiple elements, eliminating the need for comparative reference samples. As is well known, accurate determination of the α-factor and f-factor for the neutron spectrum in an irradiation field is essential for correct analytical results when the k0 method is used. For this reason, based on data obtained from experiments conducted in the JRR-3 PN-3 system, we evaluated how uncertainty in the measured values of the α-factor and f-factor affects the result of an analysis. The evaluation involved intentionally varying the values of the reactor parameters and then analysing environmental reference samples (NIST SRM-1632c) by the k0 method to examine the effect of these factors on the concentrations of 19 elements. The evaluation revealed that, under the conditions of this experiment, the effect of uncertainty on the concentrations of the 19 elements was at most approximately 1%, even assuming that the factor α, a reactor parameter, had an uncertainty of approximately 200%. (author)
Doherty, John E.; Hunt, Randall J.; Tonkin, Matthew J.
2010-01-01
Analysis of the uncertainty associated with parameters used by a numerical model, and with predictions that depend on those parameters, is fundamental to the use of modeling in support of decision-making. Unfortunately, predictive uncertainty analysis with regard to models can be very computationally demanding, due in part to complex constraints on parameters that arise from expert knowledge of system properties on the one hand (knowledge constraints) and from the necessity for the model parameters to assume values that allow the model to reproduce historical system behavior on the other hand (calibration constraints). Enforcement of knowledge and calibration constraints on parameters used by a model does not eliminate the uncertainty in those parameters. In fact, in many cases, enforcement of calibration constraints simply reduces the uncertainties associated with a number of broad-scale combinations of model parameters that collectively describe spatially averaged system properties. The uncertainties associated with other combinations of parameters, especially those that pertain to small-scale parameter heterogeneity, may not be reduced through the calibration process. To the extent that a prediction depends on system-property detail, its postcalibration variability may be reduced very little, if at all, by applying calibration constraints; knowledge constraints remain the only limits on the variability of predictions that depend on such detail. Regrettably, in many common modeling applications, these constraints are weak. Though the PEST software suite was initially developed as a tool for model calibration, recent developments have focused on the evaluation of model-parameter and predictive uncertainty. As a complement to functionality that it provides for highly parameterized inversion (calibration) by means of formal mathematical regularization techniques, the PEST suite provides utilities for linear and nonlinear error-variance and uncertainty analysis in
EarthServer: Visualisation and use of uncertainty as a data exploration tool
Walker, Peter; Clements, Oliver; Grant, Mike
2013-04-01
software from the EarthServer project we can produce a novel data offering that allows the use of traditional exploration and access mechanisms such as WMS and WCS. However, the real benefits can be seen when utilising WCPS to explore the data. We will show two major benefits of this infrastructure. Firstly, we will show that the visualisation of the combined chlorophyll and uncertainty datasets through a web-based GIS portal gives users the ability to instantaneously assess the quality of the data they are exploring, using traditional web-based plotting techniques as well as novel web-based three-dimensional visualisation. Secondly, we will showcase the benefits available when combining these data with the WCPS standard. The uncertainty data can be utilised in queries using the standard WCPS query language. This allows selection of data, either for download or for use within the query, based on the respective uncertainty values, as well as the possibility of incorporating both the chlorophyll data and the uncertainty data into complex queries to produce additional novel data products. By filtering with uncertainty at the data source rather than the client, we can minimise traffic over the network, allowing huge datasets to be worked on with a minimal time penalty.
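The idea of filtering by uncertainty at the data source, so that only trusted values cross the network, can be sketched in a few lines. The arrays and threshold below are hypothetical stand-ins for a WCPS query over chlorophyll and uncertainty coverages.

```python
import numpy as np

# Toy chlorophyll field with per-pixel uncertainty (hypothetical values).
chlorophyll = np.array([0.8, 2.1, 0.5, 3.3, 1.2])
uncertainty = np.array([0.1, 0.9, 0.05, 1.5, 0.2])

# Server-side filter: keep only pixels whose relative uncertainty is
# below a threshold, so only well-constrained data is transmitted.
mask = (uncertainty / chlorophyll) < 0.25
filtered = chlorophyll[mask]
print(filtered)  # -> [0.8 0.5 1.2]
```

In the real service the analogous selection would be expressed in the WCPS query language and evaluated on the server, which is where the network savings come from.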
Hydrological model parameter dimensionality is a weak measure of prediction uncertainty
Pande, S.; Arkesteijn, L.; Savenije, H.; Bastidas, L. A.
2015-04-01
This paper shows that instability of hydrological system representation in response to different pieces of information, and the associated prediction uncertainty, is a function of model complexity. After demonstrating the connection between unstable model representation and model complexity, complexity is analyzed in a step-by-step manner. This is done by measuring differences between simulations of a model under different realizations of input forcings. Algorithms are then suggested to estimate model complexity. Model complexities of two model structures, SAC-SMA (Sacramento Soil Moisture Accounting) and its simplified version SIXPAR (Six Parameter Model), are computed on resampled input data sets from basins that span the continental US. The model complexities for SIXPAR are estimated for various parameter ranges. It is shown that the complexity of SIXPAR increases with lower storage capacity and/or higher recession coefficients. Thus it is argued that a conceptually simple model structure, such as SIXPAR, can be more complex than an intuitively more complex model structure, such as SAC-SMA, for certain parameter ranges. We therefore contend that the magnitudes of feasible model parameters influence the complexity of the model selection problem just as parameter dimensionality (number of parameters) does, and that parameter dimensionality is an incomplete indicator of the stability of hydrological model selection and prediction problems.
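Measuring differences between simulations under different realizations of input forcings can be sketched with a toy single-store model. The linear reservoir and the spread statistic below are illustrative assumptions, not the algorithms of the paper, but they reproduce the qualitative finding that a higher recession coefficient yields larger output spread.

```python
import numpy as np

rng = np.random.default_rng(0)

def linear_reservoir(precip, k):
    """Toy single-store model: outflow is a fraction k of storage (hypothetical)."""
    storage, flows = 0.0, []
    for p in precip:
        storage += p
        q = k * storage      # recession coefficient k controls release
        storage -= q
        flows.append(q)
    return np.array(flows)

def output_spread(k, n_realizations=200, n_steps=50):
    # Spread of simulated flows across resampled input forcings, used here
    # as a crude proxy for the sensitivity-based complexity measure.
    sims = [linear_reservoir(rng.exponential(1.0, n_steps), k)
            for _ in range(n_realizations)]
    return np.mean(np.std(sims, axis=0))

# Higher recession coefficient -> flows track the noisy forcing more
# closely -> larger spread across realizations.
print(output_spread(0.9) > output_spread(0.1))
```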
Arnold, B. W.; Gardner, P.
2013-12-01
Calibration of groundwater flow models for the purpose of evaluating flow and aquifer heterogeneity typically uses observations of hydraulic head in wells and appropriate boundary conditions. Environmental tracers have a wide variety of decay rates and input signals in recharge, resulting in a potentially broad source of additional information to constrain flow rates and heterogeneity. A numerical study was conducted to evaluate the reduction in uncertainty during model calibration using observations of various environmental tracers and combinations of tracers. A synthetic data set was constructed by simulating steady groundwater flow and transient tracer transport in a high-resolution, 2-D aquifer with heterogeneous permeability and porosity using the PFLOTRAN software code. Data on pressure and tracer concentration were extracted at well locations and then used as observations for automated calibration of a flow and transport model using the pilot point method and the PEST code. Optimization runs were performed to estimate parameter values of permeability at 30 pilot points in the model domain for cases using 42 observations of: 1) pressure, 2) pressure and CFC11 concentrations, 3) pressure and Ar-39 concentrations, and 4) pressure, CFC11, Ar-39, tritium, and He-3 concentrations. Results show significantly lower uncertainty, as indicated by the 95% linear confidence intervals, in permeability values at the pilot points for cases including observations of environmental tracer concentrations. The average linear uncertainty range for permeability at the pilot points using pressure observations alone is 4.6 orders of magnitude, using pressure and CFC11 concentrations is 1.6 orders of magnitude, using pressure and Ar-39 concentrations is 0.9 order of magnitude, and using pressure, CFC11, Ar-39, tritium, and He-3 concentrations is 1.0 order of magnitude. Data on Ar-39 concentrations result in the greatest parameter uncertainty reduction because its half-life of 269
pypet: A Python Toolkit for Data Management of Parameter Explorations.
Meyer, Robert; Obermayer, Klaus
2016-01-01
pypet (Python parameter exploration toolkit) is a new multi-platform Python toolkit for managing numerical simulations. Sampling the space of model parameters is a key aspect of simulations and numerical experiments. pypet is designed to allow easy and arbitrary sampling of trajectories through a parameter space beyond simple grid searches. pypet collects and stores both simulation parameters and results in a single HDF5 file. This collective storage allows fast and convenient loading of data for further analyses. pypet provides various additional features such as multiprocessing and parallelization of simulations, dynamic loading of data, integration of git version control, and supervision of experiments via the electronic lab notebook Sumatra. pypet supports a rich set of data formats, including native Python types, Numpy and Scipy data, Pandas DataFrames, and BRIAN(2) quantities. Besides these formats, users can easily extend the toolkit to allow customized data types. pypet is a flexible tool suited for both short Python scripts and large scale projects. pypet's various features, especially the tight link between parameters and results, promote reproducible research in computational neuroscience and simulation-based disciplines.
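The core idea of a cartesian-product parameter exploration, which pypet generalizes with HDF5-backed trajectories, can be sketched without the library. The `simulate` function and parameter ranges below are hypothetical, and a plain dict stands in for pypet's persistent storage.

```python
import itertools

# Minimal stand-in for a parameter exploration: build the cartesian
# product of parameter ranges and store each result keyed by the
# parameter combination (pypet does this with trajectories in HDF5).
param_space = {"rate": [0.1, 0.5], "size": [10, 100]}

def simulate(rate, size):
    # Hypothetical simulation: a deterministic toy output.
    return rate * size

keys = sorted(param_space)
runs = {}
for values in itertools.product(*(param_space[k] for k in keys)):
    params = dict(zip(keys, values))
    runs[values] = simulate(**params)

print(len(runs))  # 4 runs: the full 2 x 2 grid
```

The tight link between each stored result and the exact parameter values that produced it is what the abstract credits for reproducibility.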
pypet: A Python Toolkit for Data Management of Parameter Explorations
Directory of Open Access Journals (Sweden)
Robert Meyer
2016-08-01
Full Text Available pypet (Python parameter exploration toolkit) is a new multi-platform Python toolkit for managing numerical simulations. Sampling the space of model parameters is a key aspect of simulations and numerical experiments. pypet is designed to allow easy and arbitrary sampling of trajectories through a parameter space beyond simple grid searches. pypet collects and stores both simulation parameters and results in a single HDF5 file. This collective storage allows fast and convenient loading of data for further analyses. pypet provides various additional features such as multiprocessing and parallelization of simulations, dynamic loading of data, integration of git version control, and supervision of experiments via the electronic lab notebook Sumatra. pypet supports a rich set of data formats, including native Python types, Numpy and Scipy data, Pandas DataFrames, and BRIAN(2) quantities. Besides these formats, users can easily extend the toolkit to allow customized data types. pypet is a flexible tool suited for both short Python scripts and large scale projects. pypet's various features, especially the tight link between parameters and results, promote reproducible research in computational neuroscience and simulation-based disciplines.
Directory of Open Access Journals (Sweden)
Simon van Mourik
2014-06-01
Full Text Available Multi-parameter models in systems biology are typically ‘sloppy’: some parameters or combinations of parameters may be hard to estimate from data, whereas others are not. One might expect that parameter uncertainty automatically leads to uncertain predictions, but this is not the case. We illustrate this by showing that the prediction uncertainty of each of six sloppy models varies enormously among different predictions. Statistical approximations of parameter uncertainty may lead to dramatic errors in prediction uncertainty estimation. We argue that prediction uncertainty assessment must therefore be performed on a per-prediction basis using a full computational uncertainty analysis. In practice this is feasible by providing a model with a sample or ensemble representing the distribution of its parameters. Within a Bayesian framework, such a sample may be generated by a Markov Chain Monte Carlo (MCMC) algorithm that infers the parameter distribution based on experimental data. Matlab code for generating the sample (with the Differential Evolution Markov Chain sampler) and for the subsequent uncertainty analysis using such a sample is supplied as Supplemental Information.
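A per-prediction uncertainty analysis of the kind described, pushing every posterior parameter sample through the model, can be sketched with a plain Metropolis sampler standing in for the Differential Evolution Markov Chain sampler of the paper. The one-parameter model and data below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy model y = a * t with one parameter and synthetic noisy data.
t_data = np.array([1.0, 2.0, 3.0])
y_data = 2.0 * t_data + rng.normal(0.0, 0.1, 3)

def log_post(a, sigma=0.1):
    resid = y_data - a * t_data
    return -0.5 * np.sum((resid / sigma) ** 2)

# Plain Metropolis sampler over the single parameter a.
samples, a = [], 1.0
for _ in range(5000):
    prop = a + rng.normal(0.0, 0.05)
    if np.log(rng.uniform()) < log_post(prop) - log_post(a):
        a = prop
    samples.append(a)
samples = np.array(samples[1000:])   # discard burn-in

# Per-prediction uncertainty: push every posterior sample through the
# model at the prediction point of interest.
pred_at_10 = samples * 10.0
print(np.mean(pred_at_10))  # close to the true value 20
print(np.std(pred_at_10))   # spread specific to THIS prediction
```

The point of the abstract is that this spread must be computed per prediction; a different prediction from the same posterior sample can have a very different uncertainty.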
International Nuclear Information System (INIS)
Serra, Oscar
2000-01-01
Studies were carried out on the effect of uncertainty in the values of several thermo-hydraulic parameters on the core behaviour of the CAREM-25 reactor. By using the chained codes CITVAP-THERMIT and perturbing the reference states, it was found that the effects on total power were not very important, but were much bigger for the pressure. The effects were hardly significant for perturbations of the void fraction calculation and the fuel temperature. The reactivity and the power peaking factor showed highly important changes in the case of the coolant flow. We conclude that this procedure is adequate and useful for our purpose.
The Impact of Economic Parameter Uncertainty Growth on Regional Energy Demand Assessment
Directory of Open Access Journals (Sweden)
Olga Vasilyevna Mazurova
2017-06-01
Full Text Available The article deals with forecasting studies of energy demand and prices in the region, in terms of the complex interconnections between the economy and energy and the growing uncertainty of the future development of the country and its territories. The authors propose a methodological approach which combines the assessment of the price elasticity of energy demand with the optimization of regional energy and fuel supply. In this case, the price elasticity of demand is determined taking into account the comparison of the cost-effectiveness of using different types of fuel and energy by different consumers. The originality of the proposed approach consists in simulating the behaviour of suppliers (energy companies) and large customers (power plants, boiler rooms, industry, transport, population) depending on energy price changes, the existing and new technologies, energy-saving activities and restrictions on fuel supplies. To take into account the uncertainty of future economic and energy conditions, some parameters, such as prospective technical and economic parameters, prices and technological parameters, are set as intervals of possible values with different probability levels. This approach allows carrying out multivariate studies with different combinations of the expected conditions and receiving, as a result, the range of projected values of the studied indicators. The multivariate calculations show that fuel demand has a nonlinear dependence on consumer characteristics, pricing, the projection horizon, and the nature of the uncertainty of future conditions. The authors have shown that this effect can be significant and should be considered in forecasts of the development of the fuel and energy sector. The methodological approach and quantitative evaluations can be used to improve the economic and energy development strategies of the country and regions.
Guan, Fengjiao; Zhang, Guanjun; Liu, Jie; Wang, Shujing; Luo, Xu; Zhu, Feng
2017-10-01
Accurate material parameters are critical to construct the high biofidelity finite element (FE) models. However, it is hard to obtain the brain tissue parameters accurately because of the effects of irregular geometry and uncertain boundary conditions. Considering the complexity of material test and the uncertainty of friction coefficient, a computational inverse method for viscoelastic material parameters identification of brain tissue is presented based on the interval analysis method. Firstly, the intervals are used to quantify the friction coefficient in the boundary condition. And then the inverse problem of material parameters identification under uncertain friction coefficient is transformed into two types of deterministic inverse problem. Finally the intelligent optimization algorithm is used to solve the two types of deterministic inverse problems quickly and accurately, and the range of material parameters can be easily acquired with no need of a variety of samples. The efficiency and convergence of this method are demonstrated by the material parameters identification of thalamus. The proposed method provides a potential effective tool for building high biofidelity human finite element model in the study of traffic accident injury.
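The transformation of an interval-valued friction coefficient into two deterministic inverse problems can be sketched with a toy forward model. The model, measurement, and interval below are hypothetical, and a simple grid search replaces the intelligent optimization algorithm of the paper.

```python
import numpy as np

# Toy forward model: a measured response depends on a material parameter E
# and an uncertain friction coefficient mu (all values hypothetical).
def forward(E, mu):
    return E * (1.0 + 0.5 * mu)

measured = 12.0
mu_interval = (0.1, 0.3)   # interval-quantified friction coefficient

# Interval approach: solve one deterministic inverse problem at each
# friction bound; the identified parameter then comes out as a range,
# with no need for a large number of samples.
E_grid = np.linspace(1.0, 20.0, 2000)

def identify(mu):
    errors = np.abs(forward(E_grid, mu) - measured)
    return E_grid[np.argmin(errors)]

E_hi = identify(mu_interval[0])   # larger E needed when friction is low
E_lo = identify(mu_interval[1])
print(E_lo, E_hi)                 # identified parameter range under friction uncertainty
```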
Rypdal, Martin; Sirnes, Espen; Løvsletten, Ola; Rypdal, Kristoffer
2013-08-01
Maximum likelihood estimation techniques for multifractal processes are applied to high-frequency data in order to quantify intermittency in the fluctuations of asset prices. From time records as short as one month these methods permit extraction of a meaningful intermittency parameter λ characterising the degree of volatility clustering. We can therefore study the time evolution of volatility clustering and test the statistical significance of this variability. By analysing data from the Oslo Stock Exchange, and comparing the results with the investment grade spread, we find that the estimates of λ are lower at times of high market uncertainty.
Schiavazzi, Daniele E.; Baretta, Alessia; Pennati, Giancarlo; Hsia, Tain-Yen; Marsden, Alison L.
2017-01-01
Computational models of cardiovascular physiology can inform clinical decision-making, providing a physically consistent framework to assess vascular pressures and flow distributions, and aiding in treatment planning. In particular, lumped parameter network (LPN) models that make an analogy to electrical circuits offer a fast and surprisingly realistic method to reproduce the circulatory physiology. The complexity of LPN models can vary significantly to account, for example, for cardiac and valve function, respiration, autoregulation, and time-dependent hemodynamics. More complex models provide insight into detailed physiological mechanisms, but their utility is maximized if one can quickly identify patient-specific parameters. The clinical utility of LPN models with many parameters will be greatly enhanced by automated parameter identification, particularly if parameter tuning can match non-invasively obtained clinical data. We present a framework for automated tuning of 0D lumped model parameters to match clinical data. We demonstrate the utility of this framework through application to single ventricle pediatric patients with Norwood physiology. Through a combination of local identifiability, Bayesian estimation and maximum a posteriori simplex optimization, we show the ability to automatically determine physiologically consistent point estimates of the parameters and to quantify uncertainty induced by errors and assumptions in the collected clinical data. We show that multi-level estimation, that is, updating the parameter prior information through sub-model analysis, can lead to a significant reduction in the parameter marginal posterior variance. We first consider virtual patient conditions, with clinical targets generated through model solutions, and second, application to a cohort of four single-ventricle patients with Norwood physiology. PMID:27155892
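The electrical-circuit analogy behind lumped parameter networks can be sketched with the simplest case, a two-element Windkessel (an RC circuit for the arterial tree). All parameter values below are hypothetical, and the explicit Euler integration is an illustrative choice, not the paper's solver.

```python
import numpy as np

# Two-element Windkessel, the simplest lumped parameter network:
# arterial pressure P obeys  dP/dt = (Q_in - P/R) / C,
# with peripheral resistance R and compliance C (values hypothetical).
R, C = 1.0, 1.5              # mmHg*s/mL and mL/mmHg
dt, t_end = 0.001, 10.0
n = int(round(t_end / dt))

P = 80.0                     # initial pressure in mmHg
P_last_cycle = []
for i in range(n):
    t = i * dt
    Q_in = 100.0 if (t % 1.0) < 0.3 else 0.0   # crude pulsatile inflow
    P += dt * (Q_in - P / R) / C               # explicit Euler step
    if t >= t_end - 1.0:
        P_last_cycle.append(P)

# Once transients die out, the cycle-averaged pressure approaches
# mean inflow times R: 0.3 * 100 * 1.0 = 30 mmHg.
mean_P = np.mean(P_last_cycle)
print(mean_P)
```

Parameter identification for a full LPN amounts to tuning many such R and C values (plus cardiac and valve elements) until simulated pressures and flows match the clinical targets.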
Exploring prediction uncertainty of spatial data in geostatistical and machine learning Approaches
Klump, J. F.; Fouedjio, F.
2017-12-01
Geostatistical methods such as kriging with external drift as well as machine learning techniques such as quantile regression forest have been intensively used for modelling spatial data. In addition to providing predictions for target variables, both approaches are able to deliver a quantification of the uncertainty associated with the prediction at a target location. Geostatistical approaches are, in essence, well suited to providing such prediction uncertainties, and their behaviour is well understood. However, they often require significant data pre-processing and rely on assumptions that are rarely met in practice. Machine learning algorithms such as random forest regression, on the other hand, require less data pre-processing and are non-parametric. This makes the application of machine learning algorithms to geostatistical problems an attractive proposition. The objective of this study is to compare kriging with external drift and quantile regression forest with respect to their ability to deliver reliable prediction uncertainties of spatial data. In our comparison we use both simulated and real world datasets. Apart from classical performance indicators, comparisons make use of accuracy plots, probability interval width plots, and visual examination of the uncertainty maps provided by the two approaches. By comparing random forest regression to kriging, we found that both methods produced comparable maps of estimated values for our variables of interest. However, the measure of uncertainty provided by random forest seems to be quite different to the measure of uncertainty provided by kriging. In particular, the lack of spatial context can give misleading results in areas without ground truth data. These preliminary results raise questions about assessing the risks associated with decisions based on the predictions from geostatistical and machine learning algorithms in a spatial context, e.g. mineral exploration.
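The way an ensemble of member predictions yields an interval rather than a point estimate, as quantile regression forest does with its trees, can be sketched with a bootstrap ensemble of simple fits. The data are synthetic and the linear members are stand-ins for trees; the quantile step is the part being illustrated.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic data: target depends linearly on a covariate plus noise.
x = rng.uniform(0, 10, 300)
y = 2.0 * x + rng.normal(0, 2.0, 300)

# Bootstrap ensemble of simple linear fits as a stand-in for the trees
# of a quantile regression forest: the spread of member predictions at
# a location gives an interval, not just a point estimate.
preds_at_5 = []
for _ in range(500):
    idx = rng.integers(0, len(x), len(x))
    slope, intercept = np.polyfit(x[idx], y[idx], 1)
    preds_at_5.append(slope * 5.0 + intercept)

lo, hi = np.quantile(preds_at_5, [0.05, 0.95])
print(lo, hi)   # 90% interval for the regression mean at x = 5 (true value 10)
```

Note that, unlike kriging, nothing in this ensemble knows about spatial correlation, which is exactly the "lack of spatial context" caveat raised in the abstract.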
Badawy, B.; Fletcher, C. G.
2017-12-01
The parameterization of snow processes in land surface models is an important source of uncertainty in climate simulations. Quantifying the importance of snow-related parameters, and their uncertainties, may therefore lead to better understanding and quantification of uncertainty within integrated earth system models. However, quantifying the uncertainty arising from parameterized snow processes is challenging due to the high-dimensional parameter space, poor observational constraints, and parameter interaction. In this study, we investigate the sensitivity of the land simulation to uncertainty in snow microphysical parameters in the Canadian LAnd Surface Scheme (CLASS) using an uncertainty quantification (UQ) approach. A set of training cases (n=400) from CLASS is used to sample each parameter across its full range of empirical uncertainty, as determined from available observations and expert elicitation. A statistical learning model using support vector regression (SVR) is then constructed from the training data (CLASS output variables) to efficiently emulate the dynamical CLASS simulations over a much larger (n=220) set of cases. This approach is used to constrain the plausible range for each parameter using a skill score, and to identify the parameters with the largest influence on the land simulation in CLASS at global and regional scales, using a random forest (RF) permutation importance algorithm. Preliminary sensitivity tests indicate that the snow albedo refreshment threshold and the limiting snow depth, below which bare patches begin to appear, have the highest impact on snow output variables. The results also show a considerable reduction of the plausible ranges of the parameter values, and hence of their uncertainty ranges, which can lead to a significant reduction of the model uncertainty. The implementation and results of this study will be presented and discussed in detail.
International Nuclear Information System (INIS)
Kovtonyuk, A.; Petruzzi, A.; D'Auria, F.
2015-01-01
The objective of the Post-BEMUSE Reflood Model Input Uncertainty Methods (PREMIUM) benchmark is to progress on the issue of the quantification of the uncertainty of the physical models in system thermal-hydraulic codes by considering a concrete case: the physical models involved in the prediction of core reflooding. The PREMIUM benchmark consists of five phases. This report presents the results of Phase II, dedicated to the identification of the uncertain code parameters associated with physical models used in the simulation of reflooding conditions. This identification is made on the basis of Test 216 of the FEBA/SEFLEX programme according to the following steps: - identification of influential phenomena; - identification of the associated physical models and parameters, depending on the code used; - quantification of the variation range of identified input parameters through a series of sensitivity calculations. A procedure for the identification of potentially influential code input parameters (IP) has been set up in the Specifications of Phase II of the PREMIUM benchmark. A set of quantitative criteria has also been proposed for the identification of influential IP and their respective variation ranges. Thirteen participating organisations, using 8 different codes (7 system thermal-hydraulic codes and 1 sub-channel module of a system thermal-hydraulic code), submitted Phase II results. The base case calculations show a spread in predicted cladding temperatures and quench front propagation, which has been characterized. All the participants, except one, predict a quench front progression that is too fast. In addition, the cladding temperature time trends obtained by almost all the participants show oscillatory behaviour which may have numerical origins. Adopted criteria for identification of influential input parameters differ between the participants: some organisations used the set of criteria proposed in Specifications 'as is', some modified the quantitative thresholds
Parameter uncertainty and model predictions: a review of Monte Carlo results
International Nuclear Information System (INIS)
Gardner, R.H.; O'Neill, R.V.
1979-01-01
Studies of parameter variability by Monte Carlo analysis are reviewed, using repeated simulations of the model with randomly selected parameter values. At the beginning of each simulation, parameter values are chosen from specific frequency distributions. This process is continued for a number of iterations sufficient to converge on an estimate of the frequency distribution of the output variables. The purpose was to explore the general properties of error propagation in models. Testing the implicit assumptions of analytical methods and pointing out counter-intuitive results produced by the Monte Carlo approach are additional points covered.
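The Monte Carlo procedure described, sampling parameters from frequency distributions and iterating until the output distribution converges, can be sketched as follows. The model and the parameter distributions below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)

# Monte Carlo error propagation: draw parameters from their assumed
# frequency distributions, run the model repeatedly, and build up the
# frequency distribution of the output (distributions hypothetical).
def model(k1, k2):
    return k1 / (k1 + k2)   # toy output: a partition coefficient

n = 10000
k1 = rng.lognormal(mean=0.0, sigma=0.3, size=n)
k2 = rng.lognormal(mean=0.5, sigma=0.3, size=n)
out = model(k1, k2)

print(out.mean(), out.std())   # converged summary of output uncertainty
```

With enough iterations, the histogram of `out` approximates the full output frequency distribution, which is what analytical error-propagation formulas only approximate.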
Boehlen, T T; Dosanjh, M; Ferrari, A; Fossati, P; Haberer, T; Mairani, A; Patera, V
2012-01-01
Uncertainties in determining clinically used relative biological effectiveness (RBE) values for ion beam therapy carry the risk of absolute and relative misestimations of RBE-weighted doses for clinical scenarios. This study assesses the consequences of hypothetical misestimations of input parameters to the RBE modelling for carbon ion treatment plans by a variational approach. The impact of the variations on resulting cell survival and RBE values is evaluated as a function of the remaining ion range. In addition, the sensitivity to misestimations in RBE modelling is compared for single fields and two opposed fields using differing optimization criteria. It is demonstrated for single treatment fields that moderate variations (up to ±50%) of representative nominal input parameters for four tumours result mainly in a misestimation of the RBE-weighted dose in the planning target volume (PTV) by a constant factor, with only smaller RBE-weighted dose gradients. Ensuring a more uniform radiation quality in the PTV...
Rabies epidemic model with uncertainty in parameters: crisp and fuzzy approaches
Ndii, M. Z.; Amarti, Z.; Wiraningsih, E. D.; Supriatna, A. K.
2018-03-01
A deterministic mathematical model is formulated to investigate the transmission dynamics of rabies. In particular, we investigate the effects of vaccination, carrying capacity and the transmission rate on the rabies epidemics and allow for uncertainty in the parameters. We perform crisp and fuzzy approaches. We find that, in the case of crisp parameters, rabies epidemics may be interrupted when the carrying capacity and the transmission rate are not high. Our findings suggest that limiting the growth of dog population and reducing the potential contact between susceptible and infectious dogs may aid in interrupting rabies epidemics. We extend the work by considering a fuzzy carrying capacity and allow for low, medium, and high level of carrying capacity. The result confirms the results obtained by using crisp carrying capacity, that is, when the carrying capacity is not too high, the vaccination could confine the disease effectively.
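The crisp-parameter finding, that epidemics can be interrupted when the carrying capacity and transmission rate are not high, can be sketched with a toy susceptible-infectious model. The equations and all parameter values below are illustrative assumptions, not the paper's model.

```python
# Toy susceptible-infectious dog-rabies model with logistic growth to a
# carrying capacity K, transmission rate beta, and vaccination rate v.
# Equations and values are illustrative assumptions, not the paper's model.
def run(K, beta, v, r=0.5, mu=0.1, dt=0.01, t_end=500.0):
    S, I = 0.9 * K, 0.01 * K
    for _ in range(int(round(t_end / dt))):
        N = S + I
        dS = r * N * (1.0 - N / K) - beta * S * I - v * S
        dI = beta * S * I - mu * I
        S = max(S + dt * dS, 0.0)   # explicit Euler, clamped at zero
        I = max(I + dt * dI, 0.0)
    return I

# Infection persists only if susceptibles can exceed mu/beta = 20:
# a low carrying capacity interrupts the epidemic, a high one does not.
r_low = run(K=20.0, beta=0.005, v=0.1)
r_high = run(K=100.0, beta=0.005, v=0.1)
print(r_low, r_high)
```

The fuzzy extension in the paper amounts to running such a model over low, medium, and high membership levels of K instead of a single crisp value.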
Cosmological parameter uncertainties from SALT-II type Ia supernova light curve models
International Nuclear Information System (INIS)
Mosher, J.; Sako, M.; Guy, J.; Astier, P.; Betoule, M.; El-Hage, P.; Pain, R.; Regnault, N.; Kessler, R.; Frieman, J. A.; Marriner, J.; Biswas, R.; Kuhlmann, S.; Schneider, D. P.
2014-01-01
We use simulated type Ia supernova (SN Ia) samples, including both photometry and spectra, to perform the first direct validation of cosmology analysis using the SALT-II light curve model. This validation includes residuals from the light curve training process, systematic biases in SN Ia distance measurements, and a bias on the dark energy equation of state parameter w. Using the SN-analysis package SNANA, we simulate and analyze realistic samples corresponding to the data samples used in the SNLS3 analysis: ∼120 low-redshift (z < 0.1) SNe Ia, ∼255 Sloan Digital Sky Survey SNe Ia (z < 0.4), and ∼290 SNLS SNe Ia (z ≤ 1). To probe systematic uncertainties in detail, we vary the input spectral model, the model of intrinsic scatter, and the smoothing (i.e., regularization) parameters used during the SALT-II model training. Using realistic intrinsic scatter models results in a slight bias in the ultraviolet portion of the trained SALT-II model, and w biases (w input – w recovered ) ranging from –0.005 ± 0.012 to –0.024 ± 0.010. These biases are indistinguishable from each other within the uncertainty; the average bias on w is –0.014 ± 0.007.
GPI-repetitive control for linear systems with parameter uncertainty / variation
Directory of Open Access Journals (Sweden)
John A. Cortés-Romero
2015-01-01
Full Text Available Robust repetitive control problems for uncertain linear systems have been considered by different approaches. This article proposes the use of Repetitive Control and Generalized Proportional Integral (GPI) Control in a complementary fashion. The conditioning and coupling of these techniques has been done in a discrete-time context. Repetitive control is a control technique, based on the internal model principle, which yields perfect asymptotic tracking and rejection of periodic signals. On the other hand, GPI control is established as a robust linear control system design technique that is able to reject structured time-polynomial additive perturbations, in particular parameter uncertainty that can be locally approximated by a time-polynomial signal. GPI control provides suitable stability and robustness conditions for proper Repetitive Control operation. A stability analysis is presented under the frequency response framework using plant samples for different parameter uncertainty conditions. We carry out some comparative stability analyses with other complementary control approaches that have been effective for this kind of task, showing better robustness and improved performance for the GPI case. Illustrative simulation examples are presented which validate the proposed approach.
Cosmological Parameter Uncertainties from SALT-II Type Ia Supernova Light Curve Models
Energy Technology Data Exchange (ETDEWEB)
Mosher, J. [Pennsylvania U.; Guy, J. [LBL, Berkeley; Kessler, R. [Chicago U., KICP; Astier, P. [Paris U., VI-VII; Marriner, J. [Fermilab; Betoule, M. [Paris U., VI-VII; Sako, M. [Pennsylvania U.; El-Hage, P. [Paris U., VI-VII; Biswas, R. [Argonne; Pain, R. [Paris U., VI-VII; Kuhlmann, S. [Argonne; Regnault, N. [Paris U., VI-VII; Frieman, J. A. [Fermilab; Schneider, D. P. [Penn State U.
2014-08-29
We use simulated type Ia supernova (SN Ia) samples, including both photometry and spectra, to perform the first direct validation of cosmology analysis using the SALT-II light curve model. This validation includes residuals from the light curve training process, systematic biases in SN Ia distance measurements, and a bias on the dark energy equation of state parameter w. Using the SN-analysis package SNANA, we simulate and analyze realistic samples corresponding to the data samples used in the SNLS3 analysis: ~120 low-redshift (z < 0.1) SNe Ia, ~255 Sloan Digital Sky Survey SNe Ia (z < 0.4), and ~290 SNLS SNe Ia (z ≤ 1). To probe systematic uncertainties in detail, we vary the input spectral model, the model of intrinsic scatter, and the smoothing (i.e., regularization) parameters used during the SALT-II model training. Using realistic intrinsic scatter models results in a slight bias in the ultraviolet portion of the trained SALT-II model, and w biases (w (input) – w (recovered)) ranging from –0.005 ± 0.012 to –0.024 ± 0.010. These biases are indistinguishable from each other within the uncertainty; the average bias on w is –0.014 ± 0.007.
An Inexpensive Way of Teaching Uncertainty and Mineral Exploration Drilling in the Classroom
Aquino, J. S.
2014-12-01
This presentation is all about inexpensive ways of teaching uncertainty and mineral exploration drilling in the classroom. These labs were developed as an off-shoot of my years of mineral industry experience before I transitioned to geoscience education. I have developed several classroom lab exercises that relate to the role of modeling, uncertainty and prediction in mineral exploration, and most are inexpensive to run. Early in the semester, modeling is explored through the cube and toilet paper roll puzzle lab. This is then immediately followed by the penny experiment, which gives a physical meaning to the concept of uncertainty. However, it is the end-of-semester shoebox drilling lab that serves as the culminating activity for modeling, uncertainty and prediction. An object (orebody) is hidden inside a shoebox and the students are challenged to design a drilling program to predict the location and topology of a "mineral deposit". The students' decision on the location of the first few drill holes will be based on how they analyze, synthesize and evaluate simple surface topographic, geologic and geochemical +/- geophysical data overlain on top of the box. Before drilling, students are required to construct several geologic sections that will "model" the shape of the hidden orebody. Using bamboo skewers as their drilling equipment, students then commence their drilling and along the way learn the importance of drill spacing in decreasing uncertainty or increasing confidence. Lastly, the mineral separation lab gives them an opportunity to design another experiment that mimics mineral processing, and they learn a valuable lesson on the difficulties of recovery and how it relates to entropy (no such thing as 100% recoverability). The last two labs can be further enhanced with economic analysis through the incorporation of drilling and processing costs. Students further appreciate the world of mineral exploration with several YouTube videos on the use of 3D and 4D
Qian, Y.; Wang, C.; Huang, M.; Berg, L. K.; Duan, Q.; Feng, Z.; Shrivastava, M. B.; Shin, H. H.; Hong, S. Y.
2016-12-01
This study aims to quantify the relative importance and uncertainties of different physical processes and parameters affecting simulated surface fluxes and land-atmosphere coupling strength over the Amazon region. We used two-legged coupling metrics, which include both terrestrial (soil moisture to surface fluxes) and atmospheric (surface fluxes to atmospheric state or precipitation) legs, to diagnose the land-atmosphere interaction and coupling strength. Observations made using the Department of Energy's Atmospheric Radiation Measurement (ARM) Mobile Facility during the GoAmazon field campaign, together with satellite and reanalysis data, are used to evaluate model performance. To quantify the uncertainty in physical parameterizations, we performed a 120-member ensemble of simulations with the WRF model using a stratified experimental design including 6 cloud microphysics, 3 convection, 6 PBL and surface layer, and 3 land surface schemes. A multiple-way analysis of variance approach is used to quantitatively analyze the inter- and intra-group (scheme) means and variances. To quantify parameter sensitivity, we conducted an additional 256 WRF simulations in which an efficient sampling algorithm is used to explore the multi-dimensional parameter space. Three uncertainty quantification approaches are applied for sensitivity analysis (SA) of multiple variables of interest to 20 selected parameters in the YSU PBL and MM5 surface layer schemes. Results show consistent parameter sensitivity across the different SA methods. We found that 5 out of 20 parameters contribute more than 90% of the total variance, and that first-order effects dominate over the interaction effects. The results of this uncertainty quantification study serve as guidance for better understanding the roles of different physical processes in land-atmosphere interactions, quantifying model uncertainties from various sources such as physical processes, parameters and structural errors, and providing insights for
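The inter-group variance decomposition described above can be sketched as a toy one-way ANOVA-style share per physics group. The scheme counts, effect sizes and noise level below are made-up stand-ins for the WRF ensemble, not values from the study:

```python
import random
from itertools import product

random.seed(0)

# Hypothetical stand-in for the scheme ensemble: 3 "microphysics" and
# 4 "PBL" choices with invented effect sizes and observation noise.
MP_EFFECT = [0.0, 0.5, 1.0]
PBL_EFFECT = [0.0, 0.1, 0.2, 0.3]

def simulate(mp, pbl):
    return MP_EFFECT[mp] + PBL_EFFECT[pbl] + random.gauss(0, 0.05)

# 5 replicate runs per scheme combination (stratified design).
runs = [(mp, pbl, simulate(mp, pbl))
        for mp, pbl in product(range(3), range(4))
        for _ in range(5)]

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

total = variance([y for _, _, y in runs])

def group_share(index):
    # Share of total variance explained by one factor:
    # variance of the per-scheme group means over the total variance.
    groups = {}
    for run in runs:
        groups.setdefault(run[index], []).append(run[2])
    means = [sum(g) / len(g) for g in groups.values()]
    return variance(means) / total

print("microphysics share:", round(group_share(0), 2))
print("PBL share:", round(group_share(1), 2))
```

With the invented effect sizes, the microphysics choice dominates the ensemble spread, mirroring how the inter-group analysis attributes variance to scheme families.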
DEFF Research Database (Denmark)
Morales Rodriguez, Ricardo; Meyer, Anne S.; Gernaey, Krist
2011-01-01
This study presents the development of a systematic modelling framework for identification of the most critical variables and parameters under uncertainty, evaluated on a lignocellulosic ethanol production case study. The systematic framework starts with: (1) definition of the objectives; (2......, suitable for further analysis of the bioprocess. The uncertainty and sensitivity analysis identified the following most critical variables and parameters involved in the lignocellulosic ethanol production case study. For the operating cost, the enzyme loading showed the strongest impact, while reaction...
Janardhanan, S.; Datta, B.
2011-12-01
Surrogate models are widely used to develop computationally efficient simulation-optimization models to solve complex groundwater management problems. Artificial intelligence-based models are most often used for this purpose, trained using predictor-predictand data obtained from a numerical simulation model. Most often this is implemented with the assumption that the parameters and boundary conditions used in the numerical simulation model are perfectly known. However, in most practical situations these values are uncertain, and under these circumstances the applicability of such approximation surrogates becomes limited. In our study we develop a surrogate-model-based coupled simulation-optimization methodology for determining optimal pumping strategies for coastal aquifers considering parameter uncertainty. An ensemble surrogate modeling approach is used along with multiple-realization optimization. The methodology is used to solve a multi-objective coastal aquifer management problem with two conflicting objectives. Hydraulic conductivity and aquifer recharge are considered uncertain. The three-dimensional coupled flow and transport simulation model FEMWATER is used to simulate the aquifer responses for a number of scenarios corresponding to Latin hypercube samples of pumping rates and uncertain parameters, generating input-output patterns for training the surrogate models. Non-parametric bootstrap sampling of this original data set is used to generate multiple data sets that belong to different regions of the multi-dimensional decision and parameter space. These data sets are used to train and test multiple surrogate models based on genetic programming. The ensemble of surrogate models is then linked to a multi-objective genetic algorithm to solve the pumping optimization problem. Two conflicting objectives, viz., maximizing total pumping from beneficial wells and minimizing the total pumping from barrier wells for hydraulic control of
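The bootstrap-ensemble surrogate idea can be illustrated in miniature. The linear "aquifer response" below is a made-up stand-in for FEMWATER, and the surrogates are simple least-squares fits rather than genetic programming, but the resample-train-ensemble logic is the same:

```python
import random

random.seed(1)

# Made-up stand-in for the numerical simulator: salinity response to a
# single pumping rate, with noise. Not the FEMWATER model.
def aquifer_response(pumping):
    return 2.0 * pumping + 0.5 + random.gauss(0, 0.1)

# "Input-output patterns" from a sweep over the decision variable.
data = [(p, aquifer_response(p)) for p in [i / 20 for i in range(21)]]

def fit_linear(samples):
    # Ordinary least squares for y = a*x + b with one predictor.
    n = len(samples)
    sx = sum(x for x, _ in samples)
    sy = sum(y for _, y in samples)
    sxx = sum(x * x for x, _ in samples)
    sxy = sum(x * y for x, y in samples)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return a, (sy - a * sx) / n

# Ensemble of surrogates, each trained on a nonparametric bootstrap resample.
ensemble = [fit_linear([random.choice(data) for _ in data]) for _ in range(50)]

def predict(x):
    # Ensemble mean prediction plus spread (a proxy for surrogate uncertainty).
    preds = [a * x + b for a, b in ensemble]
    mean = sum(preds) / len(preds)
    spread = (sum((p - mean) ** 2 for p in preds) / len(preds)) ** 0.5
    return mean, spread

mean, spread = predict(0.5)
```

An optimizer querying `predict` gets both a response estimate and an ensemble spread, which is what lets the multiple-realization optimization account for surrogate error.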
Directory of Open Access Journals (Sweden)
Douglas Domingues Bueno
2008-01-01
This paper deals with the study of algorithms for robust active vibration control in flexible structures considering uncertainties in system parameters. This has become an area of enormous interest, mainly due to the countless demands for optimal performance in mechanical systems such as aircraft, aerospace, and automotive structures. An important and difficult problem in designing active vibration control is obtaining a representative dynamic model. Generally, this model can be obtained using the finite element method (FEM) or an identification method using experimental data. Actuators and sensors may affect the dynamic properties of the structure; for instance, the electromechanical coupling of piezoelectric material must be considered in the FEM formulation for flexible and lightly damped structures. The nonlinearities and uncertainties involved make this a difficult task, especially for complex structures such as spatial truss structures. On the other hand, by using an identification method, it is possible to obtain a dynamic model represented through a state-space realization that accounts for this coupling. This paper proposes an experimental methodology for vibration control in a 3D truss structure using PZT wafer stacks and a robust control algorithm solved by linear matrix inequalities.
International Nuclear Information System (INIS)
Ward, R.C.; Kocher, D.C.; Hicks, B.B.; Hosker, R.P. Jr.; Ku, J.Y.; Rao, K.S.
1985-01-01
We have studied the sensitivity of results from the CRAC2 computer code, which predicts health impacts from a reactor-accident scenario, to uncertainties in selected meteorological models and parameters. The sources of uncertainty examined include the models for plume rise and wet deposition and the meteorological bin-sampling procedure. An alternative plume-rise model usually had little effect on predicted health impacts. In an alternative wet-deposition model, the scavenging rate depends only on storm type, rather than on rainfall rate and atmospheric stability class as in the CRAC2 model. Use of the alternative wet-deposition model in meteorological bin-sampling runs decreased predicted mean early injuries by as much as a factor of 2-3 and, for large release heights and sensible heat rates, decreased mean early fatalities by nearly an order of magnitude. The bin-sampling procedure in CRAC2 was expanded by dividing each rain bin into four bins that depend on rainfall rate. Use of the modified bin structure in conjunction with the CRAC2 wet-deposition model changed all predicted health impacts by less than a factor of 2.
Sela, Itamar; Ashkenazy, Haim; Katoh, Kazutaka; Pupko, Tal
2015-07-01
Inference of multiple sequence alignments (MSAs) is a critical part of phylogenetic and comparative genomics studies. However, from the same set of sequences different MSAs are often inferred, depending on the methodologies used and the assumed parameters. Much effort has recently been devoted to improving the ability to identify unreliable alignment regions. Detecting such unreliable regions was previously shown to be important for downstream analyses relying on MSAs, such as the detection of positive selection. Here we developed GUIDANCE2, a new integrative methodology that accounts for: (i) uncertainty in the process of indel formation, (ii) uncertainty in the assumed guide tree and (iii) co-optimal solutions in the pairwise alignments, used as building blocks in progressive alignment algorithms. We compared GUIDANCE2 with seven methodologies to detect unreliable MSA regions using extensive simulations and empirical benchmarks. We show that GUIDANCE2 outperforms all previously developed methodologies. Furthermore, GUIDANCE2 also provides a set of alternative MSAs which can be useful for downstream analyses. The novel algorithm is implemented as a web-server, available at: http://guidance.tau.ac.il. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
Energy Technology Data Exchange (ETDEWEB)
Secchi, Piercesare [MOX, Department of Mathematics, Polytechnic of Milan (Italy); Zio, Enrico [Department of Energy, Polytechnic of Milan, Via Ponzio 34/3, 20133 Milano (Italy)], E-mail: enrico.zio@polimi.it; Di Maio, Francesco [Department of Energy, Polytechnic of Milan, Via Ponzio 34/3, 20133 Milano (Italy)
2008-12-15
For licensing purposes, safety cases of Nuclear Power Plants (NPPs) must be presented to the Regulatory Authority with the necessary confidence in the models used to describe the plant safety behavior. In principle, this requires the repetition of a large number of model runs to account for the uncertainties inherent in the model description of the true plant behavior. The present paper propounds the use of bootstrapped Artificial Neural Networks (ANNs) for performing the numerous model output calculations needed for estimating safety margins with appropriate confidence intervals. Account is given both to the uncertainties inherent in the plant model and to those introduced by the ANN regression models used for performing the repeated safety parameter evaluations. The proposed framework of analysis is first illustrated with reference to a simple analytical model and then to the estimation of the safety margin on the maximum fuel cladding temperature reached during a complete group distribution header blockage scenario in an RBMK-1500 nuclear reactor. The results are compared with those obtained by a traditional parametric approach.
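A minimal sketch of attaching bootstrap confidence intervals to a safety-margin estimate: the temperature distribution and acceptance limit below are invented for illustration, and a simple empirical percentile stands in for the ANN-regressed safety parameter:

```python
import random

random.seed(2)

# Invented stand-in for repeated plant-model runs: peak cladding
# temperature samples (deg C) under uncertain inputs.
samples = [600 + random.gauss(0, 25) for _ in range(400)]

def percentile(xs, q):
    xs = sorted(xs)
    return xs[min(int(q * len(xs)), len(xs) - 1)]

# Bootstrap the 95th-percentile temperature to attach a confidence
# interval to the margin estimate, as the abstract's framework does
# for its ANN-based safety parameter evaluations.
boot = [percentile([random.choice(samples) for _ in samples], 0.95)
        for _ in range(200)]
lo, hi = percentile(boot, 0.05), percentile(boot, 0.95)

LIMIT = 700.0  # hypothetical acceptance limit, not an RBMK figure
margin_lo, margin_hi = LIMIT - hi, LIMIT - lo
print(f"safety margin between {margin_lo:.1f} and {margin_hi:.1f} deg C")
```

The interval `[margin_lo, margin_hi]` reflects the sampling uncertainty of the margin estimate itself, which is the quantity a regulator would ask to see alongside the point value.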
International Nuclear Information System (INIS)
Marable, J.H.; de Saussure, G.; Weisbin, C.R.
1982-01-01
This chapter attempts to show how the various types of data presented and discussed in previous chapters can be combined and applied to the calculation of performance parameters of a reactor design model. Discusses derivation of least-squares adjustment; input data to the adjustment; the results of adjustment; and application to an LMFBR. Demonstrates that the least-squares formulae represent a logical, well-founded method for combining the results of integral and differential experiments. Includes calculational bias factors and their uncertainties. Concludes that the adjustment technique is a valuable tool, and that significant progress has been made with respect to its development and its applications. Recommends further work on the evaluation of covariance files, especially for calculational biases, and the inclusion of specific shielding factors as variables to be adjusted. The appendix features a calculation whose goal is to find the form of the projection operator which projects perpendicular to the calculational manifold
Uncertainty analysis for parameters of CFAST in the main control room fire scenario
Energy Technology Data Exchange (ETDEWEB)
Wang, Wanhong; Guo, Yun; Peng, Changhong [Univ. of Science and Technology of China No. 96, Anhui (China). School of Nuclear Science and Technology
2017-07-15
The fire accident is one of the important initiating events in a nuclear power plant, and the fire development process is extremely difficult and complex to predict accurately. As a result, internal fire accidents have become one of the most realistic threats to the safety of nuclear power plants. The main control room contains all the control and monitoring equipment that operators need; once it is on fire, the hostile environment would greatly impact the safety of human operations. Therefore, fire probabilistic safety analysis of the main control room has become a significant task. Using CFAST together with Monte Carlo sampling as a fire modeling tool to simulate a main control room fire, we perform an uncertainty analysis for the important parameters of CFAST.
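The Monte Carlo part can be sketched by propagating uncertain inputs through a simple MQH-style hot-gas-layer temperature correlation. This is a stand-in for a CFAST run; the coefficient, the simplified functional form and the input ranges are illustrative only:

```python
import random

random.seed(5)

# Simplified MQH-style correlation for hot-gas-layer temperature rise;
# the 6.85 coefficient and reduced argument (heat release rate squared
# over vent area) are an illustrative stand-in for a full zone model.
def hgl_temperature_rise(q_kw, vent_area_m2):
    return 6.85 * (q_kw ** 2 / vent_area_m2) ** (1.0 / 3.0)

# Monte Carlo sampling over the uncertain inputs (invented ranges).
samples = []
for _ in range(5000):
    q = random.uniform(500, 1500)   # uncertain fire heat release rate (kW)
    a = random.uniform(1.5, 2.5)    # uncertain vent area (m^2)
    samples.append(hgl_temperature_rise(q, a))

mean = sum(samples) / len(samples)
p95 = sorted(samples)[int(0.95 * len(samples))]
print(f"mean rise {mean:.0f} K, 95th percentile {p95:.0f} K")
```

Replacing the one-line correlation with a call that writes a CFAST input file and parses its output turns the same loop into the sampling study the abstract describes.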
Tian, Li-Ping; Wang, Jianxin; Wu, Fang-Xiang
2012-09-01
The study of stability is essential for designing or controlling genetic regulatory networks, which can be described by nonlinear differential equations with time delays. Much attention has been paid to the study of delay-independent stability of genetic regulatory networks and, as a result, many sufficient conditions have been derived for delay-independent stability. Although it might be more interesting in practice, delay-dependent stability of genetic regulatory networks has been studied insufficiently. Based on the linear matrix inequality (LMI) approach, in this study we present some delay-dependent stability conditions for genetic regulatory networks. We then extend these results to genetic regulatory networks with parameter uncertainties. To illustrate the effectiveness of our theoretical results, genetic repressilator networks are analyzed.
Directory of Open Access Journals (Sweden)
Chuanfeng Li
2017-01-01
A hypersonic vehicle is a typical parameter-uncertain system with significant characteristics of strong coupling, nonlinearity, and external disturbance. In this paper, a combined system modeling approach is proposed to approximate the actual vehicle system. The state feedback control strategy is adopted based on robust guaranteed cost control (RGCC) theory, where the Lyapunov function is applied to obtain a control law for the nonlinear system and the problem is transformed into a feasibility problem solved by the linear matrix inequality (LMI) method. In addition, a non-fragile guaranteed cost controller solved by an LMI optimization approach is applied to the linear error system, where a single hidden layer neural network (SHLNN) is employed as an additive gain compensator to reduce the performance degradation caused by perturbations and uncertainties. Simulation results show the stability and good tracking performance of the proposed strategy in controlling the vehicle system.
Michalik, Thomas; Multsch, Sebastian; Frede, Hans-Georg; Breuer, Lutz
2016-04-01
Water for agriculture is strongly limited in arid and semi-arid regions and is often of low quality in terms of salinity. The application of saline waters for irrigation increases the salt load in the rooting zone and has to be managed by leaching to maintain a healthy soil, i.e. to wash out salts by additional irrigation. Dynamic simulation models are helpful tools for calculating the root-zone water fluxes and soil salinity content in order to investigate best management practices. However, there is little information on the structural and parameter uncertainty of simulations of the water and salt balance under saline irrigation. Hence, we established a multi-model system with four different models (AquaCrop, RZWQM, SWAP, Hydrus1D/UNSATCHEM) to analyze the structural and parameter uncertainty using the Generalized Likelihood Uncertainty Estimation (GLUE) method. Hydrus1D/UNSATCHEM and SWAP were set up with multiple sets of different implemented functions (e.g. matric and osmotic stress for root water uptake), which results in a broad range of different model structures. The simulations were evaluated against soil water and salinity content observations. The posterior distribution of the GLUE analysis gives behavioral parameter sets and reveals uncertainty intervals for the parameters. Throughout all of the model sets, most parameters accounting for the soil water balance show low uncertainty; only one or two out of five to six parameters in each model set display high uncertainty (e.g. the pore-size distribution index in SWAP and Hydrus1D/UNSATCHEM). The differences between the models and model setups reveal the structural uncertainty. The highest structural uncertainty is observed for deep percolation fluxes between the model sets of Hydrus1D/UNSATCHEM (~200 mm) and RZWQM (~500 mm), which are more than twice as high for the latter. The model sets also show a high variation in uncertainty intervals for deep percolation, with an interquartile range (IQR) of
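The GLUE procedure can be sketched in miniature: Monte Carlo sample a parameter, score each sample with an informal likelihood against observations, and keep the "behavioral" sets above a threshold. The one-parameter exponential leaching model below is a made-up stand-in for the four salinity models in the study:

```python
import math
import random

random.seed(3)

# Made-up leaching model: soil salt declines exponentially with an
# uncertain rate k; "truth" is k = 0.3 with noisy observations.
def model(k, t):
    return 10.0 * math.exp(-k * t)

obs = [(t, model(0.3, t) + random.gauss(0, 0.3)) for t in range(10)]

def likelihood(k):
    # Informal GLUE likelihood: exp(-SSE) against the observations.
    sse = sum((model(k, t) - y) ** 2 for t, y in obs)
    return math.exp(-sse)

# Monte Carlo sample the parameter space and keep behavioral sets
# whose likelihood exceeds a fraction of the best score.
candidates = [random.uniform(0.0, 1.0) for _ in range(2000)]
scores = [likelihood(k) for k in candidates]
cutoff = 0.1 * max(scores)
behavioral = [k for k, s in zip(candidates, scores) if s > cutoff]

k_lo, k_hi = min(behavioral), max(behavioral)
print(f"behavioral k interval: [{k_lo:.2f}, {k_hi:.2f}]")
```

The spread `[k_lo, k_hi]` is the parameter uncertainty interval; running the same loop for several model structures and comparing their behavioral predictions is what exposes the structural uncertainty the abstract reports.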
A practical method to assess model sensitivity and parameter uncertainty in C cycle models
Delahaies, Sylvain; Roulstone, Ian; Nichols, Nancy
2015-04-01
The carbon cycle combines multiple spatial and temporal scales, from minutes to hours for the chemical processes occurring in plant cells, to several hundred years for the exchange between the atmosphere and the deep ocean, and finally to millennia for the formation of fossil fuels. Together with our knowledge of the transformation processes involved in the carbon cycle, many Earth Observation systems are now available to help improve models and predictions using inverse modelling techniques. A generic inverse problem consists of finding an n-dimensional state vector x such that h(x) = y, for a given N-dimensional observation vector y, including random noise, and a given model h. The problem is well posed if the three following conditions hold: 1) a solution exists, 2) the solution is unique and 3) the solution depends continuously on the input data. If at least one of these conditions is violated the problem is said to be ill-posed. The inverse problem is often ill-posed; a regularization method is then required to replace the original problem with a well-posed problem, and a solution strategy amounts to 1) constructing a solution x, 2) assessing the validity of the solution, 3) characterizing its uncertainty. The data assimilation linked ecosystem carbon (DALEC) model is a simple box model simulating the carbon budget allocation for terrestrial ecosystems. Intercomparison experiments have demonstrated the relative merit of various inverse modelling strategies (MCMC, EnKF) for estimating model parameters and initial carbon stocks for DALEC using eddy covariance measurements of net ecosystem exchange of CO2 and leaf area index observations. Most results agreed that parameters and initial stocks directly related to fast processes were best estimated, with narrow confidence intervals, whereas those related to slow processes were poorly estimated, with very large uncertainties. While other studies have tried to overcome this difficulty by adding complementary
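The role of regularization for an ill-posed h(x) = y can be shown on a tiny example: a nearly rank-deficient 2x2 forward model violates condition (3), since a small data perturbation produces a wildly different naive solution, and a Tikhonov penalty restores stability. The matrix and data are illustrative, not the DALEC model:

```python
# Minimal sketch of Tikhonov regularization for an ill-posed linear
# inverse problem h(x) = y; the near-singular matrix is illustrative.
def solve2(a, b):
    # Cramer's rule for a 2x2 system a x = b.
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    return [(b[0] * a[1][1] - b[1] * a[0][1]) / det,
            (a[0][0] * b[1] - a[1][0] * b[0]) / det]

H = [[1.0, 1.0], [1.0, 1.0001]]   # nearly rank-deficient forward model
y_noisy = [2.0, 2.01]             # true x = [1, 1] plus tiny data noise

def tikhonov(H, y, alpha):
    # Solve the regularized normal equations (H^T H + alpha I) x = H^T y.
    HtH = [[sum(H[k][i] * H[k][j] for k in range(2)) +
            (alpha if i == j else 0.0)
            for j in range(2)] for i in range(2)]
    Hty = [sum(H[k][i] * y[k] for k in range(2)) for i in range(2)]
    return solve2(HtH, Hty)

naive = solve2(H, y_noisy)             # blows up: condition (3) fails
regularized = tikhonov(H, y_noisy, 1e-4)
print("naive:", naive)
print("regularized:", regularized)
```

The naive solution swings to values near +/-100 from a 0.01 data perturbation, while the regularized one stays near [1, 1]; choosing `alpha` trades this stability against bias, which is exactly the tension behind the poorly constrained slow-process parameters in DALEC.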
Zatarain-Salazar, J.; Reed, P. M.; Quinn, J.; Giuliani, M.; Castelletti, A.
2016-12-01
As we confront the challenges of managing river basin systems with a large number of reservoirs and increasingly uncertain tradeoffs impacting their operations (due to, e.g. climate change, changing energy markets, population pressures, ecosystem services, etc.), evolutionary many-objective direct policy search (EMODPS) solution strategies will need to address the computational demands associated with simulating more uncertainties and therefore optimizing over increasingly noisy objective evaluations. Diagnostic assessments of state-of-the-art many-objective evolutionary algorithms (MOEAs) to support EMODPS have highlighted that search time (or number of function evaluations) and auto-adaptive search are key features for successful optimization. Furthermore, auto-adaptive MOEA search operators are themselves sensitive to having a sufficient number of function evaluations to learn successful strategies for exploring complex spaces and for escaping from local optima when stagnation is detected. Fortunately, recent parallel developments allow coordinated runs that enhance auto-adaptive algorithmic learning and can handle scalable and reliable search with limited wall-clock time, but at the expense of the total number of function evaluations. In this study, we analyze this tradeoff between parallel coordination and depth of search using different parallelization schemes of the Multi-Master Borg on a many-objective stochastic control problem. We also consider the tradeoff between better representing uncertainty in the stochastic optimization, and simplifying this representation to shorten the function evaluation time and allow for greater search. Our analysis focuses on the Lower Susquehanna River Basin (LSRB) system where multiple competing objectives for hydropower production, urban water supply, recreation and environmental flows need to be balanced. Our results provide guidance for balancing exploration, uncertainty, and computational demands when using the EMODPS
Hinton, Denise; Kirk, Susan
2017-06-01
Background: There is growing recognition that multiple sclerosis is a possible, albeit uncommon, diagnosis in childhood. However, very little is known about the experiences of families living with childhood multiple sclerosis, and this is the first study to explore this in depth. Objective: Our objective was to explore the experiences of parents of children with multiple sclerosis. Methods: Qualitative in-depth interviews with 31 parents were conducted using a grounded theory approach. Parents were sampled and recruited via health service and voluntary sector organisations in the United Kingdom. Results: Parents' accounts of life with childhood multiple sclerosis were dominated by feelings of uncertainty associated with four sources: diagnostic uncertainty, daily uncertainty, interaction uncertainty and future uncertainty. Parents attempted to manage these uncertainties using specific strategies, which could in turn create further uncertainties about their child's illness. However, over time, ongoing uncertainty appeared to give parents hope for their child's future with multiple sclerosis. Conclusion: Illness-related uncertainties appear to play a role in generating hope among parents of a child with multiple sclerosis. However, this may lead parents to avoid sources of information and support that threaten their fragile optimism. Professionals need to be sensitive to the role hope plays in supporting parental coping with childhood multiple sclerosis.
Wu, Yiping; Liu, Shuguang; Huang, Zhihong; Yan, Wende
2014-01-01
Ecosystem models are useful tools for understanding ecological processes and for sustainable management of resources. In the biogeochemical field, numerical models have been widely used for investigating carbon dynamics under global changes from site to regional and global scales. However, it is still challenging to optimize parameters and estimate parameterization uncertainty for complex process-based models such as the Erosion Deposition Carbon Model (EDCM), a modified version of CENTURY that considers the carbon, water, and nutrient cycles of ecosystems. This study was designed to conduct parameter identifiability, optimization, sensitivity, and uncertainty analysis of EDCM using our developed EDCM-Auto, which incorporates a comprehensive R package, the Flexible Modeling Framework (FME), and the Shuffled Complex Evolution (SCE) algorithm. Using a forest flux tower site as a case study, we implemented a comprehensive modeling analysis involving nine parameters and four target variables (carbon and water fluxes) with their corresponding measurements based on the eddy covariance technique. The local sensitivity analysis shows that the plant production-related parameters (e.g., PPDF1 and PRDX) are most sensitive to the model cost function. Both SCE and FME are comparable and performed well in deriving the optimal parameter set with satisfactory simulations of the target variables. Global sensitivity and uncertainty analysis indicates that the parameter uncertainty and the resulting output uncertainty can be quantified, and that the magnitude of parameter-uncertainty effects depends on variables and seasons. This study also demonstrates that using cutting-edge R functions such as FME can be feasible and attractive for conducting comprehensive parameter analysis for ecosystem modeling.
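The local (one-at-a-time) sensitivity analysis mentioned above can be sketched with normalized sensitivities. The toy GPP response and the parameter names echoing a temperature optimum (PPDF1-like) and a maximum production rate (PRDX-like) are invented for illustration, not EDCM code:

```python
import math

# Invented stand-in for the model: gross primary production as a
# Gaussian response to temperature, scaled by a maximum rate.
def gpp(params):
    t_opt, p_max = params["t_opt"], params["p_max"]
    temp = 18.0  # fixed forcing for the sensitivity test
    return p_max * math.exp(-((temp - t_opt) / 8.0) ** 2)

base = {"t_opt": 22.0, "p_max": 3.0}

def local_sensitivity(name, rel_step=0.01):
    # Normalized one-at-a-time sensitivity: (dY/Y) / (dP/P), so values
    # are comparable across parameters with different units.
    perturbed = dict(base)
    perturbed[name] = base[name] * (1 + rel_step)
    y0, y1 = gpp(base), gpp(perturbed)
    return ((y1 - y0) / y0) / rel_step

sens = {name: local_sensitivity(name) for name in base}
print(sens)
```

Ranking parameters by `abs(sens[...])` reproduces, in miniature, the step that flagged PPDF1 and PRDX as the most influential parameters for the cost function.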
Zhang, Xuesong; Liang, Faming; Yu, Beibei; Zong, Ziliang
2011-01-01
Estimating uncertainty of hydrologic forecasting is valuable to water resources and other relevant decision making processes. Recently, Bayesian Neural Networks (BNNs) have been proved powerful tools for quantifying uncertainty of streamflow
Simulation of corn yields and analysis of parameter uncertainties in Hebei and Sichuan, China
Fu, A.; Xue, Y.; Hartman, M. D.; Chandran, A.; Qiu, B.; Liu, Y.
2016-12-01
Corn is one of the most important agricultural products in China. Research on the impacts of climate change and human activities on corn yields is important for understanding and mitigating the negative effects of environmental factors on corn yields and maintaining stable corn production. Using climatic data, including daily temperature, precipitation, and solar radiation from 1948 to 2010, soil properties, observed corn yields, and farmland management information, corn yields in Sichuan and Hebei Provinces of China over the past 63 years were simulated using the Daycent model, and the results were evaluated using root mean square error, bias, simulation efficiency, and standard deviation. The primary climatic factors influencing corn yields were examined, the uncertainties of climatic factors were analyzed, and the uncertainties of human activity parameters were also studied by changing fertilization levels and cultivation methods. The results showed that: (1) the Daycent model is capable of simulating corn yields in Sichuan and Hebei Provinces of China; observed and simulated corn yields show a similar increasing trend over time. (2) The minimum daily temperature is the primary factor influencing corn yields in Sichuan; in Hebei Province, daily temperature, precipitation and wind speed significantly affect corn yields. (3) When the warming trend was removed from the original data, simulated corn yields were lower than before, decreasing by about 687 kg/hm2 from 1992 to 2010; when the fertilization level and the cultivation method in the Daycent schedule file were adjusted by 50% and 75%, respectively, the simulated corn yields increased by 1206 kg/hm2 and 776 kg/hm2, respectively, with the enhancement of the fertilization level and the improvement of the cultivation method. This study provides a scientific basis for selecting a suitable fertilization level and cultivation method in corn fields in China.
International Nuclear Information System (INIS)
Ensslin, Torsten A.; Frommert, Mona
2011-01-01
The optimal reconstruction of cosmic metric perturbations and other signals requires knowledge of their power spectra and other parameters. If these are not known a priori, they have to be measured simultaneously from the same data used for the signal reconstruction. We formulate the general problem of signal inference in the presence of unknown parameters within the framework of information field theory. To solve this, we develop a generic parameter-uncertainty renormalized estimation (PURE) technique. As a concrete application, we address the problem of reconstructing Gaussian signals with unknown power spectrum using five different approaches: (i) separate maximum-a-posteriori power-spectrum measurement and subsequent reconstruction, (ii) maximum-a-posteriori reconstruction with marginalized power spectrum, (iii) maximizing the joint posterior of signal and spectrum, (iv) guessing the spectrum from the variance in the Wiener-filter map, and (v) renormalization flow analysis of the field-theoretical problem providing the PURE filter. In all cases, the reconstruction can be described or approximated as Wiener-filter operations with assumed signal spectra derived from the data according to the same recipe, but with differing coefficients. All of these filters, except the renormalized one, exhibit a perception threshold in the case of a Jeffreys prior for the unknown spectrum. Data modes with variance below this threshold do not affect the signal reconstruction at all. Filter (iv) seems to be similar to the so-called Karhunen-Loève and Feldman-Kaiser-Peacock estimators for galaxy power spectra used in cosmology, which therefore should also exhibit a marginal perception threshold if correctly implemented. We present statistical performance tests and show that the PURE filter is superior to the others, especially if the post-Wiener-filter corrections are included or in case an additional scale-independent spectral smoothness prior can be adopted.
Directory of Open Access Journals (Sweden)
Miroslav Badida
2008-06-01
Identification of the uncertainties of noise measurements alongside the declared measured values is absolutely necessary and is required by legislation. The uncertainty of a measurement expresses all errors that accrue during the measuring process. By indicating uncertainties, the measurer documents that the objective value lies, with a certain probability, within the interval bounded by the measurement uncertainty. The paper deals with the methodology of uncertainty calculation for noise measurements in living and working environments, the metal processing industry and the building materials industry.
Zatarain Salazar, Jazmin; Reed, Patrick M.; Quinn, Julianne D.; Giuliani, Matteo; Castelletti, Andrea
2017-11-01
promising new set of tools for effectively balancing exploration, uncertainty, and computational demands when using EMODPS.
Sensitivity Analysis of Uncertainty Parameter based on MARS-LMR Code on SHRT-45R of EBR II
Energy Technology Data Exchange (ETDEWEB)
Kang, Seok-Ju; Kang, Doo-Hyuk; Seo, Jae-Seung [System Engineering and Technology Co., Daejeon (Korea, Republic of); Bae, Sung-Won [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Jeong, Hae-Yong [Sejong University, Seoul (Korea, Republic of)
2016-10-15
In order to assess the uncertainty quantification of the MARS-LMR code, the code has been improved by modifying the source code to accommodate the calculation process required for uncertainty quantification. In the present study, an Unprotected Loss of Flow (ULOF) transient is selected as a typical case of an Anticipated Transient without Scram (ATWS), which belongs to the DEC category. The MARS-LMR input generation for EBR-II SHRT-45R and the execution work are performed using the PAPIRUS program. The sensitivity analysis is carried out for the uncertainty parameters of the MARS-LMR code for EBR-II SHRT-45R. Based on the results of the sensitivity analysis, dominant parameters with large sensitivity to the figure of merit (FoM) are identified. The selected dominant parameters are closely related to the development process of the ULOF event.
muView: A Visual Analysis System for Exploring Uncertainty in Myocardial Ischemia Simulations
Rosen, Paul; Burton, Brett; Potter, Kristin; Johnson, Chris R.
2016-05-23
In this paper we describe the Myocardial Uncertainty Viewer (muView or μView) system for exploring data stemming from the simulation of cardiac ischemia. The simulation uses a collection of conductivity values to understand how ischemic regions affect the undamaged anisotropic heart tissue. The data resulting from the simulation is multi-valued and volumetric, and thus, for every data point, we have a collection of samples describing cardiac electrical properties. μView combines a suite of visual analysis methods to explore the area surrounding the ischemic zone and identify how perturbations of variables change the propagation of their effects. In addition to presenting a collection of visualization techniques, which individually highlight different aspects of the data, the coordinated view system forms a cohesive environment for exploring the simulations. We also discuss the findings of our study, which are helping to steer further development of the simulation and strengthening our collaboration with the biomedical engineers attempting to understand the phenomenon.
muView: A Visual Analysis System for Exploring Uncertainty in Myocardial Ischemia Simulations
Rosen, Paul; Burton, Brett; Potter, Kristin; Johnson, Chris R.
2016-01-01
In this paper we describe the Myocardial Uncertainty Viewer (muView or μView) system for exploring data stemming from the simulation of cardiac ischemia. The simulation uses a collection of conductivity values to understand how ischemic regions affect the undamaged anisotropic heart tissue. The data resulting from the simulation is multi-valued and volumetric, and thus, for every data point, we have a collection of samples describing cardiac electrical properties. μView combines a suite of visual analysis methods to explore the area surrounding the ischemic zone and identify how perturbations of variables change the propagation of their effects. In addition to presenting a collection of visualization techniques, which individually highlight different aspects of the data, the coordinated view system forms a cohesive environment for exploring the simulations. We also discuss the findings of our study, which are helping to steer further development of the simulation and strengthening our collaboration with the biomedical engineers attempting to understand the phenomenon.
Adaptive multiscale MCMC algorithm for uncertainty quantification in seismic parameter estimation
Tan, Xiaosi
2014-08-05
Formulating an inverse problem in a Bayesian framework has several major advantages (Sen and Stoffa, 1996). It allows finding multiple solutions subject to flexible a priori information and performing uncertainty quantification in the inverse problem. In this paper, we consider Bayesian inversion for the parameter estimation in seismic wave propagation. Bayes' theorem allows writing the posterior distribution via the likelihood function and the prior distribution, where the latter represents our prior knowledge about physical properties. One of the popular algorithms for sampling this posterior distribution is Markov chain Monte Carlo (MCMC), which involves making proposals and calculating their acceptance probabilities. However, for large-scale problems, MCMC is prohibitively expensive as it requires many forward runs. In this paper, we propose a multilevel MCMC algorithm that employs multilevel forward simulations. Multilevel forward simulations are derived using Generalized Multiscale Finite Element Methods that we have proposed earlier (Efendiev et al., 2013a; Chung et al., 2013). Our overall Bayesian inversion approach provides a substantial speed-up both in the process of the sampling via preconditioning using approximate posteriors and the computation of the forward problems for different proposals by using the adaptive nature of multiscale methods. These aspects of the method are discussed in the paper. This paper is motivated by earlier work of M. Sen and his collaborators (Hong and Sen, 2007; Hong, 2008) who proposed the development of efficient MCMC techniques for seismic applications. In the paper, we present some preliminary numerical results.
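The core sampling loop the abstract refers to can be illustrated with a minimal random-walk Metropolis-Hastings sketch. This is not the paper's multilevel/multiscale variant; the toy posterior (inferring the mean of noisy observations) and the step size are assumptions for illustration only.

```python
import numpy as np

def metropolis_hastings(log_posterior, theta0, n_steps=5000, step=0.2, seed=0):
    """Random-walk Metropolis sampler for an unnormalized log-posterior."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    logp = log_posterior(theta)
    chain = np.empty((n_steps, theta.size))
    accepted = 0
    for i in range(n_steps):
        proposal = theta + step * rng.standard_normal(theta.size)
        logp_new = log_posterior(proposal)
        # Accept with probability min(1, p_new / p_old)
        if np.log(rng.uniform()) < logp_new - logp:
            theta, logp = proposal, logp_new
            accepted += 1
        chain[i] = theta
    return chain, accepted / n_steps

# Toy "inverse problem": recover the mean of noisy observations.
rng = np.random.default_rng(1)
data = rng.normal(2.0, 1.0, size=100)

def log_post(theta):
    # Flat prior; Gaussian likelihood with unit variance.
    return -0.5 * np.sum((data - theta[0]) ** 2)

chain, acc_rate = metropolis_hastings(log_post, [0.0])
print(chain[1000:].mean())  # posterior mean, close to the sample mean of data
```

The multilevel idea in the paper amounts to screening such proposals with a cheap (coarse-grid) forward model before paying for the expensive one.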
International Nuclear Information System (INIS)
Meyer, Philip D.; Gee, Glendon W.; Nicholson, Thomas J.
1999-01-01
This report addresses issues related to the analysis of uncertainty in dose assessments conducted as part of decommissioning analyses. The analysis is limited to the hydrologic aspects of the exposure pathway involving infiltration of water at the ground surface, leaching of contaminants, and transport of contaminants through the groundwater to a point of exposure. The basic conceptual models and mathematical implementations of three dose assessment codes are outlined along with the site-specific conditions under which the codes may provide inaccurate, potentially nonconservative results. In addition, the hydrologic parameters of the codes are identified and compared. A methodology for parameter uncertainty assessment is outlined that considers the potential data limitations and modeling needs of decommissioning analyses. This methodology uses generic parameter distributions based on national or regional databases, sensitivity analysis, probabilistic modeling, and Bayesian updating to incorporate site-specific information. Data sources for best-estimate parameter values and parameter uncertainty information are also reviewed. A follow-on report will illustrate the uncertainty assessment methodology using decommissioning test cases.
International Nuclear Information System (INIS)
Meyer, Philip D.; Gee, Glendon W.
2000-01-01
This report addresses issues related to the analysis of uncertainty in dose assessments conducted as part of decommissioning analyses. The analysis is limited to the hydrologic aspects of the exposure pathway involving infiltration of water at the ground surface, leaching of contaminants, and transport of contaminants through the groundwater to a point of exposure. The basic conceptual models and mathematical implementations of three dose assessment codes are outlined along with the site-specific conditions under which the codes may provide inaccurate, potentially nonconservative results. In addition, the hydrologic parameters of the codes are identified and compared. A methodology for parameter uncertainty assessment is outlined that considers the potential data limitations and modeling needs of decommissioning analyses. This methodology uses generic parameter distributions based on national or regional databases, sensitivity analysis, probabilistic modeling, and Bayesian updating to incorporate site-specific information. Data sources for best-estimate parameter values and parameter uncertainty information are also reviewed. A follow-on report will illustrate the uncertainty assessment methodology using decommissioning test cases.
Directory of Open Access Journals (Sweden)
Yan Han
2013-01-01
An interval-parameter fuzzy linear programming with stochastic vertices (IFLPSV) method is developed for water resources management under uncertainty by coupling interval-parameter fuzzy linear programming (IFLP) with stochastic programming (SP). As an extension of existing interval-parameter fuzzy linear programming, the developed IFLPSV approach has advantages in dealing with dual uncertainty optimization problems, in which uncertainty is presented as interval parameters with stochastic vertices in both the objective functions and the constraints. The developed IFLPSV method improves upon the IFLP method by allowing dual uncertainty parameters to be incorporated into the optimization processes. A hybrid intelligent algorithm based on a genetic algorithm and an artificial neural network is used to solve the developed model. The developed method is then applied to water resources allocation in Beijing city of China in 2020, where water resources shortage is a challenging issue. The results indicate that reasonable solutions have been obtained, which are helpful and useful for decision makers. Although the amount of water supply from the Guanting and Miyun reservoirs is declining with rainfall reduction, water supply from the South-to-North Water Transfer project will have an important impact on the water supply structure of Beijing city, particularly in dry years and extraordinarily dry years.
The Application Programming Interface (API) for Uncertainty Analysis, Sensitivity Analysis, and Parameter Estimation (UA/SA/PE API) (also known as Calibration, Optimization and Sensitivity and Uncertainty (CUSO)) was developed in a joint effort between several members of both ...
DEFF Research Database (Denmark)
Ramin, Elham; Sin, Gürkan; Mikkelsen, Peter Steen
2011-01-01
Uncertainty derived from one of the process models – such as one-dimensional secondary settling tank (SST) models – can impact the output of the other process models, e.g., biokinetic (ASM1), as well as the integrated wastewater treatment plant (WWTP) models. The model structure and parameter...... and from the last aerobic bioreactor upstream to the SST (Garrett/hydraulic method). For model structure uncertainty, two one-dimensional secondary settling tank (1-D SST) models are assessed, including a first-order model (the widely used Takács-model), in which the feasibility of using measured...... uncertainty of settler models can therefore propagate, and add to the uncertainties in prediction of any plant performance criteria. Here we present an assessment of the relative significance of secondary settling model performance in WWTP simulations. We perform a global sensitivity analysis (GSA) based...
Parameter uncertainty analysis for the annual phosphorus loss estimator (APLE) model
Technical abstract: Models are often used to predict phosphorus (P) loss from agricultural fields. While it is commonly recognized that model predictions are inherently uncertain, few studies have addressed prediction uncertainties using P loss models. In this study, we conduct an uncertainty analys...
Model parameter uncertainty analysis for annual field-scale P loss model
Phosphorous (P) loss models are important tools for developing and evaluating conservation practices aimed at reducing P losses from agricultural fields. All P loss models, however, have an inherent amount of uncertainty associated with them. In this study, we conducted an uncertainty analysis with ...
Model parameter uncertainty analysis for an annual field-scale phosphorus loss model
Phosphorous (P) loss models are important tools for developing and evaluating conservation practices aimed at reducing P losses from agricultural fields. All P loss models, however, have an inherent amount of uncertainty associated with them. In this study, we conducted an uncertainty analysis with ...
Directory of Open Access Journals (Sweden)
Adrian Nocoń
2015-09-01
This paper presents an analysis of the influence of uncertainty of power system mathematical model parameters on optimised parameters of PSS2A system stabilizers. Optimisation of power system stabilizer parameters was based on polyoptimisation (multi-criteria optimisation). Optimisation criteria were determined for disturbances occurring in a multi-machine power system, when taking into account transient waveforms associated with electromechanical swings (instantaneous power, angular speed and terminal voltage waveforms of generators). A genetic algorithm with floating-point encoding, tournament selection, mean crossover and perturbative mutations, modified for the needs of investigations, was used for optimisation. The impact of uncertainties on the quality of operation of power system stabilizers with optimised parameters has been evaluated using various deformation factors.
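A floating-point genetic algorithm with tournament selection, mean (arithmetic) crossover and perturbative mutation, as named in the abstract, can be sketched as a generic minimiser. The toy objective, population size and mutation scale below are illustrative assumptions, not the stabilizer-tuning setup of the paper.

```python
import numpy as np

def genetic_minimize(f, bounds, pop=40, gens=80, seed=0):
    """Floating-point GA: tournament selection, mean crossover,
    Gaussian perturbative mutation, and simple elitism."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    X = rng.uniform(lo, hi, size=(pop, lo.size))
    for _ in range(gens):
        fit = np.array([f(x) for x in X])
        best = X[fit.argmin()].copy()
        # Tournament selection: the better of two random individuals
        a, b = rng.integers(pop, size=(2, pop))
        parents = np.where((fit[a] < fit[b])[:, None], X[a], X[b])
        # Mean crossover of consecutive parents, then perturbative mutation
        children = 0.5 * (parents + np.roll(parents, 1, axis=0))
        children += rng.normal(0, 0.05, children.shape) * (hi - lo)
        X = np.clip(children, lo, hi)
        X[0] = best                      # elitism: keep the best so far
    fit = np.array([f(x) for x in X])
    return X[fit.argmin()], fit.min()

# Minimise a toy quadratic with optimum at (0.3, 0.3)
x, fx = genetic_minimize(lambda x: np.sum((x - 0.3) ** 2), [(-1, 1), (-1, 1)])
print(x, fx)
```

In the paper's setting, `f` would be the multi-criteria cost evaluated from simulated transient waveforms rather than a closed-form function.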
Exploring the notion of a 'capability for uncertainty' and the implications for leader development
Directory of Open Access Journals (Sweden)
Kathy Bennett
2016-10-01
Orientation: With uncertainty increasingly defining organisational contexts, executive leaders need to develop their ‘capability for uncertainty’ – the ability to engage with uncertainty in their organisational context and to lead others, while simultaneously managing their own experience of uncertainty. However, what constitutes such a holistic ‘capability for uncertainty’ is not clear. Research purpose: The purpose was to gain an understanding of what constitutes a capability for uncertainty. Motivation for the study: Gaining an understanding of what components constitute leaders’ capability for uncertainty would provide a basis for determining what interventions would be relevant for developing leaders towards achieving such a capability. Research approach, design and method: An interpretive qualitative approach was adopted, using interpretative phenomenological analysis to gain an understanding of what capability executive leaders developed through their lived experience of uncertainty. Two purposive samples of six executive leaders from two different South African companies (a private company and a state-owned company), which had both been experiencing long-term organisational uncertainty prior to and up to the time of the study, were used. Data were collected through semi-structured interviews. Main findings: The executives all developed their capability for uncertainty through lived experiences of uncertainty, to a greater or lesser extent. Five components were identified as constituting a holistic capability for uncertainty, as follows: a sense of positive identity, an acceptance of uncertainty, effective sense-making, learning agility and relevant leadership practices during organisational uncertainty. Practical/managerial implications: Organisations need to target and design leader development interventions to specifically develop these components of a holistic capability for uncertainty in executives and leaders, enabling
Energy Technology Data Exchange (ETDEWEB)
Nakagawa, Tsuneo; Shibata, Keiichi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment
1997-05-01
Uncertainties have been estimated for the resonance parameters of {sup 56}Fe, {sup 239}Pu, {sup 240}Pu and {sup 238}U contained in JENDL-3.2. Errors of the parameters were determined from the measurements which the evaluation was based on. The estimated errors have been compiled in the MF32 of the ENDF format. The numerical results are given in tables. (author)
DeLannoy, Gabrielle J. M.; Reichle, Rolf H.; Vrugt, Jasper A.
2013-01-01
Uncertainties in L-band (1.4 GHz) radiative transfer modeling (RTM) affect the simulation of brightness temperatures (Tb) over land and the inversion of satellite-observed Tb into soil moisture retrievals. In particular, accurate estimates of the microwave soil roughness, vegetation opacity and scattering albedo for large-scale applications are difficult to obtain from field studies and often lack an uncertainty estimate. Here, a Markov Chain Monte Carlo (MCMC) simulation method is used to determine satellite-scale estimates of RTM parameters and their posterior uncertainty by minimizing the misfit between long-term averages and standard deviations of simulated and observed Tb at a range of incidence angles, at horizontal and vertical polarization, and for morning and evening overpasses. Tb simulations are generated with the Goddard Earth Observing System (GEOS-5) and confronted with Tb observations from the Soil Moisture Ocean Salinity (SMOS) mission. The MCMC algorithm suggests that the relative uncertainty of the RTM parameter estimates is typically less than 25% of the maximum a posteriori density (MAP) parameter value. Furthermore, the actual root-mean-square differences in long-term Tb averages and standard deviations are found consistent with the respective estimated total simulation and observation error standard deviations of 3.1 K and 2.4 K, respectively. It is also shown that the MAP parameter values estimated through MCMC simulation are in close agreement with those obtained with Particle Swarm Optimization (PSO).
Lu, X.
2014-01-01
This book studies the variety of organizational strategy selection when coping with critical uncertainties during a crisis. In dealing with uncertainties, some organizations rely on organizational routines developed over time, while some others analyze uncertainty in an ad hoc way to provide a
Directory of Open Access Journals (Sweden)
Eleanor S Devenish Nelson
BACKGROUND: Demographic models are widely used in conservation and management, and their parameterisation often relies on data collected for other purposes. When underlying data lack clear indications of associated uncertainty, modellers often fail to account for that uncertainty in model outputs, such as estimates of population growth. METHODOLOGY/PRINCIPAL FINDINGS: We applied a likelihood approach to infer uncertainty retrospectively from point estimates of vital rates. Combining this with resampling techniques and projection modelling, we show that confidence intervals for population growth estimates are easy to derive. We used similar techniques to examine the effects of sample size on uncertainty. Our approach is illustrated using data on the red fox, Vulpes vulpes, a predator of ecological and cultural importance, and the most widespread extant terrestrial mammal. We show that uncertainty surrounding estimated population growth rates can be high, even for relatively well-studied populations. Halving that uncertainty typically requires a quadrupling of sampling effort. CONCLUSIONS/SIGNIFICANCE: Our results compel caution when comparing demographic trends between populations without accounting for uncertainty. Our methods will be widely applicable to demographic studies of many species.
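The resampling-plus-projection idea can be illustrated with a hedged sketch: a two-stage Leslie matrix driven by hypothetical vital-rate data. The sample sizes, litter counts and matrix structure below are invented for illustration and are not the red fox data of the study.

```python
import numpy as np

def growth_rate(fec, surv):
    """Dominant eigenvalue (lambda) of a two-stage Leslie matrix."""
    L = np.array([[fec,  fec],
                  [surv, 0.0]])
    return float(np.max(np.abs(np.linalg.eigvals(L))))

rng = np.random.default_rng(0)
# Hypothetical raw data behind the point estimates
n_marked, n_survived = 50, 30                # survival study: 30 of 50 survived
litters = rng.poisson(3.0, size=40)          # observed litter sizes

lams = []
for _ in range(2000):
    # Resample to propagate sampling uncertainty into lambda:
    # parametric for survival, nonparametric (bootstrap) for fecundity
    s = rng.binomial(n_marked, n_survived / n_marked) / n_marked
    f = rng.choice(litters, size=litters.size, replace=True).mean() / 2.0
    lams.append(growth_rate(f, s))
lo, hi = np.percentile(lams, [2.5, 97.5])
print(f"lambda 95% CI: [{lo:.2f}, {hi:.2f}]")
```

The width of this interval as a function of sample size is exactly the quantity the authors show shrinks only with a quadrupling of sampling effort per halving of uncertainty.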
Directory of Open Access Journals (Sweden)
Douglas A. Fynan
2016-06-01
The Gaussian process model (GPM) is a flexible surrogate model that can be used for nonparametric regression for multivariate problems. A unique feature of the GPM is that a prediction variance is automatically provided with the regression function. In this paper, we estimate the safety margin of a nuclear power plant by performing regression on the output of best-estimate simulations of a large-break loss-of-coolant accident with sampling of safety system configuration, sequence timing, technical specifications, and thermal hydraulic parameter uncertainties. The key aspect of our approach is that the GPM regression is only performed on the dominant input variables, the safety injection flow rate and the delay time for AC powered pumps to start representing sequence timing uncertainty, providing a predictive model for the peak clad temperature during a reflood phase. Other uncertainties are interpreted as contributors to the measurement noise of the code output and are implicitly treated in the GPM in the noise variance term, providing local uncertainty bounds for the peak clad temperature. We discuss the applicability of the foregoing method to reduce the use of conservative assumptions in best estimate plus uncertainty (BEPU) and Level 1 probabilistic safety assessment (PSA) success criteria definitions while dealing with a large number of uncertainties.
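The GPM's defining feature, a prediction variance delivered alongside the regression mean, can be shown with a minimal one-dimensional Gaussian-process regression sketch using an RBF kernel. The data and hyperparameters are illustrative stand-ins, not the LOCA simulation output of the paper.

```python
import numpy as np

def gp_predict(X, y, Xs, length=1.0, sig_f=1.0, sig_n=0.1):
    """GP regression with an RBF kernel; returns the predictive
    mean and variance (including the noise term) at points Xs."""
    def k(a, b):
        return sig_f**2 * np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)
    K = k(X, X) + sig_n**2 * np.eye(X.size)
    Ks = k(Xs, X)
    mean = Ks @ np.linalg.solve(K, y)
    var = np.diag(k(Xs, Xs) - Ks @ np.linalg.solve(K, Ks.T)) + sig_n**2
    return mean, var

# Noisy observations of a smooth response (stand-in for code output)
rng = np.random.default_rng(0)
X = np.linspace(0, 5, 25)
y = np.sin(X) + rng.normal(0, 0.1, X.size)
Xs = np.array([2.5, 10.0])          # one point inside, one far outside the data
mean, var = gp_predict(X, y, Xs)
print(mean, var)                    # variance grows away from the training data
```

In the paper's setting, the noise variance term (`sig_n` here) is what absorbs the non-dominant uncertainties, yielding local uncertainty bounds on peak clad temperature.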
Vernon, Ian; Liu, Junli; Goldstein, Michael; Rowe, James; Topping, Jen; Lindsey, Keith
2018-01-02
Many mathematical models have now been employed across every area of systems biology. These models increasingly involve large numbers of unknown parameters, have complex structure which can result in substantial evaluation time relative to the needs of the analysis, and need to be compared to observed data of various forms. The correct analysis of such models usually requires a global parameter search, over a high dimensional parameter space, that incorporates and respects the most important sources of uncertainty. This can be an extremely difficult task, but it is essential for any meaningful inference or prediction to be made about any biological system. It hence represents a fundamental challenge for the whole of systems biology. Bayesian statistical methodology for the uncertainty analysis of complex models is introduced, which is designed to address the high dimensional global parameter search problem. Bayesian emulators that mimic the systems biology model but which are extremely fast to evaluate are embedded within an iterative history match: an efficient method to search high dimensional spaces within a more formal statistical setting, while incorporating major sources of uncertainty. The approach is demonstrated via application to a model of hormonal crosstalk in Arabidopsis root development, which has 32 rate parameters, for which we identify the sets of rate parameter values that lead to acceptable matches between model output and observed trend data. The multiple insights into the model's structure that this analysis provides are discussed. The methodology is applied to a second related model, and the biological consequences of the resulting comparison, including the evaluation of gene functions, are described. Bayesian uncertainty analysis for complex models using both emulators and history matching is shown to be a powerful technique that can greatly aid the study of a large class of systems biology models. It both provides insight into model behaviour
Xu, Zhuocan; Mace, Jay; Avalone, Linnea; Wang, Zhien
2015-04-01
The extreme variability of ice particle habits in precipitating clouds affects our understanding of these cloud systems in every aspect (e.g., radiation transfer, dynamics, precipitation rate) and largely contributes to the uncertainties in the model representation of related processes. Ice particle mass-dimensional power law relationships, M = aD^b, are commonly assumed in models and retrieval algorithms, while very little knowledge exists regarding the uncertainties of these M-D parameters in real-world situations. In this study, we apply Optimal Estimation (OE) methodology to infer the ice particle mass-dimensional relationship from ice particle size distributions and bulk water contents independently measured on board the University of Wyoming King Air during the Colorado Airborne Multi-Phase Cloud Study (CAMPS). We also utilize W-band radar reflectivity obtained on the same platform (King Air), offering a further constraint on this ill-posed problem (Heymsfield et al. 2010). In addition to the values of the retrieved M-D parameters, the associated uncertainties are conveniently acquired in the OE framework, within the limitations of assumed Gaussian statistics. We find, given the constraints provided by the bulk water measurement and in situ radar reflectivity, that the relative uncertainty of the mass-dimensional power law prefactor (a) is approximately 80% and the relative uncertainty of the exponent (b) is 10-15%. With this level of uncertainty, the forward model uncertainty in radar reflectivity would be on the order of 4 dB, or a factor of approximately 2.5 in ice water content. The implication of this finding is that inferences of bulk water from either remote or in situ measurements of particle spectra cannot be more certain than this when the mass-dimensional relationships are not known a priori, which is almost never the case.
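Outside the full Optimal Estimation framework, the basic idea of estimating the M-D parameters and their uncertainty can be sketched by fitting the power law in log space, where M = aD^b becomes linear. The data below are synthetic and the lognormal noise level is an assumption; the real retrieval constrains the fit with bulk water and radar reflectivity rather than direct mass measurements.

```python
import numpy as np

# Log-transform turns M = a * D**b into a linear fit: ln M = ln a + b ln D
rng = np.random.default_rng(0)
D = np.linspace(0.1, 2.0, 60)                  # particle dimension (cm), synthetic
a_true, b_true = 0.005, 2.1
M = a_true * D**b_true * rng.lognormal(0, 0.2, D.size)   # noisy masses (g)

A = np.vstack([np.ones_like(D), np.log(D)]).T
coef, res, *_ = np.linalg.lstsq(A, np.log(M), rcond=None)
ln_a, b = coef
# Parameter covariance from the residual variance (Gaussian assumption,
# analogous to the posterior covariance OE would report)
sigma2 = res[0] / (D.size - 2)
cov = sigma2 * np.linalg.inv(A.T @ A)
print(np.exp(ln_a), b, np.sqrt(np.diag(cov)))  # a, b, and their 1-sigma errors
```

Note the asymmetry the paper reports: because `a` is recovered through an exponential, its relative uncertainty is naturally much larger than that of the slope `b`.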
International Nuclear Information System (INIS)
Povilaitis, Mantas; Kelm, Stephan; Urbonavičius, Egidijus
2017-01-01
Highlights: • Uncertainty and sensitivity analysis for the Generic Containment severe accident. • Comparison of the analysis results with the uncertainties based in the user effect. • Demonstration of the similar importance of both reducing the user effect and input uncertainties. - Abstract: Uncertainties in safety assessment of nuclear power plants using computer codes come from several sources: the choice of computer code, the user effect (a strong impact of user choices on the simulation’s outcome) and the uncertainty of various physical parameters. The “Generic Containment” activity was performed in the framework of the EU-FP7 project SARNET2 to investigate the influence of the user effect and computer code choice on the results at the nuclear power plant scale. During this activity, a Generic Containment nodalisation was developed and used for the exercise by the participants applying various computer codes. Even though the model of the Generic Containment and the transient scenario were precisely and uniquely defined, considerably different results were obtained not only among different codes but also among participants using the same code, showing a significant influence of the user effect. This paper presents an analysis that extends the “Generic Containment” benchmark and investigates the effect of input parameter uncertainties in comparison to the user effect. Calculations were performed using the computer code ASTEC; the uncertainty and sensitivity of the results were estimated using the GRS method and the tool SUSA. The results of the present analysis show that, while there are differences between the uncertainty bands of the parameters, in general the deviation bands caused by parameter uncertainty and the user effect are comparable and of the same order. The properties of concrete and the surface areas may have more influence on containment pressure than the user effect and choice of computer code as identified in the SARNET2 Generic
Pande, S.; Arkesteijn, L.; Savenije, H.H.G.; Bastidas, L.A.
2015-01-01
This paper shows that instability of hydrological system representation in response to different pieces of information and associated prediction uncertainty is a function of model complexity. After demonstrating the connection between unstable model representation and model complexity, complexity is
Edouard, Simon; Vincendon, Béatrice; Ducrocq, Véronique
2018-05-01
Intense precipitation events in the Mediterranean often lead to devastating flash floods (FF). FF modelling is affected by several kinds of uncertainties, and Hydrological Ensemble Prediction Systems (HEPS) are designed to take those uncertainties into account. The major source of uncertainty comes from rainfall forcing, and convective-scale meteorological ensemble prediction systems can manage it for forecasting purposes. But other sources are related to the hydrological modelling part of the HEPS. This study focuses on the uncertainties arising from the hydrological model parameters and initial soil moisture, with the aim of designing an ensemble-based version of a hydrological model dedicated to simulating fast-responding Mediterranean rivers, the ISBA-TOP coupled system. The first step consists of identifying the parameters that have the strongest influence on FF simulations by assuming perfect precipitation. A sensitivity study is carried out first using a synthetic framework and then for several real events and several catchments. Perturbation methods varying the most sensitive parameters as well as initial soil moisture allow designing an ensemble-based version of ISBA-TOP. The first results of this system on some real events are presented. The direct perspective of this work will be to drive this ensemble-based version with the members of a convective-scale meteorological ensemble prediction system to design a complete HEPS for FF forecasting.
Directory of Open Access Journals (Sweden)
Shifei Yuan
2015-07-01
Accurate estimation of model parameters and state of charge (SoC) is crucial for the lithium-ion battery management system (BMS). In this paper, the stability of the model parameters and SoC estimation under measurement uncertainty is evaluated by three different factors: (i) sampling periods of 1/0.5/0.1 s; (ii) current sensor precisions of ±5/±50/±500 mA; and (iii) voltage sensor precisions of ±1/±2.5/±5 mV. Firstly, the numerical model stability analysis and parametric sensitivity analysis for battery model parameters are conducted under sampling frequencies of 1–50 Hz. A perturbation analysis of the effect of current/voltage measurement uncertainty on model parameter variation is performed theoretically. Secondly, the impact of the three different factors on the model parameters and SoC estimation was evaluated with the federal urban driving sequence (FUDS) profile. The bias correction recursive least squares (CRLS) and adaptive extended Kalman filter (AEKF) algorithms were adopted to estimate the model parameters and SoC jointly. Finally, the simulation results were compared and some insightful findings were concluded. For the given battery model and parameter estimation algorithm, the sampling period and current/voltage sampling accuracy presented a non-negligible effect on the estimation results of the model parameters. This research revealed the influence of measurement uncertainty on model parameter estimation, which will provide guidelines for selecting a reasonable sampling period and current/voltage sensor sampling precisions in engineering applications.
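The recursive least-squares update at the heart of such joint estimation can be sketched on a toy first-order battery model. The CRLS bias correction and AEKF stages of the paper are omitted, and the OCV/resistance values and sensor noise below are illustrative assumptions.

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=0.999):
    """One recursive-least-squares step with forgetting factor lam."""
    K = P @ phi / (lam + phi @ P @ phi)       # gain
    theta = theta + K * (y - phi @ theta)     # correct by the innovation
    P = (P - np.outer(K, phi @ P)) / lam      # update covariance
    return theta, P

# Toy model: v_k = OCV - R * i_k, with OCV and R unknown
rng = np.random.default_rng(0)
ocv_true, R_true = 3.7, 0.05
i = rng.uniform(-2, 2, 500)                              # current samples (A)
v = ocv_true - R_true * i + rng.normal(0, 0.002, i.size) # voltage with sensor noise

theta = np.zeros(2)             # parameter vector [OCV, R]
P = np.eye(2) * 1e3             # large initial covariance (uninformative)
for ik, vk in zip(i, v):
    phi = np.array([1.0, -ik])  # regressor for v = OCV - R*i
    theta, P = rls_update(theta, P, phi, vk)
print(theta)
```

Degrading the sensor noise or lengthening the sampling period in this sketch reproduces, in miniature, the sensitivity of the parameter estimates to measurement uncertainty that the paper quantifies.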
GRID-BASED EXPLORATION OF COSMOLOGICAL PARAMETER SPACE WITH SNAKE
International Nuclear Information System (INIS)
Mikkelsen, K.; Næss, S. K.; Eriksen, H. K.
2013-01-01
We present a fully parallelized grid-based parameter estimation algorithm for investigating multidimensional likelihoods called Snake, and apply it to cosmological parameter estimation. The basic idea is to map out the likelihood grid-cell by grid-cell according to decreasing likelihood, and stop when a certain threshold has been reached. This approach improves vastly on the 'curse of dimensionality' problem plaguing standard grid-based parameter estimation simply by disregarding grid cells with negligible likelihood. The main advantages of this method compared to standard Metropolis-Hastings Markov Chain Monte Carlo methods include (1) trivial extraction of arbitrary conditional distributions; (2) direct access to Bayesian evidences; (3) better sampling of the tails of the distribution; and (4) nearly perfect parallelization scaling. The main disadvantage is, as in the case of brute-force grid-based evaluation, a dependency on the number of parameters, N_par. One of the main goals of the present paper is to determine how large N_par can be, while still maintaining reasonable computational efficiency; we find that N_par = 12 is well within the capabilities of the method. The performance of the code is tested by comparing cosmological parameters estimated using Snake and the WMAP-7 data with those obtained using CosmoMC, the current standard code in the field. We find fully consistent results, with similar computational expenses, but shorter wall time due to the perfect parallelization scheme.
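The best-first, threshold-stopped grid expansion that Snake is built on can be sketched as follows. This is a simplified two-dimensional toy, not the production code; the step size and the log-likelihood drop threshold are assumptions for illustration.

```python
import heapq
import numpy as np

def explore_grid(loglike, start, step=0.1, drop=8.0):
    """Map a likelihood grid cell by cell in order of decreasing
    log-likelihood, expanding only cells within `drop` of the running
    maximum (a simplified, Snake-like best-first scheme)."""
    start = tuple(start)
    best = loglike(np.array(start))
    visited = {start: best}
    heap = [(-best, start)]                  # max-heap via negated values
    while heap:
        neg_ll, cell = heapq.heappop(heap)
        if -neg_ll < best - drop:
            continue                         # below threshold: do not expand
        for d in range(len(cell)):
            for s in (-step, step):
                nb = list(cell)
                nb[d] = round(nb[d] + s, 10)
                nb = tuple(nb)
                if nb not in visited:
                    ll = loglike(np.array(nb))
                    visited[nb] = ll
                    best = max(best, ll)
                    heapq.heappush(heap, (-ll, nb))
    return visited

# 2-D Gaussian log-likelihood: only cells near the peak get evaluated
cells = explore_grid(lambda x: -0.5 * float(np.sum(x**2)), (0.0, 0.0))
print(len(cells), "cells mapped")
```

Because cells below the threshold are never expanded, the number of evaluations scales with the volume of the high-likelihood region rather than the full hypercube, which is the escape from the curse of dimensionality the abstract describes.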
GRID-BASED EXPLORATION OF COSMOLOGICAL PARAMETER SPACE WITH SNAKE
Energy Technology Data Exchange (ETDEWEB)
Mikkelsen, K.; Næss, S. K.; Eriksen, H. K., E-mail: kristin.mikkelsen@astro.uio.no [Institute of Theoretical Astrophysics, University of Oslo, P.O. Box 1029, Blindern, NO-0315 Oslo (Norway)
2013-11-10
We present a fully parallelized grid-based parameter estimation algorithm for investigating multidimensional likelihoods called Snake, and apply it to cosmological parameter estimation. The basic idea is to map out the likelihood grid-cell by grid-cell according to decreasing likelihood, and stop when a certain threshold has been reached. This approach improves vastly on the 'curse of dimensionality' problem plaguing standard grid-based parameter estimation simply by disregarding grid cells with negligible likelihood. The main advantages of this method compared to standard Metropolis-Hastings Markov Chain Monte Carlo methods include (1) trivial extraction of arbitrary conditional distributions; (2) direct access to Bayesian evidences; (3) better sampling of the tails of the distribution; and (4) nearly perfect parallelization scaling. The main disadvantage is, as in the case of brute-force grid-based evaluation, a dependency on the number of parameters, N{sub par}. One of the main goals of the present paper is to determine how large N{sub par} can be, while still maintaining reasonable computational efficiency; we find that N{sub par} = 12 is well within the capabilities of the method. The performance of the code is tested by comparing cosmological parameters estimated using Snake and the WMAP-7 data with those obtained using CosmoMC, the current standard code in the field. We find fully consistent results, with similar computational expenses, but shorter wall time due to the perfect parallelization scheme.
Impact of nuclear data uncertainties on neutronics parameters of MYRRHA/XT-ADS
International Nuclear Information System (INIS)
Sugawara, T.; Stankovskiy, A.; Van den Eynde, G.; Sarotto, M.
2011-01-01
A flexible fast spectrum research reactor MYRRHA, able to operate in subcritical (driven by a proton accelerator) and critical modes, is being developed at SCK-CEN. In the framework of the IP EUROTRANS programme, the XT-ADS model has been investigated for MYRRHA. This paper reports the comparison of the sensitivity coefficients calculated for different calculation models and the uncertainties deduced from various covariance data, to discuss the reliability of the XT-ADS neutronics design. The sensitivity analysis is based on the comparison of three-dimensional heterogeneous and two-dimensional RZ calculation models. Three covariance data sets were employed to perform the uncertainty analysis. The obtained sensitivity coefficients differ substantially between the 3D heterogeneous and RZ homogenized calculation models. The uncertainties deduced from the covariance data strongly depend on the covariance data variation. The covariance data of the nuclear data libraries remain an open issue in assessing the reliability of the neutronics design. The uncertainties deduced from the covariance data for XT-ADS are 0.94% and 1.9% by the SCALE-6 44-group and TENDL-2009 covariance data, respectively. These uncertainties exceed the 0.3% Δk (confidence level 1σ) target accuracy. To achieve this target accuracy, the uncertainties should be improved by experiments under adequate conditions, such as an LBE- or Pb-moderated environment with MOX or uranium fuel.
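The uncertainty deduced from sensitivity coefficients and covariance data follows the standard "sandwich rule", var = SᵀCS, which a short numerical sketch makes concrete. The sensitivity vector and covariance matrix below are hypothetical values for illustration, not the XT-ADS data.

```python
import numpy as np

# "Sandwich rule": propagate a nuclear-data covariance C through
# sensitivity coefficients S to get the variance of a response (e.g. k_eff)
S = np.array([0.02, -0.015, 0.008])        # relative sensitivities (hypothetical)
C = np.array([[4.0, 1.0, 0.0],
              [1.0, 9.0, 0.5],
              [0.0, 0.5, 2.5]]) * 1e-4     # relative covariance (hypothetical)
var = S @ C @ S
print(f"relative uncertainty: {np.sqrt(var) * 100:.3f}%")
```

The off-diagonal covariance terms are what make the library choice matter: the same sensitivities propagated through SCALE-6 versus TENDL-2009 covariances give the differing 0.94% and 1.9% figures the paper reports.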
Blum, David Arthur
Algae biodiesel is the sole sustainable and abundant transportation fuel source that can replace petroleum diesel; however, high competition and economic uncertainties exist, influencing independent venture capital decision making. Technology, market, management, and government-action uncertainties influence competition and economic uncertainties in the venture capital industry. The purpose of this qualitative case study was to identify the best-practice skills at independent venture capital (IVC) firms for predicting uncertainty between early and late funding stages. The basis of the study was real options theory, a framework used to evaluate and understand the economic and competition uncertainties inherent in natural resource investment and energy derived from plant-based oils. Data were collected from interviews of 24 venture capital partners based in the United States who invest in algae and other renewable energy solutions. Data were analyzed by coding and theme development interwoven with the conceptual framework. Eight themes emerged: (a) expected returns model, (b) due diligence, (c) invest in specific sectors, (d) reduced uncertainty-late stage, (e) coopetition, (f) portfolio firm relationships, (g) differentiation strategy, and (h) modeling uncertainty and best practice. The most noteworthy finding was that predicting uncertainty at the early stage was impractical; at the expansion and late funding stages, however, predicting uncertainty was possible. These findings support social change by providing independent venture capitalists with best-practice skills to increase successful exits, lessen uncertainty, and encourage increased funding of renewable energy firms, contributing to cleaner and healthier communities throughout the United States.
Exploring control parameters of two photon processes in solutions
Indian Academy of Sciences (India)
Here, we present the effect of several control parameters on the TPA process that are independent of .... as the typical selection rules and pathways of molecular transitions for ..... Inset in the graph shows the 780 beam spectra at two ...
Manceau, Jean-Charles; Loschetter, Annick; Rohmer, Jérémy; Le Cozannet, Gonéri; de Lary, Louis; Le Guénan, Thomas; Hnottavange-Telleen, Ken
2017-04-01
Dempster-Shafer theory has been used to represent and aggregate these pieces of information. The results of different aggregation rules, as well as those of a classical probabilistic approach, are compared with the purpose of highlighting the elements each of them could provide to the decision-maker (Manceau et al., 2016). The second example focuses on projections of future sea-level rise. Based on the IPCC's constraints on the projection quantiles, and on the scientific community's consensus level on the physical limits to future sea-level rise, a possibility distribution of the projections by 2100 under the RCP 8.5 scenario has been established. This possibility distribution has been confronted with a set of previously published probabilistic sea-level projections, with a focus on their ability to explore high ranges of sea-level rise (Le Cozannet et al., 2016). These two examples are complementary in that they address different aspects of the problem (e.g. representation of different types of information, conflict among experts, dependence among sources). Moreover, we believe that the issues faced during these two experiences can be generalized to many risk/hazard assessment situations. References: Manceau, JC., Loschetter, A., Rohmer, J., de Lary, L., Le Guénan, T., Hnottavange-Telleen, K. (2016). Dealing with uncertainty on parameters elicited from a pool of experts for CCS risk assessment. Congrès λμ 20 (St-Malo, France). Le Cozannet, G., Manceau, JC., Rohmer, J. (2016). Bounding probabilistic sea-level rise projections within the framework of the possibility theory. Accepted in Environmental Research Letters.
Zonta, Zivko J; Flotats, Xavier; Magrí, Albert
2014-08-01
The procedure commonly used for the assessment of the parameters included in activated sludge models (ASMs) relies on the estimation of their optimal values within a confidence region (i.e. frequentist inference). Once optimal values are estimated, parameter uncertainty is computed through the covariance matrix. However, alternative approaches that treat the model parameters as probability distributions (i.e. Bayesian inference) may be of interest. The aim of this work is to apply and compare both Bayesian and frequentist inference methods when assessing uncertainty for an ASM-type model that simultaneously considers intracellular storage and biomass growth. Practical identifiability was addressed exclusively considering respirometric profiles based on the oxygen uptake rate, with the aid of probabilistic global sensitivity analysis. Parameter uncertainty was then estimated according to both the Bayesian and frequentist inferential procedures, and the results were compared to highlight the strengths and weaknesses of both approaches. Since it was demonstrated that Bayesian inference reduces to a frequentist approach under particular hypotheses, the former can be considered the more general methodology. Hence, the use of Bayesian inference is encouraged for tackling inferential issues in ASM environments.
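The two inferential routes compared in the study can be contrasted on a toy one-parameter decay model (a generic stand-in, not an ASM): the frequentist route locates the optimum of the squared-error objective and derives a parameter variance from its curvature, while the Bayesian route characterizes the full posterior with a Metropolis random walk. A minimal sketch, assuming Gaussian observation noise of known standard deviation:

```python
import math, random

random.seed(0)
# Synthetic respirometric-style data: y = exp(-k t) + noise, true k = 0.5
ts = [0.2 * i for i in range(30)]
k_true, sigma = 0.5, 0.05
ys = [math.exp(-k_true * t) + random.gauss(0, sigma) for t in ts]

def sse(k):
    return sum((y - math.exp(-k * t)) ** 2 for t, y in zip(ts, ys))

# Frequentist: optimal k minimises the SSE; its uncertainty comes from the
# curvature of the objective (1-parameter analogue of the covariance matrix).
k_hat = min((k / 1000 for k in range(1, 2000)), key=sse)
h = 1e-3
curv = (sse(k_hat + h) - 2 * sse(k_hat) + sse(k_hat - h)) / h ** 2
var_freq = 2 * sigma ** 2 / curv      # var(k) ~ 2 sigma^2 / d2SSE/dk2

# Bayesian: Metropolis random walk over k with a flat prior.
def logpost(k):
    return -sse(k) / (2 * sigma ** 2)

k, samples = k_hat, []
for i in range(20000):
    prop = k + random.gauss(0, 0.05)
    if math.log(random.random()) < logpost(prop) - logpost(k):
        k = prop
    if i > 2000:                       # discard burn-in
        samples.append(k)
post_mean = sum(samples) / len(samples)
```

Under a flat prior and Gaussian noise, the posterior mean lands close to the least-squares optimum, which is the "reduction to a frequentist approach under particular hypotheses" noted in the abstract.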
International Nuclear Information System (INIS)
Baum, C; Alber, M; Birkner, M; Nuesslin, F
2004-01-01
Geometric uncertainties arise during treatment planning and treatment delivery, meaning that dose-dependent parameters such as EUD are random variables with a patient-specific probability distribution. Treatment planning with highly conformal techniques such as intensity-modulated radiation therapy requires new evaluation tools that allow the influence of geometric uncertainties on the probable treatment dose to be estimated for a planned dose distribution. Monte Carlo simulations of treatment courses, with the dose recalculated according to the daily geometric errors, are the gold standard for such an evaluation. Distribution histograms, which show the relative frequency of a treatment quality parameter across the simulated treatment courses, can be used to evaluate the potential risks and chances of a planned dose distribution. Because treatment simulations with dose recalculation are very time consuming at sufficient statistical accuracy, it is proposed to perform treatment simulations in the dose parameter space, where the result is mainly determined by the systematic and random components of the geometric uncertainties. Comparison of the parameter-space simulation method with the gold standard for prostate cases and a head-and-neck case shows good agreement as long as the number of fractions is high enough and the influence of tissue inhomogeneities and surface curvature on the dose is small.
Directory of Open Access Journals (Sweden)
Mousong Wu
2016-02-01
Water and energy processes in frozen soils are important for better understanding hydrologic processes and water resources management in cold regions. To investigate the water and energy balance of seasonally frozen soils, CoupModel was combined with the generalized likelihood uncertainty estimation (GLUE) method and applied to water and heat processes in frozen soil in northern China during the 2012/2013 winter. Ensemble simulations generated by Monte Carlo sampling were used for uncertainty analysis. Behavioral simulations were selected based on combinations of multiple model-performance criteria with respect to simulated soil water and temperature at four depths (5 cm, 15 cm, 25 cm, and 35 cm). Posterior distributions for parameters related to soil hydraulics, radiation processes, and heat transport indicated that uncertainties in both inputs and model structure influence model performance when modeling water and heat processes in seasonally frozen soils. Seasonal courses in water and energy partitioning were evident during the winter. Within the daily cycle, soil evaporation/condensation and energy distributions were well captured and identified as an important phenomenon in the dynamics of the energy balance. The combination of CoupModel simulations with the uncertainty-based calibration method provides a way of understanding the seasonal courses of hydrology and energy processes in cold regions with limited data. Additional measurements may be used to further reduce the uncertainty of regulating factors during the different stages of freezing-thawing.
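Stripped to its essentials, GLUE is a Monte Carlo rejection scheme: sample parameter sets from prior ranges, score each run against observations with an informal likelihood measure (here the Nash-Sutcliffe efficiency), keep only "behavioral" sets above a threshold, and weight the retained sets by their likelihood. The toy model below is a two-parameter linear stand-in, not CoupModel; all names and numbers are illustrative.

```python
import random

random.seed(1)
# Toy "model": a linear response with two uncertain parameters, plus
# synthetic noisy observations (true values a = 2, b = 1).
obs_x = list(range(10))
obs_y = [2.0 * x + 1.0 + random.gauss(0, 0.5) for x in obs_x]

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit."""
    mean_obs = sum(obs) / len(obs)
    num = sum((s - o) ** 2 for s, o in zip(sim, obs))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1 - num / den

# GLUE: Monte Carlo sampling from uniform priors, keep behavioral sets
behavioral = []
for _ in range(5000):
    a = random.uniform(0, 4)
    b = random.uniform(-2, 4)
    sim = [a * x + b for x in obs_x]
    eff = nse(sim, obs_y)
    if eff > 0.9:                      # behavioral threshold
        behavioral.append((eff, a, b))

# Likelihood-weighted posterior mean of each parameter
wsum = sum(e for e, _, _ in behavioral)
a_post = sum(e * a for e, a, _ in behavioral) / wsum
b_post = sum(e * b for e, _, b in behavioral) / wsum
```

The spread of the behavioral set, rather than a single optimum, is what GLUE reports as parameter uncertainty.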
Energy Technology Data Exchange (ETDEWEB)
Eldred, Michael Scott; Vigil, Dena M.; Dalbey, Keith R.; Bohnhoff, William J.; Adams, Brian M.; Swiler, Laura Painton; Lefantzi, Sophia (Sandia National Laboratories, Livermore, CA); Hough, Patricia Diane (Sandia National Laboratories, Livermore, CA); Eddy, John P.
2011-12-01
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a theoretical manual for selected algorithms implemented within the DAKOTA software. It is not intended as a comprehensive theoretical treatment, since a number of existing texts cover general optimization theory, statistical analysis, and other introductory topics. Rather, this manual is intended to summarize a set of DAKOTA-related research publications in the areas of surrogate-based optimization, uncertainty quantification, and optimization under uncertainty that provide the foundation for many of DAKOTA's iterative analysis capabilities.
Analysis of parameter uncertainties in the assessment of seismic risk for nuclear power plants
International Nuclear Information System (INIS)
Yucemen, S.M.
1981-04-01
Probabilistic and statistical methods are used to develop a procedure by which the seismic risk at a specific site can be systematically analyzed. The proposed probabilistic procedure provides a consistent method for the modelling, analysis and updating of the uncertainties involved in seismic risk analysis for nuclear power plants. Methods are proposed for including these uncertainties in the final value of the calculated risk. Two specific case studies are presented in detail to illustrate the application of the probabilistic method of seismic risk evaluation and to investigate the sensitivity of the results to different assumptions.
Exploring uncertainty in glacier mass balance modelling with Monte Carlo simulation
Machguth, H.; Purves, R.S.; Oerlemans, J.; Hoelzle, M.; Paul, F.
2008-01-01
By means of Monte Carlo simulations we calculated uncertainty in modelled cumulative mass balance over 400 days at one particular point on the tongue of Morteratsch Glacier, Switzerland, using a glacier energy balance model of intermediate complexity. Before uncertainty assessment, the model was
Tyler Jon Smith; Lucy Amanda Marshall
2010-01-01
Model selection is an extremely important aspect of many hydrologic modeling studies because of the complexity, variability, and uncertainty that surrounds the current understanding of watershed-scale systems. However, development and implementation of a complete precipitation-runoff modeling framework, from model selection to calibration and uncertainty analysis, are...
Lehmkuhl, Markus; Peters, Hans Peter
2016-11-01
Based on 21 individual case studies, this article inventories the ways journalism deals with scientific uncertainty. The study identifies the decisions that impact a journalist's perception of a truth claim as unambiguous or ambiguous and the strategies to deal with uncertainty that arise from this perception. Key for understanding journalistic action is the outcome of three evaluations: What is the story about? How shall the story be told? What type of story is it? We reconstructed the strategies to overcome journalistic decision-making uncertainty in those cases in which they perceived scientific contingency as a problem. Journalism deals with uncertainty by way of omission, by contrasting the conflicting messages or by acknowledging the problem via the structure or language. One finding deserves particular mention: The lack of focus on scientific uncertainty is not only a problem of how journalists perceive and communicate but also a problem of how science communicates. © The Author(s) 2016.
International Nuclear Information System (INIS)
Ochs, Michael; Talerico, Caterina
2004-08-01
SKB is currently preparing license applications related to the deep repository for spent nuclear fuel and an encapsulation plant. The present report is one of several specific data reports feeding into the interim reporting for the latter application; it is concerned with the derivation and recommendation of radionuclide migration input parameters for a MX-80 bentonite buffer to PA models. Recommended values and associated uncertainties are derived and documented for a total of 38 elements and oxidation states for the following parameters: diffusion-available porosity (ε), effective diffusivity (D_e), and distribution coefficient (K_d). Because of the conditional nature of these parameters, particularly of K_d, they were derived specifically for the conditions expected to be relevant for PA consequence calculations. K_d values were generally evaluated for the specific porewater composition and solid/water ratio representative of MX-80 compacted to 1,590 kg/m³. Because of the highly conditional nature of K_d, this was done for several porewater compositions reflecting possible variations in geochemical boundary conditions. D_e and ε were derived as functions of density. Parameter derivation was based on systematic datasets available in the literature and/or on thermodynamic models. Associated uncertainties were assessed for a given set of PA conditions and as a function of variability in these conditions. In a final step, apparent diffusivity (D_a) values were calculated from the recommended parameters and compared with independent experimental measurements to arrive at self-consistent sets of migration parameters.
Reyes, J. J.; Adam, J. C.; Tague, C.
2016-12-01
Grasslands play an important role in agricultural production as forage for livestock; they also provide a diverse set of ecosystem services including soil carbon (C) storage. The partitioning of C between above and belowground plant compartments (i.e. allocation) is influenced by both plant characteristics and environmental conditions. The objectives of this study are to 1) develop and evaluate a hybrid C allocation strategy suitable for grasslands, and 2) apply this strategy to examine the importance of various parameters related to biogeochemical cycling, photosynthesis, allocation, and soil water drainage on above and belowground biomass. We include allocation as an important process in quantifying the model parameter uncertainty, which identifies the most influential parameters and what processes may require further refinement. For this, we use the Regional Hydro-ecologic Simulation System, a mechanistic model that simulates coupled water and biogeochemical processes. A Latin hypercube sampling scheme was used to develop parameter sets for calibration and evaluation of allocation strategies, as well as parameter uncertainty analysis. We developed the hybrid allocation strategy to integrate both growth-based and resource-limited allocation mechanisms. When evaluating the new strategy simultaneously for above and belowground biomass, it produced a larger number of less biased parameter sets: 16% more compared to resource-limited and 9% more compared to growth-based. This also demonstrates its flexible application across diverse plant types and environmental conditions. We found that higher parameter importance corresponded to sub- or supra-optimal resource availability (i.e. water, nutrients) and temperature ranges (i.e. too hot or cold). For example, photosynthesis-related parameters were more important at sites warmer than the theoretical optimal growth temperature. Therefore, larger values of parameter importance indicate greater relative sensitivity in
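A Latin hypercube scheme like the one used to generate the parameter sets can be written in a few lines: each parameter's range is divided into n equal strata, one point is drawn per stratum, and the stratum orderings are shuffled independently per parameter, so every one-dimensional projection of the sample is stratified. The parameter bounds below are hypothetical placeholders, not values from the study.

```python
import random

def latin_hypercube(n_samples, bounds, seed=42):
    """Latin hypercube sample: returns n_samples tuples, one value per
    parameter, with exactly one draw in each of n_samples equal strata
    of every parameter's range."""
    rng = random.Random(seed)
    columns = []
    for lo, hi in bounds:
        strata = list(range(n_samples))
        rng.shuffle(strata)            # decouple strata across parameters
        width = (hi - lo) / n_samples
        columns.append([lo + (s + rng.random()) * width for s in strata])
    return [tuple(col[i] for col in columns) for i in range(n_samples)]

# Hypothetical ranges for three allocation-related parameters
bounds = [(0.0, 1.0), (10.0, 50.0), (-5.0, 5.0)]
sets = latin_hypercube(20, bounds)
```

Compared to plain random sampling, the stratification guarantees that no region of any single parameter's range is left unsampled, which matters when each model run is expensive.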
Bennett, A.; Nijssen, B.; Chegwidden, O.; Wood, A.; Clark, M. P.
2017-12-01
Model intercomparison experiments have been conducted to quantify the variability introduced during the model development process, but have had limited success in identifying the sources of this model variability. The Structure for Unifying Multiple Modeling Alternatives (SUMMA) has been developed as a framework which defines a general set of conservation equations for mass and energy as well as a common core of numerical solvers along with the ability to set options for choosing between different spatial discretizations and flux parameterizations. SUMMA can be thought of as a framework for implementing meta-models which allows for the investigation of the impacts of decisions made during the model development process. Through this flexibility we develop a hierarchy of definitions which allows for models to be compared to one another. This vocabulary allows us to define the notion of weak equivalence between model instantiations. Through this weak equivalence we develop the concept of model mimicry, which can be used to investigate the introduction of uncertainty and error during the modeling process as well as provide a framework for identifying modeling decisions which may complement or negate one another. We instantiate SUMMA instances that mimic the behaviors of the Variable Infiltration Capacity (VIC) model and the Precipitation Runoff Modeling System (PRMS) by choosing modeling decisions which are implemented in each model. We compare runs from these models and their corresponding mimics across the Columbia River Basin located in the Pacific Northwest of the United States and Canada. From these comparisons, we are able to determine the extent to which model implementation has an effect on the results, as well as determine the changes in sensitivity of parameters due to these implementation differences. By examining these changes in results and sensitivities we can attempt to postulate changes in the modeling decisions which may provide better estimation of
Jay, Sylvain; Guillaume, Mireille; Chami, Malik; Minghelli, Audrey; Deville, Yannick; Lafrance, Bruno; Serfaty, Véronique
2018-01-22
We present an analytical approach based on Cramér-Rao Bounds (CRBs) to investigate the uncertainties in estimated ocean color parameters resulting from the propagation of uncertainties in the bio-optical reflectance modeling through the inversion process. Given bio-optical and noise probabilistic models, CRBs can be computed efficiently for any set of ocean color parameters and any sensor configuration, directly providing the minimum estimation variance that can possibly be attained by any unbiased estimator of any targeted parameter. Here, CRBs are explicitly developed using (1) two water reflectance models, corresponding to deep and shallow waters respectively, and (2) four probabilistic models describing the environmental noises observed within Sentinel-2 MSI, HICO, Sentinel-3 OLCI and MODIS images, respectively. For both deep and shallow waters, the CRBs are shown to be consistent with the experimental estimation variances obtained using two published remote-sensing methods, while not requiring any inversion to be performed. CRBs are also used to investigate to what extent perfect a priori knowledge of one or several geophysical parameters can improve the estimation of the remaining unknown parameters. For example, using pre-existing knowledge of bathymetry (e.g., derived from LiDAR) within the inversion is shown to greatly improve the retrieval of bottom cover for shallow waters. Finally, CRBs are shown to provide valuable information on the best estimation performance that may be achieved with the MSI, HICO, OLCI and MODIS configurations for a variety of oceanic, coastal and inland waters. CRBs are thus demonstrated to be an informative and efficient tool for characterizing minimum uncertainties in inverted ocean color geophysical parameters.
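For a single parameter under i.i.d. Gaussian noise, the CRB collapses to σ² / Σ (∂r/∂c)², which makes the "consistent with experimental estimation variances, without performing any inversion" point easy to probe numerically: the variance of a brute-force least-squares estimator over repeated noisy realizations should sit at, not below, the bound. The exponential "reflectance" model below is purely illustrative, not a real bio-optical model.

```python
import math, random

random.seed(2)
# Toy reflectance model r(lam; c) = exp(-c * lam) at four "bands"
bands = [0.4, 0.5, 0.6, 0.7]
c_true, sigma = 1.5, 0.01

def model(c):
    return [math.exp(-c * lam) for lam in bands]

# CRB for one parameter with i.i.d. Gaussian noise:
# var(c_hat) >= sigma^2 / sum_i (dr_i/dc)^2
deriv_sq = sum((lam * math.exp(-c_true * lam)) ** 2 for lam in bands)
crb = sigma ** 2 / deriv_sq

# Empirical check: variance of a grid least-squares estimator
def estimate(obs):
    return min((c / 1000 for c in range(500, 2500)),
               key=lambda c: sum((o - m) ** 2 for o, m in zip(obs, model(c))))

ests = []
for _ in range(300):
    obs = [r + random.gauss(0, sigma) for r in model(c_true)]
    ests.append(estimate(obs))
mean_e = sum(ests) / len(ests)
var_emp = sum((e - mean_e) ** 2 for e in ests) / (len(ests) - 1)
```

Since the least-squares estimator is the maximum-likelihood estimator for Gaussian noise, its empirical variance lands close to the bound, mirroring the consistency reported in the abstract.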
Maina, Fadji Zaouna; Guadagnini, Alberto
2018-01-01
We study the contribution of typically uncertain subsurface flow parameters to gravity changes that can be recorded during pumping tests in unconfined aquifers. We do so in the framework of a Global Sensitivity Analysis and quantify the effects of uncertainty of such parameters on the first four statistical moments of the probability distribution of gravimetric variations induced by the operation of the well. System parameters are grouped into two main categories, respectively, governing groundwater flow in the unsaturated and saturated portions of the domain. We ground our work on the three-dimensional analytical model proposed by Mishra and Neuman (2011), which fully takes into account the richness of the physical process taking place across the unsaturated and saturated zones and storage effects in a finite radius pumping well. The relative influence of model parameter uncertainties on drawdown, moisture content, and gravity changes are quantified through (a) the Sobol' indices, derived from a classical decomposition of variance and (b) recently developed indices quantifying the relative contribution of each uncertain model parameter to the (ensemble) mean, skewness, and kurtosis of the model output. Our results document (i) the importance of the effects of the parameters governing the unsaturated flow dynamics on the mean and variance of local drawdown and gravity changes; (ii) the marked sensitivity (as expressed in terms of the statistical moments analyzed) of gravity changes to the employed water retention curve model parameter, specific yield, and storage, and (iii) the influential role of hydraulic conductivity of the unsaturated and saturated zones to the skewness and kurtosis of gravimetric variation distributions. The observed temporal dynamics of the strength of the relative contribution of system parameters to gravimetric variations suggest that gravity data have a clear potential to provide useful information for estimating the key hydraulic
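First-order Sobol' indices of the kind used above can be estimated with a generic pick-freeze Monte Carlo scheme requiring no access to model internals. The sketch below applies a standard Saltelli-type estimator to an additive test function whose indices are known exactly (S1 = 0.2, S2 = 0.8); it is a generic illustration, not the groundwater-gravity model of the study.

```python
import random

def first_order_sobol(f, n_params, n=20000, seed=3):
    """Pick-freeze Monte Carlo estimate of first-order Sobol' indices
    S_i = Var(E[Y|X_i]) / Var(Y) for independent U(0,1) inputs."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(n_params)] for _ in range(n)]
    B = [[rng.random() for _ in range(n_params)] for _ in range(n)]
    yA = [f(x) for x in A]
    yB = [f(x) for x in B]
    mean = sum(yA) / n
    var = sum((y - mean) ** 2 for y in yA) / n
    S = []
    for i in range(n_params):
        # A_B^i: matrix A with column i replaced by column i of B
        yABi = [f(a[:i] + [b[i]] + a[i + 1:]) for a, b in zip(A, B)]
        S.append(sum(yb * (yi - ya)
                     for ya, yb, yi in zip(yA, yB, yABi)) / n / var)
    return S

# Additive test function Y = X1 + 2*X2, exact indices 0.2 and 0.8
S = first_order_sobol(lambda x: x[0] + 2 * x[1], 2)
```

The same estimator generalizes to the higher statistical moments mentioned in the abstract by replacing the variance decomposition with moment-based sensitivity measures.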
Exploring parameter constraints on quintessential dark energy: The exponential model
International Nuclear Information System (INIS)
Bozek, Brandon; Abrahamse, Augusta; Albrecht, Andreas; Barnard, Michael
2008-01-01
We present an analysis of a scalar field model of dark energy with an exponential potential using the Dark Energy Task Force (DETF) simulated data models. Using Markov Chain Monte Carlo sampling techniques we examine the ability of each simulated data set to constrain the parameter space of the exponential potential for data sets based on a cosmological constant and a specific exponential scalar field model. We compare our results with the constraining power calculated by the DETF using their 'w_0-w_a' parametrization of the dark energy. We find that the respective increases in constraining power from one stage to the next produced by our analysis are consistent with the DETF results. To further investigate the potential impact of future experiments, we also generate simulated data for an exponential model background cosmology which cannot be distinguished from a cosmological constant at DETF 'Stage 2', and show that for this cosmology good DETF Stage 4 data would exclude a cosmological constant by better than 3σ.
Exploring drivers of wetland hydrologic fluxes across parameters and space
Jones, C. N.; Cheng, F. Y.; Mclaughlin, D. L.; Basu, N. B.; Lang, M.; Alexander, L. C.
2017-12-01
Depressional wetlands provide diverse ecosystem services, ranging from critical habitat to the regulation of landscape hydrology. The latter is of particular interest, because while hydrologic connectivity between depressional wetlands and downstream waters has been a focus of both scientific research and policy, it remains difficult to quantify the mode, magnitude, and timing of this connectivity at varying spatial and temporal scales. Doing so requires robust empirical and modeling tools that accurately represent surface and subsurface flowpaths between depressional wetlands and other landscape elements. Here, we utilize a parsimonious wetland hydrology model to explore drivers of wetland water fluxes in different archetypal wetland-rich landscapes. We validated the model using instrumented sites from regions that span North America: the Prairie Pothole Region (south-central Canada), the Delmarva Peninsula (Mid-Atlantic Coastal Plain), and Big Cypress Swamp (southern Florida). Then, using several national-scale datasets (e.g., National Wetlands Inventory, USFWS; National Hydrography Dataset, USGS; Soil Survey Geographic Database, NRCS), we conducted a global sensitivity analysis to elucidate dominant drivers of simulated fluxes. Finally, we simulated and compared wetland hydrology in five contrasting landscapes dominated by depressional wetlands: prairie potholes, Carolina and Delmarva bays, pocosins, western vernal pools, and Texas coastal prairie wetlands. Results highlight specific drivers that vary across these regions. Largely, hydroclimatic variables (e.g., PET/P ratios) controlled the timing and magnitude of wetland connectivity, whereas both wetland morphology (e.g., storage capacity and watershed size) and soil characteristics (e.g., ksat and confining layer depth) controlled the duration and mode (surface vs. subsurface) of wetland connectivity. Improved understanding of the drivers of wetland hydrologic connectivity supports enhanced, region
Uncertainty in the use of MAMA software to measure particle morphological parameters from SEM images
Energy Technology Data Exchange (ETDEWEB)
Schwartz, Daniel S. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Tandon, Lav [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-06-05
The MAMA software package developed at LANL is designed to make morphological measurements on a wide variety of digital images of objects. At LANL, we have focused on using MAMA to measure scanning electron microscope (SEM) images of particles, as this is a critical part of our forensic analysis of interdicted radiologic materials. In order to successfully use MAMA to make such measurements, we must understand the level of uncertainty involved in the process, so that we can rigorously support our quantitative conclusions.
International Nuclear Information System (INIS)
Heo, Jaeseok; Kim, Kyung Doo
2015-01-01
Statistical approaches to uncertainty quantification and sensitivity analysis are very important in estimating the safety margins for an engineering design application. This paper presents a system analysis and optimization toolkit developed by Korea Atomic Energy Research Institute (KAERI), which includes multiple packages of the sensitivity analysis and uncertainty quantification algorithms. In order to reduce the computing demand, multiple compute resources including multiprocessor computers and a network of workstations are simultaneously used. A Graphical User Interface (GUI) was also developed within the parallel computing framework for users to readily employ the toolkit for an engineering design and optimization problem. The goal of this work is to develop a GUI framework for engineering design and scientific analysis problems by implementing multiple packages of system analysis methods in the parallel computing toolkit. This was done by building an interface between an engineering simulation code and the system analysis software packages. The methods and strategies in the framework were designed to exploit parallel computing resources such as those found in a desktop multiprocessor workstation or a network of workstations. Available approaches in the framework include statistical and mathematical algorithms for use in science and engineering design problems. Currently the toolkit has 6 modules of the system analysis methodologies: deterministic and probabilistic approaches of data assimilation, uncertainty propagation, Chi-square linearity test, sensitivity analysis, and FFTBM
Energy Technology Data Exchange (ETDEWEB)
Chatterjee, Samrat; Tipireddy, Ramakrishna; Oster, Matthew R.; Halappanavar, Mahantesh
2016-09-16
Securing cyber-systems on a continual basis against a multitude of adverse events is a challenging undertaking. Game-theoretic approaches, that model actions of strategic decision-makers, are increasingly being applied to address cybersecurity resource allocation challenges. Such game-based models account for multiple player actions and represent cyber attacker payoffs mostly as point utility estimates. Since a cyber-attacker’s payoff generation mechanism is largely unknown, appropriate representation and propagation of uncertainty is a critical task. In this paper we expand on prior work and focus on operationalizing the probabilistic uncertainty quantification framework, for a notional cyber system, through: 1) representation of uncertain attacker and system-related modeling variables as probability distributions and mathematical intervals, and 2) exploration of uncertainty propagation techniques including two-phase Monte Carlo sampling and probability bounds analysis.
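Two-phase Monte Carlo keeps epistemic and aleatory uncertainty separate: the outer loop samples quantities known only as intervals, the inner loop samples probability distributions, and the outer-loop spread of any inner-loop statistic yields probability bounds. A notional sketch with an invented payoff function; all variables and numbers below are illustrative, not from the paper.

```python
import random

random.seed(4)

# Notional attacker payoff: base_reward * success_prob - detection_cost.
# base_reward and detection_cost are aleatory (distributions);
# success_prob is epistemic, known only as the interval [0.3, 0.7].
def payoff(base, p, cost):
    return base * p - cost

n_outer, n_inner = 50, 2000
cdfs_at_3 = []                         # P(payoff <= 3) per epistemic draw
for _ in range(n_outer):
    p = random.uniform(0.3, 0.7)       # outer loop: sample the interval
    hits = 0
    for _ in range(n_inner):           # inner loop: sample distributions
        v = payoff(random.gauss(10, 2), p, random.uniform(1, 3))
        if v <= 3.0:
            hits += 1
    cdfs_at_3.append(hits / n_inner)

# The outer-loop spread gives probability bounds at this threshold
lower, upper = min(cdfs_at_3), max(cdfs_at_3)
```

The wide [lower, upper] band is the point of the exercise: collapsing the interval to a single "best guess" probability would hide how much of the payoff uncertainty is epistemic rather than aleatory.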
Llopis-Albert, Carlos; Palacios-Marqués, Daniel; Merigó, José M.
2014-04-01
In this paper a methodology for the stochastic management of groundwater quality problems is presented, which can be used to provide agricultural advisory services. A stochastic algorithm for solving the coupled flow and mass-transport inverse problem is combined with a stochastic management approach to develop methods for integrating uncertainty, thus obtaining more reliable policies on the control of groundwater nitrate pollution from agriculture. The stochastic inverse model allows identifying non-Gaussian parameters and reducing uncertainty in heterogeneous aquifers by constraining stochastic simulations to data. The management model determines the spatial and temporal distribution of fertilizer application rates that maximizes net benefits in agriculture, constrained by quality requirements in groundwater at various control sites. The quality constraints can be taken, for instance, from water laws such as the EU Water Framework Directive (WFD). Furthermore, the methodology quantifies the trade-off between higher economic returns and reliability in meeting the environmental standards, and can therefore help stakeholders in decision-making under uncertainty. The methodology has been successfully applied to a 2D synthetic aquifer, where an uncertainty assessment has been carried out by means of Monte Carlo simulation techniques.
International Nuclear Information System (INIS)
Pecchia, Marco; Vasiliev, Alexander; Leray, Olivier; Ferroukhi, Hakim; Pautz, Andreas
2015-01-01
A new methodology, referred to as manufacturing and technological parameters uncertainty quantification (MTUQ), is under development at the Paul Scherrer Institut (PSI). Based on uncertainty and global sensitivity analysis methods, MTUQ aims at advancing the state of the art for the treatment of geometrical/material uncertainties in light water reactor computations, using the MCNPX Monte Carlo neutron transport code. The development is currently focused primarily on criticality safety evaluations (CSE). In that context, the key components are a dedicated modular interface with the MCNPX code and a user-friendly interface to model functional relationships between system variables. A unique feature is an automatic capability to parameterize variables belonging to so-called “repeated structures”, so as to allow perturbations of each individual element of a given system modelled with MCNPX. The statistical analysis capabilities are currently implemented through an interface with the ROOT platform to handle the random sampling design. This paper presents the current status of the MTUQ methodology development and a first assessment of an ongoing Organisation for Economic Co-operation and Development/Nuclear Energy Agency (OECD/NEA) benchmark dedicated to uncertainty analyses for CSE. The presented results illustrate the overall capabilities of MTUQ and underline its relevance in predicting more realistic results compared to a methodology previously applied at PSI for this particular benchmark. (author)
Vergara, H. J.; Kirstetter, P.; Hong, Y.; Gourley, J. J.; Wang, X.
2013-12-01
The Ensemble Kalman Filter (EnKF) is arguably the assimilation approach that has found the widest application in hydrologic modeling. Its relatively easy implementation and computational efficiency make it an attractive method for research and operational purposes. However, the scientific literature featuring this approach lacks guidance on how the errors in the forecast need to be characterized so as to get the required corrections from the assimilation process. Moreover, several studies have indicated that the performance of the EnKF is 'sub-optimal' when assimilating certain hydrologic observations. Likewise, some authors have suggested that the underlying assumptions of the Kalman Filter and its dependence on linear dynamics make the EnKF unsuitable for hydrologic modeling. Such assertions are often based on the ineffectiveness and poor robustness of EnKF implementations resulting from restrictive specification of error characteristics and the absence of a priori information on error magnitudes. Therefore, understanding the capabilities and limitations of the EnKF to improve hydrologic forecasts requires studying its sensitivity to the manner in which errors in the hydrologic modeling system are represented through ensembles. This study presents a methodology that explores various uncertainty representation configurations to characterize the errors in the hydrologic forecasts in a data assimilation context. The uncertainty in rainfall inputs is represented through a Generalized Additive Model for Location, Scale, and Shape (GAMLSS), which provides information about second-order statistics of quantitative precipitation estimate (QPE) errors. The uncertainty in model parameters is described by adding perturbations based on parameter covariance information. The method allows for the identification of rainfall and parameter perturbation combinations for which the performance of the EnKF is 'optimal' given a set of objective functions. In this process, information about
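For reference, the core of the stochastic (perturbed-observation) EnKF analysis step that such sensitivity experiments repeat can be sketched as follows. The two-state toy ensemble, observation value, and error magnitudes are illustrative assumptions, not the study's configuration.

```python
import numpy as np

def enkf_update(ensemble, obs, obs_err_std, H):
    """One stochastic-EnKF analysis step.
    ensemble    : (n_state, n_members) forecast ensemble
    obs         : (n_obs,) observation vector
    obs_err_std : observation error standard deviation (scalar)
    H           : (n_obs, n_state) linear observation operator
    """
    n_state, n_mem = ensemble.shape
    rng = np.random.default_rng(1)
    # Perturb observations: one noise realization per member.
    obs_pert = obs[:, None] + rng.normal(0.0, obs_err_std, size=(len(obs), n_mem))
    # Sample covariances from the ensemble anomalies.
    A = ensemble - ensemble.mean(axis=1, keepdims=True)
    HA = H @ A
    P_hh = HA @ HA.T / (n_mem - 1) + obs_err_std**2 * np.eye(len(obs))
    P_xh = A @ HA.T / (n_mem - 1)
    K = P_xh @ np.linalg.inv(P_hh)                 # Kalman gain
    return ensemble + K @ (obs_pert - H @ ensemble)

# Toy example: two model states, only the first is observed.
rng = np.random.default_rng(0)
forecast = rng.normal([1.0, 0.5], 0.5, size=(100, 2)).T   # (2, 100)
H = np.array([[1.0, 0.0]])
analysis = enkf_update(forecast, np.array([2.0]), 0.1, H)
```

The "uncertainty representation configurations" the study varies correspond here to how the forecast ensemble spread and `obs_err_std` are specified.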
Wang, S.; Huang, G. H.; Huang, W.; Fan, Y. R.; Li, Z.
2015-10-01
In this study, a fractional factorial probabilistic collocation method is proposed to reveal statistical significance of hydrologic model parameters and their multi-level interactions affecting model outputs, facilitating uncertainty propagation in a reduced dimensional space. The proposed methodology is applied to the Xiangxi River watershed in China to demonstrate its validity and applicability, as well as its capability of revealing complex and dynamic parameter interactions. A set of reduced polynomial chaos expansions (PCEs) only with statistically significant terms can be obtained based on the results of factorial analysis of variance (ANOVA), achieving a reduction of uncertainty in hydrologic predictions. The predictive performance of reduced PCEs is verified by comparing against standard PCEs and the Monte Carlo with Latin hypercube sampling (MC-LHS) method in terms of reliability, sharpness, and Nash-Sutcliffe efficiency (NSE). Results reveal that the reduced PCEs are able to capture hydrologic behaviors of the Xiangxi River watershed, and they are efficient functional representations for propagating uncertainties in hydrologic predictions.
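A minimal sketch of the idea behind a reduced PCE: fit a tensor-product probabilists'-Hermite basis by least squares, then retain only the significant terms. The toy two-parameter response below is an assumption for illustration, and the paper's fractional factorial ANOVA screening is replaced here by a simple coefficient threshold.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander

rng = np.random.default_rng(0)

# Toy "hydrologic" response of two standard-normal parameters.
def model(x1, x2):
    return 1.0 + 0.8 * x1 + 0.3 * x1 * x2 + 0.1 * x2**2

n, deg = 500, 2
x = rng.normal(size=(n, 2))
y = model(x[:, 0], x[:, 1])

# Tensor-product Hermite basis up to degree 2 in each input: 9 terms total.
V1 = hermevander(x[:, 0], deg)                            # (n, deg+1)
V2 = hermevander(x[:, 1], deg)
basis = np.einsum('ni,nj->nij', V1, V2).reshape(n, -1)    # all (i, j) products

coef, *_ = np.linalg.lstsq(basis, y, rcond=None)

# Reduction step: keep only the terms whose coefficients are clearly
# above the numerical noise floor (stand-in for the ANOVA significance test).
keep = np.abs(coef) > 1e-8
reduced = np.where(keep, coef, 0.0)
y_hat = basis @ reduced
```

Because the Hermite polynomials are orthonormal under the Gaussian measure, the retained coefficients also give variance contributions directly, which is what makes the reduced expansion useful for uncertainty propagation.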
International Nuclear Information System (INIS)
Di Salvo, J.
2002-01-01
The design studies of the future Material Testing Reactor Jules Horowitz require the development of an adapted neutronic calculation route. To guarantee good accuracy while saving computation time, some approximations in the deterministic modelling (APOLLO2 / CRONOS2) are needed. As no relevant integral experiments are yet available to ensure the accuracy of the calculation, the results need to be validated by a rigorous methodical approach, which is based on comparison against numerical benchmarks (Monte Carlo TRIPOLI4 code). In order to complete the validation results, sensitivity coefficients of the main neutronic parameters to nuclear data are very useful to get an estimate of the final uncertainty on the calculation. Unfortunately, most of the covariance information is missing in recent evaluated files such as JEF-2.2. To generate the missing covariance matrices, a method based on the comparison of different independent evaluations is used in this study. Special attention is paid to the determination of sensitivity coefficients, using perturbation methods and direct calculations. This study points out the importance of the non-diagonal elements of the covariance matrices as well as the neutron capture cross section uncertainty of 27Al in the thermal range. In complement to the uncertainty studies, it will still be necessary to obtain integral experimental validation of the Jules Horowitz Reactor neutronic parameter calculations. (author)
Parameter importance and uncertainty in predicting runoff pesticide reduction with filter strips.
Muñoz-Carpena, Rafael; Fox, Garey A; Sabbagh, George J
2010-01-01
Vegetative filter strips (VFS) are an environmental management tool used to reduce sediment and pesticide transport from surface runoff. Numerical models of VFS such as the Vegetative Filter Strip Modeling System (VFSMOD-W) are capable of predicting runoff, sediment, and pesticide reduction and can be useful tools to understand the effectiveness of VFS and the environmental conditions under which they may be ineffective. However, as part of the modeling process, it is critical to identify input factor importance and quantify uncertainty in predicted runoff, sediment, and pesticide reductions. This research used state-of-the-art global sensitivity and uncertainty analysis tools, a screening method (Morris) and a variance-based method (extended Fourier Amplitude Sensitivity Test), to evaluate VFSMOD-W under a range of field scenarios. The three VFS studies analyzed were conducted on silty clay loam and silt loam soils under uniform, sheet flow conditions and included atrazine, chlorpyrifos, cyanazine, metolachlor, pendimethalin, and terbuthylazine data. Saturated hydraulic conductivity was the most important input factor for predicting infiltration and runoff, explaining >75% of the total output variance for studies with smaller hydraulic loading rates (approximately 100-150 mm equivalent depths) and approximately 50% for the higher loading rate (approximately 280 mm equivalent depth). Important input factors for predicting sedimentation included hydraulic conductivity, average particle size, and the filter's Manning's roughness coefficient. Input factor importance for pesticide trapping was controlled by infiltration and, therefore, hydraulic conductivity. Global uncertainty analyses suggested a wide range of reductions for runoff (95% confidence intervals of 7-93%), sediment (84-100%), and pesticide (43-100%). Pesticide trapping probability distributions fell between runoff and sediment reduction distributions as a function of the pesticides' sorption. Seemingly
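The Morris screening step used in such studies can be sketched with a hand-rolled elementary-effects routine; the three-parameter response function below is a stand-in for VFSMOD-W, with parameter 0 playing the role of the dominant factor (think saturated hydraulic conductivity).

```python
import numpy as np

rng = np.random.default_rng(0)

def morris_screening(f, n_params, n_traj=50, delta=0.25):
    """Elementary-effects (Morris) screening on the unit hypercube."""
    effects = [[] for _ in range(n_params)]
    for _ in range(n_traj):
        x = rng.uniform(0, 1 - delta, size=n_params)
        y0 = f(x)
        for i in rng.permutation(n_params):   # one-at-a-time moves
            x_new = x.copy()
            x_new[i] += delta
            y1 = f(x_new)
            effects[i].append((y1 - y0) / delta)
            x, y0 = x_new, y1                 # continue the trajectory
    ee = np.array(effects)                    # (n_params, n_traj)
    mu_star = np.abs(ee).mean(axis=1)         # overall importance
    sigma = ee.std(axis=1)                    # nonlinearity / interactions
    return mu_star, sigma

# Toy infiltration-like response: parameter 0 dominates.
f = lambda x: 5.0 * x[0] + 0.5 * x[1] + 0.1 * x[0] * x[2]
mu_star, sigma = morris_screening(f, 3)
```

A large mu* with a large sigma flags a factor that matters through interactions or nonlinearity, which is what motivates following the screening with a variance-based method such as eFAST.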
Kulasiri, Don; Liang, Jingyi; He, Yao; Samarasinghe, Sandhya
2017-04-21
We investigate the epistemic uncertainties of the parameters of a mathematical model that describes the dynamics of the CaMKII-NMDAR complex related to memory formation in synapses, using global sensitivity analysis (GSA). The model, which was published in this journal, is nonlinear and complex, with Ca2+ patterns of different frequencies as inputs. We explore the effects of the parameters on the key outputs of the model to discover the most sensitive ones using GSA and the partial rank correlation coefficient (PRCC), and to understand, based on the biology of the problem, why some are sensitive and others are not. We also extend the model to add presynaptic neurotransmitter vesicle release so as to have action potentials of different frequencies as inputs. We perform GSA on this extended model to show that the parameter sensitivities are different for the extended model, as shown by PRCC landscapes. Based on the results of GSA and PRCC, we reduce the original model to a less complex model taking the most important biological processes into account. We validate the reduced model against the outputs of the original model. We show that the parameter sensitivities are dependent on the inputs and that GSA helps us understand the sensitivities and the importance of the parameters. A thorough phenomenological understanding of the relationships involved is essential to interpret the results of GSA and hence for the possible model reduction. Copyright © 2017 Elsevier Ltd. All rights reserved.
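PRCC itself is easy to state in code: rank-transform the inputs and the output, remove the (rank-space, linear) effect of the other parameters from both, and correlate the residuals. The three-parameter test function below is an illustrative assumption, not the CaMKII-NMDAR model.

```python
import numpy as np

def prcc(X, y):
    """Partial rank correlation coefficient of each column of X with y."""
    n, k = X.shape
    R = np.argsort(np.argsort(X, axis=0), axis=0).astype(float)  # column ranks
    ry = np.argsort(np.argsort(y)).astype(float)                 # output ranks
    out = np.empty(k)
    for i in range(k):
        others = np.column_stack([np.ones(n), np.delete(R, i, axis=1)])
        # Residuals after removing the linear effect of the other parameters.
        res_x = R[:, i] - others @ np.linalg.lstsq(others, R[:, i], rcond=None)[0]
        res_y = ry - others @ np.linalg.lstsq(others, ry, rcond=None)[0]
        out[i] = np.corrcoef(res_x, res_y)[0, 1]
    return out

rng = np.random.default_rng(0)
X = rng.uniform(size=(400, 3))
y = 4.0 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.normal(size=400)   # param 2 inert
rho = prcc(X, y)
```

Because it works on ranks, PRCC is robust to monotone nonlinearities, which is why it is a common companion to sampling-based GSA.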
MPUQ-b: Bootstrapping Based Modal Parameter Uncertainty Quantification—Fundamental Principles
DEFF Research Database (Denmark)
Chauhan, S.; Ahmed, S. I.
2017-01-01
It is well known that modal parameters play a key role towards understanding the dynamics of a structure. Their estimation, by means of experiments, forms the crux of modal analysis. Modal parameters not only help in characterizing the dynamics of the structure but are also used for several other...
Uncertainty and sensitivity analysis of parameters affecting water hammer pressure wave behaviour
International Nuclear Information System (INIS)
Kaliatka, A.; Uspuras, E.; Vaisnoras, M.
2006-01-01
Pressure surges occurring in pipeline systems may be caused by fast control interference, start-up and shut-down processes, and operation failures. They lead to water hammer upstream of the closing valve and cavitational hammer downstream of the valve, which may cause considerable damage to the pipeline and the support structures. The appearance of water hammer in thermal-hydraulic systems has been widely studied in many organizations employing different state-of-the-art thermal-hydraulic codes. For the analysis, a water hammer test performed at the Fraunhofer Institute for Environmental, Safety and Energy Technology (UMSICHT) at Oberhausen was considered. This paper presents the comparison of UMSICHT test facility calculations, performed with the best-estimate system code RELAP5/Mod3.3, against the water hammer values measured after fast closure of a valve. The analysis revealed that the calculated first pressure peak, which has the highest value, matches the measured value very well. The performed analysis (as with any other analysis), as the result of each individual calculation, always contains uncertainty owing to the initial conditions of installations, errors of measuring systems, errors caused by the nodalization of objects in modelling, code correlations, etc. In this connection, results of the uncertainty and sensitivity analysis of the initial conditions and code-selected models are shown in the paper. (orig.)
Energy Technology Data Exchange (ETDEWEB)
Ochs, Michael; Talerico, Caterina [BMG Engineering Ltd, Zuerich (Switzerland)
2004-08-01
SKB is currently preparing license applications related to the deep repository for spent nuclear fuel and an encapsulation plant. The present report is one of several specific data reports feeding into the interim reporting for the latter application; it is concerned with the derivation and recommendation of radionuclide migration input parameters for a MX-80 bentonite buffer to PA models. Recommended values for the following parameters, as well as the associated uncertainties, are derived and documented for a total of 38 elements and oxidation states: diffusion-available porosity (ε); effective diffusivity (D_e); distribution coefficient (K_d). Because of the conditional nature of these parameters, particularly of K_d, they were derived specifically for the conditions expected to be relevant for PA consequence calculations. K_d values were generally evaluated for the specific porewater composition and solid/water ratio representative of MX-80 compacted to 1,590 kg/m³. Because of the highly conditional nature of K_d, this was done for several porewater compositions which reflect possible variations in geochemical boundary conditions. D_e and ε were derived as a function of density. Parameter derivation was based on systematic datasets available in the literature and/or on thermodynamic models. Associated uncertainties were assessed for a given set of PA conditions and as a function of variability in these conditions. In a final step, apparent diffusivity (D_a) values were calculated from the recommended parameters and compared with independent experimental measurements to arrive at self-consistent sets of migration parameters.
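The final consistency check maps directly onto the standard porous-medium relation D_a = D_e / (ε + ρ_d·K_d). The numerical values below are illustrative only, not the report's recommended data.

```python
def apparent_diffusivity(D_e, porosity, rho_d, K_d):
    """Apparent diffusivity D_a = D_e / (eps + rho_d * K_d).
    Units: D_e in m2/s, rho_d in kg/m3, K_d in m3/kg."""
    return D_e / (porosity + rho_d * K_d)

# Illustrative values for a sorbing tracer in compacted bentonite:
D_e = 1.0e-10      # effective diffusivity, m2/s
eps = 0.43         # diffusion-available porosity
rho_d = 1590.0     # dry density, kg/m3
K_d = 1.0e-2       # distribution coefficient, m3/kg
D_a = apparent_diffusivity(D_e, eps, rho_d, K_d)
```

For a sorbing element the retardation term ρ_d·K_d dominates the porosity, so D_a is orders of magnitude below D_e, which is exactly why independently measured D_a values make a useful cross-check on the recommended (ε, D_e, K_d) sets.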
International Nuclear Information System (INIS)
Potter, Kristin; Pascucci, Valerio; Johnson, Chris; Wilson, Andrew; Bremer, Peer-Timo; Williams, Dean; Doutriaux, Charles
2009-01-01
Climate scientists and meteorologists are working towards a better understanding of atmospheric conditions and global climate change. To explore the relationships present in numerical predictions of the atmosphere, ensemble datasets are produced that combine time- and spatially-varying simulations generated using multiple numerical models, sampled input conditions, and perturbed parameters. These datasets mitigate as well as describe the uncertainty present in the data by providing insight into the effects of parameter perturbation, sensitivity to initial conditions, and inconsistencies in model outcomes. As such, massive amounts of data are produced, creating challenges both in data analysis and in visualization. This work presents an approach to understanding ensembles by using a collection of statistical descriptors to summarize the data, and displaying these descriptors using a variety of visualization techniques that are familiar to domain experts. The resulting techniques are integrated into the ViSUS/Climate Data and Analysis Tools (CDAT) system, designed to provide a directly accessible, complex visualization framework to atmospheric researchers.
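The descriptor-summary stage can be sketched in a few lines of numpy; the ensemble dimensions and the particular choice of descriptors here are illustrative assumptions, not the CDAT implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy ensemble: 20 members x 10 time steps x (4 x 5) spatial grid,
# e.g. surface temperature in kelvin.
ensemble = rng.normal(loc=288.0, scale=1.5, size=(20, 10, 4, 5))

# Per-gridpoint, per-time descriptors computed across the member axis;
# each collapses the ensemble into one field a standard 2-D/3-D viewer can show.
descriptors = {
    "mean": ensemble.mean(axis=0),
    "std": ensemble.std(axis=0),
    "iqr": np.subtract(*np.percentile(ensemble, [75, 25], axis=0)),
    "range": ensemble.max(axis=0) - ensemble.min(axis=0),
}
```

Reducing the member axis first is what makes massive ensembles tractable: downstream visualization only ever touches fields of the original spatio-temporal shape.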
Hernández, Mario R.; Francés, Félix
2015-04-01
One phase of the hydrological model implementation process that contributes significantly to the uncertainty of hydrological predictions is the calibration phase, in which values of the unknown model parameters are tuned by optimizing an objective function. An unsuitable error model (e.g. Standard Least Squares or SLS) introduces noise into the estimation of the parameters. The main sources of this noise are the input errors and the structural deficiencies of the hydrological model. The biased calibrated parameters thus cause the model divergence phenomenon, in which the error variance of the (spatially and temporally) forecasted flows far exceeds the error variance in the fitting period, and provoke the loss of part or all of the physical meaning of the modeled processes. In other words, they yield a calibrated hydrological model which works well, but not for the right reasons. Besides, an unsuitable error model yields a non-reliable predictive uncertainty assessment. Hence, with the aim of preventing all these undesirable effects, this research focuses on the Bayesian joint inference (BJI) of both the hydrological and error model parameters, considering a general additive (GA) error model that allows for correlation, non-stationarity (in variance and bias) and non-normality of model residuals. As the hydrological model, a conceptual distributed model called TETIS has been used, with a particular split structure of the effective model parameters. Bayesian inference has been performed with the aid of a Markov Chain Monte Carlo (MCMC) algorithm called DREAM-ZS. The MCMC algorithm quantifies the uncertainty of the hydrological and error model parameters by obtaining the joint posterior probability distribution, conditioned on the observed flows. The BJI methodology is a very powerful and reliable tool, but it must be used correctly: that is, if non-stationarity in error variance and bias is modeled, the Total Laws must be taken into account. The results of this research show that the
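The machinery of joint inference can be illustrated with a minimal random-walk Metropolis sampler that infers one "hydrological" parameter and one error-model parameter together. A linear toy model and a stationary Gaussian error model stand in for TETIS and the general additive error model; all numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "observed flows": y = a * x plus Gaussian noise of std sigma.
x = np.linspace(1.0, 10.0, 50)
true_a, true_sigma = 2.0, 0.5
y_obs = true_a * x + rng.normal(0.0, true_sigma, size=x.size)

def log_posterior(theta):
    a, log_sigma = theta
    sigma = np.exp(log_sigma)            # sampling log(sigma) keeps sigma > 0
    resid = y_obs - a * x
    # Gaussian error model; flat priors on a and log(sigma).
    return -x.size * log_sigma - 0.5 * np.sum(resid**2) / sigma**2

# Random-walk Metropolis over the joint (model, error) parameter space.
theta = np.array([1.0, 0.0])
lp = log_posterior(theta)
chain = []
for _ in range(5000):
    prop = theta + rng.normal(0.0, [0.02, 0.05])
    lp_prop = log_posterior(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta.copy())
chain = np.array(chain)[1000:]           # discard burn-in
a_hat = chain[:, 0].mean()
sigma_hat = np.exp(chain[:, 1]).mean()
```

DREAM-ZS replaces this single random-walk chain with multiple adaptively tuned chains, but the object being explored, the joint posterior of model and error parameters conditioned on observed flows, is the same.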
Adaptive multiscale MCMC algorithm for uncertainty quantification in seismic parameter estimation
Tan, Xiaosi; Gibson, Richard L.; Leung, Wing Tat; Efendiev, Yalchin R.
2014-01-01
problem. In this paper, we consider Bayesian inversion for the parameter estimation in seismic wave propagation. The Bayes' theorem allows writing the posterior distribution via the likelihood function and the prior distribution where the latter represents
Directory of Open Access Journals (Sweden)
G. Lorenzo
2011-01-01
As main outcomes, a ranking of parameters' importance and an estimate of the failure probability, from a design target point of view, were achieved by sensitivity analysis and Monte Carlo simulations based on a response surface model.
Hua, Yongzhao; Dong, Xiwang; Li, Qingdong; Ren, Zhang
2017-05-18
This paper investigates the time-varying formation robust tracking problems for high-order linear multiagent systems with a leader of unknown control input in the presence of heterogeneous parameter uncertainties and external disturbances. The followers need to accomplish an expected time-varying formation in the state space and track the state trajectory produced by the leader simultaneously. First, a time-varying formation robust tracking protocol with a totally distributed form is proposed utilizing the neighborhood state information. With the adaptive updating mechanism, neither any global knowledge about the communication topology nor the upper bounds of the parameter uncertainties, external disturbances and leader's unknown input are required in the proposed protocol. Then, in order to determine the control parameters, an algorithm with four steps is presented, where feasible conditions for the followers to accomplish the expected time-varying formation tracking are provided. Furthermore, based on the Lyapunov-like analysis theory, it is proved that the formation tracking error can converge to zero asymptotically. Finally, the effectiveness of the theoretical results is verified by simulation examples.
Slob, Wout; Bakker, Martine I; Biesebeek, Jan Dirk Te; Bokkers, Bas G H
2014-08-01
Current methods for cancer risk assessment result in single values, without any quantitative information on the uncertainties in these values. Therefore, single risk values could easily be overinterpreted. In this study, we discuss a full probabilistic cancer risk assessment approach in which all the generally recognized uncertainties in both exposure and hazard assessment are quantitatively characterized and probabilistically evaluated, resulting in a confidence interval for the final risk estimate. The methodology is applied to three example chemicals (aflatoxin, N-nitrosodimethylamine, and methyleugenol). These examples illustrate that the uncertainty in a cancer risk estimate may be huge, making single value estimates of cancer risk meaningless. Further, a risk based on linear extrapolation tends to be lower than the upper 95% confidence limit of a probabilistic risk estimate, and in that sense it is not conservative. Our conceptual analysis showed that there are two possible basic approaches for cancer risk assessment, depending on the interpretation of the dose-incidence data measured in animals. However, it remains unclear which of the two interpretations is the more adequate one, adding an additional uncertainty to the already huge confidence intervals for cancer risk estimates. © 2014 Society for Risk Analysis.
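The probabilistic idea reduces to sampling every uncertain factor and propagating the draws; the lognormal parameters below are invented for illustration and are not the paper's values for the three example chemicals.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical uncertainty distributions for the two assessment halves:
exposure = rng.lognormal(mean=np.log(1e-3), sigma=0.8, size=n)   # mg/kg bw/day
potency = rng.lognormal(mean=np.log(0.05), sigma=1.0, size=n)    # (mg/kg/day)^-1

risk = exposure * potency            # extra lifetime cancer risk, per draw

# A confidence interval for the risk estimate instead of a single value.
lo, med, hi = np.percentile(risk, [5, 50, 95])

# The deterministic single-value estimate, for contrast.
point_estimate = 1e-3 * 0.05
```

Even in this toy setup the 90% interval spans more than an order of magnitude, which is the paper's point: a single risk number hides how wide the plausible range really is.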
Song, Qiankun; Yu, Qinqin; Zhao, Zhenjiang; Liu, Yurong; Alsaadi, Fuad E
2018-07-01
In this paper, the boundedness and robust stability for a class of delayed complex-valued neural networks with interval parameter uncertainties are investigated. By using the homeomorphic mapping theorem, the Lyapunov method and inequality techniques, a sufficient condition guaranteeing the boundedness of the networks and the existence, uniqueness and global robust stability of the equilibrium point is derived for the considered uncertain neural networks. The obtained robust stability criterion is expressed as a complex-valued LMI, which can be solved numerically using YALMIP with the SDPT3 solver in MATLAB. An example with simulations is supplied to show the applicability and advantages of the acquired result. Copyright © 2018 Elsevier Ltd. All rights reserved.
DEFF Research Database (Denmark)
Iglesias, J. E.; Sabuncu, M. R.; Van Leemput, Koen
2012-01-01
Many successful segmentation algorithms are based on Bayesian models in which prior anatomical knowledge is combined with the available image information. However, these methods typically have many free parameters that are estimated to obtain point estimates only, whereas a faithful Bayesian anal...
Akçay, A.E.; Biller, B.; Tayur, S.
2012-01-01
We consider a repeated newsvendor setting where the parameters of the demand distribution are unknown, and we study the problem of setting inventory targets using only a limited amount of historical demand data. We assume that the demand process is autocorrelated and represented by an
Uwizeye, U.A.; Groen, E.A.; Gerber, P.J.; Schulte, Rogier P.O.; Boer, de I.J.M.
2016-01-01
The study aims to illustrate a method to identify important input parameters that explain most of the output variance of environmental assessment models. The method is tested for the computation of life-cycle nitrogen (N) use efficiency indicators among mixed dairy production systems in Rwanda. We
DEFF Research Database (Denmark)
He, X.; Sonneborg, T.O.; Jørgensen, F.
2013-01-01
in three scenarios involving simulation of groundwater head distribution and travel time. The first scenario implied 100 stochastic geological models all assigning the same hydraulic parameters for the same geological units. In the second scenario the same 100 geological models were subjected to model...
Vanderborght, Jan; Tiktak, Aaldrik; Boesten, Jos J T I; Vereecken, Harry
2011-03-01
For the registration of pesticides in the European Union, model simulations for worst-case scenarios are used to demonstrate that leaching concentrations to groundwater do not exceed a critical threshold. A worst-case scenario is a combination of soil and climate properties for which predicted leaching concentrations are higher than a certain percentile of the spatial concentration distribution within a region. The derivation of scenarios is complicated by uncertainty about soil and pesticide fate parameters. As the ranking of climate and soil property combinations according to predicted leaching concentrations is different for different pesticides, the worst-case scenario for one pesticide may misrepresent the worst case for another pesticide, which leads to 'scenario uncertainty'. Pesticide fate parameter uncertainty led to higher concentrations in the higher percentiles of spatial concentration distributions, especially for distributions in smaller and more homogeneous regions. The effect of pesticide fate parameter uncertainty on the spatial concentration distribution was small when compared with the uncertainty of local concentration predictions and with the scenario uncertainty. Uncertainty in pesticide fate parameters and scenario uncertainty can be accounted for using higher percentiles of spatial concentration distributions and considering a range of pesticides for the scenario selection. Copyright © 2010 Society of Chemical Industry.
International Nuclear Information System (INIS)
Wulff, W.; Boyack, B.E.; Duffey, R.B.
1988-01-01
Comparisons of results from TRAC-PF1/MOD1 code calculations with measurements from Separate Effects Tests, and published experimental data for modeling parameters have been used to determine the uncertainty ranges of code input and modeling parameters which dominate the uncertainty in predicting the Peak Clad Temperature for a postulated Large Break Loss of Coolant Accident (LBLOCA) in a four-loop Westinghouse Pressurized Water Reactor. The uncertainty ranges are used for a detailed statistical analysis to calculate the probability distribution function for the TRAC code-predicted Peak Clad Temperature, as is described in an attendant paper. Measurements from Separate Effects Tests and Integral Effects Tests have been compared with results from corresponding TRAC-PF1/MOD1 code calculations to determine globally the total uncertainty in predicting the Peak Clad Temperature for LBLOCAs. This determination is in support of the detailed statistical analysis mentioned above. The analyses presented here account for uncertainties in input parameters, in modeling and scaling, in computing and in measurements. The analyses are an important part of the work needed to implement the Code Scalability, Applicability and Uncertainty (CSAU) methodology. CSAU is needed to determine the suitability of a computer code for reactor safety analyses and the uncertainty in computer predictions. The results presented here are used to estimate the safety margin of a particular nuclear reactor power plant for a postulated accident. 25 refs., 10 figs., 11 tabs
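The flavour of the statistical propagation step, sampling the dominant parameters over their established ranges and reading off an upper percentile of the Peak Clad Temperature, can be sketched as follows. The additive surrogate response and every range below are illustrative assumptions, not CSAU values; only the 1477 K (2200 °F) acceptance limit is the regulatory figure.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

# Hypothetical surrogate for the code-predicted PCT (K): a nominal value plus
# contributions from uncertain parameters, each sampled over a range of the
# kind established from Separate Effects Tests.
nominal = 1100.0
gap_conductance = rng.uniform(-40.0, 60.0, size=n)          # K, asymmetric range
chf_multiplier = rng.normal(0.0, 25.0, size=n)              # K
stored_energy = rng.triangular(-30.0, 0.0, 50.0, size=n)    # K

pct = nominal + gap_conductance + chf_multiplier + stored_energy

pct95 = np.percentile(pct, 95)   # the licensing-relevant upper percentile
margin = 1477.0 - pct95          # margin to the 10 CFR 50.46 limit
```

In the actual methodology each sample would drive a full TRAC calculation (or a response surface fitted to a handful of runs) rather than an additive surrogate, but the output is the same kind of object: a probability distribution for PCT and a percentile-based safety margin.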
International Nuclear Information System (INIS)
Mavrotas, George; Florios, Kostas; Vlachou, Dimitra
2010-01-01
For more than 40 years, Mathematical Programming has been the traditional tool for energy planning at the national or regional level, aiming at cost minimization subject to specific technological, political and demand satisfaction constraints. The liberalization of the energy market, along with the ongoing technical progress, increased the level of competition and forced energy consumers, even at the unit level, to make their choices among a large number of alternative or complementary energy technologies, fuels and/or suppliers. In the present work we develop a modelling framework for energy planning in units of the tertiary sector, giving special emphasis to model reduction and to the uncertainty of the economic parameters. In the given case study, the energy rehabilitation of a hospital in Athens is examined, considering the installation of a cogeneration, absorption and compression unit for the supply of the electricity, heating and cooling loads. The basic innovation of the given energy model lies in the uncertainty modelling through the combined use of Mathematical Programming (namely, Mixed Integer Linear Programming, MILP) and Monte Carlo simulation, which permits risk management for the most volatile parameters of the objective function, such as the fuel costs and the interest rate. The results come in the form of probability distributions that provide fruitful information to the decision maker. The effect of model reduction through appropriate compression of the load data is also addressed.
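The combined use of optimization and Monte Carlo can be illustrated with a deliberately tiny version: two candidate supply options (both assumed to meet the load), the volatile parameters (fuel price, interest rate) sampled, and the discrete choice re-optimized for every draw, here by enumeration rather than by a MILP solver. All cost figures are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical units, both meeting the demand:
# (capital cost in EUR, variable energy draw in kW)
techs = {
    "cogeneration": (900_000, 300),   # high capex, low net fuel use (power credit)
    "gas_boiler": (50_000, 550),      # low capex, high fuel use
}
hours = 8760
n_mc = 2000

# Monte Carlo over the volatile parameters of the objective function.
fuel_price = rng.normal(0.04, 0.01, size=n_mc).clip(0.01)   # EUR/kWh
rate = rng.normal(0.06, 0.015, size=n_mc).clip(0.01)        # interest rate

def annual_cost(name, p, r, years=15):
    capex, variable_kw = techs[name]
    crf = r * (1 + r) ** years / ((1 + r) ** years - 1)   # capital recovery factor
    return capex * crf + variable_kw * hours * p

# Re-optimize the discrete choice for every draw: the answer becomes a
# probability distribution over decisions and costs, not a single number.
wins = {name: 0 for name in techs}
costs = np.empty(n_mc)
for k in range(n_mc):
    best = min(techs, key=lambda t: annual_cost(t, fuel_price[k], rate[k]))
    wins[best] += 1
    costs[k] = annual_cost(best, fuel_price[k], rate[k])
```

With many binary unit-commitment variables the inner enumeration is replaced by a MILP solve per draw, but the outer structure (sample the economics, optimize, collect the distribution of decisions and costs) is exactly the paper's scheme.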
Energy Technology Data Exchange (ETDEWEB)
Adams, Brian M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Ebeida, Mohamed Salah [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Eldred, Michael S. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Jakeman, John Davis [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Swiler, Laura Painton [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Stephens, John Adam [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Vigil, Dena M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Wildey, Timothy Michael [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bohnhoff, William J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Eddy, John P. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hu, Kenneth T. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Dalbey, Keith R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bauman, Lara E [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hough, Patricia Diane [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2014-05-01
The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the Dakota software and provides capability overviews and procedures for software execution, as well as a variety of example studies.
Dafonte, C.; Fustes, D.; Manteiga, M.; Garabato, D.; Álvarez, M. A.; Ulla, A.; Allende Prieto, C.
2016-10-01
Aims: We present an innovative artificial neural network (ANN) architecture, called Generative ANN (GANN), that computes the forward model; that is, it learns the function that relates the unknown outputs (stellar atmospheric parameters, in this case) to the given inputs (spectra). Such a model can be integrated in a Bayesian framework to estimate the posterior distribution of the outputs. Methods: The architecture of the GANN follows the same scheme as a normal ANN, but with the inputs and outputs inverted. We train the network with the set of atmospheric parameters (Teff, log g, [Fe/H] and [α/Fe]), obtaining the stellar spectra for such inputs. The residuals between the spectra in the grid and the estimated spectra are minimized using a validation dataset to keep solutions as general as possible. Results: The performance of both conventional ANNs and GANNs to estimate the stellar parameters as a function of the star brightness is presented and compared for different Galactic populations. GANNs provide significantly improved parameterizations for early and intermediate spectral types with rich and intermediate metallicities. The behaviour of both algorithms is very similar for our sample of late-type stars, obtaining residuals in the derivation of [Fe/H] and [α/Fe] below 0.1 dex for stars with Gaia magnitude Grvs satellite. Conclusions: Uncertainty estimation of computed astrophysical parameters is crucial for the validation of the parameterization itself and for the subsequent exploitation by the astronomical community. GANNs produce not only the parameters for a given spectrum, but a goodness-of-fit between the observed spectrum and the predicted one for a given set of parameters. Moreover, they allow us to obtain the full posterior distribution over the astrophysical parameter space once a noise model is assumed. This can be used for novelty detection and quality assessment.
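The way a forward model yields a full posterior can be sketched with a toy one-parameter version: evaluate the forward model over a parameter grid, score each evaluation against the observed spectrum under a Gaussian noise model, and normalize. The analytic "spectrum" below stands in for a trained GANN; everything here is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a trained generative (forward) model: parameter -> spectrum.
wavelengths = np.linspace(0.0, 1.0, 80)
def forward(p):
    # Toy one-parameter "spectrum" whose shape varies with the scaled parameter.
    return np.exp(-(wavelengths - p) ** 2 / 0.02)

# Observed spectrum: generated at the true parameter plus noise.
true_p = 0.6
noise_sigma = 0.05
observed = forward(true_p) + rng.normal(0.0, noise_sigma, size=wavelengths.size)

# Posterior over a parameter grid, assuming a Gaussian noise model
# and a flat prior (posterior proportional to likelihood times prior).
grid = np.linspace(0.0, 1.0, 401)
log_like = np.array([-0.5 * np.sum((observed - forward(p)) ** 2) / noise_sigma**2
                     for p in grid])
post = np.exp(log_like - log_like.max())
post /= post.sum()

p_map = grid[post.argmax()]                              # best-fit parameter
p_mean = np.sum(grid * post)
p_std = np.sqrt(np.sum((grid - p_mean) ** 2 * post))     # posterior uncertainty
```

The same normalized likelihood also serves the paper's other two uses: its peak value is a goodness-of-fit for quality assessment, and a uniformly low likelihood over the whole grid flags a novelty.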
3-D simulations of M9 earthquakes on the Cascadia Megathrust: Key parameters and uncertainty
Wirth, Erin; Frankel, Arthur; Vidale, John; Marafi, Nasser A.; Stephenson, William J.
2017-01-01
Geologic and historical records indicate that the Cascadia subduction zone is capable of generating large, megathrust earthquakes up to magnitude 9. The last great Cascadia earthquake occurred in 1700, and thus there is no direct measure on the intensity of ground shaking or specific rupture parameters from seismic recordings. We use 3-D numerical simulations to generate broadband (0-10 Hz) synthetic seismograms for 50 M9 rupture scenarios on the Cascadia megathrust. Slip consists of multiple high-stress drop subevents (~M8) with short rise times on the deeper portion of the fault, superimposed on a background slip distribution with longer rise times. We find a >4x variation in the intensity of ground shaking depending upon several key parameters, including the down-dip limit of rupture, the slip distribution and location of strong-motion-generating subevents, and the hypocenter location. We find that extending the down-dip limit of rupture to the top of the non-volcanic tremor zone results in a ~2-3x increase in peak ground acceleration for the inland city of Seattle, Washington, compared to a completely offshore rupture. However, our simulations show that allowing the rupture to extend to the up-dip limit of tremor (i.e., the deepest rupture extent in the National Seismic Hazard Maps), even when tapering the slip to zero at the down-dip edge, results in multiple areas of coseismic coastal uplift. This is inconsistent with coastal geologic evidence (e.g., buried soils, submerged forests), which suggests predominantly coastal subsidence for the 1700 earthquake and previous events. Defining the down-dip limit of rupture as the 1 cm/yr locking contour (i.e., mostly offshore) results in primarily coseismic subsidence at coastal sites. We also find that the presence of deep subevents can produce along-strike variations in subsidence and ground shaking along the coast. Our results demonstrate the wide range of possible ground motions from an M9 megathrust earthquake in
Studies of the Impact of Magnetic Field Uncertainties on Physics Parameters of the Mu2e Experiment
Energy Technology Data Exchange (ETDEWEB)
Bradascio, Federica [Pisa U.]
2016-01-01
The Mu2e experiment at Fermilab will search for a signature of charged lepton flavor violation, an effect far too small to be observed within the Standard Model of particle physics. Therefore, its observation would be a signal of new physics. The quantity that Mu2e will measure is the rate of neutrinoless coherent conversion of muons into electrons in the field of a nucleus, relative to the muon capture rate by the nucleus. The conversion process is an example of charged lepton flavor violation. This experiment aims at a sensitivity four orders of magnitude higher than that of previous related experiments. The desired sensitivity implies highly demanding requirements on the accuracy of the design and conduct of the experiment. It is therefore important to investigate the tolerance of the experiment to instrumental uncertainties and provide specifications that the design and construction must meet. This is the core of the work reported in this thesis. The design of the experiment is based on three superconducting solenoid magnets. The most important uncertainties in the magnetic field of the solenoids can arise from misalignments of the Transport Solenoid, which transfers the beam from the muon production area to the detector area and eliminates beam-originating backgrounds. In this thesis, the field uncertainties induced by possible misalignments and their impact on the physics parameters of the experiment are examined. The physics parameters include the muon and pion stopping rates and the scattering of beam electrons off the capture target, which determine the signal, intrinsic background, and late-arriving background yields, respectively. Additionally, a possible test of the Transport Solenoid alignment with low-momentum electrons is examined, as an alternative to measuring its field with conventional probes, which is technically difficult due to mechanical interference. Misalignments of the Transport Solenoid were simulated using standard
Directory of Open Access Journals (Sweden)
Haleigh A. Boswell
2015-12-01
Analysis of blood alcohol concentration is a routine analysis performed in many forensic laboratories. This analysis commonly utilizes static headspace sampling, followed by gas chromatography combined with flame ionization detection (GC-FID). Studies have shown several "optimal" methods for instrumental operating conditions, which are intended to yield accurate and precise data. Given that different instruments, sampling methods, application-specific columns and parameters are often utilized, it is much less common to find information on the robustness of these reported conditions. A major problem can arise when these "optimal" conditions are not also robust, thus producing data with higher than desired uncertainty or potentially inaccurate results. The goal of this research was to incorporate the principles of quality by design (QBD) in the adjustment and determination of BAC (blood alcohol concentration) instrumental headspace parameters, thereby ensuring that minor instrumental variations, which occur as a matter of normal work, do not appreciably affect the final results of this analysis. This study discusses both the QBD principles and the results of the experiments, which allow for determination of more favorable instrumental headspace conditions. Additionally, method detection limits are reported in order to determine a reporting threshold and the degree of uncertainty at the common threshold value of 0.08 g/dL. Furthermore, the comparison of two internal standards, n-propanol and t-butanol, is investigated. The study showed that an altered parameter of 85 °C headspace oven temperature and 15 psi headspace vial pressurization produces the lowest percent relative standard deviation of 1.3% when t-butanol is implemented as an internal standard, at least for one very common platform. The study also showed that an altered parameter of 100 °C headspace oven temperature and 15 psi headspace vial pressurization
Jena, S.
2015-12-01
The overexploitation of groundwater has resulted in the abandonment of many shallow tube wells in the river basin in Eastern India. For the sustainability of groundwater resources, basin-scale modelling of groundwater flow is essential for the efficient planning and management of water resources. The main intent of this study is to develop a 3-D groundwater flow model of the study basin using the Visual MODFLOW package and to calibrate and validate it using 17 years of observed data. A sensitivity analysis was carried out to quantify the susceptibility of the aquifer system to river bank seepage, recharge from rainfall and agricultural practices, horizontal and vertical hydraulic conductivities, and specific yield. To quantify the impact of parameter uncertainties, the Sequential Uncertainty Fitting Algorithm (SUFI-2) and Markov chain Monte Carlo (MCMC) techniques were implemented. Results from the two techniques were compared and their advantages and disadvantages analysed. The Nash-Sutcliffe efficiency coefficient (NSE) and coefficient of determination (R2) were adopted as the two criteria during calibration and validation of the developed model. NSE and R2 values of the groundwater flow model for the calibration and validation periods were in the acceptable range. Also, the MCMC technique provided more reasonable results than SUFI-2. The calibrated and validated model will be useful for identifying aquifer properties, analysing groundwater flow dynamics, and forecasting changes in groundwater levels.
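The two calibration criteria named above have standard closed forms and can be sketched directly; the groundwater head values below are hypothetical numbers for illustration, not data from the study.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, <0 is worse than the mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def r_squared(obs, sim):
    """Coefficient of determination as the squared Pearson correlation."""
    r = np.corrcoef(obs, sim)[0, 1]
    return r ** 2

# Hypothetical observed and simulated groundwater heads (m) for one well.
observed = [10.2, 10.5, 10.1, 9.8, 10.0, 10.4]
simulated = [10.1, 10.6, 10.0, 9.9, 10.1, 10.3]

fit_nse = nse(observed, simulated)
fit_r2 = r_squared(observed, simulated)
```

Note that NSE penalizes bias while R2 does not, which is why studies such as this one report both.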
Using systems gaming to explore decision-making under uncertainty in natural hazard crises
McCaughey, Jamie W.; Finnigan, David
2017-04-01
Faced with uncertain scientific forecasts of a potential hazard, it is perhaps natural to wait and see. As we wait, uncertainties do decrease, but so do our options to minimise impacts of the hazard. This tradeoff is fundamental to preparing for natural hazards, yet difficult to communicate. Interactive systems gaming is one promising way forward. We are developing in-person interactive games, drawing on role-playing and other table-top scenario exercises in natural hazards, as well as on game-based modeling of complex systems. Our games model an unfolding natural hazard crisis (such as volcanic unrest or an approaching typhoon) as a complex social-physical system. Participants take on the roles of diverse stakeholder groups (including government, scientists, media, farmers, city residents, and others) with differing expertise, responsibilities, and priorities. Interactions among these groups play out in a context of decreasing scientific uncertainty and decreasing options for actions to reduce societal risk. Key design challenges are (1) to engage players without trivialising the real-world context; (2) to provide the right level of guidance for players to navigate the system; and (3) to enable players to face realistic tradeoffs and see realistic consequences of their choices, without feeling frustrated that the game is set up for them to fail. We will first prototype the games with general public and secondary-school participants, then adjust this for specialist groups working in disaster management. We will illustrate participatory systems gaming techniques in our presentation 'A toolkit of systems gaming techniques' in the companion EGU session EOS6: 'Perform! A platform to discuss art & science projects with live presentation'.
Gras, Renaud
2015-03-01
Performing a single but complex mechanical test on small structures rather than on coupons to probe multiple strain states/histories for identification purposes is nowadays possible thanks to full-field measurements. The aim is to identify many parameters thanks to the heterogeneity of mechanical fields. Such an approach is followed herein, focusing on a blade root made of 3D woven composite. The performed test, which is analyzed using global Digital Image Correlation (DIC), provides heterogeneous kinematic fields due to the particular shape of the sample. This displacement field is further processed to identify the four in-plane material parameters of the macroscopic equivalent orthotropic behavior. The key point, which may limit the ability to draw reliable conclusions, is the presence of acquisition noise in the original images that has to be tracked along the DIC/identification processing to provide uncertainties on the identified parameters. A further regularization based on a priori knowledge is finally introduced to compensate for possible lack of experimental information needed for completing the identification.
Clark, Martyn; Samaniego, Luis; Freer, Jim
2014-05-01
Multi-model and multi-physics approaches are a popular tool in environmental modelling, with many studies focusing on optimally combining output from multiple model simulations to reduce predictive errors and better characterize predictive uncertainty. However, a careful and systematic analysis of different hydrological models reveals that individual models are simply small permutations of a master modeling template, and inter-model differences are overwhelmed by uncertainty in the choice of the parameter values in the model equations. Furthermore, inter-model differences do not explicitly represent the uncertainty in modeling a given process, leading to many situations where different models provide the wrong results for the same reasons. In other cases, the available morphological data does not support the very fine spatial discretization of the landscape that typifies many modern applications of process-based models. To make the uncertainty characterization problem worse, the uncertain parameter values in process-based models are often fixed (hard-coded), and the models lack the agility necessary to represent the tremendous heterogeneity in natural systems. This presentation summarizes results from a systematic analysis of uncertainty in process-based hydrological models, where we explicitly analyze the myriad of subjective decisions made throughout both the model development and parameter estimation process. Results show that much of the uncertainty is aleatory in nature - given a "complete" representation of dominant hydrologic processes, uncertainty in process parameterizations can be represented using an ensemble of model parameters. Epistemic uncertainty associated with process interactions and scaling behavior is still important, and these uncertainties can be represented using an ensemble of different spatial configurations. Finally, uncertainty in forcing data can be represented using ensemble methods for spatial meteorological analysis. Our systematic
Rohmer, Jeremy; Rousseau, Marie; Lemoine, Anne; Pedreros, Rodrigo; Lambert, Jerome; benki, Aalae
2017-04-01
and numerical simulations. In order to learn the uncertainty information on source parameters, we treat the problem within the Bayesian setting, which enables us to incorporate the different uncertainty sources in a flexible manner. We propose to rely on an emerging technique called Approximate Bayesian Computation (ABC), which has been developed to estimate the posterior distribution in modelling scenarios where the likelihood function is either unknown or cannot be explicitly defined. To overcome the computational issue, we combine ABC with statistical emulators (aka meta-models). We apply the proposed approach to the case study of the Ligurian (North West Italy) tsunami of 1887 and discuss the results with special attention paid to the impact of observational error.
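The core ABC idea described here can be sketched with rejection sampling: draw source parameters from the prior, run the (emulated) simulator, and keep only draws whose simulated summary lands near the observation. The simulator, prior, and tolerance below are toy assumptions, not the tsunami model of the study.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy stand-in for the (emulated) tsunami simulator: maps a source
# parameter to a noisy summary statistic (names are illustrative).
def simulator(source_size):
    return source_size * 2.0 + rng.normal(0.0, 0.1)

observed_summary = 4.0  # e.g. a run-up height summary, with its own error

# ABC rejection sampling: sample the prior, accept parameters whose
# simulated summary falls within a tolerance of the observation.
prior_draws = rng.uniform(0.0, 5.0, 20000)
tolerance = 0.1
accepted = [s for s in prior_draws
            if abs(simulator(s) - observed_summary) < tolerance]

posterior_mean = float(np.mean(accepted))  # concentrates near 2.0 here
```

In practice the expensive simulator is replaced by a statistical emulator exactly so that this many prior draws become affordable, which is the combination the abstract proposes.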
Ge, Junqiang; Yan, Renbin; Cappellari, Michele; Mao, Shude; Li, Hongyu; Lu, Youjun
2018-05-01
Using mock spectra based on the Vazdekis/MILES library fitted within the wavelength region 3600-7350 Å, we analyze the bias and scatter in the resulting physical parameters induced by the choice of fitting algorithm and by observational uncertainties, while avoiding the effects of model uncertainties. We consider two full-spectrum fitting codes, pPXF and STARLIGHT, in fitting for stellar population age, metallicity, mass-to-light ratio, and dust extinction. With pPXF we find that both the bias μ in the population parameters and the scatter σ in the recovered logarithmic values follow the expected trend μ ∝ σ ∝ 1/(S/N). The bias increases for younger ages and systematically makes recovered ages older, M*/Lr larger and metallicities lower than the true values. For reference, at S/N = 30, and for the worst case (t = 10^8 yr), the bias is 0.06 dex in M*/Lr and 0.03 dex in both age and [M/H]. There is no significant dependence on either E(B-V) or the shape of the error spectrum. Moreover, the results are consistent for both our 1-SSP and 2-SSP tests. With the STARLIGHT algorithm we find trends similar to pPXF, although dust extinction and [M/H] are significantly underestimated, and recovered ages and M*/Lr are larger. Results degrade when moving from our 1-SSP to the 2-SSP tests. The STARLIGHT convergence to the true values can be improved by increasing the number of Markov chains and annealing loops in the "slow mode". For the same input spectrum, pPXF is about two orders of magnitude faster than STARLIGHT's "default mode" and about three orders of magnitude faster than STARLIGHT's "slow mode".
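The σ ∝ 1/(S/N) trend reported above can be illustrated with a toy Monte Carlo: fit the amplitude of a known template to noisy mock "spectra" at two S/N levels and compare the scatter of the recovered values. The template and fit are deliberately simplified stand-ins, not the MILES-based full-spectrum fits of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "spectrum" template; the fitted parameter is its amplitude.
template = np.sin(np.linspace(0.0, 3.0, 200)) + 2.0
true_amp = 1.0

def recovered_scatter(snr, n_trials=500):
    """Scatter of the recovered amplitude at a given signal-to-noise ratio."""
    noise_sigma = np.mean(template) / snr
    estimates = []
    for _ in range(n_trials):
        spec = true_amp * template + rng.normal(0.0, noise_sigma, template.size)
        # Least-squares amplitude fit of the template to the noisy spectrum.
        amp = np.dot(spec, template) / np.dot(template, template)
        estimates.append(amp)
    return float(np.std(estimates))

scatter_low = recovered_scatter(snr=10)
scatter_high = recovered_scatter(snr=30)
# The scatter at S/N = 30 should be roughly a third of that at S/N = 10.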
Energy Technology Data Exchange (ETDEWEB)
Gomez-Magan, J. J.; Fernandez, I.; Gil, J.; Marrao, H.
2013-07-01
The DAKOTA-SCAIS-MAAP-R toolchain has been applied to the analysis of a loss-of-coolant sequence in the primary circuit of a Westinghouse-design three-loop PWR plant, treating parameters and event times as uncertain. This makes it possible to verify the safety analysis in cases where timing uncertainty represents an important parameter.
Energy Transitions towards Sustainability I: A Staged Exploration of Complexity and Deep Uncertainty
Pruyt, E.; Kwakkel, J.; Yucel, G.; Hamarat, C.
2011-01-01
This paper illustrates the use of Exploratory System Dynamics Modeling and Analysis – a multi-method combining System Dynamics and Exploratory Modeling and Analysis to explore and analyze uncertain dynamic issues and test deep policy robustness. This paper gives an overview of the current state of this multi-method by means of an illustration. The multimethod is applied to the transition of the electricity generation system, more specifically the battle between old and new electricity generat...
Directory of Open Access Journals (Sweden)
K. Butterbach-Bahl
2012-10-01
Assessing the uncertainties of simulation results of ecological models is becoming increasingly important, specifically if these models are used to estimate greenhouse gas emissions at site to regional/national levels. Four general sources of uncertainty affect the outcome of process-based models: (i) uncertainty of information used to initialise and drive the model, (ii) uncertainty of model parameters describing specific ecosystem processes, (iii) uncertainty of the model structure, and (iv) accurateness of measurements (e.g., soil-atmosphere greenhouse gas exchange) which are used for model testing and development. The aim of our study was to assess the simulation uncertainty of the process-based biogeochemical model LandscapeDNDC. For this we set up a Bayesian framework using a Markov Chain Monte Carlo (MCMC) method to estimate the joint model parameter distribution. Data for model testing, parameter estimation and uncertainty assessment were taken from observations of soil fluxes of nitrous oxide (N2O), nitric oxide (NO) and carbon dioxide (CO2) as observed over a 10 yr period at the spruce site of the Höglwald Forest, Germany. By running four independent Markov Chains in parallel with identical properties (except for the parameter start values), an objective criterion for chain convergence developed by Gelman et al. (2003) could be used. Our approach shows that by means of the joint parameter distribution, we were able not only to limit the parameter space and specify the probability of parameter values, but also to assess the complex dependencies among model parameters used for simulating soil C and N trace gas emissions. This helped to improve the understanding of the behaviour of the complex LandscapeDNDC model while simulating soil C and N turnover processes and associated C and N soil-atmosphere exchange. In a final step the parameter distribution of the most sensitive parameters determining soil-atmosphere C and N exchange were used to obtain
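The multi-chain convergence check mentioned above (Gelman et al.) is the potential scale reduction factor R-hat, which compares within-chain and between-chain variance; values near 1 indicate convergence. A minimal sketch for one parameter follows, using synthetic chains in place of the LandscapeDNDC MCMC output.

```python
import numpy as np

def gelman_rubin(chains):
    """Potential scale reduction factor R-hat for one parameter.

    chains: 2-D array of shape (n_chains, n_samples). Values close to 1
    indicate that the parallel chains have converged to the same target.
    """
    chains = np.asarray(chains, float)
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    within = chains.var(axis=1, ddof=1).mean()   # W: mean within-chain variance
    between = n * chain_means.var(ddof=1)        # B: variance between chain means
    var_hat = (n - 1) / n * within + between / n # pooled variance estimate
    return float(np.sqrt(var_hat / within))

# Four synthetic "chains" drawn from the same target distribution,
# standing in for the four parallel LandscapeDNDC chains.
rng = np.random.default_rng(7)
chains = rng.normal(0.0, 1.0, size=(4, 2000))
r_hat = gelman_rubin(chains)
```

Chains started from dispersed points that have not yet mixed would instead give R-hat well above 1, which is the signal to keep sampling.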
Energy Technology Data Exchange (ETDEWEB)
Meyer, Philip D.; Ye, Ming; Rockhold, Mark L.; Neuman, Shlomo P.; Cantrell, Kirk J.
2007-07-30
This report to the Nuclear Regulatory Commission (NRC) describes the development and application of a methodology to systematically and quantitatively assess predictive uncertainty in groundwater flow and transport modeling that considers the combined impact of hydrogeologic uncertainties associated with the conceptual-mathematical basis of a model, model parameters, and the scenario to which the model is applied. The methodology is based on an extension of a Maximum Likelihood implementation of Bayesian Model Averaging. Model uncertainty is represented by postulating a discrete set of alternative conceptual models for a site with associated prior model probabilities that reflect a belief about the relative plausibility of each model based on its apparent consistency with available knowledge and data. Posterior model probabilities are computed and parameter uncertainty is estimated by calibrating each model to observed system behavior; prior parameter estimates are optionally included. Scenario uncertainty is represented as a discrete set of alternative future conditions affecting boundary conditions, source/sink terms, or other aspects of the models, with associated prior scenario probabilities. A joint assessment of uncertainty results from combining model predictions computed under each scenario using as weights the posterior model and prior scenario probabilities. The uncertainty methodology was applied to modeling of groundwater flow and uranium transport at the Hanford Site 300 Area. Eight alternative models representing uncertainty in the hydrogeologic and geochemical properties as well as the temporal variability were considered. Two scenarios representing alternative future behavior of the Columbia River adjacent to the site were considered. The scenario alternatives were implemented in the models through the boundary conditions. Results demonstrate the feasibility of applying a comprehensive uncertainty assessment to large-scale, detailed groundwater flow
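The weighting scheme described above reduces to a small amount of arithmetic once the discrete model and scenario sets are fixed: posterior model probabilities come from Bayes' rule over the model set, and predictions are averaged with posterior model times prior scenario weights. All numbers below are illustrative, not values from the Hanford application.

```python
import numpy as np

# Four alternative conceptual models with equal prior plausibility,
# and their (hypothetical) calibration log-likelihoods.
prior_model_prob = np.array([0.25, 0.25, 0.25, 0.25])
log_likelihood = np.array([-10.0, -12.0, -11.0, -15.0])

# Posterior model probabilities: Bayes' rule over the discrete model set.
w = prior_model_prob * np.exp(log_likelihood - log_likelihood.max())
posterior_model_prob = w / w.sum()

# Two alternative future scenarios with prior probabilities.
prior_scenario_prob = np.array([0.6, 0.4])

# Hypothetical predictions of some quantity per (model, scenario).
predictions = np.array([[1.0, 1.4],
                        [1.2, 1.5],
                        [0.9, 1.3],
                        [1.1, 1.6]])

# Joint assessment: weight each prediction by its posterior model
# probability and prior scenario probability.
bma_prediction = float(posterior_model_prob @ predictions @ prior_scenario_prob)
```

Subtracting the maximum log-likelihood before exponentiating is the usual guard against underflow when likelihoods span many orders of magnitude.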
International Nuclear Information System (INIS)
Frosio, Thomas; Bonaccorsi, Thomas; Blaise, Patrick
2016-01-01
Highlights: • Nuclear data uncertainty propagation for neutronic quantities in coupled problems. • Uncertainties are detailed for local isotopic concentrations and local power maps. • Correlations are built between space areas of the core and for different burnups. - Abstract: In a previous paper, a method was investigated to calculate sensitivity coefficients in the coupled Boltzmann/Bateman problem for nuclear data (ND) uncertainty propagation on the reactivity. Different methodologies were discussed and applied to an actual example of a multigroup cross section uncertainty problem for a 2D Material Testing Reactor (MTR) benchmark. It was shown that differences between methods arose from correlations between input parameters, insofar as the method is able to take them into account. Those methods, unlike Monte Carlo (MC) sampling for uncertainty propagation and quantification (UQ), allow obtaining sensitivity coefficients, as well as correlation values between nuclear data, during the depletion calculation for the parameters of interest. This work is here extended to local parameters such as power factors and isotopic concentrations. It also includes fission yield (FY) uncertainty propagation, on both reactivity and power factors. Furthermore, it introduces a new methodology enabling the decorrelation of direct and transmutation terms for local quantities: a Monte Carlo method using samples built from a multidimensional Gaussian law is used to extend the previous studies and propagate fission yield uncertainties from the CEA's COMAC covariance file. It is shown that, for power factors, the most impactful ND are the scattering reactions, principally coming from 27Al and (bound hydrogen in) H2O. The overall effect is a reduction of the propagated uncertainties throughout the cycle thanks to negatively correlated terms. For fission yields (FY), the results show that neither reactivity nor local power factors are strongly affected by uncertainties. However, they
International Nuclear Information System (INIS)
Liang, Shidong; Jia, Haifeng; Xu, Changqing; Xu, Te; Melching, Charles
2016-01-01
Facing increasingly serious water pollution, the Chinese government is changing its environmental management strategy from solely pollutant concentration control to a Total Maximum Daily Load (TMDL) program, and water quality models are increasingly being applied to determine the allowable pollutant load in the TMDL. Despite the frequent use of models, few studies have focused on how parameter uncertainty in water quality models affects the allowable pollutant loads in the TMDL program, particularly for complicated, high-dimensional water quality models. Uncertainty analysis for such models is limited by time-consuming simulation and by high dimensionality and nonlinearity in parameter spaces. In this study, an allowable pollutant load calculation platform was established using the Environmental Fluid Dynamics Code (EFDC), a widely applied hydrodynamic-water quality model. A Bayesian approach, the DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm, which is a high-efficiency, multi-chain Markov Chain Monte Carlo (MCMC) method, was applied to assess the effects of parameter uncertainty on the water quality model simulations and its influence on the allowable pollutant load calculation in the TMDL program. Miyun Reservoir, the most important surface drinking water source for Beijing, suffers from eutrophication and was selected as a case study. The relations between pollutant loads and water quality indicators were obtained through a graphical method in the simulation platform. Ranges of allowable pollutant loads were obtained from the results of the parameter uncertainty analysis: Total Organic Carbon (TOC): 581.5–1030.6 t·yr⁻¹; Total Phosphorus (TP): 23.3–31.0 t·yr⁻¹; and Total Nitrogen (TN): 480–1918.0 t·yr⁻¹. The wide ranges of allowable pollutant loads reveal the importance of parameter uncertainty analysis in a TMDL program for allowable pollutant load calculation and margin of safety (MOS
Kramm, Gerhard
2010-07-01
In this paper we discuss the meaning of the feedback parameter, the greenhouse effect and the transient climate response usually related to the globally averaged energy balance model of Schneider and Mass. After scrutinizing this model and the corresponding planetary radiation balance we state that (a) this globally averaged energy balance model is flawed by unsuitable physical considerations, (b) the planetary radiation balance for an Earth in the absence of an atmosphere is fraught with the inappropriate assumption of a uniform surface temperature, the so-called radiative equilibrium temperature of about 255 K, and (c) the effect of the radiative anthropogenic forcing, considered as a perturbation to the natural system, is much smaller than the uncertainty involved in the solution of the model of Schneider and Mass. This uncertainty is mainly related to the empirical constants suggested by various authors and used for predicting the emission of infrared radiation by the Earth's skin. Furthermore, after inserting the absorption of solar radiation by atmospheric constituents and the exchange of sensible and latent heat between the Earth and the atmosphere into the model of Schneider and Mass, the surface temperatures become appreciably less than the radiative equilibrium temperature. Moreover, neither the model of Schneider and Mass nor the Dines-type two-layer energy balance model for the Earth-atmosphere system, both of which contain the planetary radiation balance for an Earth in the absence of an atmosphere as an asymptotic solution, provides evidence for the existence of the so-called atmospheric greenhouse effect if realistic empirical data are used.
Cheng, Qin-Bo; Chen, Xi; Xu, Chong-Yu; Reinhardt-Imjela, Christian; Schulte, Achim
2014-11-01
In this study, likelihood functions for uncertainty analysis of hydrological models are compared and improved through the following steps: (1) the equivalence between the Nash-Sutcliffe efficiency coefficient (NSE) and the likelihood function with Gaussian independent and identically distributed residuals is proved; (2) a new estimation method for the Box-Cox transformation (BC) parameter is developed to improve the elimination of the heteroscedasticity of model residuals; and (3) three likelihood functions-NSE, Generalized Error Distribution with BC (BC-GED) and Skew Generalized Error Distribution with BC (BC-SGED)-are applied for SWAT-WB-VSA (Soil and Water Assessment Tool - Water Balance - Variable Source Area) model calibration in the Baocun watershed, Eastern China. Performances of the calibrated models are compared using observed river discharges and groundwater levels. The results show that the minimum variance constraint can effectively estimate the BC parameter. The form of the likelihood function significantly affects the calibrated parameters and the simulated high- and low-flow components. SWAT-WB-VSA with the NSE approach simulates floods well but baseflow poorly, owing to the assumption of a Gaussian error distribution, under which large errors have low probability while small errors around zero are nearly equiprobable. By contrast, SWAT-WB-VSA with the BC-GED or BC-SGED approach mimics baseflow well, as confirmed by the groundwater level simulation. The assumption of skewness of the error distribution may be unnecessary, because the results of the BC-SGED approach are nearly the same as those of the BC-GED approach.
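The NSE-likelihood equivalence in step (1) rests on both quantities being monotone in the sum of squared errors, so ranking candidate simulations by either criterion selects the same best model. A minimal sketch of this, together with the Box-Cox transform of step (2), follows; the discharge values are made-up illustrations, not Baocun data.

```python
import numpy as np

def box_cox(y, lam):
    """Box-Cox transform used to reduce heteroscedasticity of residuals."""
    y = np.asarray(y, float)
    return np.log(y) if lam == 0 else (y ** lam - 1.0) / lam

def nse(obs, sim):
    """Nash-Sutcliffe efficiency coefficient."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def gauss_loglik(obs, sim):
    """Concentrated Gaussian iid log-likelihood (error variance profiled out)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    n = obs.size
    sse = np.sum((obs - sim) ** 2)
    return -0.5 * n * np.log(sse / n)

# Both criteria are monotone in the sum of squared errors, so they rank
# candidate simulations identically.
obs = np.array([3.0, 5.0, 4.0, 6.0, 5.5])
candidates = [obs + 0.1, obs + 0.5, obs - 0.3]
best_by_nse = max(range(3), key=lambda i: nse(obs, candidates[i]))
best_by_lik = max(range(3), key=lambda i: gauss_loglik(obs, candidates[i]))
```

Calibrating on Box-Cox-transformed flows, as in BC-GED/BC-SGED, changes which errors dominate the sum of squares, which is exactly why baseflow is reproduced better there.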
Tainio, Marko; Tuomisto, Jouni T; Hänninen, Otto; Ruuskanen, Juhani; Jantunen, Matti J; Pekkanen, Juha
2007-08-23
The estimation of health impacts often involves uncertain input variables and assumptions which have to be incorporated into the model structure. These uncertainties may have significant effects on the results obtained with the model and, thus, on decision making. Fine particles (PM2.5) are believed to cause major health impacts, and, consequently, uncertainties in their health impact assessment have clear relevance to policy-making. We studied the effects of various uncertain input variables by building a life-table model for fine particles. Life expectancy of the Helsinki metropolitan area population and the change in life expectancy due to fine particle exposures were predicted using a life-table model. A number of parameter and model uncertainties were estimated. Sensitivity analysis for input variables was performed by calculating rank-order correlations between input and output variables. The studied model uncertainties were (i) plausibility of mortality outcomes and (ii) lag, and the parameter uncertainties were (iii) exposure-response coefficients for different mortality outcomes and (iv) exposure estimates for different age groups. The monetary value of the years of life lost and the relative importance of the uncertainties related to monetary valuation were predicted to compare the relative importance of the monetary valuation against the health effect uncertainties. The magnitude of the health effect costs depended mostly on the discount rate, the exposure-response coefficient, and the plausibility of cardiopulmonary mortality. Other mortality outcomes (lung cancer, other non-accidental and infant mortality) and lag had only minor impact on the output. The results highlight the importance of the uncertainties associated with cardiopulmonary mortality in fine particle impact assessment when compared with other uncertainties. When estimating life expectancy, the estimates used for the cardiopulmonary exposure-response coefficient, discount rate, and plausibility require careful
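The rank-order correlation sensitivity analysis described above can be sketched with Monte Carlo samples: correlate the ranks of each uncertain input with the ranks of the model output to see which inputs dominate. The input distributions and the toy cost function below are hypothetical, not the Helsinki life-table model.

```python
import numpy as np

def rank_corr(x, y):
    """Spearman rank-order correlation, computed with NumPy only."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    return float(np.corrcoef(rx, ry)[0, 1])

rng = np.random.default_rng(3)
n = 5000

# Hypothetical Monte Carlo samples of uncertain inputs.
er_coef = rng.normal(0.06, 0.02, n)    # exposure-response coefficient
discount = rng.uniform(0.0, 0.05, n)   # discount rate
lag = rng.uniform(0.0, 15.0, n)        # lag (years), weak influence here

# Toy output: a health-effect cost dominated by the first two inputs.
cost = 100 * er_coef - 100 * discount + 0.1 * lag + rng.normal(0.0, 0.1, n)

sensitivities = {name: rank_corr(x, cost)
                 for name, x in [("er_coef", er_coef),
                                 ("discount", discount),
                                 ("lag", lag)]}
```

Inputs with rank correlations near zero, like lag here, are exactly the ones the abstract reports as having "only minor impact on the output".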
Directory of Open Access Journals (Sweden)
Jantunen Matti J
2007-08-01
Abstract Background The estimation of health impacts often involves uncertain input variables and assumptions which have to be incorporated into the model structure. These uncertainties may have significant effects on the results obtained with the model and, thus, on decision making. Fine particles (PM2.5) are believed to cause major health impacts, and, consequently, uncertainties in their health impact assessment have clear relevance to policy-making. We studied the effects of various uncertain input variables by building a life-table model for fine particles. Methods Life expectancy of the Helsinki metropolitan area population and the change in life expectancy due to fine particle exposures were predicted using a life-table model. A number of parameter and model uncertainties were estimated. Sensitivity analysis for input variables was performed by calculating rank-order correlations between input and output variables. The studied model uncertainties were (i) plausibility of mortality outcomes and (ii) lag, and the parameter uncertainties were (iii) exposure-response coefficients for different mortality outcomes and (iv) exposure estimates for different age groups. The monetary value of the years of life lost and the relative importance of the uncertainties related to monetary valuation were predicted to compare the relative importance of the monetary valuation against the health effect uncertainties. Results The magnitude of the health effect costs depended mostly on the discount rate, the exposure-response coefficient, and the plausibility of cardiopulmonary mortality. Other mortality outcomes (lung cancer, other non-accidental and infant mortality) and lag had only minor impact on the output. The results highlight the importance of the uncertainties associated with cardiopulmonary mortality in fine particle impact assessment when compared with other uncertainties. Conclusion When estimating life expectancy, the estimates used for the cardiopulmonary exposure
Directory of Open Access Journals (Sweden)
Berman Barbara
2011-03-01
Full Text Available Abstract Background The Mayo Lung Project (MLP), a randomized controlled clinical trial of lung cancer screening conducted between 1971 and 1986 among male smokers aged 45 or above, demonstrated an increase in lung cancer survival from the time of diagnosis, but no reduction in lung cancer mortality. Whether this result necessarily indicates a lack of mortality benefit for screening remains controversial. A number of hypotheses have been proposed to explain the observed outcome, including over-diagnosis, screening sensitivity, and population heterogeneity (an initial difference in lung cancer risks between the two trial arms). This study is intended to provide model-based testing for some of these important arguments. Method Using a micro-simulation model, the MISCAN-lung model, we explore the possible influence of screening sensitivity, systematic error, over-diagnosis and population heterogeneity. Results Calibrating screening sensitivity, systematic error, or over-diagnosis does not noticeably improve the fit of the model, whereas calibrating population heterogeneity helps the model predict lung cancer incidence better. Conclusions Our conclusion is that the hypothesized imperfections in screening sensitivity, systematic error, and over-diagnosis do not in themselves explain the observed trial results. The model fit improvement achieved by accounting for population heterogeneity suggests a higher risk of cancer incidence in the intervention group as compared with the control group.
International Nuclear Information System (INIS)
Landsberg, P.T.
1990-01-01
This paper explores how the quantum mechanics uncertainty relation can be considered to result from measurements. A distinction is drawn between the uncertainties obtained by scrutinising experiments and the standard deviation type of uncertainty definition used in quantum formalism. (UK)
International Nuclear Information System (INIS)
Piepel, Gregory F.; Cooley, Scott K.; Kuhn, William L.; Rector, David R.; Heredia-Langner, Alejandro
2015-01-01
This report discusses the statistical methods for quantifying uncertainties in 1) test responses and other parameters in the Large Scale Integrated Testing (LSIT), and 2) estimates of coefficients and predictions of mixing performance from models that relate test responses to test parameters. Testing at a larger scale has been committed to by Bechtel National, Inc. and the U.S. Department of Energy (DOE) to "address uncertainties and increase confidence in the projected, full-scale mixing performance and operations" in the Waste Treatment and Immobilization Plant (WTP).
Samadi, A.; Amiri-Tokaldany, E.; Davoudi, M. H.; Darby, S. E.
2011-11-01
Composite river banks consist of a basal layer of non-cohesive material overlain by a cohesive layer of fine-grained material. In such banks, fluvial erosion of the lower, non-cohesive, layer typically occurs at a much higher rate than erosion of the upper part of the bank. Consequently, such banks normally develop a cantilevered bank profile, with bank retreat of the upper part of the bank taking place predominantly by the failure of these cantilevers. To predict the undesirable impacts of this type of bank retreat, a number of bank stability models have been presented in the literature. These models typically express bank stability by defining a factor of safety as the ratio of resisting and driving forces acting on the incipient failure block. These forces are affected by a range of controlling factors that include such aspects as the overhanging block geometry, and the geotechnical properties of the bank materials. In this paper, we introduce a new bank stability relation (for shear-type cantilever failures) that considers the hydrological status of cantilevered riverbanks, while beam-type failures are analyzed using a previously proposed relation. We employ these stability models to evaluate the effects of parameter uncertainty on the reliability of riverbank stability modeling of overhanging banks. This is achieved by employing a simple model of overhanging failure with respect to shear and beam failure mechanisms in a series of sensitivity tests and Monte Carlo analyses to identify, for each model parameter, the range of values that induce significant changes in the simulated factor of safety. The results show that care is required in parameterising (i) the geometrical shape of the overhanging-block and (ii) the bank material cohesion and unit weight, as predictions of bank stability are sensitive to variations of these factors.
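The Monte Carlo treatment of the factor of safety described above can be sketched as follows. All distributions and the shear-failure geometry here are hypothetical placeholders, not the paper's calibrated values; the factor of safety is simply the ratio of resisting to driving forces acting on the incipient failure block.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10000

# Hypothetical parameter distributions (illustrative values, not field data).
cohesion = rng.lognormal(np.log(12e3), 0.4, n)     # bank material cohesion, Pa
unit_weight = rng.normal(18e3, 1e3, n)             # bank material unit weight, N/m3
block_width = rng.uniform(0.3, 0.8, n)             # overhanging block width, m
block_height = rng.uniform(0.5, 1.5, n)            # overhanging block height, m

# Shear-type cantilever failure (per unit channel length):
# resisting force = cohesion acting over the vertical failure plane,
# driving force   = weight of the overhanging block.
resisting = cohesion * block_height
driving = unit_weight * block_width * block_height
fs = resisting / driving                           # factor of safety

p_failure = float(np.mean(fs < 1.0))               # fraction of unstable realisations
```

Repeating the calculation while holding one parameter at its mean isolates that parameter's contribution to the spread in the simulated factor of safety.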
Lam, H K; Leung, Frank H F
2005-04-01
This paper presents a fuzzy controller, which involves a fuzzy combination of local fuzzy and global switching state-feedback controllers, for nonlinear systems subject to parameter uncertainties with known bounds. The nonlinear system is represented by a fuzzy combined Takagi-Sugeno-Kang model, which is a fuzzy combination of the global and local fuzzy plant models. By combining the local fuzzy and global switching state-feedback controllers using fuzzy logic techniques, the advantages of both controllers can be retained and the undesirable chattering effect introduced by the global switching state-feedback controller can be eliminated. The steady-state error introduced by the global switching state-feedback controller when a saturation function is used can also be removed. Stability conditions, which are related to the system matrices of the local and global closed-loop systems, are derived to guarantee the closed-loop system stability. An application example will be given to demonstrate the merits of the proposed approach.
An open-source job management framework for parameter-space exploration: OACIS
Murase, Y.; Uchitane, T.; Ito, N.
2017-11-01
We present an open-source software framework for parameter-space exploration, named OACIS, which is useful for managing vast numbers of simulation jobs and their results in a systematic way. Recent developments in high-performance computing have enabled us to explore parameter spaces comprehensively; however, in such cases, manual management of the workflow is practically impossible. OACIS was developed with the aim of reducing the cost of these repetitive tasks when conducting simulations, by automating job submissions and data management. In this article, an overview of OACIS as well as a getting-started guide are presented.
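OACIS itself is a full job-management system rather than a Python library; the repetitive pattern it automates — enumerating a parameter space, skipping already-completed runs, and storing results keyed by their parameters — can be sketched generically as:

```python
import hashlib
import itertools
import json

def run_simulation(params):
    # Stand-in for an external simulator; a real framework would submit
    # this as a batch job and collect the result asynchronously.
    return {"energy": params["temperature"] * params["coupling"]}

param_space = {
    "temperature": [0.5, 1.0, 2.0],
    "coupling": [0.1, 0.2],
}

results = {}
for values in itertools.product(*param_space.values()):
    params = dict(zip(param_space.keys(), values))
    # Content-addressed key, so repeated sweeps reuse finished runs.
    key = hashlib.sha1(json.dumps(params, sort_keys=True).encode()).hexdigest()
    if key not in results:  # skip jobs that are already completed
        results[key] = {"params": params, "output": run_simulation(params)}
```

Keying runs by a hash of their parameter set is what makes re-running a sweep after adding new parameter values cheap: only the missing combinations are executed.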
DEFF Research Database (Denmark)
Manzo, Stefano; Nielsen, Otto Anker; Prato, Carlo Giacomo
2015-01-01
showed a lower dispersion around the base uncertainty value. Results are also obtained from the implementation of the analysis on a real case involving the finalization of a ring road around Næstved. Three different scenarios were tested. The resulting uncertainty in the travel time savings from...
Energy Technology Data Exchange (ETDEWEB)
Hunke, Elizabeth Clare [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Urrego Blanco, Jorge Rolando [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Urban, Nathan Mark [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2018-02-12
Coupled climate models have a large number of input parameters that can affect output uncertainty. We conducted a sensitivity analysis of sea ice properties and Arctic-related climate variables to 5 parameters in the HiLAT climate model: air-ocean turbulent exchange parameter (C), conversion of water vapor to clouds (cldfrc_rhminl) and of ice crystals to snow (micro_mg_dcs), snow thermal conductivity (ksno), and maximum snow grain size (rsnw_mlt). We used an elementary effect (EE) approach to rank their importance for output uncertainty. EE is an extension of one-at-a-time sensitivity analyses, but it is more efficient in sampling multi-dimensional parameter spaces. We looked for emerging relationships among climate variables across the model ensemble, and used causal discovery algorithms to establish potential pathways for those relationships.
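A minimal version of the elementary-effect idea can be sketched as follows. The model, bounds, and sampling scheme are toy assumptions; a full Morris design uses shared trajectories through the parameter space rather than independent base points, which is what makes it cheaper than naive one-at-a-time analysis.

```python
import numpy as np

def elementary_effects(f, bounds, r=20, delta=0.1, seed=1):
    """Crude elementary-effect screening: mean absolute finite-difference
    effect per parameter over r random base points (inputs scaled to [0, 1])."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    k = len(lo)
    mu_star = np.zeros(k)
    for _ in range(r):
        u = rng.uniform(0.0, 1.0 - delta, size=k)   # base point in the unit cube
        y0 = f(lo + (hi - lo) * u)
        for i in range(k):
            up = u.copy()
            up[i] += delta                           # perturb one parameter
            yi = f(lo + (hi - lo) * up)
            mu_star[i] += abs(yi - y0) / delta
    return mu_star / r

# Toy model: output strongly sensitive to x0, weakly to x1, not at all to x2.
model = lambda x: 10.0 * x[0] + 0.5 * x[1] ** 2 + 0.0 * x[2]
bounds = np.array([[0.0, 1.0]] * 3)
mu = elementary_effects(model, bounds)
```

Ranking parameters by `mu_star` (the mean absolute elementary effect) identifies the inputs that dominate output uncertainty.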
Hartley, Lisa; Fleay, Caroline; Tye, Marian E
2017-05-01
This paper explores engagement in physical activity as a potential coping strategy for asylum seekers living in the Australian community without the right to work and with prolonged uncertainty, and the benefits of and barriers to undertaking such activity. Semi-structured in-depth interviews were held with 29 asylum seekers who had arrived in Australia by boat and were living in the community in the cities of Perth, Sydney or Melbourne in July-October 2013 after their release from immigration detention. The ratio of the numbers of men and women interviewed (23 men and 6 women) was comparable to the ratio of men and women who came by boat to Australia seeking asylum in 2012-2013. Nine participants reported that they participated in physical activity as a coping strategy. Seven other participants were so worried about their future and their families that they did not have the mental or physical energy to engage in physical activity. A further six wanted to participate in physical activity but faced a number of barriers to doing so. The seven remaining participants were either not asked about their physical activity engagement because they focused their discussion on other challenges or did not elaborate on why they were not engaging in physical activity. The findings suggest that physical activity, coupled with other coping strategies, is important for some asylum seekers in trying to manage the distress of being denied the right to work and living with prolonged uncertainty. In addition, these findings highlight the critical barrier that government policy plays in disabling engagement in physical activity, which further compounds social exclusion. This includes the lack of welfare support provided, which hinders people's financial ability to access activities and support in the community. © 2017 John Wiley & Sons Ltd.
Valade, A.; Ciais, P.; Vuichard, N.; Viovy, N.; Caubel, A.; Huth, N.; Marin, F.; Martiné, J.-F.
2014-06-01
Agro-land surface models (agro-LSM) have been developed from the integration of specific crop processes into large-scale generic land surface models that allow calculating the spatial distribution and variability of energy, water and carbon fluxes within the soil-vegetation-atmosphere continuum. When developing agro-LSM models, particular attention must be given to the effects of crop phenology and management on the turbulent fluxes exchanged with the atmosphere, and the underlying water and carbon pools. A part of the uncertainty of agro-LSM models is related to their usually large number of parameters. In this study, we quantify the parameter-value uncertainty in the simulation of sugarcane biomass production with the agro-LSM ORCHIDEE-STICS, using a multi-regional approach with data from sites in Australia, La Réunion and Brazil. In ORCHIDEE-STICS, two models are chained: STICS, an agronomy model that calculates phenology and management, and ORCHIDEE, a land surface model that calculates biomass and other ecosystem variables forced by STICS phenology. First, the parameters that dominate the uncertainty of simulated biomass at harvest date are determined through a screening of 67 different parameters of both STICS and ORCHIDEE on a multi-site basis. Second, the uncertainty of harvested biomass attributable to those most sensitive parameters is quantified and specifically attributed to either STICS (phenology, management) or to ORCHIDEE (other ecosystem variables including biomass) through distinct Monte Carlo runs. The uncertainty on parameter values is constrained using observations by calibrating the model independently at seven sites. In a third step, a sensitivity analysis is carried out by varying the most sensitive parameters to investigate their effects at continental scale. A Monte Carlo sampling method associated with the calculation of partial ranked correlation coefficients is used to quantify the sensitivity of harvested biomass to input
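The partial ranked correlation coefficient (PRCC) calculation mentioned above can be sketched as below, with an invented three-parameter toy model standing in for ORCHIDEE-STICS: rank-transform the inputs and the output, regress the other parameters out of both, and correlate the residuals.

```python
import numpy as np

def _rank(x):
    order = np.argsort(x)
    r = np.empty(len(x))
    r[order] = np.arange(len(x), dtype=float)
    return r

def prcc(X, y):
    """Partial ranked correlation coefficients: for each column of X,
    correlate the rank residuals of that column and of y after linearly
    regressing out the remaining (rank-transformed) columns."""
    n, k = X.shape
    R = np.column_stack([_rank(X[:, j]) for j in range(k)])
    ry = _rank(y)
    out = np.zeros(k)
    for j in range(k):
        others = np.column_stack([np.ones(n), np.delete(R, j, axis=1)])
        res_x = R[:, j] - others @ np.linalg.lstsq(others, R[:, j], rcond=None)[0]
        res_y = ry - others @ np.linalg.lstsq(others, ry, rcond=None)[0]
        out[j] = (res_x @ res_y) / np.sqrt((res_x @ res_x) * (res_y @ res_y))
    return out

rng = np.random.default_rng(3)
X = rng.uniform(size=(2000, 3))
# Toy "harvested biomass": driven by x0, weakly by x1, not by x2.
y = 5.0 * X[:, 0] + 0.5 * X[:, 1] + 0.2 * rng.normal(size=2000)
c = prcc(X, y)
```

Unlike a plain rank correlation, the PRCC discounts the influence that the other sampled parameters exert on the output, which matters when many parameters vary simultaneously in a Monte Carlo design.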
CSIR Research Space (South Africa)
Oosthuizen, Nadia
2017-07-01
Full Text Available Parameter and input data uncertainty estimation for the assessment of water resources in two sub-basins of the Limpopo River Basin
Newell, P.; Yoon, H.; Martinez, M. J.; Bishop, J. E.; Arnold, B. W.; Bryant, S.
2013-12-01
It is essential to couple multiphase flow and geomechanical response in order to predict the consequences of geological storage of CO2. In this study, we estimate key hydrogeologic features that govern the geomechanical response (i.e., surface uplift) at a large-scale CO2 injection project at In Salah, Algeria, using the Sierra Toolkit - a multi-physics simulation code developed at Sandia National Laboratories. Importantly, a jointed rock model is used to study the effect of postulated fractures in the injection zone on the surface uplift. The In Salah Gas Project includes an industrial-scale demonstration of CO2 storage in an active gas field where CO2 from natural gas production is being re-injected into a brine-filled portion of the structure downdip of the gas accumulation. The observed data include millimeter-scale surface deformations (e.g., uplift) reported in the literature and injection well locations and rate histories provided by the operators. Our preliminary results show that the intrinsic permeability and Biot coefficient of the injection zone are important. Moreover, pre-existing fractures within the injection zone affect the uplift significantly. Estimation of additional (i.e., anisotropy ratio) and coupled parameters will help us to develop models which account for the complex relationship between mechanical integrity and CO2 injection-induced pressure changes. Uncertainty quantification of model predictions will also be performed using various algorithms including null-space Monte Carlo and polynomial-chaos expansion methods. This work will highlight that our coupled reservoir and geomechanical simulations associated with parameter estimation can provide a practical solution for designing operating conditions and understanding subsurface processes associated with the CO2 injection. This work is supported as part of the Center for Frontiers of Subsurface Energy Security, an Energy Frontier Research Center funded by the U.S. Department of Energy, Office
Zhang, Kejiang; Achari, Gopal; Li, Hua
2009-11-03
Traditionally, uncertainties in parameters are represented as probabilistic distributions and incorporated into groundwater flow and contaminant transport models. With the advent of newer uncertainty theories, it is now understood that stochastic methods cannot properly represent non-random uncertainties. In the groundwater flow and contaminant transport equations, uncertainty in some parameters may be random, whereas that of others may be non-random. The objective of this paper is to develop a fuzzy-stochastic partial differential equation (FSPDE) model to simulate conditions where both random and non-random uncertainties are involved in groundwater flow and solute transport. Three potential solution techniques, namely (a) transforming a probability distribution to a possibility distribution (Method I), so that the FSPDE becomes a fuzzy partial differential equation (FPDE); (b) transforming a possibility distribution to a probability distribution (Method II), so that the FSPDE becomes a stochastic partial differential equation (SPDE); and (c) combining Monte Carlo methods with FPDE solution techniques (Method III), are proposed and compared. The effects of these three methods on the predictive results are investigated using two case studies. The results show that the predictions obtained from Method II are a specific case of those obtained from Method I. When an exact probabilistic result is needed, Method II is suggested. As the loss or gain of information during a probability-possibility (or vice versa) transformation cannot be quantified, its influence on the predictive results is not known. Thus, Method III should probably be preferred for risk assessments.
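Method I relies on a probability-to-possibility transformation. One common choice (a Dubois-Prade style transformation, shown here for a discrete distribution with distinct probabilities; ties would need extra care) can be sketched as:

```python
import numpy as np

def prob_to_poss(p):
    """Transform a discrete probability distribution into a possibility
    distribution: each outcome's possibility is the total probability of
    all outcomes no more probable than it, so the most probable outcome
    gets possibility 1."""
    p = np.asarray(p, dtype=float)
    order = np.argsort(-p)                       # indices by decreasing probability
    pi = np.empty_like(p)
    # Cumulative tail sums over the probabilities sorted in decreasing order.
    pi[order] = np.cumsum(p[order][::-1])[::-1]
    return pi

p = np.array([0.5, 0.3, 0.2])
pi = prob_to_poss(p)   # most probable outcome receives possibility 1.0
```

This transformation is consistent (the possibility of any event bounds its probability from above) but lossy, which is why the abstract notes that the information change cannot be quantified.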
Directory of Open Access Journals (Sweden)
Chung P. W. H.
2006-11-01
Full Text Available This paper discusses the role of uncertainty in accessing petroleum exploration databases. Two distinct forms of uncertainty are identified: the uncertainty in the user's requirements, and the uncertainty in the data held. A general mechanism is described which is applicable to both.
Energy Technology Data Exchange (ETDEWEB)
Destouches, C.; Beretz, D. [Service de Physique Experimentale, CEA-CAD/DEN/DER/SPEx, Departement d' Etudes des Reacteurs, 13108 St-Paul lez Durance Cedex (France); Devictor, N. [Service d' Etude des Systemes Innovant, CEA-CAD/DEN/DER/SESI, Departement d' Etudes des Reacteurs, 13108 St-Paul lez Durance Cedex (France); Gregoire, G. [Service de Physique Experimentale, CEA-CAD/DEN/DER/SPEx, Departement d' Etudes des Reacteurs, 13108 St-Paul lez Durance Cedex (France)
2006-07-01
One of the main objectives of reactor dosimetry is the determination of the physical parameters characterizing the neutron field in which the studied sample is irradiated. Knowledge of the associated uncertainties represents a significant stake for the nuclear industry, as shown by the high uncertainty value of 15% (k=1) commonly allowed for the calculated neutron flux (E > 1 MeV) on the vessel and internal structures. The study presented in this paper aims at determining, and then reducing, the uncertainties associated with the reactor dosimetry interpretation process. After a brief presentation of the interpretation process, the input data uncertainties are identified and quantified, in particular with regard to covariances. Uncertainty propagation is then carried out and analyzed by deterministic and stochastic methods on a representative case. Finally, a Monte Carlo sensitivity study based on Sobol indices is performed on a case study to derive the most penalizing input uncertainties. The paper concludes by raising axes of improvement to be studied for the knowledge of the input data. It highlights, for example, the need for realistic variance-covariance matrices associated with the input data (cross-section libraries, neutron computation code outputs, ...). Lastly, the methodology presented in this paper is general enough to be easily transposable to other measurement data interpretation processes. (authors)
Energy Technology Data Exchange (ETDEWEB)
Bernard, D
2002-07-01
The aim of this thesis was to evaluate the uncertainties of key neutron parameters of slab reactors. The uncertainty sources have several origins: technological, for fabrication parameters, and physical, for nuclear data. First, each contribution to the uncertainty is calculated and, finally, an uncertainty factor is associated with key slab parameters such as reactivity, isothermal reactivity coefficient, control rod efficiency, power form factor before irradiation, and lifetime. These uncertainty factors were computed by Generalized Perturbation Theory in the case of step 0 and by direct calculations in the case of irradiation problems. One application of neutronic conformity concerned the adjustment of precision targets for fabrication and nuclear data. Statistical (uncertainties) and deterministic (deviations) approaches were studied. The uncertainties of the key neutronic slab parameters were then reduced and the nuclear performances optimised. (author)
Directory of Open Access Journals (Sweden)
N. Oosthuizen
2018-05-01
Full Text Available The demand for water resources is rapidly growing, placing more strain on access to water and its management. In order to appropriately manage water resources, there is a need to accurately quantify available water resources. Unfortunately, the data required for such assessments are frequently far from sufficient in terms of availability and quality, especially in southern Africa. In this study, the uncertainty related to the estimation of water resources of two sub-basins of the Limpopo River Basin – the Mogalakwena in South Africa and the Shashe shared between Botswana and Zimbabwe – is assessed. Input data (and model parameters) are significant sources of uncertainty that should be quantified. In southern Africa, water use data are among the most unreliable sources of model input data because available databases generally consist of only licensed information and actual use is generally unknown. The study assesses how these uncertainties impact the estimation of surface water resources of the sub-basins. Data on farm reservoirs and irrigated areas from various sources were collected and used to run the model. Many farm dams and large irrigation areas are located in the upper parts of the Mogalakwena sub-basin. Results indicate that water use uncertainty is small. Nevertheless, the medium to low flows are clearly impacted. The simulated mean monthly flows at the outlet of the Mogalakwena sub-basin were between 22.62 and 24.68 Mm3 per month when incorporating only the uncertainty related to the main physical runoff-generating parameters. The range of total predictive uncertainty of the model increased to between 22.15 and 24.99 Mm3 when water use data such as small farm dams, large reservoirs and irrigation were included. For the Shashe sub-basin, incorporating only uncertainty related to the main runoff parameters resulted in mean monthly flows between 11.66 and 14.54 Mm3. The range of predictive uncertainty changed to between 11.66 and 17
DEFF Research Database (Denmark)
Morales Rodriguez, Ricardo; Meyer, Anne S.; Gernaey, Krist
2011-01-01
This study presents the development and application of a systematic model-based framework for bioprocess optimization, evaluated on a cellulosic ethanol production case study. The implementation of the framework involves the use of dynamic simulations, sophisticated uncertainty analysis (Monte...
Schmidt, Philip J; Pintar, Katarina D M; Fazil, Aamir M; Topp, Edward
2013-09-01
Dose-response models are the essential link between exposure assessment and computed risk values in quantitative microbial risk assessment, yet the uncertainty that is inherent to computed risks, because the dose-response model parameters are estimated using limited epidemiological data, is rarely quantified. Second-order risk characterization approaches incorporating uncertainty in dose-response model parameters can provide more complete information to decision-makers by separating variability and uncertainty to quantify the uncertainty in computed risks. Therefore, the objective of this work is to develop procedures to sample from posterior distributions describing uncertainty in the parameters of exponential and beta-Poisson dose-response models using Bayes's theorem and Markov chain Monte Carlo (in OpenBUGS). The theoretical origins of the beta-Poisson dose-response model are used to identify a decomposed version of the model that enables Bayesian analysis without the need to evaluate Kummer confluent hypergeometric functions. Herein, it is also established that the beta distribution in the beta-Poisson dose-response model cannot address variation among individual pathogens, criteria to validate use of the conventional approximation to the beta-Poisson model are proposed, and simple algorithms to evaluate actual beta-Poisson probabilities of infection are investigated. The developed MCMC procedures are applied to the analysis of a case study data set, and it is demonstrated that an important region of the posterior distribution of the beta-Poisson dose-response model parameters is attributable to the absence of low-dose data. This region includes beta-Poisson models for which the conventional approximation is especially invalid and in which many beta distributions have an extreme shape of questionable plausibility. © Her Majesty the Queen in Right of Canada 2013. Reproduced with the permission of the Minister of the Public Health Agency of Canada.
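For reference, the exponential model and the conventional approximation to the beta-Poisson model mentioned above have simple closed forms. The parameter values below are illustrative only, not the posterior estimates from the case study, and the beta-Poisson expression shown is the approximation whose validity criteria the paper discusses, not the exact hypergeometric form.

```python
import math

def p_inf_exponential(dose, r):
    """Exponential dose-response: single-hit model with a fixed
    per-pathogen survival probability r."""
    return 1.0 - math.exp(-r * dose)

def p_inf_beta_poisson_approx(dose, alpha, beta):
    """Conventional approximation to the beta-Poisson dose-response model;
    generally considered valid only when beta >> 1 and alpha << beta."""
    return 1.0 - (1.0 + dose / beta) ** (-alpha)

# Illustrative parameter values (hypothetical, not fitted).
p1 = p_inf_exponential(100.0, r=0.005)
p2 = p_inf_beta_poisson_approx(100.0, alpha=0.25, beta=50.0)
```

Second-order risk characterization wraps these deterministic curves in a posterior over (alpha, beta) or r, so each posterior sample yields its own risk curve.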
Dahm, T.; Heimann, S.; Isken, M.; Vasyura-Bathke, H.; Kühn, D.; Sudhaus, H.; Kriegerowski, M.; Daout, S.; Steinberg, A.; Cesca, S.
2017-12-01
Seismic source and moment tensor waveform inversion is often ill-posed or non-unique if station coverage is poor or signals are weak. Therefore, the interpretation of moment tensors can become difficult if the full model space, including all its trade-offs and uncertainties, is not explored. This is especially true for non-double-couple components of weak or shallow earthquakes, as for instance found in volcanic, geothermal or mining environments. We developed a bootstrap-based probabilistic optimization scheme (Grond), which is based on pre-calculated Green's function full-waveform databases (e.g. the fomosto tool, doi.org/10.5880/GFZ.2.1.2017.001). Grond is able to efficiently explore the full model space, the trade-offs and the uncertainties of source parameters. The program is highly flexible with respect to adaptation to specific problems, the design of objective functions, and the diversity of empirical datasets. It uses an integrated, robust waveform data processing based on a newly developed Python toolbox for seismology (Pyrocko, see Heimann et al., 2017, http://doi.org/10.5880/GFZ.2.1.2017.001), and allows for visual inspection of many aspects of the optimization problem. Grond has been applied to CMT moment tensor inversion using W-phases, to nuclear explosions in Korea, to meteorite atmospheric explosions, to volcano-tectonic events during caldera collapse, and to intra-plate volcanic and tectonic crustal events. Grond can be used to simultaneously optimize seismological waveforms, amplitude spectra and static displacements of geodetic data such as InSAR and GPS (e.g. KITE, Isken et al., 2017, http://doi.org/10.5880/GFZ.2.1.2017.002). We present examples of Grond optimizations to demonstrate the advantage of a full exploration of source parameter uncertainties for interpretation.
International Nuclear Information System (INIS)
Ánchel, F.; Barrachina, T.; Miró, R.; Verdú, G.; Juanas, J.; Macián-Juan, R.
2012-01-01
Highlights: ► Best-estimate codes are affected by uncertainty in the methods and the models. ► Influence of the uncertainty in the macroscopic cross-sections in BWR and PWR RIA accident analyses. ► The fast diffusion coefficient, the scattering cross section and both fission cross sections are the most influential factors. ► The absorption cross sections have very little influence. ► Using a normal pdf, the results are more “conservative”, comparing the power peak reached, than when the uncertainty is quantified with a uniform pdf. - Abstract: The Best Estimate analysis consists of a coupled thermal-hydraulic and neutronic description of the nuclear system's behavior; uncertainties from both aspects should be included and jointly propagated. This paper presents a study of the influence of the uncertainty in the macroscopic neutronic information that describes a three-dimensional core model on the most relevant results of the simulation of a Reactivity Induced Accident (RIA). The analyses of a BWR-RIA and a PWR-RIA have been carried out with a three-dimensional thermal-hydraulic and neutronic model for the coupled systems TRACE-PARCS and RELAP-PARCS. The cross-section information has been generated by the SIMTAB methodology based on the joint use of CASMO-SIMULATE. The statistically based methodology performs Monte Carlo sampling of the uncertainty in the macroscopic cross sections. The size of the sample is determined by the characteristics of the tolerance intervals, by applying the Noether–Wilks formulas. A number of simulations equal to the sample size have been carried out, in which the cross sections used by PARCS are directly modified with uncertainty, and non-parametric statistical methods are applied to the resulting sample of output variable values to determine their tolerance intervals.
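The sample size prescribed by the Wilks approach can be computed directly: for a first-order, one-sided tolerance limit, the required number of random runs is the smallest n such that the probability that the largest of n outputs exceeds the desired coverage quantile is at least the desired confidence, i.e. 1 - coverage**n >= confidence. A sketch (first order only):

```python
def wilks_sample_size(coverage=0.95, confidence=0.95, order=1):
    """Smallest number of random code runs n such that the largest output
    of the n runs bounds the `coverage` quantile of the output distribution
    with probability `confidence` (first-order, one-sided Wilks formula)."""
    assert order == 1, "this sketch covers only the first-order formula"
    n = 1
    while 1.0 - coverage ** n < confidence:
        n += 1
    return n

n_9595 = wilks_sample_size()   # the classic 95%/95% one-sided value
```

With this sample size, the maximum observed power peak across the ensemble serves as a statistically justified upper tolerance bound, which is why the number of coupled TRACE-PARCS or RELAP-PARCS runs equals the Wilks sample size.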
Exploring the triplet parameters space to optimise the final focus of the FCC-hh
AUTHOR|(CDS)2141109; Abelleira, Jose; Seryi, Andrei; Cruz Alaniz, Emilia
2017-01-01
One of the main challenges when designing the final focus systems of particle accelerators is maximising the beam stay-clear in the strong quadrupole magnets of the inner triplet. Moreover, it is desirable to keep the quadrupoles in the triplet as short as possible, for space and cost reasons but also to reduce chromaticity and simplify correction schemes. An algorithm that explores the triplet parameter space to optimise both these aspects was written. It uses thin lenses as a first approximation and MADX for more precise calculations. In cooperation with radiation studies, this algorithm was then applied to design an alternative triplet for the final focus of the Future Circular Collider (FCC-hh).
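The thin-lens first approximation mentioned above amounts to multiplying 2x2 transfer matrices. A sketch for one transverse plane follows; the focal lengths and spacing are arbitrary illustrative numbers, not FCC-hh values, and the middle lens is defocusing in this plane as in a standard triplet.

```python
import numpy as np

def thin_quad(f):
    # Thin-lens quadrupole in one transverse plane (focusing for f > 0).
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

def drift(length):
    # Field-free drift space of the given length.
    return np.array([[1.0, length], [0.0, 1.0]])

def triplet(f1, f2, f3, sep):
    """One-plane transfer matrix of a quadrupole triplet with equal drift
    spaces `sep` between the lenses (thin-lens approximation). Elements
    are listed in beam order and composed by left-multiplication."""
    m = np.eye(2)
    for elem in (thin_quad(f1), drift(sep), thin_quad(-f2),
                 drift(sep), thin_quad(f3)):
        m = elem @ m
    return m

m = triplet(f1=25.0, f2=15.0, f3=25.0, sep=10.0)
```

Scanning (f1, f2, f3, sep) over a grid and evaluating the resulting optics is exactly the kind of parameter-space exploration the algorithm performs before handing candidates to MADX.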
Energy Technology Data Exchange (ETDEWEB)
Adams, Brian M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Ebeida, Mohamed Salah [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Eldred, Michael S [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Jakeman, John Davis [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Swiler, Laura Painton [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Stephens, John Adam [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Vigil, Dena M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Wildey, Timothy Michael [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bohnhoff, William J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Eddy, John P. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hu, Kenneth T. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Dalbey, Keith R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bauman, Lara E [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hough, Patricia Diane [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2014-05-01
The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a theoretical manual for selected algorithms implemented within the Dakota software. It is not intended as a comprehensive theoretical treatment, since a number of existing texts cover general optimization theory, statistical analysis, and other introductory topics. Rather, this manual is intended to summarize a set of Dakota-related research publications in the areas of surrogate-based optimization, uncertainty quantification, and optimization under uncertainty that provide the foundation for many of Dakota's iterative analysis capabilities.
Energy Technology Data Exchange (ETDEWEB)
Eldred, Michael Scott; Dalbey, Keith R.; Bohnhoff, William J.; Adams, Brian M.; Swiler, Laura Painton; Hough, Patricia Diane (Sandia National Laboratories, Livermore, CA); Gay, David M.; Eddy, John P.; Haskell, Karen H.
2010-05-01
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a reference manual for the commands specification for the DAKOTA software, providing input overviews, option descriptions, and example specifications.
Energy Technology Data Exchange (ETDEWEB)
Griffin, Joshua D. (Sandia National Labs, Livermore, CA); Eldred, Michael Scott; Martinez-Canales, Monica L. (Sandia National Labs, Livermore, CA); Watson, Jean-Paul; Kolda, Tamara Gibson (Sandia National Labs, Livermore, CA); Adams, Brian M.; Swiler, Laura Painton; Williams, Pamela J. (Sandia National Labs, Livermore, CA); Hough, Patricia Diane (Sandia National Labs, Livermore, CA); Gay, David M.; Dunlavy, Daniel M.; Eddy, John P.; Hart, William Eugene; Giunta, Anthony A.; Brown, Shannon L.
2006-10-01
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a reference manual for the commands specification for the DAKOTA software, providing input overviews, option descriptions, and example specifications.
Directory of Open Access Journals (Sweden)
C. Mesado
2012-01-01
In nuclear safety analysis, it is very important to be able to simulate the different transients that can occur in a nuclear power plant with very high accuracy. Although best estimate codes can simulate the transients and provide realistic system responses, the use of non-exact models, together with assumptions and estimations, is a source of uncertainties which must be properly evaluated. This paper describes a Rod Ejection Accident (REA) simulated using the coupled code RELAP5/PARCSv2.7 with a perturbation on the cross-sectional sets in order to determine the uncertainties in the macroscopic neutronic information. The procedure to perform the uncertainty and sensitivity (U&S) analysis is a sampling-based method which is easy to implement and, despite its high computational cost, allows different procedures for the sensitivity analyses. The DAKOTA-Jaguar software package is the selected toolkit for the U&S analysis presented in this paper. The size of the sample is determined by applying Wilks' formula for double tolerance limits with 95% coverage and 95% statistical confidence for the output variables. Each sample has a corresponding set of perturbations that modify the cross-sectional sets used by PARCS. Finally, the tolerance intervals of the output variables are obtained by the use of nonparametric statistical methods.
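The first-order Wilks sample sizes used in such analyses can be reproduced with a short search over N (a sketch; the function names are ours, not part of DAKOTA):

```python
# Smallest sample size N satisfying first-order Wilks' formula:
# one-sided:           1 - p**N >= beta
# two-sided (double):  1 - N*p**(N-1) + (N-1)*p**N >= beta
# p = coverage (tolerance level), beta = statistical confidence.

def wilks_one_sided(p=0.95, beta=0.95):
    n = 1
    while 1 - p**n < beta:
        n += 1
    return n

def wilks_two_sided(p=0.95, beta=0.95):
    n = 2
    while 1 - n * p**(n - 1) + (n - 1) * p**n < beta:
        n += 1
    return n

print(wilks_one_sided())  # 59
print(wilks_two_sided())  # 93
```

For 95%/95% this gives the familiar 59 runs for one-sided and 93 runs for double (two-sided) tolerance limits.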
De Lannoy, G.J.M.; Reichle, R.H.; Vrugt, J.A.
2014-01-01
Uncertainties in L-band (1.4 GHz) microwave radiative transfer modeling (RTM) affect the simulation of brightness temperatures (Tb) over land and the inversion of satellite-observed Tb into soil moisture retrievals. In particular, accurate estimates of the microwave soil roughness, vegetation
Directory of Open Access Journals (Sweden)
V. E. P. Lemaire
2016-03-01
Because of its sensitivity to unfavorable weather patterns, air pollution is sensitive to climate change, so that in the future a climate penalty could jeopardize the expected efficiency of air pollution mitigation measures. A common method to assess the impact of climate on air quality consists in implementing chemistry-transport models forced by climate projections. However, the computing cost of such methods requires optimizing ensemble exploration techniques. By using a training data set from a deterministic projection of climate and air quality over Europe, we identified the main meteorological drivers of air quality for eight regions in Europe and developed statistical models that could be used to predict air pollutant concentrations. The evolution of the key climate variables driving either particulate or gaseous pollution allows selecting the members of the EuroCordex ensemble of regional climate projections that should be used in priority for future air quality projections (CanESM2/RCA4, CNRM-CM5-LR/RCA4, CSIRO-Mk3-6-0/RCA4 and MPI-ESM-LR/CCLM, following the EuroCordex terminology). After having tested the validity of the statistical model in predictive mode, we can provide ranges of uncertainty attributed to the spread of the regional climate projection ensemble by the end of the century (2071–2100) for the RCP8.5 scenario. In the three regions where the statistical model of the impact of climate change on PM2.5 offers satisfactory performance, we find a climate benefit (a decrease of PM2.5 concentrations under future climate) of −1.08 (±0.21), −1.03 (±0.32) and −0.83 (±0.14) µg m−3 for Eastern Europe, Mid-Europe and Northern Italy, respectively. In the British-Irish Isles, Scandinavia, France, the Iberian Peninsula and the Mediterranean, the statistical model is not considered skillful enough to draw any conclusion for PM2.5. In Eastern Europe, France, the Iberian Peninsula, Mid-Europe and Northern Italy, the statistical model of the
International Nuclear Information System (INIS)
Möbius, E.; Bochsler, P.; Heirtzler, D.; Kucharek, H.; Lee, M. A.; Leonard, T.; Schwadron, N. A.; Wu, X.; Petersen, L.; Valovcin, D.; Wurz, P.; Bzowski, M.; Kubiak, M. A.; Fuselier, S. A.; Crew, G.; Vanderspek, R.; McComas, D. J.; Saul, L.
2012-01-01
Neutral atom imaging of the interstellar gas flow in the inner heliosphere provides the most detailed information on physical conditions of the surrounding interstellar medium (ISM) and its interaction with the heliosphere. The Interstellar Boundary Explorer (IBEX) measured neutral H, He, O, and Ne for three years. We compare the He and combined O+Ne flow distributions for two interstellar flow passages in 2009 and 2010 with an analytical calculation, which is simplified because the IBEX orientation provides observations at almost exactly the perihelion of the gas trajectories. This method allows separate determination of the key ISM parameters: inflow speed, longitude, and latitude, as well as temperature. A combined optimization, as in complementary approaches, is thus not necessary. Based on the observed peak position and width in longitude and latitude, inflow speed, latitude, and temperature are found as a function of inflow longitude. The latter is then constrained by the variation of the observed flow latitude as a function of observer longitude and by the ratio of the widths of the distribution in longitude and latitude. Identical results are found for 2009 and 2010: an He flow vector somewhat outside previous determinations (λ_ISM∞ = 79.0° +3.0°/−3.5°, β_ISM∞ = −4.9° ± 0.2°, V_ISM∞ = 23.5 +3.0/−2.0 km s−1, T_He = 5000–8200 K), suggesting a larger inflow longitude and lower speed. The O+Ne temperature range, T_O+Ne = 5300–9000 K, is found to be close to the upper range for He and consistent with an isothermal medium for all species within current uncertainties.
Image-based Exploration of Iso-surfaces for Large Multi-Variable Datasets using Parameter Space.
Binyahib, Roba S.
2013-05-13
With an increase in processing power, more complex simulations have resulted in larger data size, with higher resolution and more variables. Many techniques have been developed to help the user to visualize and analyze data from such simulations. However, dealing with a large amount of multivariate data is challenging, time-consuming and often requires high-end clusters. Consequently, novel visualization techniques are needed to explore such data. Many users would like to visually explore their data and change certain visual aspects without the need to use special clusters or having to load a large amount of data. This is the idea behind explorable images (EI). Explorable images are a novel approach that provides limited interactive visualization without the need to re-render from the original data [40]. In this work, the concept of EI has been used to create a workflow that deals with explorable iso-surfaces for scalar fields in a multivariate, time-varying dataset. As a pre-processing step, a set of iso-values for each scalar field is inferred and extracted from a user-assisted sampling technique in time-parameter space. These iso-values are then used to generate iso-surfaces that are then pre-rendered (from a fixed viewpoint) along with additional buffers (i.e. normals, depth, values of other fields, etc.) to provide a compressed representation of iso-surfaces in the dataset. We present a tool that at run-time allows the user to interactively browse and calculate a combination of iso-surfaces superimposed on each other. The result is the same as calculating multiple iso-surfaces from the original data but without the memory and processing overhead. Our tool also allows the user to change the (scalar) values superimposed on each of the surfaces, modify their color map, and interactively re-light the surfaces. We demonstrate the effectiveness of our approach over a multi-terabyte combustion dataset. We also illustrate the efficiency and accuracy of our
Directory of Open Access Journals (Sweden)
G. Baroni
2010-02-01
Data on soil hydraulic properties often forms a limiting factor in unsaturated zone modelling, especially at the larger scales. Investigations for the hydraulic characterization of soils are time-consuming and costly, and the accuracy of the results obtained by the different methodologies is still debated. However, we may wonder how the uncertainty in soil hydraulic parameters relates to the uncertainty of the selected modelling approach. We performed an intensive monitoring study during the cropping season of a 10 ha maize field in Northern Italy. The data were used to: (i) compare different methods for determining soil hydraulic parameters and (ii) evaluate the effect of the uncertainty in these parameters on different variables (i.e. evapotranspiration, average water content in the root zone, flux at the bottom boundary of the root zone) simulated by two hydrological models of different complexity: SWAP, a widely used model of soil moisture dynamics in unsaturated soils based on the Richards equation, and ALHyMUS, a conceptual model of the same dynamics based on a reservoir cascade scheme. We employed five direct and indirect methods to determine soil hydraulic parameters for each horizon of the experimental profile. Two methods were based on parameter optimization of (a) laboratory measured retention and hydraulic conductivity data and (b) field measured retention and hydraulic conductivity data. The remaining three methods were based on the application of widely used pedo-transfer functions: (c) Rawls and Brakensiek, (d) HYPRES, and (e) ROSETTA. Simulations were performed using meteorological, irrigation and crop data measured at the experimental site during the period June–October 2006. Results showed a wide range of soil hydraulic parameter values generated with the different methods, especially for the saturated hydraulic conductivity K_sat and the shape parameter α of the van Genuchten curve. This is reflected in a variability of
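The van Genuchten retention curve whose shape parameter α is mentioned above can be sketched as follows (the parameter values here are illustrative loam-like numbers, not those determined in the study):

```python
import numpy as np

def van_genuchten_theta(h, theta_r, theta_s, alpha, n):
    """Water content theta(h) from the van Genuchten (1980) retention curve.
    h: matric head (negative when unsaturated), alpha: inverse air-entry scale,
    n: shape parameter, with the Mualem constraint m = 1 - 1/n."""
    m = 1.0 - 1.0 / n
    h = np.asarray(h, dtype=float)
    Se = np.where(h < 0, (1.0 + (alpha * np.abs(h)) ** n) ** (-m), 1.0)
    return theta_r + (theta_s - theta_r) * Se

# Illustrative parameters: theta_r=0.05, theta_s=0.43, alpha=0.036 1/cm, n=1.56
theta = van_genuchten_theta([-10.0, -100.0, -1000.0], 0.05, 0.43, 0.036, 1.56)
```

Because θ(h) is highly nonlinear in α and n, the method-to-method spread in those parameters propagates strongly into simulated water contents and fluxes.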
GMC COLLISIONS AS TRIGGERS OF STAR FORMATION. I. PARAMETER SPACE EXPLORATION WITH 2D SIMULATIONS
Energy Technology Data Exchange (ETDEWEB)
Wu, Benjamin [Department of Physics, University of Florida, Gainesville, FL 32611 (United States); Loo, Sven Van [School of Physics and Astronomy, University of Leeds, Leeds LS2 9JT (United Kingdom); Tan, Jonathan C. [Departments of Astronomy and Physics, University of Florida, Gainesville, FL 32611 (United States); Bruderer, Simon, E-mail: benwu@phys.ufl.edu [Max Planck Institute for Extraterrestrial Physics, Giessenbachstrasse 1, D-85748 Garching (Germany)
2015-09-20
We utilize magnetohydrodynamic (MHD) simulations to develop a numerical model for giant molecular cloud (GMC)–GMC collisions between nearly magnetically critical clouds. The goal is to determine if, and under what circumstances, cloud collisions can cause pre-existing magnetically subcritical clumps to become supercritical and undergo gravitational collapse. We first develop and implement new photodissociation region based heating and cooling functions that span the atomic to molecular transition, creating a multiphase ISM and allowing modeling of non-equilibrium temperature structures. Then in 2D and with ideal MHD, we explore a wide parameter space of magnetic field strength, magnetic field geometry, collision velocity, and impact parameter and compare isolated versus colliding clouds. We find factors of ∼2–3 increase in mean clump density from typical collisions, with strong dependence on collision velocity and magnetic field strength, but ultimately limited by flux-freezing in 2D geometries. For geometries enabling flow along magnetic field lines, greater degrees of collapse are seen. We discuss observational diagnostics of cloud collisions, focussing on ¹³CO(J = 2–1), ¹³CO(J = 3–2), and ¹²CO(J = 8–7) integrated intensity maps and spectra, which we synthesize from our simulation outputs. We find that the ratio of J = 8–7 to lower-J emission is a powerful diagnostic probe of GMC collisions.
Post, Hanna; Hendricks Franssen, Harrie-Jan; Han, Xujun; Baatz, Roland; Montzka, Carsten; Schmidt, Marius; Vereecken, Harry
2016-04-01
Reliable estimates of carbon fluxes and states at regional scales are required to reduce uncertainties in regional carbon balance estimates and to support decision making in environmental politics. In this work the Community Land Model version 4.5 (CLM4.5-BGC) was applied at a high spatial resolution (1 km²) for the Rur catchment in western Germany. In order to improve the model-data consistency of net ecosystem exchange (NEE) and leaf area index (LAI) for this study area, five plant functional type (PFT)-specific CLM4.5-BGC parameters were estimated with time series of half-hourly NEE data for one year in 2011/2012, using the DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm, a Markov Chain Monte Carlo (MCMC) approach. The parameters were estimated separately for four different plant functional types (needleleaf evergreen temperate tree, broadleaf deciduous temperate tree, C3-grass and C3-crop) at four different sites. The four sites are located inside or close to the Rur catchment. We evaluated modeled NEE for one year in 2012/2013 with NEE measured at seven eddy covariance sites in the catchment, including the four parameter estimation sites. Modeled LAI was evaluated by means of LAI derived from remotely sensed RapidEye images on about 18 dates in 2011/2012. Performance indices were based on a comparison between measurements and (i) a reference run with CLM default parameters, and (ii) a 60 instance CLM ensemble with parameters sampled from the DREAM posterior probability density functions (pdfs). The difference between the observed and simulated NEE sums was reduced by 23% when estimated parameters instead of default parameters were used as input. The mean absolute difference between modeled and measured LAI was reduced by 59% on average. Simulated LAI was not only improved in terms of the absolute value but in some cases also in terms of the timing (beginning of vegetation onset), which was directly related to a substantial improvement of the NEE estimates in
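DREAM is a multi-chain adaptive MCMC sampler; the underlying Metropolis idea it builds on can be sketched with a single chain and a synthetic one-parameter model (the model, noise level, and values below are illustrative, not CLM4.5):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "observations": a flux-like output depending on one parameter theta
def model(theta, x):
    return theta * x

x = np.linspace(0.0, 1.0, 50)
theta_true = 2.0
obs = model(theta_true, x) + rng.normal(0.0, 0.1, x.size)

def log_likelihood(theta, sigma=0.1):
    resid = obs - model(theta, x)
    return -0.5 * np.sum((resid / sigma) ** 2)

# Random-walk Metropolis: accept proposal with probability min(1, exp(dlogL))
theta, samples = 1.0, []
ll = log_likelihood(theta)
for _ in range(5000):
    prop = theta + rng.normal(0.0, 0.05)
    ll_prop = log_likelihood(prop)
    if np.log(rng.uniform()) < ll_prop - ll:
        theta, ll = prop, ll_prop
    samples.append(theta)

posterior_mean = np.mean(samples[1000:])  # discard burn-in
```

DREAM improves on this by running multiple chains in parallel and generating proposals from differences between chains, which adapts the proposal scale automatically.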
DEFF Research Database (Denmark)
Frutiger, Jerome; Marcarie, Camille; Abildskov, Jens
2016-01-01
A rigorous methodology is developed that addresses numerical and statistical issues when developing group contribution (GC) based property models, such as regression methods, optimization algorithms, performance statistics, outlier treatment, parameter identifiability, and uncertainty of the prediction. The methodology is evaluated through development of a GC method for the prediction of the heat of combustion (ΔHco) for pure components. The results showed that robust regression leads to the best performance statistics for parameter estimation. The bootstrap method is found to be a valid alternative… Due to parameter identifiability issues, reporting of the 95% confidence intervals of the predicted property values should be mandatory, as opposed to reporting only single value predictions, currently the norm in the literature. Moreover, inclusion of higher order groups (additional parameters) does not always lead to improved…
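The bootstrap approach to parameter uncertainty mentioned above can be sketched for a generic linear GC-style regression (the data, coefficients, and dimensions below are synthetic and illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic GC-style linear model: property = X @ beta + noise,
# where X holds group occurrence counts per compound
n_obs, n_groups = 200, 3
X = rng.uniform(0.0, 5.0, (n_obs, n_groups))
beta_true = np.array([10.0, -4.0, 2.5])
y = X @ beta_true + rng.normal(0.0, 1.0, n_obs)

def fit(Xs, ys):
    # Ordinary least squares (a robust regressor could be substituted here)
    return np.linalg.lstsq(Xs, ys, rcond=None)[0]

# Nonparametric bootstrap: resample (X, y) rows with replacement, refit,
# and take percentiles of the refitted parameters as confidence intervals
boot = []
for _ in range(1000):
    idx = rng.integers(0, n_obs, n_obs)
    boot.append(fit(X[idx], y[idx]))
boot = np.array(boot)
ci_lo, ci_hi = np.percentile(boot, [2.5, 97.5], axis=0)
```

The same percentile intervals, propagated through the model, give the 95% confidence intervals on predicted property values that the abstract argues should be reported alongside point predictions.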
Gul, R; Bernhard, S
2015-11-01
In computational cardiovascular models, parameters are one of the major sources of uncertainty, which make the models unreliable and less predictive. In order to achieve predictive models that allow the investigation of cardiovascular diseases, sensitivity analysis (SA) can be used to quantify and reduce the uncertainty in outputs (pressure and flow) caused by input (electrical and structural) model parameters. In the current study, three variance based global sensitivity analysis (GSA) methods were applied on a lumped parameter model of the carotid bifurcation: Sobol, FAST and a sparse grid stochastic collocation technique based on the Smolyak algorithm. Sensitivity analysis was carried out to identify and rank the most sensitive parameters as well as to fix less sensitive parameters at their nominal values (factor fixing). In this context, network location and temporally dependent sensitivities were also discussed to identify optimal measurement locations in the carotid bifurcation and optimal temporal regions for each parameter in the pressure and flow waves, respectively. Results show that, for both pressure and flow, flow resistance (R), diameter (d) and length of the vessel (l) are sensitive within the right common carotid (RCC), right internal carotid (RIC) and right external carotid (REC) arteries, while compliance of the vessels (C) and blood inertia (L) are sensitive only at the RCC. Moreover, Young's modulus (E) and wall thickness (h) exhibit lower sensitivities on pressure and flow at all locations of the carotid bifurcation. Results on network location and temporal variability revealed that most of the sensitivity was found in common time regions, i.e. early systole, peak systole and end systole. Copyright © 2015 Elsevier Inc. All rights reserved.
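A minimal sketch of a variance-based first-order Sobol estimator (a Saltelli-style pick-and-freeze scheme, demonstrated on the standard Ishigami benchmark rather than the carotid model):

```python
import numpy as np

rng = np.random.default_rng(42)

# Ishigami function: a standard GSA test case with known analytic indices
def ishigami(X, a=7.0, b=0.1):
    return (np.sin(X[:, 0]) + a * np.sin(X[:, 1]) ** 2
            + b * X[:, 2] ** 4 * np.sin(X[:, 0]))

N, d = 100_000, 3
A = rng.uniform(-np.pi, np.pi, (N, d))   # two independent sample matrices
B = rng.uniform(-np.pi, np.pi, (N, d))
fA, fB = ishigami(A), ishigami(B)
var = np.var(np.concatenate([fA, fB]))

# First-order index S_i: replace column i of A by that of B (pick-and-freeze)
S = []
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]
    S.append(np.mean(fB * (ishigami(ABi) - fA)) / var)
```

For the Ishigami function the analytic first-order indices are roughly S1 ≈ 0.31, S2 ≈ 0.44, S3 = 0, so the estimator's ranking (x2 most influential, x3 non-influential) can be checked directly.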
Struck, C.; Wilde, de P.J.C.J.; Hopfe, C.J.; Hensen, J.L.M.
2008-01-01
This paper describes research conducted to gather empirical evidence on extent, character and content of the option space in building design projects, from the perspective of a climate engineer using building performance simulation for concept evaluation. The goal is to support uncertainty analysis
Bothe, Jameson R.; Stein, Zachary W.; Al-Hashimi, Hashim M.
2014-01-01
Spin relaxation in the rotating frame (R1ρ) is a powerful NMR technique for characterizing fast microsecond timescale exchange processes directed toward short-lived excited states in biomolecules. At the limit of fast exchange, only kex = k1 + k−1 and Φex = pGpE(Δω)2 can be determined from R1ρ data, limiting the ability to characterize the structure and energetics of the excited state conformation. Here, we use simulations to examine the uncertainty with which exchange parameters can be determined for two state systems in intermediate-to-fast exchange using off-resonance R1ρ relaxation dispersion. R1ρ data computed by solving the Bloch-McConnell equations reveals small but significant asymmetry with respect to offset (R1ρ(ΔΩ) ≠ R1ρ(−ΔΩ)), which is a hallmark of slow-to-intermediate exchange, even under conditions of fast exchange for free precession chemical exchange line broadening (kex/Δω > 10). A grid search analysis combined with bootstrap and Monte-Carlo based statistical approaches for estimating uncertainty in exchange parameters reveals that both the sign and magnitude of Δω can be determined at a useful level of uncertainty for systems in fast exchange (kex/Δω exchange parameters. Results from simulations are complemented by analysis of experimental R1ρ data measured in three nucleic acid systems with exchange processes occurring on the slow (kex/Δω = 0.2; pE = ~ 0.7%), fast (kex/Δω = ~10–16; pE = ~13%) and very fast (kex = 39,000 s−1) chemical shift timescales. PMID:24819426
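In the fast-exchange limit discussed above, R1ρ reduces to a closed-form expression that is symmetric in the offset, which is precisely why the offset asymmetry R1ρ(ΔΩ) ≠ R1ρ(−ΔΩ) carries extra information outside that limit. A sketch of the fast-limit expression (all parameter values below are illustrative, not from the paper):

```python
import numpy as np

def r1rho_fast_exchange(offset, omega1, phi_ex, kex, R1=2.0, R2=10.0):
    """Fast-exchange-limit off-resonance R1rho (all frequencies in rad/s).
    offset: resonance offset from the population-average shift;
    omega1: spin-lock field strength. Note the expression depends on
    offset**2 only, so it is symmetric in +/- offset and cannot reproduce
    the asymmetry that Bloch-McConnell simulations reveal."""
    omega_eff2 = offset**2 + omega1**2
    sin2_theta = omega1**2 / omega_eff2       # tilt angle of the effective field
    cos2_theta = offset**2 / omega_eff2
    rex = phi_ex * kex / (kex**2 + omega_eff2)  # exchange contribution
    return R1 * cos2_theta + (R2 + rex) * sin2_theta
```

The dispersion flattens as the spin-lock power grows (ω_eff² increases), which is the usual relaxation-dispersion signature used to extract Φex and kex.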
Chartier, Thomas; Scotti, Oona; Clément, Christophe; Jomard, Hervé; Baize, Stéphane
2017-09-01
We perform a fault-based probabilistic seismic hazard assessment (PSHA) exercise in the Upper Rhine Graben to quantify the relative influence of fault parameters on the hazard at the Fessenheim nuclear power plant site. Specifically, we show that the potentially active faults described in the companion paper (Jomard et al., 2017, hereafter Part 1) are the dominant factor in hazard estimates at the low annual probability of exceedance relevant for the safety assessment of nuclear installations. Geological information documenting the activity of the faults in this region, however, remains sparse, controversial and affected by a high degree of uncertainty. A logic tree approach is thus implemented to explore the epistemic uncertainty and quantify its impact on the seismic hazard estimates. Disaggregation of the peak ground acceleration (PGA) hazard at a 10 000-year return period shows that the Rhine River fault is the main seismic source controlling the hazard level at the site. Sensitivity tests show that the uncertainty on the slip rate of the Rhine River fault is the dominant factor controlling the variability of the seismic hazard level, greater than the epistemic uncertainty due to ground motion prediction equations (GMPEs). Uncertainty on slip rate estimates from 0.04 to 0.1 mm yr-1 results in a 40 to 50 % increase in hazard levels at the 10 000-year target return period. Reducing epistemic uncertainty in future fault-based PSHA studies at this site will thus require (1) performing in-depth field studies to better characterize the seismic potential of the Rhine River fault; (2) complementing GMPEs with more physics-based modelling approaches to better account for the near-field effects of ground motion and (3) improving the modelling of the background seismicity. Indeed, in this exercise, we assume that the background can only host M < 6.0 earthquakes. Faults potentially capable of hosting M ≥ 6.0 earthquakes have been recently identified at depth within the Upper Rhine Graben (see Part 1) but are not accounted
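The strong sensitivity of hazard to the 0.04–0.1 mm yr−1 slip-rate range reflects the direct link between slip rate and moment recurrence. A back-of-envelope sketch, assuming all seismic moment is released in characteristic events (the fault geometry below is hypothetical, not the Rhine River fault's actual dimensions):

```python
# Link between fault slip rate and characteristic-earthquake recurrence.
MU = 3.0e10            # crustal shear modulus, Pa (typical assumed value)
AREA = 40e3 * 15e3     # hypothetical rupture area: 40 km x 15 km, in m^2

def moment_from_magnitude(mw):
    # Hanks & Kanamori (1979): M0 [N m] = 10**(1.5*Mw + 9.05)
    return 10 ** (1.5 * mw + 9.05)

def recurrence_years(slip_rate_mm_yr, mw):
    # Annual moment accumulation rate: mu * area * slip rate
    moment_rate = MU * AREA * slip_rate_mm_yr * 1e-3   # N m per year
    return moment_from_magnitude(mw) / moment_rate

# Recurrence time scales inversely with slip rate:
t_slow = recurrence_years(0.04, 6.5)
t_fast = recurrence_years(0.08, 6.5)
```

Halving the assumed slip rate doubles the recurrence time of the characteristic event, which is why the slip-rate branch of the logic tree dominates the hazard variability at long return periods.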
Energy Technology Data Exchange (ETDEWEB)
Plesko, Catherine S [Los Alamos National Laboratory; Clement, R Ryan [Los Alamos National Laboratory; Weaver, Robert P [Los Alamos National Laboratory; Bradley, Paul A [Los Alamos National Laboratory; Huebner, Walter F [Los Alamos National Laboratory
2009-01-01
The mitigation of impact hazards resulting from Earth-approaching asteroids and comets has received much attention in the popular press. However, many questions remain about the near-term and long-term, feasibility and appropriate application of all proposed methods. Recent and ongoing ground- and space-based observations of small solar-system body composition and dynamics have revolutionized our understanding of these bodies (e.g., Ryan (2000), Fujiwara et al. (2006), and Jedicke et al. (2006)). Ongoing increases in computing power and algorithm sophistication make it possible to calculate the response of these inhomogeneous objects to proposed mitigation techniques. Here we present the first phase of a comprehensive hazard mitigation planning effort undertaken by Southwest Research Institute and Los Alamos National Laboratory. We begin by reviewing the parameter space of the object's physical and chemical composition and trajectory. We then use the radiation hydrocode RAGE (Gittings et al. 2008), Monte Carlo N-Particle (MCNP) radiation transport (see Clement et al., this conference), and N-body dynamics codes to explore the effects these variations in object properties have on the coupling of energy into the object from a variety of mitigation techniques, including deflection and disruption by nuclear and conventional munitions, and a kinetic impactor.
Xie, Q.; Lu, S.; Costola, D.; Hensen, J.L.M.
2014-01-01
In performance-based fire protection design of buildings, much attention is paid to design parameters by fire engineers or experts. However, due to the time-consuming evacuation models, it is computationally prohibitive to adopt the conventional Monte Carlo simulation (MCS) to examine the effect of
Dangelmayr, Martin A; Reimus, Paul W; Johnson, Raymond H; Clay, James T; Stone, James J
2018-06-01
This research assesses the ability of a GC SCM to simulate uranium transport under variable geochemical conditions typically encountered at uranium in-situ recovery (ISR) sites. Sediment was taken from a monitoring well at the Smith Ranch-Highland (SRH) site at depths 192 and 193 m below ground and characterized by XRD, XRF, TOC, and BET. Duplicate column studies on the different sediment depths were flushed with synthesized restoration waters at two different alkalinities (160 mg/l CaCO3 and 360 mg/l CaCO3) to study the effect of alkalinity on uranium mobility. Uranium breakthrough occurred 25% - 30% earlier in columns with 360 mg/l CaCO3 over columns fed with 160 mg/l CaCO3 influent water. A parameter estimation program (PEST) was coupled to PHREEQC to derive site densities from experimental data. Significant parameter fittings were produced for all models, demonstrating that the GC SCM approach can model the impact of carbonate on uranium in flow systems. Derived site densities for the two sediment depths were between 141 and 178 μmol-sites/kg-soil, demonstrating similar sorption capacities despite heterogeneity in sediment mineralogy. Model sensitivity to alkalinity and pH was shown to be moderate compared to fitted site densities, when calcite saturation was allowed to equilibrate. Calcite kinetics emerged as a potential source of error when fitting parameters in flow conditions. Fitted results were compared to data from previous batch and column studies completed on sediments from the SRH site, to assess variability in derived parameters. Parameters from batch experiments were lower by a factor of 1.1 to 3.4 compared to column studies completed on the same sediments. The difference was attributed to errors in solid-solution ratios and the impact of calcite dissolution in batch experiments. Column studies conducted at two different laboratories showed almost an order of magnitude difference in fitted site densities suggesting that experimental
Energy Technology Data Exchange (ETDEWEB)
Griffin, Joshua D. (Sandia National Laboratories, Livermore, CA); Eldred, Michael Scott; Martinez-Canales, Monica L. (Sandia National Laboratories, Livermore, CA); Watson, Jean-Paul; Kolda, Tamara Gibson (Sandia National Laboratories, Livermore, CA); Giunta, Anthony Andrew; Adams, Brian M.; Swiler, Laura Painton; Williams, Pamela J. (Sandia National Laboratories, Livermore, CA); Hough, Patricia Diane (Sandia National Laboratories, Livermore, CA); Gay, David M.; Dunlavy, Daniel M.; Eddy, John P.; Hart, William Eugene; Brown, Shannon L.
2006-10-01
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a developers manual for the DAKOTA software and describes the DAKOTA class hierarchies and their interrelationships. It derives directly from annotation of the actual source code and provides detailed class documentation, including all member functions and attributes.
Energy Technology Data Exchange (ETDEWEB)
Eldred, Michael Scott; Dalbey, Keith R.; Bohnhoff, William J.; Adams, Brian M.; Swiler, Laura Painton; Hough, Patricia Diane (Sandia National Laboratories, Livermore, CA); Gay, David M.; Eddy, John P.; Haskell, Karen H.
2010-05-01
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a developers manual for the DAKOTA software and describes the DAKOTA class hierarchies and their interrelationships. It derives directly from annotation of the actual source code and provides detailed class documentation, including all member functions and attributes.
Energy Technology Data Exchange (ETDEWEB)
Griffin, Joshua D. (Sandia National Labs, Livermore, CA); Eldred, Michael Scott; Martinez-Canales, Monica L. (Sandia National Labs, Livermore, CA); Watson, Jean-Paul; Kolda, Tamara Gibson (Sandia National Labs, Livermore, CA); Giunta, Anthony Andrew; Adams, Brian M.; Swiler, Laura Painton; Williams, Pamela J. (Sandia National Labs, Livermore, CA); Hough, Patricia Diane (Sandia National Labs, Livermore, CA); Gay, David M.; Dunlavy, Daniel M.; Eddy, John P.; Hart, William Eugene; Brown, Shannon L.
2006-10-01
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the DAKOTA software and provides capability overviews and procedures for software execution, as well as a variety of example studies.
Energy Technology Data Exchange (ETDEWEB)
Eldred, Michael Scott; Dalbey, Keith R.; Bohnhoff, William J.; Adams, Brian M.; Swiler, Laura Painton; Hough, Patricia Diane (Sandia National Laboratories, Livermore, CA); Gay, David M.; Eddy, John P.; Haskell, Karen H.
2010-05-01
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the DAKOTA software and provides capability overviews and procedures for software execution, as well as a variety of example studies.
An International Workshop on Uncertainty, Sensitivity, and Parameter Estimation for Multimedia Environmental Modeling was held August 19-21, 2003, at the U.S. Nuclear Regulatory Commission Headquarters in Rockville, Maryland, USA. The workshop was organized and convened by the Fe...
International Nuclear Information System (INIS)
Clifton, P.M.
1985-03-01
This study examines the sensitivity of the travel time distribution predicted by a reference case model to (1) scale of representation of the model parameters, (2) size of the model domain, (3) correlation range of log-transmissivity, and (4) cross correlations between transmissivity and effective thickness. The basis for the reference model is the preliminary stochastic travel time model previously documented by the Basalt Waste Isolation Project. Results of this study show the following. The variability of the predicted travel times can be adequately represented when the ratio between the size of the zones used to represent the model parameters and the log-transmissivity correlation range is less than about one-fifth. The size of the model domain and the types of boundary conditions can have a strong impact on the distribution of travel times. Longer log-transmissivity correlation ranges cause larger variability in the predicted travel times. Positive cross correlation between transmissivity and effective thickness causes a decrease in the travel time variability. These results demonstrate the need for a sound conceptual model prior to conducting a stochastic travel time analysis
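The sensitivity to correlation range reported above can be illustrated with a hedged toy calculation: a 1-D Gaussian random field with exponential covariance stands in for the full stochastic flow model, and travel time is taken as the sum of per-cell resistances. All numbers are illustrative, not Basalt Waste Isolation Project values.

```python
import numpy as np

# Hedged illustration: longer correlation ranges in log-transmissivity
# produce larger variability in predicted travel times. A 1-D field
# stands in for the full 2-D stochastic model.

def travel_time_std(corr_range, n_cells=50, n_real=400, seed=1):
    rng = np.random.default_rng(seed)
    x = np.arange(n_cells)
    # Exponential covariance of the log-transmissivity field
    cov = np.exp(-np.abs(x[:, None] - x[None, :]) / corr_range)
    L = np.linalg.cholesky(cov + 1e-10 * np.eye(n_cells))
    g = L @ rng.standard_normal((n_cells, n_real))   # correlated Gaussians
    # Travel time across the column ~ sum of (1 / transmissivity) per cell
    times = np.sum(np.exp(-0.5 * g), axis=0)
    return float(np.std(times))

short_range = travel_time_std(corr_range=2.0)
long_range = travel_time_std(corr_range=20.0)
print(short_range < long_range)
```

Positively correlated cells add variance rather than averaging it out, which is why the longer correlation range yields the wider travel-time distribution, consistent with the study's finding.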
Boada, Beatriz L.; Boada, Maria Jesus L.; Vargas-Melendez, Leandro; Diaz, Vicente
2018-01-01
Nowadays, one of the main objectives in road transport is to decrease the number of accident victims. Rollover accidents cause nearly 33% of all deaths from passenger vehicle crashes. Roll Stability Control (RSC) systems prevent untripped rollover accidents. The lateral load transfer is the main parameter taken into account in RSC systems. This parameter is related to the roll angle, which can be directly measured with a dual-antenna GPS; nevertheless, this is a costly technique. For this reason, the roll angle has to be estimated. In this paper, a novel observer based on H∞ filtering in combination with a neural network (NN) for vehicle roll angle estimation is proposed. The design of this observer is based on four main criteria: to use a simplified vehicle model, to use signals from sensors already installed onboard current vehicles, to consider the inaccuracy in the system model, and to attenuate the effect of external disturbances. Experimental results show the effectiveness of the proposed observer.
Munilla, S; Cantet, R J C
2012-06-01
Consider the estimation of genetic (co)variance components from a maternal animal model (MAM) using a conjugate Bayesian approach. Usually, more uncertainty is expected a priori on the value of the maternal additive variance than on the value of the direct additive variance. However, it is not possible to model such differential uncertainty when assuming an inverted Wishart (IW) distribution for the genetic covariance matrix. Instead, consider the use of a generalized inverted Wishart (GIW) distribution. The GIW is essentially an extension of the IW distribution with a larger set of distinct parameters. In this study, the GIW distribution in its full generality is introduced and theoretical results regarding its use as the prior distribution for the genetic covariance matrix of the MAM are derived. In particular, we prove that the conditional conjugacy property holds so that parameter estimation can be accomplished via the Gibbs sampler. A sampling algorithm is also sketched. Furthermore, we describe how to specify the hyperparameters to account for differential prior opinion on the (co)variance components. A recursive strategy to elicit these parameters is then presented and tested using field records and simulated data. The procedure returned accurate estimates and reduced standard errors when compared with non-informative prior settings while improving the convergence rates. In general, faster convergence was always observed when a stronger weight was placed on the prior distributions. However, analyses based on the IW distribution have also produced biased estimates when the prior means were set to over-dispersed values.
Directory of Open Access Journals (Sweden)
J. van Huissteden
2011-10-01
Marine Isotope Stage 3 (MIS 3) interstadials are marked by a sharp increase in the atmospheric methane (CH4) concentration, as recorded in ice cores. Wetlands are assumed to be the major source of this CH4, although several other hypotheses have been advanced. Modelling of CH4 emissions is crucial to quantify CH4 sources for past climates. Vegetation effects are generally highly generalized in modelling past and present-day CH4 fluxes, but should not be neglected. Plants strongly affect the soil-atmosphere exchange of CH4, and the net primary production of the vegetation supplies organic matter as substrate for methanogens. For modelling past CH4 fluxes from northern wetlands, assumptions on vegetation are highly relevant since paleobotanical data indicate large differences in Last Glacial (LG) wetland vegetation composition as compared to modern wetland vegetation. Besides more cold-adapted vegetation, Sphagnum mosses appear to be much less dominant during large parts of the LG than at present, which particularly affects CH4 oxidation and transport. To evaluate the effect of vegetation parameters, we used the PEATLAND-VU wetland CO2/CH4 model to simulate emissions from wetlands in continental Europe during LG and modern climates. We tested the effect of parameters influencing oxidation during plant transport (fox), vegetation net primary production (NPP, parameter symbol Pmax), plant transport rate (Vtransp), maximum rooting depth (Zroot) and root exudation rate (fex). Our model results show that modelled CH4 fluxes are sensitive to fox and Zroot in particular. The effects of Pmax, Vtransp and fex are of lesser relevance. Interactions with water table modelling are significant for Vtransp. We conducted experiments with different wetland vegetation types for Marine Isotope Stage 3 (MIS 3) stadial and interstadial climates and the present-day climate, by coupling PEATLAND-VU to high resolution climate model simulations for Europe. Experiments assuming
Berrittella, C.; van Huissteden, J.
2011-10-01
Marine Isotope Stage 3 (MIS 3) interstadials are marked by a sharp increase in the atmospheric methane (CH4) concentration, as recorded in ice cores. Wetlands are assumed to be the major source of this CH4, although several other hypotheses have been advanced. Modelling of CH4 emissions is crucial to quantify CH4 sources for past climates. Vegetation effects are generally highly generalized in modelling past and present-day CH4 fluxes, but should not be neglected. Plants strongly affect the soil-atmosphere exchange of CH4 and the net primary production of the vegetation supplies organic matter as substrate for methanogens. For modelling past CH4 fluxes from northern wetlands, assumptions on vegetation are highly relevant since paleobotanical data indicate large differences in Last Glacial (LG) wetland vegetation composition as compared to modern wetland vegetation. Besides more cold-adapted vegetation, Sphagnum mosses appear to be much less dominant during large parts of the LG than at present, which particularly affects CH4 oxidation and transport. To evaluate the effect of vegetation parameters, we used the PEATLAND-VU wetland CO2/CH4 model to simulate emissions from wetlands in continental Europe during LG and modern climates. We tested the effect of parameters influencing oxidation during plant transport (fox), vegetation net primary production (NPP, parameter symbol Pmax), plant transport rate (Vtransp), maximum rooting depth (Zroot) and root exudation rate (fex). Our model results show that modelled CH4 fluxes are sensitive to fox and Zroot in particular. The effects of Pmax, Vtransp and fex are of lesser relevance. Interactions with water table modelling are significant for Vtransp. We conducted experiments with different wetland vegetation types for Marine Isotope Stage 3 (MIS 3) stadial and interstadial climates and the present-day climate, by coupling PEATLAND-VU to high resolution climate model simulations for Europe. Experiments assuming dominance of
International Nuclear Information System (INIS)
Clifton, P.M.
1984-12-01
The deep basalt formations beneath the Hanford Site are being investigated for the Department of Energy (DOE) to assess their suitability as a host medium for a high level nuclear waste repository. Predicted performance of the proposed repository is an important part of the investigation. One of the performance measures being used to gauge the suitability of the host medium is pre-waste-emplacement groundwater travel times to the accessible environment. Many deterministic analyses of groundwater travel times have been completed by Rockwell and other independent organizations. Recently, Rockwell has completed a preliminary stochastic analysis of groundwater travel times. This document presents analyses that show the sensitivity of the results from the previous stochastic travel time study to: (1) scale of representation of model parameters, (2) size of the model domain, (3) correlation range of log-transmissivity, and (4) cross-correlation between transmissivity and effective thickness. 40 refs., 29 figs., 6 tabs
Essays on model uncertainty in financial models
Li, Jing
2018-01-01
This dissertation studies model uncertainty, particularly in financial models. It consists of two empirical chapters and one theoretical chapter. The first empirical chapter (Chapter 2) classifies model uncertainty into parameter uncertainty and misspecification uncertainty. It investigates the
Han, J.; Zhou, S.
2017-12-01
Asia, located at the junction of the Eurasian, Pacific, and Indo-Australian plates, is the continent with the highest seismicity. An earthquake catalogue based on modern seismic network recordings has been established in Asia since around 1970; the catalogue before 1970 is much less accurate because few stations existed. With a history of less than 50 years of modern earthquake cataloguing, research in seismology is quite limited. With the appearance of improved Earth velocity structure models, modified locating methods and high-accuracy Optical Character Recognition techniques, travel time data of earthquakes from 1900 to 1970 can be included in research and more accurate locations can be determined for historical earthquakes. Hence, parameters of these historical earthquakes can be obtained more precisely, and research methods such as the ETAS model can be applied over a much longer time scale. This work focuses on the following three aspects: (1) relocating more than 300 historical major earthquakes (M≥7.0) in Asia based on the Shide Circulars, International Seismological Summary and EHB Bulletin instrumental records between 1900 and 1970; (2) calculating the focal mechanisms of more than 50 events from first-motion records of P waves in the ISS; (3) inferring focal mechanisms of historical major earthquakes based on geological data, the tectonic stress field and the relocation results.
Mohanty, B.; Jena, S.; Panda, R. K.
2016-12-01
The overexploitation of groundwater has resulted in the abandonment of several shallow tube wells in the study basin in Eastern India. For the sustainability of groundwater resources, basin-scale modelling of groundwater flow is indispensable for effective planning and management of the water resources. The basic intent of this study is to develop a 3-D groundwater flow model of the study basin using the Visual MODFLOW Flex 2014.2 package and to calibrate and validate the model using 17 years of observed data. A sensitivity analysis was carried out to quantify the susceptibility of the aquifer system to river bank seepage, recharge from rainfall and agricultural practices, horizontal and vertical hydraulic conductivities, and specific yield. To quantify the impact of parameter uncertainties, the Sequential Uncertainty Fitting Algorithm (SUFI-2) and Markov chain Monte Carlo (McMC) techniques were implemented. Results from the two techniques were compared and their advantages and disadvantages analysed. The Nash-Sutcliffe coefficient (NSE), Coefficient of Determination (R2), Mean Absolute Error (MAE), Mean Percent Deviation (Dv) and Root Mean Squared Error (RMSE) were adopted as criteria of model evaluation during calibration and validation of the developed model. NSE, R2, MAE, Dv and RMSE values for the groundwater flow model during calibration and validation were in the acceptable range. Also, the McMC technique was able to provide more reasonable results than SUFI-2. The calibrated and validated model will be useful for identifying aquifer properties, analysing groundwater flow dynamics and forecasting future changes in groundwater levels.
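The evaluation criteria named above have standard definitions in hydrology; a minimal sketch follows. The exact formulas used in the study may differ slightly (Dv in particular is sometimes defined with the opposite sign), and the observation values below are invented for illustration.

```python
import numpy as np

# Common hydrologic goodness-of-fit measures: NSE, RMSE, MAE, and
# mean percent deviation (Dv). Perfect agreement gives NSE = 1 and
# RMSE = MAE = Dv = 0.

def nse(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def rmse(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(np.sqrt(np.mean((obs - sim) ** 2)))

def mae(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(np.mean(np.abs(obs - sim)))

def dv(obs, sim):
    # Mean percent deviation (overall volume bias)
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(100.0 * np.sum(obs - sim) / np.sum(obs))

obs = [10.2, 11.5, 9.8, 12.0, 10.9]   # e.g. observed groundwater levels (m)
sim = [10.0, 11.8, 9.5, 12.1, 11.0]   # simulated levels

print(round(nse(obs, sim), 3), round(rmse(obs, sim), 3))
```

NSE near 1 and small RMSE/MAE/Dv are what "acceptable range" typically means when these criteria are reported together.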
Exploring Parameter Tuning for Analysis and Optimization of a Computational Model
Mollee, J.S.; Fernandes de Mello Araujo, E.; Klein, M.C.A.
2017-01-01
Computational models of human processes are used for many different purposes and in many different types of applications. A common challenge in using such models is to find suitable parameter values. In many cases, the ideal parameter values are those that yield the most realistic simulation
Risk, unexpected uncertainty, and estimation uncertainty: Bayesian learning in unstable settings.
Directory of Open Access Journals (Sweden)
Elise Payzan-LeNestour
Recently, evidence has emerged that humans approach learning using Bayesian updating rather than (model-free) reinforcement algorithms in a six-arm restless bandit problem. Here, we investigate what this implies for the human appreciation of uncertainty. In our task, a Bayesian learner distinguishes three equally salient levels of uncertainty. First, the Bayesian perceives irreducible uncertainty or risk: even knowing the payoff probabilities of a given arm, the outcome remains uncertain. Second, there is (parameter) estimation uncertainty or ambiguity: payoff probabilities are unknown and need to be estimated. Third, the outcome probabilities of the arms change: the sudden jumps are referred to as unexpected uncertainty. We document how the three levels of uncertainty evolved during the course of our experiment and how they affected the learning rate. We then zoom in on estimation uncertainty, which has been suggested to be a driving force in exploration, in spite of evidence of widespread aversion to ambiguity. Our data corroborate the latter. We discuss neural evidence that foreshadowed the ability of humans to distinguish between the three levels of uncertainty. Finally, we investigate the boundaries of the human capacity to implement Bayesian learning. We repeat the experiment with different instructions, reflecting varying levels of structural uncertainty. Under this fourth notion of uncertainty, choices were no better explained by Bayesian updating than by (model-free) reinforcement learning. Exit questionnaires revealed that participants remained unaware of the presence of unexpected uncertainty and failed to acquire the right model with which to implement Bayesian updating.
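The distinction drawn above between risk and estimation uncertainty can be made concrete with the simplest Bayesian bandit update, the Beta-Bernoulli model for a single arm. The actual task is richer (six arms, jumping payoff probabilities); this sketch shows only the core conjugate update and how posterior variance, the estimation uncertainty, shrinks with observations.

```python
# Beta-Bernoulli updating for one bandit arm. The payoff probability
# has a Beta(alpha, beta) posterior; its variance is the "estimation
# uncertainty" (ambiguity), while the Bernoulli outcome variance given
# a known probability is the irreducible "risk".

def update(alpha, beta, reward):
    # Conjugate update: a success increments alpha, a failure beta
    return (alpha + 1, beta) if reward else (alpha, beta + 1)

def posterior_mean(alpha, beta):
    return alpha / (alpha + beta)

def estimation_uncertainty(alpha, beta):
    # Posterior variance of the payoff probability
    n = alpha + beta
    return alpha * beta / (n * n * (n + 1))

a, b = 1, 1                      # uniform prior over the payoff probability
for r in [1, 1, 0, 1, 1, 1]:     # a short run of observed rewards
    a, b = update(a, b, r)

print(posterior_mean(a, b))      # 0.75 after 5 successes, 1 failure
```

Modeling the "unexpected uncertainty" of sudden jumps would require adding a change-point or leaky-integration component on top of this update, which is where the Bayesian and model-free accounts begin to diverge.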
International Nuclear Information System (INIS)
Song Xiao-Na; Song Shuai; Liu Lei-Po; Tejado Balsera, Inés
2017-01-01
This paper investigates the mixed H ∞ and passive projective synchronization problem for fractional-order (FO) memristor-based neural networks. Our aim is to design a controller such that, though the unavoidable phenomena of time-delay and parameter uncertainty are fully considered, the resulting closed-loop system is asymptotically stable with a mixed H ∞ and passive performance level. By combining active and adaptive control methods, a novel hybrid control strategy is designed, which can guarantee the robust stability of the closed-loop system and also ensure a mixed H ∞ and passive performance level. Via the application of FO Lyapunov stability theory, the projective synchronization conditions are addressed in terms of linear matrix inequality techniques. Finally, two simulation examples are given to illustrate the effectiveness of the proposed method. (paper)
DEFF Research Database (Denmark)
Gaspar, Jozsef; Ricardez-Sandoval, Luis; Jørgensen, John Bagterp
2017-01-01
of the plant. Flexibility is particularly crucial from an economic and operational point of view since plants must balance the power production and the electricity demand on a daily basis. This work shows the impact of design decisions and uncertainties on the dynamic operation and economics of a CO2 capture plant using piperazine (PZ), compared to the benchmark MEA solvent. This is exemplified through dynamic model calculations. The results show that the capacity of the buffer tank is a key parameter for the flexibility of the plant. A small tank corresponds to lower capital cost but it leads to increased operation cost and also to flexibility/controllability issues. Both the PZ and MEA plants present inverse response for small tanks. These plants are challenging to control.
Li, Yue; Yang, Hui; Wang, Tao; MacBean, Natasha; Bacour, Cédric; Ciais, Philippe; Zhang, Yiping; Zhou, Guangsheng; Piao, Shilong
2017-08-01
Reducing parameter uncertainty of process-based terrestrial ecosystem models (TEMs) is one of the primary targets for accurately estimating carbon budgets and predicting ecosystem responses to climate change. However, parameters in TEMs are rarely constrained by observations from Chinese forest ecosystems, which are important carbon sink over the northern hemispheric land. In this study, eddy covariance data from six forest sites in China are used to optimize parameters of the ORganizing Carbon and Hydrology In Dynamics EcosystEms TEM. The model-data assimilation through parameter optimization largely reduces the prior model errors and improves the simulated seasonal cycle and summer diurnal cycle of net ecosystem exchange, latent heat fluxes, and gross primary production and ecosystem respiration. Climate change experiments based on the optimized model are deployed to indicate that forest net primary production (NPP) is suppressed in response to warming in the southern China but stimulated in the northeastern China. Altered precipitation has an asymmetric impact on forest NPP at sites in water-limited regions, with the optimization-induced reduction in response of NPP to precipitation decline being as large as 61% at a deciduous broadleaf forest site. We find that seasonal optimization alters forest carbon cycle responses to environmental change, with the parameter optimization consistently reducing the simulated positive response of heterotrophic respiration to warming. Evaluations from independent observations suggest that improving model structure still matters most for long-term carbon stock and its changes, in particular, nutrient- and age-related changes of photosynthetic rates, carbon allocation, and tree mortality.
Uncertainty Analyses and Strategy
International Nuclear Information System (INIS)
Kevin Coppersmith
2001-01-01
The DOE identified a variety of uncertainties, arising from different sources, during its assessment of the performance of a potential geologic repository at the Yucca Mountain site. In general, the number and detail of process models developed for the Yucca Mountain site, and the complex coupling among those models, make the direct incorporation of all uncertainties difficult. The DOE has addressed these issues in a number of ways using an approach to uncertainties that is focused on producing a defensible evaluation of the performance of a potential repository. The treatment of uncertainties oriented toward defensible assessments has led to analyses and models with so-called "conservative" assumptions and parameter bounds, where conservative implies lower performance than might be demonstrated with a more realistic representation. The varying maturity of the analyses and models, and uneven level of data availability, result in total system level analyses with a mix of realistic and conservative estimates (for both probabilistic representations and single values). That is, some inputs have realistically represented uncertainties, and others are conservatively estimated or bounded. However, this approach is consistent with the "reasonable assurance" approach to compliance demonstration, which was called for in the U.S. Nuclear Regulatory Commission's (NRC) proposed 10 CFR Part 63 regulation (64 FR 8640 [DIRS 101680]). A risk analysis that includes conservatism in the inputs will result in conservative risk estimates. Therefore, the approach taken for the Total System Performance Assessment for the Site Recommendation (TSPA-SR) provides a reasonable representation of processes and conservatism for purposes of site recommendation. However, mixing unknown degrees of conservatism in models and parameter representations reduces the transparency of the analysis and makes the development of coherent and consistent probability statements about projected repository
Full Two-Body Problem Mass Parameter Observability Explored Through Doubly Synchronous Systems
Davis, Alex Benjamin; Scheeres, Daniel
2018-04-01
The full two-body problem (F2BP) is often used to model binary asteroid systems, representing the bodies as two finite mass distributions whose dynamics are influenced by their mutual gravity potential. The emergent behavior of the F2BP is highly coupled translational and rotational mutual motion of the mass distributions. For these systems the doubly synchronous equilibrium occurs when both bodies are tidally-locked and in a circular co-orbit. Stable oscillations about this equilibrium can be shown, for the nonplanar system, to be combinations of seven fundamental frequencies of the system and the mutual orbit rate. The fundamental frequencies arise as the linear periods of center manifolds identified about the equilibrium which are heavily influenced by each body’s mass parameters. We leverage these eight dynamical constraints to investigate the observability of binary asteroid mass parameters via dynamical observations. This is accomplished by proving the nonsingularity of the relationship between the frequencies and mass parameters for doubly synchronous systems. Thus we can invert the relationship to show that given observations of the frequencies, we can solve for the mass parameters of a target system. In so doing we are able to predict the estimation covariance of the mass parameters based on observation quality and define necessary observation accuracies for desired mass parameter certainties. We apply these tools to 617 Patroclus, a doubly synchronous Trojan binary and flyby target of the LUCY mission, as well as the Pluto and Charon system in order to predict mutual behaviors of these doubly synchronous systems and to provide observational requirements for these systems’ mass parameters
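The covariance-prediction step described above follows from standard linear estimation theory: if the observed frequencies relate to the mass parameters through a nonsingular Jacobian, the parameter covariance is obtained by propagating the frequency measurement covariance through the inverted relationship. The sketch below uses a made-up 2x2 Jacobian and noise level, not values for Patroclus or Pluto-Charon.

```python
import numpy as np

# Linear propagation of observation uncertainty into parameter
# uncertainty. J = d(frequencies)/d(mass parameters) is a hypothetical
# sensitivity matrix; sigma_f is an assumed frequency measurement noise.

J = np.array([[1.0, 0.4],
              [0.2, 1.5]])          # placeholder Jacobian
sigma_f = 1e-3
W = np.eye(2) / sigma_f**2          # inverse measurement covariance

# Nonsingularity of J is exactly the observability condition: it lets
# us invert frequencies -> mass parameters.
assert abs(np.linalg.det(J)) > 1e-12

P = np.linalg.inv(J.T @ W @ J)      # predicted parameter covariance

print(np.sqrt(np.diag(P)))          # 1-sigma mass-parameter uncertainties
```

Running this forward for a range of assumed observation accuracies is how one derives the "necessary observation accuracies for desired mass parameter certainties" mentioned in the abstract.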
International Nuclear Information System (INIS)
Ji Hongbin; Wu Liwu; Cao Zhen
2012-01-01
A hypothesis of regarding an injection test as an 'anti-pumping' test is presented, and the pumping test 'match line method' is used to process injection test data. Accurate hydrogeologic parameters can be obtained by injection tests in sandstone uranium deposits with low permeability and small pumping volumes. Taking an injection test in a uranium deposit in Xinjiang as an example, the hydrogeologic parameters of the main ore-bearing aquifer were calculated using the 'anti-pumping' hypothesis. Results calculated by the 'anti-pumping' hypothesis were compared with results calculated by the water level recovery method. The results show that it is feasible to use the 'anti-pumping' hypothesis to calculate the hydrogeologic parameters of the main ore-bearing aquifer. (authors)
Welter, David E.; White, Jeremy T.; Hunt, Randall J.; Doherty, John E.
2015-09-18
The PEST++ Version 1 object-oriented parameter estimation code is here extended to Version 3 to incorporate additional algorithms and tools to further improve support for large and complex environmental modeling problems. PEST++ Version 3 includes the Gauss-Marquardt-Levenberg (GML) algorithm for nonlinear parameter estimation, Tikhonov regularization, integrated linear-based uncertainty quantification, options of integrated TCP/IP based parallel run management or external independent run management by use of a Version 2 update of the GENIE Version 1 software code, and utilities for global sensitivity analyses. The Version 3 code design is consistent with PEST++ Version 1 and continues to be designed to lower the barriers of entry for users as well as developers while providing efficient and optimized algorithms capable of accommodating large, highly parameterized inverse problems. As such, this effort continues the original focus of (1) implementing the most popular and powerful features of the PEST software suite in a fashion that is easy for novice or experienced modelers to use and (2) developing a software framework that is easy to extend.
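Two of the core ingredients named above, the Gauss-Marquardt-Levenberg step and Tikhonov-style damping, can be sketched on a linear toy problem. PEST++ itself handles nonlinear models, observation weights, and regularization far more generally; everything below is an illustrative stand-in.

```python
import numpy as np

# Damped Gauss-Newton (GML-style) iteration on a linear toy problem.
# The damping parameter lam blends a Gauss-Newton step (lam -> 0) with
# a gradient-descent-like step (large lam) and acts as Tikhonov damping.

rng = np.random.default_rng(42)
J = rng.standard_normal((20, 3))        # Jacobian w.r.t. 3 parameters
p_true = np.array([1.0, -2.0, 0.5])
y = J @ p_true + 0.01 * rng.standard_normal(20)   # noisy observations

def gml_step(J, residual, lam):
    # Solve (J^T J + lam I) dp = J^T r for the parameter upgrade dp
    n = J.shape[1]
    return np.linalg.solve(J.T @ J + lam * np.eye(n), J.T @ residual)

p = np.zeros(3)
for _ in range(5):                      # a few damped iterations suffice
    p = p + gml_step(J, y - J @ p, lam=0.1)

print(np.round(p, 2))
```

For a well-posed linear problem this iteration converges to the ordinary least-squares solution; in ill-posed, highly parameterized inversions the regularization term is what keeps the upgrade stable, which is the role Tikhonov regularization plays in PEST++.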
Min-Chi Hsiao; Pen-Ning Yu; Dong Song; Liu, Charles Y; Heck, Christi N; Millett, David; Berger, Theodore W
2014-01-01
New interventions using neuromodulatory devices such as vagus nerve stimulation, deep brain stimulation and responsive neurostimulation are available or under study for the treatment of refractory epilepsy. Since the actual mechanisms of seizure onset and termination are still unclear, most researchers and clinicians determine the optimal stimulation parameters through trial-and-error procedures. It is necessary to further explore which electrical stimulation parameters (these may include stimulation frequency, amplitude, duration, interval pattern, and location) constitute an optimal stimulation paradigm to suppress seizures. In a previous study, we developed an in vitro epilepsy model using hippocampal slices from patients suffering from mesial temporal lobe epilepsy. Using a planar multi-electrode array system, inter-ictal activity from human hippocampal slices was consistently recorded. In this study, we have further developed this in vitro seizure model into a testbed for exploring possible neurostimulation paradigms to inhibit inter-ictal spikes. The methodology used to collect the electrophysiological data and the approach used to apply different electrical stimulation parameters to the slices are described in this paper. The results show that this experimental testbed provides a platform for testing optimal stimulation parameters for seizure cessation. We expect this testbed will expedite the process of identifying the most effective parameters, and may ultimately be used to guide programming of new stimulation paradigms for neuromodulatory devices.
Energy Technology Data Exchange (ETDEWEB)
Serra, Oscar [Comision Nacional de Energia Atomica, Centro Atomico Bariloche (Argentina)
2000-07-01
The effect of uncertainty in the values of several thermo-hydraulic parameters on the core behaviour of the CAREM-25 reactor was studied. Using the chained codes CITVAP-THERMIT and perturbing the reference states, it was found that the effects on total power were not very important, but were much larger for pressure. The effects were hardly significant under perturbations of the void fraction calculation and the fuel temperature, while the reactivity and the power peaking factor changed markedly in the case of the coolant flow. We conclude that this procedure is adequate and useful for our purpose.
Parametric uncertainty in optical image modeling
Potzick, James; Marx, Egon; Davidson, Mark
2006-10-01
Optical photomask feature metrology and wafer exposure process simulation both rely on optical image modeling for accurate results. While it is fair to question the accuracies of the available models, model results also depend on several input parameters describing the object and imaging system. Errors in these parameter values can lead to significant errors in the modeled image. These parameters include wavelength, illumination and objective NA's, magnification, focus, etc. for the optical system, and topography, complex index of refraction n and k, etc. for the object. In this paper each input parameter is varied over a range about its nominal value and the corresponding images simulated. Second order parameter interactions are not explored. Using the scenario of the optical measurement of photomask features, these parametric sensitivities are quantified by calculating the apparent change of the measured linewidth for a small change in the relevant parameter. Then, using reasonable values for the estimated uncertainties of these parameters, the parametric linewidth uncertainties can be calculated and combined to give a lower limit to the linewidth measurement uncertainty for those parameter uncertainties.
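The combination step described above, scaling each parametric linewidth sensitivity by the estimated parameter uncertainty and combining the contributions, amounts to standard root-sum-square error propagation. A minimal sketch, with entirely hypothetical sensitivity and uncertainty values:

```python
import math

# Hypothetical parametric sensitivities dL/dp (nm of apparent linewidth change
# per unit of each parameter) and standard uncertainties u_p. Illustration only.
sensitivities = {
    "wavelength_nm": 0.8,   # nm linewidth change per nm of wavelength error
    "NA": 120.0,            # nm per unit change in numerical aperture
    "focus_um": 15.0,       # nm per micrometre of defocus
}
uncertainties = {
    "wavelength_nm": 0.1,
    "NA": 0.002,
    "focus_um": 0.05,
}

# Each parameter contributes u_i = |dL/dp_i| * u_{p_i}; combine in quadrature
contributions = {k: abs(sensitivities[k]) * uncertainties[k] for k in sensitivities}
u_linewidth = math.sqrt(sum(c * c for c in contributions.values()))
print(contributions, u_linewidth)   # lower limit on the linewidth uncertainty
```

The quadrature sum assumes the parameter errors are independent; correlated inputs would require the full covariance treatment.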
International Nuclear Information System (INIS)
Khairov, G.B.
1997-01-01
In this article the influence of the petroleum industry, and especially of the discovery of hydrocarbon deposits with high hydrogen sulfide content, on ecological safety and the environment is shown. New criteria for zoning and geological estimation are presented. For the first time, a parameter of territorial ecological protection is proposed. (author)
Roberts, James S.; Bao, Han; Huang, Chun-Wei; Gagne, Phill
Characteristic curve approaches for linking parameters from the generalized partial credit model were examined for cases in which common (anchor) items are calibrated separately in two groups. Three of these approaches are simple extensions of the test characteristic curve (TCC), item characteristic curve (ICC), and operating characteristic curve…
DEFF Research Database (Denmark)
Lee, Daniel Sang-Hoon; Naboni, Emanuele
2017-01-01
The paper presents research exploring the thermal mass effect of reinforced concrete beams with structurally optimised geometrical forms. Fully exposed concrete soffits in architectural contexts create more than just visual impacts on the indoor climate through their possible interferences… The relationship between the thermal mass effect (and its implication for thermal comfort) and the given geometrical parameters of exposed-soffit reinforced concrete beams is explored. The geometrical parameters of the beams are initially defined by means of structural optimisation. The beams consist of flange and web in the likeness of a T… the resultant heat exchange behaviour, and the implication for the thermal comfort of the indoor environment. However, the current paper presents the thermal mass characteristics of one geometrical type. The study is based on results derived from computational fluid dynamics (CFD) analysis, where Rhino 3D is used…
Creating and Exploring Huge Parameter Spaces: Interactive Evolution as a Tool for Sound Generation
DEFF Research Database (Denmark)
Dahlstedt, Palle
2001-01-01
In this paper, a program is presented that applies interactive evolution to sound generation, i.e., preferred individuals are repeatedly selected from a population of genetically bred sound objects, created with various synthesis and pattern generation algorithms. This simplifies aural exploration...... applications. It is also shown how this technique can be used to simplify sound design in standard hardware synthesizers, a task normally avoided by most musicians, due to the required amount of technical understanding....
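The select-and-breed loop described above can be sketched as follows. Everything here is illustrative: the human listener who auditions sounds is replaced by a stand-in distance-to-target score, and the genome length and mutation settings are hypothetical.

```python
import random

random.seed(0)
GENOME_LEN = 8                      # hypothetical number of synthesis parameters

def mutate(genome, sigma=0.1):
    # Gaussian mutation of each parameter, clamped to [0, 1]
    return [min(1.0, max(0.0, g + random.gauss(0.0, sigma))) for g in genome]

def listener_pick(population, target):
    # Stand-in for aural selection: prefer genomes close to a "preferred" timbre
    return min(population, key=lambda g: sum((a - b) ** 2 for a, b in zip(g, target)))

target = [0.5] * GENOME_LEN         # the sound the stubbed "listener" prefers
parent = [random.random() for _ in range(GENOME_LEN)]
initial = sum((a - b) ** 2 for a, b in zip(parent, target))

for _ in range(30):                 # breed, audition, select, repeat
    population = [parent] + [mutate(parent) for _ in range(7)]
    parent = listener_pick(population, target)

final = sum((a - b) ** 2 for a, b in zip(parent, target))
print(initial, final)               # selection drives the genome toward preference
```

Because the parent survives into each generation (elitism), the selected genome never gets worse under the listener's criterion, which is what makes such undirected aural exploration converge on preferred sounds.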
Saleem, M.; Resmi, L.; Misra, Kuntal; Pai, Archana; Arun, K. G.
2018-03-01
Short duration Gamma Ray Bursts (SGRB) and their afterglows are among the most promising electromagnetic (EM) counterparts of Neutron Star (NS) mergers. The afterglow emission is broad-band, visible across the entire electromagnetic window from γ-ray to radio frequencies. The flux evolution in these frequencies is sensitive to the multidimensional afterglow physical parameter space. Observations of gravitational wave (GW) from BNS mergers in spatial and temporal coincidence with SGRB and associated afterglows can provide valuable constraints on afterglow physics. We run simulations of GW-detected BNS events and assuming that all of them are associated with a GRB jet which also produces an afterglow, investigate how detections or non-detections in X-ray, optical and radio frequencies can be influenced by the parameter space. We narrow down the regions of afterglow parameter space for a uniform top-hat jet model, which would result in different detection scenarios. We list inferences which can be drawn on the physics of GRB afterglows from multimessenger astronomy with coincident GW-EM observations.
Zou, Xiao-Duan; Li, Jian-Yang; Clark, Beth Ellen; Golish, Dathon
2018-01-01
The OSIRIS-REx spacecraft, launched in September, 2016, will study the asteroid Bennu and return a sample from its surface to Earth in 2023. Bennu is a near-Earth carbonaceous asteroid which will provide insight into the formation and evolution of the solar system. OSIRIS-REx will first approach Bennu in August 2018 and will study the asteroid for approximately two years before sampling. OSIRIS-REx will develop its photometric model (including Lommel-Seeliger, ROLO, McEwen, Minnaert and Akimov) of Bennu with OCAM and OVIRS during the Detailed Survey mission phase. The model developed during this phase will be used to photometrically correct the OCAM and OVIRS data. Here we present the analysis of the error for the photometric corrections. Based on our test data sets, we find: 1. The model uncertainties are correct only when calculated with the full covariance matrix, because the parameters are highly correlated. 2. There is no evidence that any single parameter dominates in any model. 3. Model error and data error contribute comparably to the final correction error. 4. We tested the uncertainty module on synthetic and real data sets, and find that model performance depends on data coverage and data quality; these tests gave us a better understanding of how the different models behave in different cases. 5. The L-S model is more reliable than the others, perhaps because the simulated data are based on the L-S model; however, the test on real data (SPDIF) also shows a slight advantage for L-S. ROLO is not reliable for calculating Bond albedo, the uncertainty of the McEwen model is large in most cases, and Akimov performs unphysically on the SOPIE 1 data. 6. L-S is the better default choice; this conclusion is based mainly on our tests on the SOPIE data and IPDIF.
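The point about covariances can be made concrete with linear error propagation: for a prediction with gradient g in the parameters, the variance is gᵀCg, and dropping the off-diagonal terms of C misstates the error when parameters are correlated. The numbers below are hypothetical, for illustration only:

```python
import numpy as np

# Illustrative 2-parameter model with strongly anti-correlated parameters.
C = np.array([[0.04, -0.035],          # parameter covariance matrix (made up)
              [-0.035, 0.04]])
g = np.array([1.0, 1.0])               # gradient of the prediction w.r.t. parameters

var_full = g @ C @ g                   # correct: uses the full covariance matrix
var_diag = g @ np.diag(np.diag(C)) @ g # wrong: keeps the variances only

print(var_full, var_diag)              # 0.01 vs 0.08: correlation matters 8x here
```

With anti-correlated parameters the errors partially cancel, so the diagonal-only estimate grossly overstates the prediction uncertainty; for positively correlated parameters it would understate it.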
International Nuclear Information System (INIS)
Thomas, R.E.
1982-03-01
An evaluation is made of the suitability of analytical and statistical sampling methods for making uncertainty analyses. The adjoint method is found to be well-suited for obtaining sensitivity coefficients for computer programs involving large numbers of equations and input parameters. For this purpose the Latin Hypercube Sampling method is found to be inferior to conventional experimental designs. The Latin hypercube method can be used to estimate output probability density functions, but requires supplementary rank transformations followed by stepwise regression to obtain uncertainty information on individual input parameters. A simple Cork and Bottle problem is used to illustrate the efficiency of the adjoint method relative to certain statistical sampling methods. For linear models of the form Ax=b it is shown that a complete adjoint sensitivity analysis can be made without formulating and solving the adjoint problem. This can be done either by using a special type of statistical sampling or by reformulating the primal problem and using suitable linear programming software
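The Latin hypercube method discussed above can be sketched in a few lines: each dimension is divided into equal-probability strata, one point is drawn per stratum, and the strata are randomly paired across dimensions. This is an illustrative implementation, not the one evaluated in the report:

```python
import random

def latin_hypercube(n_samples, n_dims, seed=42):
    """Latin hypercube sample on [0,1)^d: one point per equal-width stratum
    in every dimension, with strata randomly paired across dimensions."""
    rng = random.Random(seed)
    cols = []
    for _ in range(n_dims):
        # One uniform draw inside each of the n strata, then shuffle the strata
        pts = [(i + rng.random()) / n_samples for i in range(n_samples)]
        rng.shuffle(pts)
        cols.append(pts)
    return [list(row) for row in zip(*cols)]

lhs = latin_hypercube(10, 3)
occupied = sorted(int(row[0] * 10) for row in lhs)
print(occupied)   # every decile stratum of dimension 0 is hit exactly once
```

The stratification guarantees full marginal coverage with few runs, which is why the method is attractive for estimating output distributions, though, as the abstract notes, extracting per-parameter uncertainty information from the result requires further regression analysis.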
Song, Xiao-Na; Song, Shuai; Tejado Balsera, Inés; Liu, Lei-Po
2017-10-01
This paper investigates the mixed H ∞ and passive projective synchronization problem for fractional-order (FO) memristor-based neural networks. Our aim is to design a controller such that, though the unavoidable phenomena of time-delay and parameter uncertainty are fully considered, the resulting closed-loop system is asymptotically stable with a mixed H ∞ and passive performance level. By combining active and adaptive control methods, a novel hybrid control strategy is designed, which can guarantee the robust stability of the closed-loop system and also ensure a mixed H ∞ and passive performance level. Via the application of FO Lyapunov stability theory, the projective synchronization conditions are addressed in terms of linear matrix inequality techniques. Finally, two simulation examples are given to illustrate the effectiveness of the proposed method. Supported by National Natural Science Foundation of China under Grant Nos. U1604146, U1404610, 61473115, 61203047, Science and Technology Research Project in Henan Province under Grant Nos. 152102210273, 162102410024, and Foundation for the University Technological Innovative Talents of Henan Province under Grant No. 18HASTIT019
Aguiló-Aguayo, Ingrid; Abreu, Corina; Hossain, Mohammad B; Altisent, Rosa; Brunton, Nigel; Viñas, Inmaculada; Rai, Dilip K
2015-03-02
The effects of various pulsed electric field (PEF) parameters on the extraction of polyacetylenes from carrot slices were investigated. Optimised conditions with regard to electric field strength (1-4 kV/cm), number of pulses (100-1500), pulse frequency (10-200 Hz) and pulse width (10-30 μs) were identified using response surface methodology (RSM) to maximise the extraction of falcarinol (FaOH), falcarindiol (FaDOH) and falcarindiol-3-acetate (FaDOAc) from carrot slices. Data obtained from RSM and experiments fitted significantly (p < 0.0001) the proposed second-order response functions, with high regression coefficients (R² ranging from 0.75 to 0.82). Maximal FaOH (188%), FaDOH (164.9%) and FaDOAc (166.8%) levels relative to untreated samples were obtained after applying PEF treatments at 4 kV/cm with 100 pulses of 10 μs at 10 Hz. The predicted values from the developed quadratic polynomial equation were in close agreement with the actual experimental values, with low average mean deviations (E%) ranging from 0.68% to 3.58%.
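The RSM step, fitting a second-order response function and locating its optimum, can be sketched with a one-factor toy example. The yields below are made up and the design is one-dimensional, unlike the paper's four-factor design:

```python
import numpy as np

# Toy one-factor response surface: extraction yield peaks at an interior
# field strength. Data are invented for illustration.
E = np.array([1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0])         # kV/cm
y = np.array([120., 150., 168., 175., 170., 155., 130.])  # % of untreated

# Second-order model y = b0 + b1*E + b2*E^2, fitted by ordinary least squares
X = np.column_stack([np.ones_like(E), E, E**2])
b, *_ = np.linalg.lstsq(X, y, rcond=None)

E_opt = -b[1] / (2 * b[2])    # stationary point of the fitted quadratic
print(b, E_opt)               # negative curvature => interior maximum
```

With more factors the same idea extends to a full quadratic polynomial with interaction terms, and the stationary point is found by solving the gradient system rather than a single derivative.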
Full parameter scan of the Zee model: exploring Higgs lepton flavor violation
Energy Technology Data Exchange (ETDEWEB)
Herrero-García, Juan [ARC Center of Excellence for Particle Physics at the Terascale, University of Adelaide,Adelaide, SA 5005 (Australia); Department of Physics, School of Engineering Sciences, KTH Royal Institute of Technology,AlbaNova University Center, Roslagstullsbacken 21, 106 91 Stockholm (Sweden); Ohlsson, Tommy; Riad, Stella; Wirén, Jens [Department of Physics, School of Engineering Sciences, KTH Royal Institute of Technology,AlbaNova University Center, Roslagstullsbacken 21, 106 91 Stockholm (Sweden)
2017-04-21
We study the general Zee model, which includes an extra Higgs scalar doublet and a new singly-charged scalar singlet. Neutrino masses are generated at one-loop level, and in order to describe leptonic mixing, both the Standard Model and the extra Higgs scalar doublets need to couple to leptons (in a type-III two-Higgs doublet model), which necessarily generates large lepton flavor violating signals, also in Higgs decays. Imposing all relevant phenomenological constraints and performing a full numerical scan of the parameter space, we find that both normal and inverted neutrino mass orderings can be fitted, although the latter is disfavored with respect to the former. In fact, inverted ordering can only be accommodated if θ{sub 23} turns out to be in the first octant. A branching ratio for h→τμ of up to 10{sup −2} is allowed, but it could be as low as 10{sup −6}. In addition, if future expected sensitivities of τ→μγ are achieved, normal ordering can be almost completely tested. Also, μe conversion is expected to probe large parts of the parameter space, excluding completely inverted ordering if no signal is observed. Furthermore, non-standard neutrino interactions are found to be smaller than 10{sup −6}, which is well below future experimental sensitivity. Finally, the results of our scan indicate that the masses of the additional scalars have to be below 2.5 TeV, and typically they are lower than that and therefore within the reach of the LHC and future colliders.
Directory of Open Access Journals (Sweden)
Ingrid Aguiló-Aguayo
2015-03-01
The effects of various pulsed electric field (PEF) parameters on the extraction of polyacetylenes from carrot slices were investigated. Optimised conditions with regard to electric field strength (1–4 kV/cm), number of pulses (100–1500), pulse frequency (10–200 Hz) and pulse width (10–30 μs) were identified using response surface methodology (RSM) to maximise the extraction of falcarinol (FaOH), falcarindiol (FaDOH) and falcarindiol-3-acetate (FaDOAc) from carrot slices. Data obtained from RSM and experiments fitted significantly (p < 0.0001) the proposed second-order response functions, with high regression coefficients (R² ranging from 0.75 to 0.82). Maximal FaOH (188%), FaDOH (164.9%) and FaDOAc (166.8%) levels relative to untreated samples were obtained from carrot slices after applying PEF treatments at 4 kV/cm with 100 pulses of 10 μs at 10 Hz. The predicted values from the developed quadratic polynomial equation were in close agreement with the actual experimental values, with low average mean deviations (E%) ranging from 0.68% to 3.58%.
Wang, Weicheng
2012-07-01
Thermal hydrolysis of triglycerides to form free fatty acid (FFA) is a well-established industry practice. Recently, this process has been employed as a first step in the production of biofuels from lipids. To that end, batch and continuous hydrolysis of various feedstocks has been examined at the laboratory scale. Canola, the primary feedstock in this paper, camelina and algal oils were converted to high quality FFA. For the different reaction temperatures, the continuous hydrolysis system was found to provide better yields than the laboratory batch system. In addition, CFD simulation with ANSYS-CFX was used to model the performance and reactant/product separation in the continuous, counter-flow reactor. The effects of reaction temperature, water-to-oil ratio (ratio of water and oil volumetric inflow rate), and preheating of the reactants were examined experimentally. Optimization of these parameters has resulted in an improved, continuous process with high mass yields (89-93%, for reactor temperature of 260°C and water-to-oil ratio of 4:1) and energy efficiency (76%, for reactor temperature of 250°C and water-to-oil ratio of 2:1). Based on the product quality and energy efficiency considerations, the reactor temperature of 260°C and water-to-oil ratio of 4:1 have provided the optimal condition for the lab scale continuous hydrolysis reaction. © 2012 Elsevier B.V.
Exploring the hole cleaning parameters of horizontal wellbore using two-phase Eulerian CFD approach
Directory of Open Access Journals (Sweden)
Satish K Dewangan
2016-03-01
The present investigation deals with flow through a concentric annulus with the inner cylinder in rotation. This work is important to the petroleum industry in relation to wellbore drilling, where hole cleaning is a serious problem, especially in horizontal drilling. The effects of various parameters affecting hole cleaning, such as slurry flow velocity, inner cylinder rotational speed and inlet solid concentration, were discussed. Their effects on pressure drop, wall shear stress, mixture turbulence kinetic energy, solid-phase velocity and slip velocity, which are responsible for the solid-phase distribution, were analysed. The flow was considered steady, incompressible, two-phase slurry flow with water as the carrier fluid and silica sand as the secondary phase. An Eulerian approach was used for modelling the slurry flow. Silica sand was considered spherical with a particle size of 180 µm. ANSYS FLUENT software was used for modelling and solution. Plotting was done using Tecplot software and Microsoft Office.
Exploring power and parameter estimation of the BiSSE method for analyzing species diversification
Directory of Open Access Journals (Sweden)
Davis Matthew P
2013-02-01
Background: There has been a considerable increase in studies investigating rates of diversification and character evolution, one of the promising techniques being the BiSSE method (binary state speciation and extinction). This study uses simulations under a variety of sample sizes (number of tips) and rate asymmetries (speciation, extinction, character change) to determine BiSSE's ability to test hypotheses, and investigates whether the method is susceptible to confounding effects. Results: We found that the power of the BiSSE method is severely affected by both sample size and high tip-ratio bias (one character state dominating among observed tips). Sample size and high tip-ratio bias also reduced the accuracy and precision of parameter estimation, and resulted in the inability to infer which rate asymmetry caused the excess of a character state. In low tip-ratio bias scenarios with an appropriate tip sample size, BiSSE accurately estimated the rate asymmetry causing the character-state excess, avoiding the issue of confounding effects. Conclusions: Based on our findings, we recommend that future studies utilizing BiSSE that have fewer than 300 terminals and/or datasets with high tip-ratio bias (i.e., fewer than 10% of species of one character state) be extremely cautious in interpreting hypothesis-testing results.
International Nuclear Information System (INIS)
Hummels, Cameron B.; Bryan, Greg L.
2012-01-01
We carry out adaptive mesh refinement cosmological simulations of Milky Way mass halos in order to investigate the formation of disk-like galaxies in a Λ-dominated cold dark matter model. We evolve a suite of five halos to z = 0 and find that a gas disk forms in each; however, in agreement with previous smoothed particle hydrodynamics simulations (that did not include a subgrid feedback model), the rotation curves of all halos are centrally peaked due to a massive spheroidal component. Our standard model includes radiative cooling and star formation, but no feedback. We further investigate this angular momentum problem by systematically modifying various simulation parameters including: (1) spatial resolution, ranging from 1700 to 212 pc; (2) an additional pressure component to ensure that the Jeans length is always resolved; (3) low star formation efficiency, going down to 0.1%; (4) fixed physical resolution as opposed to comoving resolution; (5) a supernova feedback model that injects thermal energy into the local cell; and (6) a subgrid feedback model which suppresses cooling in the immediate vicinity of a star formation event. Of all of these, we find that only the last (cooling suppression) has any impact on the massive spheroidal component. In particular, a simulation with cooling suppression and feedback results in a rotation curve that, while still peaked, is considerably reduced from our standard runs.
International Nuclear Information System (INIS)
Depres, B.; Dossantos-Uzarralde, P.
2009-01-01
More than 150 researchers and engineers from universities and industry met to discuss the new methodologies developed for assessing uncertainty. About 20 papers were presented, and the main topics were methods for studying the propagation of uncertainties, sensitivity analysis, nuclear data covariances, and multi-parameter optimisation. This report gathers the contributions of CEA researchers and engineers.
Abramson, Adam; Adar, Eilon; Lazarovitch, Naftali
2014-06-01
Groundwater is often the most or only feasible safe drinking water source in remote, low-resource areas, yet the economics of its development have not been systematically outlined. We applied AWARE (Assessing Water Alternatives in Remote Economies), a recently developed Decision Support System, to investigate the costs and benefits of groundwater access and abstraction for non-networked, rural supplies. Synthetic profiles of community water services (n = 17,962), defined across 13 parameters' values and ranges relevant to remote areas, were applied to the decision framework, and the parameter effects on economic outcomes were investigated. Regressions and analysis of output distributions indicate that the most important factors determining the cost of water improvements include the technological approach, the water service target, hydrological parameters, and population density. New source construction is less cost-effective than the use or improvement of existing wells, but necessary for expanding access to isolated households. We also explored three financing approaches - willingness-to-pay, -borrow, and -work - and found that they significantly impact the prospects of achieving demand-driven cost recovery. The net benefit under willingness to work, in which water infrastructure is coupled to community irrigation and cash payments replaced by labor commitments, is impacted most strongly by groundwater yield and managerial factors. These findings suggest that the cost-benefit dynamics of groundwater-based water supply improvements vary considerably by many parameters, and that the relative strengths of different development strategies may be leveraged for achieving optimal outcomes.
International Nuclear Information System (INIS)
Lohrenz, J.
1992-01-01
Oil and gas exploration is a unique kind of business. Businesses providing a vast and ever-changing panoply of products to markets are a focus of several disciplines' energetic study and analysis. The product inventory problem is robust, pertinent, and meaningful, and it merits the voluminous and protracted attention received from keen business practitioners. Prototypical business practitioners, be they trained by years of business hurly-burly, or sophisticated MBAs with arrays of mathematical algorithms and computers, are not normally prepared, however, to recognize the unique nature of exploration's inventories. Put together such a business practitioner with an explorationist and misunderstandings, hidden and open, are inevitable and predictably rife. The first purpose of this paper is to articulate the inherited inventory handling paradigms of business practitioners in relation to exploration's inventories. To do so, standard pedagogy in business administration is used and a case study of an exploration venture is presented. A second purpose is to show the burdens that the misunderstandings create. The result is not just business plans that go awry, but public policies that have effects opposite from those intended
The cost of uncertainty in capacity expansion problems
Energy Technology Data Exchange (ETDEWEB)
Jenhung Wang [National Chung Cheng Univ., Dept. of Business Administration, Chia-Yi (Taiwan); Sparrow, F.T. [Purdue Univ., School of Industrial Engineering, West Lafayette, IN (United States)
1999-07-01
The goals of this paper are to present a two-stage programming model of the capacity expansion problem under demand uncertainty and to explore the impact of that uncertainty on cost. The model is a mixed integer nonlinear programming (MINLP) model that maximises the expected present value of utility profits over the planning horizon, under rate-of-return and reserve-margin regulation constraints. The results reveal that uncertainty seriously harms profit. Both microeconomics and mathematical programming are used to analyse the problem: we observe the economic behaviour of the utility under uncertainty and investigate the influence of each economic parameter on the cost of uncertainty. (Author)
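A minimal two-stage model of this kind can be sketched by enumeration: capacity is chosen before demand is known, and sales are made after it is revealed. All prices, costs, and demand scenarios below are invented, and the value-of-the-stochastic-solution line is one simple way to put a number on the cost of ignoring uncertainty, not the paper's MINLP formulation:

```python
# Stage 1: choose capacity (integer units) before demand is known.
# Stage 2: sell min(capacity, demand) in each demand scenario.
scenarios = [(0.3, 50.0), (0.5, 80.0), (0.2, 120.0)]   # (probability, demand)
price, unit_capacity_cost = 10.0, 6.0                  # hypothetical economics

def expected_profit(capacity):
    revenue = sum(p * price * min(capacity, d) for p, d in scenarios)
    return revenue - unit_capacity_cost * capacity

# Stochastic solution: optimise expected profit over the scenarios
best_cap = max(range(0, 201), key=expected_profit)

# Deterministic benchmark: plan for the expected demand, then face the scenarios
mean_demand = sum(p * d for p, d in scenarios)         # 79.0
value_of_stochastic_solution = (expected_profit(best_cap)
                                - expected_profit(round(mean_demand)))
print(best_cap, expected_profit(best_cap), value_of_stochastic_solution)
```

Even in this tiny example the optimal hedge (build to the middle scenario) differs from the plan for mean demand, and the gap between the two expected profits is the premium earned by modelling the uncertainty explicitly.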
Bagnardi, M.; Hooper, A. J.
2017-12-01
Inversions of geodetic observational data, such as Interferometric Synthetic Aperture Radar (InSAR) and Global Navigation Satellite System (GNSS) measurements, are often performed to obtain information about the source of surface displacements. Inverse problem theory has been applied to study magmatic processes, the earthquake cycle, and other phenomena that cause deformation of the Earth's interior and of its surface. Together with increasing improvements in data resolution, both spatial and temporal, new satellite missions (e.g., European Commission's Sentinel-1 satellites) are providing the unprecedented opportunity to access space-geodetic data within hours from their acquisition. To truly take advantage of these opportunities we must become able to interpret geodetic data in a rapid and robust manner. Here we present the open-source Geodetic Bayesian Inversion Software (GBIS; available for download at http://comet.nerc.ac.uk/gbis). GBIS is written in Matlab and offers a series of user-friendly and interactive pre- and post-processing tools. For example, an interactive function has been developed to estimate the characteristics of noise in InSAR data by calculating the experimental semi-variogram. The inversion software uses a Markov-chain Monte Carlo algorithm, incorporating the Metropolis-Hastings algorithm with adaptive step size, to efficiently sample the posterior probability distribution of the different source parameters. The probabilistic Bayesian approach allows the user to retrieve estimates of the optimal (best-fitting) deformation source parameters together with the associated uncertainties produced by errors in the data (and by scaling, errors in the model). The current version of GBIS (V1.0) includes fast analytical forward models for magmatic sources of different geometry (e.g., point source, finite spherical source, prolate spheroid source, penny-shaped sill-like source, and dipping-dike with uniform opening) and for dipping faults with uniform
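GBIS itself is written in Matlab; purely as an illustration of the Metropolis-Hastings-with-adaptive-step-size idea it uses, here is a minimal Python sketch targeting a one-dimensional standard normal "posterior". All settings (window length, target acceptance rate, step bounds) are hypothetical:

```python
import math
import random

random.seed(1)

def log_post(theta):
    return -0.5 * theta * theta        # unnormalised log-density of N(0, 1)

theta, step, acc_window = 0.0, 1.0, 0
samples = []
for i in range(1, 20001):
    proposal = theta + random.gauss(0.0, step)
    # Metropolis acceptance test in log space
    if math.log(random.random()) < log_post(proposal) - log_post(theta):
        theta = proposal
        acc_window += 1
    samples.append(theta)
    if i % 100 == 0:                   # adapt step size toward ~30% acceptance
        rate = acc_window / 100.0
        step *= 1.1 if rate > 0.3 else 0.9
        acc_window = 0

burn = samples[5000:]                  # discard burn-in
mean = sum(burn) / len(burn)
var = sum((s - mean) ** 2 for s in burn) / len(burn)
print(mean, var)                       # close to 0 and 1 for the N(0,1) target
```

A real geodetic inversion replaces `log_post` with the data misfit of a forward deformation model evaluated at the trial source parameters, and adapts a step size per parameter; continual adaptation is a pragmatic shortcut that should be frozen or made vanishing for strict correctness.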
A Bayesian approach to model uncertainty
International Nuclear Information System (INIS)
Buslik, A.
1994-01-01
A Bayesian approach to model uncertainty is taken. For the case of a finite number of alternative models, the model uncertainty is equivalent to parameter uncertainty. A derivation based on Savage's partition problem is given
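The equivalence noted above can be made concrete: with a finite set of alternative models, the model index is just a discrete parameter, posterior model probabilities follow from Bayes' rule, and predictions can be model-averaged. A minimal sketch with made-up priors and likelihoods:

```python
# Prior model probabilities and marginal likelihoods P(data | model).
# All numbers are invented for illustration.
priors = {"model_A": 0.5, "model_B": 0.3, "model_C": 0.2}
likelihoods = {"model_A": 0.02, "model_B": 0.10, "model_C": 0.01}

# Bayes' rule over the discrete "model index" parameter
evidence = sum(priors[m] * likelihoods[m] for m in priors)
posterior = {m: priors[m] * likelihoods[m] / evidence for m in priors}

# Model-averaged prediction of some quantity of interest
predictions = {"model_A": 1.0, "model_B": 2.0, "model_C": 4.0}
averaged = sum(posterior[m] * predictions[m] for m in priors)
print(posterior, averaged)
```

The model-averaged prediction carries both within-model and between-model uncertainty, which is exactly the sense in which model uncertainty reduces to parameter uncertainty for a finite model set.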
Energy Technology Data Exchange (ETDEWEB)
Bernard, D
2001-12-01
The aim of this thesis was to evaluate uncertainties in key neutronic parameters of slab reactors. These uncertainties have many origins: technological, for fabrication parameters, and physical, for nuclear data. First, each contribution to the uncertainty is calculated, and finally an uncertainty factor is associated with each key slab parameter, such as reactivity, isothermal reactivity coefficient, control rod efficiency, power form factor before irradiation, and lifetime. These uncertainty factors were computed by Generalized Perturbation Theory at step 0 and by direct calculations for irradiation problems. One application of neutronic conformity concerned the adjustment of fabrication and nuclear data target precisions. Statistical (uncertainties) and deterministic (deviations) approaches were studied. The uncertainties of key neutronic slab parameters were thereby reduced and nuclear performance was optimized. (author)
Energy Technology Data Exchange (ETDEWEB)
Mashouf, S [Sunnybrook Odette Cancer Centre, Toronto, ON (Canada); University of Toronto, Dept. of Radiation Oncology, Toronto, ON (Canada); Ravi, A; Morton, G; Song, W [Sunnybrook Odette Cancer Centre, Toronto, ON (Canada); University of Toronto, Dept. of Radiation Oncology, Toronto, ON (Canada); Sunnybrook Research Institute, Toronto, ON (Canada)
2015-06-15
Purpose: There is strong evidence relating post-implant dosimetry for permanent seed prostate brachytherapy to local control rates. The delineation of the prostate on CT images, however, represents a challenge, as it is difficult to confidently identify the prostate borders from the surrounding soft tissue. This study aims at quantifying the sensitivity of clinically relevant dosimetric parameters to prostate contouring uncertainty. Methods: The post-implant CT images and plans for a cohort of 43 patients, who received I-125 permanent prostate seed implants in our centre, were exported to the MIM Symphony LDR brachytherapy treatment planning system (MIM Software Inc., Cleveland, OH). The prostate contours in post-implant CT images were expanded/contracted uniformly by margins of ±1.00 mm, ±2.00 mm, ±3.00 mm, ±4.00 mm and ±5.00 mm (±0.01 mm). The values of V100 and D90 were extracted from dose-volume histograms for each contour and compared. Results: The mean values of V100 and D90 were 92.3±8.4% and 108.4±12.3%, respectively (Rx = 145 Gy). V100 was reduced by 3.2±1.5%, 7.2±3.0%, 12.8±4.0%, 19.0±4.8%, and 25.5±5.4% for prostate contours expanded by margins of +1 mm, +2 mm, +3 mm, +4 mm, and +5 mm, respectively, while it increased by 1.6±1.2%, 2.4±2.4%, 2.7±3.2%, 2.9±4.2%, and 2.9±5.1% for the contracted contours. D90 was reduced by 6.9±3.5%, 14.5±6.1%, 23.8±7.1%, 33.6±8.5%, and 40.6±8.7% and increased by 4.1±2.6%, 6.1±5.0%, 7.2±5.7%, 8.1±7.3% and 8.1±7.3% for the same set of contours. Conclusion: Systematic expansion errors of more than 1 mm may render a plan sub-optimal. Conversely, contraction errors may result in labeling a plan as optimal when it is not. The use of MRI images to contour the prostate should result in better delineation of the prostate, which increases the predictive value of post-implant plans. Since observers tend to overestimate the prostate volume on CT, compared with MRI, the impact of the
Lindley, Dennis V
2013-01-01
Praise for the First Edition: "...a reference for everyone who is interested in knowing and handling uncertainty." - Journal of Applied Statistics. The critically acclaimed First Edition of Understanding Uncertainty provided a study of uncertainty addressed to scholars in all fields, showing that uncertainty could be measured by probability, and that probability obeyed three basic rules that enabled uncertainty to be handled sensibly in everyday life. These ideas were extended to embrace the scientific method and to show how decisions, containing an uncertain element, could be rationally made.
Physical Uncertainty Bounds (PUB)
Energy Technology Data Exchange (ETDEWEB)
Vaughan, Diane Elizabeth [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Preston, Dean L. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2015-03-19
This paper introduces and motivates the need for a new methodology for determining upper bounds on the uncertainties in simulations of engineered systems due to limited fidelity in the composite continuum-level physics models needed to simulate the systems. We show that traditional uncertainty quantification methods provide, at best, a lower bound on this uncertainty. We propose to obtain bounds on the simulation uncertainties by first determining bounds on the physical quantities or processes relevant to system performance. By bounding these physics processes, as opposed to carrying out statistical analyses of the parameter sets of specific physics models or simply switching out the available physics models, one can obtain upper bounds on the uncertainties in simulated quantities of interest.
Koch, Michael
Measurement uncertainty is one of the key issues in quality assurance. It has become increasingly important for analytical chemistry laboratories with accreditation to ISO/IEC 17025. The uncertainty of a measurement is the most important criterion for deciding whether a measurement result is fit for purpose. It also helps in deciding whether a specification limit is exceeded. Estimating measurement uncertainty is often not trivial; several strategies developed for this purpose are briefly described in this chapter. In addition, the different ways of taking uncertainty into account in compliance assessment are explained.
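The combination of uncertainty components and their use in a compliance decision can be sketched as follows. The analyte, limit, and component values are hypothetical, and the root-sum-of-squares combination assumes independent components:

```python
import math

def combined_standard_uncertainty(components):
    """Combine independent standard uncertainty components (root-sum-of-squares)."""
    return math.sqrt(sum(u**2 for u in components))

def complies(result, limit, u_c, k=2.0):
    """Decide compliance with an upper specification limit using a guard band.

    The result is accepted only if it lies below the limit by more than the
    expanded uncertainty U = k * u_c (coverage factor k ~ 2 for ~95 % coverage).
    """
    return result + k * u_c <= limit

# Hypothetical example: lead content in water, limit 10 ug/L
u_c = combined_standard_uncertainty([0.20, 0.15, 0.10])  # repeatability, calibration, recovery
result = 9.2
print(round(u_c, 3))
print(complies(result, 10.0, u_c))
```

The guard-band rule shown is only one of the decision rules discussed in this kind of compliance assessment; shared-risk rules compare the result directly with the limit.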
Campbell, J. L.; Lee, M.; Jones, B. N.; Andrushenko, S. M.; Holmes, N. G.; Maxwell, J. A.; Taylor, S. M.
2009-04-01
The detection sensitivities of the Alpha Particle X-ray Spectrometer (APXS) instruments on the Mars Exploration Rovers for a wide range of elements were experimentally determined in 2002 using spectra of geochemical reference materials. A flight spare instrument was similarly calibrated, and the calibration exercise was then continued for this unit with an extended set of geochemical reference materials together with pure elements and simple chemical compounds. The flight spare instrument data are examined in detail here using a newly developed fundamental parameters approach which takes precise account of all the physics inherent in the two X-ray generation techniques involved, namely, X-ray fluorescence and particle-induced X-ray emission. The objectives are to characterize the instrument as fully as possible, to test this new approach, and to determine the accuracy of calibration for major, minor, and trace elements. For some of the lightest elements the resulting calibration exhibits a dependence upon the mineral assemblage of the geological reference material; explanations are suggested for these observations. The results will assist in designing the overall calibration approach for the APXS on the Mars Science Laboratory mission.
Methodologies of Uncertainty Propagation Calculation
International Nuclear Information System (INIS)
Chojnacki, Eric
2002-01-01
After recalling the theoretical principles and practical difficulties of uncertainty propagation methodologies, the author discussed how to propagate input uncertainties. He distinguished two kinds of input uncertainty: variability (uncertainty due to heterogeneity) and lack of knowledge (uncertainty due to ignorance). It is therefore necessary to use two different propagation methods. He demonstrated this on a simple example, which he then generalised, treating variability by probability theory and lack of knowledge by fuzzy theory. He cautioned, however, against the systematic use of probability theory, which may lead to unjustifiably precise answers. Mr Chojnacki's conclusions were that the importance of distinguishing variability from lack of knowledge increases as the problem becomes more complex in terms of the number of parameters or time steps, and that uncertainty propagation methodologies combining probability theory and fuzzy theory need to be developed.
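A minimal sketch of the two-track propagation described above, with a toy model and hypothetical inputs: variability is sampled by Monte Carlo, while lack of knowledge is carried as an interval (a single alpha-cut of a fuzzy number):

```python
import random

def model(a, b):
    # Toy response combining a variable input a and a poorly known input b
    return a * b + b

# Variability (aleatory): propagate by Monte Carlo sampling
random.seed(0)
a_samples = [random.gauss(2.0, 0.1) for _ in range(10_000)]

# Lack of knowledge (epistemic): propagate as an interval
b_low, b_high = 0.8, 1.2

# For each sampled a, the epistemic input yields an interval of outputs;
# the result is a distribution of intervals rather than a single pdf.
lows  = [min(model(a, b_low), model(a, b_high)) for a in a_samples]
highs = [max(model(a, b_low), model(a, b_high)) for a in a_samples]

mean_low = sum(lows) / len(lows)
mean_high = sum(highs) / len(highs)
print(mean_low, mean_high)  # mean bounds of the output interval
```

Taking the min/max over the interval endpoints is only valid because the toy model is monotone in b; a general model would require optimization over the interval at each alpha-cut.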
Directory of Open Access Journals (Sweden)
Rosenda Valdés Arencibia
2011-03-01
Full Text Available This work presents a methodology for estimating the uncertainty associated with the measurement of the parameters that define the weld bead geometry, specifically the bead area, in order to meet the requirements of the NBR ISO/IEC 17025 standard. The geometric quality of the specimens used during the measurements was also evaluated through measurement of flatness and perpendicularity deviations. The following steps were proposed and carried out: identification of the parameters that define the weld bead geometry; identification and study of the variables that affect the measurement of these parameters; adoption of a mathematical model to estimate the uncertainty of each parameter; planning and execution of the experiments for data collection; uncertainty calculation; and, finally, analysis and discussion of the results. The analysis of the results showed that the uncertainties arising from the calibration of the measurement system and from the perpendicularity deviation contributed significantly to the final uncertainty. The analyses raised concern regarding the permissible values for the perpendicularity deviation of the specimens used during the measurements.
Bilcke, Joke; Chapman, Ruth; Atchison, Christina; Cromer, Deborah; Johnson, Helen; Willem, Lander; Cox, Martin; Edmunds, William John; Jit, Mark
2015-07-01
Two vaccines (Rotarix and RotaTeq) are highly effective at preventing severe rotavirus disease. Rotavirus vaccination has been introduced in the United Kingdom and other countries partly based on modeling and cost-effectiveness results. However, most of these models fail to account for the uncertainty about several vaccine characteristics and the mechanism of vaccine action. A deterministic dynamic transmission model of rotavirus vaccination in the United Kingdom was developed. This improves on previous models by 1) allowing for 2 different mechanisms of action for Rotarix and RotaTeq, 2) using clinical trial data to understand these mechanisms, and 3) accounting for uncertainty by using Markov Chain Monte Carlo. In the long run, Rotarix and RotaTeq are predicted to reduce the overall rotavirus incidence by 50% (39%-63%) and 44% (30%-62%), respectively, but with an increase in incidence in primary school children and adults up to 25 y of age. The vaccines are estimated to give more protection than 1 or 2 natural infections. The duration of protection is highly uncertain but affects the predicted reduction in rotavirus burden only for values lower than 10 y. The 2 vaccine mechanism structures fit the clinical trial data equally well. Long-term postvaccination dynamics cannot be predicted reliably with the data available. Accounting for the joint uncertainty of several vaccine characteristics gave more insight into which of these are crucial for determining the impact of rotavirus vaccination. Data for up to at least 10 y postvaccination and covering older children and adults are crucial to address remaining questions on the impact of widespread rotavirus vaccination. © The Author(s) 2015.
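The kind of Markov Chain Monte Carlo used to account for parameter uncertainty can be illustrated with a deliberately simplified example: a single efficacy parameter fitted to hypothetical trial counts by a random-walk Metropolis sampler, not the paper's transmission model:

```python
import math
import random

# Hypothetical trial outcome: 40 rotavirus cases among controls, 6 among vaccinees
cases_control, n_control = 40, 1000
cases_vacc, n_vacc = 6, 1000

def log_posterior(eff):
    """Log-posterior of vaccine efficacy `eff` with a flat prior on (0, 1)."""
    if not 0.0 < eff < 1.0:
        return float("-inf")
    p0 = cases_control / n_control          # attack rate in controls
    p1 = p0 * (1.0 - eff)                   # attack rate under vaccination
    return (cases_vacc * math.log(p1)
            + (n_vacc - cases_vacc) * math.log(1.0 - p1))

random.seed(1)
eff, chain = 0.5, []
lp = log_posterior(eff)
for _ in range(20_000):
    prop = eff + random.gauss(0.0, 0.05)    # random-walk proposal
    lp_prop = log_posterior(prop)
    if math.log(random.random()) < lp_prop - lp:   # Metropolis acceptance
        eff, lp = prop, lp_prop
    chain.append(eff)

posterior = chain[5_000:]                   # discard burn-in
mean_eff = sum(posterior) / len(posterior)
print(round(mean_eff, 2))
```

The posterior samples, rather than a single point estimate, are then what gets propagated through the transmission model so that predicted reductions carry credible intervals.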
Directory of Open Access Journals (Sweden)
Sarah Jane Hobbs
2016-06-01
Full Text Available Background. Although the trot is described as a diagonal gait, contacts of the diagonal pairs of hooves are not usually perfectly synchronized. Although subtle, the timing dissociation between contacts of each diagonal pair could have consequences on gait dynamics and provide insight into the functional strategies employed. This study explores the mechanical effects of different diagonal dissociation patterns when speed was matched between individuals and how these effects link to moderate, natural changes in trotting speed. We anticipate that hind-first diagonal dissociation at contact increases with speed, diagonal dissociation at contact can reduce collision-based energy losses, and predominant dissociation patterns will be evident within individuals. Methods. The study was performed in two parts: in the first, 17 horses performed speed-matched trotting trials and in the second, five horses each performed 10 trotting trials that represented a range of individually preferred speeds. Standard motion capture provided kinematic data that were synchronized with ground reaction force (GRF) data from a series of force plates. The data were analyzed further to determine temporal, speed, GRF, postural, mass distribution, moment, and collision dynamics parameters. Results. Fore-first, synchronous, and hind-first dissociations were found in horses trotting at 3.3 m/s ± 10%. In these speed-matched trials, mean centre of pressure (COP) cranio-caudal location differed significantly between the three dissociation categories. The COP moved systematically and significantly (P = .001) from being more caudally located in hind-first dissociation (mean location = 0.41 ± 0.04) through synchronous (0.36 ± 0.02) to a more cranial location in fore-first dissociation (0.32 ± 0.02). Dissociation patterns were found to influence function, posture, and balance parameters. Over a moderate speed range, peak vertical forelimb GRF had a strong relationship with dissociation
Chowdhury, Nupur
2013-01-01
The medical product sector is characterised by a regulatory patchwork of European and national laws and guidelines operating concurrently with each other. Each of these sectors is characterised by different levels of regulatory uncertainty that may undermine the effectiveness of the regulatory
Uncertainty Propagation in OMFIT
Smith, Sterling; Meneghini, Orso; Sung, Choongki
2017-10-01
A rigorous comparison of power balance fluxes and turbulent model fluxes requires the propagation of uncertainties in the kinetic profiles and their derivatives. Making extensive use of the python uncertainties package, the OMFIT framework has been used to propagate covariant uncertainties to provide an uncertainty in the power balance calculation from the ONETWO code, as well as through the turbulent fluxes calculated by the TGLF code. The covariant uncertainties arise from fitting 1D (constant on flux surface) density and temperature profiles and associated random errors with parameterized functions such as a modified tanh. The power balance and model fluxes can then be compared with quantification of the uncertainties. No effort is made at propagating systematic errors. A case study will be shown for the effects of resonant magnetic perturbations on the kinetic profiles and fluxes at the top of the pedestal. A separate attempt at modeling the random errors with Monte Carlo sampling will be compared to the method of propagating the fitting function parameter covariant uncertainties. Work supported by US DOE under DE-FC02-04ER54698, DE-FG2-95ER-54309, DE-SC 0012656.
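The covariant propagation that the python uncertainties package automates is, to first order, the fit covariance matrix sandwiched between Jacobians of the derived quantity. A minimal numpy sketch with hypothetical pedestal-top values (not OMFIT's actual code):

```python
import numpy as np

def propagate(f, x0, cov, eps=1e-6):
    """First-order (linear) propagation of a covariance matrix through f.

    sigma_y^2 = J C J^T, with J the Jacobian of the scalar function f at x0.
    """
    y0 = f(x0)
    jac = np.empty(len(x0))
    for i in range(len(x0)):
        dx = np.zeros(len(x0))
        dx[i] = eps
        jac[i] = (f(x0 + dx) - y0) / eps   # forward-difference derivative
    return y0, float(np.sqrt(jac @ cov @ jac))

# Hypothetical values: density n [10^19 m^-3] and temperature T [keV]
pressure = lambda p: p[0] * p[1]           # p ~ n*T (arbitrary units)
x0 = np.array([5.0, 2.0])
cov = np.array([[0.04, 0.03],              # fit covariance: correlated n, T errors
                [0.03, 0.09]])
y0, sigma = propagate(pressure, x0, cov)
print(y0, round(sigma, 3))
```

The off-diagonal covariance terms matter here: treating n and T as independent would understate the uncertainty in the product when their fit errors are positively correlated.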
Liu, Baoding
2015-01-01
When no samples are available to estimate a probability distribution, we have to invite some domain experts to evaluate the belief degree that each event will happen. Perhaps some people think that the belief degree should be modeled by subjective probability or fuzzy set theory. However, it is usually inappropriate because both of them may lead to counterintuitive results in this case. In order to rationally deal with belief degrees, uncertainty theory was founded in 2007 and subsequently studied by many researchers. Nowadays, uncertainty theory has become a branch of axiomatic mathematics for modeling belief degrees. This is an introductory textbook on uncertainty theory, uncertain programming, uncertain statistics, uncertain risk analysis, uncertain reliability analysis, uncertain set, uncertain logic, uncertain inference, uncertain process, uncertain calculus, and uncertain differential equation. This textbook also shows applications of uncertainty theory to scheduling, logistics, networks, data mining, c...
Energy Technology Data Exchange (ETDEWEB)
McFarquhar, Greg [Univ. of Illinois, Urbana, IL (United States)
2015-12-28
We proposed to analyze in-situ cloud data collected during ARM/ASR field campaigns to create databases of cloud microphysical properties and their uncertainties, as needed for the development of improved cloud parameterizations for models and remote sensing retrievals, and for evaluation of model simulations and retrievals. In particular, we proposed to analyze data collected over the Southern Great Plains (SGP) during the Mid-latitude Continental Convective Clouds Experiment (MC3E), the Storm Peak Laboratory Cloud Property Validation Experiment (STORMVEX), the Small Particles in Cirrus (SPARTICUS) Experiment and the Routine AAF Clouds with Low Optical Water Depths (CLOWD) Optical Radiative Observations (RACORO) field campaign; over the North Slope of Alaska during the Indirect and Semi-Direct Aerosol Campaign (ISDAC) and the Mixed-Phase Arctic Cloud Experiment (M-PACE); and over the Tropical Western Pacific (TWP) during the Tropical Warm Pool International Cloud Experiment (TWP-ICE), to meet the following three objectives: derive statistical databases of single ice particle properties (aspect ratio AR, dominant habit, mass, projected area) and distributions of ice crystals (size distributions SDs, mass-dimension m-D and area-dimension A-D relations, mass-weighted fall speeds, single-scattering properties, total concentrations N, ice mass contents IWC), complete with uncertainty estimates; assess the processes by which aerosols modulate cloud properties in arctic stratus and mid-latitude cumuli, and quantify the aerosol influence in the context of varying meteorological and surface conditions; and determine how ice cloud microphysical, single-scattering and fall-out properties, and the contributions of small ice crystals to such properties, vary according to location, environment, surface, meteorological and aerosol conditions, and develop parameterizations of such effects. In this report we describe the accomplishments that we made on all three research objectives.
A new uncertainty importance measure
International Nuclear Information System (INIS)
Borgonovo, E.
2007-01-01
Uncertainty in parameters is present in many risk assessment problems and leads to uncertainty in model predictions. In this work, we introduce a global sensitivity indicator which looks at the influence of input uncertainty on the entire output distribution without reference to a specific moment of the output (moment independence) and which can be defined also in the presence of correlations among the parameters. We discuss its mathematical properties and highlight the differences between the present indicator, variance-based uncertainty importance measures and a moment independent sensitivity indicator previously introduced in the literature. Numerical results are discussed with application to the probabilistic risk assessment model on which Iman [A matrix-based approach to uncertainty and sensitivity analysis for fault trees. Risk Anal 1987;7(1):22-33] first introduced uncertainty importance measures
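Borgonovo's moment-independent indicator compares the unconditional output density with the density conditional on fixing one input, delta = 0.5 E[ integral |f_Y(y) - f_{Y|X}(y)| dy ]. A rough Monte Carlo estimate using histograms (a sketch, not the paper's estimator):

```python
import numpy as np

def delta_indicator(x, y, n_bins_x=20, n_bins_y=50):
    """Histogram-based Monte Carlo estimate of the moment-independent delta."""
    edges = np.histogram_bin_edges(y, bins=n_bins_y)
    f_y, _ = np.histogram(y, bins=edges, density=True)   # unconditional density
    widths = np.diff(edges)
    # Condition on X by slicing its range into equal-probability bins
    x_bins = np.array_split(np.argsort(x), n_bins_x)
    shift = 0.0
    for idx in x_bins:
        f_cond, _ = np.histogram(y[idx], bins=edges, density=True)
        # L1 distance between conditional and unconditional densities
        shift += (len(idx) / len(x)) * np.sum(np.abs(f_y - f_cond) * widths)
    return 0.5 * shift

rng = np.random.default_rng(0)
x1 = rng.normal(size=100_000)
x2 = rng.normal(size=100_000)
y = 3.0 * x1 + 0.5 * x2          # x1 dominates the output distribution
print(delta_indicator(x1, y) > delta_indicator(x2, y))
```

Because delta looks at the whole output distribution, it needs no choice of output moment and remains meaningful when inputs are correlated, which is the property the abstract emphasizes.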
Uncertainty calculations made easier
International Nuclear Information System (INIS)
Hogenbirk, A.
1994-07-01
The results are presented of a neutron cross-section sensitivity/uncertainty analysis performed on a complicated 2D model of the NET shielding blanket design inside the ITER torus, surrounded by the cryostat/biological shield as planned for ITER. The calculations were performed with a code system developed at ECN Petten, with which sensitivity/uncertainty calculations become relatively simple. In order to check the deterministic neutron transport calculations (performed with DORT), calculations were also performed with the Monte Carlo code MCNP. Care was taken to model the 2.0 cm wide gaps between two blanket segments, as the neutron flux behind the vacuum vessel is largely determined by neutrons streaming through these gaps. The resulting neutron flux spectra are in excellent agreement up to the end of the cryostat. It is noted that at this position the attenuation of the neutron flux is about 11 orders of magnitude. The uncertainty in the energy-integrated flux at the beginning of the vacuum vessel and at the beginning of the cryostat was determined in the calculations. The uncertainty appears to be strongly dependent on the exact geometry: if the gaps are filled with stainless steel, the neutron spectrum changes strongly, which results in an uncertainty of 70% in the energy-integrated flux at the beginning of the cryostat in the no-gap geometry, compared to an uncertainty of only 5% in the gap geometry. It is therefore essential to take the exact geometry into account in sensitivity/uncertainty calculations. Furthermore, this study shows that an improvement of the covariance data is urgently needed in order to obtain reliable estimates of the uncertainties in response parameters in neutron transport calculations. (orig./GL)
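In such sensitivity/uncertainty analyses, cross-section covariance data translate into a response uncertainty through the first-order "sandwich rule", var(R)/R^2 = S^T C S. A toy three-group example with hypothetical numbers:

```python
import numpy as np

# Hypothetical 3-group example: relative sensitivities of an integrated flux
# to a cross section, and a relative covariance matrix for that cross section
S = np.array([-0.8, -1.5, -0.6])             # (dR/R)/(dsigma/sigma), by energy group
C = np.array([[0.010, 0.004, 0.000],
              [0.004, 0.020, 0.006],
              [0.000, 0.006, 0.015]])        # relative covariance of the nuclear data

rel_var = S @ C @ S                          # "sandwich rule": S^T C S
rel_unc = np.sqrt(rel_var)
print(f"relative uncertainty in response: {100 * rel_unc:.1f}%")
```

The strong geometry dependence reported in the abstract enters through the sensitivity vector S, which is recomputed for each geometry; the covariance matrix C is a property of the nuclear data alone.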
A commentary on model uncertainty
International Nuclear Information System (INIS)
Apostolakis, G.
1994-01-01
A framework is proposed for the identification of model and parameter uncertainties in risk assessment models. Two cases are distinguished; in the first case, a set of mutually exclusive and exhaustive hypotheses (models) can be formulated, while, in the second, only one reference model is available. The relevance of this formulation to decision making and the communication of uncertainties is discussed
Shishebori, Davood; Babadi, Abolghasem Yousefi
2018-03-01
This study investigates the reliable multi-configuration capacitated logistics network design problem (RMCLNDP) under system disturbances, which involves locating facilities, establishing transportation links, and allocating their limited capacities to customers so as to meet demand at the minimum expected total cost (including location costs, link construction costs, and expected costs under normal and disturbance conditions). Two types of risk are considered: (I) an uncertain environment and (II) system disturbances. A two-level mathematical model is proposed to formulate the problem. Because of the uncertain parameters of the model, an efficacious possibilistic robust optimization approach is utilized. To evaluate the model, a drug supply chain network (SCN) design is studied. Finally, an extensive sensitivity analysis is performed on the critical parameters. The results show that the proposed approach is efficient and worthwhile for analyzing real practical problems.
Impact of dose-distribution uncertainties on rectal ntcp modeling I: Uncertainty estimates
International Nuclear Information System (INIS)
Fenwick, John D.; Nahum, Alan E.
2001-01-01
A trial of nonescalated conformal versus conventional radiotherapy treatment of prostate cancer has been carried out at the Royal Marsden NHS Trust (RMH) and Institute of Cancer Research (ICR), demonstrating a significant reduction in the rate of rectal bleeding reported for patients treated using the conformal technique. The relationship between planned rectal dose-distributions and incidences of bleeding has been analyzed, showing that the rate of bleeding falls significantly as the extent of the rectal wall receiving a planned dose-level of more than 57 Gy is reduced. Dose-distributions delivered to the rectal wall over the course of radiotherapy treatment inevitably differ from planned distributions, due to sources of uncertainty such as patient setup error, rectal wall movement and variation in the absolute rectal wall surface area. In this paper estimates of the differences between planned and treated rectal dose-distribution parameters are obtained for the RMH/ICR nonescalated conformal technique, working from a distribution of setup errors observed during the RMH/ICR trial, movement data supplied by Lebesque and colleagues derived from repeat CT scans, and estimates of rectal circumference variations extracted from the literature. Setup errors and wall movement are found to cause only limited systematic differences between mean treated and planned rectal dose-distribution parameter values, but introduce considerable uncertainties into the treated values of some dose-distribution parameters: setup errors lead to 22% and 9% relative uncertainties in the highly dosed fraction of the rectal wall and the wall average dose, respectively, with wall movement leading to 21% and 9% relative uncertainties. Estimates obtained from the literature of the uncertainty in the absolute surface area of the distensible rectal wall are of the order of 13%-18%. In a subsequent paper the impact of these uncertainties on analyses of the relationship between incidences of bleeding
Duerdoth, Ian
2009-01-01
The subject of uncertainties (sometimes called errors) is traditionally taught (to first-year science undergraduates) towards the end of a course on statistics that defines probability as the limit of many trials, and discusses probability distribution functions and the Gaussian distribution. We show how to introduce students to the concepts of…
DEFF Research Database (Denmark)
Heydorn, Kaj; Anglov, Thomas
2002-01-01
Methods recommended by the International Standardization Organisation and Eurachem are not satisfactory for the correct estimation of calibration uncertainty. A novel approach is introduced and tested on actual calibration data for the determination of Pb by ICP-AES. The improved calibration...
Rosado-Souza, Laise; Scossa, Federico; Chaves, Izabel S; Kleessen, Sabrina; Salvador, Luiz F D; Milagre, Jocimar C; Finger, Fernando; Bhering, Leonardo L; Sulpice, Ronan; Araújo, Wagner L; Nikoloski, Zoran; Fernie, Alisdair R; Nunes-Nesi, Adriano
2015-09-01
Collectively, the results presented improve upon the utility of an important genetic resource and attest to a complex genetic basis for differences in both leaf metabolism and fruit morphology between natural populations. Diversity of accessions within the same species provides an alternative method to identify physiological and metabolic traits that have large effects on growth regulation, biomass and fruit production. Here, we investigated physiological and metabolic traits as well as parameters related to plant growth and fruit production of 49 phenotypically diverse pepper accessions of Capsicum chinense grown ex situ under controlled conditions. Although single-trait analysis identified up to seven distinct groups of accessions, working with the whole data set by multivariate analyses allowed the separation of the 49 accessions in three clusters. Using all 23 measured parameters and data from the geographic origin for these accessions, positive correlations between the combined phenotypes and geographic origin were observed, supporting a robust pattern of isolation-by-distance. In addition, we found that fruit set was positively correlated with photosynthesis-related parameters, which, however, do not explain alone the differences in accession susceptibility to fruit abortion. Our results demonstrated that, although the accessions belong to the same species, they exhibit considerable natural intraspecific variation with respect to physiological and metabolic parameters, presenting diverse adaptation mechanisms and being a highly interesting source of information for plant breeders. This study also represents the first study combining photosynthetic, primary metabolism and growth parameters for Capsicum to date.
DEFF Research Database (Denmark)
Nguyen, Daniel Xuyen
This paper presents a model of trade that explains why firms wait to export and why many exporters fail. Firms face uncertain demands that are only realized after the firm enters the destination. The model retools the timing of uncertainty resolution found in productivity heterogeneity models. This retooling addresses several shortcomings. First, the imperfect correlation of demands reconciles the sales variation observed in and across destinations. Second, since demands for the firm's output are correlated across destinations, a firm can use previously realized demands to forecast unknown demands in untested destinations. The option to forecast demands causes firms to delay exporting in order to gather more information about foreign demand. Third, since uncertainty is resolved after entry, many firms enter a destination and then exit after learning that they cannot profit. This prediction reconciles...
Hissink, E.M.; Bogaards, J.J.P.; Freidig, A.P.; Commandeur, J.N.M.; Vermeulen, N.P.E.; Bladeren, P.J. van
2002-01-01
A physiologically based pharmacokinetic (PBPK) model has been developed for trichloroethylene (1,1,2-trichloroethene, TRI) for rat and humans, based on in vitro metabolic parameters. These were obtained using individual cytochrome P450 and glutathione S-transferase enzymes. The main enzymes involved
Uncertainty Assessments in Fast Neutron Activation Analysis
International Nuclear Information System (INIS)
W. D. James; R. Zeisler
2000-01-01
Fast neutron activation analysis (FNAA) carried out with the use of small accelerator-based neutron generators is routinely used for major/minor element determinations in industry, mineral and petroleum exploration, and to some extent in research. While the method shares many of the operational procedures, and therefore errors, inherent to conventional thermal neutron activation analysis, its unique implementation gives rise to additional specific concerns that can result in errors or increased uncertainties in measured quantities. The authors were involved in a recent effort to evaluate irreversible incorporation of oxygen into a standard reference material (SRM) by direct measurement of oxygen by FNAA. That project required determination of oxygen in bottles of the SRM stored in varying environmental conditions and a comparison of the results. We recognized the need to accurately describe the total uncertainty of the measurements in order to characterize any differences in the resulting average concentrations. Our intent here is to discuss the breadth of parameters that can contribute to the random and nonrandom errors of the method and to provide estimates of the magnitude of the uncertainty introduced. In addition, we discuss the steps taken in this recent FNAA project to control quality, assess the uncertainty of the measurements, and evaluate results based on statistical reproducibility.
Mullins, Larry L; Wolfe-Christensen, Cortney; Pai, Ahna L Hoff; Carpentier, Melissa Y; Gillaspy, Stephen; Cheek, Jeff; Page, Melanie
2007-09-01
To examine the relationship of parent-reported overprotection (OP), perceived child vulnerability (PCV), and parenting stress (PS) to youth-reported illness uncertainty, and to explore potential developmental differences. Eighty-two children and 82 adolescents (n = 164) diagnosed with Type 1 diabetes mellitus (DM1) or asthma completed a measure of illness uncertainty, while their parents completed measures of OP, PCV, and PS. After controlling for demographic and illness parameters, both PCV and PS significantly predicted youth illness uncertainty in the combined sample. Within the child group, only PS significantly predicted illness uncertainty, whereas only PCV significantly predicted uncertainty for adolescents. Specific parenting variables are associated with youth-reported illness uncertainty; however, their relationship varies according to developmental level. Although OP has been identified as a predictor of child psychological outcomes in other studies, it does not appear to be associated with illness uncertainty in youth with DM1 or asthma.
Uncertainty, joint uncertainty, and the quantum uncertainty principle
International Nuclear Information System (INIS)
Narasimhachar, Varun; Poostindouz, Alireza; Gour, Gilad
2016-01-01
Historically, the element of uncertainty in quantum mechanics has been expressed through mathematical identities called uncertainty relations, a great many of which continue to be discovered. These relations use diverse measures to quantify uncertainty (and joint uncertainty). In this paper we use operational information-theoretic principles to identify the common essence of all such measures, thereby defining measure-independent notions of uncertainty and joint uncertainty. We find that most existing entropic uncertainty relations use measures of joint uncertainty that yield themselves to a small class of operational interpretations. Our notion relaxes this restriction, revealing previously unexplored joint uncertainty measures. To illustrate the utility of our formalism, we derive an uncertainty relation based on one such new measure. We also use our formalism to gain insight into the conditions under which measure-independent uncertainty relations can be found. (paper)
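As a concrete instance of the entropic relations this line of work builds on, the standard Maassen-Uffink bound H(X) + H(Z) >= -log2 c (not one of the paper's new measures) can be checked numerically for a qubit:

```python
import numpy as np

def shannon(p):
    """Shannon entropy in bits of a probability vector."""
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log2(p)))

# An arbitrary qubit state |psi>
psi = np.array([np.cos(0.3), np.sin(0.3) * np.exp(0.4j)])

# Two measurements: computational basis (columns of Z) and Hadamard basis (columns of X)
Z = np.eye(2)
X = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

pZ = np.abs(Z.conj().T @ psi) ** 2      # outcome probabilities |<z_i|psi>|^2
pX = np.abs(X.conj().T @ psi) ** 2

# Maassen-Uffink: H(Z) + H(X) >= -log2 c, with c = max |<z_i|x_j>|^2 = 1/2
# for these mutually unbiased bases, so the bound is 1 bit.
bound = -np.log2(0.5)
print(shannon(pZ) + shannon(pX) >= bound - 1e-9)
```

No single state can make both entropies small: sharpening the Z outcome necessarily flattens the X distribution, which is the measure-dependent flavor of joint uncertainty the paper generalizes.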
Estimation of the uncertainties considered in NPP PSA level 2
International Nuclear Information System (INIS)
Kalchev, B.; Hristova, R.
2005-01-01
The main approaches to uncertainty analysis are presented. The sources of uncertainty that should be considered in a level 2 PSA for a WWER reactor are defined: uncertainties propagated from the level 1 PSA; uncertainties in input parameters; uncertainties related to the modelling of physical phenomena during accident progression; and uncertainties related to the estimation of source terms. The methods for estimating these uncertainties are also discussed in this paper.
International Nuclear Information System (INIS)
Yu, Tang-Qing; Vanden-Eijnden, Eric; Chen, Pei-Yang; Chen, Ming; Samanta, Amit; Tuckerman, Mark
2014-01-01
The problem of predicting polymorphism in atomic and molecular crystals constitutes a significant challenge both experimentally and theoretically. From the theoretical viewpoint, polymorphism prediction falls into the general class of problems characterized by an underlying rough energy landscape, and consequently, free energy based enhanced sampling approaches can be brought to bear on the problem. In this paper, we build on a scheme previously introduced by two of the authors in which the lengths and angles of the supercell are targeted for enhanced sampling via temperature-accelerated adiabatic free energy dynamics [T. Q. Yu and M. E. Tuckerman, Phys. Rev. Lett. 107, 015701 (2011)]. Here, that framework is expanded to include general order parameters that distinguish different crystalline arrangements as target collective variables for enhanced sampling. The resulting free energy surface, being of quite high dimension, is nontrivial to reconstruct, and we discuss one particular strategy for performing the free energy analysis. The method is applied to the study of polymorphism in xenon crystals at high pressure and temperature using the Steinhardt order parameters without and with the supercell included in the set of collective variables. The expected fcc and bcc structures are obtained, and when the supercell parameters are included as collective variables, we also find several new structures, including fcc states with hcp stacking faults. We also apply the new method to the solid-liquid phase transition in copper at 1300 K using the same Steinhardt order parameters. Our method is able to melt and refreeze the system repeatedly, and the free energy profile can be obtained with high efficiency.
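The Steinhardt order parameters used above as collective variables to distinguish crystalline arrangements can be sketched in a few lines. This is a minimal illustration, not the paper's enhanced-sampling code; the Q6 value of a perfect fcc neighbour shell is a standard reference point:

```python
import numpy as np

try:  # SciPy >= 1.15
    from scipy.special import sph_harm_y
    def Ylm(m, l, theta, phi):      # theta: polar, phi: azimuth
        return sph_harm_y(l, m, theta, phi)
except ImportError:                  # older SciPy
    from scipy.special import sph_harm
    def Ylm(m, l, theta, phi):
        return sph_harm(m, l, phi, theta)

def steinhardt_q(l, bonds):
    """Steinhardt order parameter for bond vectors from an atom to
    its neighbours: Q_l = sqrt(4*pi/(2l+1) * sum_m |<Y_lm>|^2)."""
    bonds = np.asarray(bonds, dtype=float)
    r = np.linalg.norm(bonds, axis=1)
    theta = np.arccos(bonds[:, 2] / r)           # polar angle
    phi = np.arctan2(bonds[:, 1], bonds[:, 0])   # azimuthal angle
    q = sum(abs(Ylm(m, l, theta, phi).mean()) ** 2
            for m in range(-l, l + 1))
    return float(np.sqrt(4 * np.pi / (2 * l + 1) * q))

# The 12 nearest-neighbour bond directions of a perfect fcc lattice
fcc = [(a, b, 0) for a in (1, -1) for b in (1, -1)] + \
      [(a, 0, b) for a in (1, -1) for b in (1, -1)] + \
      [(0, a, b) for a in (1, -1) for b in (1, -1)]

print(steinhardt_q(6, fcc))   # ~0.575 for a perfect fcc shell
```

In an enhanced-sampling run these Q_l values, evaluated over the whole configuration, would be the collective variables biased by the dynamics.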
Directory of Open Access Journals (Sweden)
Sheridan Few
2017-01-01
There exists considerable uncertainty over both shale and conventional gas resource availability and extraction costs, as well as over the fugitive methane emissions associated with shale gas extraction and its possible role in mitigating climate change. This study uses a multi-region energy system model, TIAM (TIMES integrated assessment model), to consider the impact of a range of conventional and shale gas cost and availability assessments on mitigation scenarios aimed at achieving a limit to global warming of below 2 °C in 2100, with a 50% likelihood. When adding shale gas to the global energy mix, the reduction in the global energy system cost is relatively small (up to 0.4%), and the mitigation cost increases by 1%-3% under all cost assumptions. The impacts of a "dash for shale gas", of the unavailability of carbon capture and storage, of increased barriers to investment in low-carbon technologies, and of higher-than-expected leakage rates are also considered; each is found to have the potential to increase the cost and reduce the feasibility of meeting global temperature goals. We conclude that the extraction of shale gas is not likely to significantly reduce the effort required to mitigate climate change under globally coordinated action, but could increase the required mitigation effort if not handled sufficiently carefully.
Flood modelling : Parameterisation and inflow uncertainty
Mukolwe, M.M.; Di Baldassarre, G.; Werner, M.; Solomatine, D.P.
2014-01-01
This paper presents an analysis of uncertainty in hydraulic modelling of floods, focusing on the inaccuracy caused by inflow errors and parameter uncertainty. In particular, the study develops a method to propagate the uncertainty induced by, firstly, application of a stage–discharge rating curve
Victoria Marshall; Dil Hoda
2009-01-01
One of 18 articles inspired by the Meristem 2007 Forum, "Restorative Commons for Community Health." The articles include interviews, case studies, thought pieces, and interdisciplinary theoretical works that explore the relationship between human health and the urban...
Reliability analysis under epistemic uncertainty
International Nuclear Information System (INIS)
Nannapaneni, Saideep; Mahadevan, Sankaran
2016-01-01
This paper proposes a probabilistic framework to include both aleatory and epistemic uncertainty within model-based reliability estimation of engineering systems for individual limit states. Epistemic uncertainty is considered due to both data and model sources. Sparse point and/or interval data regarding the input random variables leads to uncertainty regarding their distribution types, distribution parameters, and correlations; this statistical uncertainty is included in the reliability analysis through a combination of likelihood-based representation, Bayesian hypothesis testing, and Bayesian model averaging techniques. Model errors, which include numerical solution errors and model form errors, are quantified through Gaussian process models and included in the reliability analysis. The probability integral transform is used to develop an auxiliary variable approach that facilitates a single-level representation of both aleatory and epistemic uncertainty. This strategy results in an efficient single-loop implementation of Monte Carlo simulation (MCS) and FORM/SORM techniques for reliability estimation under both aleatory and epistemic uncertainty. Two engineering examples are used to demonstrate the proposed methodology. - Highlights: • Epistemic uncertainty due to data and model included in reliability analysis. • A novel FORM-based approach proposed to include aleatory and epistemic uncertainty. • A single-loop Monte Carlo approach proposed to include both types of uncertainties. • Two engineering examples used for illustration.
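The single-loop idea described above, in which epistemic and aleatory uncertainty are sampled together rather than in nested loops, can be illustrated with a toy limit state. Everything here (the Normal load model, the capacity of 10, the epistemic spread on the mean) is an assumed example, not the paper's formulation:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
N = 200_000

# Limit state: failure when load S exceeds a capacity of 10.
# Aleatory: S ~ Normal(mu, 1); epistemic: mu ~ Normal(7, 0.5).
mu = rng.normal(7.0, 0.5, N)   # epistemic draw per sample
S = rng.normal(mu, 1.0)        # aleatory draw conditioned on mu
pf_single = np.mean(S > 10.0)  # predictive failure probability

# Closed-form check: marginally S ~ Normal(7, sqrt(1^2 + 0.5^2))
pf_exact = norm.sf(10.0, loc=7.0, scale=np.hypot(1.0, 0.5))
print(pf_single, pf_exact)
```

A double-loop treatment would instead produce a whole distribution of failure probabilities, one per epistemic draw; the single-loop estimate corresponds to its epistemic average.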
The uncertainty budget in pharmaceutical industry
DEFF Research Database (Denmark)
Heydorn, Kaj
of their uncertainty, exactly as described in GUM [2]. Pharmaceutical industry has therefore over the last 5 years shown increasing interest in accreditation according to ISO 17025 [3], and today uncertainty budgets are being developed for all so-called critical measurements. The uncertainty of results obtained...... that the uncertainty of a particular result is independent of the method used for its estimation. Several examples of uncertainty budgets for critical parameters based on the bottom-up procedure will be discussed, and it will be shown how the top-down method is used as a means of verifying uncertainty budgets, based...
Optimal Taxation under Income Uncertainty
Xianhua Dai
2011-01-01
Optimal taxation under income uncertainty has been extensively developed in expected utility theory, but it remains open for utility functions that are inseparable between income and effort. As an alternative model of decision-making under uncertainty, prospect theory (Kahneman and Tversky (1979), Tversky and Kahneman (1992)) has obtained empirical support, for example in Kahneman and Tversky (1979) and Camerer and Lowenstein (2003). Research is beginning to explore optimal taxation in the context of prospect...
Pervaiz, S.; Anwar, S.; Kannan, S.; Almarfadi, A.
2018-04-01
Ti6Al4V is known as a difficult-to-cut material due to inherent properties such as high hot hardness, low thermal conductivity and high chemical reactivity. Nevertheless, Ti6Al4V is widely used in industrial sectors such as aeronautics, energy generation, petrochemicals and bio-medical engineering. For the metal cutting community, competent and cost-effective machining of Ti6Al4V is a challenging task. To optimize cost and machining performance for the machining of Ti6Al4V, finite-element-based cutting simulation can be a very useful tool. The aim of this paper is to develop a finite element machining model for the simulation of the Ti6Al4V machining process. The study incorporates two material constitutive models, namely the Power Law (PL) and Johnson-Cook (JC) models, to mimic the mechanical behaviour of Ti6Al4V. The study investigates cutting temperatures, cutting forces, stresses, and plastic strains with respect to different PL and JC material models with associated parameters. In addition, the numerical study integrates different cutting tool rake angles into the machining simulations. The simulated results will be beneficial for drawing conclusions to improve the overall machining performance of Ti6Al4V.
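The Johnson-Cook constitutive model mentioned above multiplies a strain-hardening term, a strain-rate term and a thermal-softening term. A minimal sketch follows; the Ti6Al4V parameter values are illustrative literature-style assumptions, not the calibrated set used in the paper:

```python
import math

def johnson_cook_stress(strain, strain_rate, T,
                        A, B, n, C, m,
                        eps0=1.0, T_room=298.0, T_melt=1878.0):
    """Johnson-Cook flow stress [MPa]: (strain hardening) x
    (strain-rate sensitivity) x (thermal softening)."""
    T_star = (T - T_room) / (T_melt - T_room)
    return ((A + B * strain ** n)
            * (1.0 + C * math.log(strain_rate / eps0))
            * (1.0 - T_star ** m))

# Illustrative Ti6Al4V parameters (published values vary widely;
# these are assumptions, not the paper's calibrated set)
p = dict(A=862.0, B=331.0, n=0.34, C=0.012, m=0.8)

sigma = johnson_cook_stress(strain=0.1, strain_rate=1e3, T=600.0, **p)
print(f"{sigma:.0f} MPa")
```

A finite element solver evaluates exactly this kind of expression at every integration point, which is why the choice of constitutive model and parameters drives the simulated forces and temperatures.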
Calibration Under Uncertainty.
Energy Technology Data Exchange (ETDEWEB)
Swiler, Laura Painton; Trucano, Timothy Guy
2005-03-01
This report is a white paper summarizing the literature and different approaches to the problem of calibrating computer model parameters in the face of model uncertainty. Model calibration is often formulated as finding the parameters that minimize the squared difference between the model-computed data (the predicted data) and the actual experimental data. This approach does not allow for explicit treatment of uncertainty or error in the model itself: the model is considered the "true" deterministic representation of reality. While this approach does have utility, it is far from an accurate mathematical treatment of the true model calibration problem in which both the computed data and experimental data have error bars. This year, we examined methods to perform calibration accounting for the error in both the computer model and the data, as well as improving our understanding of its meaning for model predictability. We call this approach Calibration under Uncertainty (CUU). This talk presents our current thinking on CUU. We outline some current approaches in the literature, and discuss the Bayesian approach to CUU in detail.
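The least-squares formulation described above, and a first step toward CUU in which the residual weighting reflects an assumed model-form error as well as data noise, can be sketched on a toy decay model (all names and values here are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import least_squares

def model(k, t):
    """Toy 'computer model': exponential decay with one parameter."""
    return np.exp(-k * t)

t = np.linspace(0, 4, 9)
rng = np.random.default_rng(1)
y_obs = model(0.7, t) + rng.normal(0, 0.02, t.size)  # noisy 'experiment'

# Classical calibration: minimize the squared model-data mismatch,
# implicitly treating the model as the exact representation of reality
res = least_squares(lambda k: model(k[0], t) - y_obs, x0=[1.0])
k_hat = res.x[0]

# A first step toward CUU: the parameter's standard error should
# reflect data noise *and* an assumed model-form error (0.05 here)
J = res.jac
var = np.linalg.inv(J.T @ J)[0, 0]
se_data = 0.02 * np.sqrt(var)                # data noise only
se_cuu = np.hypot(0.02, 0.05) * np.sqrt(var) # noise + model error
print(k_hat, se_data, se_cuu)
```

The point estimate is unchanged here, but the acknowledged parameter uncertainty grows once model error is admitted; the Bayesian treatment the report discusses replaces these standard errors with a full posterior.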
Directory of Open Access Journals (Sweden)
Yongjiu Guo
2015-04-01
Canopy structural parameters and light radiation are important for evaluating the light use efficiency and grain yield of crops. Their spatial variation within canopies and temporal variation over growth stages can be simulated using dynamic models with strong applicability and predictive power. Based on an optimized canopy structure vertical distribution model and the Beer-Lambert law, combined with hyperspectral remote sensing (RS) technology, we established a new dynamic model for simulating leaf area index (LAI), leaf angle (LA) distribution and light radiation at different vertical heights and growth stages. The model was validated by measuring LAI, LA and light radiation in different leaf layers at different growth stages of two different types of rice (Oryza sativa L.), i.e., japonica (Wuxiangjing14) and indica (Shanyou63). The results show that the simulated values were in good agreement with the observed values, with average RRMSE (relative root mean squared error) values between simulated and observed LAI and LA of 14.75% and 21.78%, respectively. The RRMSE values for simulated photosynthetically active radiation (PAR) transmittance and interception rates were 14.25% and 9.22% for Wuxiangjing14 and 15.71% and 4.40% for Shanyou63, respectively. In addition, the corresponding RRMSE values for red (R), green (G) and blue (B) radiation transmittance and interception rates were 16.34%, 15.96% and 15.36% for Wuxiangjing14 and 5.75%, 8.23% and 5.03% for Shanyou63, respectively. The results indicate that the model performed well for different rice cultivars and under different cultivation conditions.
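The Beer-Lambert law underlying the light-radiation component can be sketched directly; the extinction coefficient k = 0.5 and the layer LAI values below are illustrative assumptions, not the calibrated rice-canopy values:

```python
import numpy as np

def par_transmittance(lai_cum, k=0.5):
    """Beer-Lambert: fraction of incident PAR transmitted below a
    cumulative leaf area index lai_cum, extinction coefficient k."""
    return np.exp(-k * np.asarray(lai_cum, dtype=float))

# Cumulative LAI from the canopy top down through four leaf layers
lai_layers = np.array([1.0, 2.2, 3.1, 3.8])
trans = par_transmittance(lai_layers, k=0.5)
intercepted = 1.0 - trans
for lai, f in zip(lai_layers, intercepted):
    print(f"cumulative LAI {lai:.1f}: {f:.1%} of PAR intercepted")
```

Layer-by-layer interception is then the difference in transmitted PAR between successive depths, which is how a vertical-distribution model apportions light to each leaf layer.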
Reducing the top quark mass uncertainty with jet grooming
Andreassen, Anders; Schwartz, Matthew D.
2017-10-01
The measurement of the top quark mass has large systematic uncertainties coming from the Monte Carlo simulations that are used to match theory and experiment. We explore how much that uncertainty can be reduced by using jet grooming procedures. Using the ATLAS A14 tunes of pythia, we estimate the uncertainty from the choice of tuning parameters in what is meant by the Monte Carlo mass to be around 530 MeV without any corrections. This uncertainty can be reduced by 60% to 200 MeV by calibrating to the W mass and by 70% to 140 MeV by additionally applying soft-drop jet grooming (or to 170 MeV using trimming). At e+e- colliders, the associated uncertainty is around 110 MeV, reducing to 50 MeV after calibrating to the W mass. By analyzing the tuning parameters, we conclude that the importance of jet grooming after calibrating to the W mass is to reduce sensitivity to the underlying event.
Model uncertainty in safety assessment
International Nuclear Information System (INIS)
Pulkkinen, U.; Huovinen, T.
1996-01-01
Uncertainty analyses are an essential part of any risk assessment. Usually the uncertainties of reliability model parameter values are described by probability distributions and the uncertainty is propagated through the whole risk model. In addition to the parameter uncertainties, the assumptions behind the risk models may be based on insufficient experimental observations, and the models themselves may not be exact descriptions of the phenomena under analysis. The description and quantification of this type of uncertainty, model uncertainty, is the topic of this report. Model uncertainty is characterized and some approaches to modelling and quantifying it are discussed. The emphasis is on so-called mixture models, which have been applied in PSAs. Some possible disadvantages of the mixture model are addressed. In addition to quantitative analyses, qualitative analysis is also discussed briefly. To illustrate the models, two simple case studies on failure intensity and human error modeling are described. In both examples, the analysis is based on simple mixture models, which are shown to be applicable in PSA analyses. (orig.) (36 refs., 6 figs., 2 tabs.)
Holistic uncertainty analysis in river basin modeling for climate vulnerability assessment
Taner, M. U.; Wi, S.; Brown, C.
2017-12-01
The challenges posed by an uncertain future climate are a prominent concern for water resources managers. A number of frameworks exist for assessing the impacts of climate-related uncertainty, including internal climate variability and anthropogenic climate change, such as scenario-based and vulnerability-based approaches. While in many cases climate uncertainty may be dominant, other factors such as the future evolution of the river basin, the hydrologic response and reservoir operations are potentially significant sources of uncertainty. While the uncertainty associated with modeling the hydrologic response has received attention, very little work has focused on the range of uncertainty and possible effects of the water resources infrastructure and its management. This work presents a holistic framework for analyzing climate, hydrologic and water management uncertainty in water resources systems analysis, with the aid of a water system model designed to integrate component models for hydrologic processes and water management activities. The uncertainties explored include those associated with climate variability and change, hydrologic model parameters, and water system operation rules. A Bayesian framework is used to quantify and model the uncertainties at each modeling step in an integrated fashion, including prior and likelihood information about model parameters. The framework is demonstrated in a case study for the St. Croix Basin, located on the border of the United States and Canada.
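The Bayesian quantification of a hydrologic parameter from prior and likelihood information can be sketched with a toy runoff-coefficient model on a grid; the linear model, data and prior below are invented for illustration:

```python
import numpy as np

# Toy linear rainfall-runoff model Q = c * P with uncertain runoff
# coefficient c; all numbers are assumed, not the St. Croix case.
P = np.array([10.0, 25.0, 5.0, 40.0, 15.0])        # rainfall
Q_obs = np.array([4.1, 10.3, 1.8, 16.5, 6.2])      # observed runoff
sigma = 0.5                                         # obs-error std

c_grid = np.linspace(0.1, 0.8, 701)
dc = c_grid[1] - c_grid[0]
prior = np.exp(-0.5 * ((c_grid - 0.5) / 0.15) ** 2)  # Normal(0.5, 0.15)

# Gaussian likelihood of all observations for each candidate c
resid = Q_obs[None, :] - c_grid[:, None] * P[None, :]
loglik = -0.5 * np.sum((resid / sigma) ** 2, axis=1)
post = prior * np.exp(loglik - loglik.max())
post /= post.sum() * dc                              # normalize density

c_map = c_grid[np.argmax(post)]
print(round(c_map, 3))
```

In the framework above this kind of update would be chained through the climate, hydrology and operations components rather than applied to a single scalar parameter.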
Stereo-particle image velocimetry uncertainty quantification
International Nuclear Information System (INIS)
Bhattacharya, Sayantan; Vlachos, Pavlos P; Charonko, John J
2017-01-01
Particle image velocimetry (PIV) measurements are subject to multiple elemental error sources and thus estimating overall measurement uncertainty is challenging. Recent advances have led to a posteriori uncertainty estimation methods for planar two-component PIV. However, no complete methodology exists for uncertainty quantification in stereo PIV. In the current work, a comprehensive framework is presented to quantify the uncertainty stemming from stereo registration error and combine it with the underlying planar velocity uncertainties. The disparity in particle locations of the dewarped images is used to estimate the positional uncertainty of the world coordinate system, which is then propagated to the uncertainty in the calibration mapping function coefficients. Next, the calibration uncertainty is combined with the planar uncertainty fields of the individual cameras through an uncertainty propagation equation, and uncertainty estimates are obtained for all three velocity components. The methodology was tested with synthetic stereo PIV data for different light sheet thicknesses, with and without registration error, and also validated with an experimental vortex ring case from the 2014 PIV challenge. A thorough sensitivity analysis was performed to assess the relative impact of the various parameters on the overall uncertainty. The results suggest that in the absence of any disparity, the stereo PIV uncertainty prediction method is more sensitive to the planar uncertainty estimates than to the angle uncertainty, although the latter is not negligible for non-zero disparity. Overall the presented uncertainty quantification framework showed excellent agreement between the error and uncertainty RMS values for both the synthetic and the experimental data and demonstrated reliable uncertainty prediction coverage. This stereo PIV uncertainty quantification framework provides the first comprehensive treatment on the subject and potentially lays foundations applicable to volumetric
Integration of software for scenario exploration
International Nuclear Information System (INIS)
Oyamada, Kiyoshi; Ikeda, Takao
1999-03-01
The scenario exploration methodology using shadow models is a variation of the environmental simulation method. A key aspect of scenario exploration is the use of shadow models, which do not correspond to any specific assumptions about physical processes but instead abstract, in a general manner, those features relevant to the effects on nuclide transport, so that the benefit of the simulation approach can be maximized. In developing the shadow models, all the modelling options that have not yet been ruled out by the experts are kept and parametrized in a very general framework. This, in turn, enables one to treat the various types of uncertainty in performance assessment, i.e., scenario uncertainty, conceptual model uncertainty, mathematical model uncertainty and parameter uncertainty, in a common framework of uncertainty/sensitivity analysis. The objective of the current study is to review and modify the tools, which had been developed separately and were therefore not fully consistent with one another, and to integrate them into a unified methodology and software. The tasks for this are: 1. modification and integration of tools for scenario exploration of nuclide transport in the EBS and the near-field host rock; 2. verification of the modified and integrated software; 3. installation of the software at JNC. (author)
Commonplaces and social uncertainty
DEFF Research Database (Denmark)
Lassen, Inger
2008-01-01
This article explores the concept of uncertainty in four focus group discussions about genetically modified food. In the discussions, members of the general public interact with food biotechnology scientists while negotiating their attitudes towards genetic engineering. Their discussions offer...... an example of risk discourse in which the use of commonplaces seems to be a central feature (Myers 2004: 81). My analyses support earlier findings that commonplaces serve important interactional purposes (Barton 1999) and that they are used for mitigating disagreement, for closing topics and for facilitating...
Uncertainty Characterization of Reactor Vessel Fracture Toughness
International Nuclear Information System (INIS)
Li, Fei; Modarres, Mohammad
2002-01-01
To perform fracture mechanics analysis of a reactor vessel, fracture toughness (K_Ic) at various temperatures is necessary. In a best estimate approach, K_Ic uncertainties resulting from both lack of sufficient knowledge and randomness in some of the variables of K_Ic must be characterized. Although it may be argued that there is only one type of uncertainty, namely lack of perfect knowledge about the subject under study, in practice K_Ic uncertainties can be divided into two types: aleatory and epistemic. Aleatory uncertainty is related to uncertainty that is very difficult, if not impossible, to reduce; epistemic uncertainty, on the other hand, can be practically reduced. The distinction between aleatory and epistemic uncertainties facilitates decision-making under uncertainty and allows for proper propagation of uncertainties in the computation process. Typically, epistemic uncertainties representing, for example, parameters of a model are sampled (to generate a 'snapshot', a single value of the parameters), while the totality of aleatory uncertainties is carried through the calculation as available. In this paper, a description of an approach to account for these two types of uncertainties associated with K_Ic is provided. (authors)
Uncertainty analysis techniques
International Nuclear Information System (INIS)
Marivoet, J.; Saltelli, A.; Cadelli, N.
1987-01-01
The origin of the uncertainty affecting performance assessments, as well as its propagation to dose and risk results, is discussed. The analysis focuses essentially on the uncertainties introduced by the input parameters, the values of which may range over several orders of magnitude and may be given as probability distribution functions. The paper briefly reviews the existing sampling techniques used for Monte Carlo simulations and the methods for characterizing the output curves and determining their convergence and confidence limits. Annual doses, expectation values of the doses and risks are computed for a particular case of a possible repository in clay, in order to illustrate the significance of such output characteristics as the mean, the logarithmic mean and the median, as well as their ratios. The report concludes that, provisionally, due to its better robustness, an estimate such as the 90th percentile may be substituted for the arithmetic mean when comparing estimated doses with acceptance criteria. In any case, the results obtained through uncertainty analyses must be interpreted with caution as long as the input data distribution functions are not derived from experiments reasonably reproducing the situation in a well-characterized repository and site
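The procedure described above, sampling inputs that range over orders of magnitude and summarizing the output distribution by its mean, median and 90th percentile, can be sketched as follows; the lognormal dose model is an invented example, not the clay-repository case:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000

# Toy dose model: dose = release * transfer / dilution, with input
# parameters spanning orders of magnitude (hence lognormal sampling)
release = rng.lognormal(mean=np.log(1e3), sigma=1.0, size=N)
transfer = rng.lognormal(mean=np.log(1e-4), sigma=0.8, size=N)
dilution = rng.lognormal(mean=np.log(10.0), sigma=0.5, size=N)
dose = release * transfer / dilution

print(f"arithmetic mean : {dose.mean():.4f}")
print(f"median          : {np.median(dose):.4f}")
print(f"90th percentile : {np.percentile(dose, 90):.4f}")
```

For this product of lognormals the mean sits well above the median, which illustrates why a percentile can be the more robust statistic for comparison against acceptance criteria.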
International Nuclear Information System (INIS)
Robouch, P.; Arana, G.; Eguskiza, M.; Etxebarria, N.
2000-01-01
The concepts of the Guide to the Expression of Uncertainty in Measurement (GUM) for chemical measurements and the recommendations of the Eurachem document 'Quantifying Uncertainty in Analytical Methods' are applied to set up the uncertainty budget for k0-NAA. The 'universally applicable spreadsheet technique' described by Kragten is applied to the basic k0-NAA equations for the computation of uncertainties. The variance components (individual standard uncertainties) highlight the contribution and the importance of the different parameters to be taken into account. (author)
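Kragten's spreadsheet technique approximates each variance component by re-evaluating the measurement equation with one input shifted by its standard uncertainty. A generic sketch follows, with an invented measurement equation standing in for the k0-NAA equations:

```python
import numpy as np

def kragten(f, x, u):
    """Kragten's spreadsheet technique: numerical propagation of the
    standard uncertainties u[i] of inputs x[i] through f.  Returns
    the combined standard uncertainty and per-input contributions."""
    x = np.asarray(x, dtype=float)
    y0 = f(x)
    contrib = np.empty(len(x))
    for i in range(len(x)):
        xi = x.copy()
        xi[i] += u[i]            # shift one input by its uncertainty
        contrib[i] = f(xi) - y0  # resulting change in the result
    return np.sqrt(np.sum(contrib ** 2)), contrib

# Invented measurement equation: c = mass / (volume * purity)
f = lambda p: p[0] / (p[1] * p[2])
x = [100.0, 50.0, 0.999]         # mg, mL, mass fraction (assumed)
u = [0.1, 0.05, 0.001]           # standard uncertainties (assumed)

uc, contrib = kragten(f, x, u)
print(uc, contrib)
```

The squared contributions, divided by uc squared, give exactly the variance-component breakdown that an uncertainty budget tabulates.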
Climate Projections and Uncertainty Communication.
Joslyn, Susan L; LeClerc, Jared E
2016-01-01
Lingering skepticism about climate change might be due in part to the way climate projections are perceived by members of the public. Variability between scientists' estimates might give the impression that scientists disagree about the fact of climate change rather than about details concerning the extent or timing. Providing uncertainty estimates might clarify that the variability is due in part to quantifiable uncertainty inherent in the prediction process, thereby increasing people's trust in climate projections. This hypothesis was tested in two experiments. Results suggest that including uncertainty estimates along with climate projections leads to an increase in participants' trust in the information. Analyses explored the roles of time, place, demographic differences (e.g., age, gender, education level, political party affiliation), and initial belief in climate change. Implications are discussed in terms of the potential benefit of adding uncertainty estimates to public climate projections. Copyright © 2015 Cognitive Science Society, Inc.
Measurement uncertainty: Friend or foe?
Infusino, Ilenia; Panteghini, Mauro
2018-02-02
The definition and enforcement of a reference measurement system, based on the implementation of metrological traceability of patients' results to higher order reference methods and materials, together with a clinically acceptable level of measurement uncertainty, are fundamental requirements to produce accurate and equivalent laboratory results. The uncertainty associated with each step of the traceability chain should be governed to obtain a final combined uncertainty on clinical samples fulfilling the requested performance specifications. It is important that end-users (i.e., clinical laboratories) can know and verify how in vitro diagnostics (IVD) manufacturers have implemented the traceability of their calibrators and estimated the corresponding uncertainty. However, full information about traceability and combined uncertainty of calibrators is currently very difficult to obtain. Laboratory professionals should investigate the need to reduce the uncertainty of the higher order metrological references and/or to increase the precision of commercial measuring systems. Accordingly, the measurement uncertainty should not be considered a parameter to be calculated by clinical laboratories just to fulfil the accreditation standards, but it must become a key quality indicator to describe both the performance of an IVD measuring system and the laboratory itself. Copyright © 2018 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
Determination of Formula for Vickers Hardness Measurements Uncertainty
International Nuclear Information System (INIS)
Purba, Asli
2007-01-01
The purpose of this work is to determine a formula for the uncertainty of Vickers hardness measurements. The approach used to determine the formula comprises: identification of the influencing parameters, creation of a cause-and-effect diagram, determination of sensitivities, determination of standard uncertainties, and determination of the formula for the Vickers hardness measurement uncertainty. The result is a formula for determining the uncertainty of Vickers hardness measurements. (author)
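A first-order version of such a formula follows from the definition HV = 1.8544 F/d² (F in kgf, mean indentation diagonal d in mm): the relative uncertainty combines the force term with twice the diagonal term, since d enters squared. The numbers below are assumed for illustration:

```python
import math

def vickers_hv(F_kgf, d_mm):
    """Vickers hardness from test force (kgf) and mean diagonal (mm)."""
    return 1.8544 * F_kgf / d_mm ** 2

def vickers_uncertainty(F_kgf, u_F, d_mm, u_d):
    """First-order propagation:
    u(HV)/HV = sqrt((u_F/F)^2 + (2 u_d/d)^2),
    the factor 2 being the sensitivity coefficient of d."""
    hv = vickers_hv(F_kgf, d_mm)
    rel = math.hypot(u_F / F_kgf, 2.0 * u_d / d_mm)
    return hv, hv * rel

# Illustrative HV30 measurement (assumed values, not the paper's data)
hv, u_hv = vickers_uncertainty(F_kgf=30.0, u_F=0.03, d_mm=0.52, u_d=0.002)
print(f"HV = {hv:.0f} +/- {u_hv:.1f}")
```

With these numbers the diagonal-reading term dominates the budget, which is the kind of ranking a cause-and-effect analysis makes explicit.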
A Bayesian framework for parameter estimation in dynamical models.
Directory of Open Access Journals (Sweden)
Flávio Codeço Coelho
Mathematical models in biology are powerful tools for the study and exploration of complex dynamics. Nevertheless, bringing theoretical results into agreement with experimental observations involves acknowledging a great deal of uncertainty intrinsic to our theoretical representation of a real system. Proper handling of such uncertainties is key to the successful use of models to predict experimental or field observations. This problem has been addressed over the years by many tools for model calibration and parameter estimation. In this article we present a general framework for uncertainty analysis and parameter estimation that is designed to handle uncertainties associated with the modeling of dynamic biological systems while remaining agnostic as to the type of model used. We apply the framework to fit an SIR-like influenza transmission model to 7 years of incidence data in three European countries: Belgium, the Netherlands and Portugal.
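A minimal version of fitting an SIR-type transmission model to incidence data might look like the sketch below, where synthetic data and a crude grid search stand in for the article's full Bayesian framework:

```python
import numpy as np
from scipy.integrate import solve_ivp

def sir(t, y, beta, gamma):
    """Classic SIR dynamics for population fractions S, I, R."""
    S, I, R = y
    return [-beta * S * I, beta * S * I - gamma * I, gamma * I]

def weekly_incidence(beta, gamma=0.5, weeks=20):
    """Fraction of the population newly infected in each week."""
    sol = solve_ivp(sir, (0, weeks), [0.999, 0.001, 0.0],
                    args=(beta, gamma), t_eval=np.arange(weeks + 1),
                    rtol=1e-8)
    S = sol.y[0]
    return S[:-1] - S[1:]          # -dS = new infections per week

# Synthetic 'observed' incidence from a known beta, plus noise
rng = np.random.default_rng(3)
obs = weekly_incidence(1.5) + rng.normal(0, 0.002, 20)

# Crude point estimation: profile the sum of squared errors over beta
betas = np.linspace(1.0, 2.0, 101)
sse = [np.sum((weekly_incidence(b) - obs) ** 2) for b in betas]
beta_hat = betas[int(np.argmin(sse))]
print(beta_hat)
```

The article's framework would replace the grid search with posterior sampling, turning the single beta_hat into a full distribution that propagates parameter uncertainty into the predicted epidemic curves.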
Treatment of uncertainty in low-level waste performance assessment
International Nuclear Information System (INIS)
Kozak, M.W.; Olague, N.E.; Gallegos, D.P.; Rao, R.R.
1991-01-01
Uncertainties arise from a number of different sources in low-level waste performance assessment. In this paper the types of uncertainty are reviewed, and existing methods for quantifying and reducing each type of uncertainty are discussed. These approaches are examined in the context of the current low-level radioactive waste regulatory performance objectives, which are deterministic. The types of uncertainty discussed in this paper are model uncertainty, uncertainty about future conditions, and parameter uncertainty. The advantages and disadvantages of available methods for addressing uncertainty in low-level waste performance assessment are presented. 25 refs
International Nuclear Information System (INIS)
Van Cromphaut, Caroline; Resende, Valdirene G. de; De Grave, Eddy; Vandenberghe, Robert E.
2009-01-01
This contribution focuses on the Moessbauer spectra acquired by the Mars Exploration Rover Spirit, which carried a MIMOS II Moessbauer spectrometer. Only those spectra presenting a reasonable statistical quality were selected for this study; twenty-five Moessbauer spectra were considered. Common phases identified from the temperature-dependent hyperfine parameters are olivine, pyroxene, hematite and magnetite. It is believed that the applied analysis method has provided accurate values for the various hyperfine data averaged over single 10 K temperature intervals in the range 210-260 K. The obtained results, to some extent forced to evolve consistently over the various ΔT intervals considered for a given soil/rock target, are in many cases different from previously published data. Possible reasons for these differences will be discussed.
Uncertainty covariances in robotics applications
International Nuclear Information System (INIS)
Smith, D.L.
1984-01-01
The application of uncertainty covariance matrices in the analysis of robot trajectory errors is explored. First, relevant statistical concepts are reviewed briefly. Then, a simple, hypothetical robot model is considered to illustrate methods for error propagation and performance test data evaluation. The importance of including error correlations is emphasized
Energy Technology Data Exchange (ETDEWEB)
Kaychouhi, A.; Mota, M.; Uribes, O.; Lora, C.
2014-07-01
This paper aims to show the development of the methodology that NPP Cofrentes has used in the preparation, evaluation, analysis and calculation of the uncertainties applied to the values of the category 2 and 3 parameters of the Improved Operating Technical Specifications. (Author)
Uncertainties in radioecological assessment models
International Nuclear Information System (INIS)
Hoffman, F.O.; Miller, C.W.; Ng, Y.C.
1983-01-01
Environmental radiological assessments rely heavily on the use of mathematical models. The predictions of these models are inherently uncertain because models are inexact representations of real systems. The major sources of this uncertainty are related to bias in model formulation and imprecision in parameter estimation. The magnitude of uncertainty is a function of the questions asked of the model and the specific radionuclides and exposure pathways of dominant importance. It is concluded that models developed as research tools should be distinguished from models developed for assessment applications. Furthermore, increased model complexity does not necessarily guarantee increased accuracy. To improve the realism of assessment modeling, stochastic procedures are recommended that translate uncertain parameter estimates into a distribution of predicted values. These procedures also permit the importance of model parameters to be ranked according to their relative contribution to the overall predicted uncertainty. Although confidence in model predictions can be improved through site-specific parameter estimation and increased model validation, health risk factors and internal dosimetry models will probably remain important contributors to the amount of uncertainty that is irreducible. 41 references, 4 figures, 4 tables
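The recommended stochastic procedure, sampling uncertain parameters, propagating them to a distribution of predictions, and ranking parameters by their contribution, can be sketched as follows. The toy multiplicative dose model, the parameter names, and the lognormal distributions are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Monte Carlo propagation of uncertain parameters through a toy assessment
# model (dose = transfer factor x intake rate x dose factor), followed by a
# ranking of parameters by correlation with the output. All distributions
# below are assumed for illustration.

rng = np.random.default_rng(42)
n = 10_000

transfer_factor = rng.lognormal(mean=np.log(0.01), sigma=0.8, size=n)
intake_rate     = rng.lognormal(mean=np.log(500.0), sigma=0.3, size=n)
dose_factor     = rng.lognormal(mean=np.log(1e-5), sigma=0.5, size=n)

dose = transfer_factor * intake_rate * dose_factor  # distribution of predictions

# Rank parameters by |correlation| with the log-output: the parameter with
# the widest distribution contributes most to the predicted uncertainty.
params = {"transfer_factor": transfer_factor,
          "intake_rate": intake_rate,
          "dose_factor": dose_factor}
ranking = sorted(params,
                 key=lambda k: abs(np.corrcoef(np.log(params[k]), np.log(dose))[0, 1]),
                 reverse=True)
```

The resulting `ranking` orders parameters by their relative contribution to the overall predicted uncertainty, which is exactly the prioritization the abstract argues such procedures permit.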
Regulating renewable resources under uncertainty
DEFF Research Database (Denmark)
Hansen, Lars Gårn
) that a pro-quota result under uncertainty about prices and marginal costs is unlikely, requiring that the resource growth function is highly concave locally around the optimum and, 3) that quotas are always preferred if uncertainty about underlying structural economic parameters dominates. These results...... showing that quotas are preferred in a number of situations qualify the pro-fee message dominating prior studies....
Chakraborty, Santanu; Sengupta, Chandana; Roy, Kunal
2005-04-01
Considering the current need for development of selective cyclooxygenase-2 (COX-2) inhibitors, an attempt has been made to explore the physico-chemical requirements of 2-(5-phenyl-pyrazol-1-yl)-5-methanesulfonylpyridines for binding with the COX-1 and COX-2 enzyme subtypes and also to explore the selectivity requirements. In this study, E-states of different common atoms of the molecules (calculated according to Kier & Hall), first order valence connectivity and physicochemical parameters (hydrophobicity pi, Hammett sigma and molar refractivity MR of different ring substituents) were used as independent variables along with suitable dummy parameters in the stepwise regression method. The best equation describing COX-1 binding affinity [n = 25, Q2 = 0.606, R(a)2 = 0.702, R2 = 0.752, R = 0.867, s = 0.447, F = 15.2 (df 4, 20)] suggests that the COX-1 binding affinity increases in the presence of a halogen substituent at the R1 position and a p-alkoxy or p-methylthio substituent at the R2 position. Furthermore, a difluoromethyl group is preferred over a trifluoromethyl group at the R position for COX-1 binding. The best equation describing COX-2 binding affinity [n = 32, Q2 = 0.622, R(a)2 = 0.692, R2 = 0.732, R = 0.856, s = 0.265, F = 18.4 (df 4, 27)] shows that the COX-2 binding affinity increases with the presence of a halogen substituent at the R1 position and with increasing size of the R2 substituents. However, it decreases in the case of the simultaneous presence of 3-chloro and 4-methoxy groups on the phenyl nucleus and in the presence of highly lipophilic R2 substituents. The best selectivity relation [n = 25, Q2 = 0.455, R(a)2 = 0.605, R2 = 0.670, R = 0.819, s = 0.423, F = 10.2 (df 4, 20)] suggests that COX-2 selectivity decreases in the presence of a p-alkoxy group and electron-withdrawing para substituents at the R2 position. Again, a trifluoromethyl group, rather than a difluoromethyl group, at the R position is conducive to selectivity. Furthermore, branching may also play a significant role in
Report on the uncertainty methods study
International Nuclear Information System (INIS)
1998-06-01
The Uncertainty Methods Study (UMS) Group, following a mandate from CSNI, has compared five methods for calculating the uncertainty in the predictions of advanced 'best estimate' thermal-hydraulic codes: the Pisa method (based on extrapolation from integral experiments) and four methods identifying and combining input uncertainties. Three of these, the GRS, IPSN and ENUSA methods, use subjective probability distributions, and one, the AEAT method, performs a bounding analysis. Each method has been used to calculate the uncertainty in specified parameters for the LSTF SB-CL-18 5% cold leg small break LOCA experiment in the ROSA-IV Large Scale Test Facility (LSTF). The uncertainty analysis was conducted essentially blind and the participants did not use experimental measurements from the test as input apart from initial and boundary conditions. Participants calculated uncertainty ranges for experimental parameters including pressurizer pressure, primary circuit inventory and clad temperature (at a specified position) as functions of time
Sketching Uncertainty into Simulations.
Ribicic, H; Waser, J; Gurbat, R; Sadransky, B; Groller, M E
2012-12-01
In a variety of application areas, the use of simulation steering in decision making is limited at best. Research focusing on this problem suggests that most user interfaces are too complex for the end user. Our goal is to let users create and investigate multiple, alternative scenarios without the need for special simulation expertise. To simplify the specification of parameters, we move from a traditional manipulation of numbers to a sketch-based input approach. Users steer both numeric parameters and parameters with a spatial correspondence by sketching a change onto the rendering. Special visualizations provide immediate visual feedback on how the sketches are transformed into boundary conditions of the simulation models. Since uncertainty with respect to many intertwined parameters plays an important role in planning, we also allow the user to intuitively set up complete value ranges, which are then automatically transformed into ensemble simulations. The interface and the underlying system were developed in collaboration with experts in the field of flood management. The real-world data they have provided has allowed us to construct scenarios used to evaluate the system. These were presented to a variety of flood response personnel, and their feedback is discussed in detail in the paper. The interface was found to be intuitive and relevant, although a certain amount of training might be necessary.
Unexpected uncertainty, volatility and decision-making
Directory of Open Access Journals (Sweden)
Amy Rachel Bland
2012-06-01
The study of uncertainty in decision making is receiving greater attention in the fields of cognitive and computational neuroscience. Several lines of evidence are beginning to elucidate different variants of uncertainty. Particularly, risk, ambiguity and expected and unexpected forms of uncertainty are well articulated in the literature. In this article we review both empirical and theoretical evidence arguing for the potential distinction between three forms of uncertainty: expected uncertainty, unexpected uncertainty and volatility. Particular attention will be devoted to exploring the distinction between unexpected uncertainty and volatility, which has been less appreciated in the literature. This includes evidence from computational modelling, neuromodulation, neuroimaging and electrophysiological studies. We further address the possible differentiation of cognitive control mechanisms used to deal with these forms of uncertainty. Particularly we explore a role for conflict monitoring and the temporal integration of information into working memory. Finally, we explore whether the Dual Modes of Control theory provides a theoretical framework for understanding the distinction between unexpected uncertainty and volatility.
The uncertainties in estimating measurement uncertainties
International Nuclear Information System (INIS)
Clark, J.P.; Shull, A.H.
1994-01-01
All measurements include some error. Whether measurements are used for accountability, environmental programs or process support, they are of little value unless accompanied by an estimate of the measurement's uncertainty. This fact is often overlooked by the individuals who need measurements to make decisions. This paper will discuss the concepts of measurement, measurement errors (accuracy or bias and precision or random error), physical and error models, measurement control programs, examples of measurement uncertainty, and uncertainty as related to measurement quality. Measurements are comparisons of unknowns to knowns, estimates of some true value plus uncertainty, and are no better than the standards to which they are compared. Direct comparisons of unknowns that match the composition of known standards will normally have small uncertainties. In the real world, measurements usually involve indirect comparisons of significantly different materials (e.g., measuring a physical property of a chemical element in a sample whose matrix is significantly different from that of the calibration standards). Consequently, there are many sources of error involved in measurement processes that can affect the quality of a measurement and its associated uncertainty. How the uncertainty estimates are determined and what they mean is as important as the measurement. The process of calculating the uncertainty of a measurement itself has uncertainties that must be handled correctly. Examples of chemistry laboratory measurements will be reviewed in this report and recommendations made for improving measurement uncertainties
Towards quantifying uncertainty in predictions of Amazon 'dieback'.
Huntingford, Chris; Fisher, Rosie A; Mercado, Lina; Booth, Ben B B; Sitch, Stephen; Harris, Phil P; Cox, Peter M; Jones, Chris D; Betts, Richard A; Malhi, Yadvinder; Harris, Glen R; Collins, Mat; Moorcroft, Paul
2008-05-27
Simulations with the Hadley Centre general circulation model (HadCM3), including carbon cycle model and forced by a 'business-as-usual' emissions scenario, predict a rapid loss of Amazonian rainforest from the middle of this century onwards. The robustness of this projection to both uncertainty in physical climate drivers and the formulation of the land surface scheme is investigated. We analyse how the modelled vegetation cover in Amazonia responds to (i) uncertainty in the parameters specified in the atmosphere component of HadCM3 and their associated influence on predicted surface climate. We then enhance the land surface description and (ii) implement a multilayer canopy light interception model and compare with the simple 'big-leaf' approach used in the original simulations. Finally, (iii) we investigate the effect of changing the method of simulating vegetation dynamics from an area-based model (TRIFFID) to a more complex size- and age-structured approximation of an individual-based model (ecosystem demography). We find that the loss of Amazonian rainforest is robust across the climate uncertainty explored by perturbed physics simulations covering a wide range of global climate sensitivity. The introduction of the refined light interception model leads to an increase in simulated gross plant carbon uptake for the present day, but, with altered respiration, the net effect is a decrease in net primary productivity. However, this does not significantly affect the carbon loss from vegetation and soil as a consequence of future simulated depletion in soil moisture; the Amazon forest is still lost. The introduction of the more sophisticated dynamic vegetation model reduces but does not halt the rate of forest dieback. The potential for human-induced climate change to trigger the loss of Amazon rainforest appears robust within the context of the uncertainties explored in this paper. Some further uncertainties should be explored, particularly with respect to the
Parameter sensitivity analysis of a 1-D cold region lake model for land-surface schemes
Guerrero, José-Luis; Pernica, Patricia; Wheater, Howard; Mackay, Murray; Spence, Chris
2017-12-01
Lakes might be sentinels of climate change, but the uncertainty in their main feedback to the atmosphere - heat-exchange fluxes - is often not considered within climate models. Additionally, these fluxes are seldom measured, hindering critical evaluation of model output. Analysis of the Canadian Small Lake Model (CSLM), a one-dimensional integral lake model, was performed to assess its ability to reproduce diurnal and seasonal variations in heat fluxes and the sensitivity of simulated fluxes to changes in model parameters, i.e., turbulent transport parameters and the light extinction coefficient (Kd). A C++ open-source software package, Problem Solving environment for Uncertainty Analysis and Design Exploration (PSUADE), was used to perform sensitivity analysis (SA) and identify the parameters that dominate model behavior. The generalized likelihood uncertainty estimation (GLUE) was applied to quantify the fluxes' uncertainty, comparing daily-averaged eddy-covariance observations to the output of CSLM. Seven qualitative and two quantitative SA methods were tested, and the posterior likelihoods of the modeled parameters, obtained from the GLUE analysis, were used to determine the dominant parameters and the uncertainty in the modeled fluxes. Despite the ubiquity of the equifinality issue - different parameter-value combinations yielding equivalent results - the answer to the question was unequivocal: Kd, a measure of how much light penetrates the lake, dominates sensible and latent heat fluxes, and the uncertainty in their estimates is strongly related to the accuracy with which Kd is determined. This is important since accurate and continuous measurements of Kd could reduce modeling uncertainty.
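The GLUE procedure used above can be sketched in a few lines: Monte Carlo sample a parameter, run the model, score each run with an informal likelihood, keep the "behavioural" runs, and form likelihood-weighted estimates. The exponential decay "model", the likelihood scaling, and the behavioural threshold below are toy assumptions standing in for the CSLM and its fluxes.

```python
import numpy as np

# Minimal GLUE (generalized likelihood uncertainty estimation) sketch.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 50)
obs = np.exp(-2.0 * t)                      # synthetic observations, true k = 2.0

def model(k):                               # toy stand-in for the lake model
    return np.exp(-k * t)

ks = rng.uniform(0.5, 4.0, size=2000)       # prior samples of one parameter
sims = np.array([model(k) for k in ks])
sse = ((sims - obs) ** 2).sum(axis=1)       # sum of squared errors per run
like = np.exp(-sse / 0.05)                  # informal likelihood (assumed scaling)

behavioural = like > 0.1                    # equifinality: many k values survive
weights = like[behavioural] / like[behavioural].sum()
posterior_mean_k = float(np.sum(weights * ks[behavioural]))
```

The set of behavioural runs illustrates equifinality directly: many different parameter values yield near-equivalent fits, and the spread of their weighted predictions is the GLUE uncertainty estimate.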
Representing and managing uncertainty in qualitative ecological models
Nuttle, T.; Bredeweg, B.; Salles, P.; Neumann, M.
2009-01-01
Ecologists and decision makers need ways to understand systems, test ideas, and make predictions and explanations about systems. However, uncertainty about causes and effects of processes and parameter values is pervasive in models of ecological systems. Uncertainty associated with incomplete
Treatment of uncertainties in the geologic disposal of radioactive waste
International Nuclear Information System (INIS)
Cranwell, R.M.
1985-01-01
Uncertainty in the analysis of geologic waste disposal is generally considered to have three primary components: (1) computer code/model uncertainty, (2) model parameter uncertainty, and (3) scenario uncertainty. Computer code/model uncertainty arises from problems associated with determination of appropriate parameters for use in model construction, mathematical formulation of models, and numerical techniques used in conjunction with the mathematical formulation of models. Model parameter uncertainty arises from problems associated with selection of appropriate values for model input, data interpretation and possible misuse of data, and variation of data. Scenario uncertainty arises from problems associated with the 'completeness' of scenarios, the definition of parameters which describe scenarios, and the rate or probability of scenario occurrence. The preceding sources of uncertainty are discussed below
Use of uncertainty data in neutron dosimetry
International Nuclear Information System (INIS)
Greenwood, L.R.
1980-01-01
Uncertainty and covariance data are required for neutron activation cross sections and nuclear decay data used to adjust neutron flux spectra measured at accelerators and reactors. Covariances must be evaluated in order to assess errors in derived damage parameters, such as nuclear displacements. The primary sources of error are discussed along with needed improvements in presently available uncertainty data
Predictive uncertainty in auditory sequence processing
DEFF Research Database (Denmark)
Hansen, Niels Chr.; Pearce, Marcus T
2014-01-01
in a melodic sequence (inferred uncertainty). Finally, we simulate listeners' perception of expectedness and uncertainty using computational models of auditory expectation. A detailed model comparison indicates which model parameters maximize fit to the data and how they compare to existing models...
Park, DaeKil
2018-06-01
The dynamics of entanglement and of the uncertainty relation are explored by solving the time-dependent Schrödinger equation analytically for a coupled harmonic oscillator system whose angular frequencies and coupling constant are arbitrarily time dependent. We derive the spectral and Schmidt decompositions for the vacuum solution. Using the decompositions, we derive analytical expressions for the von Neumann and Rényi entropies. Making use of the Wigner distribution function defined in phase space, we derive the time dependence of the position-momentum uncertainty relations. To show the dynamics of entanglement and the uncertainty relation graphically, we introduce two toy models and one realistic quenched model. While the dynamics can be conjectured by simple considerations in the toy models, the dynamics in the realistic quenched model is somewhat different from that in the toy models. In particular, the dynamics of entanglement exhibits a pattern similar to the dynamics of the uncertainty parameter in the realistic quenched model.
Rohmer, Jeremy; Verdel, Thierry
2017-04-01
.g., Baudrit et al., 2007) for geo-hazard assessments. A graphical tool is then developed to explore: 1. the contribution of both types of uncertainty, aleatoric and epistemic; 2. the regions of the imprecise or random parameters which contribute the most to the imprecision on the failure probability P. The method is applied on two case studies (a mine pillar and a steep slope stability analysis, Rohmer and Verdel, 2014) to investigate the necessity for extra data acquisition on parameters whose imprecision can hardly be modelled by probabilities due to the scarcity of the available information (respectively the extraction ratio and the cliff geometry). References Baudrit, C., Couso, I., & Dubois, D. (2007). Joint propagation of probability and possibility in risk analysis: Towards a formal framework. International Journal of Approximate Reasoning, 45(1), 82-105. Rohmer, J., & Verdel, T. (2014). Joint exploration of regional importance of possibilistic and probabilistic uncertainty in stability analysis. Computers and Geotechnics, 61, 308-315.
Uncertainty in social dilemmas
Kwaadsteniet, Erik Willem de
2007-01-01
This dissertation focuses on social dilemmas, and more specifically, on environmental uncertainty in these dilemmas. Real-life social dilemma situations are often characterized by uncertainty. For example, fishermen mostly do not know the exact size of the fish population (i.e., resource size uncertainty). Several researchers have therefore asked themselves the question as to how such uncertainty influences people’s choice behavior. These researchers have repeatedly concluded that uncertainty...
Probabilistic Mass Growth Uncertainties
Plumer, Eric; Elliott, Darren
2013-01-01
Mass has been widely used as a variable input parameter for Cost Estimating Relationships (CERs) for space systems. As these space systems progress from early concept studies and drawing boards to the launch pad, their masses tend to grow substantially, hence adversely affecting a primary input to most modeling CERs. Modeling and predicting mass uncertainty, based on historical and analogous data, is therefore critical and is an integral part of modeling cost risk. This paper presents the results of an ongoing NASA effort to publish mass growth datasheets for adjusting single-point Technical Baseline Estimates (TBE) of masses of space instruments as well as spacecraft, for both earth orbiting and deep space missions at various stages of a project's lifecycle. This paper also discusses the long-term strategy of NASA Headquarters in publishing similar results, using a variety of cost driving metrics, on an annual basis. This paper provides quantitative results that show decreasing mass growth uncertainties as mass estimate maturity increases. The analysis is based on historical data obtained from the NASA Cost Analysis Data Requirements (CADRe) database.
Sensitivity/uncertainty analysis for the Hiroshima dosimetry reevaluation effort
International Nuclear Information System (INIS)
Broadhead, B.L.; Lillie, R.A.; Pace, J.V. III; Cacuci, D.G.
1987-01-01
Uncertainty estimates and cross correlations by range/survivor location have been obtained for the free-in-air (FIA) tissue kerma for the Hiroshima atomic event. These uncertainties in the FIA kerma include contributions due to various modeling parameters and the basic cross section data and are given at three ground ranges, 700, 1000 and 1500 m. The estimated uncertainties are nearly constant over the given ground ranges and are approximately 27% for the prompt neutron kerma and secondary gamma kerma and 35% for the prompt gamma kerma. The total kerma uncertainty is dominated by the secondary gamma kerma uncertainties which are in turn largely due to the modeling parameter uncertainties
Uncertainty Quantification Bayesian Framework for Porous Media Flows
Demyanov, V.; Christie, M.; Erbas, D.
2005-12-01
Uncertainty quantification is an increasingly important aspect of many areas of applied science, where the challenge is to make reliable predictions about the performance of complex physical systems in the absence of complete or reliable data. Predicting flows of fluids through subsurface reservoirs is an example of a complex system where accuracy in prediction is needed (e.g. in the oil industry it is essential for financial reasons). Simulation of fluid flow in oil reservoirs is usually carried out using large commercially written finite difference simulators solving conservation equations describing the multi-phase flow through the porous reservoir rocks, which is a highly computationally expensive task. This work examines a Bayesian framework for uncertainty quantification in porous media flows that uses a stochastic sampling algorithm to generate models that match observed time series data. The framework is flexible for a wide range of general physical/statistical parametric models, which are used to describe the underlying hydro-geological process in its temporal dynamics. The approach is based on exploration of the parameter space and update of the prior beliefs about what the most likely model definitions are. Optimization problems for highly parametric physical models usually have multiple solutions, which affect the uncertainty of the resulting predictions. Stochastic search algorithms (e.g. genetic algorithms) make it possible to identify multiple "good enough" models in the parameter space. Furthermore, inference on the generated model ensemble via an MCMC-based algorithm evaluates the posterior probability of the generated models and quantifies the uncertainty of the predictions. Machine learning algorithms - Artificial Neural Networks - are used to speed up the identification of regions in parameter space where good matches to observed data can be found. The adaptive nature of ANNs allows different ways of integrating them into the Bayesian framework to be developed: as direct time
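The stochastic-sampling idea, exploring a parameter space whose misfit surface has several "good enough" modes, can be sketched with a random-walk Metropolis sampler. The bimodal log-posterior below is a toy stand-in for the reservoir-simulator mismatch, not the actual framework.

```python
import numpy as np

# Random-walk Metropolis on a toy bimodal posterior: two equally good
# "model definitions" at theta = -2 and theta = +2, mimicking the
# multiple-solution behaviour described above.

rng = np.random.default_rng(1)

def log_posterior(theta):
    return np.logaddexp(-0.5 * (theta - 2.0) ** 2, -0.5 * (theta + 2.0) ** 2)

theta, chain = 0.0, []
for _ in range(20_000):
    proposal = theta + rng.normal(0, 1.0)
    # accept with probability min(1, posterior ratio)
    if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(theta):
        theta = proposal
    chain.append(theta)

chain = np.array(chain[2_000:])   # discard burn-in
# The chain visits both modes, so posterior summaries reflect the
# multi-modal uncertainty rather than a single best match.
```

A single optimizer would report only one of the two modes; the sampled chain is what quantifies the prediction uncertainty contributed by the existence of multiple solutions.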
Piezoelectric energy harvesting with parametric uncertainty
International Nuclear Information System (INIS)
Ali, S F; Friswell, M I; Adhikari, S
2010-01-01
The design and analysis of energy harvesting devices has become increasingly important in recent years. Most of the literature has focused on the deterministic analysis of these systems, and the problem of uncertain parameters has received less attention. Energy harvesting devices exhibit parametric uncertainty due to errors in measurement, errors in modelling and variability in the parameters during manufacture. This paper investigates the effect of parametric uncertainty in the mechanical system on the harvested power, and derives approximate explicit formulae for the optimal electrical parameters that maximize the mean harvested power. The maximum of the mean harvested power decreases with increasing uncertainty, and the optimal frequency at which the maximum mean power occurs shifts. The effects of the parameter variance on the optimal electrical time constant and optimal coupling coefficient are reported. Monte Carlo based simulation results are used to further analyse the system under parametric uncertainty
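The qualitative finding, that mean harvested power drops as parametric uncertainty grows, can be reproduced with a small Monte Carlo study. The single-degree-of-freedom amplification formula and every number below are assumptions for illustration, not the paper's electromechanical model.

```python
import numpy as np

# Monte Carlo estimate of mean power at a fixed drive frequency when the
# harvester's natural frequency is uncertain (normally distributed).
rng = np.random.default_rng(7)
zeta, omega_drive = 0.05, 100.0   # assumed damping ratio and drive frequency

def mean_power(sigma, n=50_000):
    omega_n = rng.normal(100.0, sigma, size=n)   # uncertain natural frequency
    r = omega_drive / omega_n                    # frequency ratio
    amp2 = 1.0 / ((1 - r**2) ** 2 + (2 * zeta * r) ** 2)
    return amp2.mean()                           # power ∝ response amplitude²

p_certain = mean_power(sigma=0.1)    # tightly controlled manufacture
p_uncertain = mean_power(sigma=5.0)  # large manufacturing variability
# mean harvested power drops as the parameter uncertainty grows
```

With a lightly damped resonance, even a few percent of scatter in the natural frequency detunes many samples from the drive frequency, which is the mechanism behind the decrease in mean power reported in the abstract.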
The state of the art of the impact of sampling uncertainty on measurement uncertainty
Leite, V. J.; Oliveira, E. C.
2018-03-01
The measurement uncertainty is a parameter that characterizes the reliability of a result and can be divided into two large groups: sampling and analytical variations. Analytical uncertainty arises from a controlled process performed in the laboratory. The same does not occur with the sampling uncertainty, which has been neglected because it faces several obstacles and there is no clarity on how to perform the procedures, although it is admittedly indispensable to the measurement process. This paper aims at describing the state of the art of sampling uncertainty and at assessing its relevance to measurement uncertainty.
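If the two groups are treated as independent, they combine by the usual root-sum-of-squares rule into the overall measurement uncertainty. The numbers below are illustrative only; they show why a dominant sampling component makes further refinement of the laboratory analysis nearly pointless.

```python
import math

# Independent uncertainty components combine in quadrature.
u_sampling = 5.0      # standard uncertainty from sampling, in % (assumed)
u_analytical = 1.5    # standard uncertainty from laboratory analysis, in % (assumed)

u_measurement = math.sqrt(u_sampling**2 + u_analytical**2)
# When sampling dominates, halving u_analytical barely changes u_measurement.
```

Here `u_measurement` ≈ 5.2%, only marginally above `u_sampling` alone, which is the paper's point about the relevance of sampling uncertainty to the total.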
Urban drainage models - making uncertainty analysis simple
DEFF Research Database (Denmark)
Vezzaro, Luca; Mikkelsen, Peter Steen; Deletic, Ana
2012-01-01
in each measured/observed datapoint; an issue which is commonly overlooked in the uncertainty analysis of urban drainage models. This comparison allows the user to intuitively estimate the optimum number of simulations required to conduct uncertainty analyses. The output of the method includes parameter......There is increasing awareness about uncertainties in modelling of urban drainage systems and, as such, many new methods for uncertainty analyses have been developed. Despite this, all available methods have limitations which restrict their widespread application among practitioners. Here...
Uncertainty and Sensitivity Analyses Plan
International Nuclear Information System (INIS)
Simpson, J.C.; Ramsdell, J.V. Jr.
1993-04-01
Hanford Environmental Dose Reconstruction (HEDR) Project staff are developing mathematical models to be used to estimate the radiation dose that individuals may have received as a result of emissions since 1944 from the US Department of Energy's (DOE) Hanford Site near Richland, Washington. An uncertainty and sensitivity analyses plan is essential to understand and interpret the predictions from these mathematical models. This is especially true in the case of the HEDR models where the values of many parameters are unknown. This plan gives a thorough documentation of the uncertainty and hierarchical sensitivity analysis methods recommended for use on all HEDR mathematical models. The documentation includes both technical definitions and examples. In addition, an extensive demonstration of the uncertainty and sensitivity analysis process is provided using actual results from the Hanford Environmental Dose Reconstruction Integrated Codes (HEDRIC). This demonstration shows how the approaches used in the recommended plan can be adapted for all dose predictions in the HEDR Project
Uncertainty Communication. Issues and good practice
International Nuclear Information System (INIS)
Kloprogge, P.; Van der Sluijs, J.; Wardekker, A.
2007-12-01
In 2003 the Netherlands Environmental Assessment Agency (MNP) published the RIVM/MNP Guidance for Uncertainty Assessment and Communication. The Guidance assists in dealing with uncertainty in environmental assessments. Dealing with uncertainty is essential because assessment results regarding complex environmental issues are of limited value if the uncertainties have not been taken into account adequately. A careful analysis of uncertainties in an environmental assessment is required, but even more important is the effective communication of these uncertainties in the presentation of assessment results. The Guidance yields rich and differentiated insights into uncertainty, but the relevance of this uncertainty information may vary across audiences and uses of assessment results. Therefore, the reporting of uncertainties is one of the six key issues addressed in the Guidance. In practice, users of the Guidance felt a need for more practical assistance in the reporting of uncertainty information. This report explores the issue of uncertainty communication in more detail, and contains more detailed guidance on the communication of uncertainty. In order to make this a 'stand alone' document several questions that are mentioned in the detailed Guidance have been repeated here. This document thus has some overlap with the detailed Guidance. Part 1 gives a general introduction to the issue of communicating uncertainty information. It offers guidelines for (fine)tuning the communication to the intended audiences and context of a report, discusses how readers of a report tend to handle uncertainty information, and ends with a list of criteria that uncertainty communication needs to meet to increase its effectiveness. Part 2 helps writers to analyze the context in which communication takes place, and helps to map the audiences, and their information needs. It further helps to reflect upon anticipated uses and possible impacts of the uncertainty information on the
Recognizing and responding to uncertainty: a grounded theory of nurses' uncertainty.
Cranley, Lisa A; Doran, Diane M; Tourangeau, Ann E; Kushniruk, Andre; Nagle, Lynn
2012-08-01
There has been little research to date exploring nurses' uncertainty in their practice. Understanding nurses' uncertainty is important because it has potential implications for how care is delivered. The purpose of this study is to develop a substantive theory to explain how staff nurses experience and respond to uncertainty in their practice. Between 2006 and 2008, a grounded theory study was conducted that included in-depth semi-structured interviews. Fourteen staff nurses working in adult medical-surgical intensive care units at two teaching hospitals in Ontario, Canada, participated in the study. The theory recognizing and responding to uncertainty characterizes the processes through which nurses' uncertainty manifested and how it was managed. Recognizing uncertainty involved the processes of assessing, reflecting, questioning, and/or being unable to predict aspects of the patient situation. Nurses' responses to uncertainty highlighted the cognitive-affective strategies used to manage uncertainty. Study findings highlight the importance of acknowledging uncertainty and having collegial support to manage uncertainty. The theory adds to our understanding of the processes involved in recognizing uncertainty, strategies and outcomes of managing uncertainty, and influencing factors. Tailored nursing education programs should be developed to assist nurses in developing skills in articulating and managing their uncertainty. Further research is needed to extend, test and refine the theory of recognizing and responding to uncertainty to develop strategies for managing uncertainty. This theory advances the nursing perspective of uncertainty in clinical practice. The theory is relevant to nurses who are faced with uncertainty and complex clinical decisions, to managers who support nurses in their clinical decision-making, and to researchers who investigate ways to improve decision-making and care delivery. ©2012 Sigma Theta Tau International.
Summary of existing uncertainty methods
International Nuclear Information System (INIS)
Glaeser, Horst
2013-01-01
A summary of the existing and most widely used uncertainty methods is presented, and their main features are compared. One of these methods is the order statistics method based on Wilks' formula. It is applied in safety research as well as in licensing. This method was first proposed by GRS for use in deterministic safety analysis, and is now used by many organisations world-wide. Its advantage is that the number of potential uncertain input and output parameters is not limited to a small number. Such a limitation was necessary for the first demonstration of the Code Scaling, Applicability, and Uncertainty (CSAU) method by the United States Nuclear Regulatory Commission (USNRC). They did not apply Wilks' formula in their statistical method propagating input uncertainties to obtain the uncertainty of a single output variable, like peak cladding temperature. A Phenomena Identification and Ranking Table (PIRT) was set up in order to limit the number of uncertain input parameters and, consequently, the number of calculations to be performed. Another purpose of such a PIRT process is to identify the most important physical phenomena which a computer code should be able to calculate. The validation of the code should be focused on the identified phenomena. Response surfaces are used in some applications to replace the computer code when a high number of calculations must be performed. The second well-known uncertainty method is the Uncertainty Methodology Based on Accuracy Extrapolation (UMAE) and its follow-up, the Code with the Capability of Internal Assessment of Uncertainty (CIAU), developed by the University of Pisa. Unlike the statistical approaches, the CIAU compares experimental data with calculation results; it does not consider uncertain input parameters. The CIAU is therefore highly dependent on the experimental database. The accuracy gained from the comparison between experimental data and calculated results is extrapolated to obtain the uncertainty of the system code predictions
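Wilks' formula fixes the number of code runs needed for a one-sided tolerance limit independently of how many inputs are uncertain, which is why it scales so well. A minimal sketch of the sample-size calculation (function name and defaults are illustrative, not from the paper):

```python
from math import comb

def wilks_sample_size(coverage=0.95, confidence=0.95, order=1):
    """Smallest number of code runs N such that the `order`-th largest
    output is a one-sided upper tolerance limit covering the `coverage`
    quantile with at least `confidence` probability (Wilks' formula)."""
    n = order
    while True:
        # P(at least `order` of the n runs exceed the coverage quantile)
        attained = 1.0 - sum(
            comb(n, i) * (1.0 - coverage) ** i * coverage ** (n - i)
            for i in range(order)
        )
        if attained >= confidence:
            return n
        n += 1

print(wilks_sample_size())         # 59 runs for a 95%/95% one-sided limit
print(wilks_sample_size(order=2))  # 93 runs when the 2nd largest value is used
```

The well-known 59-run result for a 95%/95% statement is the `order=1` case; taking a higher-order statistic as the bound costs more runs but is less sensitive to outliers.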
Feyen, Luc; Caers, Jef
2006-06-01
In this work, we address the problem of characterizing the heterogeneity and uncertainty of hydraulic properties for complex geological settings. Hereby, we distinguish between two scales of heterogeneity, namely the hydrofacies structure and the intrafacies variability of the hydraulic properties. We employ multiple-point geostatistics to characterize the hydrofacies architecture. The multiple-point statistics are borrowed from a training image that is designed to reflect the prior geological conceptualization. The intrafacies variability of the hydraulic properties is represented using conventional two-point correlation methods, more precisely, spatial covariance models under a multi-Gaussian spatial law. We address the different levels and sources of uncertainty in characterizing the subsurface heterogeneity, and explore their effect on groundwater flow and transport predictions. Typically, uncertainty is assessed by way of many images, termed realizations, of a fixed statistical model. However, in many cases, sampling from a fixed stochastic model does not adequately represent the space of uncertainty. It neglects the uncertainty related to the selection of the stochastic model and the estimation of its input parameters. We acknowledge the uncertainty inherent in the definition of the prior conceptual model of aquifer architecture and in the estimation of global statistics, anisotropy, and correlation scales. Spatial bootstrap is used to assess the uncertainty of the unknown statistical parameters. As an illustrative example, we employ a synthetic field that represents a fluvial setting consisting of an interconnected network of channel sands embedded within finer-grained floodplain material. For this highly non-stationary setting we quantify the groundwater flow and transport model prediction uncertainty for various levels of hydrogeological uncertainty. Results indicate the importance of accurately describing the facies geometry, especially for transport
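The spatial bootstrap used above resamples while honouring spatial correlation; as an illustration of the underlying idea only, a plain i.i.d. percentile bootstrap for the uncertainty of an estimated statistic could look like this (data values and names are invented for the sketch):

```python
import random
import statistics

def bootstrap_ci(data, stat=statistics.mean, n_boot=2000, alpha=0.10, seed=1):
    """Percentile bootstrap interval for a statistic of the data.  This is a
    plain i.i.d. resampler; the paper's *spatial* bootstrap additionally
    honours spatial correlation when drawing resamples."""
    rng = random.Random(seed)
    reps = sorted(stat(rng.choices(data, k=len(data))) for _ in range(n_boot))
    lo = reps[int(alpha / 2 * n_boot)]
    hi = reps[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Illustrative ln(K) values for one hydrofacies (invented numbers)
ln_k = [2.1, 1.8, 2.4, 1.9, 2.7, 2.2, 1.6, 2.3, 2.0, 2.5]
lo, hi = bootstrap_ci(ln_k)  # 90% interval for the mean ln(K)
```

The width of the interval is one way to express the uncertainty of the "unknown statistical parameters" the abstract refers to, before that uncertainty is propagated into flow and transport predictions.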
Validation of Fuel Performance Uncertainty for RIA Safety Analysis
Energy Technology Data Exchange (ETDEWEB)
Park, Nam-Gyu; Yoo, Jong-Sung; Jung, Yil-Sup [KEPCO Nuclear Fuel Co., Daejeon (Korea, Republic of)
2016-10-15
To achieve this, the computer code performance has to be validated against experimental results, and for the uncertainty quantification, important uncertainty parameters need to be selected and the combined uncertainty evaluated with an acceptable statistical treatment. Uncertainty parameters important to rod performance, such as fuel enthalpy, fission gas release, and cladding hoop strain, were chosen through rigorous sensitivity studies, and their validity was assessed using results from rods tested in CABRI and NSRR. The analysis revealed that several tested rods were not bounded within the combined fuel performance uncertainty. An assessment with an extended fuel power uncertainty was therefore performed on the rods tested in NSRR and CABRI; several tested rods were still not bounded within the calculated fuel performance uncertainty. This implies that the currently considered uncertainty range of the parameters is not large enough to cover the fuel performance.
Uncertainty quantification theory, implementation, and applications
Smith, Ralph C
2014-01-01
The field of uncertainty quantification is evolving rapidly because of increasing emphasis on models that require quantified uncertainties for large-scale applications, novel algorithm development, and new computational architectures that facilitate implementation of these algorithms. Uncertainty Quantification: Theory, Implementation, and Applications provides readers with the basic concepts, theory, and algorithms necessary to quantify input and response uncertainties for simulation models arising in a broad range of disciplines. The book begins with a detailed discussion of applications where uncertainty quantification is critical for both scientific understanding and policy. It then covers concepts from probability and statistics, parameter selection techniques, frequentist and Bayesian model calibration, propagation of uncertainties, quantification of model discrepancy, surrogate model construction, and local and global sensitivity analysis. The author maintains a complementary web page where readers ca...
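Propagation of input uncertainties through a simulation model, one of the book's core topics, is commonly introduced via Monte Carlo sampling. A minimal sketch under assumed toy distributions (the model and its input distributions are invented for illustration, not taken from the book):

```python
import random
import statistics

def model(k, e):
    """Toy simulation model: a nonlinear response of two uncertain inputs."""
    return k * e ** 2 / (1.0 + e)

rng = random.Random(0)
# Assumed input uncertainties (illustrative): k ~ N(2.0, 0.2), e ~ U(0.5, 1.5)
outputs = [model(rng.gauss(2.0, 0.2), rng.uniform(0.5, 1.5))
           for _ in range(10_000)]

mean = statistics.mean(outputs)  # propagated best estimate of the response
sd = statistics.stdev(outputs)   # propagated response uncertainty
```

Surrogate models and sensitivity analysis, covered later in the book, mainly exist to make this propagation step affordable when a single model evaluation is expensive.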
Quantification of uncertainties of modeling and simulation
International Nuclear Information System (INIS)
Ma Zhibo; Yin Jianwei
2012-01-01
The principles of Modeling and Simulation (M and S) are interpreted through a functional relation, from which the total uncertainties of M and S are identified and sorted into three parts considered to vary with the parameters of the conceptual models. Following the idea of verification and validation, the parameter space is partitioned into verified and applied domains; uncertainties in the verified domain are quantified by comparison between numerical and standard results, and those in the applied domain are quantified by a newly developed extrapolation method. Examples are presented to demonstrate these ideas, which aim to build a framework for quantifying the uncertainties of M and S. (authors)
Uncertainty Quantification in Numerical Aerodynamics
Litvinenko, Alexander
2017-05-16
We consider the uncertainty quantification problem in aerodynamic simulations. We identify input uncertainties, classify them, suggest an appropriate statistical model and, finally, estimate the propagation of these uncertainties into the solution (pressure, velocity and density fields as well as the lift and drag coefficients). The deterministic problem under consideration is a compressible transonic Reynolds-averaged Navier-Stokes flow around an airfoil with random/uncertain data. Input uncertainties include: uncertain angle of attack, the Mach number, random perturbations in the airfoil geometry, mesh, shock location, turbulence model and parameters of this turbulence model. This problem requires efficient numerical/statistical methods since it is computationally expensive, especially for the uncertainties caused by random geometry variations, which involve a large number of variables. In the numerical section we compare five methods (quasi-Monte Carlo quadrature, polynomial chaos with coefficients determined by sparse quadrature, a gradient-enhanced version of Kriging, radial basis functions, and point-collocation polynomial chaos) in their efficiency in estimating statistics of aerodynamic performance upon random perturbation of the airfoil geometry [D. Liu et al. '17]. For modeling we used the TAU code, developed at DLR, Germany.
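Quasi-Monte Carlo quadrature, the first of the five methods compared, replaces pseudo-random samples with a low-discrepancy sequence so the sample mean converges faster for smooth responses. A small sketch using a Halton sequence and a toy stand-in for the aerodynamic response (the surrogate function is invented for illustration; the actual study evaluates the TAU CFD code):

```python
def halton(i, base):
    """i-th element (i >= 1) of the van der Corput sequence in `base`."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def response(x, y):
    """Invented smooth stand-in for an aerodynamic coefficient as a function
    of two normalized perturbation variables; its exact mean over the unit
    square is 0.25, so the quadrature estimate can be checked."""
    return x * y

n = 4096
# 2-D Halton points (bases 2 and 3) fill the unit square more evenly than
# pseudo-random samples, which is what makes the quadrature efficient.
estimate = sum(response(halton(i, 2), halton(i, 3)) for i in range(1, n + 1)) / n
```

In the study itself the integrand is a full CFD solve, so the cost per point, not the point count, dominates; that is what motivates the surrogate-based alternatives such as Kriging and polynomial chaos.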
Uncertainty in spatial planning proceedings
Directory of Open Access Journals (Sweden)
Aleš Mlakar
2009-01-01
Uncertainty is distinctive of spatial planning: it arises from the need to co-ordinate the various interests within an area, from the urgency of adopting spatial planning decisions, from the complexity of the environment, physical space and society, from the need to address an uncertain future, and from the uncertainty of actually making the right decision. The response to uncertainty is a series of measures that mitigate the effects of uncertainty itself. These measures are based on two fundamental principles, standardization and optimization, and are found in the enhancement of knowledge and the comprehension of spatial planning, in the legal regulation of changes, in the existence of spatial planning as a means of co-ordinating different interests, in active planning and the constructive resolution of current spatial problems, in the integration of spatial planning with the environmental protection process, in the implementation of analysis as the foundation of spatial planners' activities, in methods of thinking outside set parameters, in forming clear spatial concepts, in creating a transparent spatial management system, and in the enforcement of participatory processes.
Hydrocoin level 3 - Testing methods for sensitivity/uncertainty analysis
International Nuclear Information System (INIS)
Grundfelt, B.; Lindbom, B.; Larsson, A.; Andersson, K.
1991-01-01
The HYDROCOIN study is an international cooperative project for testing groundwater hydrology modelling strategies for performance assessment of nuclear waste disposal. The study was initiated in 1984 by the Swedish Nuclear Power Inspectorate and the technical work was finalised in 1987. The participating organisations are regulatory authorities as well as implementing organisations in 10 countries. The study has been performed at three levels aimed at studying computer code verification, model validation and sensitivity/uncertainty analysis respectively. The results from the first two levels, code verification and model validation, have been published in reports in 1988 and 1990 respectively. This paper focuses on some aspects of the results from Level 3, sensitivity/uncertainty analysis, for which a final report is planned to be published during 1990. For Level 3, seven test cases were defined. Some of these aimed at exploring the uncertainty associated with the modelling results by simply varying parameter values and conceptual assumptions. In other test cases statistical sampling methods were applied. One of the test cases dealt with particle tracking and the uncertainty introduced by this type of post processing. The amount of results available is substantial although unevenly spread over the test cases. It has not been possible to cover all aspects of the results in this paper. Instead, the different methods applied will be illustrated by some typical analyses. 4 figs., 9 refs
Development of Property Models with Uncertainty Estimate for Process Design under Uncertainty
DEFF Research Database (Denmark)
Hukkerikar, Amol; Sarup, Bent; Abildskov, Jens
more reliable predictions with a new and improved set of model parameters for GC (group contribution) based and CI (atom connectivity index) based models, and to quantify the uncertainties in the estimated property values from a process design point of view. This includes: (i) parameter estimation using ... The comparison of model prediction uncertainties with the reported range of measurement uncertainties is presented for the properties with available data. The application of the developed methodology to quantify the effect of these uncertainties on the design of different unit operations (distillation column, ...) ... the developed methodology can be used to quantify the sensitivity of process design to uncertainties in property estimates; obtain rationally the risk/safety factors in process design; and identify additional experimentation needs in order to reduce the most critical uncertainties.
Uncertainty of the calibration factor
International Nuclear Information System (INIS)
1995-01-01
According to present definitions, an error is the difference between a measured value and the "true" value. Thus an error has both a numerical value and a sign. In contrast, the uncertainty associated with a measurement is a parameter that characterizes the dispersion of the values "that could reasonably be attributed to the measurand". This parameter is normally an estimated standard deviation. An uncertainty, therefore, has no known sign and is usually assumed to be symmetrical. It is a measure of our lack of exact knowledge, after all recognized "systematic" effects have been eliminated by applying appropriate corrections. If errors were known exactly, the true value could be determined and there would be no problem left. In reality, errors are estimated in the best possible way and corrections made for them. Therefore, after application of all known corrections, errors need no further consideration (their expectation value being zero) and the only quantities of interest are uncertainties. 3 refs, 2 figs
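For repeated readings of the same quantity, the "estimated standard deviation" characterizing dispersion is typically taken as the standard deviation of the mean. A brief sketch (the readings are invented for illustration):

```python
import math
import statistics

# Repeated readings of the same quantity during a calibration (invented values)
readings = [10.02, 9.98, 10.05, 10.01, 9.97, 10.03]

best_estimate = statistics.mean(readings)
# Standard uncertainty: the estimated standard deviation of the mean.
# It carries no sign and is reported as a symmetric dispersion, e.g. 10.01 +/- u.
u = statistics.stdev(readings) / math.sqrt(len(readings))
```

Note that `u` says nothing about any remaining systematic effect; as the abstract stresses, those are assumed to have been corrected for before the uncertainty statement is made.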
Instrument uncertainty predictions
International Nuclear Information System (INIS)
Coutts, D.A.
1991-07-01
The accuracy of measurements and correlations should normally be provided for most experimental activities. The uncertainty is a measure of the accuracy of a stated value or equation. The uncertainty term reflects a combination of instrument errors, modeling limitations, and phenomena understanding deficiencies. This report provides several methodologies to estimate an instrument's uncertainty when used in experimental work. Methods are shown to predict both the pretest and post-test uncertainty
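When an instrument's uncertainty reflects several independent error sources, as described above, a common way to combine them is root-sum-square of the component standard uncertainties. A minimal sketch (the component values are illustrative, not from the report):

```python
import math

def combined_standard_uncertainty(*components):
    """Root-sum-square combination of independent standard uncertainties
    (e.g. instrument error, modeling limitation, phenomena understanding)."""
    return math.sqrt(sum(c * c for c in components))

# Illustrative component uncertainties, all in the measurand's units
u_sensor, u_daq, u_drift = 0.3, 0.4, 1.2
u_c = combined_standard_uncertainty(u_sensor, u_daq, u_drift)  # dominated by drift
```

Because of the quadratic combination, the largest component dominates, which is why uncertainty budgets focus effort on reducing the biggest single contributor.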
Some concepts of model uncertainty for performance assessments of nuclear waste repositories
International Nuclear Information System (INIS)
Eisenberg, N.A.; Sagar, B.; Wittmeyer, G.W.
1994-01-01
Models of the performance of nuclear waste repositories will be central to making regulatory decisions regarding the safety of such facilities. The conceptual model of repository performance is represented by mathematical relationships, which are usually implemented as one or more computer codes. A geologic system may allow many conceptual models that are consistent with the observations, and these conceptual models may or may not have the same mathematical representation. Experience in modeling the performance of a waste repository (which is, in part, a geologic system) shows that this non-uniqueness of conceptual models is a significant source of model uncertainty. At the same time, each conceptual model has its own set of parameters and, usually, it is not possible to completely separate model uncertainty from parameter uncertainty for the repository system. Issues related to the origin of model uncertainty, its relation to parameter uncertainty, and its incorporation in safety assessments are discussed from a broad regulatory perspective. An extended example in which these issues are explored numerically is also provided.
Model uncertainty from a regulatory point of view
International Nuclear Information System (INIS)
Abramson, L.R.
1994-01-01
This paper discusses model uncertainty in the larger context of knowledge and random uncertainty. It explores some regulatory implications of model uncertainty and argues that, from a regulator's perspective, a conservative approach must be taken. As a consequence of this perspective, averaging over model results is ruled out
Refinement of the concept of uncertainty.
Penrod, J
2001-04-01
To analyse the conceptual maturity of uncertainty; to develop an expanded theoretical definition of uncertainty; to advance the concept using methods of concept refinement; and to analyse congruency with the conceptualization of uncertainty presented in the theory of hope, enduring, and suffering. Uncertainty is of concern in nursing as people experience complex life events surrounding health. In an earlier nursing study that linked the concepts of hope, enduring, and suffering into a single theoretical scheme, a state best described as 'uncertainty' arose. This study was undertaken to explore how this conceptualization fit with the scientific literature on uncertainty and to refine the concept. Initially, a concept analysis using advanced methods described by Morse, Hupcey, Mitcham and colleagues was completed. The concept was determined to be partially mature. A theoretical definition was derived and techniques of concept refinement using the literature as data were applied. The refined concept was found to be congruent with the concept of uncertainty that had emerged in the model of hope, enduring and suffering. Further investigation is needed to explore the extent of probabilistic reasoning and the effects of confidence and control on feelings of uncertainty and certainty.
Efficient climate policies under technology and climate uncertainty
International Nuclear Information System (INIS)
Held, Hermann; Kriegler, Elmar; Lessmann, Kai; Edenhofer, Ottmar
2009-01-01
This article explores efficient climate policies in terms of investment streams into fossil and renewable energy technologies. The investment decisions maximise social welfare while observing a probabilistic guardrail for global mean temperature rise under uncertain technology and climate parameters. Such a guardrail constitutes a chance constraint, and the resulting optimisation problem is an instance of chance constrained programming, not stochastic programming as often employed. Our analysis of a model of economic growth and endogenous technological change, MIND, suggests that stringent mitigation strategies cannot guarantee a very high probability of limiting warming to 2 °C since preindustrial time under current uncertainty about climate sensitivity and climate response time scale. Achieving the 2 °C temperature target with a probability P* of 75% requires drastic carbon dioxide emission cuts. This holds true even though we have assumed an aggressive mitigation policy on other greenhouse gases from, e.g., the agricultural sector. The emission cuts are deeper than estimated from a deterministic calculation with climate sensitivity fixed at the P* quantile of its marginal probability distribution (3.6 °C). We show that earlier and cumulatively larger investments into the renewable sector are triggered by including uncertainty in the technology and climate response time scale parameters. This comes at an additional GWP loss of 0.3%, resulting in a total loss of 0.8% GWP for observing the chance constraint. We obtained those results with a new numerical scheme to implement constrained welfare optimisation under uncertainty as a chance constrained programming problem in standard optimisation software such as GAMS. The scheme is able to incorporate multivariate non-factorial probability measures such as given by the joint distribution of climate sensitivity and response time. We demonstrate the scheme for the case of a four-dimensional parameter space capturing
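A chance constraint of the kind described, requiring that a temperature guardrail hold with probability P*, can be checked by sampling the uncertain parameter. A deliberately simplified sketch (the one-parameter "climate model", its lognormal sensitivity distribution, and the mitigation grid are all invented; MIND couples such a constraint to a full economic growth model):

```python
import random

rng = random.Random(42)
# Invented toy climate: warming = S * (1 - 0.8 * m), where S is the uncertain
# climate sensitivity (°C) and m in [0, 1] is the mitigation level.
S = [rng.lognormvariate(1.1, 0.4) for _ in range(20_000)]  # median ≈ 3.0 °C

def prob_below_guardrail(m, guardrail=2.0):
    """Monte Carlo estimate of P(warming <= guardrail) at mitigation level m."""
    return sum(s * (1.0 - 0.8 * m) <= guardrail for s in S) / len(S)

# Chance constraint: smallest mitigation level with P(T <= 2 °C) >= 75%
m_star = next(m / 100 for m in range(101)
              if prob_below_guardrail(m / 100) >= 0.75)
```

Here the constraint is reduced to a grid search for transparency; in the paper it sits inside a welfare optimisation solved in GAMS, with a joint (non-factorial) distribution over climate sensitivity and response time rather than a single scalar.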
Verburg, P.H.; Tabeau, A.A.; Hatna, E.
2013-01-01
Land change model outcomes are vulnerable to multiple types of uncertainty, including uncertainty in input data, structural uncertainties in the model and uncertainties in model parameters. In coupled model systems the uncertainties propagate between the models. This paper assesses uncertainty of
Uncertainty Assessment in Urban Storm Water Drainage Modelling
DEFF Research Database (Denmark)
Thorndahl, Søren
The object of this paper is to make an overall description of the author's PhD study, concerning uncertainties in numerical urban storm water drainage models. Initially an uncertainty localization and assessment of model inputs and parameters as well as uncertainties caused by different model...
International Nuclear Information System (INIS)
Andres, T.H.
2002-05-01
This guide applies to the estimation of uncertainty in quantities calculated by scientific, analysis and design computer programs that fall within the scope of AECL's software quality assurance (SQA) manual. The guide weaves together rational approaches from the SQA manual and three other diverse sources: (a) the CSAU (Code Scaling, Applicability, and Uncertainty) evaluation methodology; (b) the ISO Guide for the Expression of Uncertainty in Measurement; and (c) the SVA (Systems Variability Analysis) method of risk analysis. This report describes the manner by which random and systematic uncertainties in calculated quantities can be estimated and expressed. Random uncertainty in model output can be attributed to uncertainties of inputs. The propagation of these uncertainties through a computer model can be represented in a variety of ways, including exact calculations, series approximations and Monte Carlo methods. Systematic uncertainties emerge from the development of the computer model itself, through simplifications and conservatisms, for example. These must be estimated and combined with random uncertainties to determine the combined uncertainty in a model output. This report also addresses the method by which uncertainties should be employed in code validation, in order to determine whether experiments and simulations agree, and whether or not a code satisfies the required tolerance for its application. (author)