WorldWideScience

Sample records for large uncertainties due

  1. Response of ENSO amplitude to global warming in CESM large ensemble: uncertainty due to internal variability

    Science.gov (United States)

    Zheng, Xiao-Tong; Hui, Chang; Yeh, Sang-Wook

    2018-06-01

    El Niño-Southern Oscillation (ENSO) is the dominant mode of variability in the coupled ocean-atmosphere system. Future projections of ENSO change under global warming are highly uncertain among models. In this study, the effect of internal variability on ENSO amplitude change in future climate projections is investigated based on a 40-member ensemble from the Community Earth System Model Large Ensemble (CESM-LE) project. A large uncertainty due to internal variability is identified among ensemble members. The inter-member diversity is associated with a zonal dipole pattern of sea surface temperature (SST) change in the mean state along the equator, which resembles the second empirical orthogonal function (EOF) mode of tropical Pacific decadal variability (TPDV) in the unforced control simulation. The uncertainty in CESM-LE is comparable in magnitude to that among models of the Coupled Model Intercomparison Project phase 5 (CMIP5), suggesting that internal variability contributes to the intermodel uncertainty in ENSO amplitude change. However, the causal relationships between changes in ENSO amplitude and the mean state differ between the CESM-LE and the CMIP5 ensemble. The CESM-LE results indicate that an ensemble of 15 members is needed to separate the forced response from internal variability in ENSO amplitude change over the twenty-first century.
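
    A minimal sketch of the decomposition this record describes: the forced response is estimated as the ensemble mean, internal variability as the inter-member spread, and standard-error scaling indicates how many members are needed to separate the two. The numbers are illustrative assumptions, not CESM-LE output.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for CESM-LE: ENSO amplitude change (K) over the
# 21st century for 40 members = common forced signal + internal noise.
n_members, forced_true, sigma_internal = 40, 0.15, 0.25
delta_amp = forced_true + sigma_internal * rng.standard_normal(n_members)

forced_est = delta_amp.mean()         # forced response ~ ensemble mean
internal_est = delta_amp.std(ddof=1)  # internal variability ~ member spread

# Smallest ensemble that resolves the forced signal at ~2 standard errors,
# i.e. N such that |forced| > 2 * sigma_internal / sqrt(N).
n_needed = int(np.ceil((2 * internal_est / abs(forced_est)) ** 2))
print(f"forced={forced_est:.2f} K, internal={internal_est:.2f} K, N~{n_needed}")
```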

  2. Quantifying uncertainty in NDSHA estimates due to earthquake catalogue

    Science.gov (United States)

    Magrin, Andrea; Peresan, Antonella; Vaccari, Franco; Panza, Giuliano

    2014-05-01

    The procedure for the neo-deterministic seismic zoning, NDSHA, is based on the calculation of synthetic seismograms by the modal summation technique. This approach makes use of information about the space distribution of large-magnitude earthquakes, which can be defined based on seismic history and seismotectonics, and incorporates information from a wide set of geological and geophysical data (e.g., morphostructural features and ongoing deformation processes identified by earth observations). Hence the method does not make use of attenuation models (GMPEs), which may be unable to account for the complexity of the product between the seismic source tensor and the medium Green's function and are often poorly constrained by the available observations. NDSHA defines the hazard from the envelope of the values of ground motion parameters determined by considering a wide set of scenario earthquakes; accordingly, the simplest outcome of this method is a map where the maximum of a given seismic parameter is associated with each site. In NDSHA, uncertainties are not treated statistically as in PSHA, where aleatory uncertainty is traditionally handled with probability density functions (e.g., for magnitude and distance random variables) and epistemic uncertainty is considered by applying logic trees that allow the use of alternative models and alternative parameter values for each model; instead, the treatment of uncertainties is performed by sensitivity analyses for key modelling parameters. Fixing the uncertainty related to a particular input parameter is an important component of the procedure. The input parameters must account for the uncertainty in the prediction of fault radiation and in the use of Green's functions for a given medium. A key parameter is the magnitude of the sources used in the simulation, which is based on catalogue information, seismogenic zones and seismogenic nodes. Because the largest part of the existing catalogues is based on macroseismic intensity, a rough estimate

  3. Sensitivity of Process Design due to Uncertainties in Property Estimates

    DEFF Research Database (Denmark)

    Hukkerikar, Amol; Jones, Mark Nicholas; Sarup, Bent

    2012-01-01

    The objective of this paper is to present a systematic methodology for performing analysis of the sensitivity of process design due to uncertainties in property estimates. The methodology provides the following results: a) a list of properties of critical importance for design; b) acceptable levels of...... in chemical processes. Among others, vapour pressure accuracy for azeotropic mixtures is critical and needs to be measured or estimated with ±0.25% accuracy to satisfy acceptable safety levels in design....

  4. Large-uncertainty intelligent states for angular momentum and angle

    International Nuclear Information System (INIS)

    Goette, Joerg B; Zambrini, Roberta; Franke-Arnold, Sonja; Barnett, Stephen M

    2005-01-01

    The equality in the uncertainty principle for linear momentum and position is obtained for states which also minimize the uncertainty product. However, in the uncertainty relation for angular momentum and angular position both sides of the inequality are state dependent, and therefore the intelligent states, which satisfy the equality, do not necessarily give a minimum for the uncertainty product. In this paper, we highlight the difference between intelligent states and minimum uncertainty states by investigating a class of intelligent states which obey the equality in the angular uncertainty relation while having an arbitrarily large uncertainty product. To develop an understanding of the uncertainties of angle and angular momentum for the large-uncertainty intelligent states, we compare exact solutions with analytical approximations in two limiting cases.
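
    For reference, the angular uncertainty relation at issue has the following state-dependent form in the optical angular-momentum literature (e.g., Barnett and Pegg); the 2π window [-π, π) and ħ = 1 are conventions assumed here rather than quoted from the record.

```latex
% Angular uncertainty relation; P(\pi) is the angular probability density
% at the boundary of the window [-\pi,\pi). Intelligent states satisfy the
% equality without necessarily minimizing the (state-dependent) product.
\Delta L_z \,\Delta\phi \;\ge\; \tfrac{1}{2}\,\bigl|\,1 - 2\pi P(\pi)\,\bigr|
```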

  5. Uncertainty of Doppler reactivity worth due to uncertainties of JENDL-3.2 resonance parameters

    Energy Technology Data Exchange (ETDEWEB)

    Zukeran, Atsushi [Hitachi Ltd., Hitachi, Ibaraki (Japan). Power and Industrial System R and D Div.; Hanaki, Hiroshi; Nakagawa, Tuneo; Shibata, Keiichi; Ishikawa, Makoto

    1998-03-01

    An analytical formula for the resonance self-shielding factor (f-factor) is derived from the resonance integral (J-function) based on the narrow-resonance (NR) approximation, and an analytical expression for the Doppler reactivity worth (ρ) is obtained using this result. Uncertainties of the f-factor and the Doppler reactivity worth are evaluated on the basis of sensitivity coefficients with respect to the resonance parameters. The uncertainty of the Doppler reactivity worth at 487 K is about 4% for the PNC Large Fast Breeder Reactor. (author)

  6. Uncertainty in techno-economic estimates of cellulosic ethanol production due to experimental measurement uncertainty

    Directory of Open Access Journals (Sweden)

    Vicari Kristin J

    2012-04-01

    Background: Cost-effective production of lignocellulosic biofuels remains a major financial and technical challenge at the industrial scale. A critical tool in biofuels process development is the techno-economic (TE) model, which calculates biofuel production costs using a process model and an economic model. The process model solves mass and energy balances for each unit, and the economic model estimates capital and operating costs from the process model based on economic assumptions. The process model inputs include experimental data on the feedstock composition and intermediate product yields for each unit. These experimental yield data are calculated from primary measurements. Uncertainty in these primary measurements is propagated to the calculated yields, to the process model, and ultimately to the economic model. Thus, outputs of the TE model have a minimum uncertainty associated with the uncertainty in the primary measurements. Results: We calculate the uncertainty in the Minimum Ethanol Selling Price (MESP) estimate for lignocellulosic ethanol production via a biochemical conversion process: dilute sulfuric acid pretreatment of corn stover followed by enzymatic hydrolysis and co-fermentation of the resulting sugars to ethanol. We perform a sensitivity analysis on the TE model and identify the feedstock composition and conversion yields from three unit operations (xylose from pretreatment, glucose from enzymatic hydrolysis, and ethanol from fermentation) as the most important variables. The uncertainty in the pretreatment xylose yield arises from multiple measurements, whereas the glucose and ethanol yields from enzymatic hydrolysis and fermentation, respectively, are dominated by a single measurement: the fraction of insoluble solids (fIS) in the biomass slurries. Conclusions: We calculate a $0.15/gal uncertainty in MESP from the TE model due to uncertainties in primary measurements. This result sets a lower bound on the error bars of
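
    A minimal sketch of the propagation the record describes: primary-measurement uncertainties are sampled and pushed through a TE model to produce an MESP distribution. The model here is a drastically simplified, hypothetical surrogate; the yields, standard deviations, and base cost are invented, not the paper's values.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Sample conversion yields from normal distributions whose standard
# deviations stand in for primary measurement uncertainty.
xylose_yield  = rng.normal(0.75, 0.020, n)  # pretreatment
glucose_yield = rng.normal(0.85, 0.020, n)  # enzymatic hydrolysis
ethanol_yield = rng.normal(0.90, 0.015, n)  # co-fermentation

# Toy economic model: MESP scales inversely with overall yield.
base_mesp = 2.15  # $/gal at nominal yields (illustrative number)
nominal = 0.75 * 0.85 * 0.90
mesp = base_mesp * nominal / (xylose_yield * glucose_yield * ethanol_yield)

print(f"MESP = {mesp.mean():.2f} +/- {mesp.std():.2f} $/gal")
```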

  7. Large break LOCA uncertainty evaluation and comparison with conservative calculation

    International Nuclear Information System (INIS)

    Glaeser, H.G.

    2004-01-01

    The first formulation of the USA Code of Federal Regulations (CFR) 10CFR50, with applicable sections specific to NPP licensing requirements, was released in 1976. Over a decade later, 10CFR50.46 allowed the use of BE codes instead of conservative code models, but uncertainties have to be identified and quantified. Guidelines were released that described interpretations developed over the intervening years. Other countries established similar conservative procedures and acceptance criteria. Because conservative methods were used to calculate the peak values of key parameters, such as peak clad temperature (PCT), it was always acknowledged that a large margin existed between the 'conservative' calculated value and the 'true' value. Besides the USA, regulations in other countries, Germany for example, allowed the state of science and technology to be applied in licensing; that is, growing experimental evidence and progress in code development over time could be used. There was no requirement to apply a pure evaluation methodology with licensed assumptions and frozen codes. The thermal-hydraulic system codes became more and more best-estimate codes based on comprehensive validation. This development was and is possible because the rules and guidelines provide the necessary latitude to consider further development of safety technology. Best-estimate codes are allowed to be used in licensing in combination with conservative initial and boundary conditions. However, uncertainty quantification is not required. Since some of the initial and boundary conditions are more conservative compared with those used internationally (e.g. 106% reactor power instead of 102%, a single failure plus a non-availability due to preventive maintenance is assumed, etc.), it is claimed that the uncertainties of code models are covered. Since many utilities apply for power increases, calculation results come closer to some licensing criteria. The situation in German licensing

  8. Hydrological model uncertainty due to spatial evapotranspiration estimation methods

    Science.gov (United States)

    Yu, Xuan; Lamačová, Anna; Duffy, Christopher; Krám, Pavel; Hruška, Jakub

    2016-05-01

    Evapotranspiration (ET) continues to be a difficult process to estimate in seasonal and long-term water balances in catchment models. Approaches to estimate ET typically use vegetation parameters (e.g., leaf area index [LAI], interception capacity) obtained from field observation, remote sensing data, national or global land cover products, and/or simulated by ecosystem models. In this study we attempt to quantify the uncertainty that spatial evapotranspiration estimation introduces into hydrological simulations when the age of the forest is not precisely known. The Penn State Integrated Hydrologic Model (PIHM) was implemented for the Lysina headwater catchment, located at 50°03‧N, 12°40‧E in the western part of the Czech Republic. The spatial forest patterns were digitized from forest age maps made available by the Czech Forest Administration. Two ET methods were implemented in the catchment model: the Biome-BGC forest growth sub-model (1-way coupled to PIHM) and the fixed-seasonal LAI method. From these two approaches, simulation scenarios were developed: we combined the estimated spatial forest age maps and the two ET estimation methods to drive PIHM. A set of spatial hydrologic regime and streamflow regime indices were calculated from the modeling results for each method. Intercomparison of the hydrological responses to the spatial vegetation patterns suggested considerable variation in soil moisture and recharge and a small uncertainty in the groundwater table elevation and streamflow. The hydrologic modeling with ET estimated by Biome-BGC generated less uncertainty, owing to its plant-physiology basis. The implication of this research is that the overall hydrologic variability induced by uncertain management practices was reduced by implementing vegetation models in the catchment models.

  9. Uncertainty in soil carbon accounting due to unrecognized soil erosion.

    Science.gov (United States)

    Sanderman, Jonathan; Chappell, Adrian

    2013-01-01

    The movement of soil organic carbon (SOC) during erosion and deposition events represents a major perturbation to the terrestrial carbon cycle. Despite the recognized impact soil redistribution can have on the carbon cycle, few major carbon accounting models currently allow for soil mass flux. Here, we modified a commonly used SOC model to include a soil redistribution term and then applied it to scenarios which explore the implications of unrecognized erosion and deposition for SOC accounting. We show that models that assume a static landscape may be calibrated incorrectly, as erosion of SOC is hidden within the decay constants. This implicit inclusion of erosion then limits the predictive capacity of these models when applied to sites with different soil redistribution histories. Decay constants were found to be 15-50% slower when an erosion rate of 15 t soil ha⁻¹ yr⁻¹ was explicitly included in the SOC model calibration. Static models cannot account for SOC change resulting from agricultural management practices focused on reducing erosion rates. Without accounting for soil redistribution, a soil sampling scheme which uses a fixed depth to support model development can create large errors in actual and relative changes in SOC stocks. When modest levels of erosion were ignored, the combined uncertainty in carbon sequestration rates was 0.3-1.0 t CO₂ ha⁻¹ yr⁻¹. This range is similar to expected sequestration rates for many management options aimed at increasing SOC levels. It is evident from these analyses that explicit recognition of soil redistribution is critical to the success of a carbon monitoring or trading scheme which seeks to credit agricultural activities.
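
    A sketch of the calibration artefact described above, assuming a first-order SOC balance with a constant erosion export (all rates invented for illustration): fitting an erosion-free model to data produced with erosion folded in returns a faster apparent decay constant.

```python
import numpy as np

def soc_trajectory(c0, inputs, k, erosion=0.0, years=50):
    """First-order SOC balance, dC/dt = I - k*C - E, integrated yearly
    (t C/ha and t C/ha/yr); `erosion` is the SOC export by redistribution."""
    c = np.empty(years + 1)
    c[0] = c0
    for t in range(years):
        c[t + 1] = c[t] + inputs - k * c[t] - erosion
    return c

# "Observed" stocks generated with erosion included...
true = soc_trajectory(c0=60.0, inputs=3.0, k=0.040, erosion=0.5)

# ...then calibrated with a static (erosion-free) model: the loss is
# absorbed into the decay constant, which comes out faster than 0.040.
ks = np.linspace(0.02, 0.08, 601)
mse = [np.mean((soc_trajectory(60.0, 3.0, k) - true) ** 2) for k in ks]
print(f"true k = 0.040, apparent k without erosion = {ks[np.argmin(mse)]:.3f}")
```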

  10. Hydrological model uncertainty due to spatial evapotranspiration estimation methods

    Czech Academy of Sciences Publication Activity Database

    Yu, X.; Lamačová, Anna; Duffy, Ch.; Krám, P.; Hruška, Jakub

    2016-01-01

    Vol. 90, part B (2016), pp. 90-101. ISSN 0098-3004. R&D Projects: GA MŠk(CZ) LO1415. Institutional support: RVO:67179843. Keywords: Uncertainty; Evapotranspiration; Forest management; PIHM; Biome-BGC. Subject RIV: DA - Hydrology; Limnology. OECD field: Hydrology. Impact factor: 2.533, year: 2016

  11. Alignment measurements uncertainties for large assemblies using probabilistic analysis techniques

    CERN Document Server

    AUTHOR|(CDS)2090816; Almond, Heather

    Big science and ambitious industrial projects continually push forward with technical requirements beyond the grasp of conventional engineering techniques. Examples are ultra-high precision requirements in the fields of celestial telescopes, particle accelerators and the aerospace industry. Such extreme requirements are limited largely by the capability of the metrology used, namely its uncertainty in relation to the alignment tolerance required. The current work was initiated as part of a Marie Curie European research project held at CERN, Geneva, aiming to answer those challenges as related to future accelerators requiring alignment of 2 m large assemblies to tolerances in the 10 µm range. The thesis has found several gaps in current knowledge limiting such capability. Among those was the lack of application of state-of-the-art uncertainty propagation methods in alignment measurements metrology. Another major limiting factor found was the lack of uncertainty statements in the thermal errors compensatio...

  12. Quantifying uncertainty due to internal variability using high-resolution regional climate model simulations

    Science.gov (United States)

    Gutmann, E. D.; Ikeda, K.; Deser, C.; Rasmussen, R.; Clark, M. P.; Arnold, J. R.

    2015-12-01

    The uncertainty in future climate predictions is as large as or larger than the mean climate change signal. As such, any predictions of future climate need to incorporate and quantify the sources of this uncertainty. One of the largest sources comes from the internal, chaotic variability within the climate system itself. This variability has been approximated using the 30 ensemble members of the Community Earth System Model (CESM) large ensemble. Here we examine the wet and dry end members of this ensemble for cool-season precipitation in the Colorado Rocky Mountains with a set of high-resolution regional climate model simulations. We have used the Weather Research and Forecasting model (WRF) to simulate the periods 1990-2000, 2025-2035, and 2070-2080 on a 4 km grid. These simulations show that the broad patterns of change depicted in CESM are inherited by the high-resolution simulations; however, the differences in the height and location of the mountains in the WRF simulation, relative to the CESM simulation, mean that the location and magnitude of the precipitation changes are very different. We further show that high-resolution simulations with the Intermediate Complexity Atmospheric Research model (ICAR) predict a similar spatial pattern in the change signal as WRF for these ensemble members. We then use ICAR to examine the rest of the CESM Large Ensemble as well as the uncertainty in the regional climate model due to the choice of physics parameterizations.

  13. Risk Management and Uncertainty in Large Complex Public Projects

    DEFF Research Database (Denmark)

    Neerup Themsen, Tim; Harty, Chris; Tryggestad, Kjell

    Governmental actors worldwide are promoting risk management as a rational approach to manage uncertainty and improve the abilities to deliver large complex projects according to budget, time plans, and pre-set project specifications: But what do we know about the effects of risk management...... on the abilities to meet such objectives? Using Callon’s (1998) twin notions of framing and overflowing we examine the implementation of risk management within the Danish public sector and the effects this generated for the management of two large complex projects. We show how the rational framing of risk...... management has generated unexpected costly outcomes such as: the undermining of the longer-term value and societal relevance of the built asset, the negligence of the wider range of uncertainties emerging during project processes, and constraining forms of knowledge. We also show how expert accountants play...

  14. Uncertainty of measurement for large product verification: evaluation of large aero gas turbine engine datums

    International Nuclear Information System (INIS)

    Muelaner, J E; Wang, Z; Keogh, P S; Brownell, J; Fisher, D

    2016-01-01

    Understanding the uncertainty of dimensional measurements for large products such as aircraft, spacecraft and wind turbines is fundamental to improving efficiency in these products. Much work has been done to ascertain the uncertainty associated with the main types of instruments used, based on laser tracking and photogrammetry, and the propagation of this uncertainty through networked measurements. Unfortunately this is not sufficient to understand the combined uncertainty of industrial measurements, which include secondary tooling and datum structures used to locate the coordinate frame. This paper presents for the first time a complete evaluation of the uncertainty of large-scale industrial measurement processes. Generic analysis and design rules are proven through uncertainty evaluation and optimization for the measurement of a large aero gas turbine engine. This shows how the instrument uncertainty can be considered negligible. Before optimization the dominant source of uncertainty was the tooling design; after optimization the dominant source was thermal expansion of the engine, meaning that no further improvement can be made without measurement in a temperature-controlled environment. These results will have a significant impact on the ability of aircraft and wind turbines to improve efficiency and therefore reduce carbon emissions, as well as on the reliability of these products. (paper)

  15. Measurements of fusion neutron yields by neutron activation technique: Uncertainty due to the uncertainty on activation cross-sections

    Energy Technology Data Exchange (ETDEWEB)

    Stankunas, Gediminas, E-mail: gediminas.stankunas@lei.lt [Lithuanian Energy Institute, Laboratory of Nuclear Installation Safety, Breslaujos str. 3, LT-44403 Kaunas (Lithuania); EUROfusion Consortium, JET, Culham Science Centre, Abingdon OX14 3DB (United Kingdom); Batistoni, Paola [ENEA, Via E. Fermi, 45, 00044 Frascati, Rome (Italy); EUROfusion Consortium, JET, Culham Science Centre, Abingdon OX14 3DB (United Kingdom); Sjöstrand, Henrik; Conroy, Sean [Department of Physics and Astronomy, Uppsala University, PO Box 516, SE-75120 Uppsala (Sweden); EUROfusion Consortium, JET, Culham Science Centre, Abingdon OX14 3DB (United Kingdom)

    2015-07-11

    The neutron activation technique is routinely used in fusion experiments to measure neutron yields. This paper investigates the uncertainty in these measurements due to the uncertainties in the dosimetry and activation cross-sections. For this purpose, activation cross-sections were taken from the International Reactor Dosimetry and Fusion File (IRDFF-v1.05) in 640-group ENDF-6 format for several reactions of interest for both 2.5 and 14 MeV neutrons. Activation coefficients (reaction rates) have been calculated using the neutron flux spectra at the JET vacuum vessel, for both DD and DT plasmas, calculated by MCNP in the required 640-energy-group format. The related uncertainties for the JET neutron spectra are evaluated as well, using the covariance data available in the library. These uncertainties are in general small, but not negligible when high accuracy is required in the determination of the fusion neutron yields.
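
    A sketch of the uncertainty propagation involved, assuming one activation reaction, a 640-group flux, and a synthetic correlated covariance; the real calculation uses IRDFF cross-sections and covariances with MCNP-computed JET spectra.

```python
import numpy as np

rng = np.random.default_rng(2)
G = 640  # energy groups, mimicking the IRDFF group structure

# Hypothetical stand-ins: normalized group fluxes phi_g and an activation
# cross-section sigma_g with ~3% correlated group-wise uncertainty.
phi = rng.random(G); phi /= phi.sum()
sigma = rng.random(G)
corr = np.exp(-np.abs(np.subtract.outer(np.arange(G), np.arange(G))) / 30.0)
cov_sigma = (0.03 ** 2) * corr * np.outer(sigma, sigma)

# Reaction rate per target atom: R = sum_g sigma_g * phi_g.
R = sigma @ phi

# Sandwich rule with sensitivities dR/dsigma_g = phi_g.
var_R = phi @ cov_sigma @ phi
print(f"R = {R:.4e}, relative uncertainty = {np.sqrt(var_R) / R:.2%}")
```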

  16. Investigation of the uncertainty of a validation experiment due to uncertainty in its boundary conditions

    International Nuclear Information System (INIS)

    Harris, J.; Nani, D.; Jones, K.; Khodier, M.; Smith, B.L.

    2011-01-01

    Elements contributing to uncertainty in experimental repeatability are quantified for data acquisition in a bank of cylinders. The cylinder bank resembles the lower plenum of a high temperature reactor, with cylinders arranged on equilateral triangles with a pitch-to-diameter ratio of 1.7. The 3-D as-built geometry was measured by imaging reflections off the internal surfaces of the facility. This information is useful for building CFD grids for validation studies. Time-averaged Particle Image Velocimetry (PIV) measurements were acquired daily over several months, along with the pressure drop between two cylinders. The atmospheric pressure was measured along with the data set. The PIV data and pressure drop were correlated with atmospheric conditions and changes in experimental setup. It was found that atmospheric conditions play little role in the channel velocity, but impact the pressure drop significantly. The adjustments made to the experiment setup did not change the results. However, in some cases, the wake behind a cylinder was shifted significantly from one day to the next. These changes did not correlate with ambient pressure, room temperature, or tear-downs/rebuilds of the facility. (author)

  17. Managing the continuum certainty, uncertainty, unpredictability in large engineering projects

    CERN Document Server

    Caron, Franco

    2013-01-01

    The brief will describe how to develop a risk analysis applied to a project through a sequence of steps: risk management planning, risk identification, risk classification, risk assessment, risk quantification, risk response planning, risk monitoring and control, process close-out and lessons learned. The project risk analysis and management process will be applied to large engineering projects, in particular those related to the oil and gas industry. The brief will address the overall range of possible events affecting the project, moving from certainty (project issues) through uncertainty (project risks) to unpredictability (unforeseeable events), considering both negative and positive events. Some quantitative techniques (simulation, event trees, Bayesian inference, etc.) will be used to develop risk quantification. The brief addresses a typical subject in the area of project management, with reference to large engineering projects concerning the realization of large plants and infrastructures. These projects a...

  18. Value of Uncertainty: The Lost Opportunities in Large Projects

    Directory of Open Access Journals (Sweden)

    Agnar Johansen

    2016-08-01

    The uncertainty management theory has become well established over the last 20–30 years. However, the authors suggest that it does not fully address why opportunities often remain unexploited. Empirical studies show a stronger focus on mitigating risks than on exploiting opportunities. This paper therefore addresses why so few opportunities are explored in large projects. The theory claims that risks and opportunities should be managed equally in the same process. In two surveys, conducted in six private and public companies over a four-year period, project managers stated that uncertainty management is about managing risks and opportunities. However, two case studies of 12 projects from the same companies revealed that all of them had their main focus on risks, and most of the opportunities were left unexploited. We have developed a theoretical explanation model to shed light on this phenomenon. The concept is a reflection based on findings from our empirical data, set against the current project management, uncertainty, risk and stakeholder literature. Our model shows that the threshold for pursuing a potential opportunity is high. If a potential opportunity is to be considered, it must be extremely interesting, since it may require contract changes, and the project must abandon an earlier-accepted best solution.

  19. Attributing uncertainty in streamflow simulations due to variable inputs via the Quantile Flow Deviation metric

    Science.gov (United States)

    Shoaib, Syed Abu; Marshall, Lucy; Sharma, Ashish

    2018-06-01

    Every model used to characterise a real-world process is affected by uncertainty. Selecting a suitable model is a vital aspect of engineering planning and design. Observation or input errors make the prediction of modelled responses more uncertain. By way of a recently developed attribution metric, this study develops a method for analysing variability in model inputs together with model structure variability, to quantify their relative contributions in typical hydrological modelling applications. The Quantile Flow Deviation (QFD) metric is used to assess these alternate sources of uncertainty. The Australian Water Availability Project (AWAP) precipitation data for four different Australian catchments are used to analyse the impact of spatial rainfall variability on simulated streamflow variability via the QFD. The QFD metric attributes the variability in flow ensembles to uncertainty associated with the selection of a model structure and input time series. For the case study catchments, the relative contribution of input uncertainty due to rainfall is higher than that due to potential evapotranspiration, and overall input uncertainty is significant compared to model structure and parameter uncertainty. Overall, this study investigates the propagation of input uncertainty in a daily streamflow modelling scenario and demonstrates how input errors manifest across different streamflow magnitudes.

  20. Uncertainty in urban flood damage assessment due to urban drainage modelling and depth-damage curve estimation.

    Science.gov (United States)

    Freni, G; La Loggia, G; Notaro, V

    2010-01-01

    Due to the increased occurrence of flooding events in urban areas, many procedures for flood damage quantification have been defined in recent decades. The lack of large databases is in most cases overcome by combining the output of urban drainage models and damage curves linking flooding to expected damage. The application of advanced hydraulic models as diagnostic, design and decision-making support tools has become standard practice in hydraulic research and application. Flood damage functions are usually evaluated by a priori estimation of potential damage (based on the value of exposed goods) or by interpolating real damage data (recorded during historical flooding events). Hydraulic models have undergone continuous advancement, pushed forward by increasing computer capacity. The details of the flood propagation process on the surface and of the interconnections between underground and surface drainage systems have been studied extensively in recent years, resulting in progressively more reliable models. The same level of advancement has not been reached with regard to damage curves, for which improvements are highly dependent on data availability; this remains the main bottleneck in expected flood damage estimation. Such functions are usually affected by significant uncertainty, intrinsically related to the collected data and to the simplified structure of the adopted functional relationships. The present paper aimed to evaluate this uncertainty by comparing the intrinsic uncertainty connected to the construction of the damage-depth function to the hydraulic model uncertainty. In this way, the paper sought to evaluate the role of hydraulic model detail level in the wider context of flood damage estimation. This paper demonstrated that the use of detailed hydraulic models might not be justified because of the higher computational cost and the significant uncertainty in damage estimation curves. This uncertainty occurs mainly

  1. Application of probabilistic modelling for the uncertainty evaluation of alignment measurements of large accelerator magnets assemblies

    Science.gov (United States)

    Doytchinov, I.; Tonnellier, X.; Shore, P.; Nicquevert, B.; Modena, M.; Mainaud Durand, H.

    2018-05-01

    Micrometric assembly and alignment requirements for future particle accelerators, and especially large assemblies, create the need for accurate uncertainty budgeting of alignment measurements. Measurements and uncertainties have to be accurately stated and traceable to international standards for metre-long assemblies, at the level of tens of µm. Indeed, these hundreds of assemblies will be produced and measured by several suppliers around the world and will have to be integrated into a single machine. As part of the PACMAN project at CERN, we proposed and studied a practical application of probabilistic modelling of task-specific alignment uncertainty by applying a simulation-by-constraints calibration method. Using this method, we calibrated our measurement model using available data from ISO standardised tests (10360 series) for the metrology equipment. We combined this model with reference measurements and analysis of the measured data to quantify the actual specific uncertainty of each alignment measurement procedure. Our methodology was successfully validated against a calibrated and traceable 3D artefact as part of an international inter-laboratory study. The validated models were used to study the expected alignment uncertainty and important sensitivity factors in measuring the shortest and longest of the Compact Linear Collider study assemblies, 0.54 m and 2.1 m respectively. In both cases, the laboratory alignment uncertainty was within the targeted uncertainty budget of 12 µm (68% confidence level). It was found that the remaining uncertainty budget for any additional alignment error compensations, such as the thermal drift error due to variation in machine operation heat load conditions, must be within 8.9 µm and 9.8 µm (68% confidence level) respectively.

  2. Uncertainty on PIV mean and fluctuating velocity due to bias and random errors

    International Nuclear Information System (INIS)

    Wilson, Brandon M; Smith, Barton L

    2013-01-01

    Particle image velocimetry is a powerful and flexible fluid velocity measurement tool. In spite of its widespread use, the uncertainty of PIV measurements has not been sufficiently addressed to date. The calculation and propagation of local, instantaneous uncertainties on PIV results into the measured mean and Reynolds stresses are demonstrated for four PIV error sources that impact uncertainty through the vector computation: particle image density, diameter, displacement and velocity gradients. For the purpose of this demonstration, velocity data are acquired in a rectangular jet. Hot-wire measurements are compared to PIV measurements with velocity fields computed using two PIV algorithms. Local uncertainty on the velocity mean and Reynolds stress for these algorithms are automatically estimated using a previously published method. Previous work has shown that PIV measurements can become ‘noisy’ in regions of high shear as well as regions of small displacement. This paper also demonstrates the impact of these effects by comparing PIV data to data acquired using hot-wire anemometry, which does not suffer from the same issues. It is confirmed that flow gradients, large particle images and insufficient particle image displacements can result in elevated measurements of turbulence levels. The uncertainty surface method accurately estimates the difference between hot-wire and PIV measurements for most cases. The uncertainty based on each algorithm is found to be unique, motivating the use of algorithm-specific uncertainty estimates. (paper)
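
    A sketch of the two propagation effects discussed above, with synthetic numbers (the published uncertainty-surface method itself is not reproduced): per-sample uncertainties set the standard error of the mean, and random noise inflates the measured Reynolds normal stress.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 2000  # independent velocity samples at one interrogation point

u = 5.0 + 0.5 * rng.standard_normal(N)         # "true" turbulent samples, m/s
sigma_i = 0.08 + 0.02 * rng.random(N)          # per-sample PIV uncertainty, m/s
u_meas = u + sigma_i * rng.standard_normal(N)  # measured samples with noise

# Propagation into the mean: standard error from the instantaneous sigmas.
u_bar = u_meas.mean()
sigma_mean = np.sqrt(np.sum(sigma_i ** 2)) / N

# The measured normal stress is biased high by the mean noise variance,
# consistent with the elevated turbulence levels noted in the abstract.
uu_meas = u_meas.var(ddof=1)
uu_corr = uu_meas - np.mean(sigma_i ** 2)
print(f"mean = {u_bar:.3f} +/- {sigma_mean:.3f} m/s")
print(f"u'u': measured {uu_meas:.4f}, noise-corrected {uu_corr:.4f} m^2/s^2")
```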

  3. Estimation of Peaking Factor Uncertainty due to Manufacturing Tolerance using Statistical Sampling Method

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Kyung Hoon; Park, Ho Jin; Lee, Chung Chan; Cho, Jin Young [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-10-15

    The purpose of this paper is to study the effect on output parameters in the lattice physics calculation due to input uncertainties such as manufacturing deviations from nominal values for material compositions and geometric dimensions. In nuclear design and analysis, lattice physics calculations are usually employed to generate lattice parameters for the nodal core simulation and pin power reconstruction. These lattice parameters, which consist of homogenized few-group cross-sections, assembly discontinuity factors, and form-functions, can be affected by input uncertainties which arise from three different sources: 1) multi-group cross-section uncertainties, 2) the uncertainties associated with methods and modeling approximations utilized in lattice physics codes, and 3) fuel/assembly manufacturing uncertainties. In this paper, data provided by the light water reactor (LWR) uncertainty analysis in modeling (UAM) benchmark have been used as the manufacturing uncertainties. First, the effect of each input parameter has been investigated through sensitivity calculations at the fuel assembly level. Then, the uncertainty in the prediction of the peaking factor due to the most sensitive input parameters has been estimated using the statistical sampling method, often called the brute-force method. For our analysis, the two-dimensional transport lattice code DeCART2D and its ENDF/B-VII.1-based 47-group library were used to perform the lattice physics calculations. Sensitivity calculations have been performed in order to study the influence of manufacturing tolerances on the lattice parameters. The manufacturing tolerance that has the largest influence on the k-inf is the fuel density. The second most sensitive parameter is the outer clad diameter.
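
    A sketch of the brute-force sampling step, assuming a linearized peaking-factor response to the two most sensitive tolerances named above; the sensitivity coefficients and tolerance sigmas are invented placeholders, not DeCART2D or UAM values.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5000  # number of sampled manufacturing realizations

# Relative deviations from nominal, drawn from the assumed tolerances.
density  = rng.normal(0.0, 0.015, n)  # fuel density, ~1.5% (1 sigma)
clad_dia = rng.normal(0.0, 0.004, n)  # outer clad diameter, ~0.4% (1 sigma)

# Linearized response of the pin peaking factor around its nominal value.
fp_nominal = 1.450
fp = fp_nominal * (1.0 + 0.8 * density - 0.3 * clad_dia)

rel = fp.std(ddof=1) / fp.mean()
print(f"Fp = {fp.mean():.4f} +/- {fp.std(ddof=1):.4f} ({rel:.2%} relative)")
```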

  4. Uncertainty Evaluation of Reactivity Coefficients for a large advanced SFR Core Design

    International Nuclear Information System (INIS)

    Khamakhem, Wassim; Rimpault, Gerald

    2008-01-01

    Sodium-cooled fast reactors are currently being reshaped in order to meet Generation IV goals on economics, safety and reliability, sustainability and proliferation resistance. Recent studies have led to large SFR cores for 3600 MWth power plants, cores which exhibit interesting features. The designs have had to balance competing aspects such as sustainability and safety characteristics. Sustainability in neutronic terms is translated into a positive breeding gain, and safety into rather low Na void reactivity effects. The studies have been done on two SFR concepts using oxide and carbide fuels. Sensitivity theory, as implemented in the ERANOS deterministic code system, has been used. Calculations have been performed with different sodium evaluations: JEF2.2, ERALIB-1 and the most recent JEFF3.1 and ENDF/B-VII, in order to make a broad comparison. Values for the Na void reactivity effect exhibit differences as large as 14% when using the different sodium libraries. Uncertainties due to nuclear data on the reactivity coefficients were evaluated with BOLNA variance-covariance data; the Na void effect uncertainties are close to 12% at 1σ. Since the uncertainties are far beyond the target accuracy for a design achieving high performance, two directions are envisaged: the first is to perform new differential measurements; the second is to use integral experiments to effectively improve the nuclear data set and its uncertainties, as was done in the past with ERALIB1. (authors)
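
    The sensitivity-based uncertainty evaluation referred to here is conventionally the first-order "sandwich" rule; a generic statement, with notation assumed rather than taken from the record, is:

```latex
% Relative uncertainty of an integral parameter R (e.g. the Na void
% reactivity effect): S is the vector of relative sensitivities of R to the
% nuclear data and M the relative covariance matrix (e.g. BOLNA).
\left( \frac{\Delta R}{R} \right)^{2} \;=\; S^{\mathsf{T}} M \, S
```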

  5. Planning under uncertainty solving large-scale stochastic linear programs

    Energy Technology Data Exchange (ETDEWEB)

    Infanger, G. [Stanford Univ., CA (United States). Dept. of Operations Research]|[Technische Univ., Vienna (Austria). Inst. fuer Energiewirtschaft

    1992-12-01

    For many practical problems, solutions obtained from deterministic models are unsatisfactory because they fail to hedge against certain contingencies that may occur in the future. Stochastic models address this shortcoming, but until recently seemed to be intractable due to their size. Recent advances both in solution algorithms and in computer technology now allow us to solve important and general classes of practical stochastic problems. We show how large-scale stochastic linear programs can be efficiently solved by combining classical decomposition and Monte Carlo (importance) sampling techniques. We discuss the methodology for solving two-stage stochastic linear programs with recourse, present numerical results of large problems with numerous stochastic parameters, show how to efficiently implement the methodology on a parallel multi-computer, and derive the theory for solving a general class of multi-stage problems with dependency of the stochastic parameters within a stage and between different stages.
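
    For reference, the two-stage stochastic linear program with recourse has the standard form below (standard notation, not quoted from the abstract); the decomposition-plus-importance-sampling approach estimates the expectation term from sampled scenarios ω.

```latex
% Two-stage stochastic LP with recourse: first-stage decision x, scenario
% \omega with data (q_\omega, h_\omega, T_\omega) and recourse decision y.
\begin{aligned}
\min_{x \ge 0}\;\; & c^{\mathsf{T}} x + \mathbb{E}_{\omega}\!\left[ Q(x,\omega) \right]
  \quad \text{s.t. } A x = b,\\
Q(x,\omega) \;=\; & \min_{y \ge 0}\; q_{\omega}^{\mathsf{T}} y
  \quad \text{s.t. } W y = h_{\omega} - T_{\omega} x.
\end{aligned}
```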

  6. Exploring the uncertainty in attributing sediment contributions in fingerprinting studies due to uncertainty in determining element concentrations in source areas.

    Science.gov (United States)

    Gomez, Jose Alfonso; Owens, Phillip N.; Koiter, Alex J.; Lobb, David

    2016-04-01

    One of the major sources of uncertainty in attributing sediment sources in fingerprinting studies is the uncertainty in determining the concentrations of the elements used in the mixing model due to the variability of the concentrations of these elements in the source materials (e.g., Kraushaar et al., 2015). The uncertainty in determining the "true" concentration of a given element in each one of the source areas depends on several factors, among them the spatial variability of that element, the sampling procedure and sampling density. Researchers have limited control over these factors, and usually sampling density tends to be sparse, limited by time and the resources available. Monte Carlo analysis has been used regularly in fingerprinting studies to explore the probable solutions within the measured variability of the elements in the source areas, providing an appraisal of the probability of the different solutions (e.g., Collins et al., 2012). This problem can be considered analogous to the propagation of uncertainty in hydrologic models due to uncertainty in the determination of the values of the model parameters, and there are many examples of Monte Carlo analysis of this uncertainty (e.g., Freeze, 1980; Gómez et al., 2001). Some of these model analyses rely on the simulation of "virtual" situations that were calibrated from parameter values found in the literature, with the purpose of providing insight about the response of the model to different configurations of input parameters. This approach - evaluating the answer for a "virtual" problem whose solution could be known in advance - might be useful in evaluating the propagation of uncertainty in mixing models in sediment fingerprinting studies. In this communication, we present the preliminary results of an on-going study evaluating the effect of variability of element concentrations in source materials, sampling density, and the number of elements included in the mixing models. For this study a virtual
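
    A sketch of the Monte Carlo mixing-model analysis referred to above: source concentrations are resampled from their within-source variability, the mixture is un-mixed each time, and the scatter of the recovered proportions expresses the attribution uncertainty. Tracer means and variances are invented for illustration.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(5)

# Hypothetical tracers: mean concentration of each element (rows) in three
# source areas (columns), with 15% within-source variability.
src_mean = np.array([[12.0,  5.0,  9.0],
                     [ 0.8,  2.1,  1.2],
                     [40.0, 25.0, 33.0]])
src_std = 0.15 * src_mean
mixture = np.array([9.4, 1.3, 33.5])  # measured sediment sample

props = []
for _ in range(5000):
    A = rng.normal(src_mean, src_std)  # one realization of source signatures
    # Solve mixture ~= A @ p with p >= 0; an extra heavily weighted row
    # softly enforces sum(p) = 1, then p is renormalized.
    p, _ = nnls(np.vstack([A, 100.0 * np.ones(3)]), np.append(mixture, 100.0))
    props.append(p / p.sum())

props = np.array(props)
for i, (m, s) in enumerate(zip(props.mean(0), props.std(0)), start=1):
    print(f"source {i}: proportion {m:.2f} +/- {s:.2f}")
```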

  7. Large contribution of natural aerosols to uncertainty in indirect forcing

    Science.gov (United States)

    Carslaw, K. S.; Lee, L. A.; Reddington, C. L.; Pringle, K. J.; Rap, A.; Forster, P. M.; Mann, G. W.; Spracklen, D. V.; Woodhouse, M. T.; Regayre, L. A.; Pierce, J. R.

    2013-11-01

    The effect of anthropogenic aerosols on cloud droplet concentrations and radiative properties is the source of one of the largest uncertainties in the radiative forcing of climate over the industrial period. This uncertainty affects our ability to estimate how sensitive the climate is to greenhouse gas emissions. Here we perform a sensitivity analysis on a global model to quantify the uncertainty in cloud radiative forcing over the industrial period caused by uncertainties in aerosol emissions and processes. Our results show that 45 per cent of the variance of aerosol forcing since about 1750 arises from uncertainties in natural emissions of volcanic sulphur dioxide, marine dimethylsulphide, biogenic volatile organic carbon, biomass burning and sea spray. Only 34 per cent of the variance is associated with anthropogenic emissions. The results point to the importance of understanding pristine pre-industrial-like environments, with natural aerosols only, and suggest that improved measurements and evaluation of simulated aerosols in polluted present-day conditions will not necessarily result in commensurate reductions in the uncertainty of forcing estimates.

  8. Thermodynamic Temperatures of High-Temperature Fixed Points: Uncertainties Due to Temperature Drop and Emissivity

    Science.gov (United States)

    Castro, P.; Machin, G.; Bloembergen, P.; Lowe, D.; Whittam, A.

    2014-07-01

    This study forms part of the European Metrology Research Programme project Implementing the New Kelvin, to assign thermodynamic temperatures to a selected set of high-temperature fixed points (HTFPs): Cu, Co-C, Pt-C, and Re-C. A realistic thermal model of these HTFPs, developed in the finite-volume software ANSYS FLUENT, was constructed to quantify the uncertainty associated with the temperature drop across the back wall of the cell. In addition, the widely applied software package STEEP3 was used to investigate the influence of cell emissivity. The temperature drop, ΔT, is the temperature difference, due to the net loss of heat from the aperture of the cavity, between the back wall of the cavity, viewed by the thermometer and defining the radiance temperature, and the solid-liquid interface of the alloy, defining the transition temperature of the HTFP. The actual value of ΔT can be used either as a correction (with associated uncertainty) to thermodynamic temperature evaluations of HTFPs, or as an uncertainty contribution to the overall estimated uncertainty. In addition, the effect of a range of furnace temperature profiles on the temperature drop was calculated and found to be negligible for Cu, Co-C, and Pt-C, and small only for Re-C. The effective isothermal emissivity is calculated over the wavelength range from 450 nm to 850 nm for different assumed values of surface emissivity. Even when furnace temperature profiles are taken into account, the estimated emissivities change only slightly from the effective isothermal emissivity of the bare cell. These emissivity calculations are used to estimate the uncertainty in the temperature assignment due to the uncertainty in the emissivity of the blackbody.

  9. Uncertainty Quantification for Large-Scale Ice Sheet Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Ghattas, Omar [Univ. of Texas, Austin, TX (United States)

    2016-02-05

    This report summarizes our work to develop advanced forward and inverse solvers and uncertainty quantification capabilities for a nonlinear 3D full Stokes continental-scale ice sheet flow model. The components include: (1) forward solver: a new state-of-the-art parallel adaptive scalable high-order-accurate mass-conservative Newton-based 3D nonlinear full Stokes ice sheet flow simulator; (2) inverse solver: a new adjoint-based inexact Newton method for solution of deterministic inverse problems governed by the above 3D nonlinear full Stokes ice flow model; and (3) uncertainty quantification: a novel Hessian-based Bayesian method for quantifying uncertainties in the inverse ice sheet flow solution and propagating them forward into predictions of quantities of interest such as ice mass flux to the ocean.

  10. Robust design optimization method for centrifugal impellers under surface roughness uncertainties due to blade fouling

    Science.gov (United States)

    Ju, Yaping; Zhang, Chuhua

    2016-03-01

    Blade fouling has been proved to be a great threat to compressor performance in the operating stage. Current research on fouling-induced performance degradation of centrifugal compressors is based mainly on simplified roughness models that do not take into account realistic factors such as the spatial non-uniformity and randomness of the fouling-induced surface roughness. Moreover, little attention has been paid to the robust design optimization of centrifugal compressor impellers with consideration of blade fouling. In this paper, a multi-objective robust design optimization method is developed for centrifugal impellers under surface roughness uncertainties due to blade fouling. A three-dimensional surface roughness map is proposed to describe the non-uniformity and randomness of realistic fouling accumulations on blades. To lower the computational cost of robust design optimization, the support vector regression (SVR) metamodel is combined with the Monte Carlo simulation (MCS) method to conduct the uncertainty analysis of fouled impeller performance. The results show that the critical fouled region associated with impeller performance degradation lies at the leading edge of the blade tip. The SVR metamodel has proved to be an efficient and accurate means of detecting impeller performance variations caused by roughness uncertainties. After design optimization, the robust optimal design is found to be more efficient and less sensitive to fouling uncertainties while maintaining good impeller performance in the clean condition. This research proposes a systematic design optimization method for centrifugal compressors with consideration of blade fouling, providing practical guidance for the design of advanced centrifugal compressors.
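
    A sketch of the SVR-plus-MCS step under stated assumptions: a cheap analytic function stands in for the CFD evaluations, an SVR metamodel is trained on a small sample, and Monte Carlo sampling of hypothetical roughness-map parameters then runs on the metamodel.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(6)

def cfd_surrogate(x):
    """Stand-in for CFD: impeller efficiency vs. two roughness-map
    parameters (mean roughness, non-uniformity), both scaled to [0, 1]."""
    return 0.92 - 0.04 * x[:, 0] - 0.015 * x[:, 0] * x[:, 1]

# Train the metamodel on a small "design of experiments".
X_train = rng.random((60, 2))
meta = SVR(kernel="rbf", C=100.0, epsilon=1e-4).fit(X_train, cfd_surrogate(X_train))

# Monte Carlo on the cheap metamodel: fouling uncertainty as input scatter.
X_mc = np.clip(rng.normal(0.5, 0.15, (20000, 2)), 0.0, 1.0)
eta = meta.predict(X_mc)
print(f"efficiency = {eta.mean():.4f} +/- {eta.std():.4f}")
```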

  11. Quantification of Back-End Nuclear Fuel Cycle Metrics Uncertainties Due to Cross Sections

    International Nuclear Information System (INIS)

    Tracy E. Stover Jr.

    2007-01-01

    This work examines uncertainties in the back-end fuel cycle metrics of isotopic composition, decay heat, radioactivity, and radiotoxicity. Most advanced fuel cycle scenarios, including the ones represented in this work, are limited by one or more of these metrics, so their quantification becomes of great importance in order to optimize or select one of these scenarios. Uncertainty quantification, in this work, is performed by propagating cross-section covariance data, and later number density covariance data, through a reactor physics and depletion code sequence. Propagation of uncertainty is performed primarily via the Efficient Subspace Method (ESM). ESM decomposes the covariance data into singular pairs and perturbs the input data along independent directions of the uncertainty, and only for the most significant values of that uncertainty. Once the results of these perturbations are collected, ESM directly calculates the covariance of the observed output a posteriori. By exploiting the rank-deficient nature of the uncertainty data, ESM works more efficiently than traditional stochastic sampling, but is shown to produce equivalent results. ESM is beneficial for very detailed models with large amounts of input data that make stochastic sampling impractical. In this study various fuel cycle scenarios are examined. Simplified, representative models of pressurized water reactor (PWR) and boiling water reactor (BWR) fuels composed of both uranium oxide and mixed oxides are examined. These simple models are intended to give a representation of the uncertainty that can be associated with open uranium oxide fuel cycles and closed mixed oxide fuel cycles. The simplified models also serve as a demonstration to show that ESM and stochastic sampling produce equivalent results, because these models require minimal computer resources and have amounts of input data small enough that either method can be quickly implemented and a numerical experiment performed. The simplified
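
    A sketch of the ESM idea under a linear-response assumption: decompose a rank-deficient covariance, run the model once along each dominant scaled singular direction, and assemble the output variance from those few runs. The model and covariance are synthetic stand-ins for the transport/depletion sequence and nuclear data.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500  # size of the cross-section input vector

# Rank-deficient covariance built from 4 modes, mimicking the structure of
# nuclear-data covariance libraries (an assumption, not actual ENDF data).
B = rng.standard_normal((n, 4))
cov = B @ B.T

w = rng.standard_normal(n)  # linear stand-in for the code sequence
def model(x):
    return w @ x

# Keep only the r dominant singular directions, scaled by sqrt(singular value).
U, s, _ = np.linalg.svd(cov)
r = int(np.sum(s > 1e-10 * s[0]))       # effective rank (4 here)
directions = U[:, :r] * np.sqrt(s[:r])

x0 = np.ones(n)
f0 = model(x0)
# For a linear response, var(f) = sum_k (f(x0 + d_k) - f0)^2 = w^T cov w,
# so r model runs reproduce what stochastic sampling estimates.
var_esm = sum((model(x0 + directions[:, k]) - f0) ** 2 for k in range(r))
print(f"ESM ({r} runs): {var_esm:.3f}   exact: {w @ cov @ w:.3f}")
```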

  12. Uncertainty budget in internal monostandard NAA for small and large size samples analysis

    International Nuclear Information System (INIS)

    Dasari, K.B.; Acharya, R.

    2014-01-01

    Evaluation of the total uncertainty budget on a determined concentration value is important under a quality assurance programme. Concentration calculations in NAA are carried out by relative NAA or by the k0-based internal monostandard NAA (IM-NAA) method. The IM-NAA method has been used for the analysis of small and large samples of clay pottery. An attempt was made to identify the uncertainty components in IM-NAA, and the uncertainty budget for La in both small and large size samples has been evaluated and compared. (author)

  13. Assessing Fatigue and Ultimate Load Uncertainty in Floating Offshore Wind Turbines Due to Varying Simulation Length

    Energy Technology Data Exchange (ETDEWEB)

    Stewart, G.; Lackner, M.; Haid, L.; Matha, D.; Jonkman, J.; Robertson, A.

    2013-07-01

    With the push towards siting wind turbines farther offshore due to higher wind quality and less visibility, floating offshore wind turbines, which can be located in deep water, are becoming an economically attractive option. The International Electrotechnical Commission's (IEC) 61400-3 design standard covers fixed-bottom offshore wind turbines, but there are a number of new research questions that need to be answered to modify these standards so that they are applicable to floating wind turbines. One issue is the appropriate simulation length needed for floating turbines. This paper will discuss the results from a study assessing the impact of simulation length on the ultimate and fatigue loads of the structure, and will address uncertainties associated with changing the simulation length for the analyzed floating platform. Recommendations of required simulation length based on load uncertainty will be made and compared to current simulation length requirements.

  14. Uncertainty in Earthquake Source Imaging Due to Variations in Source Time Function and Earth Structure

    KAUST Repository

    Razafindrakoto, H. N. T.; Mai, Paul Martin

    2014-03-25

    One way to improve the accuracy and reliability of kinematic earthquake source imaging is to investigate the origins of uncertainty and to minimize their effects. The difficulties in kinematic source inversion arise from the nonlinearity of the problem, nonunique choices in the parameterization, and observational errors. We analyze in particular the uncertainty related to the choice of the source time function (STF) and the variability in Earth structure. We consider a synthetic data set generated from a spontaneous dynamic rupture calculation. Using Bayesian inference, we map the solution space of peak slip rate, rupture time, and rise time to characterize the kinematic rupture in terms of posterior density functions. Our test to investigate the effect of the choice of STF reveals that all three tested STFs (isosceles triangle, regularized Yoffe with acceleration time of 0.1 and 0.3 s) retrieve the patch of high slip and slip rate around the hypocenter. However, the use of an isosceles triangle as STF artificially accelerates the rupture to propagate faster than the target solution. It additionally generates an artificial linear correlation between rupture onset time and rise time. These appear to compensate for the dynamic source effects that are not included in the symmetric triangular STF. The exact rise time for the tested STFs is difficult to resolve due to the small amount of seismic moment radiated in the tail of the STF. To highlight the effect of Earth structure variability, we perform inversions including the uncertainty in the wavespeed only, and variability in both wavespeed and layer depth. We find that little difference is noticeable between the resulting rupture model uncertainties from these two parameterizations. Both significantly broaden the posterior densities and cause faster rupture propagation, particularly near the hypocenter, due to the major velocity change at the depth where the fault is located.

  17. Uncertainties in environmental impact assessments due to expert opinion. Case study. Radioactive waste in Slovenia

    International Nuclear Information System (INIS)

    Kontic, B.; Ravnik, M.

    1998-01-01

    A comprehensive study was done at the J. Stefan Institute in Ljubljana and the School of Environmental Sciences in Nova Gorica on the sources of uncertainty in long-term environmental impact assessment (EIA). The research examined two main components: first, the methodology of preparing an EIA, and second, the validity of expert opinion. Following the findings of the research, a survey was performed on the regulatory assessment of the acceptability of a radioactive waste repository. The components of dose evaluation in different time frames were examined in terms of their susceptibility to uncertainty. The uncertainty associated with human exposure in the far future is so large that dose and risk, as individual numerical indicators of safety, should in our opinion not be used in compliance assessment for a radioactive waste repository. On the other hand, results of the calculations on the amount and activity of low- and intermediate-level waste and the spent fuel from the Krsko NPP show that the experts' understanding of the questions treated can be expressed in a transparent way, giving credible output from the models used. (author)

  18. Sensitivity/uncertainty analysis for free-in-air tissue kerma due to initial radiation at Hiroshima and Nagasaki

    International Nuclear Information System (INIS)

    Lillie, R.A.; Broadhead, B.L.; Pace, J.V. III

    1988-01-01

    Uncertainty estimates and cross correlations by range/survivor have been calculated for the Hiroshima and Nagasaki free-in-air (FIA) tissue kerma obtained from two-dimensional air/ground transport calculations. The uncertainties due to modeling parameter and basic nuclear transport data uncertainties were calculated for 700-, 1000-, and 1500-m ground ranges. Only the FIA tissue kerma due to initial radiation was treated in the analysis; the uncertainties associated with terrain and building shielding and phantom attenuation were not considered in this study. Uncertainties of ~20% were obtained for the prompt neutron and secondary gamma kerma and 30% for the prompt gamma kerma at both cities. The uncertainties on the total prompt kerma at Hiroshima and Nagasaki are ~18 and ~15%, respectively. The estimated uncertainties vary only slightly by ground range and are fairly highly correlated. The total prompt kerma uncertainties are dominated by the secondary gamma uncertainties, which in turn are dominated by the modeling parameter uncertainties, particularly those associated with the weapon yield and radiation sources.

  19. Characterizing Uncertainty In Electrical Resistivity Tomography Images Due To Subzero Temperature Variability

    Science.gov (United States)

    Herring, T.; Cey, E. E.; Pidlisecky, A.

    2017-12-01

    Time-lapse electrical resistivity tomography (ERT) is used to image changes in subsurface electrical conductivity (EC), e.g. due to a saline contaminant plume. Temperature variation also produces an EC response, which interferes with the signal of interest. Temperature compensation requires the temperature distribution and the relationship between EC and temperature, but this relationship at subzero temperatures is not well defined. The goal of this study is to examine how uncertainty in the subzero EC/temperature relationship manifests in temperature-corrected ERT images, especially with respect to relevant plume parameters (location, contaminant mass, etc.). First, a lab experiment was performed to determine the EC of fine-grained glass beads over a range of temperatures (-20 to 20 °C) and saturations. The measured EC/temperature relationship was then used to add temperature effects to a hypothetical EC model of a conductive plume. Forward simulations yielded synthetic field data to which temperature corrections were applied. Varying the temperature/EC relationship used in the temperature correction and comparing the temperature-corrected ERT results to the synthetic model enabled a quantitative analysis of the errors in plume parameters associated with temperature variability. Modeling possible scenarios in this way helps to establish the feasibility of different time-lapse ERT applications by quantifying the uncertainty associated with parameter(s) of interest.
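    The temperature-compensation step described here can be sketched with the linear ratio model commonly used above freezing; the coefficient value and the crude stand-in for subzero freezing behavior below are assumptions for illustration only, not the study's measured relationship:

```python
import numpy as np

ALPHA = 0.02  # fractional EC change per deg C (typical above-freezing value; assumed)

def ec_at_25(ec_measured, temperature_c, alpha=ALPHA):
    """Refer a measured bulk EC to 25 C with the linear ratio model.

    The linear model is common above 0 C; extrapolating it below freezing
    ignores the sharp EC drop as pore water freezes, which is exactly the
    poorly constrained regime the study targets.
    """
    return ec_measured / (1.0 + alpha * (temperature_c - 25.0))

# Illustrate the correction error when the true subzero behavior deviates
temps = np.array([-10.0, -5.0, 0.0, 5.0, 15.0])
true_ec25 = 0.05  # S/m, hypothetical plume EC at 25 C
measured = true_ec25 * (1.0 + ALPHA * (temps - 25.0))
measured[temps < 0] *= 0.5  # crude stand-in for freezing-induced EC loss

corrected = ec_at_25(measured, temps)
for t, c in zip(temps, corrected):
    print(f"T={t:+5.1f} C  corrected EC={c:.4f} S/m  (true {true_ec25:.4f})")
```

    The subzero rows recover roughly half the true EC, illustrating how an uncertain EC/temperature relationship maps directly into plume-parameter error.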

  20. Estimated Uncertainty in Segmented Gamma Scanner Assay Results due to the Variation in Drum Tare Weights

    International Nuclear Information System (INIS)

    Bosko, A.; Croft, St.; Gulbransen, E.

    2009-01-01

    General purpose gamma scanners are often used to assay unknown drums that differ from those used to create the default calibration. This introduces a potential source of bias into the matrix correction when the correction is based on the estimation of the mean density of the drum contents from a weigh scale measurement. In this paper we evaluate the magnitude of this bias that may be introduced by performing assay measurements with a system whose matrix correction algorithm was calibrated with a set of standard drums but applied to a population of drums whose tare weight may be different. The matrix correction factors are perturbed in such cases because the unknown difference in tare weight gets reflected as a bias in the derived matrix density. This would be the only impact if the difference in tare weight was due solely to the weight of the lid or base, say. But in reality the reason for the difference may be because the steel wall of the drum is of a different thickness. Thus, there is an opposing interplay at work which tends to compensate. The purpose of this work is to evaluate and bound the magnitude of the resulting assay uncertainty introduced by tare weight variation. We compare the results obtained using simple analytical models and the 3-D ray tracing with ISOCS software to illustrate and quantify the problem. The numerical results allow a contribution to the Total Measurement Uncertainty (TMU) to be propagated into the final assay result. (authors)
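    A minimal sketch of how a tare-weight mismatch propagates into the derived matrix density; the drum volume, gross weight, and tare values are hypothetical:

```python
# Hypothetical 208-litre drum: the assay algorithm infers matrix density
# from (gross weight - assumed tare) / fill volume.
FILL_VOLUME_L = 208.0
ASSUMED_TARE_KG = 25.0   # tare of the calibration drums (assumed value)

def derived_density(gross_kg, tare_kg=ASSUMED_TARE_KG, volume_l=FILL_VOLUME_L):
    return (gross_kg - tare_kg) / volume_l  # kg/L

gross = 125.0            # measured gross weight of an unknown drum
for true_tare in (20.0, 25.0, 30.0):
    rho_true = (gross - true_tare) / FILL_VOLUME_L
    rho_used = derived_density(gross)
    bias = 100.0 * (rho_used - rho_true) / rho_true
    print(f"true tare {true_tare:4.1f} kg: matrix density bias {bias:+.1f}%")
```

    In the full analysis this density bias would be partially offset by the changed wall attenuation, which is the compensating interplay the abstract describes.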

  1. [Dual process in large number estimation under uncertainty].

    Science.gov (United States)

    Matsumuro, Miki; Miwa, Kazuhisa; Terai, Hitoshi; Yamada, Kento

    2016-08-01

    According to dual process theory, there are two systems in the mind: an intuitive and automatic System 1 and a logical and effortful System 2. While many previous studies about number estimation have focused on simple heuristics and automatic processes, the deliberative System 2 process has not been sufficiently studied. This study focused on the System 2 process for large number estimation. First, we described an estimation process based on participants’ verbal reports. The task, corresponding to the problem-solving process, consisted of creating subgoals, retrieving values, and applying operations. Second, we investigated the influence of such a deliberative System 2 process on intuitive System 1 estimation, using anchoring effects. The results of the experiment showed that the System 2 process could mitigate anchoring effects.

  2. Social Discounting of Large Dams with Climate Change Uncertainty

    Directory of Open Access Journals (Sweden)

    Marc Jeuland

    2010-06-01

    This paper reviews the recent discounting controversy and examines its implications for the appraisal of an illustrative hydropower project in Ethiopia. The analysis uses an integrated hydro-economic model that accounts for how the dam’s transboundary impacts vary with climate change. The real value of the dam is found to be highly sensitive to assumptions about future economic growth. The argument for investment is weakest under conditions of robust global economic growth, particularly if these coincide with unfavourable hydrological or development factors related to the project. If, however, long-term growth is reduced, the value of the dam tends to increase. There may also be distributional or local arguments favouring investment, if growth in the investment region lags behind that of the rest of the globe. In such circumstances, a large dam can be seen as a form of insurance that protects future vulnerable generations against the possibility of macroeconomic instability or climate shocks.
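    The sensitivity of dam value to growth assumptions follows from growth-dependent discounting; a minimal sketch using the Ramsey rule r = ρ + ηg, with parameter values and cash flows that are illustrative assumptions rather than the paper's calibration:

```python
import numpy as np

def ramsey_rate(rho, eta, g):
    """Ramsey discount rate: pure time preference plus eta times per-capita growth."""
    return rho + eta * g

def npv(benefits, rho=0.01, eta=1.5, g=0.02):
    r = ramsey_rate(rho, eta, g)
    t = np.arange(len(benefits))
    return np.sum(benefits / (1.0 + r) ** t)

# 100 years of constant net benefits of 1 unit/year (stylized dam cash flow)
flows = np.ones(100)
for g in (0.00, 0.02, 0.04):
    print(f"growth {g:.0%}: NPV = {npv(flows, g=g):.1f}")
```

    Higher assumed growth raises the discount rate and shrinks the present value of distant benefits, which is why the investment case is weakest under robust growth.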

  3. Uncertainties on decay heat power due to fission product data uncertainties; Incertitudes sur la puissance residuelle dues aux incertitudes sur les donnees de produits de fission

    Energy Technology Data Exchange (ETDEWEB)

    Rebah, J

    1998-08-01

    Following a reactor shutdown, after the fission process has completely faded out, a significant quantity of energy known as 'decay heat' continues to be generated in the core. Knowing with good precision the decay heat released in a fuel after reactor shutdown is necessary for: residual heat removal under normal operation or emergency shutdown conditions, the design of cooling systems, and spent fuel handling. In the summation calculation method, the decay heat is equal to the sum of the energies released by the individual fission products. When all nuclides that contribute significantly to the total decay heat are taken into account, the results from the summation method are comparable with the measured ones. Without complete covariance information for the nuclear data, the published uncertainty analyses of fission product decay heat summation calculations give underestimated errors. Through a variance/covariance analysis that takes into account the correlations between the basic nuclear data, we calculate in this work the uncertainties on the decay heat associated with the summation calculations. The contribution to the total error of the decay heat comes from uncertainties in three terms: fission yields, half-lives, and average beta and gamma decay energies. (author)
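    A minimal sketch of the summation method with first-order uncertainty propagation; the two-nuclide inventory, energies, and uncertainty levels are invented for illustration. It shows how including positive correlations between nuclide contributions enlarges the decay-heat error relative to the independence assumption:

```python
import numpy as np

# Toy inventory: decay constants (1/s), atoms at shutdown, mean beta+gamma
# energies (MeV). Values are illustrative, not evaluated nuclear data.
lam   = np.array([1e-3, 5e-5])
N0    = np.array([1e18, 5e18])
E     = np.array([1.2, 0.8])
sig_E = 0.05 * E          # 5% decay-energy uncertainty (assumed)
sig_N = 0.03 * N0         # 3% yield-driven inventory uncertainty (assumed)

MEV_TO_W = 1.602e-13      # MeV/s -> watts

def decay_heat(t):
    N = N0 * np.exp(-lam * t)
    return np.sum(lam * N * E) * MEV_TO_W

def decay_heat_sigma(t, corr=0.0):
    """First-order propagation; corr is an assumed common correlation
    between the per-nuclide contributions (0 = independent)."""
    N = N0 * np.exp(-lam * t)
    dP = lam * N * MEV_TO_W * np.sqrt(sig_E ** 2 + (E * sig_N / N0) ** 2)
    var = np.sum(dP ** 2)
    var += corr * (np.sum(dP) ** 2 - np.sum(dP ** 2))  # off-diagonal terms
    return np.sqrt(var)

for t in (0.0, 1e3, 1e4):
    print(f"t={t:8.0f} s  P={decay_heat(t):10.3e} W  "
          f"sigma(indep)={decay_heat_sigma(t):.2e} W  "
          f"sigma(corr=0.5)={decay_heat_sigma(t, 0.5):.2e} W")
```

    The correlated sigma is always at least as large as the independent one, which is the abstract's point: ignoring covariances underestimates the decay-heat error.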

  4. Uncertainties

    Indian Academy of Sciences (India)

    To reflect this uncertainty in the climate scenarios, the use of AOGCMs that explicitly simulate the carbon cycle and the chemistry of all the substances is needed. The Hadley Centre has developed a version of the climate model that allows the effect of climate change on the carbon cycle, and its feedback into climate, to be ...

  5. Uncertainty

    International Nuclear Information System (INIS)

    Silva, T.A. da

    1988-01-01

    The comparison between the uncertainty methods recommended by the International Atomic Energy Agency (IAEA) and by the International Committee for Weights and Measures (CIPM) is shown for the calibration of clinical dosimeters in the Secondary Standard Dosimetry Laboratory (SSDL). (C.G.C.) [pt

  6. Sampling based uncertainty analysis of 10% hot leg break LOCA in large scale test facility

    International Nuclear Information System (INIS)

    Sengupta, Samiran; Kraina, V.; Dubey, S. K.; Rao, R. S.; Gupta, S. K.

    2010-01-01

    Sampling-based uncertainty analysis was carried out to quantify the uncertainty in predictions of the best-estimate code RELAP5/MOD3.2 for a thermal hydraulic test (10% hot leg break LOCA) performed in the Large Scale Test Facility (LSTF) as part of an IAEA coordinated research project. The nodalisation of the test facility was qualified at both the steady-state and transient levels by systematically applying the procedures of the uncertainty methodology based on accuracy extrapolation (UMAE); uncertainty analysis was carried out using the Latin hypercube sampling (LHS) method to evaluate the uncertainty for ten input parameters. Sixteen output parameters were selected for uncertainty evaluation, and the uncertainty band between the 5th and 95th percentiles of the output parameters was evaluated. It was observed that the uncertainty band for the primary pressure during two-phase blowdown is larger than that in the remaining period. Similarly, a larger uncertainty band is observed for the accumulator injection flow during the reflood phase. Importance analysis was also carried out, and standard rank regression coefficients were computed to quantify the effect of each individual input parameter on the output parameters. It was observed that the break discharge coefficient is the most important uncertain parameter for the prediction of all the primary side parameters and that the steam generator (SG) relief pressure setting is the most important parameter in predicting the SG secondary pressure.
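    The percentile-band construction can be sketched as follows; the surrogate response function and parameter ranges are assumptions standing in for actual RELAP5/MOD3.2 runs:

```python
import numpy as np
from scipy.stats import qmc

n_runs, n_params = 100, 10            # e.g., 10 uncertain code inputs

sampler = qmc.LatinHypercube(d=n_params, seed=1)
unit = sampler.random(n=n_runs)       # stratified samples on [0, 1)^d

# Scale to assumed parameter ranges, e.g., multipliers in [0.8, 1.2]
params = qmc.scale(unit, l_bounds=[0.8] * n_params, u_bounds=[1.2] * n_params)

def code_surrogate(p):
    # Stand-in for a thermal-hydraulic code run: returns a scalar
    # "peak primary pressure" response (illustrative).
    return 15.0 * p[0] ** 2 * p[1] / np.mean(p[2:])

responses = np.array([code_surrogate(p) for p in params])
lo, hi = np.percentile(responses, [5, 95])
print(f"5th-95th percentile uncertainty band: [{lo:.2f}, {hi:.2f}] MPa")
```

    With the real code, each row of `params` would drive one transient simulation, and the band would be computed per output parameter and time point.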

  7. Scalable multi-objective control for large scale water resources systems under uncertainty

    Science.gov (United States)

    Giuliani, Matteo; Quinn, Julianne; Herman, Jonathan; Castelletti, Andrea; Reed, Patrick

    2016-04-01

    The use of mathematical models to support the optimal management of environmental systems has expanded rapidly in recent years due to advances in scientific knowledge of natural processes, in the efficiency of optimization techniques, and in the availability of computational resources. However, ongoing changes in climate and society introduce additional challenges for controlling these systems, ultimately motivating the emergence of complex models to explore key causal relationships and dependencies on uncontrolled sources of variability. In this work, we contribute a novel implementation of the evolutionary multi-objective direct policy search (EMODPS) method for controlling environmental systems under uncertainty. The proposed approach combines direct policy search (DPS) with hierarchical parallelization of multi-objective evolutionary algorithms (MOEAs) and offers a threefold advantage: first, the DPS simulation-based optimization can be combined with any simulation model and does not impose any constraint on the modeled information, allowing the use of exogenous information in conditioning the decisions. Moreover, the combination of DPS and MOEAs prompts the generation of Pareto-approximate sets of solutions for up to 10 objectives, thus overcoming the decision biases produced by cognitive myopia, where narrow or restrictive definitions of optimality strongly limit the discovery of decision-relevant alternatives. Finally, the use of large-scale MOEA parallelization improves the ability of the designed solutions to handle the uncertainty due to severe natural variability. The proposed approach is demonstrated on a challenging water resources management problem represented by the optimal control of a network of four multipurpose water reservoirs in the Red River basin (Vietnam). As part of the medium- to long-term energy and food security national strategy, four large reservoirs have been constructed on the Red River tributaries, which are mainly operated for hydropower

  8. How uncertainty in socio-economic variables affects large-scale transport model forecasts

    DEFF Research Database (Denmark)

    Manzo, Stefano; Nielsen, Otto Anker; Prato, Carlo Giacomo

    2015-01-01

    A strategic task assigned to large-scale transport models is to forecast the demand for transport over long periods of time to assess transport projects. However, by modelling complex systems, transport models have an inherent uncertainty which increases over time. As a consequence, the longer the period forecasted, the less reliable is the forecasted model output. Describing uncertainty propagation patterns over time is therefore important in order to provide complete information to the decision makers. Among the existing literature, only few studies analyze uncertainty propagation patterns over...

  9. Evaluating Sources of Risks in Large Engineering Projects: The Roles of Equivocality and Uncertainty

    Directory of Open Access Journals (Sweden)

    Leena Pekkinen

    2015-11-01

    Full Text Available Contemporary project risk management literature introduces uncertainty, i.e., the lack of information, as a fundamental basis of project risks. In this study the authors assert that equivocality, i.e., the existence of multiple and conflicting interpretations, can also serve as a basis for risks. Through an in-depth empirical investigation of a large complex engineering project, the authors identified risk sources rooted in situations where either uncertainty or equivocality was the predominant attribute. Information processing theory proposes different managerial practices for risk management depending on whether the risks originate in uncertainty or in equivocality.

  10. Effects of uncertainty in model predictions of individual tree volume on large area volume estimates

    Science.gov (United States)

    Ronald E. McRoberts; James A. Westfall

    2014-01-01

    Forest inventory estimates of tree volume for large areas are typically calculated by adding model predictions of volumes for individual trees. However, the uncertainty in the model predictions is generally ignored with the result that the precision of the large area volume estimates is overestimated. The primary study objective was to estimate the effects of model...

  11. Interpretation of the peak areas in gamma-ray spectra that have a large relative uncertainty

    International Nuclear Information System (INIS)

    Korun, M.; Maver Modec, P.; Vodenik, B.

    2012-01-01

    Empirical evidence is provided that the areas of peaks having a relative uncertainty in excess of 30% are overestimated. This systematic influence is of a statistical nature and originates in the way the peak-analyzing routine recognizes small peaks. It is not easy to detect this influence since it is smaller than the peak-area uncertainty. However, the systematic influence can be revealed in repeated measurements under the same experimental conditions, e.g., in background measurements. To evaluate the systematic influence, background measurements were analyzed with the peak-analyzing procedure described by Korun et al. (2008). The magnitude of the influence depends on the relative uncertainty of the peak area and may amount, under the conditions used in the peak analysis, to a factor of 5 at relative uncertainties exceeding 60%. From the measurements, the probability of type-II errors, as a function of the relative uncertainty of the peak area, was extracted. This probability is near zero below an uncertainty of 30% and rises to 90% at uncertainties exceeding 50%. - Highlights: ► A systematic influence affecting small peak areas in gamma-ray spectra is described. ► The influence originates in the peak-locating procedure, which uses a pre-determined sensitivity. ► The pre-determined sensitivity causes peak areas with large uncertainties to be overestimated. ► The influence depends on the relative uncertainty of the number of counts in the peak. ► Corrections exceeding a factor of 3 are attained at peak-area uncertainties exceeding 60%.

  12. Uncertainty of Forest Biomass Estimates in North Temperate Forests Due to Allometry: Implications for Remote Sensing

    Directory of Open Access Journals (Sweden)

    Razi Ahmed

    2013-06-01

    Full Text Available Estimates of above ground biomass density in forests are crucial for refining global climate models and understanding climate change. Although data from field studies can be aggregated to estimate carbon stocks on global scales, the sparsity of such field data, temporal heterogeneity and methodological variations introduce large errors. Remote sensing measurements from spaceborne sensors are a realistic alternative for global carbon accounting; however, the uncertainty of such measurements is not well known and remains an active area of research. This article describes an effort to collect field data at the Harvard and Howland Forest sites, set in the temperate forests of the Northeastern United States in an attempt to establish ground truth forest biomass for calibration of remote sensing measurements. We present an assessment of the quality of ground truth biomass estimates derived from three different sets of diameter-based allometric equations over the Harvard and Howland Forests to establish the contribution of errors in ground truth data to the error in biomass estimates from remote sensing measurements.

  13. Procedure to approximately estimate the uncertainty of material ratio parameters due to inhomogeneity of surface roughness

    International Nuclear Information System (INIS)

    Hüser, Dorothee; Thomsen-Schmidt, Peter; Hüser, Jonathan; Rief, Sebastian; Seewig, Jörg

    2016-01-01

    Roughness parameters that characterize contacting surfaces with regard to friction and wear are commonly stated without uncertainties, or with an uncertainty taking into account only a very limited number of aspects, such as the repeatability or reproducibility (homogeneity) of the specimen. This makes it difficult to discriminate between different values of single roughness parameters. Therefore uncertainty assessment methods are required that take all relevant aspects into account. In the literature this is rarely done, and examples specific to parameters used in friction and wear are not yet given. We propose a procedure to derive the uncertainty from a single profile, employing a statistical method that is based on the statistical moments of the amplitude distribution and the autocorrelation length of the profile. To show the possibilities and the limitations of this method we compare the uncertainty derived from a single profile with that derived from a high-statistics experiment. (paper)
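    A minimal sketch of extracting the statistical moments of the amplitude distribution and the autocorrelation length from a single profile; the synthetic profile and the 1/e definition of the correlation length are assumptions, not the paper's procedure:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic roughness profile: correlated Gaussian heights (illustrative)
n, dx = 4096, 0.5                      # points, sampling step in um
white = rng.normal(size=n)
kernel = np.exp(-np.arange(-50, 51) ** 2 / (2 * 10.0 ** 2))
z = np.convolve(white, kernel / kernel.sum(), mode="same")
z -= z.mean()

# Statistical moments of the amplitude distribution
rq = np.sqrt(np.mean(z ** 2))                  # rms roughness (Rq)
skew = np.mean(z ** 3) / rq ** 3               # skewness (Rsk)
kurt = np.mean(z ** 4) / rq ** 4               # kurtosis (Rku)

# Autocorrelation length: lag where the normalized ACF first drops below 1/e
acf = np.correlate(z, z, mode="full")[n - 1:] / (n * rq ** 2)
lag = np.argmax(acf < np.exp(-1.0))
print(f"Rq={rq:.3f} um  Rsk={skew:+.2f}  Rku={kurt:.2f}  "
      f"correlation length ~ {lag * dx:.1f} um")
```

    These profile statistics are the inputs from which a single-profile uncertainty estimate of a material ratio parameter could then be derived.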

  14. Uncertainty quantification of surface-water/groundwater exchange estimates in large wetland systems using Python

    Science.gov (United States)

    Hughes, J. D.; Metz, P. A.

    2014-12-01

    Most watershed studies include observation-based water budget analyses to develop first-order estimates of significant flow terms. Surface-water/groundwater (SWGW) exchange is typically assumed to be equal to the residual of the sum of inflows and outflows in a watershed. These estimates of SWGW exchange, however, are highly uncertain as a result of the propagation of uncertainty inherent in the calculation or processing of the other terms of the water budget, such as stage-area-volume relations, and uncertainties associated with land-cover based evapotranspiration (ET) rate estimates. Furthermore, the uncertainty of estimated SWGW exchanges can be magnified in large wetland systems that transition from dry to wet during wet periods. Although it is well understood that observation-based estimates of SWGW exchange are uncertain it is uncommon for the uncertainty of these estimates to be directly quantified. High-level programming languages like Python can greatly reduce the effort required to (1) quantify the uncertainty of estimated SWGW exchange in large wetland systems and (2) evaluate how different approaches for partitioning land-cover data in a watershed may affect the water-budget uncertainty. We have used Python with the Numpy, Scipy.stats, and pyDOE packages to implement an unconstrained Monte Carlo approach with Latin Hypercube sampling to quantify the uncertainty of monthly estimates of SWGW exchange in the Floral City watershed of the Tsala Apopka wetland system in west-central Florida, USA. Possible sources of uncertainty in the water budget analysis include rainfall, ET, canal discharge, and land/bathymetric surface elevations. Each of these input variables was assigned a probability distribution based on observation error or spanning the range of probable values. The Monte Carlo integration process exposes the uncertainties in land-cover based ET rate estimates as the dominant contributor to the uncertainty in SWGW exchange estimates. We will discuss
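    In the spirit of the toolchain named in the abstract (NumPy, SciPy, pyDOE), here is a minimal sketch of an LHS-based Monte Carlo water budget; the distributions and magnitudes are invented for illustration, not the Floral City data:

```python
import numpy as np
from scipy import stats
from pyDOE import lhs

n = 10000  # Monte Carlo realizations

# Latin hypercube on the unit cube, transformed through assumed per-term
# distributions (magnitudes in mm/month are illustrative).
u = lhs(4, samples=n)
rain   = stats.norm(120.0, 10.0).ppf(u[:, 0])   # rainfall
et     = stats.norm(90.0, 20.0).ppf(u[:, 1])    # land-cover based ET
canal  = stats.norm(15.0, 5.0).ppf(u[:, 2])     # canal discharge out
dstore = stats.norm(5.0, 8.0).ppf(u[:, 3])      # storage change

# SWGW exchange as the water-budget residual (positive = net inflow)
swgw = dstore - rain + et + canal
lo, med, hi = np.percentile(swgw, [5, 50, 95])
print(f"SWGW exchange: median={med:.1f} mm, 90% interval=({lo:.1f}, {hi:.1f})")

# Analytical variance shares show which term dominates the uncertainty
total_var = 10**2 + 20**2 + 5**2 + 8**2
for name, sd in [("rain", 10.0), ("ET", 20.0), ("canal", 5.0), ("storage", 8.0)]:
    print(f"{name:8s} share of variance: {sd**2 / total_var:.0%}")
```

    With these assumed spreads the ET term dominates the residual's variance, mirroring the abstract's finding that land-cover based ET estimates drive the SWGW uncertainty.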

  15. A new system to quantify uncertainties in LEO satellite position determination due to space weather events

    Data.gov (United States)

    National Aeronautics and Space Administration — We propose to develop a new system for quantitative assessment of uncertainties in LEO satellite position caused by storm time changes in space environmental...

  16. Large uncertainty in carbon uptake potential of land-based climate-change mitigation efforts.

    Science.gov (United States)

    Krause, Andreas; Pugh, Thomas A M; Bayer, Anita D; Li, Wei; Leung, Felix; Bondeau, Alberte; Doelman, Jonathan C; Humpenöder, Florian; Anthoni, Peter; Bodirsky, Benjamin L; Ciais, Philippe; Müller, Christoph; Murray-Tortarolo, Guillermo; Olin, Stefan; Popp, Alexander; Sitch, Stephen; Stehfest, Elke; Arneth, Almut

    2018-07-01

    Most climate mitigation scenarios involve negative emissions, especially those that aim to limit global temperature increase to 2°C or less. However, the carbon uptake potential in land-based climate change mitigation efforts is highly uncertain. Here, we address this uncertainty by using two land-based mitigation scenarios from two land-use models (IMAGE and MAgPIE) as input to four dynamic global vegetation models (DGVMs; LPJ-GUESS, ORCHIDEE, JULES, LPJmL). Each of the four combinations of land-use models and mitigation scenarios aimed for a cumulative carbon uptake of ~130 GtC by the end of the century, achieved either via the cultivation of bioenergy crops combined with carbon capture and storage (BECCS) or avoided deforestation and afforestation (ADAFF). Results suggest large uncertainty in simulated future land demand and carbon uptake rates, depending on the assumptions related to land use and land management in the models. Total cumulative carbon uptake in the DGVMs is highly variable across mitigation scenarios, ranging between 19 and 130 GtC by year 2099. Only one out of the 16 combinations of mitigation scenarios and DGVMs achieves an equivalent or higher carbon uptake than achieved in the land-use models. The large differences in carbon uptake between the DGVMs and their discrepancy against the carbon uptake in IMAGE and MAgPIE are mainly due to different model assumptions regarding bioenergy crop yields and due to the simulation of soil carbon response to land-use change. Differences between land-use models and DGVMs regarding forest biomass and the rate of forest regrowth also have an impact, albeit smaller, on the results. Given the low confidence in simulated carbon uptake for a given land-based mitigation scenario, and that negative emissions simulated by the DGVMs are typically lower than assumed in scenarios consistent with the 2°C target, relying on negative emissions to mitigate climate change is a highly uncertain strategy. © 2018 John

  17. Large-Scale Transport Model Uncertainty and Sensitivity Analysis: Distributed Sources in Complex Hydrogeologic Systems

    International Nuclear Information System (INIS)

    Sig Drellack, Lance Prothro

    2007-01-01

    The Underground Test Area (UGTA) Project of the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office is in the process of assessing and developing regulatory decision options based on modeling predictions of contaminant transport from underground testing of nuclear weapons at the Nevada Test Site (NTS). The UGTA Project is attempting to develop an effective modeling strategy that addresses and quantifies multiple components of uncertainty including natural variability, parameter uncertainty, conceptual/model uncertainty, and decision uncertainty in translating model results into regulatory requirements. The modeling task presents multiple unique challenges to the hydrological sciences as a result of the complex fractured and faulted hydrostratigraphy, the distributed locations of sources, the suite of reactive and non-reactive radionuclides, and uncertainty in conceptual models. Characterization of the hydrogeologic system is difficult and expensive because of deep groundwater in the arid desert setting and the large spatial setting of the NTS. Therefore, conceptual model uncertainty is partially addressed through the development of multiple alternative conceptual models of the hydrostratigraphic framework and multiple alternative models of recharge and discharge. Uncertainty in boundary conditions is assessed through development of alternative groundwater fluxes through multiple simulations using the regional groundwater flow model. Calibration of alternative models to heads and measured or inferred fluxes has not proven to provide clear measures of model quality. Therefore, model screening by comparison to independently-derived natural geochemical mixing targets through cluster analysis has also been invoked to evaluate differences between alternative conceptual models. Advancing multiple alternative flow models, sensitivity of transport predictions to parameter uncertainty is assessed through Monte Carlo simulations. The

  18. High level of CA 125 due to large endometrioma.

    Science.gov (United States)

    Phupong, Vorapong; Chen, Orawan; Ultchaswadi, Pornthip

    2004-09-01

    CA 125 is a tumor-associated antigen. High levels are usually associated with ovarian malignancies, whereas smaller increases are associated with benign gynecologic conditions. The authors report a high level of CA 125 in a case of large ovarian endometrioma. A 45-year-old nulliparous Thai woman presented with an increase of her abdominal girth over 7 months. A transabdominal ultrasonogram demonstrated a large ovarian cyst and multiple small uterine leiomyomas, and the serum CA 125 level was 1,006 U/ml. The preoperative diagnosis was ovarian cancer with leiomyoma uteri. Exploratory laparotomy was performed. There were a large right ovarian endometrioma, a small left ovarian endometrioma, and multiple small leiomyomas. Total abdominal hysterectomy and bilateral salpingo-oophorectomy were performed, and histopathology confirmed the diagnosis of endometrioma and leiomyoma. The serum CA 125 level declined to non-detectable by the 4th week. She was well at discharge and throughout her 4-week follow-up period. Although a very high level of CA 125 is associated with a malignant process, it can also be found in benign conditions such as a large endometrioma. The case emphasizes the association of high levels of CA 125 with benign gynecologic conditions.

  19. A new robust adaptive controller for vibration control of active engine mount subjected to large uncertainties

    International Nuclear Information System (INIS)

    Fakhari, Vahid; Choi, Seung-Bok; Cho, Chang-Hyun

    2015-01-01

    This work presents a new robust model reference adaptive control (MRAC) for the control of vibration caused by a vehicle engine, using an electromagnetic type of active engine mount. Vibration isolation performances of the active mount associated with the robust controller are evaluated in the presence of large uncertainties. As a first step, an active mount with a linear solenoid actuator is prepared and its dynamic model is identified via experimental tests. Subsequently, a new robust MRAC based on the gradient method with σ-modification is designed by selecting a proper reference model. In designing the robust adaptive control, structured (parametric) uncertainties in the stiffness of the passive part of the mount and in the damping ratio of the active part of the mount are considered to investigate the robustness of the proposed controller. Experimental and simulation results are presented to evaluate performance, focusing on the robustness of the controller in the face of large uncertainties. The obtained results show that the proposed controller can provide robust vibration control performance even in the presence of large uncertainties, achieving effective vibration isolation. (paper)
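    A minimal sketch of a gradient adaptive law with σ-modification on a scalar toy plant (not the engine-mount model of the paper); the plant parameters, reference model, and gains are assumptions:

```python
import numpy as np

# Toy first-order plant x' = a*x + b*u with a, b unknown to the controller;
# reference model x_m' = a_m*x_m + b_m*r.
a, b = -1.0, 2.0            # true plant
a_m, b_m = -4.0, 4.0        # reference model
gamma, sigma = 5.0, 0.05    # adaptation gain and sigma-modification (assumed)

dt, T = 1e-3, 10.0
x = x_m = 0.0
kx = kr = 0.0               # adaptive feedback/feedforward gains

for i in range(int(T / dt)):
    r = np.sign(np.sin(2 * np.pi * 0.5 * i * dt))   # square-wave reference
    u = kx * x + kr * r
    e = x - x_m
    # Gradient adaptive laws with sigma-modification (leakage) for robustness
    kx += dt * (-gamma * e * x - gamma * sigma * kx)
    kr += dt * (-gamma * e * r - gamma * sigma * kr)
    x   += dt * (a * x + b * u)
    x_m += dt * (a_m * x_m + b_m * r)

print(f"final tracking error {abs(x - x_m):.4f}, gains kx={kx:.2f}, kr={kr:.2f}")
# Ideal gains would be kx=(a_m-a)/b=-1.5 and kr=b_m/b=2.0; the leakage term
# trades exact convergence for bounded parameter drift under uncertainty.
```

    The σ-term is what keeps the adapted gains bounded when disturbances or unmodeled dynamics would otherwise cause parameter drift, which is the robustness property the abstract emphasizes.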

  20. Deterministic sensitivity and uncertainty analysis for large-scale computer models

    International Nuclear Information System (INIS)

    Worley, B.A.; Pin, F.G.; Oblow, E.M.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.

    1988-01-01

    This paper presents a comprehensive approach to sensitivity and uncertainty analysis of large-scale computer models that is analytic (deterministic) in principle and that is firmly based on the model equations. The theory and application of two systems based upon computer calculus, GRESS and ADGEN, are discussed relative to their role in calculating model derivatives and sensitivities without a prohibitive initial manpower investment. Storage and computational requirements for these two systems are compared for a gradient-enhanced version of the PRESTO-II computer model. A Deterministic Uncertainty Analysis (DUA) method that retains the characteristics of analytically computing result uncertainties based upon parameter probability distributions is then introduced and results from recent studies are shown. 29 refs., 4 figs., 1 tab
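    The DUA idea, propagating parameter distributions through first-order derivatives, can be sketched with finite differences standing in for the computer-calculus derivatives that GRESS/ADGEN generate; the model function and parameter distributions are illustrative assumptions:

```python
import numpy as np

def model(x):
    # Stand-in for a large code: scalar response of three parameters.
    return x[0] ** 2 * np.exp(0.1 * x[1]) / (1.0 + x[2])

mean = np.array([2.0, 5.0, 0.5])
sd   = np.array([0.2, 1.0, 0.05])   # assumed parameter std devs

# First-order (derivative-based) variance propagation; central differences
# need two runs per parameter, whereas computer calculus obtains the same
# derivatives from a single enhanced model run.
grad = np.empty(3)
for i in range(3):
    dx = np.zeros(3); dx[i] = 1e-6
    grad[i] = (model(mean + dx) - model(mean - dx)) / 2e-6
sd_dua = np.sqrt(np.sum((grad * sd) ** 2))

# Reference: brute-force Monte Carlo with many model executions
rng = np.random.default_rng(0)
mc = np.array([model(mean + rng.normal(0, sd)) for _ in range(50000)])

print(f"DUA sigma = {sd_dua:.4f}  (6 model runs)")
print(f"MC  sigma = {mc.std():.4f}  (50000 model runs)")
```

    The near-agreement at a tiny fraction of the model executions is the efficiency argument the abstract makes for the deterministic approach.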

  1. Errors in mean and fluctuating velocity due to PIV bias and precision uncertainties

    International Nuclear Information System (INIS)

    Wilson, B.; Smith, B.L.

    2011-01-01

    Particle Image Velocimetry is a powerful fluid velocity measurement tool that has recently become important for CFD validation experiments. Knowledge of experimental uncertainty is important to CFD validation, but the uncertainty of PIV is very complex and not well understood. Previous work has shown that PIV measurements can become 'noisy' in regions of high shear as well as regions of small displacement. This paper aims to demonstrate the impact of these effects on validation data by comparing PIV data to data acquired using hot-wire anemometry, which does not suffer from the same issues. It is confirmed that shear and insufficient particle displacements can result in elevated measurements of turbulence levels. (author)

  2. Uncertainties in HTGR neutron-physical characteristics due to computational errors and technological tolerances

    International Nuclear Information System (INIS)

    Glushkov, E.S.; Grebennik, V.N.; Davidenko, V.G.; Kosovskij, V.G.; Smirnov, O.N.; Tsibul'skij, V.F.

    1991-01-01

    The paper is dedicated to the consideration of uncertainties in the neutron-physical characteristics (NPC) of high-temperature gas-cooled reactors (HTGR) with a core formed as a bed of spherical fuel elements, uncertainties which are caused by errors in the calculation methods and by deviations of the HTGR parameters affecting the NPC from their mean values. Among the NPC are: the effective multiplication factor, burnup depth, reactivity effects, control element worth, the distribution of neutrons and heat release over the reactor core, etc. A short description of the calculation methods and codes used for HTGR calculations in the USSR is given, and evaluations of the NPC uncertainties of a methodical character are presented. In addition, the effect upon the NPC of technological deviations in the parameters of the reactor's main elements, such as the uranium amount in a spherical fuel element or the amount of neutron-absorbing impurities in the reactor core and reflector, is analyzed. Results of some experimental studies of the NPC of critical assemblies with graphite moderator are given as applied to HTGR. The comparison of calculation results with experiments on critical assemblies has made it possible to evaluate the uncertainties of the calculated description of HTGR NPC. (author). 8 refs, 8 figs, 6 tabs

  3. Increasing stress on disaster risk finance due to large floods

    Science.gov (United States)

    Jongman, Brenden; Hochrainer-Stigler, Stefan; Feyen, Luc; Aerts, Jeroen; Mechler, Reinhard; Botzen, Wouter; Bouwer, Laurens; Pflug, Georg; Rojas, Rodrigo; Ward, Philip

    2014-05-01

    Recent major flood disasters have shown that single extreme events can affect multiple countries simultaneously, which puts high pressure on trans-national risk reduction and risk transfer mechanisms. To date, little is known about such flood hazard interdependencies across regions, and the corresponding joint risks at regional to continental scales. Reliable information on correlated loss probabilities is crucial for developing robust insurance schemes and public adaptation funds, and for enhancing our understanding of climate change impacts. Here we show that extreme discharges are strongly correlated across European river basins and that these correlations can, or should, be used in national to continental scale risk assessment. We present probabilistic trends in continental flood risk, and demonstrate that currently observed extreme flood losses could more than double in frequency by 2050 under future climate change and socioeconomic development. The results demonstrate that accounting for tail dependencies leads to higher estimates of extreme losses than estimates based on the traditional assumption of independence between basins. We suggest that risk management for these increasing losses is largely feasible, and we demonstrate that risk can be shared by expanding risk transfer financing, reduced by investing in flood protection, or absorbed by enhanced solidarity between countries. We conclude that these measures have vastly different efficiency, equity and acceptability implications, which need to be taken into account in broader consultation, for which our analysis provides a basis.

  4. Systematic uncertainties in long-baseline neutrino oscillations for large θ₁₃

    Energy Technology Data Exchange (ETDEWEB)

    Coloma, Pilar; Huber, Patrick; Kopp, Joachim; Winter, Walter

    2013-02-01

    We study the physics potential of future long-baseline neutrino oscillation experiments at large θ₁₃, focusing especially on systematic uncertainties. We discuss superbeams, beta beams, and neutrino factories, and for the first time compare these experiments on an equal footing with respect to systematic errors. We explicitly simulate near detectors for all experiments, we use the same implementation of systematic uncertainties for all experiments, and we fully correlate the uncertainties among detectors, oscillation channels, and beam polarizations as appropriate. As our primary performance indicator, we use the achievable precision in the measurement of the CP violating phase δCP. We find that a neutrino factory is the only instrument that can measure δCP with a precision similar to that of its quark sector counterpart. All neutrino beams operating at peak energies ≳2 GeV are quite robust with respect to systematic uncertainties, whereas especially beta beams and T2HK suffer from large cross section uncertainties in the quasi-elastic regime, combined with their inability to measure the appearance signal cross sections at the near detector. A noteworthy exception is the combination of a γ = 100 beta beam with an SPL-based superbeam, in which all relevant cross sections can be measured in a self-consistent way. This provides a performance second only to the neutrino factory. For other superbeam experiments such as LBNO and the setups studied in the context of the LBNE reconfiguration effort, statistics turns out to be the bottleneck. In almost all cases, the near detector is not critical to control systematics, since the combined fit of appearance and disappearance data already constrains the impact of systematics to be small, provided that the three active flavor oscillation framework is valid.

  5. Variations in environmental tritium doses due to meteorological data averaging and uncertainties in pathway model parameters

    Energy Technology Data Exchange (ETDEWEB)

    Kock, A.

    1996-05-01

    The objectives of this research are: (1) to calculate and compare off site doses from atmospheric tritium releases at the Savannah River Site using monthly versus 5 year meteorological data and annual source terms, including additional seasonal and site specific parameters not included in present annual assessments; and (2) to calculate the range of the above dose estimates based on distributions in model parameters given by uncertainty estimates found in the literature. Consideration will be given to the sensitivity of parameters given in former studies.

  6. Variations in environmental tritium doses due to meteorological data averaging and uncertainties in pathway model parameters

    International Nuclear Information System (INIS)

    Kock, A.

    1996-05-01

    The objectives of this research are: (1) to calculate and compare off site doses from atmospheric tritium releases at the Savannah River Site using monthly versus 5 year meteorological data and annual source terms, including additional seasonal and site specific parameters not included in present annual assessments; and (2) to calculate the range of the above dose estimates based on distributions in model parameters given by uncertainty estimates found in the literature. Consideration will be given to the sensitivity of parameters given in former studies

  7. Probabilistic flood inundation mapping at ungauged streams due to roughness coefficient uncertainty in hydraulic modelling

    Science.gov (United States)

    Papaioannou, George; Vasiliades, Lampros; Loukas, Athanasios; Aronica, Giuseppe T.

    2017-04-01

    Probabilistic flood inundation mapping is performed and analysed at the ungauged Xerias stream reach, Volos, Greece. The study evaluates the uncertainty introduced by the roughness coefficient values of hydraulic models into flood inundation modelling and mapping. The well-established one-dimensional (1-D) hydraulic model HEC-RAS is selected and linked to Monte Carlo simulations of hydraulic roughness. Terrestrial Laser Scanner data have been used to produce a high-quality DEM to minimise input-data uncertainty and to improve the accuracy of the stream channel topography required by the hydraulic model. Initial Manning's n roughness coefficient values are based on pebble-count field surveys and empirical formulas. Various theoretical probability distributions are fitted and evaluated for their accuracy in representing the estimated roughness values. Finally, Latin Hypercube Sampling has been used to generate different sets of Manning roughness values, and flood inundation probability maps have been created with the use of Monte Carlo simulations. Historical flood extent data from an extreme historical flash flood event are used for validation of the method. The calibration process is based on binary wet-dry reasoning with the use of the Median Absolute Percentage Error evaluation metric. The results show that the proposed procedure supports probabilistic flood hazard mapping at ungauged rivers and provides water resources managers with valuable information for planning and implementing flood risk mitigation strategies.

  8. Uncertainty of the 20th century sea-level rise due to vertical land motion errors

    Science.gov (United States)

    Santamaría-Gómez, Alvaro; Gravelle, Médéric; Dangendorf, Sönke; Marcos, Marta; Spada, Giorgio; Wöppelmann, Guy

    2017-09-01

    Assessing the vertical land motion (VLM) at tide gauges (TG) is crucial to understanding global and regional mean sea-level changes (SLC) over the last century. However, estimating VLM with accuracy better than a few tenths of a millimeter per year is not a trivial undertaking, and many factors, including the reference frame uncertainty, must be considered. Using a novel reconstruction approach and updated geodetic VLM corrections, we found the terrestrial reference frame and the estimated VLM uncertainty may contribute to the global SLC rate error by ±0.2 mm yr⁻¹. In addition, a spurious global SLC acceleration may be introduced up to ±4.8 × 10⁻³ mm yr⁻². Regional SLC rate and acceleration errors may be inflated by a factor of 3 compared to the global ones. The difference of VLM from two independent Glacio-Isostatic Adjustment models introduces global SLC rate and acceleration biases at the level of ±0.1 mm yr⁻¹ and 2.8 × 10⁻³ mm yr⁻², increasing up to 0.5 mm yr⁻¹ and 9 × 10⁻³ mm yr⁻² for the regional SLC. Errors in VLM corrections need to be budgeted when considering past and future SLC scenarios.

  9. Uncertainty in soil-structure interaction analysis of a nuclear power plant due to different analytical techniques

    International Nuclear Information System (INIS)

    Chen, J.C.; Chun, R.C.; Goudreau, G.L.; Maslenikov, O.R.; Johnson, J.J.

    1984-01-01

    This paper summarizes the results of the dynamic response analysis of the Zion reactor containment building using three different soil-structure interaction (SSI) analytical procedures: the substructure method, CLASSI; the equivalent linear finite element approach, ALUSH; and the nonlinear finite element procedure, DYNA3D. Uncertainties in analyzing a soil-structure system due to the SSI analysis procedures were investigated. Responses at selected locations in the structure were compared in terms of peak accelerations and response spectra.

  10. Uncertainty in sap flow-based transpiration due to xylem properties

    Science.gov (United States)

    Looker, N. T.; Hu, J.; Martin, J. T.; Jencso, K. G.

    2014-12-01

    Transpiration, the evaporative loss of water from plants through their stomata, is a key component of the terrestrial water balance, influencing streamflow as well as regional convective systems. From a plant physiological perspective, transpiration is both a means of avoiding destructive leaf temperatures through evaporative cooling and a consequence of water loss through stomatal uptake of carbon dioxide. Despite its hydrologic and ecological significance, transpiration remains a notoriously challenging process to measure in heterogeneous landscapes. Sap flow methods, which estimate transpiration by tracking the velocity of a heat pulse emitted into the tree sap stream, have proven effective for relating transpiration dynamics to climatic variables. To scale sap flow-based transpiration from the measured domain (often a small sapwood area) to the whole-tree level, researchers generally assume constancy of scale factors (e.g., wood thermal diffusivity (k), radial and azimuthal distributions of sap velocity, and conducting sapwood area (As)) through time, across space, and within species. For the widely used heat-ratio sap flow method (HRM), we assessed the sensitivity of transpiration estimates to uncertainty in k (a function of wood moisture content and density) and As. A sensitivity analysis informed by distributions of wood moisture content, wood density, and As sampled across a gradient of water availability indicates that uncertainty in these variables can impart substantial error when scaling sap flow measurements to the whole tree. For species with variable wood properties, applying the HRM with a spatially constant k or As may systematically over- or underestimate whole-tree transpiration rates, resulting in compounded error in ecosystem-scale estimates of transpiration.
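    A minimal sketch of the sensitivity to k in the heat-ratio method, which computes heat pulse velocity from the ratio of temperature rises at probes equidistant from the heater; the probe geometry, signal values, and k spread below are assumptions for illustration:

```python
import numpy as np

def heat_pulse_velocity(k, x, v1, v2):
    """Heat-ratio method: Vh = (k / x) * ln(v1 / v2), in cm/s.

    k      : thermal diffusivity of fresh wood (cm^2/s)
    x      : probe distance from the heater (cm)
    v1, v2 : temperature rises downstream/upstream of the heater
    """
    return (k / x) * np.log(v1 / v2)

x, v1, v2 = 0.6, 1.8, 1.2          # illustrative probe geometry and signals

# Nominal diffusivity for fresh wood, with a spread reflecting variation in
# wood moisture content and density (values assumed here)
k_nominal = 2.5e-3
for k in (0.8 * k_nominal, k_nominal, 1.2 * k_nominal):
    vh = heat_pulse_velocity(k, x, v1, v2) * 3600.0   # cm/h
    print(f"k={k:.2e} cm^2/s  ->  Vh={vh:.1f} cm/h")
# Vh is linear in k, so a 20% error in k maps directly to a 20% error in
# sap velocity, and hence in the scaled whole-tree transpiration.
```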

  11. Sustainability Risk Evaluation for Large-Scale Hydropower Projects with Hybrid Uncertainty

    Directory of Open Access Journals (Sweden)

    Weiyao Tang

    2018-01-01

    Full Text Available As large-scale hydropower projects are influenced by many factors, risk evaluations are complex. This paper considers a hydropower project as a complex system from the perspective of sustainability risk, and divides it into three subsystems: the natural environment subsystem, the eco-environment subsystem and the socioeconomic subsystem. Risk-related factors and quantitative dimensions of each subsystem are comprehensively analyzed, with the uncertainty of some quantitative dimensions handled by hybrid uncertainty methods, including fuzzy uncertainty (e.g., the national health degree, the national happiness degree, the protection of cultural heritage), random uncertainty (e.g., underground water levels, river width), and fuzzy random uncertainty (e.g., runoff volumes, precipitation). By calculating the sustainability risk-related degree of each of the risk-related factors, a sustainability risk-evaluation model is built. Based on the calculation results, the critical sustainability risk-related factors are identified and targeted in order to reduce the losses caused by sustainability risk factors of the hydropower project. A case study at the under-construction Baihetan hydropower station is presented to demonstrate the viability of the risk-evaluation model and to provide a reference for the sustainability risk evaluation of other large-scale hydropower projects.

  12. Uncertainty analysis methods for quantification of source terms using a large computer code

    International Nuclear Information System (INIS)

    Han, Seok Jung

    1997-02-01

    Quantification of uncertainties in source term estimations by a large computer code, such as MELCOR or MAAP, is an essential process in current probabilistic safety assessments (PSAs). The main objectives of the present study are (1) to investigate the applicability of a combined procedure of the response surface method (RSM), based on input determined from a statistical design, and the Latin hypercube sampling (LHS) technique for the uncertainty analysis of CsI release fractions under a hypothetical severe accident sequence of a station blackout at the Young-Gwang nuclear power plant, using the MAAP3.0B code as a benchmark problem; and (2) to propose a new measure of uncertainty importance based on distributional sensitivity analysis. On the basis of the results obtained in the present work, the RSM is recommended as a principal tool for overall uncertainty analysis in source term quantification, while the LHS is used in the calculation of standardized regression coefficients (SRC) and standardized rank regression coefficients (SRRC) to determine the subset of the most important input parameters in the final screening step and to check the cumulative distribution functions (cdfs) obtained by the RSM. Verification of the response surface model for sufficient accuracy is a prerequisite for the reliability of the final results obtained by the combined procedure proposed in the present work. In the present study a new measure has been developed that utilizes the metric distance obtained from cumulative distribution functions (cdfs). The measure has been evaluated for three different cases of distributions in order to assess its characteristics: in the first two cases the distributions are known analytical distributions, while in the third the distribution is unknown. The first case is given by symmetric analytical distributions. The second case consists of two asymmetric distributions whose skewness is non-zero
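    A minimal sketch of computing SRCs and SRRCs by standardized least-squares regression on raw values and on ranks; the toy response function standing in for the code is an assumption:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n = 500

# Toy "code" with three inputs of very different influence (illustrative)
X = rng.uniform(0.0, 1.0, size=(n, 3))
y = 5.0 * X[:, 0] ** 3 + 1.0 * X[:, 1] + 0.1 * X[:, 2] + rng.normal(0, 0.05, n)

def standardized_coeffs(X, y):
    # Regress the standardized output on the standardized inputs
    Xs = (X - X.mean(0)) / X.std(0)
    ys = (y - y.mean()) / y.std()
    beta, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
    return beta

src = standardized_coeffs(X, y)                        # on raw values
srrc = standardized_coeffs(stats.rankdata(X, axis=0),  # on ranks: robust to
                           stats.rankdata(y))          # monotone nonlinearity
print("SRC :", np.round(src, 2))
print("SRRC:", np.round(srrc, 2))
```

    Ranking the dominant input by |SRC| or |SRRC| is the screening step the abstract describes; the rank-based coefficients stay informative when the code response is nonlinear but monotone.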

  13. Kalman filter application to mitigate the errors in the trajectory simulations due to the lunar gravitational model uncertainty

    International Nuclear Information System (INIS)

    Gonçalves, L D; Rocco, E M; De Moraes, R V; Kuga, H K

    2015-01-01

    This paper aims to simulate part of the orbital trajectory of the Lunar Prospector mission to analyze the relevance of using a Kalman filter to estimate the trajectory. For this study the disturbance due to the lunar gravitational potential is considered, using one of the most recent models, the LP100K model, which is based on spherical harmonics and considers degree and order up to the value 100. In order to simplify the expression of the gravitational potential and, consequently, to reduce the computational effort required in the simulation, lower values of degree and order are used in some cases. Following this aim, an analysis is made of the error introduced into the simulations when such values of degree and order are used to propagate the spacecraft trajectory and control. This analysis was done using the standard deviation that characterizes the uncertainty for each of the values of degree and order used in the LP100K model for the satellite orbit. With knowledge of the uncertainty of the adopted gravity model, lunar orbital trajectory simulations may be accomplished considering these values of uncertainty. Furthermore, a Kalman filter was also used, which considers the sensor uncertainty that defines the satellite position at each step of the simulation and the uncertainty of the model, by means of the characteristic variance of the truncated gravity model. This procedure thus represents an effort to bring the results obtained using lower values of degree and order of the spherical harmonics closer to the results that would be attained if the maximum accuracy of the LP100K model were adopted. A comparison is also made between the error in the satellite position when the Kalman filter is used and when it is not. The data for the comparison were obtained from the standard deviation in the velocity increment of the space vehicle. (paper)
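    A minimal sketch of the filtering idea: a linear Kalman filter in which the process-noise covariance stands in for the truncated gravity model's variance and the measurement noise for the position sensor. The one-dimensional constant-velocity dynamics and all numerical values are illustrative assumptions, not the LP100K setup:

```python
import numpy as np

rng = np.random.default_rng(3)
dt, steps = 1.0, 200

F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity transition
H = np.array([[1.0, 0.0]])              # position-only sensor
Q = np.diag([1e-4, 1e-3])  # process noise: variance of the truncated model (assumed)
R = np.array([[25.0]])     # sensor position variance (assumed)

# Truth includes an unmodeled acceleration, standing in for the neglected
# high-degree harmonics of the gravity field
truth = np.zeros((steps, 2)); truth[0] = [0.0, 10.0]
for k in range(1, steps):
    accel = 0.05 * np.sin(0.05 * k)
    truth[k] = F @ truth[k - 1] + np.array([0.5 * dt**2, dt]) * accel

x = np.array([0.0, 9.0]); P = np.eye(2) * 10.0
err_filtered, err_raw = [], []
for k in range(steps):
    z = truth[k, 0] + rng.normal(0.0, 5.0)   # noisy position measurement
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ (z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    err_filtered.append(abs(x[0] - truth[k, 0]))
    err_raw.append(abs(z - truth[k, 0]))

print(f"mean |error| raw measurements: {np.mean(err_raw):.2f} m")
print(f"mean |error| Kalman estimate : {np.mean(err_filtered):.2f} m")
```

    Inflating Q to reflect the variance of a more heavily truncated gravity model is how the filter can absorb the model error, which is the mitigation strategy the abstract describes.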

  14. Evidence Theory Based Uncertainty Quantification in Radiological Risk due to Accidental Release of Radioactivity from a Nuclear Power Plant

    International Nuclear Information System (INIS)

    Ingale, S. V.; Datta, D.

    2010-01-01

    Consequences of the accidental release of radioactivity from a nuclear power plant are assessed in terms of exposure or dose to members of the public. Assessment of risk is routed through this dose computation. The dose computation basically depends on the basic dose assessment model and the exposure pathways. One of the exposure pathways is the ingestion of contaminated food. The aim of the present paper is to compute the uncertainty associated with the risk to members of the public due to the ingestion of contaminated food. Since the governing parameters of the ingestion dose assessment model are imprecise, we use evidence theory to compute bounds on the risk. The uncertainty is addressed by the belief and plausibility fuzzy measures.
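    A minimal sketch of how belief and plausibility bound the risk of exceeding a dose limit under evidence theory; the focal intervals, probability masses, and limit are hypothetical:

```python
# Focal elements: intervals for the ingestion dose (mSv/y) with basic
# probability assignments (BPA) from evidence sources (hypothetical values).
focal = [
    ((0.05, 0.20), 0.5),
    ((0.10, 0.40), 0.3),
    ((0.30, 0.60), 0.2),
]
limit = 0.25  # dose threshold defining the risk event A = {dose > limit}

# Belief sums the masses of intervals entirely inside A; plausibility sums
# the masses of intervals that merely intersect A.
belief = sum(m for (lo, hi), m in focal if lo > limit)
plausibility = sum(m for (lo, hi), m in focal if hi > limit)

print(f"Bel(dose > {limit}) = {belief:.2f}")        # lower bound on risk
print(f"Pl(dose > {limit})  = {plausibility:.2f}")  # upper bound on risk
```

    The [Bel, Pl] interval, here [0.20, 0.50], is the risk bound that replaces a single probability when the model parameters are only imprecisely known.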

  15. Deterministic methods for sensitivity and uncertainty analysis in large-scale computer models

    International Nuclear Information System (INIS)

    Worley, B.A.; Oblow, E.M.; Pin, F.G.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.; Lucius, J.L.

    1987-01-01

    The fields of sensitivity and uncertainty analysis are dominated by statistical techniques when large-scale modeling codes are being analyzed. This paper reports on the development and availability of two systems, GRESS and ADGEN, that make use of computer calculus compilers to automate the implementation of deterministic sensitivity analysis capability into existing computer models. This automation removes the traditional limitation of deterministic sensitivity methods. The paper describes a deterministic uncertainty analysis method (DUA) that uses derivative information as a basis to propagate parameter probability distributions to obtain result probability distributions. The paper demonstrates the deterministic approach to sensitivity and uncertainty analysis as applied to a sample problem that models the flow of water through a borehole. The sample problem is used as a basis to compare the cumulative distribution function of the flow rate as calculated by the standard statistical methods and the DUA method. The DUA method gives a more accurate result based upon only two model executions compared to fifty executions in the statistical case

  16. Experimental study of geo-acoustic inversion uncertainty due to ocean sound-speed fluctuations.

    NARCIS (Netherlands)

    Siderius, M.; Nielsen, P.L.; Sellschopp, J.; Snellen, M.; Simons, D.G.

    2001-01-01

    Acoustic data measured in the ocean fluctuate due to the complex time-varying properties of the channel. When measured data are used for model-based, geo-acoustic inversion, how do acoustic fluctuations impact estimates for the seabed properties? In May 1999 SACLANT Undersea Research Center and

  17. Deterministic sensitivity and uncertainty analysis for large-scale computer models

    International Nuclear Information System (INIS)

    Worley, B.A.; Pin, F.G.; Oblow, E.M.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.

    1988-01-01

    The fields of sensitivity and uncertainty analysis have traditionally been dominated by statistical techniques when large-scale modeling codes are being analyzed. These methods are able to estimate sensitivities, generate response surfaces, and estimate response probability distributions given the input parameter probability distributions. Because the statistical methods are computationally costly, they are usually applied only to problems with relatively small parameter sets. Deterministic methods, on the other hand, are very efficient and can handle large data sets, but generally require simpler models because of the considerable programming effort required for their implementation. The first part of this paper reports on the development and availability of two systems, GRESS and ADGEN, that make use of computer calculus compilers to automate the implementation of deterministic sensitivity analysis capability into existing computer models. This automation removes the traditional limitation of deterministic sensitivity methods. This second part of the paper describes a deterministic uncertainty analysis method (DUA) that uses derivative information as a basis to propagate parameter probability distributions to obtain result probability distributions. This paper is applicable to low-level radioactive waste disposal system performance assessment

  18. Evaluation of photonuclear reaction cross-sections using the reduction method for large systematic uncertainties

    International Nuclear Information System (INIS)

    Varlamov, V.V.; Efimkin, N.G.; Ishkhanov, B.S.; Sapunenko, V.V.

    1994-12-01

    The authors describe a method based on the reduction method for the evaluation of photonuclear reaction cross-sections obtained under conditions where there are large systematic uncertainties (different instrumental functions, calibration and normalization errors). The evaluation method involves using the actual instrumental function (photon spectrum) of each individual experiment to reduce the data to a representation generated by an instrumental function of better quality. The objective is to find the most reasonably achievable monoenergetic representation of the information on the reaction cross-section derived from the results of various experiments and to take into account the calibration and normalization errors in these experiments. The method was used to obtain the evaluated total photoneutron reaction cross-section (γ,xn) for a large number of nuclei. Data obtained for ¹⁶O and ²⁰⁸Pb are presented. (author). 36 refs, 6 figs, 4 tabs

  19. Changes in Handset Performance Measures due to Spherical Radiation Pattern Measurement Uncertainty

    DEFF Research Database (Denmark)

    Nielsen, Jesper Ødum; Pedersen, Gert Frølund

    An important characteristic of a mobile handset is its ability to receive and transmit power. One way to characterize the performance of a handset in this respect is to use measurements of the spherical radiation pattern, from which the total radiated power (TRP), total isotropic sensitivity (TIS), and mean effective gain (MEG) can be derived. This work investigates offsets in the measurement system that may introduce errors in standardized performance measurements. Radiation patterns of six handsets have been measured while they were mounted at various offsets from the reference position defined by the Cellular Telecommunications & Internet Association (CTIA) certification. The changes in the performance measures are investigated for both the GSM-900 and the GSM-1800 band. Despite the deliberately large deviations from the reference position, the changes in TRP and TIS are generally within ±0.5 dB, with a maximum of about 1.4 dB. For the MEG values the results depend on the orientation of the handset with respect to the environment. Standard deviations up to about 0.5 dB and a maximum deviation of about 1.6 dB were found.

  20. Large storage operations under climate change: expanding uncertainties and evolving tradeoffs

    Science.gov (United States)

    Giuliani, Matteo; Anghileri, Daniela; Castelletti, Andrea; Vu, Phuong Nam; Soncini-Sessa, Rodolfo

    2016-03-01

    In a changing climate and society, large storage systems can play a key role in securing water, energy, and food, and in rebalancing their cross-dependencies. In this letter, we study the role of large storage operations as flexible means of adaptation to climate change. In particular, we explore the impacts of different climate projections for different future time horizons on the multi-purpose operations of the existing system of large dams in the Red River basin (China-Laos-Vietnam). We identify the main vulnerabilities of current system operations, understand the risk of failure across sectors by exploring the evolution of the system tradeoffs, quantify how the uncertainty associated with climate scenarios is expanded by the storage operations, and assess the expected costs if no adaptation is implemented. Results show that, depending on the climate scenario and the time horizon considered, the existing operations are predicted to change on average from -7 to +5% in hydropower production, +35 to +520% in flood damages, and +15 to +160% in water supply deficit. These negative impacts can be partially mitigated by adapting the existing operations to the future climate, reducing the loss of hydropower to 5% and potentially saving around 34.4 million US$ per year at the national scale. Since the Red River is paradigmatic of many river basins across southeast Asia, where new large dams are under construction or are planned to support fast-growing economies, our results can support policy makers in prioritizing responses and adaptation strategies to the changing climate.

  1. Multi-fidelity uncertainty quantification in large-scale predictive simulations of turbulent flow

    Science.gov (United States)

    Geraci, Gianluca; Jofre-Cruanyes, Lluis; Iaccarino, Gianluca

    2017-11-01

    The performance characterization of complex engineering systems often relies on accurate but computationally intensive numerical simulations. It is also well recognized that in order to obtain a reliable numerical prediction the propagation of uncertainties needs to be included. Therefore, Uncertainty Quantification (UQ) plays a fundamental role in building confidence in predictive science. Despite the great improvement in recent years, even the more advanced UQ algorithms are still limited to fairly simplified applications and only moderate parameter dimensionality. Moreover, in the case of extremely large dimensionality, sampling methods, i.e. Monte Carlo (MC) based approaches, appear to be the only viable alternative. In this talk we describe and compare a family of approaches which aim to accelerate the convergence of standard MC simulations. These methods are based on hierarchies of generalized numerical resolutions (multi-level) or model fidelities (multi-fidelity), and attempt to leverage the correlation between Low-Fidelity (LF) and High-Fidelity (HF) models to obtain a more accurate statistical estimator without introducing additional HF realizations. The performance of these methods is assessed on an irradiated particle-laden turbulent flow (PSAAP II solar energy receiver). This investigation was funded by the United States Department of Energy's (DoE) National Nuclear Security Administration (NNSA) under the Predictive Science Academic Alliance Program (PSAAP) II at Stanford University.
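
    The two-fidelity control-variate estimator described above can be sketched as follows; the model pair and sample sizes are invented stand-ins for an expensive turbulent-flow solver and its cheap surrogate, not the PSAAP II codes.

      import numpy as np

      rng = np.random.default_rng(1)

      # Hypothetical model pair: the low-fidelity model is a cheap, correlated
      # approximation of the high-fidelity one (both stand in for flow solvers).
      def f_hi(x): return np.sin(x) + 0.05 * x**2
      def f_lo(x): return np.sin(x)              # cheaper, biased surrogate

      N, M = 100, 10_000        # N high-fidelity runs, M cheap low-fidelity runs
      x_n = rng.normal(size=N)
      x_m = rng.normal(size=M)

      y_hi = f_hi(x_n)
      y_lo = f_lo(x_n)          # low fidelity evaluated on the SAME N samples
      y_lo_big = f_lo(x_m)

      # Optimal control-variate weight from the sample covariance.
      alpha = np.cov(y_hi, y_lo)[0, 1] / np.var(y_lo, ddof=1)

      # Multi-fidelity estimator: HF mean corrected by the LF discrepancy term,
      # with no additional HF realizations.
      mf_est = y_hi.mean() + alpha * (y_lo_big.mean() - y_lo.mean())
      print(f"plain MC (N={N}):     {y_hi.mean():.4f}")
      print(f"multi-fidelity (N+M): {mf_est:.4f}")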

  2. Large scale applicability of a Fully Adaptive Non-Intrusive Spectral Projection technique: Sensitivity and uncertainty analysis of a transient

    International Nuclear Information System (INIS)

    Perkó, Zoltán; Lathouwers, Danny; Kloosterman, Jan Leen; Hagen, Tim van der

    2014-01-01

    Highlights: • Grid and basis adaptive Polynomial Chaos techniques are presented for S and U analysis. • Dimensionality reduction and incremental polynomial order reduce computational costs. • An unprotected loss of flow transient is investigated in a Gas Cooled Fast Reactor. • S and U analysis is performed with MC and adaptive PC methods, for 42 input parameters. • PC accurately estimates means, variances, PDFs, sensitivities and uncertainties. - Abstract: Since the early years of reactor physics the most prominent sensitivity and uncertainty (S and U) analysis methods in the nuclear community have been adjoint based techniques. While these are very effective for pure neutronics problems due to the linearity of the transport equation, they become complicated when coupled non-linear systems are involved. With the continuous increase in computational power, such complicated multi-physics problems are becoming progressively tractable; hence affordable and easily applicable S and U analysis tools also have to be developed in parallel. For reactor physics problems for which adjoint methods are prohibitive, Polynomial Chaos (PC) techniques offer an attractive alternative to traditional random sampling based approaches. At TU Delft such PC methods have been studied for a number of years, and this paper presents a large scale application of our Fully Adaptive Non-Intrusive Spectral Projection (FANISP) algorithm for performing the sensitivity and uncertainty analysis of a Gas Cooled Fast Reactor (GFR) Unprotected Loss Of Flow (ULOF) transient. The transient was simulated using the Cathare 2 code system and a fully detailed model of the GFR2400 reactor design that was investigated in the European FP7 GoFastR project. Several sources of uncertainty were taken into account, amounting to an unusually high number of stochastic input parameters (42), and numerous output quantities were investigated. The results show consistently good performance of the applied adaptive PC
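
    A minimal sketch of non-intrusive spectral projection for a single standard-normal input is given below (the FANISP algorithm itself is adaptive and handles 42 inputs; the toy model and polynomial order are assumptions for illustration). Coefficients are obtained by Gauss-Hermite quadrature, and the mean and variance then follow directly from the coefficients.

      import numpy as np
      from math import factorial
      from numpy.polynomial.hermite_e import hermegauss, hermeval

      # Toy model response with a single standard-normal input xi (a stand-in
      # for a code like Cathare 2; the real FANISP handles 42 inputs adaptively).
      def model(xi):
          return np.exp(0.3 * xi) + 0.1 * xi**2

      P = 6                                     # polynomial order
      nodes, weights = hermegauss(P + 1)        # probabilists' Gauss-Hermite rule
      weights = weights / np.sqrt(2.0 * np.pi)  # normalise to the N(0,1) measure

      # Spectral projection: c_k = E[model * He_k] / E[He_k^2], with E[He_k^2] = k!
      coeffs = np.zeros(P + 1)
      for k in range(P + 1):
          ek = np.zeros(k + 1)
          ek[k] = 1.0
          coeffs[k] = np.sum(weights * model(nodes) * hermeval(nodes, ek)) / factorial(k)

      mean = coeffs[0]                          # mean is the zeroth coefficient
      var = sum(coeffs[k]**2 * factorial(k) for k in range(1, P + 1))
      print(f"PC mean = {mean:.4f}, PC variance = {var:.4f}")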

  3. Uncertainty of SWAT model at different DEM resolutions in a large mountainous watershed.

    Science.gov (United States)

    Zhang, Peipei; Liu, Ruimin; Bao, Yimeng; Wang, Jiawei; Yu, Wenwen; Shen, Zhenyao

    2014-04-15

    The objective of this study was to enhance understanding of the sensitivity of the SWAT model to the resolutions of Digital Elevation Models (DEMs) based on the analysis of multiple evaluation indicators. The Xiangxi River, a large tributary of Three Gorges Reservoir in China, was selected as the study area. A range of 17 DEM spatial resolutions, from 30 to 1000 m, was examined, and the annual and monthly model outputs based on each resolution were compared. The following results were obtained: (i) sediment yield was greatly affected by DEM resolution; (ii) the prediction of dissolved oxygen load was significantly affected by DEM resolutions coarser than 500 m; (iii) Total Nitrogen (TN) load was not greatly affected by the DEM resolution; (iv) Nitrate Nitrogen (NO₃-N) and Total Phosphorus (TP) loads were slightly affected by the DEM resolution; and (v) flow and Ammonia Nitrogen (NH₄-N) load were essentially unaffected by the DEM resolution. The flow and dissolved oxygen load decreased more significantly in the dry season than in the wet and normal seasons. Excluding flow and dissolved oxygen, the uncertainties of the other Hydrology/Non-point Source (H/NPS) pollution indicators were greater in the wet season than in the dry and normal seasons. Considering the temporal distribution uncertainties, the optimal DEM resolution was 30-200 m for flow, 30-100 m for sediment and TP, 30-300 m for dissolved oxygen and NO₃-N, 30-70 m for NH₄-N, and 30-150 m for TN.

  4. Changes in Rectal Dose Due to Alterations in Beam Angles for Setup Uncertainty and Range Uncertainty in Carbon-Ion Radiotherapy for Prostate Cancer.

    Directory of Open Access Journals (Sweden)

    Yoshiki Kubota

    Carbon-ion radiotherapy of prostate cancer is challenging in patients with metal implants in one or both hips. Problems can be circumvented by using fields at oblique angles. To evaluate the influence of setup and range uncertainties accompanying oblique field angles, we calculated rectal dose changes with oblique orthogonal field angles, using a device with fixed fields at 0° and 90° and a rotating patient couch. Dose distributions were calculated at the standard angles of 0° and 90°, and then at 30° and 60°. Setup uncertainty was simulated with changes from -2 mm to +2 mm for fields in the anterior-posterior, left-right, and cranial-caudal directions, and dose changes from range uncertainty were calculated with a 1 mm water-equivalent path length added to the target isocenter at each angle. The dose distributions for the passive irradiation method were calculated using the K2 dose algorithm. The rectal volumes with 0°, 30°, 60°, and 90° field angles at 95% of the prescription dose were 3.4±0.9 cm3, 2.8±1.1 cm3, 2.2±0.8 cm3, and 3.8±1.1 cm3, respectively. As compared with 90° fields, 30° and 60° fields had significant advantages regarding setup uncertainty and significant disadvantages regarding range uncertainty, but were not significantly different from the 90° field setup and range uncertainties. The setup and range uncertainties calculated at 30° and 60° field angles were not associated with a significant change in rectal dose relative to those at 90°.

  5. Prototype application of best estimate and uncertainty safety analysis methodology to large LOCA analysis

    International Nuclear Information System (INIS)

    Luxat, J.C.; Huget, R.G.

    2001-01-01

    Development of a methodology to perform best estimate and uncertainty nuclear safety analysis has been underway at Ontario Power Generation for the past two and one half years. A key driver for the methodology development, and one of the major challenges faced, is the need to re-establish demonstrated safety margins that have progressively been undermined through excessive and compounding conservatism in deterministic analyses. The major focus of the prototyping applications was to quantify the safety margins that exist at the probable range of high power operating conditions, rather than the highly improbable operating states associated with Limit of the Envelope (LOE) assumptions. In LOE, all parameters of significance to the consequences of a postulated accident are assumed to simultaneously deviate to their limiting values. Another equally important objective of the prototyping was to demonstrate the feasibility of conducting safety analysis as an incremental analysis activity, as opposed to a major re-analysis activity. The prototype analysis solely employed prior analyses of Bruce B large break LOCA events - no new computer simulations were undertaken. This is a significant and novel feature of the prototyping work. This methodology framework has been applied to a postulated large break LOCA in a Bruce generating unit on a prototype basis. This paper presents results of the application. (author)

  6. Uncertainty assessment of a model for biological nitrogen and phosphorus removal: Application to a large wastewater treatment plant

    Science.gov (United States)

    Mannina, Giorgio; Cosenza, Alida; Viviani, Gaspare

    In the last few years, the use of mathematical models of WasteWater Treatment Plant (WWTP) processes has become a common way to predict WWTP behaviour. However, mathematical models generally demand advanced input for their implementation, which must be evaluated by an extensive data-gathering campaign that cannot always be carried out. This fact, together with the intrinsic complexity of the model structure, leads to model results that may be very uncertain. Quantification of the uncertainty is therefore imperative. However, despite the importance of uncertainty quantification, only a few studies have been carried out in the wastewater treatment field, and those studies included only a few of the sources of model uncertainty. To help develop the area, this paper presents the uncertainty assessment of a mathematical model simulating biological nitrogen and phosphorus removal. The uncertainty assessment was conducted according to the Generalised Likelihood Uncertainty Estimation (GLUE) methodology, which has rarely been applied in the wastewater field. The model was based on activated-sludge models 1 (ASM) and 2 (ASM2). Different approaches can be used for uncertainty analysis; the GLUE methodology requires a large number of Monte Carlo simulations in which random sampling of individual parameters drawn from probability distributions is used to determine sets of parameter values. Using this approach, model reliability was evaluated based on its capacity to globally limit the uncertainty. The method was applied to a large full-scale WWTP for which quantity and quality data were gathered. The analysis provided useful insights for WWTP modelling, identifying the crucial aspects where uncertainty is highest and where, therefore, more effort should be invested in both data gathering and modelling practice.
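
    A minimal GLUE sketch, following the usual steps (Monte Carlo sampling from priors, an informal likelihood, a behavioural threshold, likelihood-weighted prediction bounds), is shown below; the two-parameter toy model, the prior ranges and the threshold are assumptions, not the ASM-based WWTP model of the paper.

      import numpy as np

      rng = np.random.default_rng(2)

      # Hypothetical two-parameter stand-in for the ASM-based WWTP model:
      # effluent concentration versus time, given growth rate mu and offset K.
      def wwtp_model(mu, K, t):
          return 40.0 * np.exp(-mu * t) + K

      t_obs = np.linspace(0.5, 10.0, 20)
      obs = wwtp_model(0.35, 4.0, t_obs) + rng.normal(0.0, 1.0, t_obs.size)

      # GLUE step 1: Monte Carlo sampling of parameter sets from prior ranges.
      n = 20_000
      mu_s = rng.uniform(0.1, 0.8, n)
      K_s = rng.uniform(1.0, 10.0, n)
      sims = wwtp_model(mu_s[:, None], K_s[:, None], t_obs)

      # GLUE step 2: informal likelihood (Nash-Sutcliffe efficiency) and a
      # subjective behavioural threshold; non-behavioural sets get zero weight.
      sse = np.sum((sims - obs)**2, axis=1)
      nse = 1.0 - sse / np.sum((obs - obs.mean())**2)
      behavioural = nse > 0.5
      w = nse[behavioural] / nse[behavioural].sum()
      kept = sims[behavioural]

      # GLUE step 3: likelihood-weighted 5-95% prediction bounds.
      for j in (0, len(t_obs) // 2, len(t_obs) - 1):
          order = np.argsort(kept[:, j])
          cdf = np.cumsum(w[order])
          lo, hi = kept[order, j][np.searchsorted(cdf, [0.05, 0.95])]
          print(f"t = {t_obs[j]:4.1f}: 90% GLUE band [{lo:5.1f}, {hi:5.1f}]")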

  7. Dose uncertainties for large solar particle events: Input spectra variability and human geometry approximations

    International Nuclear Information System (INIS)

    Townsend, Lawrence W.; Zapp, E. Neal

    1999-01-01

    The true uncertainties in estimates of body organ absorbed dose and dose equivalent, from exposures of interplanetary astronauts to large solar particle events (SPEs), are essentially unknown. Variations in models used to parameterize SPE proton spectra for input into space radiation transport and shielding computer codes can result in uncertainty about the reliability of dose predictions for these events. Also, different radiation transport codes and their input databases can yield significant differences in dose predictions, even for the same input spectra. Different results may also be obtained for the same input spectra and transport codes if different spacecraft and body self-shielding distributions are assumed. Heretofore there have been no systematic investigations of the variations in dose and dose equivalent resulting from these assumptions and models. In this work we present a study of the variability in predictions of organ dose and dose equivalent arising from the use of different parameters to represent the same incident SPE proton data and from the use of equivalent sphere approximations to represent human body geometry. The study uses the BRYNTRN space radiation transport code to calculate dose and dose equivalent for the skin, ocular lens and bone marrow using the October 1989 SPE as a model event. Comparisons of organ dose and dose equivalent, obtained with a realistic human geometry model and with the oft-used equivalent sphere approximation, are also made. It is demonstrated that variations of 30-40% in organ dose and dose equivalent are obtained for slight variations in the spectral fitting parameters, depending on which data points are included or excluded from the fitting procedure. It is further demonstrated that extrapolating spectra from low energy (≤30 MeV) proton fluence measurements, rather than using fluence data extending out to 100 MeV, results in dose and dose equivalent predictions that are underestimated by factors as large as 2.

  8. A Statistical Modeling Framework for Characterising Uncertainty in Large Datasets: Application to Ocean Colour

    Directory of Open Access Journals (Sweden)

    Peter E. Land

    2018-05-01

    Uncertainty estimation is crucial to establishing confidence in any data analysis, and this is especially true for Essential Climate Variables, including ocean colour. Methods for deriving uncertainty vary greatly across data types, so a generic statistics-based approach applicable to multiple data types is an advantage to simplify the use and understanding of uncertainty data. Progress towards rigorous uncertainty analysis of ocean colour has been slow, in part because of the complexity of ocean colour processing. Here, we present a general approach to uncertainty characterisation, using a database of satellite-in situ matchups to generate a statistical model of satellite uncertainty as a function of its contributing variables. With an example NASA MODIS-Aqua chlorophyll-a matchup database mostly covering the North Atlantic, we demonstrate a model that explains 67% of the squared error in log(chlorophyll-a) as a potentially correctable bias, with the remaining uncertainty being characterised as standard deviation and standard error at each pixel. The method is quite general, depending only on the existence of a suitable database of matchups or reference values, and can be applied to other sensors and data types such as other satellite-observed Essential Climate Variables, empirical algorithms derived from in situ data, or even model data.
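
    The idea of regressing satellite-in situ differences on contributing variables, to separate a correctable bias from the residual per-pixel uncertainty, can be sketched as below; the covariates, the linear model form and the synthetic matchups are assumptions for illustration, not the paper's fitted model.

      import numpy as np

      rng = np.random.default_rng(3)

      # Synthetic matchup database: satellite vs in situ chlorophyll-a, with a
      # bias depending on two illustrative covariates (assumed, not the paper's).
      n = 5_000
      sun_zenith = rng.uniform(10, 70, n)
      windspeed = rng.uniform(0, 15, n)
      true_log_chl = rng.normal(-0.5, 0.5, n)
      bias = 0.004 * sun_zenith - 0.01 * windspeed       # hidden "truth"
      sat_log_chl = true_log_chl + bias + rng.normal(0, 0.15, n)

      # Regress the satellite-in situ difference in log10(chl) on the covariates:
      # the fitted part is the correctable bias, the residual spread is the
      # remaining (irreducible) per-pixel uncertainty.
      err = sat_log_chl - true_log_chl
      X = np.column_stack([np.ones(n), sun_zenith, windspeed])
      beta, *_ = np.linalg.lstsq(X, err, rcond=None)
      resid = err - X @ beta

      print(f"fraction of squared error explained as bias: {1 - resid.var() / err.var():.2f}")
      print(f"residual standard deviation: {resid.std():.3f} (log10 units)")

      # Applying the model: correct a new pixel and attach its uncertainty.
      x_new = np.array([1.0, 45.0, 5.0])                 # hypothetical pixel
      corrected = 0.3 - x_new @ beta                     # satellite log10(chl)=0.3
      print(f"bias-corrected log10(chl) = {corrected:.3f} +/- {resid.std():.3f}")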

  9. Uncertainties in surface mass and energy flux estimates due to different eddy covariance sensors and technical set-up

    Science.gov (United States)

    Arriga, Nicola; Fratini, Gerardo; Forgione, Antonio; Tomassucci, Michele; Papale, Dario

    2010-05-01

    Eddy covariance is a well established and widely used methodology for the measurement of turbulent fluxes of mass and energy in the atmospheric boundary layer, in particular to estimate CO2/H2O and heat exchange above ecologically relevant surfaces (Aubinet 2000, Baldocchi 2003). Despite its long-term application and the accompanying theoretical studies, many issues are still open about the effect of different experimental set-ups on final flux estimates. Open issues include: the evaluation of the performance of different kinds of sensors (e.g. open-path vs closed-path infrared gas analysers, vertically vs horizontally mounted ultrasonic anemometers); the quantification of the impact of the corresponding physical corrections that must be applied to obtain robust flux estimates, taking into account all processes concurring to the measurement (e.g. the so-called WPL term, signal attenuation due to the air sampling system of closed-path analysers, the relative position of analyser and anemometer); and the differences between the several data transmission protocols used (analogue, digital RS-232, SDM). A field experiment was designed to study these issues using several instruments among those most used within the Fluxnet community, and to compare their performances under conditions supposed to be critical: rainy and cold weather for open-path analysers (Burba 2008), water transport and absorption at high relative humidity for closed-path systems (Ibrom 2007), frequency sampling limits and recorded data robustness for different transmission protocols (RS232, SDM, USB, Ethernet), and finally the effect of the displacement between anemometer and analyser, using at least two identical analysers placed at different horizontal and vertical distances from the anemometer. The aim of this experiment is to quantify the effect of these technical solutions on the final flux estimates, and whether they represent a significant source of uncertainty for mass and energy cycle estimates.
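
    For reference, the core eddy covariance computation that all of these set-up choices feed into is simply the covariance of the fluctuations of vertical wind and the scalar; a minimal sketch on synthetic data (all numbers assumed) is given below, including one of the processing choices (block averaging vs linear detrending) whose impact such experiments quantify.

      import numpy as np

      rng = np.random.default_rng(4)

      # Synthetic 30-min record at 10 Hz: vertical wind w (m/s) and CO2 density
      # c (mg/m^3). Real data would come from a sonic anemometer and an IRGA.
      fs, minutes = 10, 30
      n = fs * 60 * minutes
      w = rng.normal(0.0, 0.3, n)
      c = 650.0 + rng.normal(0.0, 2.0, n) - 1.5 * w   # imposed correlation -> flux

      def ec_flux(w, c):
          """Eddy-covariance flux: covariance of the fluctuations about the
          block (Reynolds) averages, F = mean(w'c')."""
          return np.mean((w - w.mean()) * (c - c.mean()))

      print(f"CO2 flux (block average):   {ec_flux(w, c):.3f} mg m^-2 s^-1")

      # A linear detrend instead of block averaging changes the estimate
      # slightly, one of the processing choices under examination.
      t = np.arange(n)
      c_detr = c - np.polyval(np.polyfit(t, c, 1), t)
      print(f"CO2 flux (linear detrend):  {np.mean((w - w.mean()) * c_detr):.3f}")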

  10. SU-F-T-301: Planar Dose Pass Rate Inflation Due to the MapCHECK Measurement Uncertainty Function

    International Nuclear Information System (INIS)

    Bailey, D; Spaans, J; Kumaraswamy, L; Podgorsak, M

    2016-01-01

    Purpose: To quantify the effect of the Measurement Uncertainty function on planar dosimetry pass rates, as analyzed with Sun Nuclear Corporation analytic software (“MapCHECK” or “SNC Patient”). This optional function is toggled on by default upon software installation, and automatically increases the user-defined dose percent difference (%Diff) tolerance for each planar dose comparison. Methods: Dose planes from 109 IMRT fields and 40 VMAT arcs were measured with the MapCHECK 2 diode array, and compared to calculated planes from a commercial treatment planning system. Pass rates were calculated within the SNC analytic software using varying calculation parameters, including Measurement Uncertainty on and off. By varying the %Diff criterion for each dose comparison performed with Measurement Uncertainty turned off, an effective %Diff criterion was defined for each field/arc corresponding to the pass rate achieved with MapCHECK Uncertainty turned on. Results: For 3%/3 mm analysis, the Measurement Uncertainty function increases the user-defined %Diff by 0.8–1.1% average, depending on plan type and calculation technique, for an average pass rate increase of 1.0–3.5% (maximum +8.7%). For 2%/2 mm analysis, the Measurement Uncertainty function increases the user-defined %Diff by 0.7–1.2% average, for an average pass rate increase of 3.5–8.1% (maximum +14.2%). The largest increases in pass rate are generally seen with poorly-matched planar dose comparisons; the MapCHECK Uncertainty effect is markedly smaller as pass rates approach 100%. Conclusion: The Measurement Uncertainty function may substantially inflate planar dose comparison pass rates for typical IMRT and VMAT planes. The types of uncertainties incorporated into the function (and their associated quantitative estimates) as described in the software user’s manual may not accurately estimate realistic measurement uncertainty for the user’s measurement conditions. Pass rates listed in published
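
    A rough sketch of how widening the %Diff tolerance inflates a pass rate is given below; the dose planes are synthetic, the comparison uses a plain global percent-difference test without the distance-to-agreement component, and the uniform 1% widening is an assumption, not the vendor's actual Measurement Uncertainty formula.

      import numpy as np

      rng = np.random.default_rng(5)

      # Synthetic planar dose pair: a "measured" plane vs the TPS calculation,
      # peaked at 100 (arbitrary units), with 1.5% multiplicative noise.
      r2 = ((np.indices((53, 53)) - 26)**2).sum(axis=0)
      calc = 100.0 * np.exp(-r2 / 800.0)
      meas = calc * (1.0 + rng.normal(0.0, 0.015, calc.shape))

      def pass_rate(meas, calc, pct_diff, extra_uncertainty=0.0):
          """Percent of points within a globally normalised dose-difference
          tolerance; `extra_uncertainty` mimics widening %Diff by an added
          measurement-uncertainty term (illustrative, not the vendor formula)."""
          tol = (pct_diff + extra_uncertainty) / 100.0 * calc.max()
          keep = calc > 0.10 * calc.max()          # ignore the low-dose region
          return 100.0 * np.mean(np.abs(meas - calc)[keep] <= tol)

      for pct in (3.0, 2.0):
          base = pass_rate(meas, calc, pct)
          inflated = pass_rate(meas, calc, pct, extra_uncertainty=1.0)
          print(f"{pct:.0f}% criterion: {base:5.1f}% -> {inflated:5.1f}% "
                "with the uncertainty term on")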

  11. SU-F-T-301: Planar Dose Pass Rate Inflation Due to the MapCHECK Measurement Uncertainty Function

    Energy Technology Data Exchange (ETDEWEB)

    Bailey, D [Northside Hospital Cancer Institute, Atlanta, GA (United States); Spaans, J; Kumaraswamy, L; Podgorsak, M [Roswell Park Cancer Institute, Buffalo, NY (United States)

    2016-06-15

    Purpose: To quantify the effect of the Measurement Uncertainty function on planar dosimetry pass rates, as analyzed with Sun Nuclear Corporation analytic software (“MapCHECK” or “SNC Patient”). This optional function is toggled on by default upon software installation, and automatically increases the user-defined dose percent difference (%Diff) tolerance for each planar dose comparison. Methods: Dose planes from 109 IMRT fields and 40 VMAT arcs were measured with the MapCHECK 2 diode array, and compared to calculated planes from a commercial treatment planning system. Pass rates were calculated within the SNC analytic software using varying calculation parameters, including Measurement Uncertainty on and off. By varying the %Diff criterion for each dose comparison performed with Measurement Uncertainty turned off, an effective %Diff criterion was defined for each field/arc corresponding to the pass rate achieved with MapCHECK Uncertainty turned on. Results: For 3%/3 mm analysis, the Measurement Uncertainty function increases the user-defined %Diff by 0.8–1.1% average, depending on plan type and calculation technique, for an average pass rate increase of 1.0–3.5% (maximum +8.7%). For 2%/2 mm analysis, the Measurement Uncertainty function increases the user-defined %Diff by 0.7–1.2% average, for an average pass rate increase of 3.5–8.1% (maximum +14.2%). The largest increases in pass rate are generally seen with poorly-matched planar dose comparisons; the MapCHECK Uncertainty effect is markedly smaller as pass rates approach 100%. Conclusion: The Measurement Uncertainty function may substantially inflate planar dose comparison pass rates for typical IMRT and VMAT planes. The types of uncertainties incorporated into the function (and their associated quantitative estimates) as described in the software user’s manual may not accurately estimate realistic measurement uncertainty for the user’s measurement conditions. Pass rates listed in published

  12. The Effects of Uncertainty in Speed-Flow Curve Parameters on a Large-Scale Model

    DEFF Research Database (Denmark)

    Manzo, Stefano; Nielsen, Otto Anker; Prato, Carlo Giacomo

    2014-01-01

    Volume-delay functions express travel time as a function of traffic flows and the theoretical capacity of the modeled facility. The U.S. Bureau of Public Roads (BPR) formula is one of the most extensively applied volume-delay functions in practice. This study investigated uncertainty in the BPR parameters, propagated through a multi-stage Danish national transport model. The results clearly highlight the importance, for modeling purposes, of taking into account BPR formula parameter uncertainty, expressed as a distribution of values rather than assumed point values. Indeed, the model output demonstrates a noticeable sensitivity to parameter uncertainty.
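
    For reference, the BPR volume-delay function and the effect of treating its parameters as distributions rather than point values can be sketched as follows (the free-flow time, capacity and parameter spreads are invented for illustration).

      import numpy as np

      rng = np.random.default_rng(6)

      def bpr_travel_time(t0, flow, capacity, alpha=0.15, beta=4.0):
          """U.S. Bureau of Public Roads volume-delay function:
          t = t0 * (1 + alpha * (v/c)^beta)."""
          return t0 * (1.0 + alpha * (flow / capacity) ** beta)

      t0, cap = 10.0, 1800.0      # free-flow time (min) and capacity (veh/h)
      flow = 1950.0               # congested link: v/c > 1

      # Point-value parameters vs parameter distributions (the study's theme):
      print(f"point values: {bpr_travel_time(t0, flow, cap):.1f} min")

      alphas = rng.normal(0.15, 0.03, 10_000)   # illustrative parameter spread
      betas = rng.normal(4.0, 0.8, 10_000)
      tt = bpr_travel_time(t0, flow, cap, alphas, betas)
      print(f"with parameter uncertainty: mean {tt.mean():.1f} min, "
            f"90% interval [{np.percentile(tt, 5):.1f}, {np.percentile(tt, 95):.1f}] min")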

  13. Calibration and Forward Uncertainty Propagation for Large-eddy Simulations of Engineering Flows

    Energy Technology Data Exchange (ETDEWEB)

    Templeton, Jeremy Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Blaylock, Myra L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Domino, Stefan P. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hewson, John C. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Kumar, Pritvi Raj [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Ling, Julia [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Najm, Habib N. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Ruiz, Anthony [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Safta, Cosmin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Sargsyan, Khachik [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Stewart, Alessia [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Wagner, Gregory [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-09-01

    The objective of this work is to investigate the efficacy of using calibration strategies from Uncertainty Quantification (UQ) to determine model coefficients for LES. As the target methods are for engineering LES, uncertainty from numerical aspects of the model must also be quantified. The ultimate goal of this research thread is to generate a cost versus accuracy curve for LES such that the cost could be minimized given an accuracy prescribed by an engineering need. Realization of this goal would enable LES to serve as a predictive simulation tool within the engineering design process.

  14. Reduction of the uncertainty due to fissile clusters in radioactive waste characterization with the Differential Die-away Technique

    Science.gov (United States)

    Antoni, R.; Passard, C.; Perot, B.; Guillaumin, F.; Mazy, C.; Batifol, M.; Grassi, G.

    2018-07-01

    AREVA NC is preparing to process, characterize and compact old used fuel metallic waste stored at the La Hague reprocessing plant in view of its future storage ("Haute Activité Oxyde", HAO project). For a large part of this historical waste, packaging is planned in CSD-C canisters ("Colis Standard de Déchets Compactés") in the ACC hulls and nozzles compaction facility ("Atelier de Compactage des Coques et embouts"). This paper presents a new method to take into account the possible presence of fissile material clusters, which may have a significant impact on the active neutron interrogation (Differential Die-away Technique) measurement of the CSD-C canisters in the industrial neutron measurement station "P2-2". A matrix effect correction has already been investigated to predict the prompt fission neutron calibration coefficient (which provides the fissile mass) from an internal "drum flux monitor" signal, provided during the active measurement by a boron-coated proportional counter located in the measurement cavity, and from a "drum transmission signal" recorded in passive mode by the detection blocks in the presence of an AmBe point source in the measurement cell. Up to now, the relationship between the calibration coefficient and these signals was obtained from a factorial design that did not consider the potential occurrence of fissile material clusters. The interrogating neutron self-shielding in these clusters was treated separately and resulted in a penalty coefficient larger than 20% to prevent an underestimation of the fissile mass within the drum. In this work, we have shown that the incorporation of a new parameter in the factorial design, representing the fissile mass fraction in these clusters, provides an alternative to the penalty coefficient. This new approach does not degrade the uncertainty of the original prediction, which was calculated without taking into consideration the possible presence of clusters. Consequently, the

  15. Uncertainty in the Future Distribution of Tropospheric Ozone over West Africa due to Variability in Anthropogenic Emissions Estimates between 2025 and 2050

    Directory of Open Access Journals (Sweden)

    J. E. Williams

    2011-01-01

    Particle and trace gas emissions due to anthropogenic activity are expected to increase significantly in West Africa over the next few decades due to rising population and more energy-intensive lifestyles. Here we perform 3D global chemistry-transport model calculations for 2025 and 2050 using both a “business-as-usual” (A1B) and a “clean economy” (B1) future anthropogenic emission scenario, to focus on the changes in the distribution of, and uncertainties associated with, tropospheric O3 under the various projected emission scenarios. When compared to the present-day troposphere we find that there are significant increases in tropospheric O3 for the A1B emission scenario, with the largest increases located in the lower troposphere near the source regions and into the Sahel around 15–20°N. In part this increase is due to more efficient NOx re-cycling related to increases in the background methane concentrations. Examining the uncertainty across different emission inventories reveals an associated uncertainty of up to ~20% in the predicted increases at 2025 and 2050. For the upper troposphere, where increases in O3 have a more pronounced impact on radiative forcing, the uncertainty is influenced by the transport of O3-rich air from Asia on the Tropical Easterly Jet.

  16. Probabilistic modelling and uncertainty analysis of flux and water balance changes in a regional aquifer system due to coal seam gas development.

    Science.gov (United States)

    Sreekanth, J; Cui, Tao; Pickett, Trevor; Rassam, David; Gilfedder, Mat; Barrett, Damian

    2018-09-01

    Large scale development of coal seam gas (CSG) is occurring in many sedimentary basins around the world, including Australia, where commercial production of CSG has started in the Surat and Bowen basins. CSG development often involves extraction of large volumes of water that results in depressurising aquifers that overlie and/or underlie the coal seams, thus perturbing their flow regimes. This can potentially impact regional aquifer systems that are used for many purposes such as irrigation, and stock and domestic water. In this study, we adopt a probabilistic approach to quantify the depressurisation of the Gunnedah coal seams and how this impacts fluxes to, and from, the overlying Great Artesian Basin (GAB) Pilliga Sandstone aquifer. The proposed method is suitable when the effects of a new resource development activity on the regional groundwater balance need to be assessed while accounting for large scale uncertainties in the groundwater flow system and the proposed activity. The results indicated that the extraction of water and gas from the coal seam could potentially induce additional fluxes from the Pilliga Sandstone to the deeper formations due to lowering pressure heads in the coal seams. The median value of the rise in the maximum flux from the Pilliga Sandstone to the deeper formations is estimated to be 85 ML/year, which is considered insignificant as it forms only about 0.29% of the Long Term Annual Average Extraction Limit of 30 GL/year from the groundwater management area. The probabilistic simulation of the water balance components indicates only small changes being induced by CSG development that influence interactions of the Pilliga Sandstone with the overlying and underlying formations and with the surface water courses. The current analyses, which quantify the potential maximum impacts of resource development and how it influences the regional water balance, can greatly underpin future management decisions.

  17. Multi-Scale Fusion of Information for Uncertainty Quantification and Management in Large-Scale Simulations

    Science.gov (United States)

    2015-12-02

    …of completely new nonlinear Malliavin calculus. This type of calculus is important for the analysis and simulation of stationary and/or “causal… …been limited by the fact that it requires the solution of an optimization problem with noisy gradients. When using deterministic optimization schemes… …under uncertainty. We tested new developments on nonlinear Malliavin calculus, combining reduced basis methods with ANOVA, model validation, on…

  18. Application of code scaling applicability and uncertainty methodology to the large break loss of coolant

    International Nuclear Information System (INIS)

    Young, M.Y.; Bajorek, S.M.; Nissley, M.E.

    1998-01-01

    In the late 1980s, after completion of an extensive research program, the United States Nuclear Regulatory Commission (USNRC) amended its regulations (10CFR50.46) to allow the use of realistic physical models to analyze the loss of coolant accident (LOCA) in light water reactors. Prior to this time, the evaluation of this accident was subject to a prescriptive set of rules (appendix K of the regulations) requiring conservative models and assumptions to be applied simultaneously, leading to very pessimistic estimates of the impact of this accident on the reactor core. The rule change therefore promised to provide significant benefits to owners of power reactors, allowing them to increase output. In response to the rule change, a method called code scaling, applicability and uncertainty (CSAU) was developed to apply realistic methods, while properly taking into account data uncertainty, uncertainty in physical modeling and plant variability. The method was claimed to be structured, traceable, and practical, but was met with some criticism when first demonstrated. In 1996, the USNRC approved a methodology, based on CSAU, developed by a group led by Westinghouse. The lessons learned in this application of CSAU will be summarized. Some of the issues raised concerning the validity and completeness of the CSAU methodology will also be discussed. (orig.)

  19. Modeling and solving a large-scale generation expansion planning problem under uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Shan; Ryan, Sarah M. [Iowa State University, Department of Industrial and Manufacturing Systems Engineering, Ames (United States); Watson, Jean-Paul [Sandia National Laboratories, Discrete Math and Complex Systems Department, Albuquerque (United States); Woodruff, David L. [University of California Davis, Graduate School of Management, Davis (United States)

    2011-11-15

    We formulate a generation expansion planning problem to determine the type and quantity of power plants to be constructed over each year of an extended planning horizon, considering uncertainty regarding future demand and fuel prices. Our model is expressed as a two-stage stochastic mixed-integer program, which we use to compute solutions independently minimizing the expected cost and the Conditional Value-at-Risk; i.e., the risk of significantly larger-than-expected operational costs. We introduce stochastic process models to capture demand and fuel price uncertainty, which are in turn used to generate trees that accurately represent the uncertainty space. Using a realistic problem instance based on the Midwest US, we explore two fundamental, unexplored issues that arise when solving any stochastic generation expansion model. First, we introduce and discuss the use of an algorithm for computing confidence intervals on obtained solution costs, to account for the fact that a finite sample of scenarios was used to obtain a particular solution. Second, we analyze the nature of solutions obtained under different parameterizations of this method, to assess whether the recommended solutions themselves are invariant to changes in costs. The issues are critical for decision makers who seek truly robust recommendations for generation expansion planning. (orig.)
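
    The risk measures involved can be sketched by evaluating fixed candidate expansion plans over sampled scenarios (this evaluates given first-stage decisions rather than solving the two-stage stochastic MIP; all plans, costs and distributions below are invented for illustration).

      import numpy as np

      rng = np.random.default_rng(7)

      # Hypothetical candidate first-stage plans: MW of (coal, gas, wind) built.
      plans = {"coal-heavy": (900, 300, 100),
               "gas-heavy": (300, 900, 200),
               "diverse": (500, 500, 400)}
      build_cost = np.array([2.1, 0.9, 1.6])   # M$/MW, invented

      # Second-stage uncertainty: demand and gas-price scenarios.
      n_scen = 1_000
      demand = rng.normal(1200.0, 150.0, n_scen)              # MW
      gas_price = rng.lognormal(np.log(60.0), 0.3, n_scen)    # $/MWh

      def scenario_costs(coal, gas, wind):
          """Annual operating cost (M$) per scenario, simple merit order."""
          served = np.minimum(demand, coal + gas + wind)
          wind_used = np.minimum(served, wind)              # zero marginal cost
          coal_used = np.minimum(served - wind_used, coal)
          gas_used = served - wind_used - coal_used
          op = (30.0 * coal_used + gas_price * gas_used) * 8760.0 * 1e-6
          return op + 0.5 * np.maximum(demand - served, 0.0)  # shortfall penalty

      def cvar(costs, alpha=0.95):
          """Conditional Value-at-Risk: mean of the worst (1-alpha) tail."""
          return np.sort(costs)[int(alpha * len(costs)):].mean()

      for name, cap in plans.items():
          annual = scenario_costs(*cap) + build_cost @ np.array(cap) / 30.0
          print(f"{name:10s}: E[cost] = {annual.mean():7.1f} M$, "
                f"CVaR95 = {cvar(annual):7.1f} M$")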

  20. Sensitivity analysis of local uncertainties in large break loss-of-coolant accident (LB-LOCA) thermo-mechanical simulations

    Energy Technology Data Exchange (ETDEWEB)

    Arkoma, Asko, E-mail: asko.arkoma@vtt.fi; Ikonen, Timo

    2016-08-15

    Highlights: • A sensitivity analysis using the data from EPR LB-LOCA simulations is done. • A procedure to analyze such complex data is outlined. • Both visual and quantitative methods are used. • Input factors related to core design are identified as most significant. - Abstract: In this paper, a sensitivity analysis for the data originating from a large break loss-of-coolant accident (LB-LOCA) analysis of an EPR-type nuclear power plant is presented. In the preceding LOCA analysis, the number of failing fuel rods in the accident was established (Arkoma et al., 2015). However, the underlying causes for rod failures were not addressed. It is essential to bring out which input parameters and boundary conditions have significance for the outcome of the analysis, i.e. the ballooning and burst of the rods. Due to the complexity of the existing data, the first part of the analysis consists of defining the relevant input parameters for the sensitivity analysis. Then, selected sensitivity measures are calculated between the chosen input and output parameters. The ultimate goal is to develop a systematic procedure for the sensitivity analysis of statistical LOCA simulation that takes into account the various sources of uncertainty in the calculation chain. In the current analysis, the most relevant parameters with respect to the cladding integrity are the decay heat power during the transient, the thermal hydraulic conditions in the rod’s location in the reactor, and the steady-state irradiation history of the rod. Meanwhile, the tolerances in fuel manufacturing parameters were found to have a negligible effect on cladding deformation.
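
    A sensitivity measure of the kind used in such analyses can be sketched with a rank (Spearman) correlation between sampled input factors and an output; the synthetic rod data below are assumptions for illustration, not the EPR LB-LOCA data set.

      import numpy as np

      rng = np.random.default_rng(8)

      # Synthetic stand-in for the statistical LOCA data set: input factors per
      # fuel rod and a burst-related output (e.g., peak cladding strain).
      n = 400
      decay_heat = rng.normal(1.0, 0.05, n)
      channel_temp = rng.normal(600.0, 25.0, n)
      burnup = rng.uniform(10.0, 50.0, n)
      manuf_tol = rng.normal(0.0, 1.0, n)
      strain = (2.0 * decay_heat + 0.01 * channel_temp + 0.02 * burnup
                + 0.01 * manuf_tol + rng.normal(0.0, 0.1, n))

      def rank(a):
          """Ranks for Spearman correlation (no tie handling needed here)."""
          r = np.empty_like(a)
          r[np.argsort(a)] = np.arange(a.size)
          return r

      def spearman(x, y):
          return np.corrcoef(rank(x), rank(y))[0, 1]

      inputs = {"decay heat": decay_heat, "channel temperature": channel_temp,
                "burnup": burnup, "manufacturing tol.": manuf_tol}
      for name, x in sorted(inputs.items(), key=lambda kv: -abs(spearman(kv[1], strain))):
          print(f"{name:22s} rho = {spearman(x, strain):+.2f}")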

  1. Uncertainty analysis of multiple canister repository model by large-scale calculation

    International Nuclear Information System (INIS)

    Tsujimoto, K.; Okuda, H.; Ahn, J.

    2007-01-01

    A prototype uncertainty analysis has been made by using the multiple-canister radionuclide transport code, VR, for performance assessment for the high-level radioactive waste repository. Fractures in the host rock determine the main conduits of groundwater, and thus significantly affect the magnitude of radionuclide release rates from the repository. In this study, the probability distribution function (PDF) for the number of connected canisters in the same fracture cluster that bears water flow has been determined in a Monte-Carlo fashion by running the FFDF code with assumed PDFs for fracture geometry. The uncertainty in the release rate of ²³⁷Np from a hypothetical repository containing 100 canisters has been quantitatively evaluated by using the VR code with PDFs for the number of connected canisters and the near-field rock porosity. The calculation results show that the mass transport is greatly affected by (1) the magnitude of the radionuclide source determined by the number of canisters connected by the fracture cluster, and (2) the canister concentration effect in the same fracture network. The results also show two conflicting tendencies: the more fractures in the repository model space, the greater the average value, but the smaller the uncertainty, of the peak fractional release rate. To perform the vast amount of calculation, we have utilized the Earth Simulator and SR8000. The multi-level hybrid programming method is applied in the optimization to exploit the high performance of the Earth Simulator. Latin Hypercube Sampling has been utilized to reduce the number of samplings in the Monte-Carlo calculation. (authors)

  2. A computationally inexpensive model for estimating dimensional measurement uncertainty due to x-ray computed tomography instrument misalignments

    Science.gov (United States)

    Ametova, Evelina; Ferrucci, Massimiliano; Chilingaryan, Suren; Dewulf, Wim

    2018-06-01

    The recent emergence of advanced manufacturing techniques such as additive manufacturing and an increased demand on the integrity of components have motivated research on the application of x-ray computed tomography (CT) for dimensional quality control. While CT has shown significant empirical potential for this purpose, there is a need for metrological research to accelerate the acceptance of CT as a measuring instrument. The accuracy in CT-based measurements is vulnerable to the instrument geometrical configuration during data acquisition, namely the relative position and orientation of x-ray source, rotation stage, and detector. Consistency between the actual instrument geometry and the corresponding parameters used in the reconstruction algorithm is critical. Currently available procedures provide users with only estimates of geometrical parameters. Quantification and propagation of uncertainty in the measured geometrical parameters must be considered to provide a complete uncertainty analysis and to establish confidence intervals for CT dimensional measurements. In this paper, we propose a computationally inexpensive model to approximate the influence of errors in CT geometrical parameters on dimensional measurement results. We use surface points extracted from a computer-aided design (CAD) model to model discrepancies in the radiographic image coordinates assigned to the projected edges between an aligned system and a system with misalignments. The efficacy of the proposed method was confirmed on simulated and experimental data in the presence of various geometrical uncertainty contributors.
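
    The essence of such a model, projecting known surface points through an aligned and a perturbed geometry and comparing the radiographic coordinates instead of re-running a full reconstruction, can be sketched as below; the cone-beam geometry, the sphere standing in for the CAD surface, and the misalignment values are all assumptions, not the authors' implementation.

      import numpy as np

      # Cone-beam geometry: source on the -y axis, rotation stage at the origin,
      # flat-panel detector beyond the object. Parameters are illustrative.
      SOD, SDD = 200.0, 800.0          # source-object / source-detector (mm)

      def project(points, detector_tilt_deg=0.0, detector_shift=(0.0, 0.0)):
          """Pinhole projection of 3-D surface points onto a (possibly tilted
          and shifted) detector; returns (u, v) image coordinates in mm."""
          x, y, z = points.T
          mag = SDD / (SOD + y)                 # per-point magnification
          u, v = x * mag, z * mag
          t = np.radians(detector_tilt_deg)     # in-plane detector rotation
          u, v = u * np.cos(t) - v * np.sin(t), u * np.sin(t) + v * np.cos(t)
          return np.column_stack([u + detector_shift[0], v + detector_shift[1]])

      # Surface points sampled from a CAD sphere of radius 10 mm (stand-in for
      # the workpiece model used to predict projected edge positions).
      rng = np.random.default_rng(9)
      pts = rng.normal(size=(2_000, 3))
      pts = 10.0 * pts / np.linalg.norm(pts, axis=1, keepdims=True)

      aligned = project(pts)
      misaligned = project(pts, detector_tilt_deg=0.2, detector_shift=(0.1, 0.0))

      # The discrepancy in radiographic coordinates approximates the influence
      # of the misalignment on subsequent dimensional measurements.
      d = np.linalg.norm(misaligned - aligned, axis=1)
      print(f"mean image-coordinate discrepancy: {d.mean():.3f} mm "
            f"(max {d.max():.3f} mm) at magnification {SDD/SOD:.1f}x")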

  3. Warning and prevention based on estimates with large uncertainties: the case of low-frequency and large-impact events like tsunamis

    Science.gov (United States)

    Tinti, Stefano; Armigliato, Alberto; Pagnoni, Gianluca; Zaniboni, Filippo

    2013-04-01

    Geoscientists often deal with hazardous processes like earthquakes, volcanic eruptions, tsunamis and hurricanes, and their research is aimed not only at a better understanding of the physical processes, but also at providing assessments of the space and temporal evolution of a given individual event (i.e. short-term prediction) and of the expected evolution of a group of events (i.e. statistical estimates referred to a given return period and a given geographical area). One of the main issues of any scientific method is how to cope with measurement errors, a topic which, in the case of forecasts of ongoing or future events, translates into how to deal with forecast uncertainties. In general, the more data are available and processed to make a prediction, the more accurate the prediction is expected to be if the scientific approach is sound, and the smaller the associated uncertainties are. However, there are several important cases where the assessment has to be made with insufficient data or insufficient time for processing, which leads to large uncertainties. Two examples can be taken from tsunami science, since tsunamis are rare events that may have destructive power and very large impact. One example is the case of warning for a tsunami generated by a near-coast earthquake, an issue at the focus of the European funded project NearToWarn. Warning has to be launched before the tsunami hits the coast, that is, within a few minutes of its generation. This may imply that the data collected in such a short time are not yet enough for an accurate evaluation, also because the implemented monitoring system (if any) could be inadequate (for instance, a dense instrumental network could be judged too expensive for rare events). The second case is long-term prevention from tsunami strikes. Tsunami infrequency may imply that the historical record for a given piece of coast is too short to capture a statistical

  4. Scalability on LHS (Latin Hypercube Sampling) samples for use in uncertainty analysis of large numerical models

    International Nuclear Information System (INIS)

    Baron, Jorge H.; Nunez Mac Leod, J.E.

    2000-01-01

    The present paper deals with the utilization of advanced statistical sampling methods to perform uncertainty and sensitivity analysis on numerical models. Such models may represent physical phenomena, logical structures (such as boolean expressions) or other systems, and various of their intrinsic parameters and/or input variables are usually treated as random variables simultaneously. In the present paper a simple method to scale up Latin Hypercube Sampling (LHS) samples is presented, starting with a small sample and duplicating its size at each step, making it possible to re-use the numerical model results already obtained with the smaller sample. The method does not distort the statistical properties of the random variables and does not add any bias to the samples. The result is that a significant reduction in numerical model running time can be achieved (by re-using the previously run samples), keeping all the advantages of LHS, until an acceptable representation level is achieved in the output variables. (author)
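
    A sketch of the doubling step is given below, assuming the usual [0,1)^d LHS setup: in each dimension the existing n points occupy n of the 2n finer strata, so the n new points are placed in the complementary strata (shuffled independently per dimension) and the old model runs are kept.

      import numpy as np

      rng = np.random.default_rng(10)

      def lhs(n, d):
          """Standard Latin Hypercube Sample of n points in [0,1)^d."""
          return (np.argsort(rng.random((n, d)), axis=0) + rng.random((n, d))) / n

      def double_lhs(sample):
          """Scale an LHS from n to 2n points, re-using the existing n runs.
          In each dimension the old points already occupy n of the 2n strata of
          width 1/(2n); the new points fill the complementary strata, shuffled
          independently per dimension so no bias is introduced."""
          n, d = sample.shape
          new = np.empty((n, d))
          for j in range(d):
              occupied = np.floor(sample[:, j] * 2 * n).astype(int)
              free = np.setdiff1d(np.arange(2 * n), occupied)
              rng.shuffle(free)
              new[:, j] = (free + rng.random(n)) / (2 * n)
          return np.vstack([sample, new])

      s = lhs(8, 2)
      s2 = double_lhs(s)
      # Check: every column of the doubled sample still has one point per stratum.
      for j in range(2):
          assert len(np.unique(np.floor(s2[:, j] * 16).astype(int))) == 16
      print(s2.shape)   # (16, 2)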

  5. Using Large-Scale Cooperative Control to Manage Operational Uncertainties for Aquifer Thermal Energy Storage

    Science.gov (United States)

    Jaxa-Rozen, M.; Rostampour, V.; Kwakkel, J. H.; Bloemendal, M.

    2017-12-01

    Seasonal Aquifer Thermal Energy Storage (ATES) technology can help reduce the demand for energy for heating and cooling in buildings, and has become a popular option for larger buildings in northern Europe. However, the larger-scale deployment of this technology has evidenced some issues of concern for policymakers; in particular, recent research shows that operational uncertainties contribute to inefficient outcomes under current planning methods for ATES. For instance, systems in the Netherlands typically use less than half of their permitted pumping volume on an annual basis. This overcapacity gives users more flexibility to operate their systems in response to the uncertainties which drive building energy demand; these include short-term operational factors such as weather and occupancy, and longer-term, deeply uncertain factors such as changes in climate and aquifer conditions over the lifespan of the buildings. However, as allocated subsurface volume remains unused, this situation limits the adoption of the technology in dense areas. Previous work using coupled agent-based/geohydrological simulation has shown that the cooperative operation of neighbouring ATES systems can support more efficient spatial planning, by dynamically managing thermal interactions in response to uncertain operating conditions. An idealized case study with centralized ATES control thus showed significant improvements in the energy savings which could be obtained per unit of allocated subsurface volume, without degrading the recovery performance of systems. This work will extend this cooperative approach for a realistic case study of ATES planning in the city of Utrecht, in the Netherlands. This case was previously simulated under different scenarios for individual ATES operation. The poster will compare these results with a cooperative case under which neighbouring systems can coordinate their operation to manage interactions. Furthermore, a cooperative game-theoretical framework will be

  6. Managing Risk and Uncertainty in Large-Scale University Research Projects

    Science.gov (United States)

    Moore, Sharlissa; Shangraw, R. F., Jr.

    2011-01-01

    Both publicly and privately funded research projects managed by universities are growing in size and scope. Complex, large-scale projects (over $50 million) pose new management challenges and risks for universities. This paper explores the relationship between project success and a variety of factors in large-scale university projects. First, we…

  7. Large Civil Tiltrotor (LCTR2) Interior Noise Predictions due to Turbulent Boundary Layer Excitation

    Science.gov (United States)

    Grosveld, Ferdinand W.

    2013-01-01

    The Large Civil Tiltrotor (LCTR2) is a conceptual vehicle with a design goal of transporting 90 passengers over a distance of 1800 km at a speed of 556 km/hr. In this study, noise predictions were made for the notional LCTR2 cabin using the Cockburn/Robertson and Efimtsov turbulent boundary layer (TBL) excitation models. A narrowband hybrid Finite Element (FE) analysis was performed for the low frequencies (6-141 Hz) and a Statistical Energy Analysis (SEA) was conducted for the high frequency one-third octave bands (125-8000 Hz). It is shown that the interior sound pressure level distribution in the low frequencies is governed by interactions between individual structural and acoustic modes. The spatially averaged predicted interior sound pressure levels for the low frequency hybrid FE and the high frequency SEA analyses, due to the Efimtsov turbulent boundary layer excitation, were within 1 dB in the common 125 Hz one-third octave band. The averaged interior noise levels for the LCTR2 cabin were predicted to be lower than the levels in a comparable Bombardier Q400 aircraft cabin during cruise flight, due to the higher cruise altitude and lower Mach number of the LCTR2. LCTR2 cabin noise due to TBL excitation during cruise flight was found to be acceptable for crew and passengers when predictions were compared to an acoustic survey on a Q400 aircraft.

  8. Surface and Internal Waves due to a Moving Load on a Very Large Floating Structure

    Directory of Open Access Journals (Sweden)

    Taro Kakinuma

    2012-01-01

    Interaction of surface/internal water waves with a floating platform is discussed, taking into account the nonlinearity of the fluid motion and the flexibility of the oscillating structure. The set of governing equations based on a variational principle is applied to a one- or two-layer fluid interacting with a horizontally very large and elastic thin plate floating on the water surface. Calculation results of surface displacements are compared with existing experimental data, in which a tsunami, in the form of a solitary wave, propagates across one-layer water with a floating thin plate. We also simulate surface and internal waves due to a point load, such as an airplane, moving on a very large floating structure in shallow water. The wave height of the surface or internal mode is amplified when the velocity of the moving point load is equal to the surface- or internal-mode celerity, respectively.

  9. A stochastic model of depolarization enhancement due to large energy spread in electron storage rings

    International Nuclear Information System (INIS)

    Buon, J.

    1988-10-01

    A new semiclassical and stochastic model of spin diffusion is used to obtain numerical predictions for depolarization enhancement due to beam energy spread. It confirms the results of previous models for the synchrotron sidebands of isolated spin resonances. A satisfactory agreement is obtained with the width of a synchrotron satellite observed at SPEAR. For HERA and LEP, at Z⁰ energy, the depolarization enhancement is of the order of a few units and increases very rapidly with the energy spread. Large reduction of polarization degree is expected in these rings

  10. Analytic calculation of depolarization due to large energy spread in high-energy electron storage rings

    International Nuclear Information System (INIS)

    Buon, J.

    1989-08-01

    A new semiclassical and stochastic model of spin diffusion is used to obtain numerical predictions for depolarization enhancement due to beam energy spread. It confirms the results of previous models for the synchrotron sidebands of spin resonances. A satisfactory agreement is obtained with the width of a synchrotron satellite observed at SPEAR. For HERA, TRISTAN, and LEP at Z⁰ energy, the depolarization enhancement is of the order of a few units and increases very rapidly with the energy spread. Large reduction of polarization degree is expected in these rings

  11. Sensitiveness Analysis of Neutronic Parameters Due to Uncertainty in Thermo-hydraulic parameters on CAREM-25 Reactor

    International Nuclear Information System (INIS)

    Serra, Oscar

    2000-01-01

    Studies were performed on the effect of uncertainty in the values of several thermo-hydraulic parameters on the core behaviour of the CAREM-25 reactor. By using the chained codes CITVAP-THERMIT and perturbing the reference states, it was found that, concerning the total power, the effects were not very important, but they were much larger for the pressure. Furthermore, they were hardly significant in the presence of any perturbation of the void fraction calculation and the fuel temperature. The reactivity and the power peaking factor showed highly important changes in the case of the coolant flow. We conclude that the use of this procedure is adequate and useful for our purpose

  12. The Application of Best Estimate and Uncertainty Analysis Methodology to Large LOCA Power Pulse in a CANDU 6 Reactor

    International Nuclear Information System (INIS)

    Abdul-Razzak, A.; Zhang, J.; Sills, H.E.; Flatt, L.; Jenkins, D.; Wallace, D.J.; Popov, N.

    2002-01-01

    The paper briefly describes a best estimate plus uncertainty analysis (BE+UA) methodology and presents its prototyping application to the power pulse phase of a limiting large Loss-of-Coolant Accident (LOCA) for a CANDU 6 reactor fuelled with CANFLEX® fuel. The methodology is consistent with and builds on world practice. The analysis is divided into two phases to focus on the dominant parameters for each phase and to allow for the consideration of all identified highly ranked parameters in the statistical analysis and response surface fits for margin parameters. The objective of this analysis is to quantify improvements in predicted safety margins under best estimate conditions. (authors)

  13. Evaluation and correction of uncertainty due to Gaussian approximation in radar - rain gauge merging using kriging with external drift

    Science.gov (United States)

    Cecinati, F.; Wani, O.; Rico-Ramirez, M. A.

    2016-12-01

    It is widely recognised that merging radar rainfall estimates (RRE) with rain gauge data can improve the RRE and provide areal and temporal coverage that rain gauges cannot offer. Many methods to merge radar and rain gauge data are based on kriging and require an assumption of Gaussianity for the variable of interest. This work looks in particular at kriging with external drift (KED), because it is an efficient, widely used, and well-performing merging method. Rainfall, especially at finer temporal scales, is not normally distributed; it presents a bi-modal, skewed distribution. In some applications a Gaussianity assumption is made without any correction; in other cases, the variables are transformed to obtain a distribution closer to Gaussian. This work has two objectives: (1) to compare different transformation methods in merging applications; (2) to evaluate the uncertainty arising when untransformed rainfall data are used in KED. The comparison of transformation methods is addressed from two points of view: on the one hand, the ability to reproduce the original probability distribution after back-transformation of the merged products is evaluated with qq-plots; on the other hand, the rainfall estimates are compared with an independent set of rain gauge measurements. The tested methods are (1) no transformation; (2) Box-Cox transformation with parameter λ=0.5 (square root); (3) λ=0.25 (double square root); (4) λ=0.1 (almost logarithmic); (5) normal quantile transformation; and (6) singularity analysis. The uncertainty associated with the use of non-transformed data in KED is evaluated by comparison with the best-performing product. The methods are tested on a case study in Northern England, using hourly data from 211 tipping bucket rain gauges from the Environment Agency and radar rainfall data at 1 km/5-min resolution from the UK Met Office. In addition, 25 independent rain gauges from the UK Met Office were used to assess the merged products.
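
    The transformation step being compared can be sketched as below; the Box-Cox family and the normal quantile transform are standard, but the synthetic rain-like sample and the skewness check are illustrative assumptions (singularity analysis is omitted).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
rain = rng.gamma(shape=0.4, scale=2.0, size=1000)  # skewed, rain-like sample

def box_cox(x, lam):
    """Box-Cox transform; lam=0.5 ~ sqrt, 0.25 ~ double sqrt, 0.1 ~ near-log."""
    return np.log(x) if lam == 0 else (x**lam - 1.0) / lam

def normal_quantile_transform(x):
    """Map empirical quantiles onto a standard normal."""
    ranks = stats.rankdata(x)
    return stats.norm.ppf(ranks / (len(x) + 1.0))

# How close does each transform get to a symmetric (Gaussian-like) shape?
for lam in (0.5, 0.25, 0.1):
    z = box_cox(rain, lam)
    print(f"Box-Cox lam={lam}: skewness {stats.skew(z):+.2f}")
print(f"NQT: skewness {stats.skew(normal_quantile_transform(rain)):+.2f}")
```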

  14. Measuring uncertainty in dose delivered to the cochlea due to setup error during external beam treatment of patients with cancer of the head and neck

    Energy Technology Data Exchange (ETDEWEB)

    Yan, M.; Lovelock, D.; Hunt, M.; Mechalakos, J.; Hu, Y.; Pham, H.; Jackson, A., E-mail: jacksona@mskcc.org [Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, New York 10065 (United States)

    2013-12-15

    …the standard deviation of setup error was reduced by 31%, 42%, and 54% in the RL, AP, and SI directions, respectively, and consequently the uncertainty of the mean dose to the cochlea was reduced by more than 50%. The authors estimate that the effects of these uncertainties on the probability of hearing loss for an individual patient could be as large as 10%.

  15. Large abnormal peak on capillary zone electrophoresis due to contrast agent.

    Science.gov (United States)

    Wheeler, Rachel D; Zhang, Liqun; Sheldon, Joanna

    2017-01-01

    Background Some iodinated radio-contrast media absorb ultraviolet light and can therefore be detected by capillary zone electrophoresis. If seen, these peaks are typically small with 'quantifications' of below 5 g/L. Here, we describe the detection of a large peak on capillary zone electrophoresis that was due to the radio-contrast agent, Omnipaque™. Methods Serum from a patient was analysed by capillary zone electrophoresis, and the IgG, IgA, IgM and total protein concentrations were measured. The serum sample was further analysed by gel electrophoresis and immunofixation. Results Capillary zone electrophoresis results for the serum sample showed a large peak with a concentration high enough to warrant urgent investigation. However, careful interpretation alongside the serum immunoglobulin concentrations and total protein concentration showed that the abnormal peak was a pseudoparaprotein rather than a monoclonal immunoglobulin. This was confirmed by analysis with gel electrophoresis and also serum immunofixation. The patient had had a CT angiogram with the radio-contrast agent Omnipaque™; addition of Omnipaque™ to a normal serum sample gave a peak with comparable mobility to the pseudoparaprotein in the patient's serum. Conclusions Pseudoparaproteins can appear as a large band on capillary zone electrophoresis. This case highlights the importance of a laboratory process that detects significant electrophoretic abnormalities promptly and interprets them in the context of the immunoglobulin concentrations. This should avoid incorrect reporting of pseudoparaproteins which could result in the patient having unnecessary investigations.

  16. The assessment of damages due to climate change in a situation of uncertainty: the contribution of adaptation cost modelling

    International Nuclear Information System (INIS)

    Dumas, P.

    2006-01-01

    The aim of this research is to introduce new elements for the assessment of damages due to climate change within the framework of compact decision-support models. Two types of methodologies are used: stochastic sequential-optimisation models and stochastic simulation models with optimal assessment methods. The author first defines the damages, characterizes their different categories, and reviews the existing assessments. Notably, he distinguishes between damages due to the amount of climate change and damages due to its rate. He then presents the different models used in this study and their numerical solutions, and gives a rough estimate of the importance of the phenomena considered. By introducing a new category of capital into an optimal growth model, he establishes a framework for representing adaptation and its costs. He introduces inertia in macro-economic evolutions, climatic variability, the detection of climate change, and damages due to climate hazards.

  17. Toward adaptive radiotherapy for head and neck patients: Uncertainties in dose warping due to the choice of deformable registration algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Veiga, Catarina, E-mail: catarina.veiga.11@ucl.ac.uk; Royle, Gary [Radiation Physics Group, Department of Medical Physics and Biomedical Engineering, University College London, London WC1E 6BT (United Kingdom); Lourenço, Ana Mónica [Radiation Physics Group, Department of Medical Physics and Biomedical Engineering, University College London, London WC1E 6BT, United Kingdom and Acoustics and Ionizing Radiation Team, National Physical Laboratory, Teddington TW11 0LW (United Kingdom); Mouinuddin, Syed [Department of Radiotherapy, University College London Hospital, London NW1 2BU (United Kingdom); Herk, Marcel van [Department of Radiation Oncology, The Netherlands Cancer Institute, Amsterdam 1066 CX (Netherlands); Modat, Marc; Ourselin, Sébastien; McClelland, Jamie R. [Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, London WC1E 6BT (United Kingdom)

    2015-02-15

    Purpose: The aims of this work were to evaluate the performance of several deformable image registration (DIR) algorithms implemented in our in-house software (NiftyReg) and the uncertainties inherent to using different algorithms for dose warping. Methods: The authors describe a DIR based adaptive radiotherapy workflow, using CT and cone-beam CT (CBCT) imaging. The transformations that mapped the anatomy between the two time points were obtained using four different DIR approaches available in NiftyReg. These included a standard unidirectional algorithm and more sophisticated bidirectional ones that encourage or ensure inverse consistency. The forward (CT-to-CBCT) deformation vector fields (DVFs) were used to propagate the CT Hounsfield units and structures to the daily geometry for “dose of the day” calculations, while the backward (CBCT-to-CT) DVFs were used to remap the dose of the day onto the planning CT (pCT). Data from five head and neck patients were used to evaluate the performance of each implementation based on geometrical matching, physical properties of the DVFs, and similarity between warped dose distributions. Geometrical matching was verified in terms of dice similarity coefficient (DSC), distance transform, false positives, and false negatives. The physical properties of the DVFs were assessed calculating the harmonic energy, determinant of the Jacobian, and inverse consistency error of the transformations. Dose distributions were displayed on the pCT dose space and compared using dose difference (DD), distance to dose difference, and dose volume histograms. Results: All the DIR algorithms gave similar results in terms of geometrical matching, with an average DSC of 0.85 ± 0.08, but the underlying properties of the DVFs varied in terms of smoothness and inverse consistency. When comparing the doses warped by different algorithms, we found a root mean square DD of 1.9% ± 0.8% of the prescribed dose (pD) and that an average of 9% ± 4% of

  18. Large-Scale Uncertainty and Error Analysis for Time-dependent Fluid/Structure Interactions in Wind Turbine Applications

    Energy Technology Data Exchange (ETDEWEB)

    Alonso, Juan J. [Stanford University; Iaccarino, Gianluca [Stanford University

    2013-08-25

    The following is the final report covering the entire period of this aforementioned grant, June 1, 2011 - May 31, 2013 for the portion of the effort corresponding to Stanford University (SU). SU has partnered with Sandia National Laboratories (PI: Mike S. Eldred) and Purdue University (PI: Dongbin Xiu) to complete this research project and this final report includes those contributions made by the members of the team at Stanford. Dr. Eldred is continuing his contributions to this project under a no-cost extension and his contributions to the overall effort will be detailed at a later time (once his effort has concluded) on a separate project submitted by Sandia National Laboratories. At Stanford, the team is made up of Profs. Alonso, Iaccarino, and Duraisamy, post-doctoral researcher Vinod Lakshminarayan, and graduate student Santiago Padron. At Sandia National Laboratories, the team includes Michael Eldred, Matt Barone, John Jakeman, and Stefan Domino, and at Purdue University, we have Prof. Dongbin Xiu as our main collaborator. The overall objective of this project was to develop a novel, comprehensive methodology for uncertainty quantification by combining stochastic expansions (nonintrusive polynomial chaos and stochastic collocation), the adjoint approach, and fusion with experimental data to account for aleatory and epistemic uncertainties from random variable, random field, and model form sources. The expected outcomes of this activity were detailed in the proposal and are repeated here to set the stage for the results that we have generated during the time period of execution of this project: 1. The rigorous determination of an error budget comprising numerical errors in physical space and statistical errors in stochastic space and its use for optimal allocation of resources; 2. A considerable increase in efficiency when performing uncertainty quantification with a large number of uncertain variables in complex non-linear multi-physics problems; 3. A

  19. Nonlinear unbiased minimum-variance filter for Mars entry autonomous navigation under large uncertainties and unknown measurement bias.

    Science.gov (United States)

    Xiao, Mengli; Zhang, Yongbo; Fu, Huimin; Wang, Zhihua

    2018-05-01

    A high-precision navigation algorithm is essential for the future Mars pinpoint landing mission. The unknown inputs caused by large uncertainties in atmospheric density and aerodynamic coefficients, as well as unknown measurement biases, may cause large estimation errors in conventional Kalman filters. This paper proposes a derivative-free version of the nonlinear unbiased minimum-variance filter for Mars entry navigation. The filter is designed to solve this problem by estimating the state and the unknown measurement biases simultaneously in a derivative-free manner, leading to a high-precision algorithm for Mars entry navigation. IMU/radio-beacon integrated navigation is introduced in the simulation, and the results show that, with or without radio blackout, the proposed filter achieves accurate state estimation, much better than the conventional unscented Kalman filter, demonstrating its suitability as a high-precision Mars entry navigation algorithm. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  20. Large persistent photochromic effect due to DX centers in AlSb doped with selenium

    International Nuclear Information System (INIS)

    Becla, P.; Witt, A.G.

    1995-04-01

    A large photochromic effect has been observed in bulk AlSb crystals doped with Se. Illumination with light of energy higher than 1 eV leads to an increase of the absorption coefficient in the spectral range 0.1 to 1.6 eV. The enhanced absorption is persistent at temperatures below about K. The effect is a manifestation of a DX-like bistability of the Se donors. The illumination transfers electrons from the DX center to a metastable hydrogenic level. The increased absorption, with peaks around 0.2 eV and 0.5 eV, is due to photoionization from the donor level to the X1 and X3 minima of the conduction band.

  1. Enhancement of natural radiation and population exposures due to the activity of large steelworks

    Energy Technology Data Exchange (ETDEWEB)

    Niewiadomski, T; Godek, J; Jasinska, M; Wasiolek, P [Institute of Nuclear Physics, Krakow (Poland)

    1984-09-01

    Radionuclide releases and resulting population exposures from large industrial plants have recently become a subject of some public concern. Methods for assessing these effects were developed and, as an example, a complex of large steelworks located in the vicinity of the city of Krakow was investigated. The following critical pathways were considered: atmospheric release, and use of fly ash for production of building materials. For assessing annual average radionuclide concentrations in air and in soil around the works, a computer program was developed, while other mathematical methods were applied to the assessment of maximum individual effective dose equivalent commitments (EDEC) due to inhalation, ingestion, and external gamma radiation. In order to acquire data for the calculations, many samples of raw materials, coal, ash, and dust were analysed for their radionuclide concentration. The total individual EDEC at the place of maximum immission was estimated to be about 100 μSv a⁻¹ (i.e., about 6% of the natural exposure in this region), this being mainly due to ingestion (ca. 65 μSv a⁻¹) and to gamma radiation (ca. 30 μSv a⁻¹). The enhancement of dose rates over the ponds and of the radioactivity concentration of liquid discharges from the ponds was found to be negligible. Dose rates in houses built entirely of fly ash were estimated to be higher than those in red-brick houses by not more than 0.2 μSv a⁻¹. The collective EDEC from the operational discharge of the steelworks is less than 11 man·Sv a⁻¹ and that from the use of fly-ash prefabricated elements will in the future be less than 45 man·Sv a⁻¹.

  2. Number of deaths due to lung diseases: How large is the problem?

    International Nuclear Information System (INIS)

    Wagener, D.K.

    1990-01-01

    The importance of lung disease as an indicator of environmentally induced adverse health effects has been recognized by its inclusion among the Health Objectives for the Nation. The 1990 Health Objectives for the Nation (US Department of Health and Human Services, 1986) include an objective that there should be virtually no new cases among newly exposed workers for four preventable occupational lung diseases: asbestosis, byssinosis, silicosis, and coal workers' pneumoconiosis. This brief communication describes two types of cause-of-death statistics, underlying cause and multiple cause, and demonstrates the differences between the two using lung disease deaths among adult men. The choice of statistic has a large impact on estimated lung disease mortality rates. It may also have a large effect on the estimated mortality rates due to other chronic diseases thought to be environmentally mediated. Issues of comorbidity and the way causes of death are reported become important in the interpretation of these statistics. The choice of which statistic to use when comparing data from a study population with national statistics may greatly affect the interpretation of the study findings.

  3. Preliminary analysis of tank 241-C-106 dryout due to large postulated leak and vaporization

    Energy Technology Data Exchange (ETDEWEB)

    Piepho, M.G.

    1995-03-01

    At the Hanford site in SE Washington, there are 149 single-shell tanks containing radionuclide wastes in the form of liquids, sludges and salt cakes. One of the tanks, tank 241-C-106, is heated to the boiling point by radionuclide decay (primarily Sr-90). Water is added to the tank, which is ventilated, in order to cool it. This analysis assumes a hypothetical large leak at the bottom of tank 241-C-106 that initiates the dryout of the tank. The time required for a tank to dry out after a leak is of interest for safety reasons. As a tank dries out, its temperature is expected to increase greatly, which could affect the structural integrity of the concrete tank dome. Hence, it is of interest to know how fast the temperature in a leaking tank increases, so that mitigation procedures can be planned and implemented in a timely manner. The objective of the study was to determine how long it would take for tank 241-C-106 to reach 350 degrees Fahrenheit (about 177 degrees Celsius) after a postulated large leak develops at the bottom center of the tank.

  4. Unscheduled load flow effect due to large variation in the distributed generation in a subtransmission network

    Science.gov (United States)

    Islam, Mujahidul

    A sustainable energy delivery infrastructure implies the safe and reliable accommodation of large-scale penetration of renewable sources in the power grid. In this dissertation it is assumed that there will be no significant change in the power transmission and distribution structure currently in place, except in operating strategy and regulatory policy. That is to say, with the same old structure, the path towards a high penetration of switching power converters in the power system will be challenging. Among the dimensions of this challenge are power quality degradation, frequent false trips due to power system imbalance, and losses due to a large neutral current. The ultimate result is the reduced life of many power distribution components: transformers, switches and sophisticated loads. Numerous ancillary services are being developed and offered by the utility operators to mitigate these problems. These services will likely raise the system's operational cost, not only at the utility operators' end but also for the Independent System Operators and Regional Transmission Operators (RTOs), due to an unforeseen backlash of frequent variation in load-side or distributed generation. The North American transmission grid is an interconnected system similar to a large electrical circuit; this circuit was not planned as a whole but has been built up over 100 years. The natural laws of physics govern the power flow among loads and generators except where control mechanisms are installed. These control mechanisms have not matured enough to withstand a high penetration of variable generators at uncontrolled distribution ends. Unlike a radial distribution system, mesh or loop networks open up complex channels for real and reactive power flow. Significant variation in real power injection and absorption on the distribution side can emerge as a bias signal on the routing of reactive power in some physical links or channels that are not distinguishable

  5. Model uncertainties in top-quark physics

    CERN Document Server

    Seidel, Markus

    2014-01-01

    The ATLAS and CMS collaborations at the Large Hadron Collider (LHC) are studying the top quark in pp collisions at 7 and 8 TeV. Due to the large integrated luminosity, precision measurements of production cross-sections and properties are often limited by systematic uncertainties. An overview of the modeling uncertainties for simulated events is given in this report.

  6. Understanding uncertainties in non-linear population trajectories: a Bayesian semi-parametric hierarchical approach to large-scale surveys of coral cover.

    Directory of Open Access Journals (Sweden)

    Julie Vercelloni

    Full Text Available Recently, attempts to improve decision making in species management have focussed on uncertainties associated with modelling temporal fluctuations in populations. Reducing model uncertainty is challenging; while larger samples improve estimation of species trajectories and reduce statistical errors, they typically amplify variability in observed trajectories. In particular, traditional modelling approaches aimed at estimating population trajectories usually do not account well for nonlinearities and uncertainties associated with multi-scale observations characteristic of large spatio-temporal surveys. We present a Bayesian semi-parametric hierarchical model for simultaneously quantifying uncertainties associated with model structure and parameters, and scale-specific variability over time. We estimate uncertainty across a four-tiered spatial hierarchy of coral cover from the Great Barrier Reef. Coral variability is well described; however, our results show that, in the absence of additional model specifications, conclusions regarding coral trajectories become highly uncertain when considering multiple reefs, suggesting that management should focus more at the scale of individual reefs. The approach presented facilitates the description and estimation of population trajectories and associated uncertainties when variability cannot be attributed to specific causes and origins. We argue that our model can unlock value contained in large-scale datasets, provide guidance for understanding sources of uncertainty, and support better informed decision making.

  7. Large-scale determinants of diversity across Spanish forest habitats: accounting for model uncertainty in compositional and structural indicators

    Energy Technology Data Exchange (ETDEWEB)

    Martin-Quller, E.; Torras, O.; Alberdi, I.; Solana, J.; Saura, S.

    2011-07-01

    An integral understanding of forest biodiversity requires the exploration of the many aspects it comprises and of the numerous potential determinants of their distribution. The landscape ecological approach provides a necessary complement to conventional local studies that focus on individual plots or forest ownerships. However, most previous landscape studies used equally-sized cells as units of analysis to identify the factors affecting forest biodiversity distribution. Stratification of the analysis by habitats with a relatively homogeneous forest composition might be more adequate to capture the underlying patterns associated with the formation and development of a particular ensemble of interacting forest species. Here we used a landscape perspective in order to improve our understanding of the influence of large-scale explanatory factors on forest biodiversity indicators in Spanish habitats, covering a wide latitudinal and altitudinal range. We considered six forest biodiversity indicators estimated from more than 30,000 field plots in the Spanish national forest inventory, distributed over 213 forest habitats in 16 Spanish provinces. We explored the biodiversity response to various environmental (climate and topography) and landscape configuration (fragmentation and shape complexity) variables through multiple linear regression models (built and assessed through the Akaike Information Criterion). In particular, we took into account the inherent model uncertainty when dealing with a complex and large set of variables, and considered different plausible models and their probability of being the best candidate for the observed data. Our results showed that compositional indicators (species richness and diversity) were mostly explained by environmental factors. Models for structural indicators (standing deadwood and stand complexity) had the worst fits and selection uncertainties, but did show significant associations with some configuration metrics. In general
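
    The AIC-based treatment of model-selection uncertainty mentioned above can be illustrated with Akaike weights over a set of candidate regressions; the predictors, the toy data and the candidate set below are invented for illustration.

```python
import numpy as np

def aic_from_ols(y, X):
    """AIC of an ordinary least-squares fit (Gaussian likelihood)."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    return n * np.log(rss / n) + 2 * (k + 1)  # +1 for the error variance

rng = np.random.default_rng(1)
n = 200
climate = rng.normal(size=n)
topography = rng.normal(size=n)
fragmentation = rng.normal(size=n)
richness = 2.0 + 1.5 * climate + 0.3 * fragmentation + rng.normal(size=n)

ones = np.ones(n)
candidates = {
    "climate":      np.column_stack([ones, climate]),
    "climate+topo": np.column_stack([ones, climate, topography]),
    "climate+frag": np.column_stack([ones, climate, fragmentation]),
}
aics = {m: aic_from_ols(richness, X) for m, X in candidates.items()}
best = min(aics.values())
weights = {m: np.exp(-0.5 * (a - best)) for m, a in aics.items()}
total = sum(weights.values())
# Akaike weights: the probability of each model being the best candidate.
for m in candidates:
    print(f"{m:<13s} AIC {aics[m]:8.1f}  weight {weights[m] / total:.2f}")
```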

  8. Sensitivity/uncertainty analysis for the Hiroshima dosimetry reevaluation effort

    International Nuclear Information System (INIS)

    Broadhead, B.L.; Lillie, R.A.; Pace, J.V. III; Cacuci, D.G.

    1987-01-01

    Uncertainty estimates and cross correlations by range/survivor location have been obtained for the free-in-air (FIA) tissue kerma for the Hiroshima atomic event. These uncertainties in the FIA kerma include contributions due to various modeling parameters and the basic cross section data and are given at three ground ranges, 700, 1000 and 1500 m. The estimated uncertainties are nearly constant over the given ground ranges and are approximately 27% for the prompt neutron kerma and secondary gamma kerma and 35% for the prompt gamma kerma. The total kerma uncertainty is dominated by the secondary gamma kerma uncertainties which are in turn largely due to the modeling parameter uncertainties
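
    If the component kermas and their fractional uncertainties are known, the way a single component can dominate the total is easy to reproduce; the sketch below propagates uncorrelated component uncertainties in quadrature, with illustrative kerma magnitudes and the fractional uncertainties quoted in the abstract.

```python
import numpy as np

# Illustrative free-in-air kerma components at one ground range (arbitrary
# units) with the fractional uncertainties quoted in the abstract.
kerma = {"prompt_neutron": 1.0, "prompt_gamma": 0.5, "secondary_gamma": 4.0}
frac_unc = {"prompt_neutron": 0.27, "prompt_gamma": 0.35,
            "secondary_gamma": 0.27}

total = sum(kerma.values())
# Uncorrelated components: absolute uncertainties add in quadrature.
abs_var = sum((kerma[c] * frac_unc[c]) ** 2 for c in kerma)
total_frac = np.sqrt(abs_var) / total
print(f"total kerma uncertainty ~ {100 * total_frac:.0f}%")
# The largest component (secondary gamma) dominates the total, as noted.
```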

  9. Carbon nanotubes purification constrains due to large Fe–Ni/Al2O3 ...

    Indian Academy of Sciences (India)

    The phenomenon is due to liquid-like behaviour of the active phase at reaction temperature (700 °C) which is higher than both .... purified carbon nanotubes were washed with distilled water .... easy catalyst active phase extraction, followed by "tip-mode" .... racterization of porous solids and powders: surface area, pore.

  10. Is Business Failure Due to Lack of Effort? Empirical Evidence from a Large Administrative Sample

    NARCIS (Netherlands)

    Ejrnaes, M.; Hochguertel, S.

    2013-01-01

    Does insurance provision reduce entrepreneurs' effort to avoid business failure? We exploit unique features of the voluntary Danish unemployment insurance (UI) scheme, that is available to the self-employed. Using a large sample of self-employed individuals, we estimate the causal effect of

  11. Hydrodynamic changes due to large seabed installations in coastal waters off west coast of India

    Digital Repository Service at National Institute of Oceanography (India)

    Ilangovan, D.; Naik, K.A.; Anil, A.C.

    and further estimate or predict their influence on the environment. In the present context the physical environment is considered and termed in general as marine environment. Scaled physical models or numerical models are used both to understand... the prevailing marine environment as well as to predict changes in environment due to perturbations in the prevailing conditions. Though both physical modeling and numerical modeling have proved to provide reliable and reasonable results, numerical models...

  12. Large impact of reorganization energy on photovoltaic conversion due to interfacial charge-transfer transitions.

    Science.gov (United States)

    Fujisawa, Jun-ichi

    2015-05-14

    Interfacial charge-transfer (ICT) transitions are expected to provide a novel charge-separation mechanism for efficient photovoltaic conversion, featuring one-step charge separation without energy loss. Photovoltaic conversion due to ICT transitions has been investigated using several TiO2-organic hybrid materials that show organic-to-inorganic ICT transitions in the visible region. In applying ICT transitions to photovoltaic conversion, a significant problem is that rapid carrier recombination is caused by the organic-inorganic electronic coupling that is necessary for the ICT transitions. To address this problem, in this work I have theoretically studied light-to-current conversion due to ICT transitions on the basis of the Marcus theory, with density functional theory (DFT) and time-dependent DFT (TD-DFT) calculations. A clear correlation between the reported incident photon-to-current conversion efficiencies (IPCE) and the calculated reorganization energies was found, in which the IPCE increases with decreasing reorganization energy, consistent with the Marcus theory in the inverted region. This activation-energy dependence was systematically explained by the equation formulated by the Marcus theory based on a simple excited-state kinetic scheme. This result indicates that reducing the reorganization energy can suppress carrier recombination and enhance the IPCE. The reorganization energy is predominantly governed by the structural change in the chemical-adsorption moiety between the ground and ICT excited states. This work provides crucial knowledge for efficient photovoltaic conversion due to ICT transitions.
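
    The correlation invoked here rests on the classical Marcus expression for a nonadiabatic electron-transfer rate; the sketch below evaluates it in the inverted region, where shrinking the reorganization energy slows recombination. The coupling, free energy and temperature are assumed values for illustration.

```python
import numpy as np

KB_EV = 8.617e-5   # Boltzmann constant [eV/K]
HBAR_EV = 6.582e-16  # reduced Planck constant [eV*s]

def marcus_rate(dG, lam, Hab=0.01, T=300.0):
    """Classical nonadiabatic Marcus rate [1/s].

    dG  : reaction free energy [eV] (negative for downhill recombination)
    lam : reorganization energy [eV]
    Hab : electronic coupling [eV] (assumed value)
    """
    pref = (2 * np.pi / HBAR_EV) * Hab**2 / np.sqrt(
        4 * np.pi * lam * KB_EV * T)
    return pref * np.exp(-((dG + lam) ** 2) / (4 * lam * KB_EV * T))

# In the inverted region (|dG| > lam), a smaller lam makes the activation
# term (dG + lam)^2 / (4 lam kT) larger, so recombination gets slower:
for lam in (0.6, 0.4, 0.2):
    print(f"lam = {lam:.1f} eV -> recombination rate {marcus_rate(-1.5, lam):.2e} 1/s")
```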

  13. Error rate degradation due to switch crosstalk in large modular switched optical networks

    DEFF Research Database (Denmark)

    Saxtoft, Christian; Chidgey, P.

    1993-01-01

    A theoretical model of an optical network incorporating wavelength-selective elements, amplifiers, couplers and switches is presented. The model is used to evaluate a large modular switched optical network that provides the capability of adapting easily to changes in network traffic requirements. The network dimensions are shown to be limited by the optical crosstalk in the switch matrices and by the polarization-dependent loss in the optical components.

  14. Large spin accumulation due to spin-charge coupling across a break-junction

    Science.gov (United States)

    Chen, Shuhan; Zou, Han; Chui, Siu-Tat; Ji, Yi

    2013-03-01

    We investigate large spin signals in break-junction nonlocal spin valves (NLSV). The break-junction is a nanometer-sized vacuum tunneling gap between the spin detector and the nonmagnetic channel, formed by electrostatic discharge. The spin signals can be either inverted or non-inverted, and their magnitudes are much larger than those of standard NLSV. Spin signals with high percentage values (10% - 0%) have been observed. When the frequency of the a.c. modulation is varied, the absolute magnitudes of the signals remain the same although the percentage values change. These observations affirm the nonlocal nature of the measurements and rule out local magnetoresistive effects. Owing to the spin-charge coupling across the break-junction, the spin accumulation in a ferromagnet splits into two terms. One term decays on the charge screening length (0.1 nm) and the other decays on the spin diffusion length (10 nm). The magnitude of the former is proportional to the resistance of the junction. Therefore a highly resistive break-junction leads to a large spin accumulation and thereby a large spin signal. The sign of the spin signal is determined by the relationship between the spin-dependent conductivities, diffusion constants, and densities of states of the ferromagnet. This work was supported by US DOE grant No. DE-FG02-07ER46374.

  15. Internal dose assessment due to large area contamination: Main lessons drawn from the Chernobyl accident

    Energy Technology Data Exchange (ETDEWEB)

    Andrasi, A [KFKI Atomic Energy Research Inst., Budapest (Hungary)

    1997-03-01

    The reactor accident at Chernobyl in 1986, besides its serious and tragic consequences, also provided an excellent opportunity to check, test and validate the environmental models and calculation tools available in the emergency preparedness systems of different countries. Assessment of the internal and external doses due to the accident has been carried out for the population all over Europe using different methods. Dose predictions based on environmental model calculations considering various pathways have been compared with those obtained by more direct monitoring methods. One study from Hungary and one from the IAEA are presented briefly. (orig./DG)

  16. Internal dose assessment due to large area contamination: Main lessons drawn from the Chernobyl accident

    International Nuclear Information System (INIS)

    Andrasi, A.

    1997-01-01

    The reactor accident at Chernobyl in 1986, besides its serious and tragic consequences, also provided an excellent opportunity to check, test and validate the environmental models and calculation tools available in the emergency preparedness systems of different countries. Assessment of the internal and external doses due to the accident has been carried out for the population all over Europe using different methods. Dose predictions based on environmental model calculations considering various pathways have been compared with those obtained by more direct monitoring methods. One study from Hungary and one from the IAEA are presented briefly. (orig./DG)

  17. Emittance growth due to noise and its suppression with the Feedback system in large hadron colliders

    International Nuclear Information System (INIS)

    Lebedev, V.; Parkhomchuk, V.; Shiltsev, V.; Stupakov, G.

    1993-03-01

    The problem of emittance growth due to random fluctuations of the magnetic field in hadron colliders is considered. Based on a simple one-dimensional linear model, a formula for the emittance growth rate as a function of the noise spectrum is derived. Different sources of noise are analyzed and their role is estimated for the Superconducting Super Collider (SSC). A theory of feedback suppression of the emittance growth is developed which predicts the residual growth of the emittance in an accelerator with a feedback system.
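
    The mechanism, noise kicks driving a random walk of the beam that a damper can suppress, can be caricatured with a one-dimensional tracking toy; the tune, kick amplitude and gain below are invented, and the model tracks only the beam centroid, whose motion decoherence would eventually convert into emittance.

```python
import numpy as np

rng = np.random.default_rng(7)
n_turns = 100_000
mu = 2 * np.pi * 0.31   # betatron phase advance per turn (assumed tune)
kick_rms = 1e-4         # rms random dipole kick per turn (field noise)

def mean_square_amplitude(gain):
    """Track the beam centroid under noise kicks and a resistive damper."""
    x, xp = 0.0, 0.0
    c, s = np.cos(mu), np.sin(mu)
    amps = np.empty(n_turns)
    for i in range(n_turns):
        x, xp = c * x + s * xp, -s * x + c * xp  # linear one-turn map
        xp += kick_rms * rng.normal()            # random dipole kick
        xp -= gain * xp                          # feedback damper
        amps[i] = x * x + xp * xp
    return amps.mean()

# Without feedback the mean-square amplitude random-walks upward; with a
# modest gain it saturates at a much smaller equilibrium value.
print("no feedback :", mean_square_amplitude(0.0))
print("gain = 0.02 :", mean_square_amplitude(0.02))
```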

  18. Computer simulation of the emittance growth due to noise in large hadron colliders

    International Nuclear Information System (INIS)

    Lebedev, V.

    1993-03-01

    The problem of emittance growth due to random fluctuations of the magnetic field in a hadron collider is considered. The results of computer simulations are compared with the analytical theory developed earlier. A good agreement was found between the analytical theory predictions and the computer simulations for the collider tunes located far enough from high order betatron resonances. The dependencies of the emittance growth rate on noise spectral density, beam separation at the Interaction Point (IP) and value of beam separation at long range collisions are studied. The results are applicable to the Superconducting Super Collider (SSC)

  19. Uncertainties in planned dose due to the limited voxel size of the planning CT when treating lung tumors with proton therapy

    International Nuclear Information System (INIS)

    Espana, Samuel; Paganetti, Harald

    2011-01-01

    Dose calculation for lung tumors can be challenging due to the low density and the fine structure of the geometry. The latter is not fully captured at the CT image resolution used in treatment planning, causing the prediction of a more homogeneous tissue distribution. In proton therapy, this could result in predicting an unrealistically sharp distal dose falloff, i.e. an underestimation of the distal dose falloff degradation. The goal of this work was the quantification of such effects. Two computational phantoms resembling a two-dimensional heterogeneous random lung geometry and a swine lung were considered, applying a variety of voxel sizes for dose calculation. Monte Carlo simulations were used to compare the dose distributions predicted with the voxel size typically used in the treatment planning procedure with those expected to be delivered using the finest resolution. The results show, for example, distal falloff position differences of up to 4 mm between planned and expected dose at the 90% level for the heterogeneous random lung (assuming a treatment plan on a 2 × 2 × 2.5 mm³ grid). For the swine lung, differences of up to 38 mm were seen when airways are present in the beam path and the treatment plan was done on a 0.8 × 0.8 × 2.4 mm³ grid. The two-dimensional heterogeneous random lung phantom apparently does not describe the impact of the geometry adequately because of the lack of heterogeneities in the axial direction. The differences observed in the swine lung between planned and expected dose are presumably due to the poor axial resolution of the CT images used in clinical routine. In conclusion, when assigning margins for treatment planning for lung cancer, proton range uncertainties due to the heterogeneous lung geometry and CT image resolution need to be considered.
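
    The resolution effect can be caricatured by comparing the stopping depths of many fine-resolution rays with the single depth predicted by the laterally homogenized column that a coarse CT voxel would report; the stopping-power values and geometry below are invented, and the protons are reduced to a pure water-equivalent-path-length picture.

```python
import numpy as np

rng = np.random.default_rng(3)

# 50 parallel fine-resolution rays through heterogeneous "lung": each 0.5 mm
# voxel is tissue-like (1.0) or air-like (0.2) relative stopping power.
n_rays, n_vox, dz = 50, 400, 0.05            # dz in cm
rsp = np.where(rng.random((n_rays, n_vox)) < 0.5, 1.0, 0.2)

def stop_depth(column, dz, target_wepl=6.0):
    """Depth [cm] where the accumulated water-equivalent path length
    reaches the beam's water-equivalent range (where the proton stops)."""
    wepl = np.cumsum(column * dz)
    return (np.searchsorted(wepl, target_wepl) + 1) * dz

fine_depths = np.array([stop_depth(ray, dz) for ray in rsp])
homogenized = rsp.mean(axis=0)               # what a coarse voxel "sees"
print(f"planning (homogenized) range: {stop_depth(homogenized, dz):.2f} cm")
print(f"actual ranges: {fine_depths.mean():.2f} +/- {fine_depths.std():.2f} cm,"
      f" spread {fine_depths.max() - fine_depths.min():.2f} cm")
# The spread across rays is the distal falloff degradation that the
# homogenized (coarse) geometry cannot predict.
```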

  20. On axial temperature gradients due to large pressure drops in dense fluid chromatography.

    Science.gov (United States)

    Colgate, Sam O; Berger, Terry A

    2015-03-13

    The effect of energy degradation (degradation is the creation of net entropy resulting from irreversibility) accompanying pressure drops across chromatographic columns is examined with regard to explaining axial temperature gradients in both high performance liquid chromatography (HPLC) and supercritical fluid chromatography (SFC). The observed effects of warming and cooling can be explained equally well in the language of thermodynamics or fluid dynamics. The necessary equivalence of these treatments is reviewed here to show the legitimacy of using whichever one supports the simpler determination of features of interest. The determination of temperature profiles in columns by direct application of the laws of thermodynamics is somewhat simpler than applying them indirectly by solving the Navier-Stokes (NS) equations. Both disciplines show that the preferred strategy for minimizing the reduction in peak quality caused by temperature gradients is to operate columns as nearly adiabatically as possible (i.e. as Joule-Thomson expansions). This useful fact, however, is not widely familiar or appreciated in the chromatography community, due to some misunderstanding of the meaning of certain terms and expressions used in these disciplines. In fluid dynamics, the terms "resistive heating" or "frictional heating" have been widely used as synonyms for the dissipation function, Φ, in the NS energy equation. These terms have been widely used by chromatographers as well, but often misinterpreted as due to friction between the mobile phase and the column packing, when in fact Φ describes the increase in entropy of the system (dissipation, ∫T dS_univ > 0) due to the irreversible decompression of the mobile phase. Two distinctly different contributions to the irreversibility are identified: (1) ΔS_ext, viscous dissipation of work done by the external surroundings driving the flow (the pump), contributing to its warming, and (2) ΔS_int, entropy change accompanying decompression of

  1. A heating mechanism of ions due to large amplitude coherent ion acoustic wave

    International Nuclear Information System (INIS)

    Yajima, Nobuo; Kawai, Yoshinobu; Kogiso, Ken.

    1978-05-01

    The ion heating mechanism in a plasma with a coherent ion acoustic wave is studied experimentally and numerically. Ions are accelerated periodically in the electrostatic potential of the coherent wave, and their oscillation energy is converted into ion thermal energy through collisions with the neutral atoms in the plasma. A Monte Carlo calculation is applied to obtain the ion temperature. The amplitude of the electrostatic potential, the mean number of collisions and the mean lifetime of ions are treated as parameters in the calculation. The numerical results are compared with the experiments and agree well. It is found that the ion temperature increases as the amplitude of the coherent wave increases, and a high-energy tail in the ion distribution function is observed for the case of large wave amplitude. (author)

  2. Improved tolerance of abdominal large-volume radiotherapy due to ornithine aspartate

    International Nuclear Information System (INIS)

    Kuttig, H.

    1983-01-01

    The influence of ornithine aspartate in supporting hepatic function was investigated in a group of 47 patients with tumour dissemination in the pelvic and abdominal region, randomised on the basis of the course of the serum enzymes GOT, GPT, LAD, LDH, LAP and alkaline phosphatase during and following completion of a course of large-volume radiotherapy. The adjuvant therapy with ornithine aspartate resulted in reduced enzyme movement with an earlier tendency to normalisation. The results, which are borne out by statistics, clearly show an improvement in hepatic function through detoxication of toxic degradation products of radiotherapy, with reduced impairment of the body's own defence mechanisms. Subjectively too, the course of treatment with ornithine aspartate showed a reduced rate of side effects as regards lassitude and impairment of the patient's general well-being, compared with the group of patients to whom ornithine aspartate was not simultaneously administered. (orig.)

  3. Evaluation of surface contamination due to alpha using large area contamination monitors

    International Nuclear Information System (INIS)

    Raghavayya, M.

    1998-01-01

    Radioactive contamination at work places is evaluated routinely using either the swipe sampling technique or a contamination monitor. Commercially available alpha probes used for the purpose are usually circular and have a face diameter of 50 or 100 mm. Square-faced probes are also available. A thin aluminized mylar membrane of thickness 0.45 to 0.9 mg·cm⁻² is used to screen the phosphor in the alpha probe to protect it from external light. The membrane cuts off more alphas from low energy emitters than from higher energy alpha emitters. Moreover, the response of the detector for alphas originating at all points under the detector face is not uniform, especially when the large area alpha monitors are used. These factors can introduce errors as high as 40% into the measurements. This paper aims to quantify these errors and describe a procedure to overcome the limitations. (author)

  4. Dephasing due to Nuclear Spins in Large-Amplitude Electric Dipole Spin Resonance.

    Science.gov (United States)

    Chesi, Stefano; Yang, Li-Ping; Loss, Daniel

    2016-02-12

    We analyze the effects of the hyperfine interaction on electric dipole spin resonance when the amplitude of the quantum-dot motion becomes comparable to or larger than the quantum dot's size. Away from the well-known small-drive regime, the important role played by transverse nuclear fluctuations leads to a Gaussian decay with a characteristic dependence on drive strength and detuning. A characterization of spin-flip gate fidelity, in the presence of such additional drive-dependent dephasing, shows that vanishingly small errors can still be achieved at sufficiently large amplitudes. Based on our theory, we analyze recent electric dipole spin resonance experiments relying on spin-orbit interactions or the slanting field of a micromagnet. We find that such experiments are already in a regime with significant effects of transverse nuclear fluctuations, and the form of decay of the Rabi oscillations can be reproduced well by our theory.

  5. Large-scale dynamo action due to α fluctuations in a linear shear flow

    Science.gov (United States)

    Sridhar, S.; Singh, Nishant K.

    2014-12-01

    We present a model of large-scale dynamo action in a shear flow that has stochastic, zero-mean fluctuations of the α parameter. This is based on a minimal extension of the Kraichnan-Moffatt model to include a background linear shear and Galilean-invariant α-statistics. Using the first-order smoothing approximation we derive a linear integro-differential equation for the large-scale magnetic field, which is non-perturbative in the shearing rate S and the α-correlation time τα. The white-noise case, τα = 0, is solved exactly, and it is concluded that the necessary condition for dynamo action is identical to that of the Kraichnan-Moffatt model without shear; this is because white noise does not allow for memory effects, whereas shear needs time to act. To explore memory effects we reduce the integro-differential equation to a partial differential equation, valid for slowly varying fields when τα is small but non-zero. Seeking exponential modal solutions, we solve the modal dispersion relation and obtain an explicit expression for the growth rate as a function of the six independent parameters of the problem. A non-zero τα gives rise to new physical scales, and dynamo action is completely different from the white-noise case; e.g. even weak α fluctuations can give rise to a dynamo. We argue that, at any wavenumber, both Moffatt drift and shear always contribute to increasing the growth rate. Two examples are presented: (a) a Moffatt-drift dynamo in the absence of shear and (b) a shear dynamo in the absence of Moffatt drift.

  6. Water shortage risk assessment considering large-scale regional transfers: a copula-based uncertainty case study in Lunan, China.

    Science.gov (United States)

    Gao, Xueping; Liu, Yinzhu; Sun, Bowen

    2018-06-05

    The risk of water shortage caused by uncertainties such as frequent drought, varied precipitation, multiple water resources, and different water demands brings new challenges to water transfer projects. Uncertainties exist for both transferred water and local surface water; therefore, the relationship between them should be thoroughly studied to prevent water shortage. For more effective water management, an uncertainty-based water shortage risk assessment model (UWSRAM) is developed to study the combined effect of multiple water resources and analyze the shortage degree under uncertainty. The UWSRAM combines copula-based Monte Carlo stochastic simulation with a chance-constrained programming-stochastic multiobjective optimization model, using the Lunan water-receiving area in China as an example. Statistical copula functions are employed to estimate the joint probability of available transferred water and local surface water, and samples from this multivariate probability distribution are used as inputs to the optimization model. The approach reveals the distribution of water shortage, emphasizes the importance of improving and updating the management of transferred water and local surface water, and examines their combined influence on water shortage risk assessment. The possible available water and shortages can be calculated with the UWSRAM, together with the corresponding allocation measures under different water availability levels and violation probabilities. The UWSRAM is valuable for grasping the overall multi-water-resource situation and water shortage degree, adapting to the uncertainty surrounding water resources, establishing effective water resource planning policies for managers and achieving sustainable development.
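
    The copula-based sampling step can be sketched with a Gaussian copula; the correlation, the gamma marginals for the two supplies and the demand level below are invented, not fitted values from the Lunan case study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
rho = 0.6                                  # dependence of the two supplies
cov = np.array([[1.0, rho], [rho, 1.0]])

# 1) sample correlated standard normals, 2) map them to uniforms with the
# normal CDF (the Gaussian copula), 3) map to the marginals via their PPFs.
z = rng.multivariate_normal([0.0, 0.0], cov, size=10_000)
u = stats.norm.cdf(z)
transfer_water = stats.gamma.ppf(u[:, 0], a=4.0, scale=25.0)  # Mm^3
local_surface = stats.gamma.ppf(u[:, 1], a=2.0, scale=40.0)   # Mm^3

demand = 200.0  # Mm^3, illustrative regional demand
shortage = np.maximum(demand - (transfer_water + local_surface), 0.0)
print(f"P(shortage) = {np.mean(shortage > 0):.2f}, "
      f"mean shortage given shortage = {shortage[shortage > 0].mean():.1f} Mm^3")
```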

  7. Risk of a false decision on conformity of an environmental compartment due to measurement uncertainty of concentrations of two or more pollutants.

    Science.gov (United States)

    Pennecchi, Francesca R; Kuselman, Ilya; da Silva, Ricardo J N B; Hibbert, D Brynn

    2018-07-01

    Risks of false decisions in the conformity assessment of an environmental compartment due to measurement uncertainty of the concentrations of two or more pollutants are discussed. Even if the assessment of conformity for each pollutant in the compartment is successful, the total probability of a false decision concerning the compartment as a whole might still be significant. A model of the total probability of a false decision, formulated on the basis of the law of total probability, is used for a study of test results of total suspended particulate matter (TSPM) concentration in ambient air near three independent stone quarries located in Israel, the sources of the air pollution. The total probabilities of underestimating the TSPM concentration (the total risk for the inhabitants) and of overestimating it (the total risk for the stone producers) are evaluated as combinations of the particular risks of the air conformity assessment for each quarry. These probabilities characterize the conformity of the TSPM concentration in the region of the quarries as a whole. Core code developed in the R programming environment for the calculations is provided. Copyright © 2018 Elsevier Ltd. All rights reserved.
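
    The combination rule at the heart of such a model can be illustrated in a few lines; the sketch below (in Python rather than the paper's R, with invented risk values) assumes the three particular risks are independent, so the total risk is the complement of all individual decisions being correct.

```python
# Particular risks of a false decision for each of the three quarries
# (illustrative numbers, e.g. probabilities of underestimating TSPM).
particular_risks = [0.03, 0.05, 0.02]

# Law of total probability under independence: the region is misjudged
# unless every individual conformity decision is correct.
all_correct = 1.0
for p in particular_risks:
    all_correct *= (1.0 - p)
total_risk = 1.0 - all_correct

print(f"total risk = {total_risk:.3f}")  # ~0.097, above any single risk
```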

  8. MEASUREMENT OF RF LOSSES DUE TO TRAPPED FLUX IN A LARGE-GRAIN NIOBIUM CAVITY

    International Nuclear Information System (INIS)

    Gianluigi Ciovati; Alex Gurevich

    2008-01-01

    Trapped magnetic field in superconducting niobium is a well-known cause of radio-frequency (RF) residual losses. In this contribution, we present the results of RF tests on a single-cell cavity made of high-purity large-grain niobium before and after allowing a fraction of the Earth's magnetic field to be trapped in the cavity during the cooldown below the critical temperature Tc. This experiment has been done on the cavity before and after a low-temperature baking. Temperature mapping allowed us to determine the location of hot-spots with high losses and to measure their field dependence. The results show not only an increase of the low-field residual resistance, but also a larger increase of the surface resistance for intermediate RF field (higher "medium-field Q-slope"), which depends on the amount of the trapped flux. These additional field-dependent losses can be described as losses of pinned vortices oscillating under the applied RF magnetic field.

  9. Environmental contamination due to release of a large amount of tritium

    International Nuclear Information System (INIS)

    Kawai, Hiroshi

    1988-01-01

    Tritium release incidents have occurred many times at the Savannah River Plant in the U.S. A tritium release incident also took place at the Lawrence Livermore Laboratory. The present article outlines the reports by the plant and the laboratory on these incidents and makes some comments on the environmental contamination that may result from the release of a large amount of tritium from nuclear fusion facilities. Tritium is normally released as a combination of chemical compounds such as HT, DT and T2 and oxides such as HTO, DTO and T2O. The percentage of the oxides is given in the reports by the plant. The oxides, which can be absorbed through the skin, are considered to be nearly a thousand times more toxic than the other type of tritium compounds. The HT-type compounds (HT, DT and T2) can be oxidized by microorganisms in soil into oxides (HTO, DTO and T2O), and therefore great care should also be given to this type of compounds. After each accidental tritium release, the health physics group of the plant collected various environmental samples, including ground surface water, milk, leaves of plants, soil and human urine, in leeward areas. Results on the contamination of surface water, fish and underground water are outlined and discussed. (Nogami, K.)

  10. Large-scale structuring of a rotating plasma due to plasma macroinstabilities

    International Nuclear Information System (INIS)

    Kikuchi, Toshinori; Ikehata, Takashi; Sato, Naoyuki; Watahiki, Takeshi; Tanabe, Toshio; Mase, Hiroshi

    1995-01-01

    The formation of coherent structures during plasma macroinstabilities has been of interest in view of nonlinear plasma physics. In the present paper we have investigated in detail the mechanism and specific features of the large-scale structuring of a rotating plasma. In the case of a weak magnetic field, the plasma ejected from a plasma gun has a high beta value (β > 1), so that it expands rapidly across the magnetic field, excluding the magnetic flux from its interior. The boundary between the expanding plasma and the magnetic field then becomes unstable against the Rayleigh-Taylor instability. This instability has a higher growth rate at shorter wavelengths, and the mode appears as flutes. These features of the instability are confirmed by the observation of radial plasma jets with azimuthal mode numbers m=20-40 in the early phase of the plasma expansion. In the case of a strong magnetic field, on the other hand, the plasma expands little and rotates at twice the ion sound speed. In particular, we observe spiral jets of m=2 instead of short-wavelength radial jets. This mode appears only when a glass target is installed or a dense neutral gas is introduced around the plasma to exert a frictional force on the plasma. From these results, and with reference to the theory of plasma instabilities, the centrifugal instability caused by a combination of the velocity shear and the centrifugal force is concluded to be responsible for the formation of the spiral jets. (author)

  11. Large magnetocaloric effect of NdGa compound due to successive magnetic transitions

    Science.gov (United States)

    Zheng, X. Q.; Xu, J. W.; Shao, S. H.; Zhang, H.; Zhang, J. Y.; Wang, S. G.; Xu, Z. Y.; Wang, L. C.; Chen, J.; Shen, B. G.

    2018-05-01

    The magnetic behavior and magnetocaloric effect (MCE) of the NdGa compound were studied in detail. In the temperature dependence of the magnetization (M-T curve) at 0.01 T, two sharp changes were observed, at 20 K (TSR) and 42 K (TC), corresponding to the spin reorientation and the FM-PM transition, respectively. Isothermal magnetization curves up to 5 T were measured at different temperatures, and the magnetic entropy change (ΔSM) was calculated from the M-H data. The temperature dependences of -ΔSM for field changes of 0-2 T and 0-5 T show two peaks, corresponding to TSR and TC, respectively. The peak values are 6.4 J/kg K and 15.5 J/kg K for the field change of 0-5 T. Since the two peaks are close, -ΔSM retains a large value over the whole temperature range between TSR and TC. The excellent MCE performance of the NdGa compound benefits from the existence of the two successive magnetic transitions.
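
    The ΔSM calculation from M-H data conventionally uses the Maxwell relation, ΔSM(T, 0→H) = ∫₀ᴴ (∂M/∂T)_H' dH', evaluated numerically; the sketch below applies it to a synthetic magnetization surface with a transition near 42 K, since the measured NdGa data are not reproduced here.

```python
import numpy as np
from scipy.integrate import trapezoid

# Synthetic M(T, H) surface with a FM-PM transition near Tc = 42 K (toy
# data; a real analysis would use the measured isothermal M-H curves).
T = np.linspace(10.0, 80.0, 71)   # temperature grid [K]
H = np.linspace(0.0, 5.0, 51)     # field grid [T]
TT, HH = np.meshgrid(T, H, indexing="ij")
M = 150.0 / (1.0 + np.exp((TT - 42.0) / 4.0)) * np.tanh(2.0 * HH)

# Maxwell relation: dS = (dM/dT)_H dH, integrated over the field change.
dMdT = np.gradient(M, T, axis=0)      # (dM/dT) at each field value
dS = trapezoid(dMdT, H, axis=1)       # entropy change for 0 -> 5 T
peak = np.argmax(-dS)
print(f"-dS_M peaks at T = {T[peak]:.0f} K, value {-dS[peak]:.1f} (arb. units)")
```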

  12. Increasing stress on disaster-risk finance due to large floods

    Science.gov (United States)

    Jongman, Brenden; Hochrainer-Stigler, Stefan; Feyen, Luc; Aerts, Jeroen C. J. H.; Mechler, Reinhard; Botzen, W. J. Wouter; Bouwer, Laurens M.; Pflug, Georg; Rojas, Rodrigo; Ward, Philip J.

    2014-04-01

    Recent major flood disasters have shown that single extreme events can affect multiple countries simultaneously, which puts high pressure on trans-national risk reduction and risk transfer mechanisms. So far, little is known about such flood hazard interdependencies across regions and the corresponding joint risks at regional to continental scales. Reliable information on correlated loss probabilities is crucial for developing robust insurance schemes and public adaptation funds, and for enhancing our understanding of climate change impacts. Here we show that extreme discharges are strongly correlated across European river basins. We present probabilistic trends in continental flood risk, and demonstrate that observed extreme flood losses could more than double in frequency by 2050 under future climate change and socio-economic development. We suggest that risk management for these increasing losses is largely feasible, and we demonstrate that risk can be shared by expanding risk transfer financing, reduced by investing in flood protection, or absorbed by enhanced solidarity between countries. We conclude that these measures have vastly different efficiency, equity and acceptability implications, which need to be taken into account in broader consultation, for which our analysis provides a basis.

  13. The Effect of Sensory Uncertainty Due to Amblyopia (Lazy Eye) on the Planning and Execution of Visually-Guided 3D Reaching Movements

    Science.gov (United States)

    Niechwiej-Szwedo, Ewa; Goltz, Herbert C.; Chandrakumar, Manokaraananthan; Wong, Agnes M. F.

    2012-01-01

    Background Impairment of spatiotemporal visual processing in amblyopia has been studied extensively, but its effects on visuomotor tasks have rarely been examined. Here, we investigate how visual deficits in amblyopia affect motor planning and online control of visually-guided, unconstrained reaching movements. Methods Thirteen patients with mild amblyopia, 13 with severe amblyopia and 13 visually-normal participants were recruited. Participants reached and touched a visual target during binocular and monocular viewing. Motor planning was assessed by examining spatial variability of the trajectory at 50–100 ms after movement onset. Online control was assessed by examining the endpoint variability and by calculating the coefficient of determination (R²) which correlates the spatial position of the limb during the movement to endpoint position. Results Patients with amblyopia had reduced precision of the motor plan in all viewing conditions as evidenced by increased variability of the reach early in the trajectory. Endpoint precision was comparable between patients with mild amblyopia and control participants. Patients with severe amblyopia had reduced endpoint precision along azimuth and elevation during amblyopic eye viewing only, and along the depth axis in all viewing conditions. In addition, they had significantly higher R² values at 70% of movement time along the elevation and depth axes during amblyopic eye viewing. Conclusion Sensory uncertainty due to amblyopia leads to reduced precision of the motor plan. The ability to implement online corrections depends on the severity of the visual deficit, viewing condition, and the axis of the reaching movement. Patients with mild amblyopia used online control effectively to compensate for the reduced precision of the motor plan. In contrast, patients with severe amblyopia were not able to use online control as effectively to amend the limb trajectory especially along the depth axis, which could be due to their

  14. Chromaticity decay due to superconducting dipoles on the injection plateau of the Large Hadron Collider

    Directory of Open Access Journals (Sweden)

    N. Aquilina

    2012-03-01

    It is well known that in a superconducting accelerator a significant chromaticity drift can be induced by the decay of the sextupolar component of the main dipoles. In this paper we give a brief overview of what was expected for the Large Hadron Collider (LHC) on the grounds of magnetic measurements of individual dipoles carried out during production. According to this analysis, the decay time constants were of the order of 200 s: since injection in the LHC starts at least 30 minutes after the magnets are at constant current, a dynamic correction of this effect was not considered necessary. The first beam measurements of chromaticity, however, showed significant decay even after a few hours. For this reason, a dynamic correction of the decay on the injection plateau was implemented based on beam measurements: during the injection plateau the sextupole correctors are powered with a varying current to cancel out the decay of the dipoles. This strategy has been implemented successfully. A similar phenomenon has been observed for the dependence of the decay amplitude on the powering history of the dipoles: according to magnetic measurements, the time constants are again of the order of 200 s, so no difference would be expected between a one-hour and a ten-hour flattop. The beam measurements, on the other hand, show a significant change of the decay between these two conditions. The origin of these discrepancies is not yet understood. We give a complete overview of the two effects and of the modifications made to the field model parameters to obtain a final chromaticity correction within a few units.
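
    For illustration, the decay and its dynamic correction can be pictured with a single-exponential model; the sketch below is a toy Python rendering under that assumption, with the 200 s time constant taken from the text and the decay amplitude invented.

```python
# Illustrative single-exponential decay model for the sextupole component b3
# of the dipoles on the injection plateau, and the corrector setting that
# would cancel it. Parameter values are made up for illustration; the paper
# reports measured time constants of order 200 s.
import numpy as np

tau = 200.0          # decay time constant [s] (order of magnitude from text)
delta_b3 = 1.0       # total decay amplitude (assumed units)

def b3_decay(t):
    """Decay of the dipole b3 during the plateau (t in seconds)."""
    return -delta_b3 * (1.0 - np.exp(-t / tau))

def corrector_b3(t):
    """Sextupole-corrector contribution needed to cancel the drift."""
    return -b3_decay(t)

t = np.linspace(0, 3600, 7)           # one-hour plateau
print(b3_decay(t) + corrector_b3(t))  # sums to zero: chromaticity held flat
```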

  15. Uncertainties of Large-Scale Forcing Caused by Surface Turbulence Flux Measurements and the Impacts on Cloud Simulations at the ARM SGP Site

    Science.gov (United States)

    Tang, S.; Xie, S.; Tang, Q.; Zhang, Y.

    2017-12-01

    Two types of instruments, the eddy correlation flux measurement system (ECOR) and the energy balance Bowen ratio system (EBBR), are used at the Atmospheric Radiation Measurement (ARM) program Southern Great Plains (SGP) site to measure surface latent and sensible fluxes. ECOR and EBBR typically sample different land surface types, and the domain-mean surface fluxes derived from ECOR and EBBR are not always consistent. The uncertainties of the surface fluxes will have impacts on the derived large-scale forcing data and further affect the simulations of single-column models (SCM), cloud-resolving models (CRM) and large-eddy simulation models (LES), especially for the shallow-cumulus clouds which are mainly driven by surface forcing. This study aims to quantify the uncertainties of the large-scale forcing caused by surface turbulence flux measurements and investigate the impacts on cloud simulations using long-term observations from the ARM SGP site.

  16. Comparison of Two Methods for Estimating the Sampling-Related Uncertainty of Satellite Rainfall Averages Based on a Large Radar Data Set

    Science.gov (United States)

    Lau, William K. M. (Technical Monitor); Bell, Thomas L.; Steiner, Matthias; Zhang, Yu; Wood, Eric F.

    2002-01-01

    The uncertainty of rainfall estimated from averages of discrete samples collected by a satellite is assessed using a multi-year radar data set covering a large portion of the United States. The sampling-related uncertainty of rainfall estimates is evaluated for all combinations of 100 km, 200 km, and 500 km space domains, 1 day, 5 day, and 30 day rainfall accumulations, and regular sampling time intervals of 1 h, 3 h, 6 h, 8 h, and 12 h. These extensive analyses are combined to characterize the sampling uncertainty as a function of space and time domain, sampling frequency, and rainfall characteristics by means of a simple scaling law. Moreover, it is shown that both parametric and non-parametric statistical techniques of estimating the sampling uncertainty produce comparable results. Sampling uncertainty estimates, however, do depend on the choice of technique for obtaining them. They can also vary considerably from case to case, reflecting the great variability of natural rainfall, and should therefore be expressed in probabilistic terms. Rainfall calibration errors are shown to affect comparison of results obtained by studies based on data from different climate regions and/or observation platforms.
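
    The flavor of the non-parametric approach can be illustrated in a few lines: sample a complete rain series at a regular interval with every possible phase offset and read the sampling uncertainty off the spread of the resulting means. The sketch below uses synthetic gamma-distributed rain in place of the radar data.

```python
# Minimal non-parametric illustration of the sampling experiment described
# above: sample a "complete" rain-rate series at a regular interval with all
# possible phase offsets and compare the sampled means to the true mean.
# Synthetic data stands in for the radar set; everything here is a sketch.
import numpy as np

rng = np.random.default_rng(1)
rain = rng.gamma(shape=0.2, scale=5.0, size=30 * 24)  # 30 days, hourly [mm/h]

def sampling_rel_std(series, interval_h):
    true_mean = series.mean()
    est = [series[off::interval_h].mean() for off in range(interval_h)]
    return np.std(est) / true_mean  # relative sampling uncertainty

for dt in (1, 3, 6, 12):
    print(f"sampling every {dt:2d} h: relative uncertainty "
          f"{sampling_rel_std(rain, dt):.3f}")
```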

  17. The role of social cost-benefit analysis in societal decision-making under large uncertainties with application to robbery at a cash depot

    International Nuclear Information System (INIS)

    Jones-Lee, M.; Aven, T.

    2009-01-01

    Social cost-benefit analysis is a well-established method for guiding decisions about safety investments, particularly in situations in which it is possible to make accurate predictions of future performance. However, its direct applicability to situations involving large degrees of uncertainty is less obvious and this raises the question of the extent to which social cost-benefit analysis can provide a useful input to the decision framework that has been explicitly developed to deal with safety decisions in which uncertainty is a major factor, namely risk analysis. This is the main focus of the arguments developed in this paper. In particular, we provide new insights by examining the fundamentals of both approaches and our principal conclusion is that social cost-benefit analysis and risk analysis represent complementary input bases to the decision-making process, and even in the case of large uncertainties social cost-benefit analysis may provide very useful decision support. What is required is the establishment of a proper contextual framework which structures and gives adequate weight to the uncertainties. An application to the possibility of a robbery at a cash depot is examined as a practical example.

  18. Temperature field due to time-dependent heat sources in a large rectangular grid - Derivation of analytical solution

    International Nuclear Information System (INIS)

    Claesson, J.; Probert, T.

    1996-01-01

    The temperature field in rock due to a large rectangular grid of heat-releasing canisters containing nuclear waste is studied. By superposition, the solution is divided into different parts: a global temperature field due to the large rectangular canister area, and a local field that accounts for the remaining heat source problem. The global field is reduced to a single integral. The local field is also solved analytically using solutions for a finite line heat source and for an infinite grid of point sources. The local solution is reduced to three parts, each of which depends on two spatial coordinates only. The temperatures at the envelope of a canister are given by a single thermal resistance, for which an explicit formula is given. The results are illustrated by a few numerical examples dealing with the KBS-3 concept for storage of nuclear waste. 8 refs
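
    One of the standard building blocks behind such superposition solutions is the constant-strength infinite line source, whose temperature rise involves the exponential integral; the sketch below evaluates it with illustrative rock parameters (not the paper's values).

```python
# Temperature rise at radius r from an infinite line source of constant
# strength q [W/m] in rock of conductivity lam and diffusivity a:
#   dT = q / (4*pi*lam) * E1(r^2 / (4*a*t))
# This is one standard kernel from which finite-line and grid solutions are
# built by superposition; parameter values below are illustrative only.
from math import pi
from scipy.special import exp1

q, lam, a = 30.0, 3.0, 1.6e-6   # W/m, W/(m K), m^2/s (illustrative values)

def dT_line(r, t):
    return q / (4 * pi * lam) * exp1(r * r / (4 * a * t))

for years in (1, 10, 100):
    t = years * 3.15e7          # seconds
    print(years, "yr:", round(dT_line(1.0, t), 2), "K at r = 1 m")
```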

  19. W nano-fuzzes: A metastable state formed due to large-flux He+ irradiation at an elevated temperature

    International Nuclear Information System (INIS)

    Wu, Yunfeng; Liu, Lu; Lu, Bing; Ni, Weiyuan; Liu, Dongping

    2016-01-01

    W nano-fuzzes have been formed by large-flux, low-energy (200 eV) He+ irradiation at a W surface temperature of 1480 °C. The microscopic evolution of W nano-fuzzes during annealing or low-energy (200 eV) He+ bombardment has been observed using scanning electron microscopy and thermal desorption spectroscopy. Our measurements show that both annealing and He+ bombardment can significantly alter the structure of W nano-fuzzes. W nano-fuzzes are thermally unstable due to He release during annealing, and they are easily sputtered during He+ bombardment. The current study shows that W nano-fuzzes act as a metastable state during low-energy, large-flux He+ irradiation at an elevated temperature. - Highlights: • The microscopic evolution of W nano-fuzzes during annealing or He+ irradiation has been measured. • W nano-fuzzes are thermally unstable due to He release during annealing. • He is released from the top layer of W fuzzes by annealing. • Metastable W nano-fuzzes are formed due to He+ irradiation at an elevated temperature.

  20. Sudden Death by Pulmonary Thromboembolism due to a Large Uterine Leiomyoma with a Parasitic Vein to the Mesentery

    Directory of Open Access Journals (Sweden)

    Varsha Podduturi

    2014-01-01

    The pathophysiology of venous thrombosis is classically attributed to alterations in one or more components of Virchow's triad: hypercoagulability, stasis, and damage to the vascular endothelium. Deep vein thrombosis (DVT) may lead to pulmonary thromboembolism (PE), and the latter is responsible for many deaths annually in the United States; however, DVT as a complication of uterine leiomyoma has rarely been reported. We report a case of a 57-year-old woman whose death was due to a large pedunculated subserosal leiomyoma externally compressing the pelvic veins, resulting in stasis and venous thrombosis leading to fatal PE. The association of large pelvic masses with venous thrombosis has clinical implications, since prophylactic surgery could be life-saving.

  1. Application of code scaling, applicability and uncertainty methodology to large break LOCA analysis of two loop PWR

    International Nuclear Information System (INIS)

    Mavko, B.; Stritar, A.; Prosek, A.

    1993-01-01

    In NED 119, No. 1 (May 1990) a series of six papers published by a Technical Program Group presented a new methodology for the safety evaluation of emergency core cooling systems in nuclear power plants. This paper describes the application of that new methodology to the LB LOCA analysis of a two-loop Westinghouse power plant. Results of the original work were used wherever possible, so that the analysis was finished in less than one man-year of work. Steam generator plugging level and safety injection flow rate were used as additional uncertainty parameters, which had not been used in the original work. The computer code RELAP5/MOD2 was used. The response surface was generated by regression analysis and by the artificial-neural-network-like Optimal Statistical Estimator method. Results were also compared to the analytical calculation. (orig.)

  2. Assessing the Roles of Regional Climate Uncertainty, Policy, and Economics on Future Risks to Water Stress: A Large-Ensemble Pilot Case for Southeast Asia

    Science.gov (United States)

    Schlosser, C. A.; Strzepek, K. M.; Gao, X.; Fant, C. W.; Blanc, E.; Monier, E.; Sokolov, A. P.; Paltsev, S.; Arndt, C.; Prinn, R. G.; Reilly, J. M.; Jacoby, H.

    2013-12-01

    The fate of natural and managed water resources is controlled to varying degrees by interlinked energy, agricultural, and environmental systems, as well as the hydro-climate cycles. The need for risk-based assessments of impacts and adaptation to regional change calls for likelihood quantification of outcomes via the representation of uncertainty - to the fullest extent possible. A hybrid approach of the MIT Integrated Global System Model (IGSM) framework provides probabilistic projections of regional climate change - generated in tandem with consistent socio-economic projections. A Water Resources System (WRS) then tracks water allocation and availability across these competing demands. As such, the IGSM-WRS is an integrated tool that provides quantitative insights on the risks and sustainability of water resources over large river basins. This pilot project focuses the IGSM-WRS on Southeast Asia. This region presents exceptional challenges toward sustainable water resources given its texture of basins that traverse and interconnect developing nations as well as large, ascending economies and populations - such as China and India. We employ the IGSM-WRS in a large ensemble of outcomes spanning hydro-climatic, economic, and policy uncertainties. For computational efficiency, a Gaussian Quadrature procedure sub-samples these outcomes. The IGSM-WRS impacts are quantified through frequency distributions of water stress changes. The results allow for interpretation of the effects of policy measures, impacts on food production, and the value of design flexibility of infrastructure/institutions. An area of model development and exploration is the feedback of water-stress shocks to economic activity (i.e. GDP and land use). We discuss these further results (where possible) as well as other efforts to refine uncertainty methods, add greater basin-level and climate detail, and improve the process-level representation of glacial melt-water sources.

  3. A continental-scale hydrology and water quality model for Europe: Calibration and uncertainty of a high-resolution large-scale SWAT model

    Science.gov (United States)

    Abbaspour, K. C.; Rouholahnejad, E.; Vaghefi, S.; Srinivasan, R.; Yang, H.; Kløve, B.

    2015-05-01

    A combination of driving forces is increasing pressure on local, national, and regional water supplies needed for irrigation, energy production, industrial uses, domestic purposes, and the environment. In many parts of Europe groundwater quantity, and in particular quality, has come under severe degradation, and water levels have decreased, resulting in negative environmental impacts. Rapid improvements in the economies of the eastern European bloc of countries and uncertainties with regard to freshwater availability create challenges for water managers. At the same time, climate change adds a new level of uncertainty with regard to freshwater supplies. In this research we build and calibrate an integrated hydrological model of Europe using the Soil and Water Assessment Tool (SWAT) program. Different components of water resources are simulated, and crop yield and water quality are considered at the Hydrological Response Unit (HRU) level. The water resources are quantified at subbasin level with monthly time intervals. Leaching of nitrate into groundwater is also simulated at a finer spatial level (HRU). The use of large-scale, high-resolution water resources models enables consistent and comprehensive examination of integrated system behavior through physically-based, data-driven simulation. In this article we discuss issues with data availability and the calibration of large-scale distributed models, and outline procedures for model calibration and uncertainty analysis. The calibrated model and results provide information support to the European Water Framework Directive and lay the basis for further assessment of the impact of climate change on water availability and quality. The approach and methods developed are general and can be applied to any large region around the world.

  4. Aleatoric and epistemic uncertainties in sampling based nuclear data uncertainty and sensitivity analyses

    International Nuclear Information System (INIS)

    Zwermann, W.; Krzykacz-Hausmann, B.; Gallner, L.; Klein, M.; Pautz, A.; Velkov, K.

    2012-01-01

    Sampling based uncertainty and sensitivity analyses due to epistemic input uncertainties, i.e. to an incomplete knowledge of uncertain input parameters, can be performed with arbitrary application programs to solve the physical problem under consideration. For the description of steady-state particle transport, direct simulations of the microscopic processes with Monte Carlo codes are often used. This introduces an additional source of uncertainty, the aleatoric sampling uncertainty, which is due to the randomness of the simulation process performed by sampling, and which adds to the total combined output sampling uncertainty. So far, this aleatoric part of the uncertainty has been minimized by running a sufficiently large number of Monte Carlo histories for each sample calculation, thus making its impact negligible compared to the impact from sampling the epistemic uncertainties. Obviously, this process may incur high computational costs. The present paper shows that in many applications reliable epistemic uncertainty results can also be obtained with substantially lower computational effort by performing and analyzing two appropriately generated series of samples with a much smaller number of Monte Carlo histories each. The method is applied, along with the nuclear data uncertainty and sensitivity code package XSUSA in combination with the Monte Carlo transport code KENO-Va, to various critical assemblies and a full-scale reactor calculation. It is shown that the proposed method yields output uncertainties and sensitivities equivalent to the traditional approach, with a reduction of computing time by factors of the order of 100. (authors)
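
    The core idea of the two-series trick can be sketched in a few lines: run two independent low-history Monte Carlo replicates per epistemic sample, so that the aleatoric noise cancels in the covariance between the series. The snippet below is a toy model with invented numbers, not the XSUSA implementation.

```python
# Hedged sketch of the two-series idea described above (not the XSUSA code):
# for each epistemic sample i, run two independent low-history Monte Carlo
# replicates A_i and B_i. Aleatoric noise is independent between the series,
# so cov(A, B) estimates the epistemic variance alone.
import numpy as np

rng = np.random.default_rng(2)
n = 200
k_true = rng.normal(1.000, 0.005, size=n)    # epistemic spread (unknown)
noise = 0.003                                # aleatoric sigma per MC run
A = k_true + rng.normal(0, noise, size=n)    # series 1, few histories each
B = k_true + rng.normal(0, noise, size=n)    # series 2, few histories each

var_total = np.var(A, ddof=1)                # epistemic + aleatoric
var_epistemic = np.cov(A, B, ddof=1)[0, 1]   # aleatoric cancels in cov
print(np.sqrt(var_total), np.sqrt(max(var_epistemic, 0.0)))  # ~0.006, ~0.005
```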

  5. Temperature field due to time-dependent heat sources in a large rectangular grid. Application for the KBS-3 repository

    International Nuclear Information System (INIS)

    Probert, T.; Claesson, Johan

    1997-04-01

    In the KBS-3 concept canisters containing nuclear waste are deposited along parallel tunnels over a large rectangular area deep below the ground surface. The temperature field in rock due to such a rectangular grid of heat-releasing canisters is studied. An analytical solution for this problem for any heat source has been presented in a preceding paper. The complete solution is summarized in this paper. By superposition, the solution is divided into two main parts: a global temperature field due to the large rectangular canister area, and a local field that accounts for the remaining heat source problem. In this sequel to the first report, the local solution is discussed in detail. The local solution consists of three parts corresponding to line heat sources along tunnels, point heat sources along a tunnel, and a line heat source along a canister. Each part depends on two special variables only. These parts are illustrated in dimensionless form. Inside the repository the local temperature field is periodic in the horizontal directions and has a short extent in the vertical direction. This allows us to look at the solution in a parallelepiped around a canister. The solution in the parallelepiped is valid for all canisters that are not too close to the repository edges. The total temperature field is calculated for the KBS-3 case, using a heat release that is valid for the first 10 000 years after deposition. The temperature field is shown in 23 figures in order to illustrate different aspects of the complex thermal process

  6. Propagation of nuclear data uncertainties for fusion power measurements

    Directory of Open Access Journals (Sweden)

    Sjöstrand Henrik

    2017-01-01

    Neutron measurements using neutron activation systems are an essential part of the diagnostic system at large fusion machines such as JET and ITER. Nuclear data is used to infer the neutron yield. Consequently, high-quality nuclear data is essential for the proper determination of the neutron yield and fusion power. However, uncertainties due to nuclear data are not fully taken into account in uncertainty analysis for neutron yield calibrations using activation foils. This paper investigates the neutron yield uncertainty due to nuclear data using the so-called Total Monte Carlo Method. The work is performed using a detailed MCNP model of the JET fusion machine; the uncertainties due to the cross-sections and angular distributions in JET structural materials, as well as the activation cross-sections in the activation foils, are analysed. It is found that a significant contribution to the neutron yield uncertainty can come from uncertainties in the nuclear data.
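
    In miniature, the Total Monte Carlo method amounts to resampling the nuclear data, recomputing the inferred quantity each time, and reading the nuclear-data uncertainty off the spread of results. The sketch below illustrates this for a single activation cross section with invented numbers; the real analysis samples complete nuclear data files through the MCNP model.

```python
# Total Monte Carlo in miniature: sample the nuclear data (here a single
# activation cross section) from its uncertainty distribution, recompute the
# inferred neutron yield each time, and read the yield uncertainty off the
# spread of results. All numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(6)
activity = 1.0e5                   # measured foil activity [Bq] (assumed)
sigma_nom, sigma_rel = 0.1, 0.05   # cross section [b], 5% relative stdev

yields = []
for _ in range(1000):
    sigma = rng.normal(sigma_nom, sigma_rel * sigma_nom)  # one random "file"
    yields.append(activity / sigma)   # yield ~ activity / (sigma * const)
yields = np.array(yields)
print(f"neutron-yield uncertainty due to nuclear data: "
      f"{yields.std() / yields.mean():.1%}")   # ~5%, as expected
```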

  7. SEMI-EMPIRICAL WHITE DWARF INITIAL-FINAL MASS RELATIONSHIPS: A THOROUGH ANALYSIS OF SYSTEMATIC UNCERTAINTIES DUE TO STELLAR EVOLUTION MODELS

    International Nuclear Information System (INIS)

    Salaris, Maurizio; Serenelli, Aldo; Weiss, Achim; Miller Bertolami, Marcelo

    2009-01-01

    Using the most recent results on white dwarfs (WDs) in ten open clusters, we revisit semiempirical estimates of the initial-final mass relation (IFMR) in star clusters, with emphasis on the use of stellar evolution models. We discuss the influence of these models on each step of the derivation. One aim of our work is to use consistent sets of calculations both for the isochrones and the WD cooling tracks; the second is to derive the range of systematic errors arising from stellar evolution theory. This is achieved by using different sources for the stellar models and by varying physical assumptions and input data. We find that systematic errors, including those in the determination of the cluster age, dominate the uncertainty in the initial mass values, while observational uncertainties primarily influence the final mass. After having determined the systematic errors, the initial-final mass relation finally allows us to draw conclusions about the physics of the stellar models, in particular about convective overshooting.

  8. Fuzzy Uncertainty Evaluation for Fault Tree Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Ki Beom; Shim, Hyung Jin [Seoul National University, Seoul (Korea, Republic of); Jae, Moo Sung [Hanyang University, Seoul (Korea, Republic of)

    2015-05-15

    The traditional probabilistic approach can produce relatively accurate results, but it requires a long computation time because of the repetitive calculations of the Monte Carlo (MC) method. In addition, when informative data for statistical analysis are insufficient, or when some events are mainly caused by human error, the probabilistic approach may not be applicable because the uncertainties of these events are difficult to express as probability distributions. In order to reduce the computation time and to quantify the uncertainties of top events when there are basic events whose uncertainties cannot easily be expressed probabilistically, fuzzy uncertainty propagation based on fuzzy set theory can be applied. In this paper, we develop a fuzzy uncertainty propagation code and apply it to the fault tree of the core damage accident following a large loss-of-coolant accident (LLOCA). The fuzzy uncertainty propagation code is implemented and tested on the fault tree of a radiation release accident. We apply this code to the fault tree of the core damage accident after the LLOCA in three cases and compare the results with those computed by probabilistic uncertainty propagation using the MC method. The results obtained by fuzzy uncertainty propagation can be calculated in a relatively short time and cover the results obtained by probabilistic uncertainty propagation.
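
    A minimal sketch of fuzzy (alpha-cut) propagation through a single fault-tree gate, assuming triangular fuzzy probabilities and independent basic events; this illustrates the general technique, not the authors' code.

```python
# Minimal alpha-cut sketch of fuzzy uncertainty propagation through a small
# fault tree (one AND gate over two basic events), assuming triangular fuzzy
# probabilities. Illustrative of the approach named above only.

def tri_alpha_cut(a, m, b, alpha):
    """Interval of a triangular fuzzy number (a, m, b) at level alpha."""
    return a + alpha * (m - a), b - alpha * (b - m)

def and_gate(p1, p2):       # independent events: the product is monotone in
    return p1 * p2          # both arguments, so endpoints map to endpoints

for alpha in (0.0, 0.5, 1.0):
    lo1, hi1 = tri_alpha_cut(1e-4, 3e-4, 9e-4, alpha)
    lo2, hi2 = tri_alpha_cut(1e-3, 2e-3, 5e-3, alpha)
    print(alpha, (and_gate(lo1, lo2), and_gate(hi1, hi2)))
```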

  9. Prestroke physical activity is associated with good functional outcome and arterial recanalization after stroke due to a large vessel occlusion.

    Science.gov (United States)

    Ricciardi, Ana Clara; López-Cancio, Elena; Pérez de la Ossa, Natalia; Sobrino, Tomás; Hernández-Pérez, María; Gomis, Meritxell; Munuera, Josep; Muñoz, Lucía; Dorado, Laura; Millán, Mónica; Dávalos, Antonio; Arenillas, Juan F

    2014-01-01

    Although multiple studies and meta-analyses have consistently suggested that regular physical activity (PhA) is associated with a decreased stroke risk and recurrence, there is limited data on the possible preconditioning effect of prestroke PhA on stroke severity and prognosis. We aimed to study the association of prestroke PhA with different outcome variables in patients with acute ischemic stroke due to an anterior large vessel occlusion. The Prestroke Physical Activity and Functional Recovery in Patients with Ischemic Stroke and Arterial Occlusion trial is an observational and longitudinal study that included consecutive patients with acute ischemic stroke admitted to a single tertiary stroke center. Main inclusion criteria were: anterior circulation ischemic stroke within 12 h from symptom onset; presence of a confirmed anterior large vessel occlusion, and functional independence previous to stroke. Prestroke PhA was evaluated with the International Physical Activity Questionnaire and categorized into mild, moderate and high levels by means of metabolic equivalent (MET) minutes per week thresholds. The primary outcome measure was good functional outcome at 3 months (modified Rankin scale ≤2). Secondary outcomes were severity of stroke at admission, complete early recanalization, early dramatic neurological improvement and final infarct volume. During the study period, 159 patients fulfilled the above criteria. The mean age was 68 years, 62% were men and the baseline NIHSS score was 17. Patients with high levels of prestroke PhA were younger, had more frequently distal occlusions and had lower levels of blood glucose and fibrinogen at admission. After multivariate analysis, a high level of prestroke PhA was associated with a good functional outcome at 3 months. Regarding secondary outcome variables and after adjustment for relevant factors, a high level of prestroke PhA was independently associated with milder stroke severity at admission, early dramatic

  10. Canary in the coal mine: Historical oxygen decline in the Gulf of St. Lawrence due to large scale climate changes

    Science.gov (United States)

    Claret, M.; Galbraith, E. D.; Palter, J. B.; Gilbert, D.; Bianchi, D.; Dunne, J. P.

    2016-02-01

    The regional signature of anthropogenic climate change on the atmosphere and upper ocean is often difficult to discern from observational time series, dominated as they are by decadal climate variability. Here we argue that a long-term decline of dissolved oxygen concentrations observed in the Gulf of St. Lawrence (GoSL) is consistent with anthropogenic climate change. Oxygen concentrations in the GoSL have declined markedly since 1930, due primarily to an increase of oxygen-poor North Atlantic Central Waters relative to Labrador Current Waters (Gilbert et al. 2005). We compare these observations to a climate warming simulation using a very high-resolution global coupled ocean-atmosphere climate model. The numerical model (CM2.6), developed by the Geophysical Fluid Dynamics Laboratory, is strongly eddying and includes a biogeochemical module with dissolved oxygen. The warming scenario shows that oxygen in the GoSL decreases, and that this decrease is associated with changes in western boundary currents and wind patterns in the North Atlantic. We speculate that the large-scale changes behind the simulated decrease in GoSL oxygen have also been at play in the real world over the past century, although they are difficult to resolve in noisy atmospheric data.
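
    The water-mass mechanism invoked here reduces to two-endmember mixing arithmetic; the sketch below uses illustrative (not observed) endmember oxygen values.

```python
# Two-endmember mixing arithmetic behind the mechanism described above: the
# oxygen of the source water entering the Gulf is a weighted mix of Labrador
# Current Water (LCW) and North Atlantic Central Water (NACW). Endmember
# values are illustrative, not the paper's.
o2_lcw, o2_nacw = 300.0, 150.0     # dissolved O2 [umol/kg] (assumed)

def mixed_o2(f_nacw):
    return f_nacw * o2_nacw + (1.0 - f_nacw) * o2_lcw

# increasing the NACW fraction from 30% to 60% lowers source-water oxygen:
print(mixed_o2(0.3), mixed_o2(0.6))   # 255 -> 210 umol/kg
```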

  11. Climate change impact on streamflow in large-scale river basins: projections and their uncertainties sourced from GCMs and RCP scenarios

    Science.gov (United States)

    Nasonova, Olga N.; Gusev, Yeugeniy M.; Kovalev, Evgeny E.; Ayzel, Georgy V.

    2018-06-01

    Climate change impact on river runoff was investigated within the framework of the second phase of the Inter-Sectoral Impact Model Intercomparison Project (ISI-MIP2) using the physically-based land surface model Soil Water - Atmosphere - Plants (SWAP) (developed in the Institute of Water Problems of the Russian Academy of Sciences) and meteorological projections (for 2006-2099) simulated by five General Circulation Models (GCMs) (GFDL-ESM2M, HadGEM2-ES, IPSL-CM5A-LR, MIROC-ESM-CHEM, and NorESM1-M) for each of four Representative Concentration Pathway (RCP) scenarios (RCP2.6, RCP4.5, RCP6.0, and RCP8.5). Eleven large-scale river basins were used in this study. First, SWAP was calibrated and validated against monthly values of measured river runoff using forcing data from the WATCH data set, and all GCM projections were bias-corrected to WATCH. Then, for each basin, 20 projections of possible changes in river runoff during the 21st century were simulated by SWAP. Analysis of the obtained hydrological projections allowed us to estimate the uncertainties resulting from the application of different GCMs and RCP scenarios. On average, the contribution of the different GCMs to the uncertainty of the projected river runoff is nearly twice as large as the contribution of the RCP scenarios. At the same time, the contribution of the GCMs slightly decreases with time.
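
    One simple way to attribute the spread of such a 5 GCM × 4 RCP projection matrix is to compare the spread of GCM means (averaged over RCPs) with the spread of RCP means (averaged over GCMs); the sketch below does this on synthetic numbers standing in for the SWAP-simulated runoff changes.

```python
# Sketch of one way to attribute projection spread to GCMs versus RCPs for
# a 5 x 4 matrix of runoff-change projections. Synthetic numbers only; the
# real analysis uses SWAP-simulated runoff for eleven basins.
import numpy as np

rng = np.random.default_rng(3)
# runoff change [%] for 5 GCMs (rows) x 4 RCPs (columns): GCM effect + RCP
# effect + interaction noise
dq = (rng.normal(0, 10, size=(5, 1)) + rng.normal(0, 5, size=(1, 4))
      + rng.normal(0, 2, size=(5, 4)))

gcm_spread = np.std(dq.mean(axis=1), ddof=1)   # spread across GCMs
rcp_spread = np.std(dq.mean(axis=0), ddof=1)   # spread across RCPs
print(f"GCM contribution: {gcm_spread:.1f}%, RCP contribution: "
      f"{rcp_spread:.1f}%, ratio {gcm_spread / rcp_spread:.1f}")
```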

  12. Uncertainty in geochemical modelling of CO2 and calcite dissolution in NaCl solutions due to different modelling codes and thermodynamic databases

    International Nuclear Information System (INIS)

    Haase, Christoph; Dethlefsen, Frank; Ebert, Markus; Dahmke, Andreas

    2013-01-01

    Highlights: • CO2 and calcite dissolution are calculated. • The codes PHREEQC, Geochemist's Workbench, EQ3/6, and FactSage are used. • Comparison with Duan and Li (2008) shows the lowest deviation when using phreeqc.dat and wateq4f.dat. • Using Pitzer databases does not improve the accuracy of the calculations. • The uncertainty in dissolved CO2 is largest among the geochemical models. - Abstract: A prognosis of the geochemical effects of CO2 storage induced by the injection of CO2 into geologic reservoirs or by CO2 leakage into the overlying formations can be performed by numerical modelling (non-invasive) and field experiments. Until now the research has been focused on the geochemical processes of the CO2 reacting with the minerals of the storage formation, which mostly consists of quartzitic sandstones. Regarding the safety assessment, the reactions between the CO2 and the overlying formations in the case of a CO2 leakage are of equal importance as the reactions in the storage formation. In particular, limestone formations can react very sensitively to CO2 intrusion. The thermodynamic parameters necessary to model these reactions have not been determined experimentally over the full range of temperature and pressure conditions and are thus extrapolated by the simulation codes. The differences in the calculated results lead to different calcite and CO2 solubilities and can influence the safety issues. This uncertainty study is performed by comparing the computed results of the geochemical modelling software codes The Geochemist's Workbench, EQ3/6, PHREEQC and FactSage/ChemApp and their thermodynamic databases. The input parameters (1) total concentration of the solution, (2) temperature and (3) fugacity are varied within typical values for CO2 reservoirs, overlying formations and close-to-surface aquifers. The most sensitive input parameter in the system H2O–CO2–NaCl–CaCO3 for the calculated range of dissolved calcite and CO2 is the

  13. A stochastic mathematical model to locate field hospitals under disruption uncertainty for large-scale disaster preparedness

    Directory of Open Access Journals (Sweden)

    Nezir Aydin

    2016-03-01

    In this study, we consider field hospital location decisions for emergency treatment points in response to large-scale disasters. Specifically, we develop a two-stage stochastic model that determines the number and locations of field hospitals and the allocation of injured victims to these field hospitals. Our model considers the locations as well as the possible failure of the existing public hospitals when deciding on the locations of the field hospitals to be opened. The model is a variant of the P-median location model that integrates capacity restrictions both on the field hospitals planned to be opened and on the disrupted existing public hospitals. We conducted experiments to demonstrate how the proposed model can be utilized in practice in a real-life case scenario. Results show how the failure of existing hospitals, the level of failure probability, and the capacity of the projected field hospitals affect the performance of a given emergency treatment system. Crucially, the model also assesses the average distance over which a victim needs to be transferred in order to be treated properly, and from this assessment the proportion of total demand satisfied is calculated.
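
    A deterministic-equivalent toy version of such a two-stage stochastic P-median model (capacities and hospital disruptions simplified away; all data invented) can be written in a few lines with the PuLP modelling library:

```python
# Compact deterministic-equivalent sketch of a two-stage stochastic P-median
# model in the spirit of the one described above: first-stage binaries open
# field hospitals, second-stage variables allocate victims per scenario.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

sites, points, scens = range(3), range(4), range(2)
prob_s = [0.7, 0.3]                                   # scenario probabilities
dist = [[2, 5, 9], [4, 3, 7], [8, 6, 2], [5, 9, 4]]   # point x site distances
demand = [[10, 20, 15, 5], [25, 30, 20, 10]]          # scenario x point
P = 2                                                 # hospitals to open

m = LpProblem("field_hospitals", LpMinimize)
y = [LpVariable(f"open_{j}", cat=LpBinary) for j in sites]
x = [[[LpVariable(f"x_{s}_{i}_{j}", lowBound=0) for j in sites]
      for i in points] for s in scens]

# expected demand-weighted transfer distance
m += lpSum(prob_s[s] * dist[i][j] * x[s][i][j]
           for s in scens for i in points for j in sites)
m += lpSum(y) == P
for s in scens:
    for i in points:
        m += lpSum(x[s][i][j] for j in sites) == demand[s][i]
        for j in sites:
            m += x[s][i][j] <= demand[s][i] * y[j]   # only open sites serve

m.solve()
print([int(v.value()) for v in y])   # which sites to open
```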

  14. Validation/Uncertainty Quantification for Large Eddy Simulations of the heat flux in the Tangentially Fired Oxy-Coal Alstom Boiler Simulation Facility

    Energy Technology Data Exchange (ETDEWEB)

    Smith, P.J.; Eddings, E.G.; Ring, T.; Thornock, J.; Draper, T.; Isaac, B.; Rezeai, D.; Toth, P.; Wu, Y.; Kelly, K.

    2014-08-01

    The objective of this task is to produce predictive capability with quantified uncertainty bounds for the heat flux in commercial-scale, tangentially fired, oxy-coal boilers. Validation data came from the Alstom Boiler Simulation Facility (BSF) for tangentially fired, oxy-coal operation. This task brings together experimental data collected under Alstom's DOE project for measuring oxy-firing performance parameters in the BSF with this University of Utah project for large eddy simulation (LES) and validation/uncertainty quantification (V/UQ). The Utah work includes V/UQ with measurements in the single-burner facility, where advanced strategies for O2 injection can be more easily controlled and data more easily obtained. Highlights of the work include: • Simulations of Alstom's 15 megawatt (MW) BSF, exploring the uncertainty in thermal boundary conditions. A V/UQ analysis showed consistency between experimental results and simulation results, identifying uncertainty bounds on the quantities of interest for this system (Subtask 9.1). • A simulation study of the University of Utah's oxy-fuel combustor (OFC) focused on heat flux (Subtask 9.2). A V/UQ analysis was used to show consistency between experimental and simulation results. • Measurement of heat flux and temperature with new optical diagnostic techniques and comparison with conventional measurements (Subtask 9.3). Various optical diagnostics systems were created to provide experimental data to the simulation team. The final configuration utilized a mid-wave infrared (MWIR) camera to measure heat flux and temperature, synchronized with a high-speed visible camera using two-color pyrometry to measure temperature and soot concentration. • Collection of heat flux and temperature measurements in the University of Utah's OFC for use in Subtasks 9.2 and 9.3 (Subtask 9.4). Several replicates were carried out to better assess the experimental error. Experiments were specifically designed for the

  15. Uncertainty analysis

    International Nuclear Information System (INIS)

    Thomas, R.E.

    1982-03-01

    An evaluation is made of the suitability of analytical and statistical sampling methods for making uncertainty analyses. The adjoint method is found to be well-suited for obtaining sensitivity coefficients for computer programs involving large numbers of equations and input parameters. For this purpose the Latin Hypercube Sampling method is found to be inferior to conventional experimental designs. The Latin hypercube method can be used to estimate output probability density functions, but requires supplementary rank transformations followed by stepwise regression to obtain uncertainty information on individual input parameters. A simple Cork and Bottle problem is used to illustrate the efficiency of the adjoint method relative to certain statistical sampling methods. For linear models of the form Ax=b it is shown that a complete adjoint sensitivity analysis can be made without formulating and solving the adjoint problem. This can be done either by using a special type of statistical sampling or by reformulating the primal problem and using suitable linear programming software
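
    For the linear case Ax = b mentioned above, the economy of the adjoint method is easy to show: with response R = c^T x, one extra solve with A^T yields the sensitivities of R to every input at once. A minimal worked example:

```python
# Minimal worked example of adjoint sensitivities for the linear model
# A x = b with response R = c^T x: one extra solve with A^T gives the
# sensitivities of R to every entry of b (and of A) at once.
import numpy as np

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
c = np.array([1.0, 0.0])          # response R = x[0]

x = np.linalg.solve(A, b)         # primal solve
y = np.linalg.solve(A.T, c)       # single adjoint solve

dR_db = y                         # dR/db_i  = y_i
dR_dA = -np.outer(y, x)           # dR/dA_ij = -y_i * x_j

# check dR/db[0] by finite differences
eps = 1e-6
bp = b.copy(); bp[0] += eps
R0, R1 = c @ x, c @ np.linalg.solve(A, bp)
print(dR_db[0], (R1 - R0) / eps)  # the two values should agree
```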

  16. Extensive neutronic sensitivity-uncertainty analysis of a fusion reactor shielding blanket

    International Nuclear Information System (INIS)

    Hogenbirk, A.

    1994-01-01

    In this paper the results are presented of an extensive neutronic sensitivity-uncertainty study performed for the design of a shielding blanket for a next-step fusion reactor such as ITER. A code system developed at ECN Petten was used. The uncertainty in an important response parameter, the neutron heating in the inboard superconducting coils, was evaluated. Neutron transport calculations in the 100-neutron-group GAM-II structure were performed using the code ANISN. For the sensitivity and uncertainty calculations the code SUSD was used. Uncertainties due to cross-section uncertainties were taken into account, as well as uncertainties due to uncertainties in the energy and angular distributions of scattered neutrons (SED and SAD uncertainties, respectively). The subject of direct-term uncertainties (i.e. uncertainties due to uncertainties in the kerma factors of the superconducting coils) is briefly touched upon. It is shown that SAD uncertainties, which have been largely neglected until now, contribute significantly to the total uncertainty. Moreover, the contribution of direct-term uncertainties may be large. The total uncertainty in the neutron heating, due to Fe cross-sections alone, amounts to approximately 25%, which is rather large. However, uncertainty data are scarce and the data may very well be conservative. It is shown in this paper that with the code system used, sensitivity and uncertainty calculations can be performed in a straightforward way. Therefore, it is suggested that emphasis now be put on the generation of realistic, reliable covariance data for cross-sections as well as for angular and energy distributions. (orig.)
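
    Numerically, such sensitivity-uncertainty propagation typically rests on the first-order "sandwich rule", relative response variance = S C S^T; the sketch below applies it with invented three-group numbers.

```python
# The usual first-order "sandwich rule" behind sensitivity-uncertainty
# analyses like the one above: relative response variance = S C S^T, with S
# the vector of relative sensitivities and C the relative covariance matrix
# of the cross sections. The numbers here are illustrative only.
import numpy as np

S = np.array([0.8, -0.3, 0.1])         # dR/R per dSigma/Sigma, by group
C = np.array([[0.04, 0.01, 0.00],      # 20% std dev in group 1, etc.
              [0.01, 0.02, 0.00],
              [0.00, 0.00, 0.09]])

rel_var = S @ C @ S
print(f"relative uncertainty of response: {np.sqrt(rel_var):.1%}")
```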

  17. Approaches in highly parameterized inversion—PEST++ Version 3, a Parameter ESTimation and uncertainty analysis software suite optimized for large environmental models

    Science.gov (United States)

    Welter, David E.; White, Jeremy T.; Hunt, Randall J.; Doherty, John E.

    2015-09-18

    The PEST++ Version 1 object-oriented parameter estimation code is here extended to Version 3 to incorporate additional algorithms and tools to further improve support for large and complex environmental modeling problems. PEST++ Version 3 includes the Gauss-Marquardt-Levenberg (GML) algorithm for nonlinear parameter estimation, Tikhonov regularization, integrated linear-based uncertainty quantification, options of integrated TCP/IP based parallel run management or external independent run management by use of a Version 2 update of the GENIE Version 1 software code, and utilities for global sensitivity analyses. The Version 3 code design is consistent with PEST++ Version 1 and continues to be designed to lower the barriers of entry for users as well as developers while providing efficient and optimized algorithms capable of accommodating large, highly parameterized inverse problems. As such, this effort continues the original focus of (1) implementing the most popular and powerful features of the PEST software suite in a fashion that is easy for novice or experienced modelers to use and (2) developing a software framework that is easy to extend.
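
    The GML upgrade at the heart of such codes is compact; the sketch below shows the damped normal-equations step on a toy exponential fit (weights and regularization omitted; this is a sketch of the algorithm, not PEST++ internals).

```python
# Core Gauss-Marquardt-Levenberg (GML) update for nonlinear parameter
# estimation, in minimal form: a damped normal-equations step on the
# Jacobian J of the model with residuals r.
import numpy as np

def gml_step(J, r, lam):
    """Parameter upgrade dp solving (J^T J + lam*I) dp = J^T r."""
    n = J.shape[1]
    return np.linalg.solve(J.T @ J + lam * np.eye(n), J.T @ r)

# tiny example: fit y = p0 * exp(-p1 * t) to data by iterating GML steps
t = np.linspace(0, 4, 20)
y = 2.0 * np.exp(-1.3 * t)
p = np.array([1.0, 0.5])
for _ in range(20):
    pred = p[0] * np.exp(-p[1] * t)
    r = y - pred
    J = np.column_stack([np.exp(-p[1] * t), -p[0] * t * np.exp(-p[1] * t)])
    p = p + gml_step(J, r, lam=1e-3)
print(p)   # converges near [2.0, 1.3]
```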

  18. Understanding uncertainty

    CERN Document Server

    Lindley, Dennis V

    2013-01-01

    Praise for the First Edition: "...a reference for everyone who is interested in knowing and handling uncertainty." - Journal of Applied Statistics. The critically acclaimed First Edition of Understanding Uncertainty provided a study of uncertainty addressed to scholars in all fields, showing that uncertainty could be measured by probability, and that probability obeyed three basic rules that enabled uncertainty to be handled sensibly in everyday life. These ideas were extended to embrace the scientific method and to show how decisions, containing an uncertain element, could be rationally made.

  19. Segmentation and fragmentation of melt jets due to generation of large-scale structures. Observation in low subcooling conditions

    International Nuclear Information System (INIS)

    Sugiyama, Ken-ichiro; Yamada, Tsuyoshi

    1999-01-01

    In order to clarify a mechanism of melt-jet breakup and fragmentation entirely different from the mechanism of stripping, a series of experiments was carried out using molten tin jets of 100 grams with initial temperatures from 250 °C to 900 °C. Molten tin jets, with a small kinematic viscosity and a large thermal diffusivity, were used to observe breakup and fragmentation of melt jets enhanced thermally and hydrodynamically. We observed jet columns with second-stage large-scale structures generated by the coalescence of large-scale structures recognized in the field of fluid mechanics. At greater depth, the segmentation of jet columns between second-stage large-scale structures and the fragmentation of the segmented jet columns were observed. It is reasonable to consider that the segmentation and the fragmentation of jet columns are caused by the boiling of water hydrodynamically entrained within second-stage large-scale structures. (author)

  20. Impact of Uncertainties in Exposure Assessment on Thyroid Cancer Risk among Persons in Belarus Exposed as Children or Adolescents Due to the Chernobyl Accident.

    Directory of Open Access Journals (Sweden)

    Mark P Little

    The excess incidence of thyroid cancer in Ukraine and Belarus observed a few years after the Chernobyl accident is considered to be largely the result of 131I released from the reactor. Although the Belarus thyroid cancer prevalence data have been previously analyzed, no account was taken of dose measurement error. We examined dose-response patterns in a thyroid screening prevalence cohort of 11,732 persons aged under 18 at the time of the accident, diagnosed during 1996-2004, who had direct thyroid 131I activity measurements and were resident in the most radioactively contaminated regions of Belarus. Three methods of dose-error correction (regression calibration, Monte Carlo maximum likelihood, and Bayesian Markov chain Monte Carlo) were applied. There was a statistically significant (p0.2. In summary, the relatively small contribution of unshared classical dose error in the current study results in comparatively modest effects on the regression parameters.
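
    Of the three correction methods named, regression calibration is the simplest to sketch: each error-prone dose is replaced by its conditional expectation given the measurement before the dose-response model is fitted. The toy below assumes Gaussian additive classical error, a simplification of the study's actual error model.

```python
# Hedged illustration of regression calibration, the first of the three
# dose-error methods named above: under a classical error model, each
# measured dose Z is replaced by E[D | Z] before fitting. Gaussian additive
# errors and a linear risk model are assumed here for simplicity.
import numpy as np

rng = np.random.default_rng(4)
n = 1000
mu_d, sd_d = 0.5, 0.4                 # population dose distribution [Gy]
D = rng.normal(mu_d, sd_d, size=n)    # true doses (unobserved)
sd_u = 0.3
Z = D + rng.normal(0, sd_u, size=n)   # measured doses with classical error

# E[D | Z] for jointly Gaussian (D, Z): shrink Z toward the population mean
shrink = sd_d**2 / (sd_d**2 + sd_u**2)
D_cal = mu_d + shrink * (Z - mu_d)

# Naive regression on Z attenuates the slope; calibration largely restores it
beta = 2.0
risk = 1.0 + beta * D + rng.normal(0, 0.5, size=n)
bz = np.polyfit(Z, risk, 1)[0]
bc = np.polyfit(D_cal, risk, 1)[0]
print(f"naive slope {bz:.2f}, calibrated slope {bc:.2f}, true {beta}")
```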

  1. Handling of uncertainty due to interference fringe in FT-NIR transmittance spectroscopy - Performance comparison of interference elimination techniques using glucose-water system

    Science.gov (United States)

    Beganović, Anel; Beć, Krzysztof B.; Henn, Raphael; Huck, Christian W.

    2018-05-01

    The applicability of two elimination techniques for interference fringes occurring in measurements with cells of short pathlength using Fourier transform near-infrared (FT-NIR) spectroscopy was evaluated. Due to the growing interest in vibrational spectroscopy of aqueous biological fluids (e.g. glucose in blood), aqueous solutions of D-(+)-glucose were prepared and split into a calibration set and an independent validation set. All samples were measured with two FT-NIR spectrometers at various spectral resolutions. Moving average smoothing (MAS) and a fast Fourier transform filter (FFT filter) were applied to the interference-affected FT-NIR spectra in order to eliminate the interference pattern. After data pre-treatment, partial least squares regression (PLSR) models using different NIR regions were constructed using untreated (interference-affected) spectra and spectra treated with MAS and the FFT filter. The prediction of the independent validation set revealed the performance of the utilized interference elimination techniques, as well as of the different NIR regions. The results showed that the combination band of water at approx. 5200 cm⁻¹ is of great importance, since its performance was superior to that of the so-called first overtone of water at approx. 6800 cm⁻¹. Furthermore, this work demonstrated that MAS and the FFT filter are fast and easy-to-use techniques for the elimination of interference fringes in FT-NIR transmittance spectroscopy.
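
    Both treatments are easy to reproduce in outline: a moving average wide enough to average out the fringe period, or zeroing of high-frequency FFT bins. The sketch below applies both to a synthetic spectrum with a sinusoidal fringe; the window width and cutoff are assumptions, not the paper's settings.

```python
# Sketch of the two fringe-removal treatments compared above: a moving
# average smoother (MAS) and a low-pass FFT filter, applied to a synthetic
# absorbance spectrum carrying an etalon-like sinusoidal fringe.
import numpy as np

wn = np.linspace(4500, 7500, 3000)                 # wavenumber axis [cm-1]
band = np.exp(-0.5 * ((wn - 5200) / 60.0) ** 2)    # water combination band
fringe = 0.05 * np.sin(2 * np.pi * wn / 25.0)      # 25 cm-1 fringe period
spec = band + fringe

def mas(y, w=51):                                  # moving average smoothing
    return np.convolve(y, np.ones(w) / w, mode="same")

def fft_lowpass(y, keep=60):                       # zero high-frequency bins
    Y = np.fft.rfft(y)
    Y[keep:] = 0.0
    return np.fft.irfft(Y, n=len(y))

for name, clean in (("MAS", mas(spec)), ("FFT", fft_lowpass(spec))):
    # residual deviation from the fringe-free band, away from the edges
    print(name, np.max(np.abs(clean - band)[100:-100]))
```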

  2. Calculation of design uncertainties for the development of fusion reactor blankets, taking into account uncertainties in nuclear data

    International Nuclear Information System (INIS)

    Hogenbirk, A.

    1994-07-01

    The use of the newly developed ECN-SUSD sensitivity/uncertainty code system is demonstrated. With ECN-SUSD it is possible to calculate uncertainties in response parameters in fixed-source calculations due to cross-section uncertainties (using MF33) as well as to uncertainties in angular distributions (using MF34). It is shown that the latter contribution, which is generally neglected because of the lack of MF34 data in modern evaluations (except for EFF), is large in fusion reactor shielding calculations. (orig.)

  3. A child with colo-colonic intussusception due to a large colonic polyp: Case report and literature review

    Directory of Open Access Journals (Sweden)

    Toshiaki Takahashi

    2014-01-01

    Colo-colonic intussusception (CI) due to a colonic polyp is a rarely reported cause of intestinal obstruction in school-aged children. Hydrostatic reduction (HR) and endoscopic polypectomy are minimally invasive and technically feasible for treating CI. We report a case of CI and review the literature, focusing on diagnosis and treatment.

  4. Measurement Uncertainty

    Science.gov (United States)

    Koch, Michael

    Measurement uncertainty is one of the key issues in quality assurance. It became increasingly important for analytical chemistry laboratories with the accreditation to ISO/IEC 17025. The uncertainty of a measurement is the most important criterion for the decision whether a measurement result is fit for purpose. It also delivers help for the decision whether a specification limit is exceeded or not. Estimation of measurement uncertainty often is not trivial. Several strategies have been developed for this purpose that will shortly be described in this chapter. In addition the different possibilities to take into account the uncertainty in compliance assessment are explained.

  5. Methodologies of Uncertainty Propagation Calculation

    International Nuclear Information System (INIS)

    Chojnacki, Eric

    2002-01-01

    After recalling the theoretical principle and the practical difficulties of the methodologies of uncertainty propagation calculation, the author discussed how to propagate input uncertainties. He distinguished two kinds of input uncertainty: variability (uncertainty due to heterogeneity) and lack of knowledge (uncertainty due to ignorance). It is therefore necessary to use two different propagation methods. He demonstrated this in a simple example, which he generalised, treating the variability uncertainty with probability theory and the lack-of-knowledge uncertainty with fuzzy theory. He cautioned, however, against the systematic use of probability theory, which may lead to unjustifiable and illegitimately precise answers. Mr Chojnacki's conclusions were that the importance of distinguishing variability from lack of knowledge increases as the problem becomes more complex in terms of the number of parameters or time steps, and that it is necessary to develop uncertainty propagation methodologies combining probability theory and fuzzy theory

  6. Physical Uncertainty Bounds (PUB)

    Energy Technology Data Exchange (ETDEWEB)

    Vaughan, Diane Elizabeth [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Preston, Dean L. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-03-19

    This paper introduces and motivates the need for a new methodology for determining upper bounds on the uncertainties in simulations of engineered systems due to limited fidelity in the composite continuum-level physics models needed to simulate the systems. We show that traditional uncertainty quantification methods provide, at best, a lower bound on this uncertainty. We propose to obtain bounds on the simulation uncertainties by first determining bounds on the physical quantities or processes relevant to system performance. By bounding these physics processes, as opposed to carrying out statistical analyses of the parameter sets of specific physics models or simply switching out the available physics models, one can obtain upper bounds on the uncertainties in simulated quantities of interest.

  7. Insufficient evidence of benefit regarding mortality due to albumin substitution in HCC-free cirrhotic patients undergoing large volume paracentesis.

    Science.gov (United States)

    Kütting, Fabian; Schubert, Jens; Franklin, Jeremy; Bowe, Andrea; Hoffmann, Vera; Demir, Muenevver; Pelc, Agnes; Nierhoff, Dirk; Töx, Ulrich; Steffen, Hans-Michael

    2017-02-01

    Current guidelines for clinical practice recommend the infusion of human albumin after large volume paracentesis. After inspecting the current evidence behind this recommendation, we decided to conduct a systematic review and meta-analysis in order to address the effect of albumin on mortality and morbidity in the context of large volume paracentesis. We performed a comprehensive search of large databases and abstract books of conference proceedings up to March 15th, 2016 for randomized controlled trials testing the infusion of human albumin against alternatives (vs. no treatment, vs. plasma expanders, vs. vasoconstrictors) in HCC-free patients suffering from cirrhosis. We analyzed these trials with regard to mortality, changes in plasma renin activity (PRA), hyponatremia, renal impairment, recurrence of ascites with consequent re-admission to hospital, and additional complications. We employed trial sequential analysis in order to calculate the number of patients required in controlled trials to be able to determine a statistically significant advantage of the administration of one agent over another with regard to mortality. We were able to include 21 trials totaling 1277 patients. While the administration of albumin prevents a rise in PRA as well as hyponatremia, no improvement in strong clinical endpoints such as mortality could be demonstrated. Trial sequential analysis showed that at least 1550 additional patients need to be recruited into RCTs and analyzed with regard to this question in order to detect or disprove a 25% mortality effect. There is insufficient evidence that the infusion of albumin after LVP significantly lowers mortality in HCC-free patients with advanced liver disease. © 2016 Journal of Gastroenterology and Hepatology Foundation and John Wiley & Sons Australia, Ltd.
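
    The required-information-size arithmetic underlying trial sequential analysis is essentially the standard two-proportion sample-size formula; the sketch below evaluates it for a 25% relative mortality reduction, with the baseline mortality assumed for illustration rather than taken from the meta-analysis.

```python
# Back-of-envelope required-information-size arithmetic of the kind used in
# trial sequential analysis: a standard two-proportion sample-size formula
# for detecting a 25% relative mortality reduction. The baseline mortality
# is assumed for illustration only.
from math import sqrt
from statistics import NormalDist

alpha, power = 0.05, 0.80
z_a = NormalDist().inv_cdf(1 - alpha / 2)     # ~1.96
z_b = NormalDist().inv_cdf(power)             # ~0.84

p1 = 0.20                   # assumed control-arm mortality
p2 = p1 * (1 - 0.25)        # 25% relative risk reduction
p = (p1 + p2) / 2

n_per_arm = ((z_a * sqrt(2 * p * (1 - p)) +
              z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) / (p1 - p2)) ** 2
print(round(n_per_arm))     # ~905 per arm (~1800 total) under these numbers
```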

  8. Uncertainty in hydrological signatures

    Science.gov (United States)

    McMillan, Hilary; Westerberg, Ida

    2015-04-01

    magnitude and bias, and to test how uncertainty depended on the density of the raingauge network and flow gauging station characteristics. The uncertainties were sometimes large (i.e. typical intervals of ±10-40% relative uncertainty) and highly variable between signatures. Uncertainty in the mean discharge was around ±10% for both catchments, while signatures describing the flow variability had much higher uncertainties in the Mahurangi where there was a fast rainfall-runoff response and greater high-flow rating uncertainty. Event and total runoff ratios had uncertainties from ±10% to ±15% depending on the number of rain gauges used; precipitation uncertainty was related to interpolation rather than point uncertainty. Uncertainty distributions in these signatures were skewed, and meant that differences in signature values between these catchments were often not significant. We hope that this study encourages others to use signatures in a way that is robust to data uncertainty.

  9. Search for Spectral Irregularities due to Photon-Axionlike-Particle Oscillations with the Fermi Large Area Telescope

    Science.gov (United States)

    Ajello, M.; Albert, A.; Anderson, B.; Baldini, L.; Barbiellini, G.; Bastieri, D.; Bellazzini, R.; Bissaldi, E.; Blandford, R.D.; Mirabal, N.; hide

    2016-01-01

    We report on the search for spectral irregularities induced by oscillations between photons and axion-like particles (ALPs) in the gamma-ray spectrum of NGC 1275, the central galaxy of the Perseus cluster. Using 6 years of Fermi Large Area Telescope data, we find no evidence for ALPs and exclude couplings above 5 × 10⁻¹² GeV⁻¹ for ALP masses 0.5 ≲ m_a ≲ 5 neV at 95 percent confidence. The limits are competitive with the sensitivity of planned laboratory experiments, and, together with other bounds, strongly constrain the possibility that ALPs can reduce the gamma-ray opacity of the Universe.

  10. Diabetes insipidus due to herpes encephalitis in a patient with diffuse large cell lymphoma. A case report.

    Science.gov (United States)

    Scheinpflug, K; Schalk, E; Reschke, K; Franke, A; Mohren, M

    2006-01-01

    The major causes of central diabetes insipidus are neoplastic or infiltrative lesions of the hypothalamus or pituitary, severe head injuries, and pituitary or hypothalamic surgery. Central diabetes insipidus caused by viral infections has rarely been reported, mostly in immunosuppressed patients, such as those with acquired immunodeficiency syndrome or Cushing's syndrome. We report the case of a 48-year-old woman suffering from diffuse large cell lymphoma who developed hypotonic polyuria, hypernatraemia and somnolence after the first course of chemotherapy with CHOEP and rituximab. Diabetes insipidus was diagnosed on the basis of low urine osmolarity and an undetectable vasopressin concentration. MRI revealed no pituitary abnormalities but showed encephalitis, and lumbar puncture confirmed herpes zoster infection. To the best of our knowledge this is the first description of central diabetes insipidus in a lymphoma patient caused by an opportunistic CNS infection.

  11. Predicting watershed sediment yields after wildland fire with the InVEST sediment retention model at large geographic extent in the western USA: accuracy and uncertainties

    Science.gov (United States)

    Sankey, J. B.; Kreitler, J.; McVay, J.; Hawbaker, T. J.; Vaillant, N.; Lowe, S. E.

    2014-12-01

    Wildland fire is a primary threat to watersheds: it can impact water supply through increased sedimentation and water quality decline, and it can change the timing and amount of runoff, leading to increased risk from flood and sediment natural hazards. It is of great societal importance in the western USA and throughout the world to improve understanding of how changing fire frequency, extent, and location, in conjunction with fuel treatments, will affect watersheds and the ecosystem services they supply to communities. In this work we assess the utility of the InVEST Sediment Retention Model to accurately characterize the vulnerability of burned watersheds to erosion and sedimentation. The InVEST tools are GIS-based implementations of common process models, engineered for high-end computing to allow the faster simulation of larger landscapes and incorporation into decision-making. The InVEST Sediment Retention Model is based on common soil erosion models (e.g., RUSLE, the Revised Universal Soil Loss Equation); it determines which areas of the landscape contribute the greatest sediment loads to a hydrological network and, conversely, evaluates the ecosystem service of sediment retention on a watershed basis. We evaluate the accuracy and uncertainties of InVEST predictions of increased sedimentation after fire, using measured post-fire sedimentation rates available for many watersheds in different rainfall regimes throughout the western USA from an existing, large USGS database of post-fire sediment yield [synthesized in Moody J, Martin D (2009) Synthesis of sediment yields after wildland fire in different rainfall regimes in the western United States. International Journal of Wildland Fire 18: 96-115]. The ultimate goal of this work is to calibrate and implement the model to accurately predict variability in post-fire sediment yield as a function of future landscape heterogeneity predicted by wildfire simulations and future landscape fuel treatment scenarios, within watersheds.
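
    Since the model is RUSLE-based, its per-pixel core is the product A = R · K · LS · C · P; the toy below evaluates it on a tiny grid and approximates fire by raising the cover-management factor C on burned pixels (a common simplification, not necessarily InVEST's exact post-fire scheme).

```python
# Bare RUSLE product A = R * K * LS * C * P evaluated per pixel, with a
# post-fire scenario approximated by raising the cover-management factor C
# on burned pixels. All values are illustrative.
import numpy as np

R = 80.0                                   # rainfall erosivity
K = np.full((3, 3), 0.3)                   # soil erodibility per pixel
LS = np.array([[0.5, 1.0, 2.0]] * 3)       # slope length/steepness factor
C_pre = np.full((3, 3), 0.01)              # vegetated cover factor
P = 1.0                                    # no support practices

burned = np.zeros((3, 3), dtype=bool)
burned[1:, :] = True                       # lower two rows burn
C_post = np.where(burned, 0.2, C_pre)      # fire removes cover (assumed 0.2)

A_pre = R * K * LS * C_pre * P             # soil loss per pixel
A_post = R * K * LS * C_post * P
print(f"sediment yield ratio post/pre: {A_post.sum() / A_pre.sum():.1f}x")
```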

  12. Quantifying reactor safety margins: Application of code scaling, applicability, and uncertainty evaluation methodology to a large-break, loss-of-coolant accident

    International Nuclear Information System (INIS)

    Boyack, B.; Duffey, R.; Wilson, G.; Griffith, P.; Lellouche, G.; Levy, S.; Rohatgi, U.; Wulff, W.; Zuber, N.

    1989-12-01

    The US Nuclear Regulatory Commission (NRC) has issued a revised rule for loss-of-coolant accident/emergency core cooling system (ECCS) analysis of light water reactors to allow the use of best-estimate computer codes in safety analysis as an option. A key feature of this option requires the licensee to quantify the uncertainty of the calculations and include that uncertainty when comparing the calculated results with acceptance limits provided in 10 CFR Part 50. To support the revised ECCS rule and illustrate its application, the NRC and its contractors and consultants have developed and demonstrated an uncertainty evaluation methodology called code scaling, applicability, and uncertainty (CSAU). The CSAU methodology and an example application described in this report demonstrate that uncertainties in complex phenomena can be quantified. The methodology is structured, traceable, and practical, as is needed in the regulatory arena. The methodology is systematic and comprehensive as it addresses and integrates the scenario, experiments, code, and plant to resolve questions concerned with: (a) code capability to scale up processes from test facility to full-scale nuclear power plants; (b) code applicability to safety studies of a postulated accident scenario in a specified nuclear power plant; and (c) quantifying uncertainties of calculated results. 127 refs., 55 figs., 40 tabs

  13. Caught you: threats to confidentiality due to the public release of large-scale genetic data sets.

    Science.gov (United States)

    Wjst, Matthias

    2010-12-29

    Large-scale genetic data sets are frequently shared with other research groups and even released on the Internet to allow for secondary analysis. Study participants are usually not informed about such data sharing because data sets are assumed to be anonymous after stripping off personal identifiers. The assumption of anonymity of genetic data sets, however, is tenuous because genetic data are intrinsically self-identifying. Two types of re-identification are possible: the "Netflix" type and the "profiling" type. The "Netflix" type needs another small genetic data set, usually with fewer than 100 SNPs, but including a personal identifier. This second data set might originate from another clinical examination, a study of leftover samples or forensic testing. When merged with the primary, unidentified set it will re-identify all samples of that individual. Even with no second data set at hand, a "profiling" strategy can be developed to extract as much information as possible from a sample collection. Starting with the identification of ethnic subgroups along with predictions of body characteristics and diseases, the asthma kids case is used as a real-life example to illustrate that approach. Depending on the degree of supplemental information, there is a good chance that at least a few individuals can be identified from an anonymized data set. Any re-identification, however, may potentially harm study participants because it will release individual genetic disease risks to the public.
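    The "Netflix"-type attack described here reduces to a join between a small identified SNP panel and the anonymized set. A minimal sketch, with invented sample names, SNP IDs and genotypes:

    ```python
    # Hypothetical data illustrating the "Netflix"-type re-identification: a small
    # identified SNP panel is matched against an anonymized study data set.
    anonymized = {
        "sample_017": {"rs123": "AA", "rs456": "AG", "rs789": "GG", "rs321": "CT"},
        "sample_042": {"rs123": "AG", "rs456": "GG", "rs789": "AG", "rs321": "CC"},
    }
    identified_panel = {"rs123": "AG", "rs456": "GG", "rs789": "AG", "rs321": "CC"}

    def match(panel, genotypes):
        """Fraction of overlapping SNPs with identical genotype calls."""
        shared = [snp for snp in panel if snp in genotypes]
        hits = sum(panel[snp] == genotypes[snp] for snp in shared)
        return hits / len(shared) if shared else 0.0

    for sample_id, genotypes in anonymized.items():
        # With fewer than 100 independent SNPs a perfect match is already near-unique.
        if match(identified_panel, genotypes) == 1.0:
            print(f"{sample_id} re-identified")
    ```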

  14. Caught you: threats to confidentiality due to the public release of large-scale genetic data sets

    Directory of Open Access Journals (Sweden)

    Wjst Matthias

    2010-12-01

    Background: Large-scale genetic data sets are frequently shared with other research groups and even released on the Internet to allow for secondary analysis. Study participants are usually not informed about such data sharing because data sets are assumed to be anonymous after stripping off personal identifiers. Discussion: The assumption of anonymity of genetic data sets, however, is tenuous because genetic data are intrinsically self-identifying. Two types of re-identification are possible: the "Netflix" type and the "profiling" type. The "Netflix" type needs another small genetic data set, usually with fewer than 100 SNPs, but including a personal identifier. This second data set might originate from another clinical examination, a study of leftover samples or forensic testing. When merged with the primary, unidentified set it will re-identify all samples of that individual. Even with no second data set at hand, a "profiling" strategy can be developed to extract as much information as possible from a sample collection. Starting with the identification of ethnic subgroups along with predictions of body characteristics and diseases, the asthma kids case is used as a real-life example to illustrate that approach. Summary: Depending on the degree of supplemental information, there is a good chance that at least a few individuals can be identified from an anonymized data set. Any re-identification, however, may potentially harm study participants because it will release individual genetic disease risks to the public.

  15. Assessing Groundwater Model Uncertainty for the Central Nevada Test Area

    International Nuclear Information System (INIS)

    Pohll, Greg; Pohlmann, Karl; Hassan, Ahmed; Chapman, Jenny; Mihevc, Todd

    2002-01-01

    The purpose of this study is to quantify the flow and transport model uncertainty for the Central Nevada Test Area (CNTA). Six parameters were identified as uncertain, including the specified head boundary conditions used in the flow model, the spatial distribution of the underlying welded tuff unit, effective porosity, sorption coefficients, matrix diffusion coefficient, and the geochemical release function which describes nuclear glass dissolution. The parameter uncertainty was described by assigning prior statistical distributions for each of these parameters. Standard Monte Carlo techniques were used to sample from the parameter distributions to determine the full prediction uncertainty. Additional analysis is performed to determine the most cost-beneficial characterization activities. The maximum radius of the tritium and strontium-90 contaminant boundary was used as the output metric for evaluation of prediction uncertainty. The results indicate that combining all of the uncertainty in the parameters listed above propagates to a prediction uncertainty in the maximum radius of the contaminant boundary of 234 to 308 m and 234 to 302 m, for tritium and strontium-90, respectively. Although the uncertainty in the input parameters is large, the prediction uncertainty in the contaminant boundary is relatively small. The relatively small prediction uncertainty is primarily due to the small transport velocities, such that large changes in the uncertain input parameters cause small changes in the contaminant boundary. This suggests that the model is suitable in terms of predictive capability for the contaminant boundary delineation
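    The Monte Carlo propagation step can be sketched as follows. The distributions and the surrogate transport relation below are invented stand-ins for the study's flow and transport model; only the sample-propagate-summarize pattern is the point.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    n = 10_000  # Monte Carlo realizations

    # Hypothetical prior distributions for two of the uncertain inputs
    # (the real study sampled six, including boundary heads and sorption).
    porosity = rng.uniform(0.05, 0.25, n)           # effective porosity (-)
    log_kd = rng.normal(0.0, 0.5, n)                # log10 sorption coefficient

    # Toy surrogate for the transport model: the boundary radius shrinks with
    # retardation (sorption) and grows with pore velocity (inverse porosity).
    velocity = 0.02 / porosity                      # schematic pore velocity, m/yr
    radius = 50.0 * velocity / (1.0 + 10**log_kd)   # schematic boundary radius, m

    lo, hi = np.percentile(radius, [2.5, 97.5])
    print(f"95% prediction interval for boundary radius: {lo:.0f}-{hi:.0f} m")
    ```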

  16. Uncertainty theory

    CERN Document Server

    Liu, Baoding

    2015-01-01

    When no samples are available to estimate a probability distribution, we have to invite some domain experts to evaluate the belief degree that each event will happen. Perhaps some people think that the belief degree should be modeled by subjective probability or fuzzy set theory. However, it is usually inappropriate because both of them may lead to counterintuitive results in this case. In order to rationally deal with belief degrees, uncertainty theory was founded in 2007 and subsequently studied by many researchers. Nowadays, uncertainty theory has become a branch of axiomatic mathematics for modeling belief degrees. This is an introductory textbook on uncertainty theory, uncertain programming, uncertain statistics, uncertain risk analysis, uncertain reliability analysis, uncertain set, uncertain logic, uncertain inference, uncertain process, uncertain calculus, and uncertain differential equation. This textbook also shows applications of uncertainty theory to scheduling, logistics, networks, data mining, c...

  17. Uncertainty in oil projects

    International Nuclear Information System (INIS)

    Limperopoulos, G.J.

    1995-01-01

    This report presents an oil project valuation under uncertainty by means of two well-known financial techniques: the Capital Asset Pricing Model (CAPM) and the Black-Scholes option pricing formula. CAPM gives a linear positive relationship between expected rate of return and risk, but does not take into consideration the aspect of flexibility, which is crucial for an irreversible investment such as an oil project. Introducing investment decision flexibility by using real options can increase the oil project value substantially. Some simple tests of the importance of stock market uncertainty for oil investments are performed. Uncertainty in stock returns is correlated with aggregate product market uncertainty according to Pindyck (1991). The results of the tests are not satisfactory due to the short data series, but introducing two other explanatory variables, the interest rate and Gross Domestic Product, improves the situation. 36 refs., 18 figs., 6 tabs

  18. Uncertainty in eddy covariance measurements and its application to physiological models

    Science.gov (United States)

    D.Y. Hollinger; A.D. Richardson; A.D. Richardson

    2005-01-01

    Flux data are noisy, and this uncertainty is largely due to random measurement error. Knowledge of uncertainty is essential for the statistical evaluation of modeled and measured fluxes, for comparison of parameters derived by fitting models to measured fluxes, and in formal data-assimilation efforts. We used the difference between simultaneous measurements from two...

  19. W nano-fuzzes: A metastable state formed due to large-flux He{sup +} irradiation at an elevated temperature

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Yunfeng; Liu, Lu; Lu, Bing; Ni, Weiyuan; Liu, Dongping, E-mail: dongping.liu@dlnu.edu.cn

    2016-12-15

    W nano-fuzzes have been formed due to large-flux and low-energy (200 eV) He{sup +} irradiation at a W surface temperature of 1480 °C. The microscopic evolution of W nano-fuzzes during annealing or low-energy (200 eV) He{sup +} bombardment has been observed using scanning electron microscopy and thermal desorption spectroscopy. Our measurements show that both annealing and He{sup +} bombardment can significantly alter the structure of W nano-fuzzes. W nano-fuzzes are thermally unstable due to He release during annealing, and they are easily sputtered during He{sup +} bombardment. The current study shows that W nano-fuzzes act as a metastable state during low-energy and large-flux He{sup +} irradiation at an elevated temperature. - Highlights: • The microscopic evolution of W nano-fuzzes during annealing or He{sup +} irradiation has been measured. • W nano-fuzzes are thermally unstable due to He release during annealing. • He is released from the top layer of W fuzzes by annealing. • Metastable W nano-fuzzes are formed due to He{sup +} irradiation at an elevated temperature.

  20. Assessment of SFR Wire Wrap Simulation Uncertainties

    Energy Technology Data Exchange (ETDEWEB)

    Delchini, Marc-Olivier G. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Reactor and Nuclear Systems Division; Popov, Emilian L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Reactor and Nuclear Systems Division; Pointer, William David [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Reactor and Nuclear Systems Division; Swiler, Laura P. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-09-30

    Predictive modeling and simulation of nuclear reactor performance and fuel are challenging due to the large number of coupled physical phenomena that must be addressed. Models that will be used for design or operational decisions must be analyzed for uncertainty to ascertain impacts on safety or performance. Rigorous, structured uncertainty analyses are performed by characterizing the model’s input uncertainties and then propagating the uncertainties through the model to estimate output uncertainty. This project is part of the ongoing effort to assess modeling uncertainty in Nek5000 simulations of flow configurations relevant to the advanced reactor applications of the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program. Three geometries are under investigation in these preliminary assessments: a 3-D pipe, a 3-D 7-pin bundle, and a single pin from the Thermal-Hydraulic Out-of-Reactor Safety (THORS) facility. Initial efforts have focused on gaining an understanding of Nek5000 modeling options and integrating Nek5000 with Dakota. These tasks are being accomplished by demonstrating the use of Dakota to assess parametric uncertainties in a simple pipe flow problem. This problem is used to optimize performance of the uncertainty quantification strategy and to estimate computational requirements for assessments of complex geometries. A sensitivity analysis of three turbulence models was conducted for turbulent flow in a single wire-wrapped pin (THORS) geometry. Section 2 briefly describes the software tools used in this study and provides appropriate references. Section 3 presents the coupling interface between Dakota and a computational fluid dynamics (CFD) code (Nek5000 or STAR-CCM+), with details on the workflow, the scripts used for setting up the run, and the scripts used for post-processing the output files. In Section 4, the meshing methods used to generate the THORS and 7-pin bundle meshes are explained. Sections 5, 6 and 7 present numerical results

  1. Teaching Uncertainties

    Science.gov (United States)

    Duerdoth, Ian

    2009-01-01

    The subject of uncertainties (sometimes called errors) is traditionally taught (to first-year science undergraduates) towards the end of a course on statistics that defines probability as the limit of many trials, and discusses probability distribution functions and the Gaussian distribution. We show how to introduce students to the concepts of…

  2. Calibration uncertainty

    DEFF Research Database (Denmark)

    Heydorn, Kaj; Anglov, Thomas

    2002-01-01

    Methods recommended by the International Standardization Organisation and Eurachem are not satisfactory for the correct estimation of calibration uncertainty. A novel approach is introduced and tested on actual calibration data for the determination of Pb by ICP-AES. The improved calibration...

  3. Identifying and Analyzing Uncertainty Structures in the TRMM Microwave Imager Precipitation Product over Tropical Ocean Basins

    Science.gov (United States)

    Liu, Jianbo; Kummerow, Christian D.; Elsaesser, Gregory S.

    2016-01-01

    Despite continuous improvements in microwave sensors and retrieval algorithms, our understanding of precipitation uncertainty is quite limited, due primarily to inconsistent findings in studies that compare satellite estimates to in situ observations over different parts of the world. This study seeks to characterize the temporal and spatial properties of uncertainty in the Tropical Rainfall Measuring Mission Microwave Imager surface rainfall product over tropical ocean basins. Two uncertainty analysis frameworks are introduced to qualitatively evaluate the properties of uncertainty under a hierarchy of spatiotemporal data resolutions. The first framework (i.e. 'climate method') demonstrates that, apart from random errors and regionally dependent biases, a large component of the overall precipitation uncertainty is manifested in cyclical patterns that are closely related to large-scale atmospheric modes of variability. By estimating the magnitudes of major uncertainty sources independently, the climate method is able to explain 45-88% of the monthly uncertainty variability. The percentage is largely resolution dependent (with the lowest percentage explained associated with a 1 deg x 1 deg spatial/1 month temporal resolution, and the highest associated with a 3 deg x 3 deg spatial/3 month temporal resolution). The second framework (i.e. 'weather method') explains regional mean precipitation uncertainty as a summation of uncertainties associated with individual precipitation systems. By further assuming that self-similar recurring precipitation systems yield qualitatively comparable precipitation uncertainties, the weather method can consistently resolve about 50% of the daily uncertainty variability, with only limited dependence on the regions of interest.

  4. Uncertainties in repository modeling

    Energy Technology Data Exchange (ETDEWEB)

    Wilson, J.R.

    1996-12-31

    The distant future is very difficult to predict. Unfortunately, our regulators are being encouraged to extend the regulatory period from the standard 10,000 years to 1 million years. Such overconfidence is not justified due to uncertainties in dating, calibration, and modeling.

  5. Uncertainties in repository modeling

    International Nuclear Information System (INIS)

    Wilson, J.R.

    1996-01-01

    The distant future is very difficult to predict. Unfortunately, our regulators are being encouraged to extend the regulatory period from the standard 10,000 years to 1 million years. Such overconfidence is not justified due to uncertainties in dating, calibration, and modeling

  6. Large-scale hydrological modeling for calculating water stress indices: implications of improved spatiotemporal resolution, surface-groundwater differentiation, and uncertainty characterization.

    Science.gov (United States)

    Scherer, Laura; Venkatesh, Aranya; Karuppiah, Ramkumar; Pfister, Stephan

    2015-04-21

    Physical water scarcities can be described by water stress indices. These are often determined at an annual scale and a watershed level; however, such scales mask seasonal fluctuations and spatial heterogeneity within a watershed. In order to account for this level of detail, first and foremost, water availability estimates must be improved and refined. State-of-the-art global hydrological models such as WaterGAP and UNH/GRDC have previously been unable to reliably reflect water availability at the subbasin scale. In this study, the Soil and Water Assessment Tool (SWAT) was tested as an alternative to global models, using the case study of the Mississippi watershed. While SWAT clearly outperformed the global models at the scale of a large watershed, it was judged to be unsuitable for global scale simulations due to the high calibration efforts required. The results obtained in this study show that global assessments miss out on key aspects related to upstream/downstream relations and monthly fluctuations, which are important both for the characterization of water scarcity in the Mississippi watershed and for water footprints. Especially in arid regions, where scarcity is high, these models provide unsatisfying results.

  7. Uncertainty calculations made easier

    International Nuclear Information System (INIS)

    Hogenbirk, A.

    1994-07-01

    The results are presented of a neutron cross section sensitivity/uncertainty analysis performed in a complicated 2D model of the NET shielding blanket design inside the ITER torus design, surrounded by the cryostat/biological shield as planned for ITER. The calculations were performed with a code system developed at ECN Petten, with which sensitivity/uncertainty calculations become relatively simple. In order to check the deterministic neutron transport calculations (performed with DORT), calculations were also performed with the Monte Carlo code MCNP. Care was taken to model the 2.0 cm wide gaps between two blanket segments, as the neutron flux behind the vacuum vessel is largely determined by neutrons streaming through these gaps. The resulting neutron flux spectra are in excellent agreement up to the end of the cryostat. It is noted that at this position the attenuation of the neutron flux is about 11 orders of magnitude. The uncertainty in the energy integrated flux at the beginning of the vacuum vessel and at the beginning of the cryostat was determined in the calculations. The uncertainty appears to be strongly dependent on the exact geometry: if the gaps are filled with stainless steel, the neutron spectrum changes strongly, which results in an uncertainty of 70% in the energy integrated flux at the beginning of the cryostat in the no-gap geometry, compared to an uncertainty of only 5% in the gap geometry. Therefore, it is essential to take into account the exact geometry in sensitivity/uncertainty calculations. Furthermore, this study shows that an improvement of the covariance data is urgently needed in order to obtain reliable estimates of the uncertainties in response parameters in neutron transport calculations. (orig./GL)

  8. Demand Uncertainty

    DEFF Research Database (Denmark)

    Nguyen, Daniel Xuyen

    This paper presents a model of trade that explains why firms wait to export and why many exporters fail. Firms face uncertain demands that are only realized after the firm enters the destination. The model retools the timing of uncertainty resolution found in productivity heterogeneity models. This retooling addresses several shortcomings. First, the imperfect correlation of demands reconciles the sales variation observed in and across destinations. Second, since demands for the firm's output are correlated across destinations, a firm can use previously realized demands to forecast unknown demands in untested destinations. The option to forecast demands causes firms to delay exporting in order to gather more information about foreign demand. Third, since uncertainty is resolved after entry, many firms enter a destination and then exit after learning that they cannot profit. This prediction reconciles...

  9. Citizen Candidates Under Uncertainty

    OpenAIRE

    Eguia, Jon X.

    2005-01-01

    In this paper we make two contributions to the growing literature on "citizen-candidate" models of representative democracy. First, we add uncertainty about the total vote count. We show that in a society with a large electorate, where the outcome of the election is uncertain and where winning candidates receive a large reward from holding office, there will be a two-candidate equilibrium and no equilibria with a single candidate. Second, we introduce a new concept of equilibrium, which we te...

  10. Humeral fractures due to low-energy trauma: an epidemiological survey in patients referred to a large emergency department in Northern Italy.

    Science.gov (United States)

    Pedrazzoni, M; Abbate, B; Verzicco, I; Pedrazzini, A; Benatti, M; Cervellin, G

    2015-01-01

    This survey describes the epidemiology of approximately 1800 low-energy humeral fractures seen in a large emergency department in Northern Italy over 7 years (2007-2013), highlighting the differences from previous Italian studies. The purpose of this study was to determine the incidence of humeral fractures due to low-energy trauma in patients 40 years of age or older referred to a large Emergency Department (Parma, Northern Italy) in a 7-year period (2007-2013). All humeral fractures referred to the emergency department of the Academic Hospital of Parma (the main hospital in the province, with a catchment area of approximately 345,000) were retrieved from the hospital database using both ICD-9CM codes and text strings. The diagnosis of humeral fracture due to low-energy trauma was confirmed by medical records and X-ray reports, after exclusion of injuries due to a clear-cut high-energy trauma or cancer. The query identified 1843 humeral fractures (1809 first fractures), with a clear predominance in women (78 %). Fractures of the proximal humerus represented the large majority of humeral fractures (more than 85 %), with an incidence progressively increasing with age (more than 60-fold in women and 20-fold in men). Simultaneous fractures (hip in particular) were frequent, especially after 85 years of age (1 out of 8 cases). When compared to other Italian studies, the incidence of humeral fractures was significantly lower than that derived from discharge data corrected for hospitalization rate (standardized rate ratio 0.74). This survey provides direct data on the incidence of low-energy humeral fractures in Italy, and our results partly differ from previous Italian studies based on indirect estimations.

  11. Implications of nuclear data uncertainties to reactor design

    International Nuclear Information System (INIS)

    Greebler, P.; Hutchins, B.A.; Cowan, C.L.

    1970-01-01

    Uncertainties in nuclear data require significant allowances to be made in the design and the operating conditions of reactor cores and of shielded-reactor-plant and fuel-processing systems. These allowances result in direct cost increases due to overdesign of components and equipment and reduced core and fuel operating performance. Compromising the allowances for data uncertainties has indirect cost implications due to increased risks of failure to meet plant and fuel performance objectives, with warranties involved in some cases, and to satisfy licensed safety requirements. Fast breeders are the power reactors most sensitive to uncertainties in nuclear data over the neutron energy range of interest for fission reactors, and this paper focuses on the implications of the data uncertainties for design and operation of fast breeder reactors and fuel-processing systems. The current status of uncertainty in predicted physics parameters due to data uncertainties is reviewed and compared with the situation in 1966 and that projected for within the next two years due to anticipated data improvements. Implications of the uncertainties in the predicted physics parameters for design and operation are discussed for both a near-term prototype or demonstration breeder plant (∼300 MW(e)) and a longer-term large (∼1000 MW(e)) plant. Significant improvements in the nuclear data have been made during the past three years, the most important of these to fast power reactors being the 239Pu alpha below 15 keV. The most important remaining specific data uncertainties are illustrated by their individual contributions to the computational uncertainty of selected physics parameters, and recommended priorities and accuracy requirements for improved data are presented

  12. Dynamics of soil carbon stocks due to large-scale land use changes across the former Soviet Union during the 20th century

    Science.gov (United States)

    Kurganova, Irina; Prishchepov, Alexander V.; Schierhorn, Florian; Lopes de Gerenyu, Valentin; Müller, Daniel; Kuzyakov, Yakov

    2016-04-01

    Land use change is a major driver of land-atmosphere carbon (C) fluxes. The largest net C fluxes caused by land use change are attributed to the conversion of native unmanaged ecosystems to croplands and vice versa. Here, we present the changes of soil organic carbon (SOC) stocks in response to large-scale land use changes in the former Soviet Union from 1953 to 2012. Widespread and rapid conversion of native ecosystems to croplands occurred in the course of the Virgin Lands Campaign (VLC) between 1954 and 1963 in the Soviet Union, when more than 45 million hectares (Mha) were ploughed in south-eastern Russia and northern Kazakhstan in order to expand domestic food production. After 1991, the collapse of the Soviet Union triggered the abandonment of around 75 Mha across the post-Soviet states. To assess SOC dynamics, we generated a static cropland mask for 2009 based on three global cropland maps. We used the cropland mask to spatially disaggregate annual sown area statistics at province level based on the suitability of each plot for crop production, which yielded land use maps for each year from 1954 to 2012 for all post-Soviet states. To estimate the SOC dynamics due to the VLC and post-Soviet cropland abandonment, we used available experimental data, our own field measurements, and soil maps. A bookkeeping approach was applied to assess the total changes in SOC stocks in response to large-scale land use changes in the former Soviet Union. The massive cropland expansion during the VLC resulted in a substantial loss of SOC: 611±47 Mt C and 241±11 Mt C for the upper 0-50 cm soil layer during the first 20 years of cultivation for Russia and Kazakhstan, respectively. These magnitudes are similar to the C losses due to the plowing up of the prairies in the USA in the mid-1930s. The total SOC sequestration due to post-Soviet cropland abandonment was estimated at 72.2±6.0 Mt C per year from 1991 to 2010. This amount of carbon equals about 40% of the current fossil fuel emission for this
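    A bookkeeping calculation of this kind multiplies each converted area by a per-hectare SOC change coefficient for that transition. The sketch below uses illustrative area splits and coefficients back-calculated to be roughly consistent with the totals quoted above; they are not the study's actual inputs.

    ```python
    # Bookkeeping-style sketch: SOC stock change = converted area x per-hectare
    # SOC change coefficient (areas and coefficients here are illustrative only).
    transitions = [
        # (label; area, Mha; delta SOC, t C/ha over the accounting period)
        ("VLC plowing, Russia",     31.0, -19.7),
        ("VLC plowing, Kazakhstan", 14.0, -17.2),
        ("post-1991 abandonment",   75.0, +19.3),
    ]
    for name, area_mha, dsoc in transitions:
        change_mt = area_mha * 1e6 * dsoc / 1e6   # Mha * t/ha -> Mt C
        print(f"{name}: {change_mt:+.0f} Mt C")
    ```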

  13. Sensitivity and uncertainty analysis

    CERN Document Server

    Cacuci, Dan G; Navon, Ionel Michael

    2005-01-01

    As computer-assisted modeling and analysis of physical processes have continued to grow and diversify, sensitivity and uncertainty analyses have become indispensable scientific tools. Sensitivity and Uncertainty Analysis. Volume I: Theory focused on the mathematical underpinnings of two important methods for such analyses: the Adjoint Sensitivity Analysis Procedure and the Global Adjoint Sensitivity Analysis Procedure. This volume concentrates on the practical aspects of performing these analyses for large-scale systems. The applications addressed include two-phase flow problems, a radiative c

  14. Uncertainty, joint uncertainty, and the quantum uncertainty principle

    International Nuclear Information System (INIS)

    Narasimhachar, Varun; Poostindouz, Alireza; Gour, Gilad

    2016-01-01

    Historically, the element of uncertainty in quantum mechanics has been expressed through mathematical identities called uncertainty relations, a great many of which continue to be discovered. These relations use diverse measures to quantify uncertainty (and joint uncertainty). In this paper we use operational information-theoretic principles to identify the common essence of all such measures, thereby defining measure-independent notions of uncertainty and joint uncertainty. We find that most existing entropic uncertainty relations use measures of joint uncertainty that yield themselves to a small class of operational interpretations. Our notion relaxes this restriction, revealing previously unexplored joint uncertainty measures. To illustrate the utility of our formalism, we derive an uncertainty relation based on one such new measure. We also use our formalism to gain insight into the conditions under which measure-independent uncertainty relations can be found. (paper)

  15. Uncertainty quantification in resonance absorption

    International Nuclear Information System (INIS)

    Williams, M.M.R.

    2012-01-01

    We assess the uncertainty in the resonance escape probability due to uncertainty in the neutron and radiation line widths for the first 21 resonances in 232Th. Simulation, quadrature and polynomial chaos methods are used, and the resonance data are assumed to obey a beta distribution. We find the uncertainty in the total resonance escape probability to be the equivalent, in reactivity, of 75–130 pcm. Also shown are pdfs of the resonance escape probability for each resonance and the variation of the uncertainty with temperature. The viability of the polynomial chaos expansion method is clearly demonstrated.
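    The simulation variant of this calculation can be sketched by sampling beta-distributed line widths and propagating them to the escape probability. All widths, shape parameters and the toy resonance-integral relation below are invented for illustration; the paper uses the actual 232Th resonance parameters.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, n_res = 20_000, 21

    # Sample neutron and radiation widths of 21 resonances from beta distributions
    # (ranges and shape parameters are invented placeholders).
    gamma_n = 0.002 * rng.beta(4.0, 4.0, (n, n_res))   # neutron widths, eV
    gamma_g = 0.025 * rng.beta(4.0, 4.0, (n, n_res))   # radiation widths, eV

    # Toy effective resonance integral: each resonance contributes in proportion
    # to gamma_n * gamma_g / (gamma_n + gamma_g); the prefactor is schematic.
    I_eff = 800.0 * (gamma_n * gamma_g / (gamma_n + gamma_g)).sum(axis=1)

    xi_sigma = 60.0                       # schematic moderating power, barns
    p_escape = np.exp(-I_eff / xi_sigma)  # escape probability per realization
    print(f"escape probability: mean={p_escape.mean():.4f}, std={p_escape.std():.4f}")
    ```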

  16. Dealing with exploration uncertainties

    International Nuclear Information System (INIS)

    Capen, E.

    1992-01-01

    Exploration for oil and gas should fulfill the most adventurous in their quest for excitement and surprise. This paper tries to cover that tall order. The authors will touch on the magnitude of the uncertainty (which is far greater than in most other businesses), the effects of not knowing target sizes very well, how to build uncertainty into analyses naturally, how to tie reserves and chance estimates to economics, and how to look at the portfolio effect of an exploration program. With no apologies, the authors will be using a different language for some readers - the language of uncertainty, which means probability and statistics. These tools allow one to combine largely subjective exploration information with the more analytical data from the engineering and economic side

  17. Photometric Uncertainties

    Science.gov (United States)

    Zou, Xiao-Duan; Li, Jian-Yang; Clark, Beth Ellen; Golish, Dathon

    2018-01-01

    The OSIRIS-REx spacecraft, launched in September, 2016, will study the asteroid Bennu and return a sample from its surface to Earth in 2023. Bennu is a near-Earth carbonaceous asteroid which will provide insight into the formation and evolution of the solar system. OSIRIS-REx will first approach Bennu in August 2018 and will study the asteroid for approximately two years before sampling. OSIRIS-REx will develop its photometric model (including Lommel-Seelinger, ROLO, McEwen, Minnaert and Akimov) of Bennu with OCAM and OVIRS during the Detailed Survey mission phase. The model developed during this phase will be used to photometrically correct the OCAM and OVIRS data.Here we present the analysis of the error for the photometric corrections. Based on our testing data sets, we find:1. The model uncertainties is only correct when we use the covariance matrix to calculate, because the parameters are highly correlated.2. No evidence of domination of any parameter in each model.3. And both model error and the data error contribute to the final correction error comparably.4. We tested the uncertainty module on fake and real data sets, and find that model performance depends on the data coverage and data quality. These tests gave us a better understanding of how different model behave in different case.5. L-S model is more reliable than others. Maybe because the simulated data are based on L-S model. However, the test on real data (SPDIF) does show slight advantage of L-S, too. ROLO is not reliable to use when calculating bond albedo. The uncertainty of McEwen model is big in most cases. Akimov performs unphysical on SOPIE 1 data.6. Better use L-S as our default choice, this conclusion is based mainly on our test on SOPIE data and IPDIF.

  18. Reliability analysis under epistemic uncertainty

    International Nuclear Information System (INIS)

    Nannapaneni, Saideep; Mahadevan, Sankaran

    2016-01-01

    This paper proposes a probabilistic framework to include both aleatory and epistemic uncertainty within model-based reliability estimation of engineering systems for individual limit states. Epistemic uncertainty is considered due to both data and model sources. Sparse point and/or interval data regarding the input random variables leads to uncertainty regarding their distribution types, distribution parameters, and correlations; this statistical uncertainty is included in the reliability analysis through a combination of likelihood-based representation, Bayesian hypothesis testing, and Bayesian model averaging techniques. Model errors, which include numerical solution errors and model form errors, are quantified through Gaussian process models and included in the reliability analysis. The probability integral transform is used to develop an auxiliary variable approach that facilitates a single-level representation of both aleatory and epistemic uncertainty. This strategy results in an efficient single-loop implementation of Monte Carlo simulation (MCS) and FORM/SORM techniques for reliability estimation under both aleatory and epistemic uncertainty. Two engineering examples are used to demonstrate the proposed methodology. - Highlights: • Epistemic uncertainty due to data and model included in reliability analysis. • A novel FORM-based approach proposed to include aleatory and epistemic uncertainty. • A single-loop Monte Carlo approach proposed to include both types of uncertainties. • Two engineering examples used for illustration.
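    The auxiliary-variable idea can be illustrated with a toy single-loop Monte Carlo: epistemic parameters (here, the load's distribution parameters) are sampled in the same loop as the aleatory variables, so one pass covers both uncertainty types. All numbers below are invented and do not come from the paper, which additionally covers FORM/SORM and Gaussian-process model error terms.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000

    # Epistemic layer: uncertain distribution parameters of the load.
    mu_load = rng.normal(10.0, 0.5, n)       # uncertain mean of the load
    sigma_load = rng.uniform(1.0, 1.5, n)    # uncertain std of the load

    # Aleatory layer: one load/capacity realization per epistemic sample.
    load = rng.normal(mu_load, sigma_load)
    capacity = rng.normal(15.0, 1.0, n)

    # Failure probability for the limit state g = capacity - load < 0.
    pf = np.mean(load > capacity)
    print(f"estimated failure probability: {pf:.4f}")
    ```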

  19. Discussion of OECD LWR Uncertainty Analysis in Modelling Benchmark

    International Nuclear Information System (INIS)

    Ivanov, K.; Avramova, M.; Royer, E.; Gillford, J.

    2013-01-01

    The demand for best estimate calculations in nuclear reactor design and safety evaluations has increased in recent years. Uncertainty quantification has been highlighted as part of the best estimate calculations. The modelling aspects of uncertainty and sensitivity analysis are to be further developed and validated on scientific grounds in support of their performance and application to multi-physics reactor simulations. The Organization for Economic Co-operation and Development (OECD) / Nuclear Energy Agency (NEA) Nuclear Science Committee (NSC) has endorsed the creation of an Expert Group on Uncertainty Analysis in Modelling (EGUAM). Within the framework of activities of EGUAM/NSC the OECD/NEA initiated the Benchmark for Uncertainty Analysis in Modelling for Design, Operation, and Safety Analysis of Light Water Reactors (OECD LWR UAM benchmark). The general objective of the benchmark is to propagate the predictive uncertainties of code results through complex coupled multi-physics and multi-scale simulations. The benchmark is divided into three phases, with Phase I highlighting the uncertainty propagation in stand-alone neutronics calculations, while Phase II and Phase III are focused on uncertainty analysis of the reactor core and system, respectively. This paper discusses the progress made in Phase I calculations, the specifications for Phase II and the upcoming challenges in defining Phase III exercises. The challenges of applying uncertainty quantification to complex code systems, in particular the time-dependent coupled physics models, are the large computational burden and the utilization of non-linear models (expected due to the physics coupling). (authors)

  20. Effect of activation cross section uncertainties in transmutation analysis of realistic low-activation steels for IFMIF

    Energy Technology Data Exchange (ETDEWEB)

    Cabellos, O.; García-Herranz, N.; Sanz, J. [Institute of Nuclear Fusion, UPM, Madrid (Spain); Cabellos, O.; García-Herranz, N.; Fernandez, P.; Fernandez, B. [Dept. of Nuclear Engineering, UPM, Madrid (Spain); Sanz, J. [Dept. of Power Engineering, UNED, Madrid (Spain); Reyes, S. [Safety, Environment and Health Group, ITER Joint Work Site, Cadarache Center (France)

    2008-07-01

    We address uncertainty analysis to draw conclusions on the reliability of the activation calculation in the International Fusion Materials Irradiation Facility (IFMIF) under the potential impact of activation cross section uncertainties. The Monte Carlo methodology implemented in the ACAB code gives the uncertainty estimates due to the synergetic/global effect of the complete set of cross section uncertainties. An element-by-element analysis has been demonstrated to be a helpful tool to easily analyse the transmutation performance of irradiated materials. The uncertainty analysis results showed that for times over about 24 h the relative error in the contact dose rate can be as large as 23%. We have calculated the effect of cross section uncertainties on the IFMIF activation of all the different elements. For EUROFER, the uncertainties for the elements H and He are 7.3% and 5.6%, respectively. We have found significant uncertainties in the transmutation response for C, P and Nb.

  1. Extremely large nonsaturating magnetoresistance and ultrahigh mobility due to topological surface states in the metallic Bi2Te3 topological insulator

    Science.gov (United States)

    Shrestha, K.; Chou, M.; Graf, D.; Yang, H. D.; Lorenz, B.; Chu, C. W.

    2017-05-01

    Weak antilocalization (WAL) effects in Bi2Te3 single crystals have been investigated at high and low bulk charge-carrier concentrations. At low charge-carrier density the WAL curves scale with the normal component of the magnetic field, demonstrating the dominance of topological surface states in magnetoconductivity. At high charge-carrier density the WAL curves scale with neither the applied field nor its normal component, implying a mixture of bulk and surface conduction. WAL due to topological surface states shows no dependence on the nature (electrons or holes) of the bulk charge carriers. The observations of an extremely large nonsaturating magnetoresistance and ultrahigh mobility in the samples with lower carrier density further support the presence of surface states. The physical parameters characterizing the WAL effects are calculated using the Hikami-Larkin-Nagaoka formula. At high charge-carrier concentrations, there is a greater number of conduction channels and a decrease in the phase coherence length compared to low charge-carrier concentrations. The extremely large magnetoresistance and high mobility of topological insulators have great technological value and can be exploited in magnetoelectric sensors and memory devices.
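    For reference, the simplified Hikami-Larkin-Nagaoka expression commonly fitted to WAL magnetoconductivity data has the form below. Sign and prefactor conventions vary between papers, and the exact form used in this study is not reproduced here; for a single coherent topological surface channel the fitted α is expected near -1/2.

    ```latex
    % Simplified HLN magnetoconductivity correction: \alpha counts (and signs)
    % the conduction channels and \ell_\phi is the phase coherence length.
    \Delta\sigma(B) \;=\; \alpha\,\frac{e^{2}}{2\pi^{2}\hbar}
      \left[\psi\!\left(\tfrac{1}{2}+\frac{B_{\phi}}{B}\right)
            -\ln\!\left(\frac{B_{\phi}}{B}\right)\right],
    \qquad
    B_{\phi} \;=\; \frac{\hbar}{4e\,\ell_{\phi}^{2}}
    ```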

  2. Confronting the Uncertainty in Aerosol Forcing Using Comprehensive Observational Data

    Science.gov (United States)

    Johnson, J. S.; Regayre, L. A.; Yoshioka, M.; Pringle, K.; Sexton, D.; Lee, L.; Carslaw, K. S.

    2017-12-01

    The effect of aerosols on cloud droplet concentrations and radiative properties is the largest uncertainty in the overall radiative forcing of climate over the industrial period. In this study, we take advantage of a large perturbed parameter ensemble of simulations from the UK Met Office HadGEM-UKCA model (the aerosol component of the UK Earth System Model) to comprehensively sample uncertainty in aerosol forcing. Uncertain aerosol and atmospheric parameters cause substantial aerosol forcing uncertainty in climatically important regions. As the aerosol radiative forcing itself is unobservable, we investigate the potential for observations of aerosol and radiative properties to act as constraints on the large forcing uncertainty. We test how eight different theoretically perfect aerosol and radiation observations can constrain the forcing uncertainty over Europe. We find that the achievable constraint is weak unless many diverse observations are used simultaneously. This is due to the complex relationships between model output responses and the multiple interacting parameter uncertainties: compensating model errors mean there are many ways to produce the same model output (known as model equifinality), which impacts the achievable constraint. However, using all eight observable quantities together we show that the aerosol forcing uncertainty can potentially be reduced by around 50%. This reduction occurs as we narrow a large sample of model variants (over 1 million) that covers the full parametric uncertainty down to the roughly 1% that are observationally plausible. Constraining the forcing uncertainty using real observations is a more complex undertaking, in which we must account for multiple further uncertainties, including measurement uncertainties, structural model uncertainties and the model discrepancy from reality. Here, we make a first attempt to determine the true potential constraint on the forcing uncertainty from our model that is achievable using a comprehensive

  3. Association between high homocyst(e)ine and ischemic stroke due to large- and small-artery disease but not other etiologic subtypes of ischemic stroke.

    Science.gov (United States)

    Eikelboom, J W; Hankey, G J; Anand, S S; Lofthouse, E; Staples, N; Baker, R I

    2000-05-01

    Elevated plasma homocyst(e)ine may be a causal and modifiable risk factor for ischemic stroke, but the results of previous studies have been conflicting. One possible explanation is that homocyst(e)ine may only be associated with certain pathophysiological subtypes of ischemic stroke. We conducted a case-control study of 219 hospital cases with a first-ever ischemic stroke and 205 randomly selected community control subjects stratified by age, sex, and postal code. With the use of established criteria, cases of stroke were classified by etiologic subtype in a blinded fashion. The prevalence of conventional vascular risk factors, fasting plasma homocyst(e)ine levels, vitamin levels, and nucleotide 677 methylene tetrahydrofolate reductase (MTHFR) genotypes were determined in cases and controls. Increasing homocyst(e)ine was a strong and independent risk factor for ischemic stroke (adjusted OR 2.7, 95% CI 1.4 to 5.1 for a 5-micromol/L increase in fasting plasma homocyst(e)ine from 10 to 15 micromol/L). Compared with the lowest quartile, the highest quartile of homocyst(e)ine was associated with an adjusted OR of ischemic stroke of 2.2 (95% CI 1.1 to 4.2). Mean plasma homocyst(e)ine was significantly higher in cases of ischemic stroke due to large-artery disease (14.1 micromol/L, 95% CI 12.5 to 15.9). Compared with the lowest quartile of homocyst(e)ine, the upper 3 quartiles were associated with an adjusted OR of ischemic stroke due to large-artery disease of 3.0 (95% CI 0.8 to 10.8) for the second quartile, 5.6 (95% CI 1.6 to 20) for the third quartile, and 8.7 (95% CI 2.4 to 32) for the fourth quartile (P for trend=0.0005). However, despite a clear association between the TT MTHFR genotype and elevated fasting plasma homocyst(e)ine, there was no association between MTHFR genotype and ischemic stroke or subtype of ischemic stroke. There is a strong, graded association between increasing plasma homocyst(e)ine and ischemic stroke caused by large-artery atherosclerosis and, to a much lesser extent, small-artery disease.

  4. Effect of uncertainties on probabilistic-based design capacity of hydrosystems

    Science.gov (United States)

    Tung, Yeou-Koung

    2018-02-01

    Hydrosystems engineering designs involve analysis of hydrometric data (e.g., rainfall, floods) and use of hydrologic/hydraulic models, all of which contribute various degrees of uncertainty to the design process. Uncertainties in hydrosystem designs can be generally categorized into aleatory and epistemic types. The former arises from the natural randomness of hydrologic processes whereas the latter are due to knowledge deficiency in model formulation and model parameter specification. This study shows that the presence of epistemic uncertainties induces uncertainty in determining the design capacity. Hence, the designer needs to quantify the uncertainty features of design capacity to determine the capacity with a stipulated performance reliability under the design condition. Using detention basin design as an example, the study illustrates a methodological framework by considering aleatory uncertainty from rainfall and epistemic uncertainties from the runoff coefficient, curve number, and sampling error in design rainfall magnitude. The effects of including different items of uncertainty and performance reliability on the design detention capacity are examined. A numerical example shows that the mean value of the design capacity of the detention basin increases with the design return period and this relation is found to be practically the same regardless of the uncertainty types considered. The standard deviation associated with the design capacity, when subject to epistemic uncertainty, increases with both design frequency and items of epistemic uncertainty involved. It is found that the epistemic uncertainty due to sampling error in rainfall quantiles should not be ignored. Even with a sample size of 80 (relatively large for a hydrologic application) the inclusion of sampling error in rainfall quantiles resulted in a standard deviation about 2.5 times higher than that considering only the uncertainty of the runoff coefficient and curve number. Furthermore, the
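    The framework can be illustrated with a toy calculation. The sketch below uses the rational formula rather than the study's detention-basin model, and every number is invented; it shows how epistemic terms (an uncertain runoff coefficient and sampling error in the design rainfall quantile) widen the distribution from which a capacity with a stipulated reliability is read off.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    n = 50_000

    # Rational formula Q = 0.278 * C * i * A (mm/h and km^2 -> m^3/s).
    A = 0.5                                  # drainage area, km^2 (fixed)
    C = rng.triangular(0.3, 0.45, 0.6, n)    # runoff coefficient (epistemic)
    i_hat, se = 80.0, 8.0                    # estimated design intensity and its standard error, mm/h
    i = rng.normal(i_hat, se, n)             # sampling error in the rainfall quantile

    Q = 0.278 * C * i * A                    # peak discharge, m^3/s

    # Design capacity with a stipulated performance reliability, e.g. 90%:
    q90 = np.percentile(Q, 90)
    print(f"mean design discharge: {Q.mean():.1f} m^3/s; 90%-reliable capacity: {q90:.1f} m^3/s")
    ```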

  5. Uncertainties in historical pollution data from sedimentary records from an Australian urban floodplain lake

    Science.gov (United States)

    Lintern, A.; Leahy, P.; Deletic, A.; Heijnis, H.; Zawadzki, A.; Gadd, P.; McCarthy, D.

    2018-05-01

    Sediment cores from aquatic environments can provide valuable information about historical pollution levels and sources. However, there is little understanding of the uncertainties associated with these findings. The aim of this study is to fill this knowledge gap by proposing a framework for quantifying the uncertainties in historical heavy metal pollution records reconstructed from sediment cores. This uncertainty framework consists of six sources of uncertainty: uncertainties in (1) metals analysis methods, (2) spatial variability of sediment core heavy metal profiles, (3) sub-sampling intervals, (4) the sediment chronology, (5) the assumption that metal levels in bed sediments reflect the magnitude of metal inputs into the aquatic system, and (6) post-depositional transformation of metals. We apply this uncertainty framework to an urban floodplain lake in South-East Australia (Willsmere Billabong). We find that for this site, uncertainties in historical dated heavy metal profiles can be up to 176%, largely due to uncertainties in the sediment chronology, and in the assumption that the settled heavy metal mass is equivalent to the heavy metal mass entering the aquatic system. As such, we recommend that future studies reconstructing historical pollution records using sediment cores from aquatic systems undertake an investigation of the uncertainties in the reconstructed pollution record, using the uncertainty framework provided in this study. We envisage that quantifying and understanding the uncertainties associated with the reconstructed pollution records will facilitate the practical application of sediment core heavy metal profiles in environmental management projects.
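    If the six sources were treated as independent relative uncertainties, they would combine in quadrature, as in the sketch below. This is a simplification of the paper's framework, and the percentages are invented placeholders, chosen only so that the chronology and deposition-assumption terms dominate, as reported.

    ```python
    import numpy as np

    # Invented relative uncertainties (%) for the six sources in the framework.
    sources = {
        "metals analysis": 10.0,
        "spatial variability": 25.0,
        "sub-sampling interval": 15.0,
        "sediment chronology": 120.0,
        "deposition assumption": 110.0,
        "post-depositional change": 20.0,
    }
    # Independent relative errors combine in quadrature (root-sum-of-squares).
    total = np.sqrt(sum(v**2 for v in sources.values()))
    print(f"combined relative uncertainty: {total:.0f}%")
    ```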

  6. Uncertainty of climate change impacts and consequences on the prediction of future hydrological trends

    International Nuclear Information System (INIS)

    Minville, M.; Brissette, F.; Leconte, R.

    2008-01-01

    In the future, water is very likely to be the resource most severely affected by climate change. It has been shown that small perturbations in precipitation frequency and/or quantity can result in significant impacts on the mean annual discharge. Moreover, modest changes in natural inflows result in larger changes in reservoir storage. There is, however, great uncertainty linked to changes in both the magnitude and direction of future hydrological trends. This presentation discusses the various sources of this uncertainty and their potential impact on the prediction of future hydrological trends. A companion paper will look at adaptation potential, taking into account some of the sources of uncertainty discussed in this presentation. Uncertainty is separated into two main components: climatic uncertainty and 'model and methods' uncertainty. Climatic uncertainty is linked to uncertainty in future greenhouse gas emission scenarios (GHGES) and to general circulation models (GCMs), whose representation of topography and climate processes is imperfect, in large part due to computational limitations. The uncertainty linked to natural variability (which may or may not increase) is also part of the climatic uncertainty. 'Model and methods' uncertainty groups together the uncertainty linked to the different approaches and models needed to transform climate data so that they can be used by hydrological models (such as downscaling methods) and the uncertainty of the models themselves and of their use in a changed climate. The impacts of the various sources of uncertainty on the hydrology of a watershed are demonstrated on the Peribonka River basin (Quebec, Canada). The results indicate that all sources of uncertainty can be important and outline the importance of taking these sources into account for any impact and adaptation studies. Recommendations are outlined for such studies. (author)

  7. Uncertainty and sensitivity analysis in nuclear accident consequence assessment

    International Nuclear Information System (INIS)

    Karlberg, Olof.

    1989-01-01

    This report contains the results of a four year project under research contracts with the Nordic Cooperation in Nuclear Safety and the National Institute for Radiation Protection. An uncertainty/sensitivity analysis methodology consisting of Latin Hypercube sampling and regression analysis was applied to an accident consequence model. A number of input parameters were selected and the uncertainties related to these parameters were estimated by a Nordic group of experts. Individual doses, collective dose, health effects and their related uncertainties were then calculated for three release scenarios and for a representative sample of meteorological situations. For two of the scenarios the acute phase after an accident was simulated, and for one the long-term consequences. The most significant parameters were identified. The outer limits of the calculated uncertainty distributions are large and will grow to several orders of magnitude for the low-probability consequences. The uncertainty in the expectation values is typically a factor of 2-5 (1 sigma). The variation in the model responses due to the variation of the weather parameters is roughly equal to the variation induced by parameter uncertainty. The most important parameters turned out to be different for each pathway of exposure, which could be expected. However, the overall most important parameters are the wet deposition coefficient and the shielding factors. A general discussion of the usefulness of uncertainty analysis in consequence analysis is also given. (au)
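    The LHS-plus-regression pattern itself is compact. The sketch below uses a toy three-parameter consequence model with invented names and ranges (not the report's model); the standardized regression coefficients play the role of the sensitivity measures used to rank parameters.

    ```python
    import numpy as np
    from scipy.stats import qmc

    # Latin Hypercube sample of three uncertain inputs in [0, 1)^3.
    sampler = qmc.LatinHypercube(d=3, seed=0)
    u = sampler.random(n=500)

    # Scale to illustrative parameter ranges: wet deposition coefficient,
    # shielding factor, dose conversion factor (all invented).
    x = qmc.scale(u, l_bounds=[1e-4, 0.1, 0.5], u_bounds=[1e-3, 1.0, 2.0])

    # Toy consequence model: dose rises with deposition and dose factor,
    # and falls with shielding.
    dose = x[:, 0] * 1e4 * x[:, 2] * (1.0 - 0.5 * x[:, 1])

    # Regression on standardized inputs yields standardized regression
    # coefficients (SRCs) that rank parameter importance.
    xs = (x - x.mean(0)) / x.std(0)
    ys = (dose - dose.mean()) / dose.std()
    src, *_ = np.linalg.lstsq(xs, ys, rcond=None)
    for name, c in zip(["wet deposition", "shielding", "dose factor"], src):
        print(f"SRC for {name}: {c:+.2f}")
    ```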

  8. Uncertainty Evaluation of Best Estimate Calculation Results

    International Nuclear Information System (INIS)

    Glaeser, H.

    2006-01-01

    Efforts are underway in Germany to perform analyses using best estimate computer codes and to include uncertainty evaluation in licensing. The German Reactor Safety Commission (RSK) recently issued a recommendation to perform uncertainty analysis in loss-of-coolant accident (LOCA) safety analyses. A more general requirement is included in a draft revision of the German Nuclear Regulation, which is an activity of the German Ministry of Environment and Reactor Safety (BMU). According to the recommendation of the German RSK, the following deterministic requirements still have to be applied when performing safety analyses for LOCA in licensing: most unfavourable single failure; unavailability due to preventive maintenance; break location; break size and break type; double ended break, 100 percent through 200 percent; large, medium and small break; loss of off-site power; core power (at accident initiation the most unfavourable conditions and values have to be assumed which may occur under normal operation, taking into account the set-points of integral power and power density control; measurement and calibration errors can be considered statistically); time of fuel cycle. Analysis using best estimate codes with evaluation of uncertainties is the only way to quantify conservatisms with regard to code models and uncertainties of plant, fuel parameters and decay heat. This is especially the case when approaching licensing limits, e.g. due to power up-rates, higher burn-up and higher enrichment. Broader use of best estimate analysis is therefore envisaged in the future. Since some deterministic unfavourable assumptions regarding availability of NPP systems are still used, some conservatism in best-estimate analyses remains. Methods of uncertainty analysis have been developed and applied by the vendor Framatome ANP as well as by GRS in Germany. The GRS development was sponsored by the German Ministry of Economy and Labour (BMWA). (author)

  9. Properties and modeling of GWAS when complex disease risk is due to non-complementing, deleterious mutations in genes of large effect.

    Directory of Open Access Journals (Sweden)

    Kevin R Thornton

    Current genome-wide association studies (GWAS) have high power to detect intermediate frequency SNPs making modest contributions to complex disease, but they are underpowered to detect rare alleles of large effect (RALE). This has led to speculation that the bulk of variation for most complex diseases is due to RALE. One concern with existing models of RALE is that they do not make explicit assumptions about the evolution of a phenotype and its molecular basis. Rather, much of the existing literature relies on arbitrary mapping of phenotypes onto genotypes obtained either from standard population-genetic simulation tools or from non-genetic models. We introduce a novel simulation of a 100-kilobase gene region, based on the standard definition of a gene, in which mutations are unconditionally deleterious, are continuously arising, have partially recessive and non-complementing effects on phenotype (analogous to what is widely observed for most Mendelian disorders), and are interspersed with neutral markers that can be genotyped. Genes evolving according to this model exhibit a characteristic GWAS signature consisting of an excess of marginally significant markers. Existing tests for an excess burden of rare alleles in cases have low power, while a simple new statistic has high power to identify disease genes evolving under our model. The structure of linkage disequilibrium between causative mutations and significantly associated markers under our model differs fundamentally from that seen when rare causative markers are assumed to be neutral. Rather than tagging single haplotypes bearing a large number of rare causative alleles, we find that significant SNPs in a GWAS tend to tag single causative mutations of small effect relative to other mutations in the same gene. Our results emphasize the importance of evaluating the power to detect associations under models that are genetically and evolutionarily motivated.

  10. Uncertainty and its propagation in dynamics models

    International Nuclear Information System (INIS)

    Devooght, J.

    1994-01-01

    The purpose of this paper is to bring together some characteristics of uncertainty in dynamic models and hence of the propagation of uncertainty. The respective roles of uncertainty and inaccuracy are examined. A mathematical formalism based on the Chapman-Kolmogorov equation makes it possible to define a "subdynamics" in which the evolution equation takes the uncertainty into account. The problem of choosing or combining models is examined through a loss function associated with a decision
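
    In this formalism the underlying object is the Chapman-Kolmogorov identity for transition densities of a Markov process; a standard statement, in our notation rather than the paper's:

    ```latex
    % Chapman-Kolmogorov equation: composing transition densities
    % over an intermediate time t_2 (t_1 < t_2 < t_3).
    \[
      p(x_3, t_3 \mid x_1, t_1)
        = \int p(x_3, t_3 \mid x_2, t_2)\,
               p(x_2, t_2 \mid x_1, t_1)\,\mathrm{d}x_2
    \]
    ```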

  11. Uncertainty analysis of light water reactor unit fuel pin cells

    Energy Technology Data Exchange (ETDEWEB)

    Kamerow, S.; Ivanov, K., E-mail: sln107@PSU.EDU, E-mail: kni1@PSU.EDU [Department of Mechanical and Nuclear Engineering, The Pennsylvania State University, PA (United States); Moreno, C. Arenas, E-mail: cristina.arenas@UPC.EDU [Department of Physics and Nuclear Engineering, Technical University of Catalonia, Barcelona (Spain)

    2011-07-01

    The study explored the calculation of uncertainty based on available covariance data and computational tools. Uncertainties due to temperature changes and different fuel compositions are the main focus of this analysis. Selected unit fuel pin cells were analyzed according to the OECD LWR UAM benchmark specifications. Criticality and uncertainty analyses were performed using the TSUNAMI-1D sequence in SCALE 6.0. It was found that uncertainties increase with increasing temperature while k{sub eff} decreases. This increase in the uncertainty is due to the increase in sensitivity of the largest contributor to uncertainty, namely the nuclide reaction {sup 238}U (n, gamma). The sensitivity grew larger as the capture cross-section of {sup 238}U broadened due to Doppler broadening. In addition, three different fuel-cell compositions (UOx, MOx, and UOxGd{sub 2}O{sub 3}) were analyzed. These showed a remarkable increase in the uncertainty in k{sub eff} for the MOx and UOxGd{sub 2}O{sub 3} fuel cells. The increase in the uncertainty of k{sub eff} in UOxGd{sub 2}O{sub 3} fuel was nearly twice that in MOx fuel and almost four times that in UOx fuel. The components of the uncertainty in k{sub eff} in each case were examined, and it was found that the neutron-nuclide reaction of {sup 238}U, mainly (n,n'), contributed the most to the uncertainties in the MOx and UOxGd{sub 2}O{sub 3} cases. At higher energies, the covariance matrix of the {sup 238}U (n,n') cross-section with itself shows very large values. Further examination of the UOxGd{sub 2}O{sub 3} case found that {sup 238}U (n,n') became the dominant contributor to the uncertainty because most of the thermal neutrons in the cell were absorbed by gadolinium, shifting the neutron spectrum to higher energies. For the MOx case, on the other hand, {sup 239}Pu has a very strong absorption cross-section at low energies
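
    Sensitivity-based uncertainty propagation of the TSUNAMI type rests on the first-order "sandwich rule": the relative variance of k-eff is S^T C S, where S collects the relative sensitivities of k-eff to the cross sections and C is their relative covariance matrix. A minimal numerical sketch (sensitivities and covariances below are invented for illustration):

    ```python
    import numpy as np

    # Hypothetical relative sensitivities of k-eff to two reactions,
    # e.g. (dk/k)/(dSigma/Sigma) for U-238 (n,gamma) and U-235 (n,f).
    S = np.array([-0.15, 0.30])

    # Hypothetical relative covariance matrix for the same reactions
    # (diagonal entries: squared relative standard deviations).
    C = np.array([[0.02**2, 0.0],
                  [0.0,     0.01**2]])

    rel_var = S @ C @ S               # first-order "sandwich rule"
    print(f"{np.sqrt(rel_var):.4%}")  # relative k-eff uncertainty, ~0.42%
    ```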

  12. Climate Projections and Uncertainty Communication.

    Science.gov (United States)

    Joslyn, Susan L; LeClerc, Jared E

    2016-01-01

    Lingering skepticism about climate change might be due in part to the way climate projections are perceived by members of the public. Variability between scientists' estimates might give the impression that scientists disagree about the fact of climate change rather than about details concerning the extent or timing. Providing uncertainty estimates might clarify that the variability is due in part to quantifiable uncertainty inherent in the prediction process, thereby increasing people's trust in climate projections. This hypothesis was tested in two experiments. Results suggest that including uncertainty estimates along with climate projections leads to an increase in participants' trust in the information. Analyses explored the roles of time, place, demographic differences (e.g., age, gender, education level, political party affiliation), and initial belief in climate change. Implications are discussed in terms of the potential benefit of adding uncertainty estimates to public climate projections. Copyright © 2015 Cognitive Science Society, Inc.

  13. Uncertainty analysis in safety assessment

    International Nuclear Information System (INIS)

    Lemos, Francisco Luiz de; Sullivan, Terry

    1997-01-01

    Nuclear waste disposal is a very complex subject which requires the study of many different fields of science, such as hydrogeology, meteorology, geochemistry, etc. In addition, waste disposal facilities are designed to last for a very long period of time. Both of these conditions fill safety assessment projections with uncertainty. This paper addresses the treatment of uncertainty in safety assessment modeling due to the variability of data and reviews some current approaches used to deal with this problem. (author)

  14. Nuclear Physical Uncertainties in Modeling X-Ray Bursts

    Science.gov (United States)

    Regis, Eric; Amthor, A. Matthew

    2017-09-01

    Type I x-ray bursts occur when a neutron star accretes material from the surface of another star in a compact binary star system. For certain accretion rates and material compositions, much of the nuclear material is burned in short, explosive bursts. Using a one-dimensional stellar model, Kepler, and a comprehensive nuclear reaction rate library, ReacLib, we have simulated chains of type I x-ray bursts. Unfortunately, large uncertainties remain in the nuclear reaction rates involved, since many of the reacting isotopes are unstable and have not yet been studied experimentally. Some individual reactions, when varied within their estimated uncertainty, alter the light curves dramatically. This limits our ability to understand the structure of the neutron star. Previous studies have looked at the effects of individual reaction rate uncertainties. We have applied a Monte Carlo method, simultaneously varying a set of reaction rates, in order to probe the expected uncertainty in x-ray burst behaviour due to the total uncertainty in all nuclear reaction rates. Furthermore, we aim to discover any nonlinear effects due to the coupling between different reaction rates. Early results show clear non-linear effects. This research was made possible by NSF-DUE Grant 1317446, BUScholars Program.
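
    The Monte Carlo rate-variation scheme described here can be sketched generically: draw a log-normal multiplier for every rate from its uncertainty factor, rerun the burst model, and collect the spread of a light-curve diagnostic. The reaction names, uncertainty factors and the stand-in model below are invented so the sketch is self-contained; a real study would call a burst code such as Kepler:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Hypothetical 1-sigma uncertainty factors for two reaction rates.
    uncertainty_factors = {"15O(a,g)19Ne": 2.0, "22Mg(p,g)23Al": 1.5}

    def toy_burst_model(multipliers):
        """Stand-in for a full burst simulation: returns a made-up
        scalar light-curve diagnostic so the loop runs end to end."""
        return 10.0 + sum(np.log(m) for m in multipliers.values())

    diagnostics = []
    for _ in range(1000):
        multipliers = {name: rng.lognormal(mean=0.0, sigma=np.log(f))
                       for name, f in uncertainty_factors.items()}
        diagnostics.append(toy_burst_model(multipliers))

    # The spread of the diagnostic reflects the combined rate uncertainty.
    print(np.percentile(diagnostics, [16, 50, 84]))
    ```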

  15. ESFR core optimization and uncertainty studies

    International Nuclear Information System (INIS)

    Rineiski, A.; Vezzoni, B.; Zhang, D.; Marchetti, M.; Gabrielli, F.; Maschek, W.; Chen, X.-N.; Buiron, L.; Krepel, J.; Sun, K.; Mikityuk, K.; Polidoro, F.; Rochman, D.; Koning, A.J.; DaCruz, D.F.; Tsige-Tamirat, H.; Sunderland, R.

    2015-01-01

    In the European Sodium Fast Reactor (ESFR) project supported by EURATOM in 2008-2012, a concept for a large 3600 MWth sodium-cooled fast reactor design was investigated. In particular, reference core designs with oxide and carbide fuel were optimized to improve their safety parameters. Uncertainties in these parameters were evaluated for the oxide option. Core modifications were performed first to reduce the sodium void reactivity effect. Introduction of a large sodium plenum with an absorber layer above the core and a lower axial fertile blanket improves the total sodium void effect appreciably, bringing it close to zero for a core with fresh fuel, in line with results obtained worldwide, while not substantially influencing other core physics parameters. An optimized configuration, CONF2, with a sodium plenum and a lower blanket was therefore established first and used as a basis for further studies in view of the deterioration of safety parameters during reactor operation. Further options studied were an inner fertile blanket, introduction of moderator pins, a smaller core height, and special designs for pins and subassemblies, such as 'empty' pins; these special designs were proposed to facilitate melted-fuel relocation in order to avoid core re-criticality under severe accident conditions. In the paper, further CONF2 modifications are compared in terms of safety and fuel balance. They may bring further improvements in safety, but their accurate assessment requires additional studies, including transient analyses. Uncertainty studies were performed by employing a so-called Total Monte-Carlo method, in which a large number of nuclear data files is produced for single isotopes and then used in Monte-Carlo calculations. The uncertainties in the criticality, sodium void and Doppler effects, and the effective delayed neutron fraction due to uncertainties in basic nuclear data were assessed for an ESFR core. They prove the applicability of the available nuclear data for ESFR

  16. Evaluation of uncertainty in dam-break analysis resulting from dynamic representation of a reservoir; Evaluation de l'incertitude due au modele de representation du reservoir dans les analyses de rupture de barrage

    Energy Technology Data Exchange (ETDEWEB)

    Tchamen, G.W.; Gaucher, J. [Hydro-Quebec Production, Montreal, PQ (Canada). Direction Barrage et Environnement, Unite Barrages et Hydraulique

    2010-08-15

    Owners and operators of high-capacity dams in Quebec have a legal obligation to conduct a dam-break analysis for each of their dams in order to ensure public safety. This paper described the traditional hydraulic methodologies and models used to perform dam-break analyses. In particular, it examined the influence of the reservoir drawdown submodel on the numerical results of a dam-break analysis. Numerical techniques from the fields of fluid mechanics and aerodynamics have provided the basis for developing effective hydrodynamic codes that reduce the level of uncertainty associated with dam-break analysis. A static representation that considers only the storage curve was compared with a dynamic representation based on the Saint-Venant equations and the real bathymetry of the reservoir. The comparison was based on the reservoir breach outflow, the maximum water level, the flooded area, and the wave arrival time in the valley downstream. The study showed that the greatest difference in attained water level was in the vicinity of the dam, and that the difference decreased with distance from the reservoir. The analysis showed that the static representation overestimated the maximum depth and inundated area by as much as 20 percent. This overestimation can be reduced by 30 to 40 percent by using the dynamic representation. Given the lack of bathymetric data for the reservoir, a dynamic model based on a synthetic trapezoidal reconstruction of the storage curve was used. It was concluded that this model can significantly reduce the uncertainty associated with the static model. 7 refs., 9 tabs., 7 figs.
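
    The dynamic representation referred to above rests on the one-dimensional Saint-Venant (shallow-water) equations; a standard form, in our notation:

    ```latex
    % 1-D Saint-Venant equations (mass and momentum conservation):
    % A = wetted cross-section, Q = discharge, h = water depth,
    % S_0 = bed slope, S_f = friction slope, g = gravity.
    \[
      \frac{\partial A}{\partial t} + \frac{\partial Q}{\partial x} = 0,
      \qquad
      \frac{\partial Q}{\partial t}
        + \frac{\partial}{\partial x}\!\left(\frac{Q^2}{A}\right)
        + gA\,\frac{\partial h}{\partial x}
        = gA\,(S_0 - S_f)
    \]
    ```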

  17. Uncertainty analysis techniques

    International Nuclear Information System (INIS)

    Marivoet, J.; Saltelli, A.; Cadelli, N.

    1987-01-01

    The origin of the uncertainties affecting performance assessments, as well as their propagation to dose and risk results, is discussed. The analysis focuses essentially on the uncertainties introduced by the input parameters, the values of which may range over several orders of magnitude and may be given as probability distribution functions. The paper briefly reviews the existing sampling techniques used for Monte Carlo simulations and the methods for characterizing the output curves and determining their convergence and confidence limits. Annual doses, expectation values of the doses, and risks are computed for a particular case of a possible repository in clay, in order to illustrate the significance of such output characteristics as the mean, the logarithmic mean and the median, as well as their ratios. The report concludes that, provisionally and owing to its better robustness, an estimator such as the 90th percentile may be substituted for the arithmetic mean when comparing estimated doses with acceptance criteria. In any case, the results obtained through uncertainty analyses must be interpreted with caution as long as the input data distribution functions are not derived from experiments reasonably reproducing the situation in a well-characterized repository and site
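
    A toy version of the Monte Carlo workflow described here, with invented input distributions and an invented dose model, shows how the competing output statistics are obtained:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000

    # Inputs spanning orders of magnitude are typically sampled
    # log-normally (or log-uniformly); values here are made up.
    k_d  = rng.lognormal(mean=np.log(1e-3), sigma=1.0, size=n)  # sorption
    flow = rng.lognormal(mean=np.log(1e-2), sigma=0.5, size=n)  # flux

    # Placeholder dose model; any monotone combination serves here.
    dose = 1e-4 * flow / (1.0 + 1e3 * k_d)

    print("arithmetic mean:", dose.mean())
    print("geometric mean :", np.exp(np.log(dose).mean()))
    print("median         :", np.median(dose))
    print("90th percentile:", np.percentile(dose, 90))
    ```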

  18. Uncertainty and measurement

    International Nuclear Information System (INIS)

    Landsberg, P.T.

    1990-01-01

    This paper explores how the quantum mechanics uncertainty relation can be considered to result from measurements. A distinction is drawn between the uncertainties obtained by scrutinising experiments and the standard deviation type of uncertainty definition used in quantum formalism. (UK)

  19. Characterizing Epistemic Uncertainty for Launch Vehicle Designs

    Science.gov (United States)

    Novack, Steven D.; Rogers, Jim; Hark, Frank; Al Hassan, Mohammad

    2016-01-01

    NASA Probabilistic Risk Assessment (PRA) has the task of estimating the aleatory (randomness) and epistemic (lack of knowledge) uncertainty of launch vehicle loss-of-mission and loss-of-crew risk and communicating the results. Launch vehicles are complex engineered systems designed with sophisticated subsystems that are built to work together to accomplish mission success. Some of these systems or subsystems are in the form of heritage equipment, while some have never been previously launched. For these cases, characterizing the epistemic uncertainty is of foremost importance, and it is anticipated that the epistemic uncertainty of a modified launch vehicle design would be greater than that of a design of well-understood heritage equipment. For reasons that will be discussed, standard uncertainty propagation methods using Monte Carlo simulation produce counterintuitive results and significantly underestimate epistemic uncertainty for launch vehicle models. Furthermore, standard PRA methods such as Uncertainty-Importance analyses used to identify components that are significant contributors to uncertainty are rendered obsolete, since changes in sensitivity to uncertainty are not reflected in the propagation of uncertainty using Monte Carlo methods. This paper provides a basis for the uncertainty underestimation for complex systems and especially, due to nuances of launch vehicle logic, for launch vehicles. It then suggests several alternative methods for estimating uncertainty and provides examples of estimation results. Lastly, the paper shows how to implement an Uncertainty-Importance analysis using one alternative approach, describes the results, and suggests ways to reduce epistemic uncertainty by focusing on additional data or testing of selected components.

  20. Balancing uncertainties

    International Nuclear Information System (INIS)

    Horton, S.G.

    1984-01-01

    The author presents a system planning perspective on how Ontario Hydro is viewing the future, and where nuclear power fits into that future. Before the 1980s Ontario experienced a steady seven percent per year growth in power demand. Shifting patterns of energy demand have made planning much more difficult. In the early 80s growth in demand fell short of predictions. It is hard to tell what level of demand to plan for in the future. With respect to any energy option, a utility planner or board of directors would want to know when it will be delivered, what it will cost when it is delivered, what it will cost to operate, how long it will last as an economic energy producer, and how all of these factors will be affected by future changes. Ontario Hydro's studies show that nuclear power continues to be the preferred option for large blocks of base load capacity. By 1996 Ontario Hydro will have saved about 10 billion 1983 dollars by using nuclear power. The utility continues to study both sides of the supply-demand equation, looking at conservation as an alternative to constructing new generating facilities and attempting to become aware of shifts in demand trends as soon as they happen

  1. Uncertainties in Nuclear Proliferation Modeling

    International Nuclear Information System (INIS)

    Kim, Chul Min; Yim, Man-Sung; Park, Hyeon Seok

    2015-01-01

    There have been various efforts in the research community to understand the determinants of nuclear proliferation and to develop quantitative tools to predict nuclear proliferation events. Such systematic approaches have shown the potential to provide warning for the international community to prevent nuclear proliferation activities. However, there is still considerable debate over the robustness of the estimated effects of the determinants and of the projection results. Some studies have shown that several factors can cause uncertainties in previous quantitative nuclear proliferation modeling work. This paper analyzes the uncertainties in past approaches and suggests future work with respect to proliferation history, analysis methods, and variable selection. The research community still lacks knowledge of the sources of uncertainty in current models, and fundamental problems in modeling will remain even if more advanced modeling methods are developed. Before developing more elaborate models based on hypotheses about time-dependent proliferation determinants, graph theory, and the like, it is important to analyze the uncertainty of current models in order to solve the fundamental problems of nuclear proliferation modeling. The uncertainty arising from different codings of proliferation history is small. More serious problems come from the limited analysis methods and from correlation among the variables. Problems in regression analysis and survival analysis cause large uncertainties even when using the same dataset, which decreases the robustness of the results. Inaccurate variables for nuclear proliferation also increase the uncertainty. To overcome these problems, further quantitative research should focus on analyzing the knowledge suggested by qualitative nuclear proliferation studies

  2. Uncertainties in effective dose estimates of adult CT head scans: The effect of head size

    International Nuclear Information System (INIS)

    Gregory, Kent J.; Bibbo, Giovanni; Pattison, John E.

    2009-01-01

    Purpose: This study is an extension of a previous study where the uncertainties in effective dose estimates from adult CT head scans were calculated using four CT effective dose estimation methods, three of which were computer programs (CT-EXPO, CTDOSIMETRY, and IMPACTDOSE) and one that involved the dose length product (DLP). However, that study did not include the uncertainty contribution due to variations in head sizes. Methods: The uncertainties due to head size variations were estimated by first using the computer program data to calculate doses to small and large heads. These doses were then compared with doses calculated for the phantom heads used by the computer programs. An uncertainty was then assigned based on the difference between the small and large head doses and the doses of the phantom heads. Results: The uncertainties due to head size variations alone were found to be between 4% and 26% depending on the method used and the patient gender. When these uncertainties were included with the results of the previous study, the overall uncertainties in effective dose estimates (stated at the 95% confidence interval) were 20%-31% (CT-EXPO), 15%-30% (CTDOSIMETRY), 20%-36% (IMPACTDOSE), and 31%-40% (DLP). Conclusions: For the computer programs, the lower overall uncertainties were still achieved when measured values of CT dose index were used rather than tabulated values. For DLP dose estimates, head size variations made the largest (for males) and second largest (for females) contributions to effective dose uncertainty. An improvement in the uncertainty of the DLP method dose estimates will be achieved if head size variation can be taken into account.
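
    The overall figures quoted in this record come from combining independent relative uncertainty components; a generic GUM-style sketch of such a combination, with component values invented for illustration:

    ```python
    import math

    # Hypothetical relative standard uncertainties (fractions) for one
    # dose-estimation method: program data, measured CT dose index,
    # and head-size variation.
    components = [0.06, 0.04, 0.10]

    u_combined = math.sqrt(sum(u**2 for u in components))
    U_expanded = 2.0 * u_combined  # coverage factor k = 2, ~95% CI

    print(f"combined standard uncertainty: {u_combined:.1%}")
    print(f"expanded uncertainty (k = 2) : {U_expanded:.1%}")
    ```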

  3. Uncertainties in effective dose estimates of adult CT head scans: The effect of head size

    Energy Technology Data Exchange (ETDEWEB)

    Gregory, Kent J.; Bibbo, Giovanni; Pattison, John E. [Department of Medical Physics, Royal Adelaide Hospital, Adelaide, South Australia 5000 (Australia) and School of Electrical and Information Engineering (Applied Physics), University of South Australia, Mawson Lakes, South Australia 5095 (Australia); Division of Medical Imaging, Women's and Children's Hospital, North Adelaide, South Australia 5006 (Australia) and School of Electrical and Information Engineering (Applied Physics), University of South Australia, Mawson Lakes, South Australia 5095 (Australia); School of Electrical and Information Engineering (Applied Physics), University of South Australia, Mawson Lakes, South Australia 5095 (Australia)

    2009-09-15

    Purpose: This study is an extension of a previous study where the uncertainties in effective dose estimates from adult CT head scans were calculated using four CT effective dose estimation methods, three of which were computer programs (CT-EXPO, CTDOSIMETRY, and IMPACTDOSE) and one that involved the dose length product (DLP). However, that study did not include the uncertainty contribution due to variations in head sizes. Methods: The uncertainties due to head size variations were estimated by first using the computer program data to calculate doses to small and large heads. These doses were then compared with doses calculated for the phantom heads used by the computer programs. An uncertainty was then assigned based on the difference between the small and large head doses and the doses of the phantom heads. Results: The uncertainties due to head size variations alone were found to be between 4% and 26% depending on the method used and the patient gender. When these uncertainties were included with the results of the previous study, the overall uncertainties in effective dose estimates (stated at the 95% confidence interval) were 20%-31% (CT-EXPO), 15%-30% (CTDOSIMETRY), 20%-36% (IMPACTDOSE), and 31%-40% (DLP). Conclusions: For the computer programs, the lower overall uncertainties were still achieved when measured values of CT dose index were used rather than tabulated values. For DLP dose estimates, head size variations made the largest (for males) and second largest (for females) contributions to effective dose uncertainty. An improvement in the uncertainty of the DLP method dose estimates will be achieved if head size variation can be taken into account.

  4. Sensitivity Analysis of Neutronic Parameters Due to Uncertainty in Thermo-hydraulic Parameters in the CAREM-25 Reactor; Analisis de Sensibilidad de los Parametros Neutronicos ante Incertezas en los Parametros Termohidraulicos en el Reactor CAREM-25

    Energy Technology Data Exchange (ETDEWEB)

    Serra, Oscar [Comision Nacional de Energia Atomica, Centro Atomico Bariloche (Argentina)

    2000-07-01

    Some studies were performed on the effect of uncertainty in the values of several thermo-hydraulic parameters on the core behaviour of the CAREM-25 reactor. Using the coupled codes CITVAP-THERMIT and perturbing the reference states, it was found that the effects on total power were not very important, but were much larger for the pressure. The effects were hardly significant for perturbations of the void fraction calculation and the fuel temperature, whereas the reactivity and the power peaking factor changed markedly in the case of the coolant flow. We conclude that this procedure is adequate and useful for our purpose.

  5. Relational uncertainty in service dyads

    DEFF Research Database (Denmark)

    Kreye, Melanie

    2017-01-01

    in service dyads and how they resolve it through suitable organisational responses to increase the level of service quality. Design/methodology/approach: We apply the overall logic of Organisational Information-Processing Theory (OIPT) and present empirical insights from two industrial case studies collected...... the relational uncertainty increased the functional quality while resolving the partner’s organisational uncertainty increased the technical quality of the delivered service. Originality: We make two contributions. First, we introduce relational uncertainty to the OM literature as the inability to predict...... and explain the actions of a partnering organisation due to a lack of knowledge about their abilities and intentions. Second, we present suitable organisational responses to relational uncertainty and their effect on service quality....

  6. The uncertainties in estimating measurement uncertainties

    International Nuclear Information System (INIS)

    Clark, J.P.; Shull, A.H.

    1994-01-01

    All measurements include some error. Whether measurements are used for accountability, environmental programs or process support, they are of little value unless accompanied by an estimate of the measurement's uncertainty. This fact is often overlooked by the individuals who need measurements to make decisions. This paper discusses the concepts of measurement, measurement errors (accuracy or bias, and precision or random error), physical and error models, measurement control programs, examples of measurement uncertainty, and uncertainty as related to measurement quality. Measurements are comparisons of unknowns to knowns, estimates of some true value plus an uncertainty, and are no better than the standards to which they are compared. Direct comparisons of unknowns that match the composition of known standards will normally have small uncertainties. In the real world, measurements usually involve indirect comparisons of significantly different materials (e.g., measuring a physical property of a chemical element in a sample having a matrix that is significantly different from that of the calibration standards). Consequently, there are many sources of error involved in measurement processes that can affect the quality of a measurement and its associated uncertainty. How the uncertainty estimates are determined and what they mean is as important as the measurement itself. The process of calculating the uncertainty of a measurement itself has uncertainties that must be handled correctly. Examples of chemistry laboratory measurements are reviewed in this report and recommendations are made for improving measurement uncertainty estimates

  7. Model Uncertainty for Bilinear Hysteretic Systems

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Thoft-Christensen, Palle

    1984-01-01

    The statistical uncertainty due to lack of information can, e.g., be taken into account by describing the variables by predictive density functions, Veneziano [2]. In general, model uncertainty is the uncertainty connected with the mathematical modelling of the physical reality. When structural reliability analysis...... is related to the concept of a failure surface (or limit state surface) in the n-dimensional basic variable space, then model uncertainty is at least due to the neglected variables, the modelling of the failure surface and the computational technique used. A more precise definition is given in section 2...

  8. Water supply infrastructure planning under multiple uncertainties: A differentiated approach

    Science.gov (United States)

    Fletcher, S.; Strzepek, K.

    2017-12-01

    Many water planners face increased pressure on water supply systems from increasing demands from population and economic growth in combination with uncertain water supply. Supply uncertainty arises from short-term climate variability and long-term climate change as well as uncertainty in groundwater availability. Social and economic uncertainties - such as sectoral competition for water, food and energy security, urbanization, and environmental protection - compound physical uncertainty. Further, the varying risk aversion of stakeholders and water managers makes it difficult to assess the necessity of expensive infrastructure investments to reduce risk. We categorize these uncertainties on two dimensions: whether they can be updated over time by collecting additional information, and whether the uncertainties can be described probabilistically or are "deep" uncertainties whose likelihood is unknown. Based on this, we apply a decision framework that combines simulation for probabilistic uncertainty, scenario analysis for deep uncertainty, and multi-stage decision analysis for uncertainties that are reduced over time with additional information. In light of these uncertainties and the investment costs of large infrastructure, we propose the assessment of staged, modular infrastructure and information updating as a hedge against risk. We apply this framework to cases in Melbourne, Australia and Riyadh, Saudi Arabia. Melbourne is a surface water system facing uncertain population growth and variable rainfall and runoff. A severe drought from 1997 to 2009 prompted investment in a 150 MCM/y reverse osmosis desalination plant with a capital cost of 3.5 billion. Our analysis shows that a flexible design, in which a smaller portion of capacity is developed initially with the option to add modular capacity in the future, can mitigate uncertainty and reduce the expected lifetime costs by up to 1 billion. In Riyadh, urban water use relies on fossil groundwater aquifers and
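
    The staged-capacity argument can be caricatured in a few lines: compare the expected cost of building the full plant up front with building cheaper modules only in the demand scenarios that need them. All numbers below are hypothetical; a real analysis would simulate operations and discount cash flows:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical scenarios for required extra supply (MCM/y).
    demand = rng.choice([50, 100, 150], size=10_000, p=[0.3, 0.4, 0.3])

    FULL_COST   = 3.5e9   # build 150 MCM/y immediately
    MODULE_COST = 1.4e9   # each 50 MCM/y module, built only if needed

    modules = np.ceil(demand / 50.0)
    print("E[cost], fixed   :", FULL_COST)
    print("E[cost], flexible:", (modules * MODULE_COST).mean())
    # Flexibility saves money in low-demand scenarios at the price of
    # a higher unit cost per module of capacity.
    ```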

  9. An advanced joint inversion system for CO2 storage modeling with large data sets for characterization and real-time monitoring: enhancing storage performance and reducing failure risks under uncertainties

    Energy Technology Data Exchange (ETDEWEB)

    Kitanidis, Peter [Stanford Univ., CA (United States)

    2016-04-30

    As large-scale, commercial storage projects become operational, the problem of utilizing information from diverse sources becomes more critically important. In this project, we developed, tested, and applied an advanced joint data inversion system for CO2 storage modeling with large data sets for use in site characterization and real-time monitoring. Emphasis was on the development of advanced and efficient computational algorithms for joint inversion of hydro-geophysical data, coupled with state-of-the-art forward process simulations. The developed system consists of (1) inversion tools using characterization data, such as 3D seismic survey (amplitude images), borehole log and core data, as well as hydraulic, tracer and thermal tests before CO2 injection, (2) joint inversion tools for updating the geologic model with the distribution of rock properties, thus reducing uncertainty, using hydro-geophysical monitoring data, and (3) highly efficient algorithms for directly solving the dense or sparse linear algebra systems derived from the joint inversion. The system combines methods from stochastic analysis, fast linear algebra, and high performance computing. The developed joint inversion tools have been tested through synthetic CO2 storage examples.

  10. Application of status uncertainty analysis methods for AP1000 LBLOCA calculation

    International Nuclear Information System (INIS)

    Zhang Shunxiang; Liang Guoxing

    2012-01-01

    Parameter uncertainty analysis is performed by using a suitable method to establish the response relations between input parameter uncertainties and output uncertainties. The application of parameter uncertainty analysis makes the simulation of the plant state more accurate and improves the plant economy while maintaining reasonable safety assurance. The AP1000 LBLOCA was analyzed in this paper, and the results indicate that the random sampling statistical analysis method, the sensitivity analysis numerical method and the traditional error propagation analysis method can all provide a quite large peak cladding temperature (PCT) safety margin, which is helpful for choosing a suitable uncertainty analysis method to improve the plant economy. Additionally, the random sampling statistical analysis method, which applies mathematical statistics theory, yields the largest safety margin owing to its reduced conservatism. Compared with the traditional conservative bounding parameter analysis method, the random sampling method can provide a PCT margin of 100 K, while the other two methods can only provide 50-60 K. (authors)

  11. Uncertainty in social dilemmas

    OpenAIRE

    Kwaadsteniet, Erik Willem de

    2007-01-01

    This dissertation focuses on social dilemmas, and more specifically, on environmental uncertainty in these dilemmas. Real-life social dilemma situations are often characterized by uncertainty. For example, fishermen mostly do not know the exact size of the fish population (i.e., resource size uncertainty). Several researchers have therefore asked themselves the question as to how such uncertainty influences people’s choice behavior. These researchers have repeatedly concluded that uncertainty...

  12. Water hammer and column separation due to accidental simultaneous closure of control valves in a large scale two-phase flow experimental test rig

    NARCIS (Netherlands)

    Bergant, A.; Westende, van 't J.M.C.; Koppel, T.; Gale, J.; Hou, Q.; Pandula, Z.; Tijsseling, A.S.

    2010-01-01

    A large-scale pipeline test rig at Deltares, Delft, The Netherlands has been used for filling and emptying experiments. Tests have been conducted in a horizontal 250 mm diameter PVC pipe of 258 m length with control valves at the downstream and upstream ends. This paper investigates the accidental simultaneous closure of these control valves and the resulting water hammer and column separation.

  13. Propagation of dynamic measurement uncertainty

    International Nuclear Information System (INIS)

    Hessling, J P

    2011-01-01

    The time-dependent measurement uncertainty has been evaluated in a number of recent publications, starting from a known uncertain dynamic model. This could be defined as the 'downward' propagation of uncertainty from the model to the targeted measurement. The propagation of uncertainty 'upward' from the calibration experiment to a dynamic model traditionally belongs to system identification. The use of different representations (time, frequency, etc.) is ubiquitous in dynamic measurement analyses. An expression of uncertainty in dynamic measurements, independent of representation and joining upward as well as downward propagation, is formulated for the first time in this paper. For applications in metrology, the high quality of the characterization may prevent any reasonably large and robust model from passing the whiteness test. This test is therefore relaxed by not directly requiring small systematic model errors in comparison to the randomness of the characterization. Instead, the systematic error of the dynamic model is propagated to the uncertainty of the measurand, analogously but differently to how stochastic contributions are propagated. The pass criterion of the model is thereby transferred from the identification to acceptance of the total accumulated uncertainty of the measurand. This increases the relevance of the test of the model, as it relates to its final use rather than to the quality of the calibration. The propagation of uncertainty hence includes the propagation of systematic model errors. For illustration, the 'upward' propagation of uncertainty is applied to determine whether an appliance box is damaged in an earthquake experiment. In this case, relaxation of the whiteness test was required to reach a conclusive result

  14. Propagation of statistical and nuclear data uncertainties in Monte Carlo burn-up calculations

    International Nuclear Information System (INIS)

    Garcia-Herranz, Nuria; Cabellos, Oscar; Sanz, Javier; Juan, Jesus; Kuijper, Jim C.

    2008-01-01

    Two methodologies to propagate the uncertainties on the nuclide inventory in combined Monte Carlo-spectrum and burn-up calculations are presented, based on sensitivity/uncertainty and random sampling techniques (uncertainty Monte Carlo method). Both enable the assessment of the impact of uncertainties in the nuclear data as well as uncertainties due to the statistical nature of the Monte Carlo neutron transport calculation. The methodologies are implemented in our MCNP-ACAB system, which combines the neutron transport code MCNP-4C and the inventory code ACAB. A high burn-up benchmark problem is used to test the MCNP-ACAB performance in inventory predictions, with no uncertainties. A good agreement is found with the results of other participants. This benchmark problem is also used to assess the impact of nuclear data uncertainties and statistical flux errors in high burn-up applications. A detailed calculation is performed to evaluate the effect of cross-section uncertainties on the inventory prediction, taking into account the temporal evolution of the neutron flux level and spectrum. Very large uncertainties are found at the unusually high burn-up of this exercise (800 MWd/kgHM). To compare the impact of the statistical errors in the calculated flux with that of the cross-section uncertainties, a simplified problem is considered, taking a constant neutron flux level and spectrum. It is shown that, provided the flux statistical deviations in the Monte Carlo transport calculation do not exceed a given value, the effect of the flux errors on the calculated isotopic inventory is negligible (even at very high burn-up) compared to the effect of the large cross-section uncertainties present in the data files available today
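
    The "uncertainty Monte Carlo" idea can be illustrated on a single-nuclide depletion problem: sample the one-group capture cross-section from its assumed distribution, solve the depletion equation per sample, and inspect the spread of the final inventory. All values below are invented; a real calculation solves the coupled Bateman equations with a code such as ACAB:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    PHI = 1e14    # constant neutron flux, n/(cm^2 s)
    T   = 3.15e7  # irradiation time, s (about one year)
    N0  = 1.0     # initial inventory, normalized

    # Hypothetical capture cross-section: 2 barns +/- 10% (1 sigma).
    sigma = rng.normal(loc=2e-24, scale=0.2e-24, size=5000)  # cm^2

    # dN/dt = -sigma*phi*N has a closed-form solution per sample.
    N_final = N0 * np.exp(-sigma * PHI * T)

    print("mean final inventory   :", N_final.mean())
    print("relative standard dev. :", N_final.std() / N_final.mean())
    ```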

  15. Propagation of statistical and nuclear data uncertainties in Monte Carlo burn-up calculations

    Energy Technology Data Exchange (ETDEWEB)

    Garcia-Herranz, Nuria [Departamento de Ingenieria Nuclear, Universidad Politecnica de Madrid, UPM (Spain)], E-mail: nuria@din.upm.es; Cabellos, Oscar [Departamento de Ingenieria Nuclear, Universidad Politecnica de Madrid, UPM (Spain); Sanz, Javier [Departamento de Ingenieria Energetica, Universidad Nacional de Educacion a Distancia, UNED (Spain); Juan, Jesus [Laboratorio de Estadistica, Universidad Politecnica de Madrid, UPM (Spain); Kuijper, Jim C. [NRG - Fuels, Actinides and Isotopes Group, Petten (Netherlands)

    2008-04-15

    Two methodologies to propagate the uncertainties on the nuclide inventory in combined Monte Carlo-spectrum and burn-up calculations are presented, based on sensitivity/uncertainty and random sampling techniques (uncertainty Monte Carlo method). Both enable the assessment of the impact of uncertainties in the nuclear data as well as uncertainties due to the statistical nature of the Monte Carlo neutron transport calculation. The methodologies are implemented in our MCNP-ACAB system, which combines the neutron transport code MCNP-4C and the inventory code ACAB. A high burn-up benchmark problem is used to test the MCNP-ACAB performance in inventory predictions, with no uncertainties. A good agreement is found with the results of other participants. This benchmark problem is also used to assess the impact of nuclear data uncertainties and statistical flux errors in high burn-up applications. A detailed calculation is performed to evaluate the effect of cross-section uncertainties on the inventory prediction, taking into account the temporal evolution of the neutron flux level and spectrum. Very large uncertainties are found at the unusually high burn-up of this exercise (800 MWd/kgHM). To compare the impact of the statistical errors in the calculated flux with that of the cross-section uncertainties, a simplified problem is considered, taking a constant neutron flux level and spectrum. It is shown that, provided the flux statistical deviations in the Monte Carlo transport calculation do not exceed a given value, the effect of the flux errors on the calculated isotopic inventory is negligible (even at very high burn-up) compared to the effect of the large cross-section uncertainties present in the data files available today.

  16. Inventories and sales uncertainty

    OpenAIRE

    Caglayan, M.; Maioli, S.; Mateut, S.

    2011-01-01

    We investigate the empirical linkages between sales uncertainty and firms' inventory investment behavior while controlling for firms' financial strength. Using large panels of manufacturing firms from several European countries, we find that higher sales uncertainty leads to larger stocks of inventories. We also identify an indirect effect of sales uncertainty on inventory accumulation through the financial strength of firms. Our results provide evidence that financial strength mitigates the a...

  17. Simulation of impurity transport in the peripheral plasma due to the emission of dust in long pulse discharges on the Large Helical Device

    Directory of Open Access Journals (Sweden)

    M. Shoji

    2017-08-01

    Full Text Available Two different plasma termination processes caused by dust emission were observed in long pulse discharges in the Large Helical Device. One is plasma termination caused by large amounts of carbon dust released from a lower divertor region; the other is termination caused by stainless steel (iron) dust emission from the surface of a helical coil can. The effect of the dust emission on the sustainment of the long pulse discharges is investigated using a three-dimensional edge plasma transport code (EMC3-EIRENE) coupled with a dust transport code (DUSTT). The simulation shows that the plasma is more strongly influenced by the iron dust emission from the helical coil can than by the carbon dust emission from the divertor region. The simulation also revealed that the plasma flow in the divertor legs is quite effective in preventing dust from terminating the long pulse discharges.

  18. Sensitivities and uncertainties of modeled ground temperatures in mountain environments

    Directory of Open Access Journals (Sweden)

    S. Gubler

    2013-08-01

    discretization parameters. We show that the temporal resolution should be at least 1 h to ensure errors of less than 0.2 °C in the modeled MAGT, and the uppermost ground layer should be at most 20 mm thick. Within the topographic setting, the total parametric output uncertainties, expressed as the length of the 95% uncertainty interval of the Monte Carlo simulations, range from 0.5 to 1.5 °C for clay and silt, and from 0.5 to around 2.4 °C for peat, sand, gravel and rock. These uncertainties are comparable to the variability of ground surface temperatures measured within 10 m × 10 m grids in Switzerland. The increased uncertainties for sand, peat and gravel are largely due to their sensitivity to the hydraulic conductivity.

  19. Uncertainty analysis in safety assessment

    Energy Technology Data Exchange (ETDEWEB)

    Lemos, Francisco Luiz de [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN), Belo Horizonte, MG (Brazil); Sullivan, Terry [Brookhaven National Lab., Upton, NY (United States)

    1997-12-31

    Nuclear waste disposal is a very complex subject which requires the study of many different fields of science, such as hydrogeology, meteorology, geochemistry, etc. In addition, waste disposal facilities are designed to last for a very long period of time. Both of these conditions fill safety assessment projections with uncertainty. This paper addresses the treatment of uncertainty in safety assessment modeling due to the variability of data and reviews some current approaches used to deal with this problem. (author) 13 refs.; e-mail: lemos at bnl.gov; sulliva1 at bnl.gov

  20. Spontaneous Retroperitoneal Hemorrhage (Wunderlich Syndrome) due to Large Upper Pole Renal Angiomyolipoma: Does Robotic-Assisted Laparoscopic Partial Nephrectomy Have a Role in Primary Treatment?

    Directory of Open Access Journals (Sweden)

    Achilles Ploumidis

    2013-01-01

    Full Text Available Spontaneous rupture with consequent retroperitoneal hemorrhage (Wunderlich's syndrome) is the most feared complication of large renal angiomyolipomas (RAMLs). In hemodynamically stable patients, minimally invasive therapies have superseded open surgery as the mainstay of treatment, with contemporary cases mostly treated by selective arterial embolization. Robotic-assisted laparoscopic partial nephrectomy (RALPN) is an established minimal-access treatment that has been used in the past for benign and malignant lesions of the kidney in the elective setting, but rarely in urgent situations as primary treatment. We present a case of a ruptured RAML in a young female treated effectively by RALPN.

  1. How much can the number of jabiru stork (Ciconiidae) nests vary due to change of flood extension in a large Neotropical floodplain?

    Directory of Open Access Journals (Sweden)

    Guilherme Mourão

    2010-10-01

    Full Text Available The jabiru stork, Jabiru mycteria (Lichtenstein, 1819), a large, long-legged wading bird occurring in lowland wetlands from southern Mexico to northern Argentina, is considered endangered in a large portion of its distribution range. We conducted aerial surveys to estimate the number of active jabiru nests in the Brazilian Pantanal (140,000 km²) in September of 1991-1993, 1998, 2000-2002, and 2004. Corrected densities of active nests were regressed against the annual hydrologic index (AHI), an index of flood extension in the Pantanal based on the water level of the Paraguay River. Annual nest density was a non-linear function of the AHI, modeled by the equation 6.5 × 10⁻⁸ · AHI^1.99 (corrected r² = 0.72, n = 7). We applied this model to the AHI between 1900 and 2004. The results indicate that the number of jabiru nests may have varied from about 220 in 1971 to more than 23,000 in the nesting season of 1921; the estimates for our study period (1991 to 2004) averaged about 12,400 nests. Our model indicates that inter-annual variations in flooding extent can cause dramatic changes in the number of active jabiru nests. Since the jabiru stork responds negatively to drier conditions in the Pantanal, direct human-induced changes in the hydrological patterns, as well as the effects of global climate change, may strongly jeopardize the population in the region.

  2. Orientation and uncertainties

    International Nuclear Information System (INIS)

    Peters, H.P.; Hennen, L.

    1990-01-01

    The authors report on the results of three representative surveys that made a closer inquiry into perceptions and valuations of information and information sources concerning Chernobyl. It turns out that the information sources were generally considered not very trustworthy. This was largely attributable to the interpretation of the events being tied to attitudes on the atomic energy issue. The greatest credibility was accorded to television broadcasting. The authors summarize their discourse as follows: there is good reason to interpret the widespread uncertainty after Chernobyl as proof that large parts of the population are prepared and willing to assume a critical stance towards information and prefer to draw their information from various sources representing different positions. (orig.) [de]

  3. Uncertainty propagation through dynamic models of assemblies of mechanical structures

    International Nuclear Information System (INIS)

    Daouk, Sami

    2016-01-01

    When studying the behaviour of mechanical systems, mathematical models and structural parameters are usually considered deterministic. Experience shows, however, that these elements are uncertain in most cases, due to natural variability or lack of knowledge. Quantifying the quality and reliability of the numerical model of an industrial assembly therefore remains a major question in low-frequency dynamics. The purpose of this thesis is to improve the vibratory design of bolted assemblies by setting up a dynamic connector model that accounts for different types and sources of uncertainty in stiffness parameters, in a way that is simple, efficient and usable in an industrial context. This work has been carried out in the framework of the SICODYN project, led by EDF R and D, which aims to characterise and quantify, numerically and experimentally, the uncertainties in the dynamic behaviour of bolted industrial assemblies. Comparative studies of several numerical methods of uncertainty propagation demonstrate the advantage of using the Lack-Of-Knowledge theory. An experimental characterisation of uncertainties in bolted structures is performed on a dynamic test rig and on an industrial assembly. The propagation of many small and large uncertainties through different dynamic models of mechanical assemblies leads to an assessment of the efficiency of the Lack-Of-Knowledge theory and its applicability in an industrial environment. (author)

  4. Instrument uncertainty predictions

    International Nuclear Information System (INIS)

    Coutts, D.A.

    1991-07-01

    The accuracy of measurements and correlations should normally be provided for most experimental activities. The uncertainty is a measure of the accuracy of a stated value or equation. The uncertainty term reflects a combination of instrument errors, modeling limitations, and phenomena understanding deficiencies. This report provides several methodologies to estimate an instrument's uncertainty when used in experimental work. Methods are shown to predict both the pretest and post-test uncertainty
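
    One classic way to build such a prediction is to combine the instrument's elemental error sources in quadrature, keeping systematic and random contributions separate; a minimal sketch with invented error values:

    ```python
    import math

    # Hypothetical elemental errors for a pressure transducer, in kPa.
    bias_errors   = [0.10, 0.05, 0.04]  # calibration, linearity, hysteresis
    random_errors = [0.08, 0.03]        # repeatability contributions

    B = math.sqrt(sum(e**2 for e in bias_errors))    # systematic part
    S = math.sqrt(sum(e**2 for e in random_errors))  # random part

    # One common combination at ~95% coverage (Student t taken as 2).
    U = math.sqrt(B**2 + (2.0 * S)**2)
    print(f"bias {B:.3f} kPa, random {S:.3f} kPa, total U {U:.3f} kPa")
    ```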

  5. [Measures against Radiation Exposure Due to Large-Scale Nuclear Accident in Distant Place--Radioactive Materials in Nagasaki from Fukushima Daiichi Nuclear Power Plant].

    Science.gov (United States)

    Yuan, Jun; Sera, Koichiro; Takatsuji, Toshihiro

    2015-01-01

    The aim was to investigate the human health effects of radiation exposure due to possible future nuclear accidents in distant places, together with various other findings, through analysis of the radioactive materials that contaminated the atmosphere of Nagasaki after the Fukushima Daiichi Nuclear Power Plant accident. The concentrations of radioactive materials in aerosols in the atmosphere of Nagasaki were measured using a germanium semiconductor detector from March 2011 to March 2013. The internal exposure dose was calculated in accordance with ICRP Publ. 72. Air trajectories were analyzed using the NOAA and METEX web-based systems. (134)Cs and (137)Cs were repeatedly detected. The air trajectory analysis showed that (134)Cs and (137)Cs flew directly from the Fukushima Daiichi Nuclear Power Plant from March to April 2011. However, direct air trajectories were rarely detected after this period, even when (134)Cs and (137)Cs were detected. The activity ratios ((134)Cs/(137)Cs) of almost all samples, converted to their values in March 2011, were about unity. This strongly suggests that the detected (134)Cs and (137)Cs mainly originated from the Fukushima Daiichi Nuclear Power Plant accident in March 2011. Although the (134)Cs and (137)Cs concentrations per air volume were very low and the human health effects of internal exposure via inhalation are expected to be negligible, the specific activities (concentrations per aerosol mass) were relatively high. It was found that possible future nuclear accidents may cause severe radioactive contamination, which may require radiation exposure control of farm goods at distances of more than 1000 km from the accident site.

  6. The state of the art of the impact of sampling uncertainty on measurement uncertainty

    Science.gov (United States)

    Leite, V. J.; Oliveira, E. C.

    2018-03-01

    The measurement uncertainty is a parameter that expresses the reliability of a result and can be divided into two large groups: sampling and analytical variations. Analytical uncertainty arises from a controlled process performed in the laboratory. The same does not hold for the sampling uncertainty, which, because it faces several obstacles and there is no clarity on how to perform the procedures, has been neglected, although it is admittedly indispensable to the measurement process. This paper aims to describe the state of the art of sampling uncertainty and to assess its relevance to measurement uncertainty.
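
    Sampling and analytical contributions are commonly separated with a duplicate design: take duplicate samples at each target and analyse each duplicate twice, then split the variance with a one-way analysis of variance, the combined uncertainty adding the two parts in quadrature. A minimal sketch with made-up data:

    ```python
    import numpy as np

    # results[target, sample_duplicate, analysis_duplicate], e.g. mg/kg
    results = np.array([
        [[10.2, 10.4], [11.0, 10.8]],
        [[ 9.6,  9.5], [ 9.9, 10.1]],
        [[10.9, 11.2], [10.3, 10.4]],
    ])

    # Analytical variance: spread between repeat analyses of a sample.
    var_anal = results.var(axis=2, ddof=1).mean()

    # Variance between sample means within a target contains half the
    # analytical variance; subtract it to isolate the sampling part.
    sample_means = results.mean(axis=2)
    var_between = sample_means.var(axis=1, ddof=1).mean()
    var_samp = max(var_between - var_anal / 2.0, 0.0)

    u_meas = np.sqrt(var_samp + var_anal)
    print(f"u_sampling^2 = {var_samp:.4f}, u_analytical^2 = {var_anal:.4f}")
    print(f"combined measurement uncertainty u = {u_meas:.4f}")
    ```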

  7. Observation of enhanced radial transport of energetic ion due to energetic particle mode destabilized by helically-trapped energetic ion in the Large Helical Device

    Science.gov (United States)

    Ogawa, K.; Isobe, M.; Kawase, H.; Nishitani, T.; Seki, R.; Osakabe, M.; LHD Experiment Group

    2018-04-01

    A deuterium experiment was initiated in March 2017 in the Large Helical Device (LHD) to achieve higher-temperature and higher-density plasmas. The central ion temperature increases notably compared with that in hydrogen experiments. However, an energetic particle mode called the helically-trapped energetic-ion-driven resistive interchange (EIC) mode is often excited by intensive perpendicular neutral beam injection in high ion-temperature discharges. The mode leads to a significant decrease of the ion temperature or limits the sustainment of the high ion-temperature state. To understand the effect of the EIC mode on energetic ion confinement, the radial transport of energetic ions is studied by means of the neutron flux monitor and the vertical neutron camera newly installed on the LHD. Decreases in the line-integrated neutron profile in the core channels show that helically-trapped energetic ions are lost from the plasma.

  8. Clinical experience in the screening and management of a large kindred with familial isolated pituitary adenoma due to an aryl hydrocarbon receptor interacting protein (AIP) mutation.

    Science.gov (United States)

    Williams, Fred; Hunter, Steven; Bradley, Lisa; Chahal, Harvinder S; Storr, Helen L; Akker, Scott A; Kumar, Ajith V; Orme, Stephen M; Evanson, Jane; Abid, Noina; Morrison, Patrick J; Korbonits, Márta; Atkinson, A Brew

    2014-04-01

    Germline AIP mutations usually cause young-onset acromegaly with low penetrance in a subset of familial isolated pituitary adenoma families. We describe our experience with a large family carrying the R304* AIP mutation and discuss some of the diagnostic dilemmas and management issues. The aim of the study was to identify and screen mutation carriers in the family. Forty-three family members participated in the study, which was performed in university hospitals. We conducted genetic and endocrine screening of family members. We identified 18 carriers of the R304* mutation, three family members with an AIP variant, A299V, and two family members who harbored both changes. One of the two index cases presented with gigantism and pituitary apoplexy; the other presented with young-onset acromegaly; both had surgery and radiotherapy. After genetic and clinical screening of the family, two R304* carriers were diagnosed with acromegaly. They underwent transsphenoidal surgery after a short period of somatostatin analog treatment. One of these two patients is in remission; the other achieved a successful pregnancy despite suboptimal control of acromegaly. One family member carrying A299V had previously been diagnosed with a microprolactinoma; we consider this case to be a phenocopy. The height of the unaffected R304* carriers is not different from that of noncarrier relatives. Families with AIP mutations present particular problems, such as the occurrence of large invasive tumors, poor response to medical treatment, difficulties with fertility and management of pregnancy, and the finding of AIP sequence variants of unknown significance. Because the disease mostly develops at a younger age and penetrance is low, the timing and duration of follow-up of carriers without overt disease require further study. The psychological and financial impact of prolonged clinical screening must be considered. Excellent relationships between the family, endocrinologists, and

  9. Enucleaton of the right eye due to large choroidal melanoma with simultaneous penetrating cornea transplantation from OD to OS (Case report.

    Directory of Open Access Journals (Sweden)

    E. A. Korchuganova

    2013-01-01

    Full Text Available We present a case report of a 75-year-old woman with choroidal melanoma (T3N0M0) of the right eye and a failed graft on the left pseudophakic eye with far-advanced glaucoma and ARMD. No treatment was given to the leading eye with VA 0.2. VA of OS = 1/∞ pr.l.certa; PKP OS in 2008 for pseudophakic bullous keratopathy on the eye with far-advanced glaucoma. IOP was normal after previous filtering surgery. After PKP, VA = 0.04; the graft remained clear for 2 years; then opacification and vascularization gradually occurred and VA dropped to light perception. Echography OD: tumor h 8.29 mm, d 21.77 mm. No ingrowth of the tumor into the anterior segment of the eye; VA OD = 0 (no light perception). Considering the need to enucleate the right eye with a large choroidal melanoma, the advantage of using a corneal autograft from OD to OS, and the location of the tumor in the posterior pole with no ingrowth into the anterior segment, the decision was made to enucleate the right eye and simultaneously transplant a corneal graft from OD to OS. The patient was discharged from the Ophthalmology Hospital with VA OS = 0.01; during the next week VA improved to 0.02. The 8.0 mm graft is clear, fixed with 8 interrupted sutures and a continuous 10/0 nylon suture. The anterior chamber is of normal depth, the iris atrophic, the PC IOL in stable position. The optic nerve head is pale with subtotal deep glaucomatous excavation. Conclusion: the presented case report demonstrates the rare possibility of using the cornea, after enucleation of an eye with a large malignant tumor located in the posterior pole, for grafting in the only eye with a failed vascularised graft. It was the only possibility for this patient to restore some vision.

  11. Late-Life Depressive Symptoms and Lifetime History of Major Depression: Cognitive Deficits are Largely Due to Incipient Dementia rather than Depression.

    Science.gov (United States)

    Heser, Kathrin; Bleckwenn, Markus; Wiese, Birgitt; Mamone, Silke; Riedel-Heller, Steffi G; Stein, Janine; Lühmann, Dagmar; Posselt, Tina; Fuchs, Angela; Pentzek, Michael; Weyerer, Siegfried; Werle, Jochen; Weeg, Dagmar; Bickel, Horst; Brettschneider, Christian; König, Hans-Helmut; Maier, Wolfgang; Scherer, Martin; Wagner, Michael

    2016-08-01

    Late-life depression is frequently accompanied by cognitive impairments. Whether these impairments indicate a prodromal state of dementia or are a symptomatic expression of depression per se is not well studied. In a cohort of very old, initially non-demented primary care patients (n = 2,709, mean age = 81.1 y), cognitive performance was compared between groups of participants with or without elevated depressive symptoms and with or without subsequent dementia using ANCOVA (adjusted for age, sex, and education). Logistic regression analyses were computed to predict subsequent dementia over up to six years of follow-up. The same analytical approach was applied to lifetime major depression. Participants with elevated depressive symptoms but without subsequent dementia showed only small to medium cognitive deficits. In contrast, participants with depressive symptoms and subsequent dementia showed medium to very large cognitive deficits. In adjusted logistic regression models, learning and memory deficits predicted the risk of subsequent dementia in participants with depressive symptoms. Participants with a lifetime history of major depression without subsequent dementia showed no cognitive deficits. However, in adjusted logistic regression models, learning and orientation deficits also predicted the risk of subsequent dementia in participants with lifetime major depression. Marked cognitive impairments in old-age depression should not be dismissed as "depressive pseudodementia", but require clinical attention as a possible sign of incipient dementia. Non-depressed elderly with a lifetime history of major depression, who remained free of dementia during follow-up, had largely normal cognitive performance.

  12. Uncertainty analysis and validation of environmental models. The empirically based uncertainty analysis

    International Nuclear Information System (INIS)

    Monte, Luigi; Hakanson, Lars; Bergstroem, Ulla; Brittain, John; Heling, Rudie

    1996-01-01

    The principles of Empirically Based Uncertainty Analysis (EBUA) are described. EBUA is based on the evaluation of 'performance indices' that express the level of agreement between the model and sets of independent empirical data collected in different experimental circumstances. Some of these indices may be used to evaluate the confidence limits of the model output. The method is based on the statistical analysis of the distribution of the index values and on the quantitative relationship of these values with the ratio 'experimental data/model output'. Some performance indices are described in the present paper. Among these, the so-called 'functional distance' $d$ between the logarithm of the model output and the logarithm of the experimental data, defined as $d^2 = \sum_{i=1}^{n} (\ln M_i - \ln O_i)^2 / n$, where $M_i$ is the i-th experimental value, $O_i$ the corresponding model evaluation and $n$ the number of pairs 'experimental value, predicted value', is an important tool for the EBUA method. From the statistical distribution of this performance index, it is possible to infer the characteristics of the distribution of the ratio 'experimental data/model output' and, consequently, to evaluate the confidence limits for the model predictions. This method was applied to calculate the uncertainty level of a model developed to predict the migration of radiocaesium in lacustrine systems. Unfortunately, performance indices are affected by the uncertainty of the experimental data used in validation. Indeed, measured environmental levels of contamination are generally associated with large uncertainty due to the measurement and sampling techniques and to the large variability in space and time of the measured quantities. It is demonstrated that this undesired effect may, in some circumstances, be corrected by means of simple formulae
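
    A minimal numerical sketch of the index defined above, assuming paired, strictly positive arrays of measurements and model outputs (the function name and example values are ours, not the paper's):

```python
import numpy as np

# EBUA 'functional distance': d^2 = sum_i (ln M_i - ln O_i)^2 / n,
# where M_i are measurements and O_i the corresponding model outputs.
def functional_distance(M, O):
    M, O = np.asarray(M, float), np.asarray(O, float)
    return np.sqrt(np.mean((np.log(M) - np.log(O)) ** 2))

# If predictions track measurements within a factor of ~2,
# d stays near ln(2) ≈ 0.69.
M = np.array([1.0, 3.2, 10.0, 0.5])
O = np.array([1.5, 2.0, 12.0, 0.4])
print(functional_distance(M, O))
```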

  13. Correlated uncertainties in integral data

    International Nuclear Information System (INIS)

    McCracken, A.K.

    1978-01-01

    The use of correlated uncertainties in calculational data is shown, in the cases investigated, to lead to a reduction in the uncertainty of calculated quantities of importance to reactor design. It is stressed, however, that such reductions are likely to be important in only a minority of cases of practical interest. The effect of uncertainties in detector cross-sections is considered and is seen to be, in some cases, of equal importance to that of the data used in calculations. Numerical investigations have been limited by the sparse information available on data correlations; comparisons of these data reveal quite large inconsistencies both for detector cross-sections and for cross-sections of interest in reactor calculations
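
    The mechanism by which correlations can reduce the uncertainty of a calculated quantity can be illustrated with the standard first-order "sandwich" formula; the sensitivities and covariances below are invented for illustration and are not taken from the paper:

```python
import numpy as np

# First-order "sandwich" propagation of correlated data uncertainties
# into a calculated response R: (dR/R)^2 = s^T C s, with s the relative
# sensitivity vector and C the relative covariance matrix of the data.
s = np.array([0.8, -0.3, 0.5])        # relative sensitivities (assumed)
sigma = np.array([0.05, 0.10, 0.07])  # relative std. deviations (assumed)

corr_none = np.eye(3)                 # uncorrelated data
corr_some = np.array([[1.0, 0.6, 0.0],
                      [0.6, 1.0, 0.0],
                      [0.0, 0.0, 1.0]])

for corr in (corr_none, corr_some):
    C = np.outer(sigma, sigma) * corr
    print("relative uncertainty of R: %.3f" % np.sqrt(s @ C @ s))
# The positive correlation between inputs 1 and 2, whose sensitivities
# have opposite signs, reduces the combined uncertainty (0.061 -> 0.048).
```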

  14. Evaluation of uncertainties in MUF for a LWR fuel fabrication plant. Pt.2 - Pt.4

    International Nuclear Information System (INIS)

    Mennerdahl, D.

    1984-09-01

    MUF (Material Unaccounted For) is a parameter defined as the estimated loss of materials during a certain period of time. A suitable method for uncertainty and bias estimation has been developed, specifically adjusted for a facility like the ASEA-ATOM fuel fabrication plant. Operations that are expected to contribute to the uncertainties have been compiled, and the information required to apply the developed method is described. Simplifications of the required information that do not sacrifice accuracy are proposed. ASEA-ATOM had earlier determined uncertainty data for the scales used for nuclear materials; the statistical uncertainties included random errors and short-term and long-term systematic errors, and information for the determination of biases (constants and formulas) was also provided. The method proposed by ASEA-ATOM for determining the uncertainties due to the scales is compatible with the method proposed in this report. For operations other than weighing, the information from ASEA-ATOM is limited; such operations completely dominate the total uncertainty in MUF. Examples of uncertainty and bias calculations are given for uranium oxide powders in large containers. The examples emphasize the differences between statistical errors (random and systematic errors) and biases (known errors). The importance of correlations between different items in the inventories is explained; a correlation of particular importance arises from the use of nominal factors (uranium concentration). A portable personal computer can be used to determine uncertainties in MUF. (author)
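
    A hypothetical sketch of why random and systematic weighing errors enter the MUF variance differently (the function name and numbers are illustrative, not values from the ASEA-ATOM study):

```python
import numpy as np

# For n weighings with relative random error sr and a shared relative
# systematic error ss, random errors add in quadrature per item while a
# systematic error acts on the summed mass (fully correlated items).
def muf_std(masses, sr, ss):
    masses = np.asarray(masses, float)
    var_random = np.sum((sr * masses) ** 2)      # independent per item
    var_systematic = (ss * masses.sum()) ** 2    # fully correlated
    return np.sqrt(var_random + var_systematic)

masses = np.full(100, 20.0)  # 100 containers of 20 kg uranium each (assumed)
print(muf_std(masses, sr=1e-3, ss=5e-4))  # systematic term dominates, ~1 kg
```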

  15. The role of scientific uncertainty in compliance with the Kyoto Protocol to the Climate Change Convention

    International Nuclear Information System (INIS)

    Gupta, Joyeeta; Olsthoorn, Xander; Rotenberg, Edan

    2003-01-01

    Under the climate change treaties, developed countries are under a quantitative obligation to limit their emissions of greenhouse gases (GHG). This paper argues that although the climate change regime is setting up various measures and mechanisms, there will still be significant uncertainty about the actual emission reductions, and the effectiveness of the regime will depend largely on how countries actually implement their obligations in practice. These uncertainties arise from the calculation of emissions from each source, the tallying up of these emissions, the adding or deducting of changes due to land use change and forestry (LUCF) and, finally, the subtracting or adding of emission reduction units (ERUs). Further, the paper points to the problem of uncertainty in the reductions as opposed to the uncertainty in the inventories themselves. The protocols have temporarily opted to deal with these problems through harmonisation of reporting methodologies and to seek transparency by calling on the parties involved to use specific guidelines and to report on their uncertainty. This paper concludes that this harmonisation of reporting methodologies does not account for regional differences, and that while transparency will indicate when countries are adopting strategies that have high uncertainty, it will not help to increase the effectiveness of the protocol. Uncertainty about compliance then becomes a critical issue. This paper proposes to reduce this uncertainty in compliance by setting a minimum requirement for the probability of compliance

  16. Uncertainty Quantification in Numerical Aerodynamics

    KAUST Repository

    Litvinenko, Alexander

    2017-05-16

    We consider uncertainty quantification problem in aerodynamic simulations. We identify input uncertainties, classify them, suggest an appropriate statistical model and, finally, estimate propagation of these uncertainties into the solution (pressure, velocity and density fields as well as the lift and drag coefficients). The deterministic problem under consideration is a compressible transonic Reynolds-averaged Navier-Strokes flow around an airfoil with random/uncertain data. Input uncertainties include: uncertain angle of attack, the Mach number, random perturbations in the airfoil geometry, mesh, shock location, turbulence model and parameters of this turbulence model. This problem requires efficient numerical/statistical methods since it is computationally expensive, especially for the uncertainties caused by random geometry variations which involve a large number of variables. In numerical section we compares five methods, including quasi-Monte Carlo quadrature, polynomial chaos with coefficients determined by sparse quadrature and gradient-enhanced version of Kriging, radial basis functions and point collocation polynomial chaos, in their efficiency in estimating statistics of aerodynamic performance upon random perturbation to the airfoil geometry [D.Liu et al \\'17]. For modeling we used the TAU code, developed in DLR, Germany.
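
    A toy illustration of the simplest of these propagation strategies, plain Monte Carlo sampling of uncertain flow parameters; the surrogate lift model, distributions and numbers are placeholders of ours, not the TAU/DLR setup:

```python
import numpy as np

# Propagate an uncertain angle of attack and Mach number through a
# placeholder surrogate for the lift coefficient and report statistics.
rng = np.random.default_rng(0)
n = 10_000
alpha = rng.normal(2.0, 0.3, n)   # angle of attack [deg], assumed spread
mach = rng.normal(0.73, 0.01, n)  # Mach number, assumed spread

def lift_surrogate(alpha_deg, mach):
    # Prandtl-Glauert-like compressibility correction on a linear lift
    # slope; this stands in for the expensive CFD solve.
    return 0.11 * alpha_deg / np.sqrt(1.0 - mach ** 2)

cl = lift_surrogate(alpha, mach)
print("mean CL = %.4f, std CL = %.4f" % (cl.mean(), cl.std()))
```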

  17. Soil carbon sequestration due to post-Soviet cropland abandonment: estimates from a large-scale soil organic carbon field inventory.

    Science.gov (United States)

    Wertebach, Tim-Martin; Hölzel, Norbert; Kämpf, Immo; Yurtaev, Andrey; Tupitsin, Sergey; Kiehl, Kathrin; Kamp, Johannes; Kleinebecker, Till

    2017-09-01

    The break-up of the Soviet Union in 1991 triggered cropland abandonment on a continental scale, which in turn led to carbon accumulation on abandoned land across Eurasia. Previous studies have estimated carbon accumulation rates across Russia based on large-scale modelling. Studies that assess carbon sequestration on abandoned land based on robust field sampling are rare. We investigated soil organic carbon (SOC) stocks using a randomized sampling design along a climatic gradient from forest steppe to Sub-Taiga in Western Siberia (Tyumen Province). In total, SOC contents were sampled on 470 plots across different soil and land-use types. The effect of land use on changes in SOC stock was evaluated, and carbon sequestration rates were calculated for different age stages of abandoned cropland. While land-use type had an effect on carbon accumulation in the topsoil (0-5 cm), no independent land-use effects were found for deeper SOC stocks. Topsoil carbon stocks of grasslands and forests were significantly higher than those of soils managed for crops and under abandoned cropland. SOC increased significantly with time since abandonment. The average carbon sequestration rate for soils of abandoned cropland was 0.66 Mg C ha⁻¹ yr⁻¹ (1-20 years old, 0-5 cm soil depth), which is at the lower end of published estimates for Russia and Siberia. There was a tendency towards SOC saturation on abandoned land, as sequestration rates were much higher for recently abandoned (1-10 years old, 1.04 Mg C ha⁻¹ yr⁻¹) than for earlier abandoned crop fields (11-20 years old, 0.26 Mg C ha⁻¹ yr⁻¹). Our study confirms the global significance of abandoned cropland in Russia for carbon sequestration. Our findings also suggest that robust regional surveys based on a large number of samples advance model-based continent-wide SOC prediction.
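
    The per-period rates quoted above follow from simple stock differences; a sketch with illustrative stock values chosen only to reproduce the reported rates (not the study's actual data):

```python
# Sequestration rate as a stock difference over time:
# rate = (SOC_end - SOC_start) / years.
def seq_rate(soc_end, soc_start, years):
    """Rate in Mg C per ha per yr over the period (values illustrative)."""
    return (soc_end - soc_start) / years

print(seq_rate(24.4, 14.0, 10))  # recently abandoned (1-10 yr): 1.04
print(seq_rate(27.0, 24.4, 10))  # earlier abandoned (11-20 yr): 0.26
```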

  18. Development of tsunami fragility evaluation methods by large scale experiments. Part 2. Validation of the applicability of evaluation methods of impact force due to tsunami floating debris

    International Nuclear Information System (INIS)

    Takabatake, Daisuke; Kihara, Naoto; Kaida, Hideki; Miyagawa, Yoshinori; Ikeno, Masaaki; Shibayama, Atsushi

    2015-01-01

    In order to examine the applicability of the existing equations for estimating the impact force due to tsunami floating debris, collision tests were carried out. In the experiments, logs and a full-scale light car were used. In this report, two types of existing equations are focused on: one based on the Young's modulus of the debris (Eq. A) and one based on the stiffness of the debris (Eq. B). The impact forces estimated using Eq. A, with the log's Young's modulus obtained from a material test, agree with the measured forces from the collision tests. But Eq. A is not applicable to a car, because it is not easy to determine the Young's modulus of a car. On the other hand, the impact forces estimated using Eq. B, with the car's stiffness obtained from a static loading test, agree with the measured forces from the collision tests. This indicates that Eq. B enables us to estimate the impact force of floating debris such as a car, provided the stiffness of the debris is determined. (author)
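
    The report does not reproduce Eq. B itself; a widely used stiffness-based form of the same idea (e.g., Haehnel and Daly) treats the collision as a one-degree-of-freedom impact. The numbers below are assumptions for illustration:

```python
import math

# Stiffness-based debris impact estimate: a mass m hitting at speed u
# against contact stiffness k gives a peak force F_max = u * sqrt(k * m).
def impact_force(u, k, m):
    """Peak force [N] for velocity u [m/s], stiffness k [N/m], mass m [kg]."""
    return u * math.sqrt(k * m)

# Illustrative numbers (assumed): a 1000 kg car at 5 m/s, k = 1e6 N/m.
print(impact_force(u=5.0, k=1.0e6, m=1000.0) / 1e3, "kN")  # ~158 kN
```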

  19. Endogenous Cushing’s Syndrome with Precocious Puberty in an 8-Year-Old Boy due to a Large Unilateral Adrenal Adenoma

    Directory of Open Access Journals (Sweden)

    Muhammad Rajib Hossain

    2013-01-01

    Adrenocortical tumors (ACTs) causing Cushing's syndrome are extremely rare in children and adolescents. Bilateral macronodular adrenocortical disease, which is a component of the McCune-Albright syndrome, is the most common cause of endogenous Cushing's syndrome. We report the case of a boy with Cushing's syndrome who presented with obesity and growth retardation. The child was hypertensive. The biochemical evaluation revealed a serum cortisol level of 25.80 µg/dL, with a concomitant plasma ACTH level of 10.0 pg/mL, and serum cortisol that was not suppressed (20.38 µg/dL) on a high-dose dexamethasone suppression test (HDDST). Computed tomography of the abdomen demonstrated an 8 × 6 × 5 cm left adrenal mass with internal calcifications. Following preoperative stabilization, laparotomy was carried out, which revealed a lobulated left adrenal mass with intact capsule weighing 120 grams. Histopathological examination revealed a benign cortical neoplastic lesion, suggestive of adrenal adenoma, composed of large polygonal cells with centrally placed nuclei and prominent nucleoli, without capsular or vascular invasion. On the seventh postoperative day, cortisol levels were within the normal range, indicating biochemical remission of Cushing's syndrome. At follow-up after three months, the patient showed significant clinical improvement, had lost a moderate amount of weight, and adrenal imaging was normal.

  20. MELAS and Kearns–Sayre overlap syndrome due to the mtDNA m. A3243G mutation and large-scale mtDNA deletions

    Directory of Open Access Journals (Sweden)

    Nian Yu

    2016-09-01

    This paper reports an unusual manifestation in a 19-year-old Chinese male patient who presented with a complex phenotype of mitochondrial encephalomyopathy, lactic acidosis and stroke-like episodes (MELAS) syndrome and Kearns–Sayre syndrome (KSS). He was admitted to our hospital with the chief complaint of "acute fever, headache and slow reaction for 21 days" and was initially misdiagnosed with "viral encephalitis". This man, with a significant past medical history of fatigue intolerance, had presented paroxysmal neurobehavioral attacks starting about 10 years earlier. During this span, 3 or 4 attack clusters were described, during which several attacks occurred over a few days. Further examination found that the hallmark signs in this patient included progressive myoclonus epilepsy, cerebellar ataxia, hearing loss, myopathic weakness, ophthalmoparesis, pigmentary retinopathy and bifascicular heart block (Wolff–Parkinson–White syndrome). By young adulthood the disease progression was characterized by the addition of migraine, vomiting, and stroke-like episodes, symptoms of MELAS expression, which completed the MELAS/KSS overlap syndrome. The m.A3243G mitochondrial DNA mutation and single large-scale mtDNA deletions were found in this patient. This mutation has been reported in MELAS, KSS, myopathy, deafness and mental disorder with cognitive impairment. This is the first description of a MELAS/KSS overlap syndrome in a Chinese patient.

  1. Mitigation of Ground Vibration due to Collapse of a Large-Scale Cooling Tower with Novel Application of Materials as Cushions

    Directory of Open Access Journals (Sweden)

    Feng Lin

    2017-01-01

    Ground vibration induced by the collapse of large-scale cooling towers in nuclear power plants (NPPs) has recently been recognized as a potential secondary disaster for adjacent nuclear-related facilities, with demands for vibration mitigation. The previous concept of designing cooling towers and nuclear-related facilities operating in a containment as isolated components of NPPs is inappropriate on a limited site, which is the case for inland NPPs in China. This paper presents a numerical study on the mitigation of ground vibration in a "cooling tower-soil-containment" system via a novel application of two materials acting as cushions underneath cooling towers, namely foamed concrete and a "tube assembly." Comprehensive "cooling tower-cushion-soil" models were built with reasonable cushion material models, and computational cases were run for seven earthquake waves to demonstrate the vibration mitigation effect. The results show that collapse-induced ground vibrations at a point 300 m away were reduced on average by 91%, 79%, and 92% in the radial, tangential, and vertical directions when foamed concrete was used, and by 53%, 32%, and 59%, respectively, when the "tube assembly" was applied. Remarkable vibration mitigation was therefore achieved in both cases, enhancing the resilience of the "cooling tower-soil-containment" system against this secondary disaster.

  2. Rapidly progressive renal disease as part of Wolfram syndrome in a large inbred Turkish family due to a novel WFS1 mutation (p.Leu511Pro).

    Science.gov (United States)

    Yuca, Sevil Ari; Rendtorff, Nanna Dahl; Boulahbel, Houda; Lodahl, Marianne; Tranebjærg, Lisbeth; Cesur, Yasar; Dogan, Murat; Yilmaz, Cahide; Akgun, Cihangir; Acikgoz, Mehmet

    2012-01-01

    Wolfram syndrome, also named "DIDMOAD" (diabetes insipidus, diabetes mellitus, optic atrophy, and deafness), is an inherited association of juvenile-onset diabetes mellitus and optic atrophy as key diagnostic criteria. Renal tract abnormalities and neurodegenerative disorder may occur in the third and fourth decades. The wolframin gene, WFS1, associated with this syndrome, is located on chromosome 4p16.1. Many mutations have been described since the identification of WFS1 as the cause of Wolfram syndrome. We identified a new homozygous WFS1 mutation (c.1532T>C; p.Leu511Pro) causing Wolfram syndrome in a large inbred Turkish family. The patients showed early onset of IDDM, diabetes insipidus, optic atrophy and sensorineural hearing impairment, with very rapid progression to renal failure before age 12 in three females. Ectopic expression of the wolframin mutant in HEK cells results in greatly reduced levels of protein expression compared to wild-type wolframin, strongly supporting that this mutation is disease-causing. The mutation showed perfect segregation with disease in the family, characterized by early and severe clinical manifestations.

  3. Uncertainty analysis of LBLOCA for Advanced Heavy Water Reactor

    International Nuclear Information System (INIS)

    Srivastava, A.; Lele, H.G.; Ghosh, A.K.; Kushwaha, H.S.

    2008-01-01

    The main objective of safety analysis is to demonstrate in a robust way that all safety requirements are met, i.e., that sufficient margins exist between the real values of important parameters and the threshold values at which damage to the barriers against the release of radioactivity would occur. As stated in the IAEA Safety Requirements for the Design of NPPs, 'a safety analysis of the plant design shall be conducted in which methods of both deterministic and probabilistic analysis shall be applied'. It is required that 'the computer programs, analytical methods and plant models used in the safety analysis shall be verified and validated, and adequate consideration shall be given to uncertainties'. Uncertainties are present in calculations due to the computer codes, initial and boundary conditions, plant state, fuel parameters, scaling and the numerical solution algorithm. Conservative approaches, still widely used, were introduced to cover uncertainties due to the limited capability for modelling and understanding of physical phenomena at the early stages of safety analysis. The results obtained by this approach are quite unrealistic and the level of conservatism is not fully known. Another approach is the use of best estimate (BE) codes with realistic initial and boundary conditions; if this approach is selected, it should be based on statistically combined uncertainties for plant initial and boundary conditions, assumptions and code models. The current trend is toward best estimate codes with some conservative assumptions of the system and realistic input data accompanied by uncertainty analysis. BE analysis with evaluation of uncertainties offers, in addition, a way to quantify the existing plant safety margins. Its broader use in the future is therefore envisaged, even though it is not always feasible because of the difficulty of quantifying code uncertainties within sufficiently narrow ranges for every phenomenon and for each accident sequence. In this paper
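
    Statistically combined uncertainties of this kind are often handled with non-parametric tolerance limits; a hedged sketch of the standard order-statistics (Wilks) rule, widely used in best-estimate-plus-uncertainty analyses though not necessarily the method of this paper:

```python
import math

# Wilks' rule: the largest of n independent code runs bounds the
# p-quantile of the output with confidence c when 1 - p**n >= c
# (one-sided, first order).
def wilks_n(p=0.95, c=0.95):
    """Smallest n with 1 - p**n >= c."""
    return math.ceil(math.log(1.0 - c) / math.log(p))

print(wilks_n())  # 59 runs for a one-sided 95%/95% tolerance limit
```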

  4. Uncertainty quantification theory, implementation, and applications

    CERN Document Server

    Smith, Ralph C

    2014-01-01

    The field of uncertainty quantification is evolving rapidly because of increasing emphasis on models that require quantified uncertainties for large-scale applications, novel algorithm development, and new computational architectures that facilitate implementation of these algorithms. Uncertainty Quantification: Theory, Implementation, and Applications provides readers with the basic concepts, theory, and algorithms necessary to quantify input and response uncertainties for simulation models arising in a broad range of disciplines. The book begins with a detailed discussion of applications where uncertainty quantification is critical for both scientific understanding and policy. It then covers concepts from probability and statistics, parameter selection techniques, frequentist and Bayesian model calibration, propagation of uncertainties, quantification of model discrepancy, surrogate model construction, and local and global sensitivity analysis. The author maintains a complementary web page where readers ca...

  5. Report on the uncertainty methods study

    International Nuclear Information System (INIS)

    1998-06-01

    The Uncertainty Methods Study (UMS) Group, following a mandate from CSNI, has compared five methods for calculating the uncertainty in the predictions of advanced 'best estimate' thermal-hydraulic codes: the Pisa method (based on extrapolation from integral experiments) and four methods identifying and combining input uncertainties. Three of these, the GRS, IPSN and ENUSA methods, use subjective probability distributions, and one, the AEAT method, performs a bounding analysis. Each method has been used to calculate the uncertainty in specified parameters for the LSTF SB-CL-18 5% cold leg small break LOCA experiment in the ROSA-IV Large Scale Test Facility (LSTF). The uncertainty analysis was conducted essentially blind and the participants did not use experimental measurements from the test as input apart from initial and boundary conditions. Participants calculated uncertainty ranges for experimental parameters including pressurizer pressure, primary circuit inventory and clad temperature (at a specified position) as functions of time

  6. Reusable launch vehicle model uncertainties impact analysis

    Science.gov (United States)

    Chen, Jiaye; Mu, Rongjun; Zhang, Xin; Deng, Yanpeng

    2018-03-01

    A reusable launch vehicle (RLV) typically has a complex aerodynamic shape coupled with its propulsion system, and its flight environment is highly complicated and intensely changeable. Its model therefore has large uncertainty, which makes the nominal system quite different from the real system. Studying the influence of these uncertainties on the stability of the control system is thus of great significance for controller design. In order to improve the performance of the RLV, this paper proposes an approach for analyzing the influence of the model uncertainties. The coupled dynamic and kinematic models are built for a typical RLV. The different factors that cause uncertainties during model building are then analyzed and summarized. After that, the model uncertainties are expressed according to the additive uncertainty model, choosing the maximum singular value of the uncertainty matrix as the boundary model and using the norm of the uncertainty matrix to show how much influence the uncertainty factors have on the stability of the control system. The simulation results illustrate that the inertial factors have the largest influence on the stability of the system, and that it is necessary and important to take the model uncertainties into consideration before designing the controller of this kind of aircraft (like the RLV).
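
    A sketch of the additive uncertainty representation described above; the matrices are illustrative placeholders of ours, not the RLV model:

```python
import numpy as np

# Additive uncertainty: G_real = G_nom + Delta, with Delta bounded by
# its maximum singular value.
G_nom = np.array([[1.0, 0.2],
                  [0.0, 0.8]])
G_real = np.array([[1.1, 0.25],
                   [0.05, 0.7]])

Delta = G_real - G_nom
bound = np.linalg.svd(Delta, compute_uv=False).max()  # max singular value
print("additive uncertainty bound:", bound)
```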

  7. Uncertainty analysis guide

    International Nuclear Information System (INIS)

    Andres, T.H.

    2002-05-01

    This guide applies to the estimation of uncertainty in quantities calculated by scientific, analysis and design computer programs that fall within the scope of AECL's software quality assurance (SQA) manual. The guide weaves together rational approaches from the SQA manual and three other diverse sources: (a) the CSAU (Code Scaling, Applicability, and Uncertainty) evaluation methodology; (b) the ISO Guide for the Expression of Uncertainty in Measurement; and (c) the SVA (Systems Variability Analysis) method of risk analysis. This report describes the manner by which random and systematic uncertainties in calculated quantities can be estimated and expressed. Random uncertainty in model output can be attributed to uncertainties of inputs. The propagation of these uncertainties through a computer model can be represented in a variety of ways, including exact calculations, series approximations and Monte Carlo methods. Systematic uncertainties emerge from the development of the computer model itself, through simplifications and conservatisms, for example. These must be estimated and combined with random uncertainties to determine the combined uncertainty in a model output. This report also addresses the method by which uncertainties should be employed in code validation, in order to determine whether experiments and simulations agree, and whether or not a code satisfies the required tolerance for its application. (author)

  9. Uncertainty and Cognitive Control

    Directory of Open Access Journals (Sweden)

    Faisal Mushtaq

    2011-10-01

    A growing trend of neuroimaging, behavioural and computational research has investigated the topic of outcome uncertainty in decision-making. Although evidence to date indicates that humans are very effective in learning to adapt to uncertain situations, the nature of the specific cognitive processes involved in the adaptation to uncertainty is still a matter of debate. In this article, we review evidence suggesting that cognitive control processes are at the heart of uncertainty in decision-making contexts. Available evidence suggests that: (1) there is a strong conceptual overlap between the constructs of uncertainty and cognitive control; (2) there is a remarkable overlap between the neural networks associated with uncertainty and the brain networks subserving cognitive control; (3) the perception and estimation of uncertainty might play a key role in monitoring processes and the evaluation of the need for control; (4) potential interactions between uncertainty and cognitive control might play a significant role in several affective disorders.

  10. Geological-structural models used in SR 97. Uncertainty analysis

    Energy Technology Data Exchange (ETDEWEB)

    Saksa, P.; Nummela, J. [FINTACT Oy (Finland)

    1998-10-01

    The uncertainty of geological-structural models was studied for the three sites in SR 97, called Aberg, Beberg and Ceberg. The evaluation covered both regional and site-scale models, the emphasis being placed on fracture zones at the site scale. Uncertainty is a natural feature of all geoscientific investigations. It originates from measurements (errors in data, sampling limitations, scale variation) and from conceptualisation (structural geometries and properties, ambiguous geometric or parametric solutions), to name the major sources. The structures of A-, B- and Ceberg are fracture zones of varying types. No major differences in conceptualisation between the sites were noted. One source of uncertainty in the site models is the absence of fracture and zone information at scales from 10 to 300-1000 m. At Aberg the development of the regional model has been performed very thoroughly. At the site scale, one major source of uncertainty is that a clear definition of the target area is missing. Structures encountered in the boreholes are well explained, and an interdisciplinary approach has been taken in the interpretation. The Beberg and Ceberg regional models contain relatively large uncertainties due to the investigation methodology and the experience available at that time. At the site scale, six additional structures were proposed for both Beberg and Ceberg for variant analyses of these sites. Both sites include uncertainty in the form of many non-interpreted fractured sections along the boreholes. Statistical analysis gives high occurrences of structures for all three sites: typically 20-30 structures/km³. Aberg has the highest structural frequency, Beberg comes next and Ceberg has the lowest. The borehole configuration, orientations and surveying goals were inspected to find whether preferences or factors causing bias were present. Data from Aberg support the conclusion that the Aespoe sub-volume would be an anomalously fractured, tectonised unit of its own. This means that

  12. Assurance of risk assessment and protection against distant transportation and fallout of pollutants under large anthropogenic accidents at nuclear power stations, due to mountainous regional peculiarities

    International Nuclear Information System (INIS)

    Tsitskishvili, M.; Tsitskishvili, N.; Kordzakhia, G.; Valiaev, A.; Kazakov, S.; Aitmatov, I.; Petrov, V.

    2005-01-01

    Full text: All types of industrial activities require protection norms and assessment of the corresponding risks in order to prevent pollution and degradation of the affected areas. To make the sustainable development of a country possible, risk assessment of possible accidents at big enterprises is required; it provides the country's preparedness and enables prevention measures and mitigation of accidents. In large anthropogenic accidents in mountainous countries, the main paths for the transport of pollution are the rivers and sea basins, and because these areas are densely populated, assessment of the pollution risks is very important. The problem of forecasting the distant atmospheric transport of toxic products, and the corresponding risk assessment under anthropogenic damage, is multi-component and depends on meteorological conditions and on the boundary layer of the atmosphere. In general, for a real relief and the basic fields, the problem has not yet been solved, especially taking into consideration the large magnitude and short timescales of natural and anthropogenic accidents in mountainous regions. Usually, geostrophic drawing for a given relief is used. Integro-differential equations taking into consideration the physical-chemical characteristics of the pollutants, their transformations, fallout, coagulation, washing out and self-rectification cannot, in general, be solved. Recently, essential progress in the formalization of the above-mentioned equations, i.e., the introduction of some simplifications, has made it possible to establish the necessary modelling on the basis of numerical calculations. In the most general case, a forecasting model is essentially limited by the bulky size of the computational schemes and by the need for powerful, high-speed computers. The main way to achieve further progress is connected with so-called 'seasonal typification', with an a priori calculation of the probabilistic picture of the pollutant concentration fields, as well as

  13. Changes in groundwater reserves and radiocarbon and chloride content due to a wet period intercalated in an arid climate sequence in a large unconfined aquifer

    Science.gov (United States)

    Custodio, E.; Jódar, J.; Herrera, C.; Custodio-Ayala, J.; Medina, A.

    2018-01-01

    The concentration of atmospheric tracers in groundwater samples collected from springs and deep wells is, in most cases, the result of a mixture of waters with a wide range of residence times in the ground. Such is the case of an unconfined aquifer recharged over all its surface area. Concentrations greatly differ from the homogeneous residence time case. Data interpretation relies on knowledge of the groundwater flow pattern. To study relatively large systems, the conservative ion chloride and the decaying radiocarbon (¹⁴C) are considered. Radiocarbon activity in groundwater, after correction to discount the non-biogenic contribution, is often taken as an indication of water age, while chloride can be used to quantify recharge. In both cases, the observed tracer content in groundwater is an average value over a wide range, which is related to the water renewal time in the ground. This is shown by considering an unconfined aquifer recharged over all its area under arid conditions, in which a period of greater recharge happened some millennia ago. The mathematical solution is given. As the solution cannot be made general, two scenarios are worked out to show and discuss the changes in water reserve and in chloride and radiocarbon concentration (apparent ages); these are loosely related to current conditions in Northern Chile. It is shown that tracer concentration and the estimated water age are not directly related to the time since recharge took place. The existence of a previous wetter-than-present period has an important and lasting effect on current aquifer water reserves and chloride concentration, although the effect on radiocarbon activity is less pronounced. Chloride concentrations are smaller than in current recharge, and apparent ¹⁴C ages do not coincide with the timing, duration and characteristics of the wet period, except in the case in which recharge before and after the wet period is negligible and dead aquifer reserves are non-significant. The use of
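
    Two textbook relations behind the tracers discussed above, as a sketch with illustrative numbers (not the paper's solution): chloride mass balance estimates recharge from precipitation and chloride contents, and the conventional radiocarbon age uses the Libby mean life of 8033 yr.

```python
import math

def recharge_cmb(P_mm_yr, cl_rain, cl_gw):
    """R = P * Cl_rain / Cl_gw (steady state, chloride only from rain)."""
    return P_mm_yr * cl_rain / cl_gw

def apparent_14c_age(a, a0=100.0):
    """Conventional age [yr] from 14C activity a (pMC) and initial a0."""
    return -8033.0 * math.log(a / a0)

print(recharge_cmb(P_mm_yr=100.0, cl_rain=1.0, cl_gw=50.0))  # 2 mm/yr
print(apparent_14c_age(30.0))  # ~9700 yr
```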

  14. Due diligence

    International Nuclear Information System (INIS)

    Sanghera, G.S.

    1999-01-01

    The Occupational Health and Safety (OHS) Act requires that every employer ensure the health and safety of workers in the workplace. Issues regarding workplace practices, and how they should reflect the standards of due diligence, were discussed. Due diligence was described as the need for employers to identify hazards in the workplace and to take active steps to protect workers from potentially dangerous incidents. The paper discussed various aspects of due diligence, including policy, training, procedures, measurement and enforcement. The consequences of contravening the OHS Act were also described

  15. Quantifying the uncertainty in heritability.

    Science.gov (United States)

    Furlotte, Nicholas A; Heckerman, David; Lippert, Christoph

    2014-05-01

    The use of mixed models to determine narrow-sense heritability and related quantities such as SNP heritability has received much recent attention. Less attention has been paid to the inherent variability in these estimates. One approach for quantifying variability in estimates of heritability is a frequentist approach, in which heritability is estimated using maximum likelihood and its variance is quantified through an asymptotic normal approximation. An alternative approach is to quantify the uncertainty in heritability through its Bayesian posterior distribution. In this paper, we develop the latter approach, make it computationally efficient and compare it to the frequentist approach. We show theoretically that, for a sufficiently large sample size and intermediate values of heritability, the two approaches provide similar results. Using the Atherosclerosis Risk in Communities cohort, we show empirically that the two approaches can give different results and that the variance/uncertainty can remain large.
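
    A minimal sketch of the Bayesian idea described above (not the paper's implementation): with standardized phenotypes y ~ N(0, h²K + (1 − h²)I), evaluate the likelihood of h² on a grid via the eigendecomposition of the kinship matrix K and normalize it into a posterior under a flat prior; the kinship matrix and parameter values below are simulated assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
G = rng.normal(size=(n, 500))
K = G @ G.T / 500.0                  # toy kinship matrix (assumed)
w, U = np.linalg.eigh(K)             # eigenvalues w, eigenvectors U

h2_true = 0.5
cov = h2_true * K + (1 - h2_true) * np.eye(n)
y = np.linalg.cholesky(cov) @ rng.normal(size=n)  # simulated phenotypes

yt = U.T @ y                         # rotated components are independent
grid = np.linspace(0.01, 0.99, 99)
loglik = np.array([-0.5 * np.sum(np.log(h * w + 1 - h)
                                 + yt**2 / (h * w + 1 - h))
                   for h in grid])
post = np.exp(loglik - loglik.max())
post /= post.sum()
print("posterior mean h2: %.2f" % np.sum(grid * post))
```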

  16. DS02 uncertainty analysis

    International Nuclear Information System (INIS)

    Kaul, Dean C.; Egbert, Stephen D.; Woolson, William A.

    2005-01-01

    In order to avoid the pitfalls that so discredited DS86 and its uncertainty estimates, and to provide DS02 uncertainties that are both defensible and credible, this report not only presents the ensemble uncertainties assembled from uncertainties in individual computational elements and radiation dose components but also describes how these relate to comparisons between observed and computed quantities at critical intervals in the computational process. These comparisons include those between observed and calculated radiation free-field components, where observations include thermal- and fast-neutron activation and gamma-ray thermoluminescence, which are relevant to the estimated systematic uncertainty for DS02. The comparisons also include those between calculated and observed survivor shielding, where the observations consist of biodosimetric measurements for individual survivors, which are relevant to the estimated random uncertainty for DS02. (J.P.N.)

  17. Application of extended statistical combination of uncertainties methodology for digital nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

    In, Wang Ki; Uh, Keun Sun; Chul, Kim Heui [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1995-02-01

    A technically more direct statistical combination of uncertainties methodology, the extended SCU (XSCU), was applied to statistically combine the uncertainties associated with the DNBR alarm setpoint and the DNBR trip setpoint of digital nuclear power plants. The modified SCU (MSCU) methodology is currently used as the USNRC-approved design methodology to perform the same function. In this report, the MSCU and XSCU methodologies were compared in terms of the total uncertainties and the net margins to the DNBR alarm and trip setpoints. The MSCU methodology resulted in small total penalties due to a significantly negative bias, which is quite large. The XSCU methodology, however, gave virtually unbiased total uncertainties. The net margins to the DNBR alarm and trip setpoints by the MSCU methodology agree with those by the XSCU methodology within statistical variations. (Author) 12 refs., 17 figs., 5 tabs.
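
    A generic illustration of how a statistical combination of uncertainties behaves (the terms, factors and numbers are assumptions for illustration, not the plant-specific or methodology-specific values):

```python
import numpy as np

# Random components combine in quadrature; a known bias adds linearly.
random_sigmas = np.array([0.02, 0.015, 0.01])  # relative sigmas (assumed)
bias = -0.01                                   # net known bias (assumed)
k95 = 1.645                                    # one-sided 95% factor

penalty = k95 * np.sqrt(np.sum(random_sigmas ** 2)) + bias
print("combined setpoint penalty: %.3f" % penalty)
# A significantly negative bias shrinks the total penalty, which is the
# behaviour the abstract attributes to the MSCU methodology.
```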

  18. Towards a different attitude to uncertainty

    Directory of Open Access Journals (Sweden)

    Guy Pe'er

    2014-10-01

    The ecological literature deals with uncertainty primarily from the perspective of how to reduce it to acceptable levels. However, the current rapid and ubiquitous environmental changes, as well as anticipated rates of change, pose novel conditions and complex dynamics due to which many sources of uncertainty are difficult or even impossible to reduce. These include both uncertainty in knowledge (epistemic uncertainty) and societal responses to it. Under these conditions, an increasing number of studies ask how one can deal with uncertainty as it is. Here, we explore how to adopt an overall alternative attitude to uncertainty, one which accepts or even embraces it. First, we show that seeking to reduce uncertainty may be counterproductive under some circumstances. It may yield overconfidence, ignoring of early warning signs, policy and societal stagnation, or irresponsible behaviour if personal certainty is offered by externalization of environmental costs. We then demonstrate that uncertainty can have positive impacts by driving improvements in knowledge, promoting cautious action, contributing to keeping societies flexible and adaptable, enhancing awareness, support and involvement of the public in nature conservation, and enhancing cooperation and communication. We discuss the risks of employing a certainty paradigm on uncertain knowledge, the potential benefits of adopting an alternative attitude to uncertainty, and the need to implement such an attitude across scales – from adaptive management at the local scale to the evolving Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services (IPBES) at the global level.

  19. Model uncertainty and probability

    International Nuclear Information System (INIS)

    Parry, G.W.

    1994-01-01

    This paper discusses the issue of model uncertainty. The use of probability as a measure of an analyst's uncertainty as well as a means of describing random processes has caused some confusion, even though the two uses are representing different types of uncertainty with respect to modeling a system. The importance of maintaining the distinction between the two types is illustrated with a simple example

  20. Robust nonlinear control of nuclear reactors under model uncertainty

    International Nuclear Information System (INIS)

    Park, Moon Ghu

    1993-02-01

    A nonlinear model-based control method is developed for the robust control of a nuclear reactor. The nonlinear plant model is used to design a unique control law that covers a wide operating range. Robustness is a crucial factor for the fully automatic control of reactor power because of time-varying, uncertain parameters, state estimation error, and unmodeled dynamics. A variable structure control (VSC) method is introduced which consists of an adaptive performance specification (fine control) applied after the tracking error reaches a narrow boundary layer via time-optimal control (coarse control). Variable structure control is a powerful method for nonlinear system controller design; it has inherent robustness to parameter variations and external disturbances using known uncertainty bounds, and it requires very low computational effort. In spite of its desirable properties, conventional VSC presents several important drawbacks that limit its practical applicability. One of the most undesirable phenomena is chattering, which implies extremely high control activity and may excite high-frequency unmodeled dynamics. This problem is due to neglected actuator time-delay or sampling effects. The problem was partially remedied by replacing the chattering control with a smooth control interpolation in a boundary layer neighboring a time-varying sliding surface. However, for nuclear reactor systems, which have a very fast dynamic response, the sampling effect may destroy the narrow boundary layer when a large uncertainty bound is used. Due to the very short neutron lifetime, a large uncertainty bound leads to high gain in feedback control. To resolve this problem, a derivative feedback is introduced that gives excellent performance by reducing the uncertainty bound. The stability of the tracking error dynamics is guaranteed by the second method of Lyapunov, using the two-level uncertainty bounds that are obtained from the knowledge of the uncertainty bound and the estimated
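
    A sketch of the chattering remedy described above: the discontinuous sign(s) switching law is replaced by a saturation function inside a boundary layer of thickness phi around the sliding surface s = 0 (gain and thickness values are illustrative):

```python
import numpy as np

# Smoothed variable-structure control law: u = -K * sat(s / phi).
def vsc_control(s, K=2.0, phi=0.05):
    return -K * np.clip(s / phi, -1.0, 1.0)

s = np.linspace(-0.2, 0.2, 5)
print(vsc_control(s))  # saturates to +/-K outside |s| < phi
```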

  1. Uncertainty in artificial intelligence

    CERN Document Server

    Kanal, LN

    1986-01-01

    How to deal with uncertainty is a subject of much controversy in Artificial Intelligence. This volume brings together a wide range of perspectives on uncertainty, many of the contributors being the principal proponents in the controversy.Some of the notable issues which emerge from these papers revolve around an interval-based calculus of uncertainty, the Dempster-Shafer Theory, and probability as the best numeric model for uncertainty. There remain strong dissenting opinions not only about probability but even about the utility of any numeric method in this context.

  2. Uncertainties in hydrogen combustion

    International Nuclear Information System (INIS)

    Stamps, D.W.; Wong, C.C.; Nelson, L.S.

    1988-01-01

    Three important areas of hydrogen combustion with uncertainties are identified: high-temperature combustion, flame acceleration and deflagration-to-detonation transition, and aerosol resuspension during hydrogen combustion. The uncertainties associated with high-temperature combustion may affect at least three different accident scenarios: the in-cavity oxidation of combustible gases produced by core-concrete interactions, the direct containment heating hydrogen problem, and the possibility of local detonations. How these uncertainties may affect the sequence of various accident scenarios is discussed and recommendations are made to reduce these uncertainties. 40 references

  3. Modeling of uncertainties in biochemical reactions.

    Science.gov (United States)

    Mišković, Ljubiša; Hatzimanikatis, Vassily

    2011-02-01

    Mathematical modeling is an indispensable tool for research and development in biotechnology and bioengineering. The formulation of kinetic models of biochemical networks depends on knowledge of the kinetic properties of the enzymes of the individual reactions. However, kinetic data acquired from experimental observations bring along uncertainties due to various experimental conditions and measurement methods. In this contribution, we propose a novel way to model the uncertainty in the enzyme kinetics and to predict quantitatively the responses of metabolic reactions to the changes in enzyme activities under uncertainty. The proposed methodology accounts explicitly for mechanistic properties of enzymes and physico-chemical and thermodynamic constraints, and is based on formalism from systems theory and metabolic control analysis. We achieve this by observing that kinetic responses of metabolic reactions depend: (i) on the distribution of the enzymes among their free form and all reactive states; (ii) on the equilibrium displacements of the overall reaction and that of the individual enzymatic steps; and (iii) on the net fluxes through the enzyme. Relying on this observation, we develop a novel, efficient Monte Carlo sampling procedure to generate all states within a metabolic reaction that satisfy imposed constrains. Thus, we derive the statistics of the expected responses of the metabolic reactions to changes in enzyme levels and activities, in the levels of metabolites, and in the values of the kinetic parameters. We present aspects of the proposed framework through an example of the fundamental three-step reversible enzymatic reaction mechanism. We demonstrate that the equilibrium displacements of the individual enzymatic steps have an important influence on kinetic responses of the enzyme. Furthermore, we derive the conditions that must be satisfied by a reversible three-step enzymatic reaction operating far away from the equilibrium in order to respond to
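
    An illustrative Monte Carlo step in the spirit of the framework described (details assumed, not the authors' procedure): the fractions of an enzyme distributed over its free form and reactive states sum to one, so candidate states can be drawn uniformly from the simplex (a symmetric Dirichlet draw) and then filtered by any additional thermodynamic constraints.

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_enzyme_states(n_states, n_samples):
    # Dirichlet(1,...,1) is the uniform distribution on the simplex.
    return rng.dirichlet(np.ones(n_states), size=n_samples)

samples = sample_enzyme_states(n_states=4, n_samples=5)  # e.g. E, ES, EP, EI
print(samples)               # each row is non-negative and sums to 1
print(samples.sum(axis=1))
```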

  4. Critical mid-term uncertainties in long-term decarbonisation pathways

    International Nuclear Information System (INIS)

    Usher, Will; Strachan, Neil

    2012-01-01

    Over the next decade, large energy investments are required in the UK to meet growing energy service demands and legally binding emission targets under a pioneering policy agenda. These are necessary despite deep mid-term (2025-2030) uncertainties over which national policy makers have little control. We investigate the effect of two critical mid-term uncertainties on optimal near-term investment decisions using a two-stage stochastic energy system model. The results show that where future fossil fuel prices are uncertain: (i) the near-term hedging strategy to 2030 differs from that of any one deterministic fuel price scenario and is structurally dissimilar to a simple 'average' of the deterministic scenarios, and (ii) multiple recourse strategies from 2030 are perturbed by path dependencies caused by hedging investments. Evaluating the uncertainty under a decarbonisation agenda shows that fossil fuel price uncertainty is very expensive, at around £20 billion. The addition of novel mitigation options reduces the value of fossil fuel price uncertainty to £11 billion. Uncertain biomass import availability shows a much lower value of uncertainty, at £300 million. This paper reveals the complex relationship between the flexibility of the energy system and the mitigation of the costs of uncertainty, due to the path dependencies caused by the long lifetimes of both infrastructures and generation technologies. - Highlights: ► Critical mid-term uncertainties affect near-term investments in the UK energy system. ► Deterministic scenarios give conflicting near-term actions. ► Stochastic scenarios give one near-term hedging strategy. ► Technologies exhibit path dependency or flexibility. ► Fossil fuel price uncertainty is very expensive; biomass availability uncertainty is not.
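
    A toy two-stage structure of the kind described (illustrative only, not the UK model): a hedging investment x in low-carbon capacity is chosen before the fuel price is known, with fuel purchases as the recourse for unmet demand.

```python
import numpy as np

scenarios = [(0.5, 40.0), (0.5, 120.0)]  # (probability, fuel price), assumed
demand = 1.0

def expected_cost(x):
    invest = 60.0 * x ** 2  # convex build cost (assumed)
    fuel = sum(p * price * max(demand - x, 0.0) for p, price in scenarios)
    return invest + fuel

xs = np.linspace(0.0, 1.0, 101)
x_hedge = min(xs, key=expected_cost)
print("hedging investment:", x_hedge)
# ~0.67 here: an interior hedge between the deterministic optima of
# 1/3 (low-price scenario) and 1.0 (high-price scenario).
```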

  5. Climate change impact assessment and adaptation under uncertainty

    NARCIS (Netherlands)

    Wardekker, J.A.

    2011-01-01

    Expected impacts of climate change are associated with large uncertainties, particularly at the local level. Adaptation scientists, practitioners, and decision-makers will need to find ways to cope with these uncertainties. Several approaches have been suggested as ‘uncertainty-proof’ to some

  6. Uncertainty propagation in urban hydrology water quality modelling

    NARCIS (Netherlands)

    Torres Matallana, Arturo; Leopold, U.; Heuvelink, G.B.M.

    2016-01-01

    Uncertainty is often ignored in urban hydrology modelling. Engineering practice typically ignores uncertainties and uncertainty propagation. This can have large impacts, such as the wrong dimensioning of urban drainage systems and the inaccurate estimation of pollution in the environment caused

  7. Roughness coefficient and its uncertainty in gravel-bed river

    Directory of Open Access Journals (Sweden)

    Ji-Sung Kim

    2010-06-01

    Manning's roughness coefficient was estimated for a gravel-bed river reach using field measurements of water level and discharge, and the applicability of various methods used for estimation of the roughness coefficient was evaluated. Results show that the roughness coefficient tends to decrease with increasing discharge and water depth, and over a certain range it appears to remain constant. Comparison of roughness coefficients calculated from field measurement data with those estimated by other methods shows that, although the field-measured values provide approximate roughness coefficients for relatively large discharges, the spread among the resulting values indicates rather high uncertainty. For this reason, uncertainty related to the roughness coefficient was analyzed in terms of the change in computed variables. On average, a 20% increase of the roughness coefficient causes a 7% increase in the water depth and an 8% decrease in velocity, but there may be about a 15% increase in the water depth and an equivalent decrease in velocity for certain cross-sections in the study reach. Finally, the validity of the roughness coefficient estimated from field measurements was examined. A 10% error in discharge measurement may lead to more than 10% uncertainty in roughness coefficient estimation, but the corresponding uncertainty in computed water depth and velocity is reduced to approximately 5%. This confirms the necessity of estimating the roughness coefficient from field measurements.
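
    The propagation can be made concrete with a short Python check (invented reach values, SI units): Manning's relation n = A R^(2/3) S^(1/2) / Q maps a +10% discharge error into roughly -10% in n, while for a wide channel in uniform flow the computed depth scales as (nQ)^(3/5), so the induced depth error shrinks to about 5%, consistent with the figures above.

        import numpy as np

        # Hypothetical reach: area (m^2), hydraulic radius (m), slope, discharge (m^3/s)
        A, R, S, Q = 120.0, 2.1, 0.0008, 95.0
        n = A * R ** (2.0 / 3.0) * np.sqrt(S) / Q
        print(f"Manning n = {n:.4f}")

        # A +10% discharge-measurement error maps to about -10% in n, since n ~ 1/Q.
        n_err = A * R ** (2.0 / 3.0) * np.sqrt(S) / (1.10 * Q)
        print(f"n with +10% Q error: {n_err:.4f} ({100 * (n_err / n - 1):+.1f}%)")

        # For a wide channel in uniform flow, depth h ~ (n*Q)**(3/5), so the
        # induced depth error is (1/1.1)**0.6 - 1, i.e. about -5.6%.
        print(f"induced depth error: {100 * ((1 / 1.10) ** 0.6 - 1):+.1f}%")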

  8. Reducing uncertainty in geostatistical description with well testing pressure data

    Energy Technology Data Exchange (ETDEWEB)

    Reynolds, A.C.; He, Nanqun [Univ. of Tulsa, OK (United States); Oliver, D.S. [Chevron Petroleum Technology Company, La Habra, CA (United States)

    1997-08-01

    Geostatistics has proven to be an effective tool for generating realizations of reservoir properties conditioned to static data, e.g., core and log data and geologic knowledge. Due to the lack of closely spaced data in the lateral directions, there will be significant variability in reservoir descriptions generated by geostatistical simulation, i.e., significant uncertainty in the reservoir descriptions. In past work, we have presented procedures based on inverse problem theory for generating reservoir descriptions (rock property fields) conditioned to pressure data and geostatistical information represented as prior means for log-permeability and porosity and variograms. Although we have shown that the incorporation of pressure data reduces the uncertainty below the level contained in the geostatistical model based only on static information (the prior model), our previous results did not explicitly account for uncertainties in the prior means and the parameters defining the variogram model. In this work, we investigate how pressure data can help detect errors in the prior means. If errors in the prior means are large and are not taken into account, realizations conditioned to pressure data represent incorrect samples of the a posteriori probability density function for the rock property fields, whereas, if the uncertainty in the prior mean is incorporated properly into the model, one obtains realistic realizations of the rock property fields.

  9. Numerical Continuation Methods for Intrusive Uncertainty Quantification Studies

    Energy Technology Data Exchange (ETDEWEB)

    Safta, Cosmin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Najm, Habib N. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Phipps, Eric Todd [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2014-09-01

    Rigorous modeling of engineering systems relies on efficient propagation of uncertainty from input parameters to model outputs. In recent years, there has been substantial development of probabilistic polynomial chaos (PC) Uncertainty Quantification (UQ) methods, enabling studies in expensive computational models. One approach, termed "intrusive", involving reformulation of the governing equations, has been found to have superior computational performance compared to non-intrusive sampling-based methods in relevant large-scale problems, particularly in the context of emerging architectures. However, the utility of intrusive methods has been severely limited due to detrimental numerical instabilities associated with strong nonlinear physics. Previous methods for stabilizing these constructions tend to add unacceptably high computational costs, particularly in problems with many uncertain parameters. In order to address these challenges, we propose to adapt and improve numerical continuation methods for the robust time integration of intrusive PC system dynamics. We propose adaptive methods, starting with a small uncertainty for which the model has stable behavior and gradually moving to larger uncertainty where the instabilities are rampant, in a manner that provides a suitable solution.
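
    A stripped-down illustration of continuation in the uncertainty magnitude (a sketch under assumed toy dynamics, not the implementation described above): a Galerkin polynomial-chaos discretization of the steady state of du/dt = 1 - k(xi)*u^2, with k(xi) = exp(s*xi) and xi ~ N(0,1), is solved by Newton iteration while the uncertainty scale s is marched upward, each converged solution seeding the next solve.

        import numpy as np
        from numpy.polynomial.hermite_e import hermegauss, hermeval
        from scipy.optimize import fsolve

        P = 6                                   # polynomial-chaos order
        nodes, weights = hermegauss(40)         # probabilists' Hermite quadrature
        weights = weights / weights.sum()       # normalize to a Gaussian measure
        He = np.array([hermeval(nodes, np.eye(P + 1)[i]) for i in range(P + 1)])
        norms = (He ** 2) @ weights             # E[He_i^2] = i!

        def residual(u, s):
            u_q = u @ He                        # PC expansion evaluated at the nodes
            r_q = 1.0 - np.exp(s * nodes) * u_q ** 2   # steady-state residual
            return (He * weights) @ r_q / norms        # Galerkin projection per mode

        u = np.zeros(P + 1)
        u[0] = 1.0                              # deterministic solution at s = 0
        for s in np.linspace(0.1, 0.6, 6):      # march toward larger uncertainty
            u = fsolve(residual, u, args=(s,))  # previous solution seeds this solve
        std = np.sqrt(np.sum(u[1:] ** 2 * norms[1:]))
        print(f"at s = 0.6: mean u = {u[0]:.4f}, std u = {std:.4f}")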

  10. Uncertainty in social dilemmas

    NARCIS (Netherlands)

    Kwaadsteniet, Erik Willem de

    2007-01-01

    This dissertation focuses on social dilemmas, and more specifically, on environmental uncertainty in these dilemmas. Real-life social dilemma situations are often characterized by uncertainty. For example, fishermen mostly do not know the exact size of the fish population (i.e., resource size

  11. Uncertainty and Climate Change

    OpenAIRE

    Berliner, L. Mark

    2003-01-01

    Anthropogenic, or human-induced, climate change is a critical issue in science and in the affairs of humankind. Though the target of substantial research, the conclusions of climate change studies remain subject to numerous uncertainties. This article presents a very brief review of the basic arguments regarding anthropogenic climate change with particular emphasis on uncertainty.

  12. Deterministic uncertainty analysis

    International Nuclear Information System (INIS)

    Worley, B.A.

    1987-01-01

    Uncertainties of computer results are of primary interest in applications such as high-level waste (HLW) repository performance assessment in which experimental validation is not possible or practical. This work presents an alternate deterministic approach for calculating uncertainties that has the potential to significantly reduce the number of computer runs required for conventional statistical analysis. 7 refs., 1 fig

  13. Uncertainty and simulation

    International Nuclear Information System (INIS)

    Depres, B.; Dossantos-Uzarralde, P.

    2009-01-01

    More than 150 researchers and engineers from universities and the industrial world met to discuss the new methodologies developed for assessing uncertainty. About 20 papers were presented and the main topics were: methods to study the propagation of uncertainties, sensitivity analysis, nuclear data covariances and multi-parameter optimisation. This report gathers the contributions of CEA researchers and engineers

  14. Conditional uncertainty principle

    Science.gov (United States)

    Gour, Gilad; Grudka, Andrzej; Horodecki, Michał; Kłobus, Waldemar; Łodyga, Justyna; Narasimhachar, Varun

    2018-04-01

    We develop a general operational framework that formalizes the concept of conditional uncertainty in a measure-independent fashion. Our formalism is built upon a mathematical relation which we call conditional majorization. We define conditional majorization and, for the case of classical memory, we provide its thorough characterization in terms of monotones, i.e., functions that preserve the partial order under conditional majorization. We demonstrate the application of this framework by deriving two types of memory-assisted uncertainty relations, (1) a monotone-based conditional uncertainty relation and (2) a universal measure-independent conditional uncertainty relation, both of which set a lower bound on the minimal uncertainty that Bob has about Alice's pair of incompatible measurements, conditioned on an arbitrary measurement that Bob makes on his own system. We next compare the obtained relations with their existing entropic counterparts and find that they are at least independent.

  15. Measurement uncertainty and probability

    CERN Document Server

    Willink, Robin

    2013-01-01

    A measurement result is incomplete without a statement of its 'uncertainty' or 'margin of error'. But what does this statement actually tell us? By examining the practical meaning of probability, this book discusses what is meant by a '95 percent interval of measurement uncertainty', and how such an interval can be calculated. The book argues that the concept of an unknown 'target value' is essential if probability is to be used as a tool for evaluating measurement uncertainty. It uses statistical concepts, such as a conditional confidence interval, to present 'extended' classical methods for evaluating measurement uncertainty. The use of the Monte Carlo principle for the simulation of experiments is described. Useful for researchers and graduate students, the book also discusses other philosophies relating to the evaluation of measurement uncertainty. It employs clear notation and language to avoid the confusion that exists in this controversial field of science.
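
    In the spirit of the book's treatment of the Monte Carlo principle, a minimal Python simulation of an experiment (a hypothetical Ohm's-law measurement with assumed input distributions) shows how a 95 percent interval of measurement uncertainty can be read directly off the simulated output:

        import numpy as np

        rng = np.random.default_rng(0)
        N = 200_000

        # Hypothetical measurement model: resistance R = V / I
        V = rng.normal(5.000, 0.002, N)       # volts, standard uncertainty 2 mV
        I = rng.normal(0.0998, 0.0002, N)     # amperes, standard uncertainty 0.2 mA
        R = V / I

        lo, hi = np.quantile(R, [0.025, 0.975])
        print(f"R = {R.mean():.3f} ohm, u(R) = {R.std(ddof=1):.3f} ohm")
        print(f"95% interval of measurement uncertainty: [{lo:.3f}, {hi:.3f}] ohm")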

  16. Two-dimensional cross-section and SED uncertainty analysis for the Fusion Engineering Device (FED)

    International Nuclear Information System (INIS)

    Embrechts, M.J.; Urban, W.T.; Dudziak, D.J.

    1982-01-01

    The theory of two-dimensional cross-section and secondary-energy-distribution (SED) sensitivity was implemented by developing a two-dimensional sensitivity and uncertainty analysis code, SENSIT-2D. Analyses of the Fusion Engineering Device (FED) conceptual inboard shield indicate that, although the calculated uncertainties in the 2-D model are of the same order of magnitude as those resulting from the 1-D model, severe differences can arise. The more complex the geometry, the more compulsory a 2-D analysis becomes. Specific results show that the uncertainty for the integral heating of the toroidal field (TF) coil for the FED is 114.6%. The main contributors to the cross-section uncertainty are chromium and iron. Contributions to the total uncertainty were smaller for nickel, copper, hydrogen and carbon. All analyses were performed with the Los Alamos 42-group cross-section library generated from ENDF/B-V data, and the COVFILS covariance matrix library. The large uncertainties due to chromium result mainly from large covariances for the chromium total and elastic scattering cross sections

  17. Hydroclimatic risks and uncertainty in the global power sector

    Science.gov (United States)

    Gidden, Matthew; Byers, Edward; Greve, Peter; Kahil, Taher; Parkinson, Simon; Raptis, Catherine; Rogelj, Joeri; Satoh, Yusuke; van Vliet, Michelle; Wada, Yoshide; Krey, Volker; Langan, Simon; Riahi, Keywan

    2017-04-01

    Approximately 80% of the world's electricity supply depends on reliable water resources. Thermoelectric and hydropower plants have been impacted by low flows and floods in recent years, notably in the US, Brazil, France, and China, amongst other countries. The dependence on reliable flows implies a large vulnerability of the electricity supply system to hydrological variability and the impacts of climate change. Using an updated dataset of global electricity capacity together with global climate and hydrological data from the ISI-MIP project, we present an overview analysis of power sector vulnerability to hydroclimatic risks, including low river flows and peak flows. We show how electricity generation in individual countries and transboundary river basins can be impacted, helping decision-makers identify key at-risk geographical regions. Furthermore, our use of a multi-model ensemble of climate and hydrological models allows us to quantify the uncertainty of projected impacts, such that basin-level risks and uncertainty can be compared.

  18. Output gap uncertainty and real-time monetary policy

    Directory of Open Access Journals (Sweden)

    Francesco Grigoli

    2015-12-01

    Output gap estimates are subject to a wide range of uncertainty owing principally to the difficulty in distinguishing between cycle and trend in real time. We show that country desks tend to overestimate economic slack, especially during recessions, and that uncertainty in initial output gap estimates persists for several years. Only a small share of output gap revisions is predictable based on output dynamics, data quality, and policy frameworks. We also show that for a group of Latin American inflation targeters the prescriptions from monetary policy rules are subject to large changes due to revised output gap estimates. These revisions explain a sizable proportion of the deviation of inflation from target, suggesting this information is not accounted for in real-time policy decisions.

  19. Climate Certainties and Uncertainties

    International Nuclear Information System (INIS)

    Morel, Pierre

    2012-01-01

    In issue 380 of Futuribles in December 2011, Antonin Pottier analysed in detail the workings of what is today termed 'climate scepticism' - namely the propensity of certain individuals to contest the reality of climate change on the basis of pseudo-scientific arguments. He emphasized particularly that what fuels the debate on climate change is, largely, the degree of uncertainty inherent in the consequences to be anticipated from observation of the facts, not the description of the facts itself. In his view, the main aim of climate sceptics is to block the political measures for combating climate change. However, since they do not admit to this political posture, they choose instead to deny the scientific reality. This month, Futuribles complements this socio-psychological analysis of climate-sceptical discourse with an - in this case, wholly scientific - analysis of what we know (or do not know) about climate change on our planet. Pierre Morel gives a detailed account of the state of our knowledge in the climate field and what we are able to predict in the medium/long-term. After reminding us of the influence of atmospheric meteorological processes on the climate, he specifies the extent of global warming observed since 1850 and the main origin of that warming, as revealed by the current state of knowledge: the increase in the concentration of greenhouse gases. He then describes the changes in meteorological regimes (showing also the limits of climate simulation models), the modifications of hydrological regimes, and also the prospects for rises in sea levels. He also specifies the mechanisms that may potentially amplify all these phenomena and the climate disasters that might ensue. Lastly, he shows what are the scientific data that cannot be disregarded, the consequences of which are now inescapable (melting of the ice-caps, rises in sea level etc.), the only remaining uncertainty in this connection being the date at which these things will happen. 'In this

  20. Uncertainty propagation in probabilistic risk assessment: A comparative study

    International Nuclear Information System (INIS)

    Ahmed, S.; Metcalf, D.R.; Pegram, J.W.

    1982-01-01

    Three uncertainty propagation techniques, namely the method of moments, discrete probability distribution (DPD), and Monte Carlo simulation, generally used in probabilistic risk assessment, are compared and conclusions drawn in terms of the accuracy of the results. For small uncertainty in the basic event unavailabilities, the three methods give similar results. For large uncertainty, the method of moments is in error, and the appropriate approach is to propagate the uncertainty in discrete form, either by the DPD method without sampling or by Monte Carlo simulation. (orig.)
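
    The effect is easy to reproduce in a small Python experiment (all numbers invented): for a two-event AND gate with lognormal unavailabilities and a large error factor, the first two moments of the top event propagate exactly, yet a normal approximation built from those moments badly misplaces the upper percentile that direct sampling of the distributions recovers.

        import numpy as np

        rng = np.random.default_rng(7)
        median, EF = 1e-3, 10.0                  # error factor 10: "large" uncertainty
        sigma = np.log(EF) / 1.645               # lognormal shape from the 95th pctile
        q = rng.lognormal(np.log(median), sigma, size=(2, 1_000_000))
        Q_mc = q.prod(axis=0)                    # AND gate: Q = q1 * q2

        # Method of moments: exact mean/variance, then a normal approximation.
        m = median * np.exp(sigma**2 / 2)        # E[q]
        v = m**2 * (np.exp(sigma**2) - 1)        # Var[q]
        EQ = m**2                                # E[Q] for independent events
        VQ = (v + m**2) ** 2 - m**4              # Var[Q] for independent events
        q95_normal = EQ + 1.645 * np.sqrt(VQ)

        print(f"mean Q: Monte Carlo {Q_mc.mean():.2e} vs moments {EQ:.2e}")
        print(f"95th percentile: Monte Carlo {np.quantile(Q_mc, 0.95):.2e} "
              f"vs normal-from-moments {q95_normal:.2e}")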

  1. Computational chemical product design problems under property uncertainties

    DEFF Research Database (Denmark)

    Frutiger, Jerome; Cignitti, Stefano; Abildskov, Jens

    2017-01-01

    Three different strategies of how to combine computational chemical product design with Monte Carlo based methods for uncertainty analysis of chemical properties are outlined. One method consists of a computer-aided molecular design (CAMD) solution and a post-processing property uncertainty...... fluid design. While the higher end of the uncertainty range of the process model output is similar for the best performing fluids, the lower end of the uncertainty range differs largely....

  2. Uncertainty analysis for secondary energy distributions

    International Nuclear Information System (INIS)

    Gerstl, S.A.W.

    1978-01-01

    In many transport calculations the integral design parameter of interest (response) is determined mainly by secondary particles such as gamma rays from (n,γ) reactions or secondary neutrons from inelastic scattering events or (n,2n) reactions. Standard sensitivity analysis usually allows one to calculate the sensitivities to the production cross sections of such secondaries, but an extended formalism is needed to also obtain the sensitivities to the energy distribution of the generated secondary particles. For a 30-group standard cross-section set, 84% of all non-zero table positions pertain to the description of secondary energy distributions (SED's) and only 16% to the actual reaction cross sections. Therefore, any sensitivity/uncertainty analysis which does not consider the effects of SED's is incomplete and neglects most of the input data. This paper describes how sensitivity profiles for SED's are obtained and used to estimate the uncertainty of an integral response due to uncertainties in these SED's. The detailed theory is documented elsewhere and implemented in the LASL sensitivity code SENSIT. SED sensitivity profiles have proven particularly valuable in cross-section uncertainty analyses for fusion reactors. Even when the production cross sections for secondary neutrons were assumed to be without error, the uncertainties in the energy distribution of these secondaries produced appreciable uncertainties in the calculated tritium breeding rate. However, complete error files for SED's are presently nonexistent. Therefore, methods will be described that allow rough error estimates due to estimated SED uncertainties based on integral SED sensitivities

  3. Estimating the uncertainty in thermochemical calculations for oxygen-hydrogen combustors

    Science.gov (United States)

    Sims, Joseph David

    The thermochemistry program CEA2 was combined with the statistical thermodynamics program PAC99 in a Monte Carlo simulation to determine the uncertainty in several CEA2 output variables due to uncertainty in thermodynamic reference values for the reactant and combustion species. In all, six typical performance parameters were examined, along with the required intermediate calculations (five gas properties and eight stoichiometric coefficients), for three hydrogen-oxygen combustors: a main combustor, an oxidizer preburner and a fuel preburner. The three combustors were analyzed in two different modes: design mode, where, for the first time, the uncertainty in thermodynamic reference values (taken from the literature) was considered (inputs to CEA2 were specified and so had no uncertainty); and data reduction mode, where inputs to CEA2 did have uncertainty. The inputs to CEA2 were contrived experimental measurements that were intended to represent the typical combustor testing facility. In design mode, uncertainties in the performance parameters were on the order of 0.1% for the main combustor, on the order of 0.05% for the oxidizer preburner and on the order of 0.01% for the fuel preburner. Thermodynamic reference values for H2O were the dominant sources of uncertainty, as was the assigned enthalpy for liquid oxygen. In data reduction mode, uncertainties in performance parameters increased significantly as a result of the uncertainties in experimental measurements compared to uncertainties in thermodynamic reference values. Main combustor and fuel preburner theoretical performance values had uncertainties of about 0.5%, while the oxidizer preburner had nearly 2%. Associated experimentally-determined performance values for all three combustors were 3% to 4%. The dominant sources of uncertainty in this mode were the propellant flowrates. These results only apply to hydrogen-oxygen combustors and should not be generalized to every propellant combination. Species for

  4. Global Sensitivity Analysis and Estimation of Model Error, Toward Uncertainty Quantification in Scramjet Computations

    Science.gov (United States)

    Huan, Xun; Safta, Cosmin; Sargsyan, Khachik; Geraci, Gianluca; Eldred, Michael S.; Vane, Zachary P.; Lacaze, Guilhem; Oefelein, Joseph C.; Najm, Habib N.

    2018-03-01

    The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress toward optimal engine designs requires accurate flow simulations together with uncertainty quantification. However, performing uncertainty quantification for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. These difficulties are addressed in this paper by developing practical uncertainty quantification algorithms and computational methods, and deploying them in the current study to large-eddy simulations of a jet in crossflow inside a simplified HIFiRE Direct Connect Rig scramjet combustor. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, which can help reduce the system's stochastic dimension. Second, because models of different fidelity are used in the overall uncertainty quantification assessment, a framework for quantifying and propagating the uncertainty due to model error is presented. These methods are demonstrated on a nonreacting jet-in-crossflow test problem in a simplified scramjet geometry, with parameter space up to 24 dimensions, using static and dynamic treatments of the turbulence subgrid model, and with two-dimensional and three-dimensional geometries.

  5. Global Sensitivity Analysis and Estimation of Model Error, Toward Uncertainty Quantification in Scramjet Computations

    Energy Technology Data Exchange (ETDEWEB)

    Huan, Xun [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Safta, Cosmin [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Sargsyan, Khachik [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Geraci, Gianluca [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Eldred, Michael S. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Vane, Zachary P. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Lacaze, Guilhem [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Oefelein, Joseph C. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Najm, Habib N. [Sandia National Lab. (SNL-CA), Livermore, CA (United States)

    2018-02-09

    The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress toward optimal engine designs requires accurate flow simulations together with uncertainty quantification. However, performing uncertainty quantification for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. These difficulties are addressed in this paper by developing practical uncertainty quantification algorithms and computational methods, and deploying them in the current study to large-eddy simulations of a jet in crossflow inside a simplified HIFiRE Direct Connect Rig scramjet combustor. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, which can help reduce the system’s stochastic dimension. Second, because models of different fidelity are used in the overall uncertainty quantification assessment, a framework for quantifying and propagating the uncertainty due to model error is presented. Finally, these methods are demonstrated on a nonreacting jet-in-crossflow test problem in a simplified scramjet geometry, with parameter space up to 24 dimensions, using static and dynamic treatments of the turbulence subgrid model, and with two-dimensional and three-dimensional geometries.

  6. Brine migration resulting from CO2 injection into saline aquifers – An approach to risk estimation including various levels of uncertainty

    DEFF Research Database (Denmark)

    Walter, Lena; Binning, Philip John; Oladyshkin, Sergey

    2012-01-01

    resulting from displaced brine. Quantifying risk on the basis of numerical simulations requires consideration of different kinds of uncertainties and this study considers both scenario uncertainty and statistical uncertainty. Addressing scenario uncertainty involves expert opinion on relevant geological......Comprehensive risk assessment is a major task for large-scale projects such as geological storage of CO2. Basic hazards are damage to the integrity of caprocks, leakage of CO2, or reduction of groundwater quality due to intrusion of fluids. This study focuses on salinization of freshwater aquifers...... for large-scale 3D models including complex physics. Therefore, we apply a model reduction based on arbitrary polynomial chaos expansion combined with the probabilistic collocation method. It is shown that, dependent on data availability, both types of uncertainty can be equally significant. The presented study...

  7. Uncertainty Propagation in OMFIT

    Science.gov (United States)

    Smith, Sterling; Meneghini, Orso; Sung, Choongki

    2017-10-01

    A rigorous comparison of power balance fluxes and turbulent model fluxes requires the propagation of uncertainties in the kinetic profiles and their derivatives. Making extensive use of the python uncertainties package, the OMFIT framework has been used to propagate covariant uncertainties to provide an uncertainty in the power balance calculation from the ONETWO code, as well as through the turbulent fluxes calculated by the TGLF code. The covariant uncertainties arise from fitting 1D (constant on flux surface) density and temperature profiles and associated random errors with parameterized functions such as a modified tanh. The power balance and model fluxes can then be compared with quantification of the uncertainties. No effort is made at propagating systematic errors. A case study will be shown for the effects of resonant magnetic perturbations on the kinetic profiles and fluxes at the top of the pedestal. A separate attempt at modeling the random errors with Monte Carlo sampling will be compared to the method of propagating the fitting function parameter covariant uncertainties. Work supported by US DOE under DE-FC02-04ER54698, DE-FG2-95ER-54309, DE-SC 0012656.
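
    A condensed sketch of that workflow outside OMFIT, using the same python uncertainties package (the modified-tanh shape, data and parameter names here are illustrative, not the OMFIT routines):

        import numpy as np
        from scipy.optimize import curve_fit
        from uncertainties import correlated_values, unumpy

        def mtanh(x, h, w, x0, b):
            # Simplified pedestal-like profile: height h, width w, position x0, offset b.
            return b + 0.5 * h * (1 - np.tanh((x - x0) / w))

        rng = np.random.default_rng(3)
        x = np.linspace(0.85, 1.0, 40)
        y = mtanh(x, 1.0, 0.02, 0.96, 0.1) + rng.normal(0, 0.03, x.size)  # synthetic data

        popt, pcov = curve_fit(mtanh, x, y, p0=[1.0, 0.03, 0.95, 0.1])
        h, w, x0, b = correlated_values(popt, pcov)   # carries the full covariance

        xf = np.linspace(0.85, 1.0, 200)
        prof = b + 0.5 * h * (1 - unumpy.tanh((xf - x0) / w))
        grad = -0.5 * (h / w) / unumpy.cosh((xf - x0) / w) ** 2  # analytic derivative

        i = np.argmin(abs(xf - 0.96))
        print(f"profile at pedestal: {prof[i]}")      # value +/- propagated uncertainty
        print(f"gradient at pedestal: {grad[i]}")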

  8. Uncertainties in Safety Analysis. A literature review

    International Nuclear Information System (INIS)

    Ekberg, C.

    1995-05-01

    The purpose of the presented work has been to give a short summary of the origins of many uncertainties arising in the design and performance assessment of a repository for spent nuclear fuel. Some different methods to treat these uncertainties are also included. The methods and conclusions are in many cases general in the sense that they are applicable to many other disciplines where simulations are used. As a conclusion it may be noted that uncertainties of different origin have been discussed and debated, but one large group, e.g., computer simulations, for which methods for a more explicit investigation exist, has not been investigated in a satisfactory way. 50 refs

  9. Uncertainties in Safety Analysis. A literature review

    Energy Technology Data Exchange (ETDEWEB)

    Ekberg, C [Chalmers Univ. of Technology, Goeteborg (Sweden). Dept. of Nuclear Chemistry

    1995-05-01

    The purpose of the presented work has been to give a short summary of the origins of many uncertainties arising in the design and performance assessment of a repository for spent nuclear fuel. Some different methods to treat these uncertainties are also included. The methods and conclusions are in many cases general in the sense that they are applicable to many other disciplines where simulations are used. As a conclusion it may be noted that uncertainties of different origin have been discussed and debated, but one large group, e.g., computer simulations, for which methods for a more explicit investigation exist, has not been investigated in a satisfactory way. 50 refs.

  10. Verification of uncertainty budgets

    DEFF Research Database (Denmark)

    Heydorn, Kaj; Madsen, B.S.

    2005-01-01

    , and therefore it is essential that the applicability of the overall uncertainty budget to actual measurement results be verified on the basis of current experimental data. This should be carried out by replicate analysis of samples taken in accordance with the definition of the measurand, but representing...... the full range of matrices and concentrations for which the budget is assumed to be valid. In this way the assumptions made in the uncertainty budget can be experimentally verified, both as regards sources of variability that are assumed negligible, and dominant uncertainty components. Agreement between...

  11. Evaluation of uncertainty of adaptive radiation therapy

    International Nuclear Information System (INIS)

    Garcia Molla, R.; Gomez Martin, C.; Vidueira, L.; Juan-Senabre, X.; Garcia Gomez, R.

    2013-01-01

    This work forms part of the tests to be performed for acceptance into clinical practice. The uncertainties of adaptive radiotherapy, which structure this study, can be divided into two large parts: dosimetry in the CBCT, and RDI. At each stage, their uncertainties are quantified and, with the total, an action level may be obtained from which it would be reasonable to adapt the plan. (Author)

  12. Uncertainty and validation. Effect of model complexity on uncertainty estimates

    International Nuclear Information System (INIS)

    Elert, M.

    1996-09-01

    In the Model Complexity subgroup of BIOMOVS II, models of varying complexity have been applied to the problem of downward transport of radionuclides in soils. A scenario describing a case of surface contamination of a pasture soil was defined. Three different radionuclides with different environmental behavior and radioactive half-lives were considered: Cs-137, Sr-90 and I-129. The intention was to give a detailed specification of the parameters required by different kinds of model, together with reasonable values for the parameter uncertainty. A total of seven modelling teams participated in the study using 13 different models. Four of the modelling groups performed uncertainty calculations using nine different modelling approaches. The models used range in complexity from analytical solutions of a 2-box model using annual average data to numerical models coupling hydrology and transport using data varying on a daily basis. The complex models needed to consider all aspects of radionuclide transport in a soil with a variable hydrology are often impractical to use in safety assessments. Instead simpler models, often box models, are preferred. The comparison of predictions made with the complex models and the simple models for this scenario shows that the predictions in many cases are very similar, e.g., in the predictions of the evolution of the root zone concentration. However, in other cases differences of many orders of magnitude can appear. One example is the prediction of the flux to the groundwater of radionuclides being transported through the soil column. Some issues that have come to focus in this study: There are large differences in the predicted soil hydrology and as a consequence also in the radionuclide transport, which suggests that there are large uncertainties in the calculation of effective precipitation and evapotranspiration. The approach used for modelling the water transport in the root zone has an impact on the predictions of the decline in root

  13. Electron-cyclotron maser utilizing free-electron two-quantum magnetic-wiggler radiation, and explanation of effective laser injection in an electron cyclotron maser as lift-up of saturated power level arisen from uncertainty in electron energy due to electron's transverse wiggling

    Science.gov (United States)

    Kim, S. H.

    2017-12-01

    We reason that in free-electron radiation, if the transition rate τ is less than the radiation frequency ν, the radiation has a broad-band spectrum, whereas if τ ≫ ν, the radiation is monochromatic. We find that when a weaker magnetic wiggler (MW) is superpositioned on a predominantly strong uniform magnetic field, free-electron two-quantum magnetic-wiggler (FETQMW) radiation takes place. In FETQMW radiation, the MW and the electron's intrinsic motivity to change its internal configuration through radiation act as two first-order perturbers, while the uniform magnetic field acts as the sole zeroth-order perturber. When ΔE ≪ hν, where ΔE is the uncertainty in the electron energy produced by transverse wiggling due to the MW, in conjunction with Heisenberg's uncertainty principle ΔE Δx ∼ h and E = (m^2 c^4 + c^2 p^2)^(1/2), the power of FETQMW radiation cannot exceed hν^2. However, we find that this power cap is lifted by the amount νΔE when ΔE ≫ hν holds [1,2]. This lift-up of the saturated radiation power is the mechanism responsible for the effective external injection of a 20 kW maser in an electron-cyclotron maser (ECM). We find that an MW-added ECM with radius 5 cm and length 1 m and operating parameters of the present beam technology can yield laser power of 50 MW at the radiation wavelength of 0.001 cm.

  14. Evaluating prediction uncertainty

    International Nuclear Information System (INIS)

    McKay, M.D.

    1995-03-01

    The probability distribution of a model prediction is presented as a proper basis for evaluating the uncertainty in a model prediction that arises from uncertainty in input values. Determination of important model inputs and subsets of inputs is made through comparison of the prediction distribution with conditional prediction probability distributions. Replicated Latin hypercube sampling and variance ratios are used in estimation of the distributions and in construction of importance indicators. The assumption of a linear relation between model output and inputs is not necessary for the indicators to be effective. A sequential methodology which includes an independent validation step is applied in two analysis applications to select subsets of input variables which are the dominant causes of uncertainty in the model predictions. Comparison with results from methods which assume linearity shows how those methods may fail. Finally, suggestions for treating structural uncertainty for submodels are presented
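
    A toy Python version of the sampling and importance indicators (a stand-in model and a simple binned estimator, not the code used in the applications above): replicated Latin hypercube designs give both an estimate of Var(E[Y|Xi])/Var(Y) for each input and a replication-based spread on that indicator, with no linearity assumption anywhere.

        import numpy as np
        from scipy.stats import qmc

        def model(x):                 # stand-in model; columns are inputs in [0, 1]
            return np.sin(2 * np.pi * x[:, 0]) + 0.3 * x[:, 1] ** 2 + 0.05 * x[:, 2]

        def eta2(xi, y, bins=20):
            # Variance ratio Var(E[Y|Xi])/Var(Y) from binned conditional means.
            idx = np.digitize(xi, np.linspace(0, 1, bins + 1)[1:-1])
            means = np.array([y[idx == b].mean() for b in range(bins)])
            counts = np.array([(idx == b).sum() for b in range(bins)])
            between = (counts * (means - y.mean()) ** 2).sum() / y.size
            return between / y.var()

        reps, n, d = 5, 2000, 3
        ratios = []
        for r in range(reps):         # replicated Latin hypercube designs
            x = qmc.LatinHypercube(d=d, seed=r).random(n)
            y = model(x)
            ratios.append([eta2(x[:, j], y) for j in range(d)])
        ratios = np.array(ratios)
        for j in range(d):            # replication gives a spread on each indicator
            print(f"input {j}: eta^2 = {ratios[:, j].mean():.3f} "
                  f"+/- {ratios[:, j].std(ddof=1):.3f}")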

  15. Uncertainty analysis in Monte Carlo criticality computations

    International Nuclear Information System (INIS)

    Qi Ao

    2011-01-01

    Highlights: ► Two types of uncertainty methods for k_eff Monte Carlo computations are examined. ► The sampling method has the fewest restrictions on perturbations but demands the most computing resources. ► The analytical method is limited to small perturbations of material properties. ► Practicality relies on efficiency, multiparameter applicability and data availability. - Abstract: Uncertainty analysis is imperative for nuclear criticality risk assessments when using Monte Carlo neutron transport methods to predict the effective neutron multiplication factor (k_eff) for fissionable material systems. For the validation of Monte Carlo codes for criticality computations against benchmark experiments, code accuracy and precision are measured by both the computational bias and the uncertainty in the bias. The uncertainty in the bias accounts for known or quantified experimental, computational and model uncertainties. For the application of Monte Carlo codes for criticality analysis of fissionable material systems, an administrative margin of subcriticality must be imposed to provide additional assurance of subcriticality for any unknown or unquantified uncertainties. Because of the substantial impact of the administrative margin of subcriticality on the economics and safety of nuclear fuel cycle operations, recently increasing interest in reducing the administrative margin of subcriticality makes the uncertainty analysis in criticality safety computations more risk-significant. This paper provides an overview of the two most popular k_eff uncertainty analysis methods for Monte Carlo criticality computations: (1) sampling-based methods, and (2) analytical methods. Examples are given to demonstrate their usage in the k_eff uncertainty analysis due to uncertainties in both neutronic and non-neutronic parameters of fissionable material systems.
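
    As a schematic of the sampling-based route only (a one-group k_inf stand-in with made-up nuclear-data uncertainties and correlations, in place of a real Monte Carlo transport calculation):

        import numpy as np

        rng = np.random.default_rng(42)
        mean = np.array([2.43, 0.0045, 0.0100])   # nu, Sigma_f, Sigma_a (illustrative)
        rel = np.array([0.002, 0.010, 0.008])     # relative standard uncertainties
        corr = np.array([[1.0, 0.0, 0.0],
                         [0.0, 1.0, 0.6],         # assumed Sigma_f/Sigma_a correlation
                         [0.0, 0.6, 1.0]])
        cov = corr * np.outer(mean * rel, mean * rel)

        nu, sf, sa = rng.multivariate_normal(mean, cov, size=100_000).T
        k = nu * sf / sa                          # one-group k_inf = nu*Sigma_f/Sigma_a

        print(f"k_inf = {k.mean():.5f} +/- {k.std(ddof=1):.5f} "
              f"({100 * k.std(ddof=1) / k.mean():.2f}% relative)")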

  16. Uncertainties and climatic change

    International Nuclear Information System (INIS)

    De Gier, A.M.; Opschoor, J.B.; Van de Donk, W.B.H.J.; Hooimeijer, P.; Jepma, J.; Lelieveld, J.; Oerlemans, J.; Petersen, A.

    2008-01-01

    Which processes in the climate system are misunderstood? How are scientists dealing with uncertainty about climate change? What will be done with the conclusions of the recently published synthesis report of the IPCC? These and other questions were answered during the meeting 'Uncertainties and climate change' that was held on Monday 26 November 2007 at the KNAW in Amsterdam. This report is a compilation of all the presentations and provides some conclusions resulting from the discussions during this meeting.

  17. Mechanics and uncertainty

    CERN Document Server

    Lemaire, Maurice

    2014-01-01

    Science is a quest for certainty, but lack of certainty is the driving force behind all of its endeavors. This book, specifically, examines the uncertainty of technological and industrial science. Uncertainty and Mechanics studies the concepts of mechanical design in an uncertain setting and explains engineering techniques for inventing cost-effective products. Though it references practical applications, this is a book about ideas and potential advances in mechanical science.

  18. Uncertainty: lotteries and risk

    OpenAIRE

    Ávalos, Eloy

    2011-01-01

    In this paper we develop the theory of uncertainty in a context where the risks assumed by the individual are measurable and manageable. We primarily use the definition of a lottery to formulate the axioms of the individual's preferences, and its representation through the von Neumann-Morgenstern utility function. We study the expected utility theorem and its properties, the paradoxes of choice under uncertainty and finally the measures of risk aversion with monetary lotteries.

  19. Managing project risks and uncertainties

    Directory of Open Access Journals (Sweden)

    Mike Mentis

    2015-01-01

    This article considers threats to a project slipping on budget, schedule and fit-for-purpose. Threat is used here as the collective for risks (quantifiable bad things that can happen) and uncertainties (poorly or not quantifiable bad possible events). Based on experience with projects in developing countries, this review considers that (a) project slippage is due to uncertainties rather than risks, (b) while eventuation of some bad things is beyond control, managed execution and oversight are still the primary means to keeping within budget, on time and fit-for-purpose, (c) improving project delivery is less about bigger and more complex and more about coordinated focus, effectiveness and developing thought-out heuristics, and (d) projects take longer and cost more partly because threat identification is inaccurate, the scope of identified threats is too narrow, and the threat assessment product is not integrated into overall project decision-making and execution. Almost by definition, what is poorly known is likely to cause problems. Yet it is not just the unquantifiability and intangibility of uncertainties causing project slippage, but that they are insufficiently taken into account in project planning and execution that causes budget and time overruns. Improving project performance requires purpose-driven and managed deployment of scarce seasoned professionals. This can be aided with independent oversight by deeply experienced panelists who contribute technical insights and can potentially show that diligence is seen to be done.

  20. Justification for recommended uncertainties

    International Nuclear Information System (INIS)

    Pronyaev, V.G.; Badikov, S.A.; Carlson, A.D.

    2007-01-01

    The uncertainties obtained in an earlier standards evaluation were considered to be unrealistically low by experts of the US Cross Section Evaluation Working Group (CSEWG). Therefore, the CSEWG Standards Subcommittee replaced the covariance matrices of evaluated uncertainties by expanded percentage errors that were assigned to the data over wide energy groups. There are a number of reasons that might lead to low uncertainties of the evaluated data: underestimation of the correlations existing between the results of different measurements; the presence of unrecognized systematic uncertainties in the experimental data, which can lead to biases in the evaluated data as well as to underestimation of the resulting uncertainties; and the fact that uncertainties for correlated data cannot be characterized by percentage uncertainties or variances alone. Covariances between the evaluated value at 0.2 MeV and other points obtained in model (RAC R-matrix and PADE2 analytical expansion) and non-model (GMA) fits of the 6Li(n,t) TEST1 data and the correlation coefficients are presented, and covariances between the evaluated value at 0.045 MeV and other points (along the line or column of the matrix) as obtained in EDA and RAC R-matrix fits of the data available for reactions that pass through the formation of the 7Li system are discussed. The GMA fit with the GMA database is shown for comparison. The following diagrams are discussed: percentage uncertainties of the evaluated cross section for the 6Li(n,t) reaction and for the 235U(n,f) reaction; the estimation given by CSEWG experts; the GMA result with the full GMA database, including experimental data for the 6Li(n,t), 6Li(n,n) and 6Li(n,total) reactions; uncertainties in the GMA combined fit for the standards; and EDA and RAC R-matrix results, respectively. Uncertainties of absolute and 252Cf fission spectrum averaged cross section measurements, and deviations between measured and evaluated values for 235U(n,f) cross-sections in the neutron energy range 1

  1. Controls on gas transfer velocities in a large river

    Science.gov (United States)

    The emission of biogenic gases from large rivers can be an important component of regional greenhouse gas budgets. However, emission rate estimates are often poorly constrained due to uncertainties in the air-water gas exchange rate. We used the floating chamber method to estim...

  2. Uncertainty in hydraulic tests in fractured rock

    International Nuclear Information System (INIS)

    Ji, Sung-Hoon; Koh, Yong-Kwon

    2014-01-01

    Interpretation of hydraulic tests in fractured rock involves uncertainty because the hydraulic properties of a fractured rock differ from those of a porous medium. In this study, we reviewed several interesting phenomena which show uncertainty in a hydraulic test in a fractured rock, and discussed their origins and how they should be considered during site characterisation. Our results show that the hydraulic parameters of a fractured rock estimated from a hydraulic test are associated with uncertainty due to the changed aperture and non-linear groundwater flow during the test. Although the magnitude of these two uncertainties is site-dependent, the results suggest conducting a hydraulic test with as little disturbance of the natural groundwater flow as possible in order to account for their uncertainty. Other effects reported from laboratory and numerical experiments, such as the trapping zone effect (Boutt, 2006) and the slip condition effect (Lee, 2014), can also introduce uncertainty into a hydraulic test, and these should be evaluated in a field test. It is necessary to consider how to evaluate the uncertainty in the hydraulic property during site characterisation and how to apply it to the safety assessment of a subsurface repository. (authors)

  3. Dosimetric uncertainty in prostate cancer proton radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Lin Liyong; Vargas, Carlos; Hsi Wen; Indelicato, Daniel; Slopsema, Roelf; Li Zuofeng; Yeung, Daniel; Horne, Dave; Palta, Jatinder [University of Florida Proton Therapy Institute, Jacksonville, Florida 32206 (United States)

    2008-11-15

    option producing a 2 mm sharper penumbra at the isocenter can reduce the magnitude of maximal doses to the RW by 2% compared to the alternate option utilizing the same block margin of 7 mm. The dose to 0.1 cc of the femoral head on the distal side of the lateral-posterior oblique beam is increased by 25 CGE for a patient with 25 cc of rectal gas. Conclusion: Variation in the rectal and bladder wall DVHs due to uncertainty in the position of the organs relative to the location of sharp dose falloff gradients should be accounted for when evaluating treatment plans. The proton beam delivery option producing a sharper penumbra reduces maximal doses to the rectal wall. Lateral-posterior oblique beams should be avoided in patients prone to develop a large amount of rectal gas.

  4. Dosimetric uncertainty in prostate cancer proton radiotherapy.

    Science.gov (United States)

    Lin, Liyong; Vargas, Carlos; Hsi, Wen; Indelicato, Daniel; Slopsema, Roelf; Li, Zuofeng; Yeung, Daniel; Horne, Dave; Palta, Jatinder

    2008-11-01

    magnitude of maximal doses to the RW by 2% compared to the alternate option utilizing the same block margin of 7 mm. The dose to 0.1 cc of the femoral head on the distal side of the lateral-posterior oblique beam is increased by 25 CGE for a patient with 25 cc of rectal gas. Variation in the rectal and bladder wall DVHs due to uncertainty in the position of the organs relative to the location of sharp dose falloff gradients should be accounted for when evaluating treatment plans. The proton beam delivery option producing a sharper penumbra reduces maximal doses to the rectal wall. Lateral-posterior oblique beams should be avoided in patients prone to develop a large amount of rectal gas.

  5. Uncertainties in Forecasting Streamflow using Entropy Theory

    Science.gov (United States)

    Cui, H.; Singh, V. P.

    2017-12-01

    Streamflow forecasting is essential in river restoration, reservoir operation, power generation, irrigation, navigation, and water management. However, uncertainties always accompany a forecast, and they may affect the forecasting results and lead to large variations. Therefore, uncertainties must be considered and assessed properly when forecasting streamflow for water management. The aim of our work is to quantify the uncertainties involved in forecasting streamflow and to provide a reliable streamflow forecast. Although streamflow time series are stochastic, they exhibit seasonal and periodic patterns. Therefore, streamflow forecasting entails modeling seasonality, periodicity, and the correlation structure, and assessing uncertainties. This study applies entropy theory to forecast streamflow and to measure uncertainties during the forecasting process. To apply entropy theory for streamflow forecasting, spectral analysis is combined with time series analysis, as spectral analysis can be employed to characterize patterns of streamflow variation and identify the periodicity of streamflow. That is, it permits the extraction of significant information for understanding the streamflow process and its prediction. Application of entropy theory for streamflow forecasting involves determination of the spectral density, determination of parameters, and extension of the autocorrelation function. The uncertainties introduced by the precipitation input, the forecasting model, and the forecasted results are measured separately using entropy. Using information theory, we describe how these uncertainties are transported and aggregated during these processes.
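
    A compact Python illustration of the ingredients (synthetic monthly flows and an assumed harmonic forecast, not the authors' model): the periodogram identifies the dominant period, and the Shannon entropy of the forecast residuals quantifies the uncertainty left in the forecast.

        import numpy as np
        from scipy.signal import periodogram

        rng = np.random.default_rng(5)
        t = np.arange(240)                                   # 20 years of monthly flows
        flow = 100 + 40 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 10, t.size)

        f, Pxx = periodogram(flow - flow.mean())
        print(f"dominant period: {1 / f[Pxx.argmax()]:.1f} months")  # expect ~12

        # Forecast = the seasonal harmonic; the entropy of the residual distribution
        # measures the uncertainty that remains after the periodicity is removed.
        resid = flow - (100 + 40 * np.sin(2 * np.pi * t / 12))
        counts, _ = np.histogram(resid, bins=20)
        p = counts[counts > 0] / counts.sum()
        print(f"residual entropy: {-(p * np.log(p)).sum():.2f} nats "
              f"(log(20) = {np.log(20):.2f} nats would be maximal)")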

  6. Resolving uncertainty in chemical speciation determinations

    Science.gov (United States)

    Smith, D. Scott; Adams, Nicholas W. H.; Kramer, James R.

    1999-10-01

    Speciation determinations involve uncertainty in system definition and experimentation. Identification of appropriate metals and ligands from basic chemical principles, analytical window considerations, types of species and checking for consistency in equilibrium calculations are considered in system definition uncertainty. A systematic approach to system definition limits uncertainty in speciation investigations. Experimental uncertainty is discussed with an example of proton interactions with Suwannee River fulvic acid (SRFA). A Monte Carlo approach was used to estimate uncertainty in experimental data, resulting from the propagation of uncertainties in electrode calibration parameters and experimental data points. Monte Carlo simulations revealed large uncertainties present at high (>9-10) and low (monoprotic ligands. Least-squares fit the data with 21 sites, whereas linear programming fit the data equally well with 9 sites. Multiresponse fitting, involving simultaneous fluorescence and pH measurements, improved model discrimination. Deconvolution of the excitation versus emission fluorescence surface for SRFA establishes a minimum of five sites. Diprotic sites are also required for the five fluorescent sites, and one non-fluorescent monoprotic site was added to accommodate the pH data. Consistent with greater complexity, the multiresponse method had broader confidence limits than the uniresponse methods, but corresponded better with the accepted total carboxylic content for SRFA. Overall there was a 40% standard deviation in total carboxylic content for the multiresponse fitting, versus 10% and 1% for least-squares and linear programming, respectively.

  7. Uncertainty and validation. Effect of model complexity on uncertainty estimates

    Energy Technology Data Exchange (ETDEWEB)

    Elert, M. [Kemakta Konsult AB, Stockholm (Sweden)] [ed.

    1996-09-01

    In the Model Complexity subgroup of BIOMOVS II, models of varying complexity have been applied to the problem of downward transport of radionuclides in soils. A scenario describing a case of surface contamination of a pasture soil was defined. Three different radionuclides with different environmental behavior and radioactive half-lives were considered: Cs-137, Sr-90 and I-129. The intention was to give a detailed specification of the parameters required by different kinds of model, together with reasonable values for the parameter uncertainty. A total of seven modelling teams participated in the study using 13 different models. Four of the modelling groups performed uncertainty calculations using nine different modelling approaches. The models used range in complexity from analytical solutions of a 2-box model using annual average data to numerical models coupling hydrology and transport using data varying on a daily basis. The complex models needed to consider all aspects of radionuclide transport in a soil with a variable hydrology are often impractical to use in safety assessments. Instead simpler models, often box models, are preferred. The comparison of predictions made with the complex models and the simple models for this scenario shows that the predictions in many cases are very similar, e.g., in the predictions of the evolution of the root zone concentration. However, in other cases differences of many orders of magnitude can appear. One example is the prediction of the flux to the groundwater of radionuclides being transported through the soil column. Some issues that have come to focus in this study: There are large differences in the predicted soil hydrology and as a consequence also in the radionuclide transport, which suggests that there are large uncertainties in the calculation of effective precipitation and evapotranspiration. The approach used for modelling the water transport in the root zone has an impact on the predictions of the decline in root

  8. Piezoelectric energy harvesting with parametric uncertainty

    International Nuclear Information System (INIS)

    Ali, S F; Friswell, M I; Adhikari, S

    2010-01-01

    The design and analysis of energy harvesting devices has become increasingly important in recent years. Most of the literature has focused on the deterministic analysis of these systems, and the problem of uncertain parameters has received less attention. Energy harvesting devices exhibit parametric uncertainty due to errors in measurement, errors in modelling and variability in the parameters during manufacture. This paper investigates the effect of parametric uncertainty in the mechanical system on the harvested power, and derives approximate explicit formulae for the optimal electrical parameters that maximize the mean harvested power. The maximum of the mean harvested power decreases with increasing uncertainty, and the optimal frequency at which the maximum mean power occurs shifts. The effects of the parameter variance on the optimal electrical time constant and optimal coupling coefficient are reported. Monte Carlo based simulation results are used to further analyse the system under parametric uncertainty
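
    The headline effect is reproducible in a few lines of Python with a generic resonant response (a toy transfer function and invented numbers, not the paper's electromechanical model): as the coefficient of variation of the natural frequency grows, the maximum of the mean harvested power drops and the frequency at which it occurs shifts.

        import numpy as np

        rng = np.random.default_rng(11)
        zeta = 0.02                                    # mechanical damping ratio
        w = np.linspace(0.7, 1.3, 601)                 # normalized excitation frequency

        def power(w, wn):
            # Generic resonant power response of a linear harvester (toy form).
            return w**2 / ((wn**2 - w**2) ** 2 + (2 * zeta * wn * w) ** 2)

        for cov in (0.0, 0.02, 0.05):                  # spread of the natural frequency
            wn = rng.normal(1.0, cov, 5_000) if cov else np.array([1.0])
            mean_P = power(w[:, None], wn[None, :]).mean(axis=1)
            print(f"cov = {cov:.2f}: peak mean power {mean_P.max():8.1f} "
                  f"at w = {w[mean_P.argmax()]:.3f}")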

  9. Effects of High-Latitude Forcing Uncertainty on the Low-Latitude and Midlatitude Ionosphere

    Science.gov (United States)

    Pedatella, N. M.; Lu, G.; Richmond, A. D.

    2018-01-01

    Ensemble simulations are performed using the Thermosphere-Ionosphere-Electrodynamics General Circulation Model (TIE-GCM) in order to understand the role of high-latitude forcing uncertainty on the low-latitude and midlatitude ionosphere response to the April 2010 geomagnetic storm. The ensemble is generated by perturbing either the high-latitude electric potential or auroral energy flux in the assimilative mapping for ionosphere electrodynamics (AMIE). Simulations with perturbed high-latitude electric potential result in substantial intraensemble variability in the low-latitude and midlatitude ionosphere response to the geomagnetic storm, and the ensemble standard deviation for the change in NmF2 reaches 50-100% of the mean change. Such large intraensemble variability is not seen when perturbing the auroral energy flux. In this case, the effects of the forcing uncertainty are primarily confined to high latitudes. We therefore conclude that the specification of high-latitude electric fields is an important source of uncertainty when modeling the low-latitude and midlatitude ionosphere response to a geomagnetic storm. A multiple linear regression analysis of the results indicates that uncertainty in the storm time changes in the equatorial electric fields, neutral winds, and neutral composition can all contribute to the uncertainty in the ionosphere electron density. The results of the present study provide insight into the possible uncertainty in simulations of the low-latitude and midlatitude ionosphere response to geomagnetic storms due to imperfect knowledge of the high-latitude forcing.

  10. ICYESS 2013: Understanding and Interpreting Uncertainty

    Science.gov (United States)

    Rauser, F.; Niederdrenk, L.; Schemann, V.; Schmidt, A.; Suesser, D.; Sonntag, S.

    2013-12-01

    We will report the outcomes and highlights of the Interdisciplinary Conference of Young Earth System Scientists (ICYESS) on Understanding and Interpreting Uncertainty in September 2013, Hamburg, Germany. This conference is aimed at early career scientists (Masters to Postdocs) from a large variety of scientific disciplines and backgrounds (natural, social and political sciences) and will enable 3 days of discussions on a variety of uncertainty-related aspects: 1) How do we deal with implicit and explicit uncertainty in our daily scientific work? What is uncertain for us, and for which reasons? 2) How can we communicate these uncertainties to other disciplines? E.g., is uncertainty in cloud parameterization and respectively equilibrium climate sensitivity a concept that is understood equally well in natural and social sciences that deal with Earth System questions? Or vice versa, is, e.g., normative uncertainty as in choosing a discount rate relevant for natural scientists? How can those uncertainties be reconciled? 3) How can science communicate this uncertainty to the public? Is it useful at all? How are the different possible measures of uncertainty understood in different realms of public discourse? Basically, we want to learn from all disciplines that work together in the broad Earth System Science community how to understand and interpret uncertainty - and then transfer this understanding to the problem of how to communicate with the public, or its different layers / agents. ICYESS is structured in a way that participation is only possible via presentation, so every participant will give their own professional input into how the respective disciplines deal with uncertainty. Additionally, a large focus is put onto communication techniques; there are no 'standard presentations' in ICYESS. Keynote lectures by renowned scientists and discussions will lead to a deeper interdisciplinary understanding of what we do not really know, and how to deal with it. Many

  11. Planning ATES systems under uncertainty

    Science.gov (United States)

    Jaxa-Rozen, Marc; Kwakkel, Jan; Bloemendal, Martin

    2015-04-01

    Aquifer Thermal Energy Storage (ATES) can contribute to significant reductions in energy use within the built environment, by providing seasonal energy storage in aquifers for the heating and cooling of buildings. ATES systems have experienced a rapid uptake over the last two decades; however, despite successful experiments at the individual level, the overall performance of ATES systems remains below expectations - largely due to suboptimal practices for the planning and operation of systems in urban areas. The interaction between ATES systems and underground aquifers can be interpreted as a common-pool resource problem, in which thermal imbalances or interference could eventually degrade the storage potential of the subsurface. Current planning approaches for ATES systems thus typically follow the precautionary principle. For instance, the permitting process in the Netherlands is intended to minimize thermal interference between ATES systems. However, as shown in recent studies (Sommer et al., 2015; Bakr et al., 2013), a controlled amount of interference may benefit the collective performance of ATES systems. An overly restrictive approach to permitting is instead likely to create an artificial scarcity of available space, limiting the potential of the technology in urban areas. In response, master plans - which take into account the collective arrangement of multiple systems - have emerged as an increasingly popular alternative. However, permits and master plans both take a static, ex ante view of ATES governance, making it difficult to predict the effect of evolving ATES use or climatic conditions on overall performance. In particular, the adoption of new systems by building operators is likely to be driven by the available subsurface space and by the performance of existing systems; these outcomes are themselves a function of planning parameters. From this perspective, the interactions between planning authorities, ATES operators, and subsurface conditions

  12. Sensitivity and uncertainty studies of the CRAC2 computer code

    International Nuclear Information System (INIS)

    Kocher, D.C.; Ward, R.C.; Killough, G.G.; Dunning, D.E. Jr.; Hicks, B.B.; Hosker, R.P. Jr.; Ku, J.Y.; Rao, K.S.

    1985-05-01

    This report presents a study of the sensitivity of early fatalities, early injuries, latent cancer fatalities, and economic costs for hypothetical nuclear reactor accidents as predicted by the CRAC2 computer code (CRAC = Calculation of Reactor Accident Consequences) to uncertainties in selected models and parameters used in the code. The sources of uncertainty that were investigated in the CRAC2 sensitivity studies include (1) the model for plume rise, (2) the model for wet deposition, (3) the procedure for meteorological bin-sampling involving the selection of weather sequences that contain rain, (4) the dose conversion factors for inhalation as they are affected by uncertainties in the physical and chemical form of the released radionuclides, (5) the weathering half-time for external ground-surface exposure, and (6) the transfer coefficients for estimating exposures via terrestrial foodchain pathways. The sensitivity studies were performed for selected radionuclide releases, hourly meteorological data, land-use data, a fixed non-uniform population distribution, a single evacuation model, and various release heights and sensible heat rates. Two important general conclusions from the sensitivity and uncertainty studies are as follows: (1) The large effects on predicted early fatalities and early injuries that were observed in some of the sensitivity studies apparently are due in part to the presence of thresholds in the dose-response models. Thus, the observed sensitivities depend in part on the magnitude of the radionuclide releases. (2) Some of the effects on predicted early fatalities and early injuries that were observed in the sensitivity studies were comparable to effects that were due only to the selection of different sets of weather sequences in bin-sampling runs. 47 figs., 50 tabs

  13. BEPU methods and combining of uncertainties

    International Nuclear Information System (INIS)

    Prosek, A.; Mavko, B.

    2004-01-01

    After approval of the revised rule on the acceptance of emergency core cooling system (ECCS) performance in 1988, there has been significant interest in the development of codes and methodologies for best-estimate loss-of-coolant accident (LOCA) analyses. The Code Scaling, Applicability and Uncertainty (CSAU) evaluation method was developed and demonstrated for a large-break (LB) LOCA in a pressurized water reactor. Later, several new best estimate plus uncertainty (BEPU) methods were developed around the world. The purpose of the paper is to identify and compare the statistical approaches of BEPU methods and present their important plant and licensing applications. The study showed that uncertainty analysis with random sampling of input parameters and the use of order statistics for desired tolerance limits of output parameters is today a commonly accepted approach. The existing BEPU methods seem mature enough, while future research may focus on codes with internal assessment of uncertainty. (author)
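
    The order-statistics approach mentioned above fixes the number of code runs needed for a given tolerance limit. As a sketch of the standard Wilks construction (an assumption on my part; the abstract only refers to the approach), the m-th largest of n independent runs bounds the beta-quantile of the output with confidence gamma:

```python
# Wilks-style sample sizes for one-sided tolerance limits.
from math import comb

def wilks_confidence(n: int, m: int, beta: float) -> float:
    """P(m-th largest of n i.i.d. runs >= beta-quantile of the output)."""
    return 1.0 - sum(comb(n, j) * (1 - beta) ** j * beta ** (n - j)
                     for j in range(m))

def min_runs(m: int, beta: float = 0.95, gamma: float = 0.95) -> int:
    """Smallest n such that the m-th largest run gives a beta/gamma limit."""
    n = m
    while wilks_confidence(n, m, beta) < gamma:
        n += 1
    return n

print(min_runs(1))  # 59 runs for a 95%/95% one-sided limit (first order)
print(min_runs(2))  # 93 runs when the second-largest value is used instead
```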

  14. Policy Uncertainty and the US Ethanol Industry

    Directory of Open Access Journals (Sweden)

    Jason P. H. Jones

    2017-11-01

    Full Text Available The Renewable Fuel Standard (RFS2), as implemented, has introduced uncertainty for US ethanol producers and the supporting commodity markets. First, the fixed mandate for what is mainly cornstarch-based ethanol has increased feedstock price volatility and exerts a general effect across the agricultural sector. Second, the large discrepancy between the original Energy Independence and Security Act (EISA) intentions and the actual RFS2 implementation for some fuel classes has increased the uncertainty facing investors in biofuel production, distribution, and consumption. Here we discuss and analyze the sources of uncertainty and evaluate the effect of potential RFS2 adjustments as they influence these uncertainties. This includes the use of a flexible, production-dependent mandate on cornstarch ethanol. We find that a flexible mandate on cornstarch ethanol, relaxed during drought, could significantly reduce commodity price spikes and alleviate the decline of livestock production in cases of feedstock production shortfalls, but it would increase the risk for ethanol investors.

  15. Uncertainty analysis of energy consumption in dwellings

    Energy Technology Data Exchange (ETDEWEB)

    Pettersen, Trine Dyrstad

    1997-12-31

    This thesis presents a comprehensive study of an energy estimation model that can be used to examine the uncertainty of predicted energy consumption in a dwelling. The variation and uncertainty of input parameters due to the outdoor climate, the building construction and the inhabitants are studied as a basis for further energy evaluations. The variations of energy consumption that occur among nominally similar dwellings are also investigated in order to verify the simulated energy consumption. The main topics are (1) a study of expected variations and uncertainties both in the input parameters used in energy consumption calculations and in the energy consumption of the dwelling, (2) the development and evaluation of a simplified energy calculation model that considers uncertainties in the input parameters, (3) an evaluation of the influence of the uncertain parameters on the total variation, so that the most important parameters can be identified, and (4) the recommendation of a simplified procedure for treating uncertainties or possible deviations from average conditions. 90 refs., 182 figs., 73 tabs.
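
    To illustrate the kind of input-uncertainty propagation such a model performs, here is a minimal Monte Carlo sketch; the linear heat-loss model and every distribution in it are illustrative assumptions, not the thesis's model:

```python
# Monte Carlo propagation of input uncertainty into annual heat demand.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

U = rng.normal(0.35, 0.05, n)               # mean envelope U-value, W/(m2 K) (assumed)
A = 350.0                                   # envelope area, m2
ach = rng.normal(0.5, 0.15, n)              # air changes per hour (occupant-driven, assumed)
V = 300.0                                   # heated volume, m3
degree_hours = rng.normal(90_000, 8_000, n) # K*h per year (climate variability, assumed)

# Annual heat demand in kWh: (transmission + ventilation losses) * degree hours
H = U * A + 0.33 * ach * V                  # total heat-loss coefficient, W/K
Q = H * degree_hours / 1000.0               # kWh/year

print(f"mean {Q.mean():.0f} kWh, std {Q.std():.0f} kWh "
      f"({100 * Q.std() / Q.mean():.0f}% relative uncertainty)")
```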

  16. Analyzing the uncertainty of ensemble-based gridded observations in land surface simulations and drought assessment

    Science.gov (United States)

    Ahmadalipour, Ali; Moradkhani, Hamid

    2017-12-01

    Hydrologic modeling is one of the primary tools utilized for drought monitoring and drought early warning systems. Several sources of uncertainty in hydrologic modeling have been addressed in the literature. However, few studies have assessed the uncertainty of gridded observation datasets from a drought monitoring perspective. This study provides a hydrologic-modeling-oriented analysis of the gridded observation data uncertainties over the Pacific Northwest (PNW) and its implications for drought assessment. We utilized a recently developed 100-member ensemble-based observed forcing data to simulate hydrologic fluxes at 1/8° spatial resolution using the Variable Infiltration Capacity (VIC) model, and compared the results with a deterministic observation. Meteorological and hydrological droughts are studied at multiple timescales over the basin, and seasonal long-term trends and variations of drought extent are investigated for each case. Results reveal large uncertainty of the observed datasets at the monthly timescale, with systematic differences for temperature records, mainly due to different lapse rates. This uncertainty results in large disparities of drought characteristics. In general, an increasing trend is found for winter drought extent across the PNW. Furthermore, a ∼3% decrease per decade is detected for snow water equivalent (SWE) over the PNW, with the region being more susceptible to SWE variations of the northern Rockies than the western Cascades. The agricultural areas of southern Idaho demonstrate a decreasing trend of natural soil moisture as a result of precipitation decline, which implies a higher demand for anthropogenic water storage and irrigation systems.

  17. Image restoration, uncertainty, and information.

    Science.gov (United States)

    Yu, F T

    1969-01-01

    Some of the physical interpretations of image restoration are discussed. From the theory of information, the unrealizability of an inverse filter can be explained by degradation of information, which is due to distortion of the recorded image. Image restoration is a time and space problem, which can be recognized from the theory of relativity (the problem of image restoration is related to Heisenberg's uncertainty principle in quantum mechanics). A detailed discussion of the relationship between information and energy is given. Two general results may be stated: (1) the restoration of the image from the distorted signal is possible only if it satisfies the detectability condition; however, the restored image can, at best, only approach the maximum allowable time criterion. (2) The restoration of an image by superimposing the distorted signal (due to smearing) is a physically unrealizable method; such a restoration procedure could only be achieved by the expenditure of an infinite amount of energy.

  18. Uncertainty in artificial intelligence

    CERN Document Server

    Levitt, TS; Lemmer, JF; Shachter, RD

    1990-01-01

    Clearly illustrated in this volume is the current relationship between Uncertainty and AI. It has been said that research in AI revolves around five basic questions asked relative to some particular domain: What knowledge is required? How can this knowledge be acquired? How can it be represented in a system? How should this knowledge be manipulated in order to provide intelligent behavior? How can the behavior be explained? In this volume, all of these questions are addressed. From the perspective of the relationship of uncertainty to the basic questions of AI, the book divides naturally i

  19. Experimental Research Examining how People can Cope with Uncertainty through Soft Haptic Sensations

    NARCIS (Netherlands)

    Van Horen, F.; Mussweiler, T.

    2015-01-01

    Human beings are constantly surrounded by uncertainty and change. The question arises how people cope with such uncertainty. To date, most research has focused on the cognitive strategies people adopt to deal with uncertainty. However, especially when uncertainty is due to unpredictable societal

  20. The influence of climate change on flood risks in France - first estimates and uncertainty analysis

    OpenAIRE

    Dumas, Patrice; Hallegatte, Stéphane; Quintana-Seguí, Pere; Martin, Eric

    2013-01-01

    This paper proposes a methodology to project the possible evolution of river flood damages due to climate change, and applies it to mainland France. Its main contributions are (i) to demonstrate a methodology to investigate the full causal chain from global climate change to local economic flood losses; (ii) to show that future flood losses may change in a very significant manner over France; (iii) to show that a very large uncertainty arises from the climate...

  1. Uncertainty Analyses and Strategy

    International Nuclear Information System (INIS)

    Kevin Coppersmith

    2001-01-01

    The DOE identified a variety of uncertainties, arising from different sources, during its assessment of the performance of a potential geologic repository at the Yucca Mountain site. In general, the number and detail of process models developed for the Yucca Mountain site, and the complex coupling among those models, make the direct incorporation of all uncertainties difficult. The DOE has addressed these issues in a number of ways using an approach to uncertainties that is focused on producing a defensible evaluation of the performance of a potential repository. The treatment of uncertainties oriented toward defensible assessments has led to analyses and models with so-called 'conservative' assumptions and parameter bounds, where conservative implies lower performance than might be demonstrated with a more realistic representation. The varying maturity of the analyses and models, and uneven level of data availability, result in total system level analyses with a mix of realistic and conservative estimates (for both probabilistic representations and single values). That is, some inputs have realistically represented uncertainties, and others are conservatively estimated or bounded. However, this approach is consistent with the 'reasonable assurance' approach to compliance demonstration, which was called for in the U.S. Nuclear Regulatory Commission's (NRC) proposed 10 CFR Part 63 regulation (64 FR 8640 [DIRS 101680]). A risk analysis that includes conservatism in the inputs will result in conservative risk estimates. Therefore, the approach taken for the Total System Performance Assessment for the Site Recommendation (TSPA-SR) provides a reasonable representation of processes and conservatism for purposes of site recommendation. However, mixing unknown degrees of conservatism in models and parameter representations reduces the transparency of the analysis and makes the development of coherent and consistent probability statements about projected repository

  2. Fundamental uncertainty limit of optical flow velocimetry according to Heisenberg's uncertainty principle.

    Science.gov (United States)

    Fischer, Andreas

    2016-11-01

    Optical flow velocity measurements are important for understanding the complex behavior of flows. Although a huge variety of methods exist, they are all based on either a Doppler or a time-of-flight measurement principle. Doppler velocimetry evaluates the velocity-dependent frequency shift of light scattered at a moving particle, whereas time-of-flight velocimetry evaluates the traveled distance of a scattering particle per time interval. Regarding the aim of achieving a minimal measurement uncertainty, it is unclear whether one principle allows lower uncertainties to be achieved, or whether both principles can achieve equal uncertainties. For this reason, the natural, fundamental uncertainty limit according to Heisenberg's uncertainty principle is derived for the Doppler and time-of-flight measurement principles, respectively. The obtained limits of the velocity uncertainty are qualitatively identical, showing, e.g., a direct proportionality to the absolute value of the velocity to the power of 3/2 and an inverse proportionality to the square root of the scattered light power. Hence, both measurement principles have identical potentials regarding the fundamental uncertainty limit due to the quantum mechanical behavior of photons. This fundamental limit can be attained (at least asymptotically) in reality with either Doppler or time-of-flight methods, because the respective Cramér-Rao bounds for dominating photon shot noise, which is modeled as white Poissonian noise, are identical with the conclusions from Heisenberg's uncertainty principle.
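
    Written out as a formula (a paraphrase of the scaling stated above, with sigma_v the fundamental velocity uncertainty, v the flow velocity and P_s the scattered light power; the setup-dependent prefactor is left unspecified):

```latex
% Scaling of the fundamental velocity-uncertainty limit, as stated in the abstract
\sigma_v \;\propto\; \frac{|v|^{3/2}}{\sqrt{P_s}}
```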

  3. Total uncertainty of low velocity thermal anemometers for measurement of indoor air movements

    DEFF Research Database (Denmark)

    Jørgensen, F.; Popiolek, Z.; Melikov, Arsen Krikor

    2004-01-01

    For a specific thermal anemometer with an omnidirectional velocity sensor, the expanded total uncertainty in measured mean velocity Û(Vmean) and the expanded total uncertainty in measured turbulence intensity Û(Tu) due to different error sources are estimated. The values are based on a previously developed mathematical model of the anemometer in combination with a large database of representative room flows measured with a 3-D Laser Doppler anemometer (LDA). A direct comparison between measurements with a thermal anemometer and a 3-D LDA in flows of varying velocity and turbulence intensity shows good agreement, not only between the two instruments but also between the thermal anemometer and its mathematical model. The differences in the measurements performed with the two instruments are all well within the measurement uncertainty of both anemometers.

  4. Development of a Dynamic Lidar Uncertainty Framework

    Energy Technology Data Exchange (ETDEWEB)

    Newman, Jennifer [National Renewable Energy Laboratory (NREL), Golden, CO (United States)]; Clifton, Andrew [WindForS]; Bonin, Timothy [CIRES/NOAA ESRL]; Choukulkar, Aditya [CIRES/NOAA ESRL]; Brewer, W. Alan [NOAA ESRL]; Delgado, Ruben [University of Maryland Baltimore County]

    2017-08-07

    As wind turbine sizes increase and wind energy expands to more complex and remote sites, remote-sensing devices such as lidars are expected to play a key role in wind resource assessment and power performance testing. The switch to remote-sensing devices represents a paradigm shift in the way the wind industry typically obtains and interprets measurement data for wind energy. For example, the measurement techniques and sources of uncertainty for a remote-sensing device are vastly different from those associated with a cup anemometer on a meteorological tower. Current IEC standards for quantifying remote sensing device uncertainty for power performance testing consider uncertainty due to mounting, calibration, and classification of the remote sensing device, among other parameters. Values of the uncertainty are typically given as a function of the mean wind speed measured by a reference device and are generally fixed, leading to climatic uncertainty values that apply to the entire measurement campaign. However, real-world experience and a consideration of the fundamentals of the measurement process have shown that lidar performance is highly dependent on atmospheric conditions, such as wind shear, turbulence, and aerosol content. At present, these conditions are not directly incorporated into the estimated uncertainty of a lidar device. In this presentation, we describe the development of a new dynamic lidar uncertainty framework that adapts to current flow conditions and more accurately represents the actual uncertainty inherent in lidar measurements under different conditions. In this new framework, sources of uncertainty are identified for estimation of the line-of-sight wind speed and reconstruction of the three-dimensional wind field. These sources are then related to physical processes caused by the atmosphere and lidar operating conditions. The framework is applied to lidar data from a field measurement site to assess the ability of the framework to predict

  5. Unrealized Global Temperature Increase: Implications of Current Uncertainties

    Science.gov (United States)

    Schwartz, Stephen E.

    2018-04-01

    Unrealized increase in global mean surface air temperature (GMST) may result from the climate system not being in steady state with forcings and/or from cessation of negative aerosol forcing that would result from decreases in emissions. An observation-constrained method is applied to infer the dependence of Earth's climate sensitivity on forcing by anthropogenic aerosols within the uncertainty on that forcing given by the Fifth (2013) Assessment Report of the Intergovernmental Panel on Climate Change. Within these uncertainty ranges the increase in GMST due to temperature lag for future forcings held constant is slight (0.09-0.19 K over 20 years; 0.12-0.26 K over 100 years). However, the incremental increase in GMST that would result from a hypothetical abrupt cessation of sources of aerosols could be quite large but is highly uncertain, 0.1-1.3 K over 20 years. Decrease in CO2 abundance and forcing following abrupt cessation of emissions would offset these increases in GMST over 100 years by as little as 0.09 K to as much as 0.8 K. The uncertainties quantified here greatly limit confidence in projections of change in GMST that would result from any strategy for future reduction of emissions.

  6. Starling flock networks manage uncertainty in consensus at low cost.

    Directory of Open Access Journals (Sweden)

    George F Young

    Full Text Available Flocks of starlings exhibit a remarkable ability to maintain cohesion as a group in highly uncertain environments and with limited, noisy information. Recent work demonstrated that individual starlings within large flocks respond to a fixed number of nearest neighbors, but until now it was not understood why this number is seven. We analyze robustness to uncertainty of consensus in empirical data from multiple starling flocks and show that the flock interaction networks with six or seven neighbors optimize the trade-off between group cohesion and individual effort. We can distinguish these numbers of neighbors from fewer or greater numbers using our systems-theoretic approach to measuring robustness of interaction networks as a function of the network structure, i.e., who is sensing whom. The metric quantifies the disagreement within the network due to disturbances and noise during consensus behavior and can be evaluated over a parameterized family of hypothesized sensing strategies (here the parameter is the number of neighbors). We use this approach to further show that for the range of flocks studied the optimal number of neighbors does not depend on the number of birds within a flock; rather, it depends on the shape, notably the thickness, of the flock. The results suggest that robustness to uncertainty may have been a factor in the evolution of flocking for starlings. More generally, our results elucidate the role of the interaction network on uncertainty management in collective behavior, and motivate the application of our approach to other biological networks.

  7. Starling Flock Networks Manage Uncertainty in Consensus at Low Cost

    Science.gov (United States)

    Young, George F.; Scardovi, Luca; Cavagna, Andrea; Giardina, Irene; Leonard, Naomi E.

    2013-01-01

    Flocks of starlings exhibit a remarkable ability to maintain cohesion as a group in highly uncertain environments and with limited, noisy information. Recent work demonstrated that individual starlings within large flocks respond to a fixed number of nearest neighbors, but until now it was not understood why this number is seven. We analyze robustness to uncertainty of consensus in empirical data from multiple starling flocks and show that the flock interaction networks with six or seven neighbors optimize the trade-off between group cohesion and individual effort. We can distinguish these numbers of neighbors from fewer or greater numbers using our systems-theoretic approach to measuring robustness of interaction networks as a function of the network structure, i.e., who is sensing whom. The metric quantifies the disagreement within the network due to disturbances and noise during consensus behavior and can be evaluated over a parameterized family of hypothesized sensing strategies (here the parameter is number of neighbors). We use this approach to further show that for the range of flocks studied the optimal number of neighbors does not depend on the number of birds within a flock; rather, it depends on the shape, notably the thickness, of the flock. The results suggest that robustness to uncertainty may have been a factor in the evolution of flocking for starlings. More generally, our results elucidate the role of the interaction network on uncertainty management in collective behavior, and motivate the application of our approach to other biological networks. PMID:23382667

  8. Using finite mixture models in thermal-hydraulics system code uncertainty analysis

    Energy Technology Data Exchange (ETDEWEB)

    Carlos, S., E-mail: scarlos@iqn.upv.es [Department d’Enginyeria Química i Nuclear, Universitat Politècnica de València, Camí de Vera s.n, 46022 València (Spain)]; Sánchez, A. [Department d’Estadística Aplicada i Qualitat, Universitat Politècnica de València, Camí de Vera s.n, 46022 València (Spain)]; Ginestar, D. [Department de Matemàtica Aplicada, Universitat Politècnica de València, Camí de Vera s.n, 46022 València (Spain)]; Martorell, S. [Department d’Enginyeria Química i Nuclear, Universitat Politècnica de València, Camí de Vera s.n, 46022 València (Spain)]

    2013-09-15

    Highlights: • Best estimate codes simulation needs uncertainty quantification. • The output variables can present multimodal probability distributions. • The analysis of multimodal distribution is performed using finite mixture models. • Two methods to reconstruct output variable probability distribution are used. -- Abstract: Nuclear Power Plant safety analysis is mainly based on the use of best estimate (BE) codes that predict the plant behavior under normal or accidental conditions. As the BE codes introduce uncertainties due to uncertainty in input parameters and modeling, it is necessary to perform uncertainty assessment (UA), and eventually sensitivity analysis (SA), of the results obtained. These analyses are part of the appropriate treatment of uncertainties imposed by current regulation based on the adoption of the best estimate plus uncertainty (BEPU) approach. The most popular approach for uncertainty assessment, based on Wilks' method, obtains a tolerance/confidence interval, but it does not completely characterize the output variable behavior, which is required for an extended UA and SA. However, the development of standard UA and SA imposes a high computational cost due to the large number of simulations needed. In order to obtain more information about the output variable and, at the same time, to keep the computational cost as low as possible, there has been a recent shift toward developing metamodels (models of a model), or surrogate models, that approximate or emulate complex computer codes. In this context, different techniques exist to reconstruct the probability distribution using the information provided by a sample of values, for example, finite mixture models. In this paper, the Expectation Maximization and the k-means algorithms are used to obtain a finite mixture model that reconstructs the output variable probability distribution from data obtained with RELAP-5 simulations. Both methodologies have been applied to a separated
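
    As an illustration of the reconstruction step, the sketch below fits a two-component Gaussian mixture by EM, using scikit-learn rather than the paper's own implementation; the `pct` sample is a hypothetical stand-in for output values from RELAP-5 runs:

```python
# Fit a finite Gaussian mixture to a (possibly multimodal) output sample.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
# Hypothetical bimodal sample standing in for peak-temperature results, K
pct = np.concatenate([rng.normal(900, 25, 70), rng.normal(1050, 40, 30)])

gm = GaussianMixture(n_components=2, n_init=10, random_state=0)
gm.fit(pct.reshape(-1, 1))

for w, mu, var in zip(gm.weights_, gm.means_.ravel(), gm.covariances_.ravel()):
    print(f"weight {w:.2f}, mean {mu:.0f} K, std {np.sqrt(var):.0f} K")

# The fitted density can then be queried beyond a Wilks-style tolerance
# interval, e.g. for a Monte Carlo 95th percentile:
sample, _ = gm.sample(200_000)
print("95th percentile:", np.percentile(sample, 95))
```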

  9. Evaluating uncertainties in regional climate simulations over South America at the seasonal scale

    Energy Technology Data Exchange (ETDEWEB)

    Solman, Silvina A. [Centro de Investigaciones del Mar y la Atmosfera CIMA/CONICET-UBA, DCAO/FCEN, UMI-IFAECI/CNRS, CIMA-Ciudad Universitaria, Buenos Aires (Argentina)]; Pessacg, Natalia L. [Centro Nacional Patagonico (CONICET), Puerto Madryn, Chubut (Argentina)]

    2012-07-15

    This work focuses on the evaluation of different sources of uncertainty affecting regional climate simulations over South America at the seasonal scale, using the MM5 model. The simulations cover a 3-month period for the austral spring season. Several four-member ensembles were performed in order to quantify the uncertainty due to: the internal variability; the definition of the regional model domain; the choice of physical parameterizations and the selection of physical parameters within a particular cumulus scheme. The uncertainty was measured by means of the spread among individual members of each ensemble during the integration period. Results show that the internal variability, triggered by differences in the initial conditions, represents the lowest level of uncertainty for every variable analyzed. The geographic distribution of the spread among ensemble members depends on the variable: for precipitation and temperature the largest spread is found over tropical South America, while for the mean sea level pressure the largest spread is located over the southeastern Atlantic Ocean, where large synoptic-scale activity occurs. Using nudging techniques to ingest the boundary conditions dramatically reduces the internal variability. The uncertainty due to the domain choice displays a similar spatial pattern compared with the internal variability, except for the mean sea level pressure field, though its magnitude is larger over the whole model domain for every variable. The largest spread among ensemble members is found for the ensemble in which different combinations of physical parameterizations are selected. The perturbed physics ensemble produces a level of uncertainty slightly larger than the internal variability. This study suggests that no matter what the source of uncertainty is, the geographical distribution of the spread among members of the ensembles is invariant, particularly for precipitation and temperature. (orig.)

  10. Risks, uncertainty, vagueness

    International Nuclear Information System (INIS)

    Haefele, W.; Renn, O.; Erdmann, G.

    1990-01-01

    The notion of 'risk' is discussed in its social and technological contexts, leading to an investigation of the terms factuality, hypotheticality, uncertainty, and vagueness, and to the problems of acceptance and acceptability, especially in the context of political decision finding. (DG)

  11. Penetrating brain injury due to a large asbestos fragment treated by decompressive craniectomy: case report

    Directory of Open Access Journals (Sweden)

    Gustavo Cardoso de Andrade

    2004-12-01

    Full Text Available We report the case of a 22-year-old man, the victim of a penetrating brain injury caused by an asbestos fragment measuring 15 x 12 cm, successfully treated by decompressive craniectomy. Unlike gunshot wounds to the head, penetrating brain injuries from low-energy objects are unusual. Most cases reported in the literature involve cranio-orbital injuries or self-inflicted lesions in mentally ill patients. The reported case is noteworthy due to the large dimensions of the foreign body, the treatment via decompressive craniectomy and the good functional outcome achieved by the patient.

  12. Strategy under uncertainty.

    Science.gov (United States)

    Courtney, H; Kirkland, J; Viguerie, P

    1997-01-01

    At the heart of the traditional approach to strategy lies the assumption that by applying a set of powerful analytic tools, executives can predict the future of any business accurately enough to allow them to choose a clear strategic direction. But what happens when the environment is so uncertain that no amount of analysis will allow us to predict the future? What makes for a good strategy in highly uncertain business environments? The authors, consultants at McKinsey & Company, argue that uncertainty requires a new way of thinking about strategy. All too often, they say, executives take a binary view: either they underestimate uncertainty to come up with the forecasts required by their companies' planning or capital-budgeting processes, or they overestimate it, abandon all analysis, and go with their gut instinct. The authors outline a new approach that begins by making a crucial distinction among four discrete levels of uncertainty that any company might face. They then explain how a set of generic strategies--shaping the market, adapting to it, or reserving the right to play at a later time--can be used in each of the four levels. And they illustrate how these strategies can be implemented through a combination of three basic types of actions: big bets, options, and no-regrets moves. The framework can help managers determine which analytic tools can inform decision making under uncertainty--and which cannot. At a broader level, it offers executives a discipline for thinking rigorously and systematically about uncertainty and its implications for strategy.

  13. Mitigating Provider Uncertainty in Service Provision Contracts

    Science.gov (United States)

    Smith, Chris; van Moorsel, Aad

    Uncertainty is an inherent property of open, distributed and multiparty systems. The viability of the mutually beneficial relationships which motivate these systems relies on rational decision-making by each constituent party under uncertainty. Service provision in distributed systems is one such relationship. Uncertainty is experienced by the service provider in his ability to deliver a service with selected quality level guarantees due to inherent non-determinism, such as load fluctuations and hardware failures. Statistical estimators utilized to model this non-determinism introduce additional uncertainty through sampling error. Inability of the provider to accurately model and analyze uncertainty in the quality level guarantees can result in the formation of sub-optimal service provision contracts. Emblematic consequences include loss of revenue, inefficient resource utilization and erosion of reputation and consumer trust. We propose a utility model for contract-based service provision to provide a systematic approach to optimal service provision contract formation under uncertainty. Performance prediction methods to enable the derivation of statistical estimators for quality level are introduced, with analysis of their resultant accuracy and cost.

  14. Approximating uncertainty of annual runoff and reservoir yield using stochastic replicates of global climate model data

    Science.gov (United States)

    Peel, M. C.; Srikanthan, R.; McMahon, T. A.; Karoly, D. J.

    2015-04-01

    Two key sources of uncertainty in projections of future runoff for climate change impact assessments are uncertainty between global climate models (GCMs) and within a GCM. Within-GCM uncertainty is the variability in GCM output that occurs when running a scenario multiple times but each run has slightly different, but equally plausible, initial conditions. The limited number of runs available for each GCM and scenario combination within the Coupled Model Intercomparison Project phase 3 (CMIP3) and phase 5 (CMIP5) data sets limits the assessment of within-GCM uncertainty. In this second of two companion papers, the primary aim is to present a proof-of-concept approximation of within-GCM uncertainty for monthly precipitation and temperature projections and to assess the impact of within-GCM uncertainty on modelled runoff for climate change impact assessments. A secondary aim is to assess the impact of between-GCM uncertainty on modelled runoff. Here we approximate within-GCM uncertainty by developing non-stationary stochastic replicates of GCM monthly precipitation and temperature data. These replicates are input to an off-line hydrologic model to assess the impact of within-GCM uncertainty on projected annual runoff and reservoir yield. We adopt stochastic replicates of available GCM runs to approximate within-GCM uncertainty because large ensembles (hundreds of runs) for a given GCM and scenario are unavailable, other than the Climateprediction.net data set for the Hadley Centre GCM. To date within-GCM uncertainty has received little attention in the hydrologic climate change impact literature and this analysis provides an approximation of the uncertainty in projected runoff, and reservoir yield, due to within- and between-GCM uncertainty of precipitation and temperature projections. In the companion paper, McMahon et al. (2015) sought to reduce between-GCM uncertainty by removing poorly performing GCMs, resulting in a selection of five better performing GCMs from
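
    One simple way to build such replicates (an illustrative assumption, not necessarily the stochastic model the authors use) is to detrend the GCM series, fit per-calendar-month moments and a lag-1 autocorrelation to the anomalies, and then simulate new anomaly paths around the same trend:

```python
# Stochastic replicates of a monthly series preserving trend, seasonal
# statistics and lag-1 persistence; the input series here is a toy stand-in.
import numpy as np

def replicates(x, n_rep, rng):
    t = np.arange(x.size)
    trend = np.polyval(np.polyfit(t, x, 1), t)        # linear trend
    anom = x - trend
    months = t % 12
    mu = np.array([anom[months == m].mean() for m in range(12)])
    sd = np.array([anom[months == m].std(ddof=1) for m in range(12)])
    z = (anom - mu[months]) / sd[months]              # standardized anomalies
    rho = np.corrcoef(z[:-1], z[1:])[0, 1]            # lag-1 autocorrelation
    eps = rng.normal(size=(n_rep, x.size)) * np.sqrt(1 - rho ** 2)
    zr = np.empty((n_rep, x.size))
    zr[:, 0] = rng.normal(size=n_rep)
    for i in range(1, x.size):                        # AR(1) simulation
        zr[:, i] = rho * zr[:, i - 1] + eps[:, i]
    return trend + mu[months] + sd[months] * zr       # replicate ensemble

rng = np.random.default_rng(3)
gcm_tas = 15 + 0.002 * np.arange(1200) + rng.normal(0, 1, 1200)  # toy GCM series
reps = replicates(gcm_tas, n_rep=100, rng=rng)
print(reps.shape, reps.std(axis=0).mean())            # within-GCM spread proxy
```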

  15. Uncertainty quantification for environmental models

    Science.gov (United States)

    Hill, Mary C.; Lu, Dan; Kavetski, Dmitri; Clark, Martyn P.; Ye, Ming

    2012-01-01

    Environmental models are used to evaluate the fate of fertilizers in agricultural settings (including soil denitrification), the degradation of hydrocarbons at spill sites, and water supply for people and ecosystems in small to large basins and cities—to mention but a few applications of these models. They also play a role in understanding and diagnosing potential environmental impacts of global climate change. The models are typically mildly to extremely nonlinear. The persistent demand for enhanced dynamics and resolution to improve model realism [17] means that lengthy individual model execution times will remain common, notwithstanding continued enhancements in computer power. In addition, high-dimensional parameter spaces are often defined, which increases the number of model runs required to quantify uncertainty [2]. Some environmental modeling projects have access to extensive funding and computational resources; many do not. The many recent studies of uncertainty quantification in environmental model predictions have focused on uncertainties related to data error and sparsity of data, expert judgment expressed mathematically through prior information, poorly known parameter values, and model structure (see, for example, [1,7,9,10,13,18]). Approaches for quantifying uncertainty include frequentist (potentially with prior information [7,9]), Bayesian [13,18,19], and likelihood-based. A few of the numerous methods, including some sensitivity and inverse methods with consequences for understanding and quantifying uncertainty, are as follows: Bayesian hierarchical modeling and Bayesian model averaging; single-objective optimization with error-based weighting [7] and multi-objective optimization [3]; methods based on local derivatives [2,7,10]; screening methods like OAT (one at a time) and the method of Morris [14]; FAST (Fourier amplitude sensitivity testing) [14]; the Sobol' method [14]; randomized maximum likelihood [10]; Markov chain Monte Carlo (MCMC) [10
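
    Of the variance-based methods listed above, the Sobol' first-order index is easy to sketch with a pick-freeze Monte Carlo estimator (a generic illustration in plain numpy, not code from any of the cited studies):

```python
# First-order Sobol' indices via the Saltelli pick-freeze scheme.
import numpy as np

def sobol_first_order(f, d, n, rng):
    A = rng.uniform(size=(n, d))
    B = rng.uniform(size=(n, d))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]), ddof=1)    # total output variance
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                           # "freeze" factor i
        S[i] = np.mean(fB * (f(ABi) - fA)) / var      # Saltelli (2010) estimator
    return S

# Ishigami-like toy model with unequal parameter importance
def model(X):
    x = -np.pi + 2 * np.pi * X
    return np.sin(x[:, 0]) + 7 * np.sin(x[:, 1]) ** 2 \
        + 0.1 * x[:, 2] ** 4 * np.sin(x[:, 0])

rng = np.random.default_rng(4)
print(sobol_first_order(model, d=3, n=200_000, rng=rng))  # ~ [0.31, 0.44, 0.00]
```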

  16. Optimization Under Uncertainty for Wake Steering Strategies

    Energy Technology Data Exchange (ETDEWEB)

    Quick, Julian [National Renewable Energy Laboratory (NREL), Golden, CO (United States)]; Annoni, Jennifer [National Renewable Energy Laboratory (NREL), Golden, CO (United States)]; King, Ryan N [National Renewable Energy Laboratory (NREL), Golden, CO (United States)]; Dykes, Katherine L [National Renewable Energy Laboratory (NREL), Golden, CO (United States)]; Fleming, Paul A [National Renewable Energy Laboratory (NREL), Golden, CO (United States)]; Ning, Andrew [Brigham Young University]

    2017-08-03

    Offsetting turbines' yaw orientations from incoming wind is a powerful tool that may be leveraged to reduce undesirable wake effects on downstream turbines. First, we examine a simple two-turbine case to gain intuition as to how inflow direction uncertainty affects the optimal solution. The turbines are modeled with unidirectional inflow such that one turbine directly wakes the other, using ten rotor diameter spacing. We perform optimization under uncertainty (OUU) via a parameter sweep of the front turbine. The OUU solution generally prefers less steering. We then do this optimization for a 60-turbine wind farm with unidirectional inflow, varying the degree of inflow uncertainty and approaching this OUU problem by nesting a polynomial chaos expansion uncertainty quantification routine within an outer optimization. We examined how different levels of uncertainty in the inflow direction affect the ratio of the expected values of deterministic and OUU solutions for steering strategies in the large wind farm, assuming the directional uncertainty used to reach said OUU solution (this ratio is defined as the value of the stochastic solution or VSS).
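
    A toy version of the two-turbine sweep can convey the idea; all wake formulas below are simplified stand-ins (an assumed Gaussian wake deficit, cosine-cubed yaw power loss, and linear wake deflection), not the engineering wake model used in the study:

```python
# Yaw optimization under inflow-direction uncertainty for a toy two-turbine farm.
import numpy as np

D, spacing, sigma_dir = 1.0, 10.0, np.deg2rad(5.0)   # rotor dia., 10D spacing, 5 deg std

def farm_power(yaw, theta):
    """Normalized two-turbine power for a yaw offset and inflow error theta (rad)."""
    front = np.cos(yaw) ** 3                          # assumed yaw power loss
    # Wake-center offset at the rear turbine: yaw deflection + inflow error
    offset = 0.3 * yaw * spacing * D + np.tan(theta) * spacing * D
    width = 0.5 * D + 0.05 * spacing * D              # crude linear wake expansion
    deficit = 0.4 * np.exp(-0.5 * (offset / width) ** 2)
    rear = (1.0 - deficit) ** 3                       # power ~ velocity cubed
    return front + rear

yaws = np.deg2rad(np.linspace(0, 40, 81))
nodes, w = np.polynomial.hermite_e.hermegauss(15)     # Gauss-Hermite quadrature
thetas, w = nodes * sigma_dir, w / w.sum()            # for theta ~ N(0, sigma_dir)

det = [farm_power(y, 0.0) for y in yaws]                   # deterministic objective
exp = [np.sum(w * farm_power(y, thetas)) for y in yaws]    # expected-value objective
print("deterministic optimum:", np.rad2deg(yaws[np.argmax(det)]), "deg")
print("OUU optimum:          ", np.rad2deg(yaws[np.argmax(exp)]), "deg")
```

    Averaging the objective over the inflow distribution smooths out the benefit of deflecting the wake, which is why the OUU optimum tends toward less aggressive steering, consistent with the behavior reported above.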

  17. Uncertainty estimates for theoretical atomic and molecular data

    International Nuclear Information System (INIS)

    Chung, H-K; Braams, B J; Bartschat, K; Császár, A G; Drake, G W F; Kirchner, T; Kokoouline, V; Tennyson, J

    2016-01-01

    Sources of uncertainty are reviewed for calculated atomic and molecular data that are important for plasma modeling: atomic and molecular structures and cross sections for electron-atom, electron-molecule, and heavy particle collisions. We concentrate on model uncertainties due to approximations to the fundamental many-body quantum mechanical equations and we aim to provide guidelines to estimate uncertainties as a routine part of computations of data for structure and scattering. (topical review)

  18. Failure probability under parameter uncertainty.

    Science.gov (United States)

    Gerrard, R; Tsanakas, A

    2011-05-01

    In many problems of risk analysis, failure is equivalent to the event of a random risk factor exceeding a given threshold. Failure probabilities can be controlled if a decisionmaker is able to set the threshold at an appropriate level. This abstract situation applies, for example, to environmental risks with infrastructure controls; to supply chain risks with inventory controls; and to insurance solvency risks with capital controls. However, uncertainty around the distribution of the risk factor implies that parameter error will be present and the measures taken to control failure probabilities may not be effective. We show that parameter uncertainty increases the probability (understood as expected frequency) of failures. For a large class of loss distributions, arising from increasing transformations of location-scale families (including the log-normal, Weibull, and Pareto distributions), the article shows that failure probabilities can be exactly calculated, as they are independent of the true (but unknown) parameters. Hence it is possible to obtain an explicit measure of the effect of parameter uncertainty on failure probability. Failure probability can be controlled in two different ways: (1) by reducing the nominal required failure probability, depending on the size of the available data set, and (2) by modifying the distribution itself that is used to calculate the risk control. Approach (1) corresponds to a frequentist/regulatory view of probability, while approach (2) is consistent with a Bayesian/personalistic view. We furthermore show that the two approaches are consistent in achieving the required failure probability. Finally, we briefly discuss the effects of data pooling and its systemic risk implications. © 2010 Society for Risk Analysis.
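
    The paper's central point is easy to reproduce in simulation (with illustrative numbers of my own choosing): set the threshold from parameters estimated on n past losses, and the realized failure frequency exceeds the nominal one by an amount that, for a location-scale family such as the log-normal, is the same whatever the true parameters:

```python
# Realized vs nominal failure probability under parameter uncertainty.
import numpy as np
from scipy import stats

def realized_failure_prob(n, p_nominal, mu, sigma, n_trials, rng):
    z = stats.norm.ppf(1 - p_nominal)
    logs = rng.normal(mu, sigma, size=(n_trials, n))     # n observed log-losses per trial
    mu_hat = logs.mean(axis=1)
    s_hat = logs.std(axis=1, ddof=1)
    thresh = mu_hat + z * s_hat                          # threshold from estimates
    new = rng.normal(mu, sigma, size=n_trials)           # next-period log-loss
    return np.mean(new > thresh)                         # expected failure frequency

rng = np.random.default_rng(5)
for mu, sigma in [(0.0, 1.0), (5.0, 0.3)]:               # two different "true" parameter sets
    p = realized_failure_prob(n=20, p_nominal=0.01, mu=mu, sigma=sigma,
                              n_trials=400_000, rng=rng)
    print(f"mu={mu}, sigma={sigma}: realized {p:.4f} vs nominal 0.0100")
```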

  19. Uncertainty and validation. Effect of user interpretation on uncertainty estimates

    International Nuclear Information System (INIS)

    Kirchner, G.; Peterson, R.

    1996-11-01

    Uncertainty in predictions of environmental transfer models arises from, among other sources, the adequacy of the conceptual model, the approximations made in coding the conceptual model, the quality of the input data, the uncertainty in parameter values, and the assumptions made by the user. In recent years efforts to quantify the confidence that can be placed in predictions have been increasing, but have concentrated on a statistical propagation of the influence of parameter uncertainties on the calculational results. The primary objective of this Working Group of BIOMOVS II was to test users' influence on model predictions on a more systematic basis than has been done before. The main goals were as follows: To compare differences between predictions from different people all using the same model and the same scenario description with the statistical uncertainties calculated by the model. To investigate the main reasons for different interpretations by users. To create a better awareness of the potential influence of the user on the modeling results. Terrestrial food chain models driven by deposition of radionuclides from the atmosphere were used. Three codes were obtained and run with three scenarios by a maximum of 10 users. A number of conclusions can be drawn, some of which are general and independent of the type of models and processes studied, while others are restricted to the few processes that were addressed directly: For any set of predictions, the variation in best estimates was greater than one order of magnitude. Often the range increased from deposition to pasture to milk, probably due to additional transfer processes. The 95% confidence intervals about the predictions calculated from the parameter distributions prepared by the participants did not always overlap the observations; similarly, sometimes the confidence intervals on the predictions did not overlap. Often the 95% confidence intervals of individual predictions were smaller than the

  20. Uncertainty and validation. Effect of user interpretation on uncertainty estimates

    Energy Technology Data Exchange (ETDEWEB)

    Kirchner, G. [Univ. of Bremen (Germany)]; Peterson, R. [AECL, Chalk River, ON (Canada)] [and others]

    1996-11-01

    Uncertainty in predictions of environmental transfer models arises from, among other sources, the adequacy of the conceptual model, the approximations made in coding the conceptual model, the quality of the input data, the uncertainty in parameter values, and the assumptions made by the user. In recent years efforts to quantify the confidence that can be placed in predictions have been increasing, but have concentrated on a statistical propagation of the influence of parameter uncertainties on the calculational results. The primary objective of this Working Group of BIOMOVS II was to test users' influence on model predictions on a more systematic basis than has been done before. The main goals were as follows: To compare differences between predictions from different people all using the same model and the same scenario description with the statistical uncertainties calculated by the model. To investigate the main reasons for different interpretations by users. To create a better awareness of the potential influence of the user on the modeling results. Terrestrial food chain models driven by deposition of radionuclides from the atmosphere were used. Three codes were obtained and run with three scenarios by a maximum of 10 users. A number of conclusions can be drawn, some of which are general and independent of the type of models and processes studied, while others are restricted to the few processes that were addressed directly: For any set of predictions, the variation in best estimates was greater than one order of magnitude. Often the range increased from deposition to pasture to milk, probably due to additional transfer processes. The 95% confidence intervals about the predictions calculated from the parameter distributions prepared by the participants did not always overlap the observations; similarly, sometimes the confidence intervals on the predictions did not overlap. Often the 95% confidence intervals of individual predictions were smaller than the

  1. Uncertainty in projected point precipitation extremes for hydrological impact analysis of climate change

    Science.gov (United States)

    Van Uytven, Els; Willems, Patrick

    2017-04-01

    temperature and PET observations at Uccle and a large ensemble of 160 global climate model runs (CMIP5). They cover all four representative concentration pathway based greenhouse gas scenarios. While evaluating the downscaled meteorological series, particular attention was given to the performance of extreme value metrics (e.g. for precipitation, by means of intensity-duration-frequency statistics). Moreover, the total uncertainty was decomposed into the fractional uncertainties for each of the uncertainty sources considered. Research assessing the additional uncertainty due to parameter and structural uncertainties of the hydrological impact model is ongoing.

  2. Uncertainty Analysis of Light Water Reactor Fuel Lattices

    Directory of Open Access Journals (Sweden)

    C. Arenas

    2013-01-01

    Full Text Available The study explored the calculation of uncertainty based on available cross-section covariance data and computational tools at the fuel lattice level, which included pin cell and fuel assembly models. Uncertainty variations due to temperature changes and different fuel compositions are the main focus of this analysis. Selected assemblies and unit pin cells were analyzed according to the OECD LWR UAM benchmark specifications. Criticality and uncertainty analyses were performed using the TSUNAMI-2D sequence in SCALE 6.1. It was found that uncertainties increase with increasing temperature, while kinf decreases. This increase in the uncertainty is due to the increase in sensitivity of the largest contributing reaction of uncertainty, namely, the neutron capture reaction 238U(n,γ), due to the Doppler broadening. In addition, three types (UOX, MOX, and UOX-Gd2O3) of fuel material compositions were analyzed. A remarkable increase in uncertainty in kinf was observed for the case of MOX fuel; the increase in uncertainty of kinf in MOX fuel was nearly twice the corresponding value in UOX fuel. The neutron-nuclide reaction of 238U, mainly inelastic scattering (n,n′), contributed the most to the uncertainties in the MOX fuel, shifting the neutron spectrum to higher energy compared to the UOX fuel.

  3. Impacts of Korea's Exchange Rate Uncertainty on Exports

    Directory of Open Access Journals (Sweden)

    Kwon Sik Kim

    2003-12-01

    Full Text Available This paper examines the effects of two types of uncertainty related to the real effective exchange rate (REER) in Korea on export trends. To decompose the uncertainty into its two components, I propose an advanced generalized Markov switching model, as developed by Hamilton (1989) and then expanded by Kim and Kim (1996). The proposed model is useful in uncovering two sources of uncertainty: the permanent component of REER and the purely transitory component. I think that the two types of uncertainty have different effects on export trends in Korea. The transitory component of REER has no effect on the export trend at 5-percent significance, but the permanent component has an effect at this level. In addition, the degree of uncertainty (low, medium, or high, in both the permanent and the transitory components of REER) also has different effects on export trends in Korea: only high uncertainty in the permanent component affects export trends. The results show that when the policy authority intends to prevent the shrinkage of exports due to the deepening of uncertainties in the foreign exchange market, the economic impacts of its intervention could appear differently according to the characteristics and degree of the uncertainties. Therefore, they imply that economic measures which fail to properly grasp the sources of the uncertainties may even bring economic costs.
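
    A minimal sketch of regime-switching estimation in this spirit, using statsmodels' MarkovRegression on simulated data (the paper's generalized two-component model is more elaborate; `dreer` is a hypothetical stand-in for REER changes):

```python
# Two-regime Markov switching model with switching variance on simulated data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)

# Simulate a persistent two-state chain: low- and high-variance regimes
n = 400
state = np.zeros(n, dtype=int)
for t in range(1, n):
    state[t] = state[t - 1] if rng.uniform() > 0.05 else 1 - state[t - 1]
dreer = np.where(state == 0, rng.normal(0.0, 0.5, n), rng.normal(0.0, 2.0, n))

# Hamilton-type filter/smoother runs inside the fit
mod = sm.tsa.MarkovRegression(dreer, k_regimes=2, trend="c", switching_variance=True)
res = mod.fit()

# Smoothed regime probabilities for the first few dates
probs = np.asarray(res.smoothed_marginal_probabilities)
print(probs[:5])
```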

  4. Uncertainty in adaptive capacity

    International Nuclear Information System (INIS)

    Neil Adger, W.; Vincent, K.

    2005-01-01

    The capacity to adapt is a critical element of the process of adaptation: it is the vector of resources that represent the asset base from which adaptation actions can be made. Adaptive capacity can in theory be identified and measured at various scales, from the individual to the nation. The assessment of uncertainty within such measures comes from the contested knowledge domain and theories surrounding the nature of the determinants of adaptive capacity and the human action of adaptation. While generic adaptive capacity at the national level, for example, is often postulated as being dependent on health, governance and political rights, literacy, and economic well-being, the determinants of these variables at national levels are not widely understood. We outline the nature of this uncertainty for the major elements of adaptive capacity and illustrate these issues with the example of a social vulnerability index for countries in Africa. (authors)

  5. Uncertainties about climate

    International Nuclear Information System (INIS)

    Laval, Katia; Laval, Guy

    2013-01-01

    Like meteorology, climatology is not an exact science: climate change forecasts necessarily include a share of uncertainty. It is precisely this uncertainty which is brandished and exploited by the opponents of the global warming theory to call into question the estimations of its future consequences. Is it legitimate to predict the future using past climate data (well documented up to 100,000 years BP) or the climates of other planets, taking into account the imprecision of the measurements and the intrinsic complexity of the Earth's machinery? How is it possible to model such a huge and interwoven system, for which any exact description has become impossible? Why do water and precipitation play such an important role in local and global forecasts, and how should they be treated? This book, written by two physicists, answers these delicate questions simply, in order to give anyone the possibility of forming their own opinion about global warming and the need to act rapidly.

  6. Offshore wind farms for hydrogen production subject to uncertainties

    Energy Technology Data Exchange (ETDEWEB)

    Kassem, Nabil [Royal Inst. of Tech., Stockholm (Sweden). Dept. of Energy Processes]

    2002-07-01

    Wind power is a source of clean, nonpolluting electricity, which is fully competitive, if installed at favorable wind sites, with fossil fuel and nuclear power generation. Major technical growth has been in Europe, where government policies and high conventional energy costs favor the use of wind power. As part of its strategy, the EU-Commission has launched a target to increase the installed capacity of wind power from 7 GWe in 1998 to 40 GWe by year 2012. Wind power is an intermittent electricity generator, thus it does not provide electric power on an 'as needed' basis. Off-peak power generated from offshore wind farms can be utilized for hydrogen production using water electrolysis. Like electricity, hydrogen is a secondary energy carrier, which will pave the way for future sustainable energy systems. It is environmentally friendly, versatile, with great potential in stationary and mobile power applications. Water electrolysis is a well-established technology, which depends on the availability of cheap electrical power. Offshore wind farms have longer lifetimes due to lower mechanical fatigue loads, yet to be economic, they have to be of sizes greater than 150 MW using large turbines (> 1.5 MW). The major challenge in wind energy assessment is how accurately the wind speed, and hence the error in wind energy, can be predicted. Therefore, wind power is subject to a great deal of uncertainty, which should be accounted for in order to provide meaningful and reliable estimates of performance and economic figures-of-merit. Failure to account for uncertainties would result in deterministic estimates that tend to overstate performance and underestimate costs. This study uses methods of risk analysis to evaluate the simultaneous effect of multiple input uncertainties, and provide Life Cycle Assessment (LCA) of the economic viability of offshore wind systems for hydrogen production subject to technical and economical uncertainties. (Published in summary form only)
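
    As a flavor of the risk-analysis step, a Monte Carlo propagation of a few uncertain inputs into a levelized hydrogen cost might look as follows; every number and distribution here is an illustrative assumption, not data from the study:

```python
# Monte Carlo propagation of input uncertainty into a levelized hydrogen cost.
import numpy as np

rng = np.random.default_rng(7)
n = 200_000

cap_factor = rng.normal(0.40, 0.05, n).clip(0.2, 0.6)  # offshore capacity factor (assumed)
capex = rng.normal(2400, 300, n)                       # EUR per kW, farm + electrolyser (assumed)
eff = rng.normal(52, 3, n)                             # kWh of electricity per kg H2 (assumed)
crf = 0.08                                             # capital recovery factor (fixed assumption)

kwh_per_kw_yr = cap_factor * 8760                      # annual electricity per kW installed
kg_h2_per_kw_yr = kwh_per_kw_yr / eff                  # annual hydrogen output per kW
cost_per_kg = capex * crf / kg_h2_per_kw_yr            # levelized cost, EUR/kg

print(f"median {np.median(cost_per_kg):.2f} EUR/kg, "
      f"90% interval [{np.percentile(cost_per_kg, 5):.2f}, "
      f"{np.percentile(cost_per_kg, 95):.2f}]")
```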

  7. The uncertainty principle

    International Nuclear Information System (INIS)

    Martens, Hans.

    1991-01-01

    The subject of this thesis is the uncertainty principle (UP). The UP is one of the most characteristic points of difference between quantum and classical mechanics. The starting point of this thesis is the work of Niels Bohr, which is both discussed and analyzed. For the discussion of the different aspects of the UP, the formalism of Davies and Ludwig is used instead of the more commonly used formalism of von Neumann and Dirac. (author). 214 refs.; 23 figs

  8. Uncertainty in artificial intelligence

    CERN Document Server

    Shachter, RD; Henrion, M; Lemmer, JF

    1990-01-01

    This volume, like its predecessors, reflects the cutting edge of research on the automation of reasoning under uncertainty. A more pragmatic emphasis is evident, for although some papers address fundamental issues, the majority address practical issues. Topics include the relations between alternative formalisms (including possibilistic reasoning), Dempster-Shafer belief functions, non-monotonic reasoning, Bayesian and decision theoretic schemes, and new inference techniques for belief nets. New techniques are applied to important problems in medicine, vision, robotics, and natural language understanding

  9. Decision Making Under Uncertainty

    Science.gov (United States)

    2010-11-01

    A sound approach to rational decision making requires a decision maker to establish decision objectives, identify alternatives, and evaluate those...often violate the axioms of rationality when making decisions under uncertainty. The systematic description of such observations may lead to the...which leads to “anchoring” on the initial value. The fact that individuals have been shown to deviate from rationality when making decisions

  10. Economic uncertainty principle?

    OpenAIRE

    Alexander Harin

    2006-01-01

    The economic principle of (hidden) uncertainty is presented. New probability formulas are offered. Examples of solutions of three types of fundamental problems are reviewed.

  11. Calibration Under Uncertainty.

    Energy Technology Data Exchange (ETDEWEB)

    Swiler, Laura Painton; Trucano, Timothy Guy

    2005-03-01

    This report is a white paper summarizing the literature and different approaches to the problem of calibrating computer model parameters in the face of model uncertainty. Model calibration is often formulated as finding the parameters that minimize the squared difference between the model-computed data (the predicted data) and the actual experimental data. This approach does not allow for explicit treatment of uncertainty or error in the model itself: the model is considered the "true" deterministic representation of reality. While this approach does have utility, it is far from an accurate mathematical treatment of the true model calibration problem in which both the computed data and experimental data have error bars. This year, we examined methods to perform calibration accounting for the error in both the computer model and the data, as well as improving our understanding of its meaning for model predictability. We call this approach Calibration under Uncertainty (CUU). This talk presents our current thinking on CUU. We outline some current approaches in the literature, and discuss the Bayesian approach to CUU in detail.
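
    The distinction drawn in this record between classical calibration and CUU can be made concrete with a small sketch. The snippet below is a minimal illustration under invented assumptions (the exponential model, the synthetic data and the noise level are all hypothetical); it shows only the classical least-squares step, which treats the model as the "true" representation. A CUU or Bayesian treatment would additionally place an error model on the simulation itself.

        import numpy as np
        from scipy.optimize import least_squares

        # hypothetical computer model: y = a * (1 - exp(-b * t))
        def model(theta, t):
            a, b = theta
            return a * (1.0 - np.exp(-b * t))

        t = np.linspace(0.0, 10.0, 25)
        rng = np.random.default_rng(1)
        y_obs = model((2.0, 0.7), t) + rng.normal(0.0, 0.05, t.size)  # synthetic "experiment"

        # classical calibration: parameters minimizing the squared data-model mismatch
        fit = least_squares(lambda th: model(th, t) - y_obs, x0=(1.0, 1.0))
        print(fit.x)  # point estimate only; no explicit treatment of model error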

  12. Participation under Uncertainty

    International Nuclear Information System (INIS)

    Boudourides, Moses A.

    2003-01-01

    This essay reviews a number of theoretical perspectives about uncertainty and participation in the present-day knowledge-based society. After discussing the on-going reconfigurations of science, technology and society, we examine how appropriate for policy studies are various theories of social complexity. Post-normal science is such an example of a complexity-motivated approach, which justifies civic participation as a policy response to an increasing uncertainty. But there are different categories and models of uncertainties implying a variety of configurations of policy processes. A particular role in all of them is played by expertise whose democratization is an often-claimed imperative nowadays. Moreover, we discuss how different participatory arrangements are shaped into instruments of policy-making and framing regulatory processes. As participation necessitates and triggers deliberation, we proceed to examine the role and the barriers of deliberativeness. Finally, we conclude by referring to some critical views about the ultimate assumptions of recent European policy frameworks and the conceptions of civic participation and politicization that they invoke

  13. Deterministic uncertainty analysis

    International Nuclear Information System (INIS)

    Worley, B.A.

    1987-12-01

    This paper presents a deterministic uncertainty analysis (DUA) method for calculating uncertainties that has the potential to significantly reduce the number of computer runs compared to conventional statistical analysis. The method is based upon the availability of derivative and sensitivity data such as that calculated using the well-known direct or adjoint sensitivity analysis techniques. Formation of response surfaces using derivative data and the propagation of input probability distributions are discussed relative to their role in the DUA method. A sample problem that models the flow of water through a borehole is used as a basis to compare the cumulative distribution function of the flow rate as calculated by the standard statistical methods and the DUA method. Propagation of uncertainties by the DUA method is compared for ten cases in which the number of reference model runs was varied from one to ten. The DUA method gives a more accurate representation of the true cumulative distribution of the flow rate based upon as few as two model executions, compared to fifty model executions using a statistical approach. 16 refs., 4 figs., 5 tabs
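
    As a rough sketch of the derivative-based idea (not the DUA method itself, which builds response surfaces from direct or adjoint sensitivities), the fragment below propagates input variances through finite-difference derivatives of a hypothetical two-input response and compares the result with brute-force Monte Carlo.

        import numpy as np

        # hypothetical smooth response of two uncertain inputs
        def response(x):
            return x[0] ** 2 / (1.0 + x[1])

        x0 = np.array([3.0, 1.5])         # nominal input values
        sigma = np.array([0.1, 0.2])      # input standard deviations (independent)

        # finite-difference sensitivities dR/dx_i at the nominal point
        eps = 1e-6
        grad = np.array([(response(x0 + eps * np.eye(2)[i]) - response(x0)) / eps
                         for i in range(2)])

        var_delta = np.sum((grad * sigma) ** 2)   # first-order variance estimate

        # brute-force Monte Carlo for comparison
        rng = np.random.default_rng(0)
        samples = x0 + rng.normal(0.0, 1.0, (20000, 2)) * sigma
        var_mc = np.var([response(s) for s in samples])
        print(np.sqrt(var_delta), np.sqrt(var_mc))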

  14. Analysis of uncertainty in modeling perceived risks

    International Nuclear Information System (INIS)

    Melnyk, R.; Sandquist, G.M.

    2005-01-01

    Expanding on a mathematical model developed for quantifying and assessing perceived risks, the distribution functions, variances, and uncertainties associated with estimating the model parameters are quantified. The analytical model permits the identification and assignment of any number of quantifiable risk perception factors that can be incorporated within standard risk methodology. Those risk perception factors associated with major technical issues are modeled using lognormal probability density functions to span the potentially large uncertainty variations associated with these risk perceptions. The model quantifies the logic of public risk perception and provides an effective means for measuring and responding to perceived risks. (authors)

  15. Uncertainty quantification and stochastic modeling with Matlab

    CERN Document Server

    Souza de Cursi, Eduardo

    2015-01-01

    Uncertainty Quantification (UQ) is a relatively new research area which describes the methods and approaches used to supply quantitative descriptions of the effects of uncertainty, variability and errors in simulation problems and models. It is rapidly becoming a field of increasing importance, with many real-world applications within statistics, mathematics, probability and engineering, but also within the natural sciences. Literature on the topic has up until now been largely based on polynomial chaos, which raises difficulties when considering different types of approximation and does no

  16. Uncertainty of forest carbon stock changes. Implications to the total uncertainty of GHG inventory of Finland

    International Nuclear Information System (INIS)

    Monni, S.; Savolainen, I.; Peltoniemi, M.; Lehtonen, A.; Makipaa, R.; Palosuo, T.

    2007-01-01

    Uncertainty analysis facilitates identification of the most important categories affecting greenhouse gas (GHG) inventory uncertainty and helps in prioritisation of the efforts needed for development of the inventory. This paper presents an uncertainty analysis of GHG emissions of all Kyoto sectors and gases for Finland consolidated with estimates of emissions/removals from LULUCF categories. In Finland, net GHG emissions in 2003 were around 69 Tg (±15 Tg) CO2 equivalents. The uncertainties in forest carbon sink estimates in 2003 were larger than in most other emission categories, but of the same order of magnitude as in carbon stock change estimates in other land use, land-use change and forestry (LULUCF) categories, and in N2O emissions from agricultural soils. Uncertainties in sink estimates of 1990 were lower, due to better availability of data. Results of this study indicate that inclusion of the forest carbon sink to GHG inventories reported to the UNFCCC increases uncertainties in net emissions notably. However, the decrease in precision is accompanied by an increase in the accuracy of the overall net GHG emissions due to improved completeness of the inventory. The results of this study can be utilised when planning future GHG mitigation protocols and emission trading schemes and when analysing environmental benefits of climate conventions

  17. On the relationship between aerosol model uncertainty and radiative forcing uncertainty.

    Science.gov (United States)

    Lee, Lindsay A; Reddington, Carly L; Carslaw, Kenneth S

    2016-05-24

    The largest uncertainty in the historical radiative forcing of climate is caused by the interaction of aerosols with clouds. Historical forcing is not a directly measurable quantity, so reliable assessments depend on the development of global models of aerosols and clouds that are well constrained by observations. However, there has been no systematic assessment of how reduction in the uncertainty of global aerosol models will feed through to the uncertainty in the predicted forcing. We use a global model perturbed parameter ensemble to show that tight observational constraint of aerosol concentrations in the model has a relatively small effect on the aerosol-related uncertainty in the calculated forcing between preindustrial and present-day periods. One factor is the low sensitivity of present-day aerosol to natural emissions that determine the preindustrial aerosol state. However, the major cause of the weak constraint is that the full uncertainty space of the model generates a large number of model variants that are equally acceptable compared to present-day aerosol observations. The narrow range of aerosol concentrations in the observationally constrained model gives the impression of low aerosol model uncertainty. However, these multiple "equifinal" models predict a wide range of forcings. To make progress, we need to develop a much deeper understanding of model uncertainty and ways to use observations to constrain it. Equifinality in the aerosol model means that tuning of a small number of model processes to achieve model-observation agreement could give a misleading impression of model robustness.

  18. Uncertainties in model-independent extractions of amplitudes from complete experiments

    International Nuclear Information System (INIS)

    Hoblit, S.; Sandorfi, A.M.; Kamano, H.; Lee, T.-S.H.

    2012-01-01

    A new generation of over-complete experiments is underway, with the goal of performing a high precision extraction of pseudoscalar meson photo-production amplitudes. Such experimentally determined amplitudes can be used both as a test to validate models and as a starting point for an analytic continuation in the complex plane to search for poles. Of crucial importance for both is the level of uncertainty in the extracted multipoles. We have probed these uncertainties by analyses of pseudo-data for KLambda photoproduction, first for the set of 8 observables that have been published for the K+Lambda channel and then for pseudo-data on a complete set of 16 observables with the uncertainties expected from analyses of ongoing CLAS experiments. In fitting multipoles, we have used a combined Monte Carlo sampling of the amplitude space, with gradient minimization, and have found a shallow χ² valley pitted with a large number of local minima. This results in bands of solutions that are experimentally indistinguishable. All ongoing experiments will measure observables with limited statistics. We have found a dependence on the particular random choice of values of Gaussian distributed pseudo-data, due to the presence of multiple local minima. This results in actual uncertainties for reconstructed multipoles that are often considerably larger than those returned by gradient minimization routines such as Minuit, which find a single local minimum. As intuitively expected, this additional level of uncertainty decreases as larger numbers of observables are included.

  19. Benchmarking NLDAS-2 Soil Moisture and Evapotranspiration to Separate Uncertainty Contributions

    Science.gov (United States)

    Nearing, Grey S.; Mocko, David M.; Peters-Lidard, Christa D.; Kumar, Sujay V.; Xia, Youlong

    2016-01-01

    Model benchmarking allows us to separate uncertainty in model predictions caused by model inputs from uncertainty due to model structural error. We extend this method with a large-sample approach (using data from multiple field sites) to measure prediction uncertainty caused by errors in (i) forcing data, (ii) model parameters, and (iii) model structure, and use it to compare the efficiency of soil moisture state and evapotranspiration flux predictions made by the four land surface models in the North American Land Data Assimilation System Phase 2 (NLDAS-2). Parameters dominated uncertainty in soil moisture estimates and forcing data dominated uncertainty in evapotranspiration estimates; however, the models themselves used only a fraction of the information available to them. This means that there is significant potential to improve all three components of the NLDAS-2 system. In particular, continued work toward refining the parameter maps and look-up tables, the forcing data measurement and processing, and also the land surface models themselves, has potential to result in improved estimates of surface mass and energy balances.

  20. Uncertainty and Sensitivity Analysis of Afterbody Radiative Heating Predictions for Earth Entry

    Science.gov (United States)

    West, Thomas K., IV; Johnston, Christopher O.; Hosder, Serhat

    2016-01-01

    The objective of this work was to perform sensitivity analysis and uncertainty quantification for afterbody radiative heating predictions of the Stardust capsule during Earth entry at peak afterbody radiation conditions. The radiation environment in the afterbody region poses significant challenges for accurate uncertainty quantification and sensitivity analysis due to the complexity of the flow physics, computational cost, and large number of uncertain variables. In this study, first a sparse collocation non-intrusive polynomial chaos approach along with global non-linear sensitivity analysis was used to identify the most significant uncertain variables and reduce the dimensions of the stochastic problem. Then, a total order stochastic expansion was constructed over only the important parameters for an efficient and accurate estimate of the uncertainty in radiation. Based on previous work, 388 uncertain parameters were considered in the radiation model, which came from the thermodynamics, flow field chemistry, and radiation modeling. The sensitivity analysis showed that only four of these variables contributed significantly to afterbody radiation uncertainty, accounting for almost 95% of the uncertainty. These included the electronic-impact excitation rate for N between levels 2 and 5 and the rates of three chemical reactions influencing the N, N(+), O, and O(+) number densities in the flow field.

  1. arXiv Addendum to: Predictions for Higgs production at the Tevatron and the associated uncertainties

    CERN Document Server

    Baglio, Julien

    2010-01-01

    We update the theoretical predictions for the production cross sections of the Standard Model Higgs boson at the Fermilab Tevatron collider, focusing on the two main search channels, the gluon-gluon fusion mechanism $gg \to H$ and the Higgs-strahlung processes $q \bar q \to VH$ with $V=W/Z$, including all relevant higher order QCD and electroweak corrections in perturbation theory. We then estimate the various uncertainties affecting these predictions: the scale uncertainties which are viewed as a measure of the unknown higher order effects, the uncertainties from the parton distribution functions and the related errors on the strong coupling constant, as well as the uncertainties due to the use of an effective theory approach in the determination of the radiative corrections in the $gg \to H$ process at next-to-next-to-leading order. We find that while the cross sections are well under control in the Higgs-strahlung processes, the theoretical uncertainties are rather large in the case of the gluon-gluon fus...

  2. Uncertainty analysis methods for estimation of reliability of passive system of VHTR

    International Nuclear Information System (INIS)

    Han, S.J.

    2012-01-01

    An estimation of the reliability of passive systems for the probabilistic safety assessment (PSA) of a very high temperature reactor (VHTR) is under development in Korea. The essential approach of this estimation is to measure the uncertainty of the system performance under a specific accident condition. The uncertainty propagation approach based on the simulation of phenomenological models (computer codes) is adopted as a typical method to estimate the uncertainty for this purpose. This presentation introduced the uncertainty propagation and discussed the related issues, focusing on the propagation object and its surrogates. To achieve a sufficient level of depth of the uncertainty results, the applicability of the propagation should be carefully reviewed. For an example study, the Latin-hypercube sampling (LHS) method was tested as a direct propagation for a specific accident sequence of a VHTR. The reactor cavity cooling system (RCCS) developed by KAERI was considered for this example study. This is an air-cooled passive system that has no active components for its operation. The accident sequence is a low pressure conduction cooling (LPCC) accident, which is considered a design basis accident for the safety design of the VHTR. This sequence is due to a large failure of the pressure boundary of the reactor system, such as a guillotine break of the coolant pipe lines. The presentation discussed the insights obtained (benefits and weaknesses) in applying the estimation of the reliability of the passive system
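
    For readers unfamiliar with the sampling step, the fragment below sketches plain Latin-hypercube sampling over a few uncertain inputs. The input names, ranges and run count are invented for illustration and are unrelated to the actual RCCS model; 59 runs is shown only because it is a common first-order 95/95 Wilks sample size in this kind of study.

        import numpy as np

        def latin_hypercube(n, d, rng):
            # one point per equal-probability stratum, independently shuffled per dimension
            u = (np.arange(n)[:, None] + rng.random((n, d))) / n
            for j in range(d):
                u[:, j] = rng.permutation(u[:, j])
            return u

        rng = np.random.default_rng(42)
        u = latin_hypercube(59, 3, rng)

        # scale to hypothetical input ranges, e.g. (power %, emissivity, air temperature)
        lo = np.array([95.0, 0.70, 10.0])
        hi = np.array([105.0, 0.90, 40.0])
        inputs = lo + u * (hi - lo)        # each row is one code run's input set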

  3. LOFT uncertainty-analysis methodology

    International Nuclear Information System (INIS)

    Lassahn, G.D.

    1983-01-01

    The methodology used for uncertainty analyses of measurements in the Loss-of-Fluid Test (LOFT) nuclear-reactor-safety research program is described and compared with other methodologies established for performing uncertainty analyses

  4. LOFT uncertainty-analysis methodology

    International Nuclear Information System (INIS)

    Lassahn, G.D.

    1983-01-01

    The methodology used for uncertainty analyses of measurements in the Loss-of-Fluid Test (LOFT) nuclear reactor safety research program is described and compared with other methodologies established for performing uncertainty analyses

  5. Uncertainty in visual processes predicts geometrical optical illusions.

    Science.gov (United States)

    Fermüller, Cornelia; Malm, Henrik

    2004-03-01

    It is proposed in this paper that many geometrical optical illusions, as well as illusory patterns due to motion signals in line drawings, are due to the statistics of visual computations. The interpretation of image patterns is preceded by a step where image features such as lines, intersections of lines, or local image movement must be derived. However, there are many sources of noise or uncertainty in the formation and processing of images, and they cause problems in the estimation of these features; in particular, they cause bias. As a result, the locations of features are perceived erroneously and the appearance of the patterns is altered. The bias occurs with any visual processing of line features; under average conditions it is not large enough to be noticeable, but illusory patterns are such that the bias is highly pronounced. Thus, the broader message of this paper is that there is a general uncertainty principle which governs the workings of vision systems, and optical illusions are an artifact of this principle.

  6. Assessment of uncertainties in core melt phenomenology and their impact on risk at the Z/IP facilities

    International Nuclear Information System (INIS)

    Pratt, W.T.; Ludewig, H.; Bari, R.A.; Meyer, J.F.

    1983-01-01

    An evaluation of core meltdown accidents in the Z/IP facilities has been performed. Containment event trees have been developed to relate the progression of a given accident to various potential containment building failure modes. An extensive uncertainty analysis related to core melt phenomenology has been performed. A major conclusion of the study is that large variations in parameters associated with major phenomenological uncertainties have a relatively minor impact on risk when external initiators are considered. This is due to the inherent capability of the Z/IP containment buildings to contain a wide range of core meltdown accidents. 12 references, 2 tables

  7. Do oil shocks predict economic policy uncertainty?

    Science.gov (United States)

    Rehman, Mobeen Ur

    2018-05-01

    Oil price fluctuations play an influential role in global economic policies for developed as well as emerging countries. I investigate the role of international oil prices disintegrated into structural (i) oil supply shocks, (ii) aggregate demand shocks and (iii) oil market specific demand shocks, based on the work of Kilian (2009) using a structural VAR framework, on the economic policy uncertainty of the sampled markets. Economic policy uncertainty, due to its non-linear behavior, is modeled in a regime switching framework with disintegrated structural oil shocks. Our results highlight that Indian, Spanish and Japanese economic policy uncertainty responds to the global oil price shocks, whereas aggregate demand shocks fail to induce any change. Oil specific demand shocks are significant only for China and India in the high volatility state.

  8. Sensitivity and uncertainty analysis for fission product decay heat calculations

    International Nuclear Information System (INIS)

    Rebah, J.; Lee, Y.K.; Nimal, J.C.; Nimal, B.; Luneville, L.; Duchemin, B.

    1994-01-01

    The calculated uncertainty in decay heat due to the uncertainty in the basic nuclear data given in the CEA86 Library is presented. Uncertainties in the summation calculation arise from several sources: fission product yields, half-lives and average decay energies. The correlation between basic data is taken into account. The uncertainty analyses were obtained for thermal-neutron-induced fission of U235 and Pu239 in the cases of burst fission and finite irradiation time. The decay heat calculated in this study is compared with experimental results and with a new calculation using the JEF2 Library. (from authors) 6 figs., 19 refs

  9. Understanding Climate Uncertainty with an Ocean Focus

    Science.gov (United States)

    Tokmakian, R. T.

    2009-12-01

    Uncertainty in climate simulations arises from various aspects of the end-to-end process of modeling the Earth's climate. First, there is uncertainty from the structure of the climate model components (e.g. ocean/ice/atmosphere). Even the most complex models are deficient, not only in the complexity of the processes they represent, but in which processes are included in a particular model. Next, uncertainties arise from the inherent error in the initial and boundary conditions of a simulation. Initial conditions describe the state of the weather or climate at the beginning of the simulation and typically come from observations. Finally, there is the uncertainty associated with the values of parameters in the model. These parameters may represent physical constants or effects, such as ocean mixing, or non-physical aspects of modeling and computation. The uncertainty in these input parameters propagates through the non-linear model to give uncertainty in the outputs. The models in 2020 will no doubt be better than today's models, but they will still be imperfect, and development of uncertainty analysis technology is a critical aspect of understanding model realism and prediction capability. Smith [2002] and Cox and Stephenson [2007] discuss the need for methods to quantify the uncertainties within complicated systems so that limitations or weaknesses of the climate model can be understood. In making climate predictions, we need to have available both the most reliable model or simulation and methods to quantify the reliability of a simulation. If quantitative uncertainty questions of the internal model dynamics are to be answered with complex simulations such as AOGCMs, then the only known path forward is based on model ensembles that characterize behavior with alternative parameter settings [e.g. Rougier, 2007]. The relevance and feasibility of using "Statistical Analysis of Computer Code Output" (SACCO) methods for examining uncertainty in

  10. Uncertainties in Climatological Seawater Density Calculations

    Science.gov (United States)

    Dai, Hao; Zhang, Xining

    2018-03-01

    In most applications, with seawater conductivity, temperature, and pressure data measured in situ by various observation instruments, e.g., Conductivity-Temperature-Depth instruments (CTD), the density, which has strong ties to ocean dynamics, is computed according to equations of state for seawater. This paper, based on the density computational formulae in the Thermodynamic Equation of Seawater 2010 (TEOS-10), follows the Guide to the expression of Uncertainty in Measurement (GUM) and assesses the main sources of uncertainties. By virtue of climatological decades-average temperature/Practical Salinity/pressure data sets in the global ocean provided by the National Oceanic and Atmospheric Administration (NOAA), correlation coefficients between uncertainty sources are determined and the combined standard uncertainties u_c(ρ) in seawater density calculations are evaluated. For grid points in the world ocean with 0.25° resolution, the standard deviations of u_c(ρ) in vertical profiles are of the order of magnitude of 10^-4 kg m^-3. The u_c(ρ) means in vertical profiles of the Baltic Sea are about 0.028 kg m^-3 due to the larger scatter of the Absolute Salinity anomaly. The distribution of the u_c(ρ) means in vertical profiles of the world ocean except for the Baltic Sea, which covers the range of (0.004, 0.01) kg m^-3, is related to the correlation coefficient r(S_A, p) between Absolute Salinity S_A and pressure p. The results in the paper are based on the measuring uncertainties of high accuracy CTD sensors. Larger uncertainties in density calculations may arise if connected with lower sensor specifications. This work may provide valuable uncertainty information required for reliability considerations of ocean circulation and global climate models.
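
    The GUM combination step used in such an assessment can be sketched in a few lines. In the fragment below the sensitivity coefficients, standard uncertainties and correlation matrix are placeholder numbers rather than TEOS-10 values; the point is only the law of propagation of uncertainty with correlated inputs.

        import numpy as np

        def combined_standard_uncertainty(c, u, r):
            # GUM law of propagation: u_c^2 = c^T (r * outer(u, u)) c,
            # where c are sensitivity coefficients, u standard uncertainties,
            # and r the correlation matrix of the inputs
            cov = r * np.outer(u, u)
            return float(np.sqrt(c @ cov @ c))

        # hypothetical sensitivities of density to (S_A, T, p) at one grid point
        c = np.array([0.78, -0.21, 4.5e-3])
        u = np.array([0.004, 0.002, 1.0])    # standard uncertainties of the inputs
        r = np.array([[1.0, 0.1, 0.3],
                      [0.1, 1.0, 0.0],
                      [0.3, 0.0, 1.0]])      # correlation coefficients
        print(combined_standard_uncertainty(c, u, r))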

  11. A New Framework for Quantifying Lidar Uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Newman, Jennifer, F.; Clifton, Andrew; Bonin, Timothy A.; Churchfield, Matthew J.

    2017-03-24

    As wind turbine sizes increase and wind energy expands to more complex and remote sites, remote sensing devices such as lidars are expected to play a key role in wind resource assessment and power performance testing. The switch to remote sensing devices represents a paradigm shift in the way the wind industry typically obtains and interprets measurement data for wind energy. For example, the measurement techniques and sources of uncertainty for a remote sensing device are vastly different from those associated with a cup anemometer on a meteorological tower. Current IEC standards discuss uncertainty due to mounting, calibration, and classification of the remote sensing device, among other parameters. Values of the uncertainty are typically given as a function of the mean wind speed measured by a reference device. However, real-world experience has shown that lidar performance is highly dependent on atmospheric conditions, such as wind shear, turbulence, and aerosol content. At present, these conditions are not directly incorporated into the estimated uncertainty of a lidar device. In this presentation, we propose the development of a new lidar uncertainty framework that adapts to current flow conditions and more accurately represents the actual uncertainty inherent in lidar measurements under different conditions. In this new framework, sources of uncertainty are identified for estimation of the line-of-sight wind speed and reconstruction of the three-dimensional wind field. These sources are then related to physical processes caused by the atmosphere and lidar operating conditions. The framework is applied to lidar data from an operational wind farm to assess the ability of the framework to predict errors in lidar-measured wind speed.

  12. Small break LOCA RELAP5/MOD3 uncertainty quantification: Bias and uncertainty evaluation for important phenomena

    International Nuclear Information System (INIS)

    Ortiz, M.G.; Ghan, L.S.; Vogl, J.

    1991-01-01

    The Nuclear Regulatory Commission (NRC) revised the Emergency Core Cooling System (ECCS) licensing rule to allow the use of Best Estimate (BE) computer codes, provided the uncertainty of the calculations is quantified and used in the licensing and regulation process. The NRC developed a generic methodology called Code Scaling, Applicability and Uncertainty (CSAU) to evaluate BE code uncertainties. The CSAU methodology was demonstrated with a specific application to a pressurized water reactor (PWR) experiencing a postulated large break loss-of-coolant accident (LBLOCA). The current work is part of an effort to adapt and demonstrate the CSAU methodology for a small break (SB) LOCA in a PWR of B and W design, using RELAP5/MOD3 as the simulation tool. The subject of this paper is the Assessment and Ranging of Parameters (Element 2 of the CSAU methodology), which determines the contribution to uncertainty of specific models in the code

  13. Do Orthopaedic Surgeons Acknowledge Uncertainty?

    NARCIS (Netherlands)

    Teunis, Teun; Janssen, Stein; Guitton, Thierry G.; Ring, David; Parisien, Robert

    2016-01-01

    Much of the decision-making in orthopaedics rests on uncertain evidence. Uncertainty is therefore part of our normal daily practice, and yet physician uncertainty regarding treatment could diminish patients' health. It is not known if physician uncertainty is a function of the evidence alone or if

  14. Assignment of uncertainties to scientific data

    International Nuclear Information System (INIS)

    Froehner, F.H.

    1994-01-01

    Long-standing problems of uncertainty assignment to scientific data came into sharp focus in recent years when uncertainty information ('covariance files') had to be added to application-oriented large libraries of evaluated nuclear data such as ENDF and JEF. Questions arose about the best way to express uncertainties, the meaning of statistical and systematic errors, the origin of correlations and the construction of covariance matrices, the combination of uncertain data from different sources, the general usefulness of results that are strictly valid only for Gaussian or only for linear statistical models, etc. Conventional statistical theory is often unable to give unambiguous answers, and tends to fail when statistics are poor so that prior information becomes crucial. Modern probability theory, on the other hand, incorporating decision-theoretic and group-theoretic results, is shown to provide straight and unique answers to such questions, and to deal easily with prior information and small samples. (author). 10 refs

  15. Assessing flood forecast uncertainty with fuzzy arithmetic

    Directory of Open Access Journals (Sweden)

    de Bruyn Bertrand

    2016-01-01

    Forecasts of flow rates and water levels during floods have to be accompanied by uncertainty estimates. The sources of forecast uncertainty are plural. For hydrological forecasts (rainfall-runoff) performed using a deterministic hydrological model with basic physics, two main sources can be identified. The first obvious source is the forcing data: rainfall forecast data are supplied in real time by meteorological forecasting services to the Flood Forecasting Service within a range between a lowest and a highest predicted discharge. These two values define an uncertainty interval for the rainfall variable provided on a given watershed. The second source of uncertainty is related to the complexity of the modeled system (the catchment impacted by the hydro-meteorological phenomenon), the number of variables that may describe the problem and their spatial and time variability. The model simplifies the system by reducing the number of variables to a few parameters. Thus it contains an intrinsic uncertainty. This model uncertainty is assessed by comparing simulated and observed flow rates for a large number of hydro-meteorological events. We propose a method based on fuzzy arithmetic to estimate the possible range of flow rates (and levels of water) for a forecast, based on the possible rainfalls provided by the forcing and on the model uncertainty. The model uncertainty is here expressed as a range of possible values. Both rainfall and model uncertainties are combined with fuzzy arithmetic. This method allows evaluation of the prediction uncertainty range. The Flood Forecasting Service of the Oise and Aisne rivers, in particular, monitors the upstream watershed of the Oise at Hirson. This watershed's area is 310 km2. Its response time is about 10 hours. Several hydrological models are calibrated for flood forecasting in this watershed and use the rainfall forecast. This method has the advantage of being easily implemented. Moreover, it can be carried out
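
    The fuzzy-arithmetic idea can be illustrated with triangular fuzzy numbers and alpha-cuts. Everything numerical below is invented (the rainfall bounds, the power-law runoff relation and the model-uncertainty factor); it only shows how interval bounds propagate through a monotone model at each membership level.

        import numpy as np

        def tri_alpha_cut(lo, mode, hi, alpha):
            # alpha-cut of a triangular fuzzy number: an interval that shrinks
            # from [lo, hi] at alpha = 0 to the single mode value at alpha = 1
            return lo + alpha * (mode - lo), hi - alpha * (hi - mode)

        def runoff(rain, k):
            # hypothetical monotone rainfall-runoff relation
            return k * rain ** 1.2

        for alpha in (0.0, 0.5, 1.0):
            r_lo, r_hi = tri_alpha_cut(20.0, 35.0, 60.0, alpha)  # forecast rainfall, mm
            k_lo, k_hi = tri_alpha_cut(0.8, 1.0, 1.3, alpha)     # model uncertainty factor
            # runoff is increasing in both arguments, so interval ends map to ends
            print(alpha, runoff(r_lo, k_lo), runoff(r_hi, k_hi))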

  16. Uncertainties in risk assessment and decision making

    International Nuclear Information System (INIS)

    Starzec, Peter; Purucker, Tom; Stewart, Robert

    2008-02-01

    confidence interval under different assumptions regarding the data structure. The results stress the importance of invoking statistical methods and also illustrate how the choice of a wrong methodology may affect the quality of risk assessment and the foundations for decision making. The uncertainty in assessing the volume of contaminated soil was shown to depend only to a low extent on the interpolation technique used for the specific case study analyzed. It is, however, expected that the uncertainty may increase significantly if more restrictive risk criteria (a lower guideline value) are applied. Despite a possibly low uncertainty in assessing the contaminated soil volume, the uncertainty in its localization can be substantial. Based on the demo example presented, it turns out that the risk-based input for decisions on soil treatment may vary depending on what assumptions were adopted during the interpolation process. Uncertainty in an ecological exposure model with regard to the movement pattern of a receptor in relation to the spatial distribution of the contaminant has been demonstrated by studies on pronghorn (Antilocapra americana). The results from numerical simulations show that a lack of knowledge of the receptor's movement routes may bring about substantial uncertainty in the exposure assessment. The presented concept is mainly applicable to 'mobile' receptors on relatively large areas. A number of statistical definitions/methods/concepts are presented in the report, of which some are not elaborated on in detail; instead, readers are referred to the relevant literature. The main goal of the study has been to shed more light on aspects related to uncertainty in risk assessment and to demonstrate the potential consequences of a wrong approach, rather than to provide readers with formal guidelines and recommendations. However, the outcome of the study will hopefully contribute to further work on novel approaches towards more reliable risk assessments

  17. Uncertainty in reactive transport geochemical modelling

    International Nuclear Information System (INIS)

    Oedegaard-Jensen, A.; Ekberg, C.

    2005-01-01

    Full text of publication follows: Geochemical modelling is one way of predicting the transport of, e.g., radionuclides in a rock formation. In a rock formation there will be fractures in which water and dissolved species can be transported. The composition of the water and the rock can either increase or decrease the mobility of the transported entities. When doing simulations on the mobility or transport of different species, one has to know the exact water composition, the exact flow rates in the fracture and in the surrounding rock, the porosity, and which minerals the rock is composed of. The problem with simulations on rocks is that the rock itself is not uniform, i.e., larger fractures in some areas and smaller ones in other areas, which can give different water flows. The rock composition can be different in different areas. In addition to this variance in the rock, there are also problems with measuring the physical parameters used in a simulation. All measurements will perturb the rock, and this perturbation will result in more or less correct values of the interesting parameters. The analytical methods used are also encumbered with uncertainties, which in this case are added to the uncertainty from the perturbation of the analysed parameters. When doing simulations, the effects of the uncertainties must be taken into account. As computers get faster and faster, the complexity of the simulated systems is increased, which also increases the uncertainty in the results from the simulations. In this paper we will show how the uncertainties in the different parameters affect the solubility and mobility of different species. Small uncertainties in the input parameters can result in large uncertainties in the end. (authors)

  18. Uncertainty analysis of neutron transport calculation

    International Nuclear Information System (INIS)

    Oka, Y.; Furuta, K.; Kondo, S.

    1987-01-01

    A cross section sensitivity-uncertainty analysis code, SUSD, was developed. The code calculates sensitivity coefficients for one- and two-dimensional transport problems based on first order perturbation theory. Variances and standard deviations of detector responses or design parameters can be obtained using the cross section covariance matrix. The code is able to perform sensitivity-uncertainty analysis for the secondary neutron angular distribution (SAD) and the secondary neutron energy distribution (SED). Covariances of the 6Li and 7Li neutron cross sections in JENDL-3PR1 were evaluated, including SAD and SED. Covariances of Fe and Be were also evaluated. The uncertainties of the tritium breeding ratio, the fast neutron leakage flux and the neutron heating were analysed for four types of blanket concepts for a commercial tokamak fusion reactor. The uncertainty of the tritium breeding ratio was less than 6 percent. Contributions from SAD/SED uncertainties are significant for some parameters. Formulas to estimate the errors of the numerical solution of the transport equation were derived based on perturbation theory. This method enables us to deterministically estimate the numerical errors due to the iterative solution, the spatial discretization and the Legendre polynomial expansion of the transfer cross-sections. The calculational errors of the tritium breeding ratio and the fast neutron leakage flux of the fusion blankets were analysed. (author)
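
    The variance calculation described here is, to first order, the standard "sandwich" formula: a vector of relative sensitivities S and a relative covariance matrix C of the cross sections give the relative response variance as S C S^T. The numbers below are made up purely to show the mechanics.

        import numpy as np

        # hypothetical relative sensitivities of the tritium breeding ratio
        # to three cross sections
        S = np.array([0.45, -0.12, 0.30])

        # hypothetical relative covariance matrix of those cross sections
        C = np.array([[2.5e-3, 5.0e-4, 0.0],
                      [5.0e-4, 1.6e-3, 0.0],
                      [0.0,    0.0,    3.6e-3]])

        rel_var = S @ C @ S        # first-order propagation
        print(np.sqrt(rel_var))    # relative standard deviation of the response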

  19. Uncertainty quantification in flood risk assessment

    Science.gov (United States)

    Blöschl, Günter; Hall, Julia; Kiss, Andrea; Parajka, Juraj; Perdigão, Rui A. P.; Rogger, Magdalena; Salinas, José Luis; Viglione, Alberto

    2017-04-01

    Uncertainty is inherent to flood risk assessments because of the complexity of the human-water system, which is characterised by nonlinearities and interdependencies, because of limited knowledge about system properties and because of cognitive biases in human perception and decision-making. On top of the uncertainty associated with the assessment of the existing risk from extreme events, additional uncertainty arises because of temporal changes in the system due to climate change, modifications of the environment, population growth and the associated increase in assets. Novel risk assessment concepts are needed that take into account all these sources of uncertainty. They should be based on the understanding of how flood extremes are generated and how they change over time. They should also account for the dynamics of risk perception of decision makers and population in the floodplains. In this talk we discuss these novel risk assessment concepts through examples from Flood Frequency Hydrology, Socio-Hydrology and Predictions Under Change. We believe that uncertainty quantification in flood risk assessment should lead to a robust approach of integrated flood risk management aiming at enhancing resilience rather than searching for optimal defense strategies.

  20. Climate change impacts on extreme events in the United States: an uncertainty analysis

    Science.gov (United States)

    Extreme weather and climate events, such as heat waves, droughts and severe precipitation events, have substantial impacts on ecosystems and the economy. However, future climate simulations display large uncertainty in mean changes. As a result, the uncertainty in future changes ...

  1. Uncertainty analysis comes to integrated assessment models for climate change…and conversely

    NARCIS (Netherlands)

    Cooke, R.M.

    2012-01-01

    This article traces the development of uncertainty analysis through three generations punctuated by large methodology investments in the nuclear sector. Driven by a very high perceived legitimation burden, these investments aimed at strengthening the scientific basis of uncertainty quantification.

  2. Quantifying aggregated uncertainty in Plasmodium falciparum malaria prevalence and populations at risk via efficient space-time geostatistical joint simulation.

    Science.gov (United States)

    Gething, Peter W; Patil, Anand P; Hay, Simon I

    2010-04-01

    Risk maps estimating the spatial distribution of infectious diseases are required to guide public health policy from local to global scales. The advent of model-based geostatistics (MBG) has allowed these maps to be generated in a formal statistical framework, providing robust metrics of map uncertainty that enhances their utility for decision-makers. In many settings, decision-makers require spatially aggregated measures over large regions such as the mean prevalence within a country or administrative region, or national populations living under different levels of risk. Existing MBG mapping approaches provide suitable metrics of local uncertainty--the fidelity of predictions at each mapped pixel--but have not been adapted for measuring uncertainty over large areas, due largely to a series of fundamental computational constraints. Here the authors present a new efficient approximating algorithm that can generate for the first time the necessary joint simulation of prevalence values across the very large prediction spaces needed for global scale mapping. This new approach is implemented in conjunction with an established model for P. falciparum allowing robust estimates of mean prevalence at any specified level of spatial aggregation. The model is used to provide estimates of national populations at risk under three policy-relevant prevalence thresholds, along with accompanying model-based measures of uncertainty. By overcoming previously unchallenged computational barriers, this study illustrates how MBG approaches, already at the forefront of infectious disease mapping, can be extended to provide large-scale aggregate measures appropriate for decision-makers.

  3. Investment and uncertainty

    DEFF Research Database (Denmark)

    Greasley, David; Madsen, Jakob B.

    2006-01-01

    A severe collapse of fixed capital formation distinguished the onset of the Great Depression from other investment downturns between the world wars. Using a model estimated for the years 1890-2000, we show that the expected profitability of capital measured by Tobin's q, and the uncertainty surrounding expected profits indicated by share price volatility, were the chief influences on investment levels, and that heightened share price volatility played the dominant role in the crucial investment collapse in 1930. Investment did not simply follow the downward course of income at the onset...

  4. Optimization under Uncertainty

    KAUST Repository

    Lopez, Rafael H.

    2016-01-06

    The goal of this poster is to present the main approaches to the optimization of engineering systems in the presence of uncertainties. We begin by giving an insight into robust optimization. Next, we detail how to deal with probabilistic constraints in optimization, the so-called reliability-based design. Subsequently, we present the risk optimization approach, which includes the expected costs of failure in the objective function. After the basic description of each approach is given, the projects developed by CORE are presented. Finally, the main current topic of research of CORE is described.

  5. Optimizing production under uncertainty

    DEFF Research Database (Denmark)

    Rasmussen, Svend

    This Working Paper derives criteria for optimal production under uncertainty based on the state-contingent approach (Chambers and Quiggin, 2000), and discusses potential problems involved in applying the state-contingent approach in a normative context. The analytical approach uses the concept of state-contingent production functions and a definition of inputs including both sort of input, activity and allocation technology. It also analyses production decisions where production is combined with trading in state-contingent claims such as insurance contracts. The final part discusses

  6. Commonplaces and social uncertainty

    DEFF Research Database (Denmark)

    Lassen, Inger

    2008-01-01

    This article explores the concept of uncertainty in four focus group discussions about genetically modified food. In the discussions, members of the general public interact with food biotechnology scientists while negotiating their attitudes towards genetic engineering. Their discussions offer an example of risk discourse in which the use of commonplaces seems to be a central feature (Myers 2004: 81). My analyses support earlier findings that commonplaces serve important interactional purposes (Barton 1999) and that they are used for mitigating disagreement, for closing topics and for facilitating...

  7. Principles of Uncertainty

    CERN Document Server

    Kadane, Joseph B

    2011-01-01

    An intuitive and mathematical introduction to subjective probability and Bayesian statistics. An accessible, comprehensive guide to the theory of Bayesian statistics, Principles of Uncertainty presents the subjective Bayesian approach, which has played a pivotal role in game theory, economics, and the recent boom in Markov Chain Monte Carlo methods. Both rigorous and friendly, the book contains: introductory chapters examining each new concept or assumption; just-in-time mathematics, the presentation of ideas just before they are applied; summary and exercises at the end of each chapter; discus

  8. Mathematical Analysis of Uncertainty

    Directory of Open Access Journals (Sweden)

    Angel GARRIDO

    2016-01-01

    Classical Logic showed its insufficiencies for solving AI problems early on. The introduction of Fuzzy Logic aims at this problem. There has been research in the conventional Rough direction alone or in the Fuzzy direction alone, and more recently, attempts to combine both into Fuzzy Rough Sets or Rough Fuzzy Sets. We analyse some new and powerful tools in the study of Uncertainty, such as Probabilistic Graphical Models, Chain Graphs, Bayesian Networks, and Markov Networks, integrating our knowledge of graphs and probability.

  9. Durability reliability analysis for corroding concrete structures under uncertainty

    Science.gov (United States)

    Zhang, Hao

    2018-02-01

    This paper presents a durability reliability analysis of reinforced concrete structures subject to the action of marine chloride. The focus is to provide insight into the role of epistemic uncertainties on durability reliability. The corrosion model involves a number of variables whose probabilistic characteristics cannot be fully determined due to the limited availability of supporting data. All sources of uncertainty, both aleatory and epistemic, should be included in the reliability analysis. Two methods are available to formulate the epistemic uncertainty: the imprecise probability-based method and the purely probabilistic method in which the epistemic uncertainties are modeled as random variables. The paper illustrates how the epistemic uncertainties are modeled and propagated in the two methods, and shows how epistemic uncertainties govern the durability reliability.

  10. Uncertainty Quantification in High Throughput Screening ...

    Science.gov (United States)

    Using uncertainty quantification, we aim to improve the quality of modeling data from high throughput screening assays for use in risk assessment. ToxCast is a large-scale screening program that analyzes thousands of chemicals using over 800 assays representing hundreds of biochemical and cellular processes, including endocrine disruption, cytotoxicity, and zebrafish development. Over 2.6 million concentration-response curves are fit to models to extract parameters related to potency and efficacy. Models built on ToxCast results are being used to rank and prioritize the toxicological risk of tested chemicals and to predict the toxicity of tens of thousands of chemicals not yet tested in vivo. However, the data size also presents challenges. When fitting the data, the choice of models, model selection strategy, and hit call criteria must reflect the need for computational efficiency and robustness, requiring hard and somewhat arbitrary cutoffs. When coupled with unavoidable noise in the experimental concentration-response data, these hard cutoffs cause uncertainty in model parameters and the hit call itself. The uncertainty will then propagate through all of the models built on the data. Left unquantified, this uncertainty makes it difficult to fully interpret the data for risk assessment. We used bootstrap resampling methods to quantify the uncertainty in fitting models to the concentration-response data. Bootstrap resampling determines confidence intervals for
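
    A bootstrap of a concentration-response fit can be sketched as follows. The Hill-type curve, the synthetic data and the fixed slope are illustrative assumptions, not the ToxCast pipeline; the pattern is simply refit-on-resample, then read percentile intervals off the resampled fits.

        import numpy as np
        from scipy.optimize import curve_fit

        def hill(c, top, ac50):
            # simplified Hill curve with slope fixed at 1
            return top * c / (ac50 + c)

        conc = np.logspace(-2, 2, 8)
        rng = np.random.default_rng(7)
        resp = hill(conc, 80.0, 1.5) + rng.normal(0.0, 8.0, conc.size)

        boot_ac50 = []
        for _ in range(1000):
            idx = rng.integers(0, conc.size, conc.size)  # resample pairs with replacement
            try:
                p, _ = curve_fit(hill, conc[idx], resp[idx], p0=(100.0, 1.0), maxfev=2000)
                boot_ac50.append(p[1])
            except RuntimeError:
                continue                                  # skip non-converged resamples

        print(np.percentile(boot_ac50, [2.5, 97.5]))      # 95% interval for the AC50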

  11. Uncertainties in Organ Burdens Estimated from PAS

    International Nuclear Information System (INIS)

    La Bone, T.R.

    2004-01-01

    To calculate committed effective dose equivalent, one needs to know the quantity of the radionuclide in all significantly irradiated organs (the organ burden) as a function of time following the intake. There are two major sources of uncertainty in an organ burden estimated from personal air sampling (PAS) data: (1) the uncertainty in going from the exposure measured with the PAS to the quantity of aerosol inhaled by the individual, and (2) the uncertainty in going from the intake to the organ burdens at any given time, taking into consideration the biological variability of the biokinetic models from person to person (inter-person variability) and in one person over time (intra-person variability). We have been using biokinetic modeling methods developed by researchers at the University of Florida to explore the impact of inter-person variability on the uncertainty of organ burdens estimated from PAS data. These initial studies suggest that the uncertainties are so large that PAS might be considered to be a qualitative (rather than quantitative) technique. These results indicate that more studies should be performed to properly classify the reliability and usefulness of using PAS monitoring data to estimate organ burdens, organ dose, and ultimately CEDE
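
    The size of source (2) can be illustrated with a deliberately crude one-compartment retention model. The intake value, the clearance-rate distribution and the time point below are all assumed for illustration and bear no relation to the actual biokinetic models referred to in the record.

        import numpy as np

        rng = np.random.default_rng(5)
        intake = 100.0                                    # Bq, taken as known here
        # inter-person variability of the clearance rate (1/day), assumed lognormal
        lam = rng.lognormal(mean=np.log(0.05), sigma=0.5, size=10000)

        t = 30.0                                          # days after intake
        burden = intake * np.exp(-lam * t)                # one-compartment retention
        print(np.percentile(burden, [5, 50, 95]))         # wide spread from biokinetics alone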

  12. Problems due to icing of overhead lines - Part II

    International Nuclear Information System (INIS)

    Havard, D.G.; Pon, C.J.; Krishnasamy, S.G.

    1985-01-01

    A companion paper describes uncertainties in overhead line design due to the variability of ice and wind loads. This paper reviews two other effects due to icing; conductor galloping and torsional instability, which require further study. (author)

  13. Damage assessment of composite plate structures with material and measurement uncertainty

    Science.gov (United States)

    Chandrashekhar, M.; Ganguli, Ranjan

    2016-06-01

    Composite materials are very useful in structural engineering, particularly in weight sensitive applications. Two different test models of the same structure made from composite materials can display very different dynamic behavior due to large uncertainties associated with composite material properties. Also, composite structures can suffer from pre-existing imperfections like delaminations, voids or cracks introduced during fabrication. In this paper, we show that modeling and material uncertainties in composite structures can cause considerable problems in damage assessment. A recently developed C0 shear deformable locking-free refined composite plate element is employed in the numerical simulations to alleviate modeling uncertainty. A qualitative estimate of the impact of modeling uncertainty on the damage detection problem is made. A robust Fuzzy Logic System (FLS) with a sliding window defuzzifier is used for delamination damage detection in composite plate type structures. The FLS is designed using variations in modal frequencies due to randomness in material properties. Probabilistic analysis is performed using Monte Carlo Simulation (MCS) on a composite plate finite element model. It is demonstrated that the FLS shows excellent robustness in delamination detection at very high levels of randomness in the input data.

  14. An Efficient Deterministic Approach to Model-based Prediction Uncertainty

    Data.gov (United States)

    National Aeronautics and Space Administration — Prognostics deals with the prediction of the end of life (EOL) of a system. EOL is a random variable, due to the presence of process noise and uncertainty in the...

  15. Uncertainty Principles on Two Step Nilpotent Lie Groups

    Indian Academy of Sciences (India)

    Abstract. We extend an uncertainty principle due to Cowling and Price to two step nilpotent Lie groups, which generalizes a classical theorem of Hardy. We also prove an analogue of Heisenberg inequality on two step nilpotent Lie groups.

  16. A review on the CIRCE methodology to quantify the uncertainty of the physical models of a code

    International Nuclear Information System (INIS)

    Jeon, Seong Su; Hong, Soon Joon; Bang, Young Seok

    2012-01-01

    In the field of nuclear engineering, recent regulatory audit calculations of large break loss of coolant accidents (LBLOCA) have been performed with best estimate codes such as MARS, RELAP5 and CATHARE. Since a credible regulatory audit calculation is very important in the evaluation of the safety of a nuclear power plant (NPP), there has been much research to develop rules and methodologies for the use of best estimate codes. One of the major points is to develop a best estimate plus uncertainty (BEPU) method for uncertainty analysis. As a representative BEPU method, the NRC proposes the CSAU (Code Scaling, Applicability and Uncertainty) methodology, which clearly identifies the different steps necessary for an uncertainty analysis. The general idea is 1) to determine all the sources of uncertainty in the code, also called basic uncertainties, 2) to quantify them and 3) to combine them in order to obtain the final uncertainty for the studied application. Using an uncertainty analysis such as the CSAU methodology, an uncertainty band for the code response (calculation result) that is important from the safety point of view is calculated, and the safety margin of the NPP is quantified. An example of such a response is the peak cladding temperature (PCT) for a LBLOCA. However, there is a problem in the uncertainty analysis with best estimate codes. Generally, it is very difficult to determine the uncertainties due to the empiricism of closure laws (also called correlations or constitutive relationships). So far the only proposed approach is based on expert judgment. In this case, the uncertainty ranges of important parameters can be wide and inaccurate, so that the confidence level of the BEPU calculation results can be decreased. In order to solve this problem, CEA (France) recently proposed a statistical method of data analysis, called CIRCE. The CIRCE method is intended to quantify the uncertainties of the correlations of a code. It may replace expert judgment
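
    At the risk of oversimplifying, the idea behind CIRCE can be caricatured in a few lines: treat the experiment-to-calculation ratio of a closure correlation as a lognormal multiplier, estimate its bias and spread from data, and sample that multiplier in each BEPU code run. The real method works on indirect responses via an expectation-maximization algorithm; the ratios below are invented.

        import numpy as np

        # hypothetical experiment/calculation ratios for one closure correlation
        ratios = np.array([1.05, 0.92, 1.10, 0.98, 1.07, 0.95])

        log_b = np.log(ratios)
        mu, sig = log_b.mean(), log_b.std(ddof=1)   # bias and spread in log space

        # lognormal multiplier to apply to the correlation in each BEPU code run
        rng = np.random.default_rng(3)
        multiplier = np.exp(rng.normal(mu, sig, size=1000))
        print(multiplier.mean(), multiplier.std())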

  17. Uncertainty information in climate data records from Earth observation

    Science.gov (United States)

    Merchant, Christopher J.; Paul, Frank; Popp, Thomas; Ablain, Michael; Bontemps, Sophie; Defourny, Pierre; Hollmann, Rainer; Lavergne, Thomas; Laeng, Alexandra; de Leeuw, Gerrit; Mittaz, Jonathan; Poulsen, Caroline; Povey, Adam C.; Reuter, Max; Sathyendranath, Shubha; Sandven, Stein; Sofieva, Viktoria F.; Wagner, Wolfgang

    2017-07-01

    The question of how to derive and present uncertainty information in climate data records (CDRs) has received sustained attention within the European Space Agency Climate Change Initiative (CCI), a programme to generate CDRs addressing a range of essential climate variables (ECVs) from satellite data. Here, we review the nature, mathematics, practicalities, and communication of uncertainty information in CDRs from Earth observations. This review paper argues that CDRs derived from satellite-based Earth observation (EO) should include rigorous uncertainty information to support the application of the data in contexts such as policy, climate modelling, and numerical weather prediction reanalysis. Uncertainty, error, and quality are distinct concepts, and the case is made that CDR products should follow international metrological norms for presenting quantified uncertainty. As a baseline for good practice, total standard uncertainty should be quantified per datum in a CDR, meaning that uncertainty estimates should clearly discriminate more and less certain data. In this case, flags for data quality should not duplicate uncertainty information, but instead describe complementary information (such as the confidence in the uncertainty estimate provided or indicators of conditions violating the retrieval assumptions). The paper discusses the many sources of error in CDRs, noting that different errors may be correlated across a wide range of timescales and space scales. Error effects that contribute negligibly to the total uncertainty in a single-satellite measurement can be the dominant sources of uncertainty in a CDR on the large space scales and long timescales that are highly relevant for some climate applications. For this reason, identifying and characterizing the relevant sources of uncertainty for CDRs is particularly challenging. The characterization of uncertainty caused by a given error effect involves assessing the magnitude of the effect, the shape of the
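
    The scale-dependence point admits a compact worked example. Under an equal-correlation error model, the standard uncertainty of an n-datum average is u(mean) = σ·sqrt((1−ρ)/n + ρ), so even a 1% inter-datum error correlation ρ puts a floor under the uncertainty of a large-scale mean that no amount of averaging removes (the numbers below are illustrative, not from any CCI product):

        import numpy as np

        n, sigma = 1000, 0.5           # data averaged; per-datum standard uncertainty
        for rho in (0.0, 0.01, 0.1):   # assumed inter-datum error correlation
            u_mean = sigma * np.sqrt((1 - rho) / n + rho)
            print(f"rho={rho:5.2f}: u(mean) = {u_mean:.4f}")

    For ρ = 0 the uncertainty shrinks as 1/sqrt(n); for ρ = 0.01 it saturates near σ/10 regardless of n, which is why error effects negligible per datum can dominate a CDR at climate scales.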

  18. The importance of input interactions in the uncertainty and sensitivity analysis of nuclear fuel behavior

    Energy Technology Data Exchange (ETDEWEB)

    Ikonen, T., E-mail: timo.ikonen@vtt.fi; Tulkki, V.

    2014-08-15

    Highlights: • Uncertainty and sensitivity analysis of modeled nuclear fuel behavior is performed. • Burnup dependency of the uncertainties and sensitivities is characterized. • Input interactions significantly increase output uncertainties for irradiated fuel. • Identification of uncertainty sources is greatly improved with higher order methods. • Results stress the importance of using methods that take interactions into account. - Abstract: The propagation of uncertainties in a PWR fuel rod under steady-state irradiation is analyzed by computational means. A hypothetical steady-state scenario of the Three Mile Island 1 reactor fuel rod is modeled with the fuel performance code FRAPCON, using realistic input uncertainties for the fabrication and model parameters, boundary conditions and material properties. The uncertainty and sensitivity analysis is performed by extensive Monte Carlo sampling of the inputs’ probability distributions and by applying correlation coefficient and Sobol’ variance decomposition analyses. The latter includes evaluation of the second order and total effect sensitivity indices, allowing the study of interactions between input variables. The results show that the interactions play a large role in the propagation of uncertainties, and first order methods such as the correlation coefficient analyses are in general insufficient for sensitivity analysis of the fuel rod. Significant improvement over the first order methods can be achieved by using higher order methods. The results also show that both the magnitude of the uncertainties and their propagation depend not only on the output in question, but also on burnup. The latter is due to the onset of new phenomena (such as fission gas release) and the gradual closure of the pellet-cladding gap with increasing burnup. Increasing burnup also affects the importance of input interactions. Interaction effects are typically highest in the moderate burnup (of the order of 10–40 MWd
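
    For readers unfamiliar with the machinery, the standard Saltelli/Jansen pick-and-freeze estimators for first-order and total-effect Sobol’ indices take only a few lines. The toy response below (a deliberate X2·X3 interaction standing in for the fuel code, with all inputs standard normal) shows how the gap between S1 and ST exposes interactions that correlation-coefficient analyses miss:

        import numpy as np

        rng = np.random.default_rng(1)

        def model(X):
            # Toy response with an interaction term (stand-in for the fuel code).
            return X[:, 0] + X[:, 1] * X[:, 2]

        d, N = 3, 100_000
        A, B = rng.standard_normal((N, d)), rng.standard_normal((N, d))
        yA, yB = model(A), model(B)
        varY = np.var(np.concatenate([yA, yB]))

        for i in range(d):
            ABi = A.copy()
            ABi[:, i] = B[:, i]                 # freeze column i from the B sample
            yABi = model(ABi)
            S1 = np.mean(yB * (yABi - yA)) / varY        # first-order index
            ST = 0.5 * np.mean((yA - yABi) ** 2) / varY  # total-effect index
            print(f"x{i+1}: S1 = {S1:.3f}, ST = {ST:.3f}")

    Here x2 and x3 have S1 near zero but ST near 0.5: invisible to first-order methods, dominant once interactions are counted, which is exactly the paper's burnup-dependent finding in miniature.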

  19. Risk Assessment Uncertainties in Cybersecurity Investments

    Directory of Open Access Journals (Sweden)

    Andrew Fielder

    2018-06-01

    Full Text Available When undertaking cybersecurity risk assessments, it is important to be able to assign numeric values to metrics to compute the final expected loss that represents the risk that an organization is exposed to due to cyber threats. Even if risk assessment is motivated by real-world observations and data, there is always a high chance of assigning inaccurate values due to the different uncertainties involved (e.g., evolving threat landscape, human errors and the natural difficulty of quantifying risk). Existing models empower organizations to compute optimal cybersecurity strategies given their financial constraints, i.e., available cybersecurity budget. Further, a general game-theoretic model with uncertain payoffs (probability-distribution-valued payoffs) shows that such uncertainty can be incorporated in the game-theoretic model by allowing payoffs to be random. This paper extends previous work in the field to tackle uncertainties in risk assessment that affect cybersecurity investments. The findings from simulated examples indicate that although uncertainties in cybersecurity risk assessment lead, on average, to different cybersecurity strategies, they do not play a significant role in the final expected loss of the organization when utilising a game-theoretic model and methodology to derive these strategies. The model determines robust defending strategies even when knowledge regarding risk assessment values is not accurate. As a result, it is possible to show that the cybersecurity investment tool is capable of providing effective decision support.
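
    As a sketch of the game-theoretic core (the loss matrix is invented, and the paper's model is richer, with budgets and probability-distribution-valued payoffs), the defender's robust mixed strategy against a worst-case attacker can be computed as a minimax linear program:

        import numpy as np
        from scipy.optimize import linprog

        # Hypothetical expected-loss matrix: rows = defender controls, cols = attacks.
        L = np.array([[10.0, 40.0, 30.0],
                      [25.0, 15.0, 35.0],
                      [30.0, 30.0, 12.0]])

        nd, na = L.shape
        # Variables: p_1..p_nd (mixed defense) and v (worst-case expected loss).
        c    = np.r_[np.zeros(nd), 1.0]            # minimize v
        A_ub = np.c_[L.T, -np.ones(na)]            # L^T p - v <= 0 for every attack
        b_ub = np.zeros(na)
        A_eq = np.r_[np.ones(nd), 0.0][None, :]    # probabilities sum to 1
        b_eq = [1.0]
        bounds = [(0, None)] * nd + [(None, None)]

        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
        p, v = res.x[:nd], res.x[nd]
        print("robust defense mix:", np.round(p, 3), " worst-case loss:", round(v, 2))

    Perturbing the entries of L (the risk-assessment values) and re-solving is one simple way to reproduce the paper's observation that the resulting worst-case loss is fairly insensitive to assessment errors.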

  20. Investment, regulation, and uncertainty

    Science.gov (United States)

    Smyth, Stuart J; McDonald, Jillian; Falck-Zepeda, Jose

    2014-01-01

    As with any technological innovation, time refines the technology, improving upon the original version of the innovative product. The initial GM crops had single traits for either herbicide tolerance or insect resistance. Current varieties have both of these traits stacked together, and in many cases other abiotic and biotic traits have also been stacked. This innovation requires investment. While this is relatively straightforward, certain conditions need to exist such that investments can be facilitated. The principal requirement for investment is that regulatory frameworks render consistent and timely decisions. If the certainty of regulatory outcomes weakens, the potential for changes in investment patterns increases. This article provides a summary background to the leading plant breeding technologies that are either currently being used to develop new crop varieties or are in the pipeline to be applied to plant breeding within the next few years. Challenges for existing regulatory systems are highlighted. Utilizing an option value approach from the investment literature, an assessment of uncertainty regarding the regulatory approval for these varying techniques is undertaken. This research highlights which technology development options have the greatest degree of uncertainty and hence which ones might be expected to see an investment decline. PMID:24499745

  1. Probabilistic Mass Growth Uncertainties

    Science.gov (United States)

    Plumer, Eric; Elliott, Darren

    2013-01-01

    Mass has been widely used as a variable input parameter for Cost Estimating Relationships (CER) for space systems. As these space systems progress from early concept studies and drawing boards to the launch pad, their masses tend to grow substantially, hence adversely affecting a primary input to most modeling CERs. Modeling and predicting mass uncertainty, based on historical and analogous data, is therefore critical and is an integral part of modeling cost risk. This paper presents the results of an ongoing NASA effort to publish mass growth datasheets for adjusting single-point Technical Baseline Estimates (TBE) of masses of space instruments as well as spacecraft, for both earth-orbiting and deep space missions at various stages of a project's lifecycle. The paper also discusses the long-term strategy of NASA Headquarters in publishing similar results, using a variety of cost-driving metrics, on an annual basis. The paper provides quantitative results that show decreasing mass growth uncertainties as mass estimate maturity increases. The analysis is based on historical data obtained from the NASA Cost Analysis Data Requirements (CADRe) database.

  2. Uncertainty analysis in WWTP model applications: a critical discussion using an example from design

    DEFF Research Database (Denmark)

    Sin, Gürkan; Gernaey, Krist; Neumann, Marc B.

    2009-01-01

    This study focuses on uncertainty analysis of WWTP models and analyzes the issue of framing and how it affects the interpretation of uncertainty analysis results. As a case study, the prediction of uncertainty involved in model-based design of a wastewater treatment plant is studied. The Monte Carlo analysis is framed in three ways: (1) uncertainty due to stoichiometric, biokinetic and influent parameters; (2) uncertainty due to hydraulic behaviour of the plant and mass transfer parameters; (3) uncertainty due to the combination of (1) and (2). The results demonstrate that depending on the way the uncertainty analysis is framed, the estimated uncertainty of design performance criteria differs significantly. The implication for the practical applications of uncertainty analysis in the wastewater industry is profound: (i) as the uncertainty analysis results are specific to the framing used, the results must be interpreted within the context of that framing...

  3. Embracing uncertainty in applied ecology.

    Science.gov (United States)

    Milner-Gulland, E J; Shea, K

    2017-12-01

    Applied ecologists often face uncertainty that hinders effective decision-making. Common traps that may catch the unwary are: ignoring uncertainty, acknowledging uncertainty but ploughing on, focussing on trivial uncertainties, believing your models, and unclear objectives. We integrate research insights and examples from a wide range of applied ecological fields to illustrate advances that are generally underused, but could facilitate ecologists' ability to plan and execute research to support management. Recommended approaches to avoid uncertainty traps are: embracing models, using decision theory, using models more effectively, thinking experimentally, and being realistic about uncertainty. Synthesis and applications. Applied ecologists can become more effective at informing management by using approaches that explicitly take account of uncertainty.

  4. Oil price uncertainty in Canada

    Energy Technology Data Exchange (ETDEWEB)

    Elder, John [Department of Finance and Real Estate, 1272 Campus Delivery, Colorado State University, Fort Collins, CO 80523 (United States); Serletis, Apostolos [Department of Economics, University of Calgary, Calgary, Alberta (Canada)

    2009-11-15

    Bernanke [Bernanke, Ben S. Irreversibility, uncertainty, and cyclical investment. Quarterly Journal of Economics 98 (1983), 85-106.] shows how uncertainty about energy prices may induce optimizing firms to postpone investment decisions, thereby leading to a decline in aggregate output. Elder and Serletis [Elder, John and Serletis, Apostolos. Oil price uncertainty.] find empirical evidence that uncertainty about oil prices has tended to depress investment in the United States. In this paper we assess the robustness of these results by investigating the effects of oil price uncertainty in Canada. Our results are remarkably similar to existing results for the United States, providing additional evidence that uncertainty about oil prices may provide another explanation for why the sharp oil price declines of 1985 failed to produce rapid output growth. Impulse-response analysis suggests that uncertainty about oil prices may tend to reinforce the negative response of output to positive oil shocks. (author)

  5. Phenomenological uncertainty analysis of early containment failure at severe accident of nuclear power plant

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Su Won

    2011-02-15

    The severe accident has inherently large uncertainty due to the wide range of possible conditions, and performing experiments, validation and practical application are extremely difficult because of the high temperatures and pressures involved. Although domestic and foreign research has been carried out, the references used in Korean nuclear plants are foreign data from the 1980s, and safety analysis such as probabilistic safety assessment has not applied the newest methodology. Also, in Level 2 PSA the probability of containment failure is identified from the containment pressure expressed as a point value resulting from thermal-hydraulic analysis. In this paper, uncertainty analysis methods for the severe accident phenomena influencing early containment failure were developed, an uncertainty analysis applying them to Korean nuclear plants was performed using the MELCOR code, and the distribution of containment pressure is presented as a result of the uncertainty analysis. Because early containment failure is an important contributor to the Large Early Release Frequency (LERF), which is used as a representative criterion for decision-making in nuclear power plants, it was selected in this paper among the various modes of containment failure. Important phenomena of early containment failure in severe accidents were identified based on previous research, and a seven-step methodology to evaluate the uncertainty was developed. The MELCOR input for analysis of the severe accident reflecting natural circulation flow was developed, and the accident scenario for station blackout, the representative initiating event for early containment failure, was determined. By reviewing the internal models and correlations of the MELCOR model relevant to the important phenomena of early containment failure, the factors which could affect the uncertainty were found, and the major factors were finally identified through sensitivity analysis. In order to determine the total number of MELCOR calculations which can

  6. Quantification of margins and uncertainties: Alternative representations of epistemic uncertainty

    International Nuclear Information System (INIS)

    Helton, Jon C.; Johnson, Jay D.

    2011-01-01

    In 2001, the National Nuclear Security Administration of the U.S. Department of Energy in conjunction with the national security laboratories (i.e., Los Alamos National Laboratory, Lawrence Livermore National Laboratory and Sandia National Laboratories) initiated development of a process designated Quantification of Margins and Uncertainties (QMU) for the use of risk assessment methodologies in the certification of the reliability and safety of the nation's nuclear weapons stockpile. A previous presentation, 'Quantification of Margins and Uncertainties: Conceptual and Computational Basis,' describes the basic ideas that underlie QMU and illustrates these ideas with two notional examples that employ probability for the representation of aleatory and epistemic uncertainty. The current presentation introduces and illustrates the use of interval analysis, possibility theory and evidence theory as alternatives to the use of probability theory for the representation of epistemic uncertainty in QMU-type analyses. The following topics are considered: the mathematical structure of alternative representations of uncertainty, alternative representations of epistemic uncertainty in QMU analyses involving only epistemic uncertainty, and alternative representations of epistemic uncertainty in QMU analyses involving a separation of aleatory and epistemic uncertainty. Analyses involving interval analysis, possibility theory and evidence theory are illustrated with the same two notional examples used in the presentation indicated above to illustrate the use of probability to represent aleatory and epistemic uncertainty in QMU analyses.

  7. Benchmarking observational uncertainties for hydrology (Invited)

    Science.gov (United States)

    McMillan, H. K.; Krueger, T.; Freer, J. E.; Westerberg, I.

    2013-12-01

    There is a pressing need for authoritative and concise information on the expected error distributions and magnitudes in hydrological data, to understand their information content. Many studies have discussed how to incorporate uncertainty information into model calibration and implementation, and shown how model results can be biased if uncertainty is not appropriately characterised. However, it is not always possible (for example due to financial or time constraints) to make detailed studies of uncertainty for every research study. Instead, we propose that the hydrological community could benefit greatly from sharing information on likely uncertainty characteristics and the main factors that control the resulting magnitude. In this presentation, we review the current knowledge of uncertainty for a number of key hydrological variables: rainfall, flow and water quality (suspended solids, nitrogen, phosphorus). We collated information on the specifics of the data measurement (data type, temporal and spatial resolution), error characteristics measured (e.g. standard error, confidence bounds) and error magnitude. Our results were primarily split by data type. Rainfall uncertainty was controlled most strongly by spatial scale, flow uncertainty was controlled by flow state (low, high) and gauging method. Water quality presented a more complex picture with many component errors. For all variables, it was easy to find examples where relative error magnitude exceeded 40%. We discuss some of the recent developments in hydrology which increase the need for guidance on typical error magnitudes, in particular when doing comparative/regionalisation and multi-objective analysis. Increased sharing of data, comparisons between multiple catchments, and storage in national/international databases can mean that data-users are far removed from data collection, but require good uncertainty information to reduce bias in comparisons or catchment regionalisation studies. Recently it has

  8. Uncertainty estimation with a small number of measurements, part II: a redefinition of uncertainty and an estimator method

    Science.gov (United States)

    Huang, Hening

    2018-01-01

    This paper is the second (Part II) in a series of two papers (Part I and Part II). Part I has quantitatively discussed the fundamental limitations of the t-interval method for uncertainty estimation with a small number of measurements. This paper (Part II) reveals that the t-interval is an ‘exact’ answer to a wrong question; it is actually misused in uncertainty estimation. This paper proposes a redefinition of uncertainty, based on the classical theory of errors and the theory of point estimation, and a modification of the conventional approach to estimating measurement uncertainty. It also presents an asymptotic procedure for estimating the z-interval. The proposed modification is to replace the t-based uncertainty with an uncertainty estimator (mean- or median-unbiased). The uncertainty estimator method is an approximate answer to the right question to uncertainty estimation. The modified approach provides realistic estimates of uncertainty, regardless of whether the population standard deviation is known or unknown, or if the sample size is small or large. As an application example of the modified approach, this paper presents a resolution to the Du-Yang paradox (i.e. Paradox 2), one of the three paradoxes caused by the misuse of the t-interval in uncertainty estimation.
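
    A minimal sketch of the contrast the paper draws, using the classical c4 correction as a stand-in for a mean-unbiased uncertainty estimator (the paper's own estimator may differ in detail; all numbers are illustrative):

        import numpy as np
        from scipy.special import gammaln
        from scipy.stats import t

        def c4(n):
            # E[s] = c4(n) * sigma for normal samples; log-gamma form for stability.
            return np.sqrt(2.0 / (n - 1)) * np.exp(gammaln(n / 2) - gammaln((n - 1) / 2))

        rng = np.random.default_rng(3)
        n = 4                                   # deliberately small sample
        x = rng.normal(10.0, 1.0, n)
        s = x.std(ddof=1)

        u_est  = s / c4(n) / np.sqrt(n)         # mean-unbiased standard uncertainty of the mean
        half_t = t.ppf(0.975, n - 1) * s / np.sqrt(n)  # conventional 95% t-interval half-width

        print(f"c4({n}) = {c4(n):.4f}")
        print(f"estimator-based u = {u_est:.3f}, t-interval half-width = {half_t:.3f}")

    For n = 4 the t-based half-width is roughly three times the estimator-based standard uncertainty, which illustrates why the paper argues the t-interval answers a different question than "what is the uncertainty?".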

  9. Uncertainty of a hydrological climate change impact assessment - Is it really all about climate uncertainty?

    Science.gov (United States)

    Honti, Mark; Reichert, Peter; Scheidegger, Andreas; Stamm, Christian

    2013-04-01

    Climate change impact assessments have become more and more popular in hydrology since the mid-1980s, with another boost after the publication of the IPCC AR4 report. Over hundreds of impact studies a quasi-standard methodology has emerged, shaped mainly by the growing public demand for predicting how water resources management or flood protection should change in the near future. The "standard" workflow considers future climate under a specific IPCC emission scenario simulated by global circulation models (GCMs), possibly downscaled by a regional climate model (RCM) and/or a stochastic weather generator. The output from the climate models is typically corrected for bias before feeding it into a calibrated hydrological model, which is run on the past and future meteorological data to analyse the impacts of climate change on the hydrological indicators of interest. The impact predictions are as uncertain as any forecast that tries to describe the behaviour of an extremely complex system decades into the future. Future climate predictions are uncertain due to scenario uncertainty and GCM model uncertainty, which is obvious at any resolution finer than continental scale. As in any hierarchical model system, uncertainty propagates through the descendant components. Downscaling increases uncertainty through the deficiencies of RCMs and/or weather generators. Bias correction adds a strong deterministic shift to the input data. Finally, the predictive uncertainty of the hydrological model ends the cascade that leads to the total uncertainty of the hydrological impact assessment. There is an emerging consensus between many studies on the relative importance of the different uncertainty sources. The prevailing perception is that GCM uncertainty dominates hydrological impact studies. There are only a few studies which found that the predictive uncertainty of hydrological models can be in the same range or even larger than the climatic uncertainty. We carried out a

  10. Model uncertainty in financial markets : Long run risk and parameter uncertainty

    NARCIS (Netherlands)

    de Roode, F.A.

    2014-01-01

    Uncertainty surrounding key parameters of financial markets, such as the inflation and equity risk premium, constitutes a major risk for institutional investors with long investment horizons. Hedging the investors’ inflation exposure can be challenging due to the lack of domestic inflation-linked

  11. Heisenberg's principle of uncertainty and the uncertainty relations

    International Nuclear Information System (INIS)

    Redei, Miklos

    1987-01-01

    The usual verbal form of the Heisenberg uncertainty principle and the usual mathematical formulation (the so-called uncertainty theorem) are not equivalent. The meaning of the concept 'uncertainty' is not unambiguous and different interpretations are used in the literature. Recently a renewed interest has appeared in reinterpreting and reformulating the precise meaning of Heisenberg's principle and in finding an adequate mathematical form. The suggested new theorems are surveyed and critically analyzed. (D.Gy.) 20 refs

  12. Uncertainty as Certainty

    Science.gov (United States)

    Petzinger, Tom

    I am trying to make money in the biotech industry from complexity science. And I am doing it with inspiration that I picked up on the edge of Appalachia spending time with June Holley and ACEnet when I was a Wall Street Journal reporter. I took some of those ideas to Pittsburgh, in biotechnology, in a completely private setting with an economic development focus, but also with a mission to return profit to private capital. And we are doing that. I submit as a hypothesis, something we are figuring out in the post-industrial era, that business evolves. It is not the definition of business, but business critically involves the design of systems in which uncertainty is treated as a certainty. That is what I have seen and what I have tried to put into practice.

  13. DOD ELAP Lab Uncertainties

    Science.gov (United States)

    2012-03-01

    ISO/IEC 17025; Inspection Bodies – ISO/IEC 17020; RMPs – ISO Guide 34 (Reference...; certify to: ISO 9001 (QMS), ISO 14001 (EMS), TS 16949 (US Automotive), etc. DoD QSM 4.2 standard, ISO/IEC 17025:2005. Each has uncertainty...; IPV6, NLLAP, NEFAP; TRAINING Programs; Certification Bodies – ISO/IEC 17021, Accreditation for Management System

  14. Traceability and Measurement Uncertainty

    DEFF Research Database (Denmark)

    Tosello, Guido; De Chiffre, Leonardo

    2004-01-01

    This report is made as a part of the project 'Metro-E-Learn: European e-Learning in Manufacturing Metrology', an EU project under the program SOCRATES MINERVA (ODL and ICT in Education), Contract No: 101434-CP-1-2002-1-DE-MINERVA, coordinated by Friedrich-Alexander-University Erlangen... The project partnership (composed of 7 partners in 5 countries, thus covering a real European spread in high-tech production technology) aims to develop and implement an advanced e-learning system that integrates contributions from quite different disciplines into a user-centred approach that strictly... Course topics include: 8. Machine tool testing; 9. The role of manufacturing metrology for QM; 10. Inspection planning; 11. Quality management of measurements incl. documentation; 12. Advanced manufacturing measurement technology. The present report represents section 2 – Traceability and Measurement Uncertainty – of the e-learning...

  15. Decision making under uncertainty

    International Nuclear Information System (INIS)

    Cyert, R.M.

    1989-01-01

    This paper reports on ways of improving the reliability of products and systems in this country if we are to survive as a first-rate industrial power. The use of statistical techniques has, since the 1920s, been viewed as one of the methods for testing quality and estimating the level of quality in a universe of output. Statistical quality control is not relevant, generally, to improving systems in an industry like yours, but certainly the use of probability concepts is of significance. In addition, when it is recognized that part of the problem involves making decisions under uncertainty, it becomes clear that techniques such as sequential decision making and Bayesian analysis become major methodological approaches that must be utilized

  16. Sustainability and uncertainty

    DEFF Research Database (Denmark)

    Jensen, Karsten Klint

    2007-01-01

    The widely used concept of sustainability is seldom precisely defined, and its clarification involves making up one's mind about a range of difficult questions. One line of research (bottom-up) takes sustaining a system over time as its starting point and then infers prescriptions from this requirement. Another line (top-down) takes an economical interpretation of the Brundtland Commission's suggestion that the present generation's need-satisfaction should not compromise the need-satisfaction of future generations as its starting point. It then measures sustainability at the level of society... Even with a clarified ethical goal, disagreements can arise. At present we do not know what substitutions will be possible in the future. This uncertainty clearly affects the prescriptions that follow from the measure of sustainability. Consequently, decisions about how to make future agriculture sustainable

  17. An uncertainty inventory demonstration - a primary step in uncertainty quantification

    Energy Technology Data Exchange (ETDEWEB)

    Langenbrunner, James R. [Los Alamos National Laboratory; Booker, Jane M [Los Alamos National Laboratory; Hemez, Francois M [Los Alamos National Laboratory; Salazar, Issac F [Los Alamos National Laboratory; Ross, Timothy J [UNM

    2009-01-01

    Tools, methods, and theories for assessing and quantifying uncertainties vary by application. Uncertainty quantification tasks have unique desiderata and circumstances. To realistically assess uncertainty requires the engineer/scientist to specify mathematical models, the physical phenomena of interest, and the theory or framework for assessments. For example, Probabilistic Risk Assessment (PRA) specifically identifies uncertainties using probability theory, and therefore PRAs lack formal procedures for quantifying uncertainties that are not probabilistic. The Phenomena Identification and Ranking Technique (PIRT) proceeds by ranking phenomena using scoring criteria that result in linguistic descriptors, such as importance ranked with words, 'High/Medium/Low.' The use of words allows PIRT to be flexible, but the analysis may then be difficult to combine with other uncertainty theories. We propose that a necessary step for the development of a procedure or protocol for uncertainty quantification (UQ) is the application of an Uncertainty Inventory. An Uncertainty Inventory should be considered and performed in the earliest stages of UQ.

  18. Design of Adaptive Policy Pathways under Deep Uncertainties

    Science.gov (United States)

    Babovic, Vladan

    2013-04-01

    The design of large-scale engineering and infrastructural systems today is growing in complexity. Designers need to consider sociotechnical uncertainties, intricacies, and processes in the long-term strategic deployment and operations of these systems. In this context, water and spatial management is increasingly challenged not only by climate-associated changes such as sea level rise and increased spatio-temporal variability of precipitation, but also by pressures due to population growth and a particularly accelerating rate of urbanisation. Furthermore, the high investment costs and long-term nature of water-related infrastructure projects require a long-term planning perspective, sometimes extending over many decades. Adaptation to such changes is not only determined by what is known or anticipated at present, but also by what will be experienced and learned as the future unfolds, as well as by policy responses to social and water events. As a result, a pathway emerges. Instead of responding to 'surprises' and making decisions on an ad hoc basis, exploring adaptation pathways into the future provides indispensable support in water management decision-making. In this contribution, a structured approach for designing a dynamic adaptive policy based on the concepts of adaptive policy making and adaptation pathways is introduced. Such an approach provides flexibility which allows change over time in response to how the future unfolds, what is learned about the system, and changes in societal preferences. The introduced flexibility provides means for dealing with the complexities of adaptation under deep uncertainties. It enables engineering systems to change in the face of uncertainty to reduce impacts from downside scenarios while capitalizing on upside opportunities. This contribution presents a comprehensive framework for the development and deployment of the adaptive policy pathway framework, and demonstrates its performance under deep uncertainties on a case study related to urban

  19. An Adaptation Dilemma Caused by Impacts-Modeling Uncertainty

    Science.gov (United States)

    Frieler, K.; Müller, C.; Elliott, J. W.; Heinke, J.; Arneth, A.; Bierkens, M. F.; Ciais, P.; Clark, D. H.; Deryng, D.; Doll, P. M.; Falloon, P.; Fekete, B. M.; Folberth, C.; Friend, A. D.; Gosling, S. N.; Haddeland, I.; Khabarov, N.; Lomas, M. R.; Masaki, Y.; Nishina, K.; Neumann, K.; Oki, T.; Pavlick, R.; Ruane, A. C.; Schmid, E.; Schmitz, C.; Stacke, T.; Stehfest, E.; Tang, Q.; Wisser, D.

    2013-12-01

    Ensuring future well-being for a growing population under either strong climate change or an aggressive mitigation strategy requires a subtle balance of potentially conflicting response measures. In the case of competing goals, uncertainty in impact estimates plays a central role when high confidence in achieving a primary objective (such as food security) directly implies an increased probability of uncertainty-induced failure with regard to a competing target (such as climate protection). We use cross-sectorally consistent multi-impact model simulations from the Inter-Sectoral Impact Model Intercomparison Project (ISI-MIP, www.isi-mip.org) to illustrate this uncertainty dilemma: RCP projections from 7 global crop, 11 hydrological, and 7 biome models are combined to analyze irrigation and land use changes as possible responses to climate change and increasing crop demand due to population growth and economic development. We show that - while a no-regrets option with regard to climate protection - additional irrigation alone is not expected to balance the demand increase by 2050. In contrast, a strong expansion of cultivated land closes the projected production-demand gap in some crop models. However, it comes at the expense of a loss of natural carbon sinks of order 50%. Given the large uncertainty of state-of-the-art crop model projections, even these strong land use changes would not bring us 'on the safe side' with respect to food supply. In a world where increasing carbon emissions continue to shrink the overall solution space, we demonstrate that current impacts-modeling uncertainty is a luxury we cannot afford. ISI-MIP is intended to provide cross-sectorally consistent impact projections for model intercomparison and improvement as well as cross-sectoral integration. The results presented here were generated within the first Fast-Track phase of the project covering global impact projections. The second phase will also include regional projections. It is the aim

  20. Uncertainty of the peak flow reconstruction of the 1907 flood in the Ebro River in Xerta (NE Iberian Peninsula)

    Science.gov (United States)

    Ruiz-Bellet, Josep Lluís; Castelltort, Xavier; Balasch, J. Carles; Tuset, Jordi

    2017-02-01

    There is no clear, unified and accepted method to estimate the uncertainty of hydraulic modelling results. In historical flood reconstruction, due to the lower precision of input data, the magnitude of this uncertainty can be high. With the objectives of giving an estimate of the peak flow error of a typical historical flood reconstruction with the model HEC-RAS and of providing a quick, simple uncertainty assessment that an end user could easily apply, the uncertainty of the reconstructed peak flow of a major flood in the Ebro River (NE Iberian Peninsula) was calculated with a set of local sensitivity analyses on six input variables. The peak flow total error was estimated at ±31%, and water height was found to be the most influential variable on peak flow, followed by Manning's n. However, the latter, due to its large uncertainty, was the greatest contributor to peak flow total error. Besides, the HEC-RAS resulting peak flow was compared to the ones obtained with the 2D model Iber and with Manning's equation; all three methods gave similar peak flows. Manning's equation gave almost the same result as HEC-RAS. The main conclusion is that, to ensure the lowest peak flow error, the reliability and precision of the flood mark should be thoroughly assessed.
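
    The kind of local sensitivity analysis described here is easy to reproduce in outline. The sketch below (cross-section numbers are invented, not the Xerta reach) perturbs each input of Manning's equation Q = (1/n)·A·R^(2/3)·S^(1/2) by 1%, converts the relative sensitivities into error contributions, and combines them in quadrature; with a typical ±25% uncertainty on Manning's n, that term dominates, echoing the paper's finding:

        import numpy as np

        def manning_Q(n, A, R, S):
            # Manning's equation: Q = (1/n) * A * R^(2/3) * S^(1/2)
            return A * R ** (2.0 / 3.0) * np.sqrt(S) / n

        # Hypothetical cross-section values and relative standard uncertainties.
        p0    = dict(n=0.035, A=2500.0, R=8.0, S=0.0005)
        rel_u = dict(n=0.25, A=0.05, R=0.05, S=0.10)

        Q0 = manning_Q(**p0)
        contrib = {}
        for k, u in rel_u.items():
            p = dict(p0)
            p[k] *= 1.01                                # +1% perturbation
            sens = (manning_Q(**p) - Q0) / Q0 / 0.01    # relative sensitivity (dQ/Q per dX/X)
            contrib[k] = sens * u                       # relative error contribution

        total = np.sqrt(sum(c ** 2 for c in contrib.values()))
        print(f"Q0 = {Q0:.0f} m3/s, relative uncertainty ~ {100 * total:.0f}%")
        for k, c in contrib.items():
            print(f"  {k}: {100 * abs(c):.1f}%")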

  1. Ensuring effective supply chain management under uncertainty

    Directory of Open Access Journals (Sweden)

    Lutsenko Iryna Sergiivna

    2016-09-01

    Full Text Available The main sources of uncertainty in supply chains and the tools to mitigate them are identified. The necessity of functional, spatial and temporal integration, and of linking decision-making at different management levels, is substantiated. It is determined that the optimization of information flow can occur due to the “shrink” in time, volume and direction; this process should be preceded by a thorough analysis and rethinking of the business processes of the complex system of supply chains.

  2. Risk Management and Uncertainty in Infrastructure Projects

    DEFF Research Database (Denmark)

    Harty, Chris; Neerup Themsen, Tim; Tryggestad, Kjell

    2014-01-01

    The assumption that large complex projects should be managed in order to reduce uncertainty and increase predictability is not new. What is relatively new, however, is that uncertainty reduction can and should be obtained through formal risk management approaches. We question both assumptions...... by addressing a more fundamental question about the role of knowledge in current risk management practices. Inquiries into the predominant approaches to risk management in large infrastructure and construction projects reveal their assumptions about knowledge and we discuss the ramifications these have...... for project and construction management. Our argument and claim is that predominant risk management approaches tends to reinforce conventional ideas of project control whilst undermining other notions of value and relevance of built assets and project management process. These approaches fail to consider...

  3. Essays on model uncertainty in financial models

    NARCIS (Netherlands)

    Li, Jing

    2018-01-01

    This dissertation studies model uncertainty, particularly in financial models. It consists of two empirical chapters and one theoretical chapter. The first empirical chapter (Chapter 2) classifies model uncertainty into parameter uncertainty and misspecification uncertainty. It investigates the

  4. Additional challenges for uncertainty analysis in river engineering

    Science.gov (United States)

    Berends, Koen; Warmink, Jord; Hulscher, Suzanne

    2016-04-01

    the proposed intervention. The implicit assumption underlying such analysis is that both models are commensurable. We hypothesize that they are commensurable only to a certain extent. In an idealised study we have demonstrated that prediction performance loss should be expected with increasingly large engineering works. When accounting for parametric uncertainty of floodplain roughness in model identification, we see uncertainty bounds for predicted effects of interventions increase with increasing intervention scale. Calibration of these types of models therefore seems to have a shelf-life, beyond which calibration no longer improves prediction. Therefore a qualification scheme for model use is required that can be linked to model validity. In this study, we characterize model use along three dimensions: extrapolation (using the model with different external drivers), extension (using the model for different output or indicators) and modification (using modified models). Such use of models is expected to have implications for the applicability of surrogate modelling for efficient uncertainty analysis as well, which is recommended for future research. Warmink, J. J.; Straatsma, M. W.; Huthoff, F.; Booij, M. J. & Hulscher, S. J. M. H. 2013. Uncertainty of design water levels due to combined bed form and vegetation roughness in the Dutch river Waal. Journal of Flood Risk Management 6, 302-318. DOI: 10.1111/jfr3.12014

  5. A new uncertainty importance measure

    International Nuclear Information System (INIS)

    Borgonovo, E.

    2007-01-01

    Uncertainty in parameters is present in many risk assessment problems and leads to uncertainty in model predictions. In this work, we introduce a global sensitivity indicator which looks at the influence of input uncertainty on the entire output distribution without reference to a specific moment of the output (moment independence) and which can be defined also in the presence of correlations among the parameters. We discuss its mathematical properties and highlight the differences between the present indicator, variance-based uncertainty importance measures and a moment independent sensitivity indicator previously introduced in the literature. Numerical results are discussed with application to the probabilistic risk assessment model on which Iman [A matrix-based approach to uncertainty and sensitivity analysis for fault trees. Risk Anal 1987;7(1):22-33] first introduced uncertainty importance measures
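
    A possible numerical reading of the moment-independent idea: the binned kernel-density estimator below is one common way to approximate Borgonovo's δ = ½·E[∫ |f_Y(y) − f_{Y|Xi}(y)| dy]; the toy model, bin count and sample sizes are arbitrary choices, not the paper's.

        import numpy as np
        from scipy.stats import gaussian_kde

        rng = np.random.default_rng(7)
        N = 20_000
        x1, x2 = rng.standard_normal(N), rng.standard_normal(N)
        y = x1 + 0.3 * x2                     # toy model: x1 should matter more

        def delta_index(x, y, n_bins=20):
            # Moment-independent (Borgonovo) delta via conditional-density binning.
            grid = np.linspace(y.min(), y.max(), 200)
            dy = grid[1] - grid[0]
            f_y = gaussian_kde(y)(grid)
            edges = np.quantile(x, np.linspace(0, 1, n_bins + 1))
            acc = 0.0
            for lo, hi in zip(edges[:-1], edges[1:]):
                sel = (x >= lo) & (x <= hi)
                f_cond = gaussian_kde(y[sel])(grid)
                acc += np.sum(np.abs(f_y - f_cond)) * dy / n_bins
            return 0.5 * acc

        print(f"delta(x1) ~ {delta_index(x1, y):.3f}, delta(x2) ~ {delta_index(x2, y):.3f}")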

  6. Uncertainty Management and Sensitivity Analysis

    DEFF Research Database (Denmark)

    Rosenbaum, Ralph K.; Georgiadis, Stylianos; Fantke, Peter

    2018-01-01

    Uncertainty is always there and LCA is no exception to that. The presence of uncertainties of different types and from numerous sources in LCA results is a fact, but managing them allows one to quantify and improve the precision of a study and the robustness of its conclusions. LCA practice sometimes suffers from an imbalanced perception of uncertainties, justifying modelling choices and omissions. Identifying prevalent misconceptions around uncertainties in LCA is a central goal of this chapter, aiming to establish a positive approach focusing on the advantages of uncertainty management. The main objectives of this chapter are to learn how to deal with uncertainty in the context of LCA, how to quantify it, interpret and use it, and how to communicate it. The subject is approached more holistically than just focusing on relevant statistical methods or purely mathematical aspects. This chapter...

  7. Additivity of entropic uncertainty relations

    Directory of Open Access Journals (Sweden)

    René Schwonnek

    2018-03-01

    Full Text Available We consider the uncertainty between two pairs of local projective measurements performed on a multipartite system. We show that the optimal bound in any linear uncertainty relation, formulated in terms of the Shannon entropy, is additive. This directly implies, against naive intuition, that the minimal entropic uncertainty can always be realized by fully separable states. Hence, in contradiction to proposals by other authors, no entanglement witness can be constructed solely by comparing the attainable uncertainties of entangled and separable states. However, our result gives rise to a huge simplification for computing global uncertainty bounds as they now can be deduced from local ones. Furthermore, we provide the natural generalization of the Maassen and Uffink inequality for linear uncertainty relations with arbitrary positive coefficients.
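
    For orientation, the qubit case of the Maassen–Uffink inequality H(A) + H(B) ≥ −2·log₂ max|⟨a_i|b_j⟩| is easy to verify numerically; for mutually unbiased Z and X measurements the bound is 1 bit, attained by basis states, which are product states in the multipartite setting, consistent with the additivity result the abstract describes (the check below is a generic illustration, not the paper's proof):

        import numpy as np

        def shannon(p):
            p = p[p > 1e-12]
            return -np.sum(p * np.log2(p))

        # Qubit state |psi> = cos(t)|0> + sin(t)|1>, measured in the Z and X bases.
        x_basis = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
        c = np.max(np.abs(x_basis))          # max overlap |<z_i|x_j>| = 1/sqrt(2)
        bound = -2.0 * np.log2(c)            # Maassen-Uffink bound = 1 bit

        for t in (0.0, np.pi / 8, np.pi / 4):
            psi = np.array([np.cos(t), np.sin(t)])
            pZ = np.abs(psi) ** 2            # Z-measurement distribution
            pX = np.abs(x_basis @ psi) ** 2  # X-measurement distribution
            print(f"t={t:.3f}: H(Z)+H(X) = {shannon(pZ) + shannon(pX):.3f} >= {bound:.3f}")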

  8. Validity of WTP measures under preference uncertainty

    OpenAIRE

    Kniebes, Carola; Rehdanz, Katrin; Schmidt, Ulrich

    2014-01-01

    This paper establishes a new method for eliciting Willingness to Pay (WTP) in contingent valuation (CV) studies with an open-ended elicitation format: the Range-WTP method. In contrast to the traditional approach for eliciting Point-WTP, Range-WTP explicitly allows for preference uncertainty in responses. Using data from two novel large-scale surveys on the perception of solar radiation management (SRM), a little-known technique for counteracting climate change, we compare the performance of ...

  9. Calibration and Propagation of Uncertainty for Independence

    Energy Technology Data Exchange (ETDEWEB)

    Holland, Troy Michael [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Kress, Joel David [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Bhat, Kabekode Ghanasham [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-06-30

    This document reports on progress and methods for the calibration and uncertainty quantification of the Independence model developed at UT Austin. The Independence model is an advanced thermodynamic and process model framework for piperazine solutions as a high-performance CO2 capture solvent. Progress is presented in the framework of the CCSI standard basic data model inference framework. Recent work has largely focused on the thermodynamic submodels of Independence.

  10. Decisions under uncertainty using Bayesian analysis

    Directory of Open Access Journals (Sweden)

    Stelian STANCU

    2006-01-01

    Full Text Available The present paper makes a short presentation of the Bayesian decision method, where extra information brings great support to the decision-making process but also attracts new costs. In this situation, obtaining new information, generally experimentally based, contributes to diminishing the degree of uncertainty that influences the decision-making process. As a conclusion, in a large number of decision problems, there is the possibility that decision makers will revisit some decisions already taken because of the facilities offered by obtaining extra information.
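
    A minimal worked example of the idea, assuming a two-state, two-action problem and an 80%-reliable experiment (all numbers invented): the expected value of sample information (EVSI) quantifies exactly how much the extra information is worth against its cost.

        import numpy as np

        # Two states of nature with a prior; payoffs (rows: action, cols: state).
        prior  = np.array([0.6, 0.4])
        payoff = np.array([[100.0, -50.0],    # act (risky)
                           [ 20.0,  20.0]])   # don't act (safe)

        def best_ev(p):
            # Expected value of the best action under beliefs p.
            return (payoff @ p).max()

        v_prior = best_ev(prior)

        # Imperfect experiment: P(signal | state), 80% reliable.
        like = np.array([[0.8, 0.2],          # signal "good"
                         [0.2, 0.8]])         # signal "bad"

        v_post = 0.0
        for sig in range(2):
            p_sig = like[sig] @ prior         # marginal probability of this signal
            post  = like[sig] * prior / p_sig # Bayes update
            v_post += p_sig * best_ev(post)

        print(f"prior EV {v_prior:.1f}, preposterior EV {v_post:.1f}, "
              f"EVSI {v_post - v_prior:.1f}")

    Here the EVSI comes out positive (about 12.8), so buying the experiment is worthwhile whenever its cost is below that value; this is the cost-benefit trade-off the abstract alludes to.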

  11. Decommissioning funding: ethics, implementation, uncertainties

    International Nuclear Information System (INIS)

    2006-01-01

    This status report on Decommissioning Funding: Ethics, Implementation, Uncertainties also draws on the experience of the NEA Working Party on Decommissioning and Dismantling (WPDD). The report offers, in a concise form, an overview of relevant considerations on decommissioning funding mechanisms with regard to ethics, implementation and uncertainties. Underlying ethical principles found in international agreements are identified, and factors influencing the accumulation and management of funds for decommissioning nuclear facilities are discussed together with the main sources of uncertainties of funding systems. (authors)

  12. Chemical model reduction under uncertainty

    KAUST Repository

    Najm, Habib; Galassi, R. Malpica; Valorani, M.

    2016-01-01

    We outline a strategy for chemical kinetic model reduction under uncertainty. We present highlights of our existing deterministic model reduction strategy, and describe the extension of the formulation to include parametric uncertainty in the detailed mechanism. We discuss the utility of this construction, as applied to hydrocarbon fuel-air kinetics, and the associated use of uncertainty-aware measures of error between predictions from detailed and simplified models.

  14. The Uncertainty of Measurement Results

    Energy Technology Data Exchange (ETDEWEB)

    Ambrus, A. [Hungarian Food Safety Office, Budapest (Hungary)

    2009-07-15

    Factors affecting the uncertainty of measurement are explained, basic statistical formulae given, and the theoretical concept explained in the context of pesticide formulation analysis. Practical guidance is provided on how to determine individual uncertainty components within an analytical procedure. An extended and comprehensive table containing the relevant mathematical/statistical expressions elucidates the relevant underlying principles. Appendix I provides a practical elaborated example on measurement uncertainty estimation, above all utilizing experimental repeatability and reproducibility laboratory data. (author)

  15. Uncertainty analysis of environmental models

    International Nuclear Information System (INIS)

    Monte, L.

    1990-01-01

    In the present paper an evaluation of the output uncertainty of an environmental model for assessing the transfer of 137Cs and 131I in the human food chain is carried out on the basis of a statistical analysis of data reported in the literature. The uncertainty analysis offers the opportunity of obtaining some remarkable information about the uncertainty of models predicting the migration of non-radioactive substances in the environment, mainly in relation to dry and wet deposition

  16. Multi-scenario modelling of uncertainty in stochastic chemical systems

    International Nuclear Information System (INIS)

    Evans, R. David; Ricardez-Sandoval, Luis A.

    2014-01-01

    Uncertainty analysis has not been well studied at the molecular scale, despite extensive knowledge of uncertainty in macroscale systems. The ability to predict the effect of uncertainty allows for robust control of small scale systems such as nanoreactors, surface reactions, and gene toggle switches. However, it is difficult to model uncertainty in such chemical systems as they are stochastic in nature, and require a large computational cost. To address this issue, a new model of uncertainty propagation in stochastic chemical systems, based on the Chemical Master Equation, is proposed in the present study. The uncertain solution is approximated by a composite state comprised of the averaged effect of samples from the uncertain parameter distributions. This model is then used to study the effect of uncertainty on an isomerization system and a two gene regulation network called a repressilator. The results of this model show that uncertainty in stochastic systems is dependent on both the uncertain distribution, and the system under investigation. -- Highlights: •A method to model uncertainty on stochastic systems was developed. •The method is based on the Chemical Master Equation. •Uncertainty in an isomerization reaction and a gene regulation network was modelled. •Effects were significant and dependent on the uncertain input and reaction system. •The model was computationally more efficient than Kinetic Monte Carlo
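
    A compact sketch of the setting (rates, counts and the uncertainty distribution are arbitrary): each realization of the uncertain rate is propagated through a Gillespie simulation of the isomerization A ⇌ B, and averaging over the parameter samples approximates the composite uncertain state described above. The paper works with the Chemical Master Equation directly; sampled SSA runs are used here only as an inexpensive stand-in.

        import numpy as np

        rng = np.random.default_rng(11)

        def ssa_isomerization(k_f, k_b, nA0=100, t_end=5.0):
            # Gillespie stochastic simulation for the isomerization A <-> B.
            nA, nB, t = nA0, 0, 0.0
            while True:
                a1, a2 = k_f * nA, k_b * nB
                a0 = a1 + a2
                if a0 == 0.0:
                    break
                t += rng.exponential(1.0 / a0)
                if t > t_end:
                    break
                if rng.random() < a1 / a0:
                    nA, nB = nA - 1, nB + 1
                else:
                    nA, nB = nA + 1, nB - 1
            return nA

        # Composite state: average SSA outcomes over samples of the uncertain rate k_f.
        samples = [ssa_isomerization(k_f=max(rng.normal(1.0, 0.2), 0.0), k_b=0.5)
                   for _ in range(500)]
        print(f"E[nA(t_end)] ~ {np.mean(samples):.1f} +/- {np.std(samples):.1f}")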

  17. Simplified propagation of standard uncertainties

    International Nuclear Information System (INIS)

    Shull, A.H.

    1997-01-01

    An essential part of any measurement control program is adequate knowledge of the uncertainties of the measurement system standards. Only with an estimate of the standards' uncertainties can one determine if the standard is adequate for its intended use or can one calculate the total uncertainty of the measurement process. Purchased standards usually have estimates of uncertainty on their certificates. However, when standards are prepared and characterized by a laboratory, variance propagation is required to estimate the uncertainty of the standard. Traditional variance propagation typically involves tedious use of partial derivatives, unfriendly software and the availability of statistical expertise. As a result, the uncertainty of prepared standards is often not determined or determined incorrectly. For situations meeting stated assumptions, easier shortcut methods of estimation are now available which eliminate the need for partial derivatives and require only a spreadsheet or calculator. A system of simplifying the calculations by dividing into subgroups of absolute and relative uncertainties is utilized. These methods also incorporate the International Standards Organization (ISO) concepts for combining systematic and random uncertainties as published in their Guide to the Expression of Measurement Uncertainty. Details of the simplified methods and examples of their use are included in the paper
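
    For a standard prepared as a product or quotient of measured quantities, the shortcut amounts to combining the relative uncertainties in quadrature, with no partial derivatives in sight (the values below are invented; the paper's scheme also covers mixed absolute/relative subgroups and the ISO-style combination of systematic and random components):

        import numpy as np

        # Standard prepared as c = m * p / V: for a pure product of factors, the
        # relative standard uncertainties combine in quadrature.
        m, u_m = 100.0e-3, 0.05e-3      # mass (g) and its absolute uncertainty
        p, u_p = 0.999, 0.001           # purity (mass fraction)
        V, u_V = 1.0000, 0.0008         # volume (L)

        c = m * p / V
        rel = np.sqrt((u_m / m) ** 2 + (u_p / p) ** 2 + (u_V / V) ** 2)
        print(f"c = {c:.6f} g/L, u(c) = {c * rel:.6f} g/L ({100 * rel:.3f}% relative)")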

  18. Uncertainty information in climate data records from Earth observation

    Science.gov (United States)

    Merchant, C. J.

    2017-12-01

    How to derive and present uncertainty in climate data records (CDRs) has been debated within the European Space Agency Climate Change Initiative, in search of common principles applicable across a range of essential climate variables. Various points of consensus have been reached, including the importance of improving provision of uncertainty information and the benefit of adopting international norms of metrology for language around the distinct concepts of uncertainty and error. Providing an estimate of standard uncertainty per datum (or the means to readily calculate it) emerged as baseline good practice, and should be highly relevant to users of CDRs when the uncertainty in data is variable (the usual case). Given this baseline, the role of quality flags is clarified as being complementary to and not repetitive of uncertainty information. Data with high uncertainty are not poor quality if a valid estimate of the uncertainty is available. For CDRs and their applications, the error correlation properties across spatio-temporal scales present important challenges that are not fully solved. Error effects that are negligible in the uncertainty of a single pixel may dominate uncertainty in the large-scale and long-term. A further principle is that uncertainty estimates should themselves be validated. The concepts of estimating and propagating uncertainty are generally acknowledged in geophysical sciences, but less widely practised in Earth observation and development of CDRs. Uncertainty in a CDR depends in part (and usually significantly) on the error covariance of the radiances and auxiliary data used in the retrieval. Typically, error covariance information is not available in the fundamental CDR (FCDR) (i.e., with the level-1 radiances), since provision of adequate level-1 uncertainty information is not yet standard practice. Those deriving CDRs thus cannot propagate the radiance uncertainty to their geophysical products. The FIDUCEO project (www.fiduceo.eu) is

  19. Managing Measurement Uncertainty in Building Acoustics

    Directory of Open Access Journals (Sweden)

    Chiara Scrosati

    2015-12-01

    Full Text Available In general, uncertainties should preferably be determined following the principles laid down in ISO/IEC Guide 98-3, the Guide to the expression of uncertainty in measurement (GUM:1995). According to current knowledge, it seems impossible to formulate these models for the different quantities in building acoustics. Therefore, the concepts of repeatability and reproducibility are necessary to determine the uncertainty of building acoustics measurements. This study shows the uncertainty of field measurements of a lightweight wall, a heavyweight floor, a façade with a single-glazing window and a façade with a double-glazing window, analyzed by a Round Robin Test (RRT) conducted in a full-scale experimental building at ITC-CNR (Construction Technologies Institute of the National Research Council of Italy). The single-number quantities and their uncertainties were evaluated in both narrow and enlarged ranges, and it was shown that including or excluding the low frequencies leads to very significant differences, except in the case of the sound insulation of façades with a single-glazing window. The results obtained in these RRTs were compared with other results from the literature, which confirm the increase of the uncertainty of single-number quantities due to the low-frequency extension. Having stated the measurement uncertainty for a single measurement, in building acoustics it is also very important to deal with sampling for the purposes of classification of buildings or building units. Therefore, this study also shows an application of the sampling included in the Italian Standard on the acoustic classification of building units on a serial-type building consisting of 47 building units. It was found that the greatest variability is observed in the façade, and it depends both on the great variability of window typologies and on workmanship. Finally, it is suggested how to manage the uncertainty in building acoustics, both for one single
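
    A minimal sketch of how repeatability and reproducibility are typically extracted from RRT data with a one-way analysis of variance (the matrix of results below is invented, not the ITC-CNR data):

        import numpy as np

        # Round-robin results: rows = laboratories, columns = repeat measurements
        # of one single-number quantity (in dB); all values hypothetical.
        x = np.array([[52.1, 52.4, 51.9],
                      [53.0, 52.8, 53.3],
                      [51.5, 51.8, 51.6],
                      [52.6, 52.2, 52.5]])

        n_lab, n_rep = x.shape
        s_r2 = x.var(axis=1, ddof=1).mean()                          # repeatability variance
        s_L2 = max(x.mean(axis=1).var(ddof=1) - s_r2 / n_rep, 0.0)   # between-lab variance
        s_R  = np.sqrt(s_r2 + s_L2)                                  # reproducibility std dev

        print(f"s_r = {np.sqrt(s_r2):.2f} dB, s_R = {s_R:.2f} dB")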

  20. Using interpolation to estimate system uncertainty in gene expression experiments.

    Directory of Open Access Journals (Sweden)

    Lee J Falin

    Full Text Available The widespread use of high-throughput experimental assays designed to measure the entire complement of a cell's genes or gene products has led to vast stores of data that are extremely plentiful in terms of the number of items they can measure in a single sample, yet often sparse in the number of samples per experiment due to their high cost. This often leads to datasets where the number of treatment levels or time points sampled is limited, or where there are very small numbers of technical and/or biological replicates. Here we introduce a novel algorithm to quantify the uncertainty in the unmeasured intervals between biological measurements taken across a set of quantitative treatments. The algorithm provides a probabilistic distribution of possible gene expression values within unmeasured intervals, based on a plausible biological constraint. We show how quantification of this uncertainty can be used to guide researchers in further data collection by identifying which samples would likely add the most information to the system under study. Although the context for developing the algorithm was gene expression measurements taken over a time series, the approach can be readily applied to any set of quantitative systems biology measurements taken following quantitative (i.e. non-categorical) treatments. In principle, the method could also be applied to combinations of treatments, in which case it could greatly simplify the task of exploring the large combinatorial space of future possible measurements.
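
    One plausible reading of the algorithm in miniature (the data, the smooth perturbation model and its role as the "plausible biological constraint" are all invented stand-ins for the paper's construction): sample many curves that pass through the measurements, measure the envelope width in each unmeasured interval, and propose the next sample where the envelope is widest.

        import numpy as np

        rng = np.random.default_rng(5)

        # Measured expression values at a few treatment levels (hypothetical).
        t_obs = np.array([0.0, 2.0, 6.0, 10.0])
        y_obs = np.array([1.0, 3.5, 4.0, 2.0])

        t_grid = np.linspace(0.0, 10.0, 101)
        base = np.interp(t_grid, t_obs, y_obs)                       # pinned to the data
        w = np.minimum.reduce([np.abs(t_grid - t) for t in t_obs])   # 0 at each datum

        # Monte Carlo over smooth plausible curves: low-frequency perturbations
        # that vanish at the observations.
        curves = np.empty((2000, t_grid.size))
        for i in range(curves.shape[0]):
            amp = rng.normal(0.0, 0.4, 3)
            pert = sum(a * np.sin((k + 1) * np.pi * t_grid / 10.0)
                       for k, a in enumerate(amp))
            curves[i] = base + pert * np.tanh(w)

        spread = curves.max(axis=0) - curves.min(axis=0)   # envelope width per point
        print(f"largest uncertainty near t = {t_grid[spread.argmax()]:.1f}; "
              f"suggest sampling there next")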

  1. Data-Driven Model Uncertainty Estimation in Hydrologic Data Assimilation

    Science.gov (United States)

    Pathiraja, S.; Moradkhani, H.; Marshall, L.; Sharma, A.; Geenens, G.

    2018-02-01

    The increasing availability of earth observations necessitates mathematical methods to optimally combine such data with hydrologic models. Several algorithms exist for such purposes, under the umbrella of data assimilation (DA). However, DA methods are often applied in a suboptimal fashion for complex real-world problems, due largely to several practical implementation issues. One such issue is error characterization, which is known to be critical for a successful assimilation. Mischaracterized errors lead to suboptimal forecasts, and in the worst case, to degraded estimates even compared to the no assimilation case. Model uncertainty characterization has received little attention relative to other aspects of DA science. Traditional methods rely on subjective, ad hoc tuning factors or parametric distribution assumptions that may not always be applicable. We propose a novel data-driven approach (named SDMU) to model uncertainty characterization for DA studies where (1) the system states are partially observed and (2) minimal prior knowledge of the model error processes is available, except that the errors display state dependence. It includes an approach for estimating the uncertainty in hidden model states, with the end goal of improving predictions of observed variables. The SDMU is therefore suited to DA studies where the observed variables are of primary interest. Its efficacy is demonstrated through a synthetic case study with low-dimensional chaotic dynamics and a real hydrologic experiment for one-day-ahead streamflow forecasting. In both experiments, the proposed method leads to substantial improvements in the hidden states and observed system outputs over a standard method involving perturbation with Gaussian noise.
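
    SDMU's estimator is not described in the record. The sketch below merely contrasts a state-dependent model-error perturbation with the fixed Gaussian noise of the standard method, inside a toy stochastic-EnKF analysis step; the model, the operator h, and all numbers are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n_ens = 50

# Toy forecast ensemble of a hidden state (e.g., soil moisture) and an
# observed variable (e.g., streamflow) linked by a simple operator h(x).
x_f = rng.normal(0.5, 0.1, n_ens)     # hidden-state forecast ensemble
h = lambda x: 2.0 * x**1.5            # toy observation operator
y_obs, r = 1.9, 0.05**2               # observation and its error variance

# State-dependent model-error perturbation: larger states receive larger
# perturbations. This stands in for the error structure SDMU estimates
# from data, instead of a fixed ad hoc Gaussian inflation.
x_f = x_f + rng.normal(0.0, 0.1 * np.abs(x_f))

# Standard stochastic-EnKF analysis step on the observed variable.
y_f = h(x_f) + rng.normal(0.0, np.sqrt(r), n_ens)
k = np.cov(x_f, y_f)[0, 1] / (np.var(y_f) + 1e-12)
x_a = x_f + k * (y_obs - y_f)

print(f"forecast mean {x_f.mean():.3f} -> analysis mean {x_a.mean():.3f}")
```

    The point of the contrast is the line that perturbs x_f: with mischaracterized (e.g., constant-variance) noise there, the gain k is misestimated and the analysis can degrade, as the abstract warns.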

  2. Structural reliability in context of statistical uncertainties and modelling discrepancies

    International Nuclear Information System (INIS)

    Pendola, Maurice

    2000-01-01

    Structural reliability methods have been greatly improved in recent years and have shown their ability to deal with uncertainties during the design stage or to optimize the operation and maintenance of industrial installations. They are based on a mechanical model of the structural behavior according to the failure modes considered, and on a probabilistic representation of the input parameters of this model. In practice, only limited statistical information is available to build the probabilistic representation, and mechanical models of different levels of sophistication may be introduced. Thus, besides the physical randomness, other uncertainties enter such analyses. The aim of this work is threefold: (1) to propose a methodology able to characterize the statistical uncertainties due to the limited number of data, in order to take them into account in reliability analyses; the resulting reliability index measures the confidence in the structure given the statistical information available; (2) to show a methodology that yields reliability results associated with a particular mechanical model while using a less sophisticated one, the objective being to reduce the computational effort required by the reference model; (3) to propose partial safety factors that evolve as a function of the number of statistical data available and of the sophistication level of the mechanical model used. The concepts are illustrated for a welded pipe and for a natural-draught cooling tower. The results show the value of these methodologies in an industrial context. [fr]
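
    As background for the reliability index mentioned above (generic practice, not the thesis's own methodology), a minimal Monte Carlo sketch of estimating a failure probability and the corresponding generalized reliability index for a toy welded-pipe limit state, assuming NumPy and SciPy are available:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

# Toy limit state: failure when the load effect S exceeds the
# resistance R. The distribution parameters are hypothetical and would
# in practice be fitted from the (limited) statistical data available.
n = 1_000_000
R = rng.lognormal(mean=np.log(60.0), sigma=0.08, size=n)  # resistance (MPa)
S = rng.normal(40.0, 5.0, size=n)                         # load effect (MPa)

pf = np.mean(R - S <= 0.0)    # failure-probability estimate
beta = -norm.ppf(pf)          # generalized reliability index

print(f"Pf ≈ {pf:.2e}, beta ≈ {beta:.2f}")
```

    Rerunning this with distribution parameters re-fitted to fewer or more data points shows how beta moves with the amount of statistical information, which is the sensitivity the thesis's evolving partial safety factors are meant to capture.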

  3. Asymmetric Uncertainty Expression for High Gradient Aerodynamics

    Science.gov (United States)

    Pinier, Jeremy T

    2012-01-01

    When the physics of the flow around an aircraft changes very abruptly either in time or space (e.g., flow separation/reattachment, boundary layer transition, unsteadiness, shocks, etc.), measurements performed in a simulated environment such as a wind tunnel test or a computational simulation will most likely mispredict the exact location of where (or when) the change in physics happens. There are many reasons for this, including the error introduced by simulating a real system at a smaller scale and at non-ideal conditions, or the error due to turbulence models in a computational simulation. The uncertainty analysis principles that have been developed and are being implemented today do not fully account for uncertainty in the knowledge of the location of abrupt physics changes or sharp gradients, leading to a potentially underestimated uncertainty in those areas. To address this problem, this paper proposes a new asymmetric aerodynamic uncertainty expression containing an extra term that accounts for a phase uncertainty, the magnitude of which is emphasized in high-gradient aerodynamic regions. Additionally, based on previous work, a method for dispersing aerodynamic data within asymmetric uncertainty bounds in a more realistic way has been developed for use within Monte Carlo-type analyses.
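
    The paper's exact expression is not reproduced in the record. The following sketch only illustrates the idea of adding a phase (abscissa) term when dispersing data within asymmetric bounds in a Monte Carlo analysis; the lift curve, the bounds, and the 0.5-degree phase standard deviation are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical lift curve with an abrupt stall break at alpha0.
alpha = np.linspace(0.0, 20.0, 201)   # angle of attack (deg)
alpha0 = 14.0
cl = np.where(alpha < alpha0, 0.1 * alpha, 1.4 - 0.05 * (alpha - alpha0))

# Asymmetric ordinate bounds (hypothetical): +0.03 / -0.06 on CL.
u_plus, u_minus = 0.03, 0.06

n_draws = 500
dispersed = np.empty((n_draws, alpha.size))
for i in range(n_draws):
    # Phase term: shift the whole curve in alpha to represent uncertainty
    # in *where* the abrupt change occurs (assumed std. dev. 0.5 deg).
    shift = rng.normal(0.0, 0.5)
    shifted = np.interp(alpha, alpha + shift, cl)
    # Ordinate term: draw within the asymmetric bounds via a scaled
    # normal, using the upper bound for positive draws, lower otherwise.
    z = rng.normal(0.0, 0.5)
    dispersed[i] = shifted + np.where(z >= 0.0, z * u_plus, z * u_minus)

# The sample-to-sample spread peaks near the high-gradient stall region.
print("max spread at alpha =", alpha[np.argmax(dispersed.std(axis=0))])
```

    The spread peaks near the stall break because the phase shift moves the location of the abrupt change, which is precisely the effect a symmetric ordinate-only uncertainty band would underestimate.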

  4. Environmental impact and risk assessments and key factors contributing to the overall uncertainties.

    Science.gov (United States)

    Salbu, Brit

    2016-01-01

    There is a significant number of nuclear and radiological sources that have contributed, are still contributing, or have the potential to contribute to radioactive contamination of the environment in the future. To protect the environment from radioactive contamination, impact and risk assessments are performed prior to or during a release event, in the short or long term after deposition, or before and after the implementation of countermeasures. When environmental impact and risks are assessed, however, a series of factors contributes to the overall uncertainties. To provide environmental impact and risk assessments, information on processes, kinetics and a series of input variables is needed. Problems such as variability, questionable assumptions, gaps in knowledge, extrapolations and poor conceptual model structures add to these factors, producing large and often unacceptable uncertainties in impact and risk assessments. Information on the source term and the release scenario is an essential starting point in impact and risk models; the source determines the activity concentrations and atom ratios of the radionuclides released, while the release scenario determines the physico-chemical forms of released radionuclides, such as particle size distribution, structure and density. Releases will most often contain other contaminants such as metals, and due to interactions, contaminated sites should be assessed as a multiple-stressor scenario. Following deposition, a series of stressors, interactions and processes will influence the ecosystem transfer of radionuclide species and thereby influence biological uptake (toxicokinetics) and responses (toxicodynamics) in exposed organisms. Due to the variety of biological species, extrapolation is frequently needed to fill gaps in knowledge, e.g., from effects to no effects, from effects in one organism to others, from one stressor to mixtures. Most toxicity tests are, however, performed as short-term exposure of adult organisms

  5. Uncertainties of Molecular Structural Parameters

    International Nuclear Information System (INIS)

    Császár, Attila G.

    2014-01-01

    performed. Simply put, there are significant disagreements between the same bond lengths measured by different techniques. These disagreements are, however, systematic and can be computed via techniques of quantum chemistry that deal not only with the motions of the electrons (electronic structure theory) but also with the often large-amplitude motions of the nuclei. As to the relevant quantum chemical computations, since about 1970 electronic structure theory has been able to make quantitative predictions and thus challenge (or even overrule) many experiments. Nevertheless, quantitative agreement of quantum chemical results with experiment can only be expected when the motions of the atoms are also considered. In the fourth age of quantum chemistry we live in an era in which one can quantitatively bridge the gap between ‘effective’ experimental structures and ‘equilibrium’ computed structures, even at elevated temperatures of interest, thus minimizing any real uncertainties of structural parameters. These connections are extremely important, as they help in understanding the true uncertainty of measured structural parameters. Traditionally it is microwave (MW) and millimeter-wave (MMW) spectroscopy, as well as gas-phase electron diffraction (GED), that have yielded the most accurate structural parameters of molecules. The accuracy of MW and GED experiments approached about 0.001 Å and 0.1° under ideal circumstances, and was worse, sometimes considerably worse, in the less-than-ideal situations encountered much more often. Quantum chemistry can define both highly accurate equilibrium structures (the so-called Born-Oppenheimer, r_e^BO, and semiexperimental, r_e^SE, structures) and, via detailed investigation of molecular motions, accurate temperature-dependent rovibrationally averaged structures. Determining structures is still a rich field for research; understanding the measured or computed uncertainties of structures and structural parameters is still a challenge, but there are firm and well
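
    For readers unfamiliar with the semiexperimental structure named above, the standard definition (general background in the field, not specific to this record) is that experimental ground-state rotational constants are corrected by computed vibration-rotation contributions before the geometry is fitted:

```latex
% Semiexperimental equilibrium rotational constant (standard definition;
% non-degenerate vibrational modes assumed for simplicity):
\begin{equation}
  B_e^{\mathrm{SE}} \;=\; B_0^{\mathrm{exp}} \;+\; \frac{1}{2}\sum_i \alpha_i^{B}
\end{equation}
% The structure r_e^SE is then obtained by a least-squares fit of the
% molecular geometry to the B_e^SE values of several isotopologues.
```

    Here B_0^exp is the experimental ground-state rotational constant and the alpha_i^B are the computed vibration-rotation interaction constants; the quality of the correction is one place where the computed and measured uncertainties discussed above meet.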

  6. A Study on the uncertainty and sensitivity in numerical simulation of parametric roll

    DEFF Research Database (Denmark)

    Choi, Ju-hyuck; Nielsen, Ulrik Dam; Jensen, Jørgen Juncher

    2016-01-01