WorldWideScience

Sample records for quantifying aggregated uncertainty

  1. Quantifying aggregated uncertainty in Plasmodium falciparum malaria prevalence and populations at risk via efficient space-time geostatistical joint simulation.

    Science.gov (United States)

    Gething, Peter W; Patil, Anand P; Hay, Simon I

    2010-04-01

    Risk maps estimating the spatial distribution of infectious diseases are required to guide public health policy from local to global scales. The advent of model-based geostatistics (MBG) has allowed these maps to be generated in a formal statistical framework, providing robust metrics of map uncertainty that enhance their utility for decision-makers. In many settings, decision-makers require spatially aggregated measures over large regions such as the mean prevalence within a country or administrative region, or national populations living under different levels of risk. Existing MBG mapping approaches provide suitable metrics of local uncertainty--the fidelity of predictions at each mapped pixel--but have not been adapted for measuring uncertainty over large areas, due largely to a series of fundamental computational constraints. Here the authors present a new efficient approximating algorithm that can generate, for the first time, the joint simulation of prevalence values across the very large prediction spaces needed for global scale mapping. This new approach is implemented in conjunction with an established model for P. falciparum, allowing robust estimates of mean prevalence at any specified level of spatial aggregation. The model is used to provide estimates of national populations at risk under three policy-relevant prevalence thresholds, along with accompanying model-based measures of uncertainty. By overcoming previously unchallenged computational barriers, this study illustrates how MBG approaches, already at the forefront of infectious disease mapping, can be extended to provide large-scale aggregate measures appropriate for decision-makers.
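
    To make the aggregation step described above concrete, the sketch below draws joint (spatially correlated) realizations of pixel-level prevalence and summarizes a population-weighted regional mean. The covariance model, population figures and all parameter values are illustrative assumptions, not the authors' algorithm or data.

```python
# Minimal sketch (not the authors' algorithm): joint simulation of pixel-level
# prevalence from a multivariate normal on the logit scale, then aggregation to
# a population-weighted regional mean with uncertainty.
import numpy as np

rng = np.random.default_rng(0)

n_pix = 500                                     # pixels in the region
coords = rng.uniform(0, 100, size=(n_pix, 2))   # toy pixel coordinates (km)
pop = rng.integers(100, 5000, size=n_pix)       # toy population per pixel

# Toy exponential spatial covariance of logit-prevalence
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
cov = 0.5 * np.exp(-d / 20.0)
mean_logit = np.full(n_pix, -1.5)               # posterior mean on logit scale

# Joint (not pixel-by-pixel) realizations preserve spatial covariance,
# which is what aggregate summaries require.
n_real = 1000
L = np.linalg.cholesky(cov + 1e-8 * np.eye(n_pix))
logit_real = mean_logit + rng.standard_normal((n_real, n_pix)) @ L.T
prev_real = 1.0 / (1.0 + np.exp(-logit_real))

# Population-weighted mean prevalence per realization -> aggregate uncertainty
regional_mean = prev_real @ pop / pop.sum()
print("mean prevalence: %.3f, 95%% CI: (%.3f, %.3f)"
      % (regional_mean.mean(), *np.percentile(regional_mean, [2.5, 97.5])))
```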

  2. Quantifying the uncertainty in heritability.

    Science.gov (United States)

    Furlotte, Nicholas A; Heckerman, David; Lippert, Christoph

    2014-05-01

    The use of mixed models to determine narrow-sense heritability and related quantities such as SNP heritability has received much recent attention. Less attention has been paid to the inherent variability in these estimates. One approach for quantifying variability in estimates of heritability is a frequentist approach, in which heritability is estimated using maximum likelihood and its variance is quantified through an asymptotic normal approximation. An alternative approach is to quantify the uncertainty in heritability through its Bayesian posterior distribution. In this paper, we develop the latter approach, make it computationally efficient and compare it to the frequentist approach. We show theoretically that, for a sufficiently large sample size and intermediate values of heritability, the two approaches provide similar results. Using the Atherosclerosis Risk in Communities cohort, we show empirically that the two approaches can give different results and that the variance/uncertainty can remain large.
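
    A rough sketch of the Bayesian route is given below: under the standard mixed model y ~ N(0, s2(h2 K + (1 - h2) I)), the likelihood is evaluated on a grid of h2 after an eigendecomposition of the relationship matrix K. The simulated genotypes, flat prior and profiling of s2 are simplifying assumptions, so this only approximates the full posterior developed in the paper.

```python
# Grid-based approximate posterior for narrow-sense heritability h2 (sketch).
import numpy as np

rng = np.random.default_rng(1)
n = 400
G = rng.standard_normal((n, 800))             # toy standardized genotypes
K = G @ G.T / G.shape[1]                      # genetic relationship matrix

h2_true, s2_true = 0.5, 1.0
cov = s2_true * (h2_true * K + (1 - h2_true) * np.eye(n))
y = rng.multivariate_normal(np.zeros(n), cov)

# Eigendecomposition makes the likelihood cheap to evaluate on a grid of h2
vals, vecs = np.linalg.eigh(K)
z2 = (vecs.T @ y) ** 2

h2_grid = np.linspace(0.01, 0.99, 99)
loglik = np.empty_like(h2_grid)
for i, h2 in enumerate(h2_grid):
    d = h2 * vals + (1 - h2)                  # eigenvalues of h2*K + (1-h2)*I
    s2_hat = np.mean(z2 / d)                  # profiled total variance
    loglik[i] = -0.5 * (np.sum(np.log(d)) + n * np.log(s2_hat) + n)

post = np.exp(loglik - loglik.max())
post /= post.sum()
mean_h2 = np.sum(h2_grid * post)
lo, hi = h2_grid[np.searchsorted(np.cumsum(post), [0.025, 0.975])]
print(f"posterior mean h2 = {mean_h2:.2f}, 95% interval = ({lo:.2f}, {hi:.2f})")
```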

  3. Aggregate Uncertainty, Money and Banking

    OpenAIRE

    Hongfei Sun

    2006-01-01

    This paper studies the problem of monitoring the monitor in a model of money and banking with aggregate uncertainty. It shows that when inside money is required as a means of bank loan repayment, a market of inside money is entailed at the repayment stage and generates information-revealing prices that perfectly discipline the bank. The incentive problem of a bank is costlessly overcome simply by involving inside money in repayment. Inside money distinguishes itself from outside money by its ...

  4. WASH-1400: quantifying the uncertainties

    International Nuclear Information System (INIS)

    Erdmann, R.C.; Leverenz, F.L. Jr.; Lellouche, G.S.

    1981-01-01

    The purpose of this paper is to focus on the limitations of the WASH-1400 analysis in estimating the risk from light water reactors (LWRs). This assessment attempts to modify the quantification of the uncertainty in, and the estimate of, risk as presented by the RSS (Reactor Safety Study). 8 refs

  5. Quantifying uncertainty in nuclear analytical measurements

    International Nuclear Information System (INIS)

    2004-07-01

    The lack of international consensus on the expression of uncertainty in measurements was recognised by the late 1970s and led, after the issuance of a series of rather generic recommendations, to the publication of a general guide, known as GUM, the Guide to the Expression of Uncertainty in Measurement. This publication, issued in 1993, was based on co-operation over several years by the Bureau International des Poids et Mesures, the International Electrotechnical Commission, the International Federation of Clinical Chemistry, the International Organization for Standardization (ISO), the International Union of Pure and Applied Chemistry, the International Union of Pure and Applied Physics and the Organisation internationale de metrologie legale. The purpose was to promote full information on how uncertainty statements are arrived at and to provide a basis for harmonized reporting and the international comparison of measurement results. The need to provide more specific guidance to different measurement disciplines was soon recognized and the field of analytical chemistry was addressed by EURACHEM in 1995 in the first edition of a guidance report on Quantifying Uncertainty in Analytical Measurements, produced by a group of experts from the field. That publication translated the general concepts of the GUM into specific applications for analytical laboratories and illustrated the principles with a series of selected examples as a didactic tool. Based on feedback from actual practice, the EURACHEM publication was extensively reviewed in 1997-1999 under the auspices of the Co-operation on International Traceability in Analytical Chemistry (CITAC), and a second edition was published in 2000. Still, except for a single example on the measurement of radioactivity in GUM, the field of nuclear and radiochemical measurements was not covered. The explicit requirement of ISO standard 17025:1999, General Requirements for the Competence of Testing and Calibration

  6. Hump-shape Uncertainty, Agency Costs and Aggregate Fluctuations

    OpenAIRE

    Lee, Gabriel; Kevin, Salyer; Strobel, Johannes

    2016-01-01

    Uncertainty shocks previously measured using U.S. data show a hump-shaped time path: uncertainty rises for two years before it declines. The current literature on the effects of uncertainty on macroeconomics, including housing, has not accounted for this observation. Consequently, the literature on uncertainty and macroeconomics is divided on the effects and the propagation mechanism of uncertainty on aggregate fluctuations. This paper shows that when uncertainty rises and falls over time, th...

  7. A New Framework for Quantifying Lidar Uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Newman, Jennifer, F.; Clifton, Andrew; Bonin, Timothy A.; Churchfield, Matthew J.

    2017-03-24

    As wind turbine sizes increase and wind energy expands to more complex and remote sites, remote sensing devices such as lidars are expected to play a key role in wind resource assessment and power performance testing. The switch to remote sensing devices represents a paradigm shift in the way the wind industry typically obtains and interprets measurement data for wind energy. For example, the measurement techniques and sources of uncertainty for a remote sensing device are vastly different from those associated with a cup anemometer on a meteorological tower. Current IEC standards discuss uncertainty due to mounting, calibration, and classification of the remote sensing device, among other parameters. Values of the uncertainty are typically given as a function of the mean wind speed measured by a reference device. However, real-world experience has shown that lidar performance is highly dependent on atmospheric conditions, such as wind shear, turbulence, and aerosol content. At present, these conditions are not directly incorporated into the estimated uncertainty of a lidar device. In this presentation, we propose the development of a new lidar uncertainty framework that adapts to current flow conditions and more accurately represents the actual uncertainty inherent in lidar measurements under different conditions. In this new framework, sources of uncertainty are identified for estimation of the line-of-sight wind speed and reconstruction of the three-dimensional wind field. These sources are then related to physical processes caused by the atmosphere and lidar operating conditions. The framework is applied to lidar data from an operational wind farm to assess the ability of the framework to predict errors in lidar-measured wind speed.

  8. Quantifying uncertainty in observational rainfall datasets

    Science.gov (United States)

    Lennard, Chris; Dosio, Alessandro; Nikulin, Grigory; Pinto, Izidine; Seid, Hussen

    2015-04-01

    The CO-ordinated Regional Downscaling Experiment (CORDEX) has to date seen the publication of at least ten journal papers that examine the African domain during 2012 and 2013. Five of these papers consider Africa generally (Nikulin et al. 2012, Kim et al. 2013, Hernandes-Dias et al. 2013, Laprise et al. 2013, Panitz et al. 2013) and five have regional foci: Tramblay et al. (2013) on Northern Africa, Mariotti et al. (2014) and Gbobaniyi et al. (2013) on West Africa, Endris et al. (2013) on East Africa and Kalagnoumou et al. (2013) on southern Africa. There are also a further three papers, known to the authors, that are under review. These papers all use observed rainfall and/or temperature data to evaluate/validate the regional model output and often proceed to assess projected changes in these variables due to climate change in the context of these observations. The most popular reference rainfall data used are the CRU, GPCP, GPCC, TRMM and UDEL datasets. However, as Kalagnoumou et al. (2013) point out, there are many other rainfall datasets available for consideration, for example, CMORPH, FEWS, TAMSAT & RIANNAA, TAMORA and the WATCH & WATCH-DEI data. They, with others (Nikulin et al. 2012, Sylla et al. 2012), show that the observed datasets can have a very wide spread at a particular space-time coordinate. As more ground, space and reanalysis-based rainfall products become available, all of which use different methods to produce precipitation data, the selection of reference data is becoming an important factor in model evaluation. A number of factors can contribute to uncertainty in terms of the reliability and validity of the datasets, such as radiance conversion algorithms, the quantity and quality of available station data, interpolation techniques and blending methods used to combine satellite and gauge-based products. However, to date no comprehensive study has been performed to evaluate the uncertainty in these observational datasets. We assess 18 gridded

  9. Quantifying data worth toward reducing predictive uncertainty

    Science.gov (United States)

    Dausman, A.M.; Doherty, J.; Langevin, C.D.; Sukop, M.C.

    2010-01-01

    The present study demonstrates a methodology for optimization of environmental data acquisition. Based on the premise that the worth of data increases in proportion to its ability to reduce the uncertainty of key model predictions, the methodology can be used to compare the worth of different data types, gathered at different locations within study areas of arbitrary complexity. The method is applied to a hypothetical nonlinear, variable density numerical model of salt and heat transport. The relative utilities of temperature and concentration measurements at different locations within the model domain are assessed in terms of their ability to reduce the uncertainty associated with predictions of movement of the salt water interface in response to a decrease in fresh water recharge. In order to test the sensitivity of the method to nonlinear model behavior, analyses were repeated for multiple realizations of system properties. Rankings of observation worth were similar for all realizations, indicating robust performance of the methodology when employed in conjunction with a highly nonlinear model. The analysis showed that while concentration and temperature measurements can both aid in the prediction of interface movement, concentration measurements, especially when taken in proximity to the interface at locations where the interface is expected to move, are of greater worth than temperature measurements. Nevertheless, it was also demonstrated that pairs of temperature measurements, taken in strategic locations with respect to the interface, can also lead to more precise predictions of interface movement. Journal compilation © 2010 National Ground Water Association.
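
    The linear (first-order) version of the data-worth idea can be sketched as follows: the worth of a candidate observation is the reduction it produces in the variance of a key prediction under a linear-Bayes update. The matrices below are toy stand-ins, not the salt-and-heat-transport model used in the study.

```python
# Hedged sketch of linear data-worth ranking: how much does each candidate
# observation reduce the variance of a prediction?
import numpy as np

rng = np.random.default_rng(2)
n_par = 20
C = np.diag(rng.uniform(0.5, 2.0, n_par))      # prior parameter covariance
s = rng.standard_normal(n_par)                 # prediction sensitivity ds/dp

def predictive_variance(C, s):
    return float(s @ C @ s)

def posterior_cov(C, X, R):
    """Linear-Bayes update: C' = C - C X^T (X C X^T + R)^-1 X C."""
    S = X @ C @ X.T + R
    K = C @ X.T @ np.linalg.inv(S)
    return C - K @ X @ C

# Candidate observations, each with its own sensitivity row and noise variance
candidates = {f"obs_{i}": rng.standard_normal((1, n_par)) for i in range(5)}
noise = 0.1 * np.eye(1)

base = predictive_variance(C, s)
worth = {name: base - predictive_variance(posterior_cov(C, X, noise), s)
         for name, X in candidates.items()}
for name, w in sorted(worth.items(), key=lambda kv: -kv[1]):
    print(f"{name}: predictive-variance reduction = {w:.3f}")
```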

  10. Quantifying uncertainties in wind energy assessment

    Science.gov (United States)

    Patlakas, Platon; Galanis, George; Kallos, George

    2015-04-01

    The constant rise of wind energy production and the subsequent penetration in global energy markets during the last decades resulted in new site selections with various types of problems. Such problems arise due to the variability and the uncertainty of wind speed. The study of the lower and upper tails of the wind speed distribution may support the quantification of these uncertainties. Such approaches, focused on extreme wind conditions or periods below the energy production threshold, are necessary for a better management of operations. Towards this direction, different methodologies are presented for the credible evaluation of potential non-frequent/extreme values for these environmental conditions. The approaches used take into consideration the structural design of the wind turbines according to their lifespan, the turbine failures, the time needed for repairs as well as the energy production distribution. In this work, a multi-parametric approach for studying extreme wind speed values will be discussed based on tools of Extreme Value Theory. In particular, the study is focused on extreme wind speed return periods and the persistence of no energy production based on a 10-year hindcast dataset from a weather modeling system. More specifically, two methods (Annual Maxima and Peaks Over Threshold) were used for the estimation of extreme wind speeds and their recurrence intervals. Additionally, two different methodologies (intensity given duration and duration given intensity, both based on the Annual Maxima method) were applied to calculate the duration of extreme events, combined with their intensity as well as the event frequency. The obtained results show that the proposed approaches converge, at least on the main findings, for each case. It is also remarkable that, despite the moderate wind speed climate of the area, several consecutive days of no energy production are observed.
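
    The Annual Maxima step mentioned above can be illustrated with a generalized extreme value (GEV) fit and a return-level calculation; the synthetic annual maxima below replace the 10-year hindcast, so the numbers are purely illustrative.

```python
# Annual Maxima sketch: fit a GEV to yearly wind-speed maxima, read off
# return levels for a few return periods. Toy data, not the paper's results.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(3)
annual_max = 18 + 3 * rng.gumbel(size=30)      # toy 30-year annual maxima (m/s)

shape, loc, scale = genextreme.fit(annual_max)
for T in (10, 50, 100):                        # return periods in years
    level = genextreme.ppf(1 - 1 / T, shape, loc=loc, scale=scale)
    print(f"{T}-year return wind speed: {level:.1f} m/s")
```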

  11. A Novel Method to Quantify Soil Aggregate Stability by Measuring Aggregate Bond Energies

    Science.gov (United States)

    Efrat, Rachel; Rawlins, Barry G.; Quinton, John N.; Watts, Chris W.; Whitmore, Andy P.

    2016-04-01

    Soil aggregate stability is a key indicator of soil quality because it controls physical, biological and chemical functions important in cultivated soils. Micro-aggregates are responsible for the long-term sequestration of carbon in soil, and therefore determine soil's role in the carbon cycle. It is thus vital that techniques to measure aggregate stability are accurate, consistent and reliable, in order to appropriately manage and monitor soil quality, and to develop our understanding and estimates of soil as a carbon store for appropriate incorporation in carbon cycle models. Practices used to assess the stability of aggregates vary in sample preparation, operational technique and unit of results. They use proxies and lack quantification. Conflicting results are therefore drawn between projects that do not provide methodological or resultant comparability. Typical modern stability tests suspend aggregates in water and monitor fragmentation upon exposure to an unquantified amount of ultrasonic energy, utilising a laser granulometer to measure the change in mean weight diameter. In this project a novel approach has been developed, based on that of Zhu et al. (2009), to accurately quantify the stability of aggregates by specifically measuring their bond energies. The bond energies are measured using a combination of calorimetry and a high-powered ultrasonic probe with a computable output function. Temperature change during sonication is monitored by an array of probes, which enables calculation of the energy spent heating the system (Ph). Our novel technique suspends aggregates in the heavy liquid lithium heteropolytungstate, as opposed to water, to avoid exposing aggregates to immeasurable disruptive energy sources due to cavitation, collisions and clay swelling. Mean weight diameter is measured by a laser granulometer to monitor aggregate breakdown after successive periods of calculated ultrasonic energy input (Pi), until complete dispersion is achieved and bond

  12. Macro Expectations, Aggregate Uncertainty, and Expected Term Premia

    DEFF Research Database (Denmark)

    Dick, Christian D.; Schmeling, Maik; Schrimpf, Andreas

    2013-01-01

    as well as aggregate macroeconomic uncertainty at the level of individual forecasters. We find that expected term premia are (i) time-varying and reasonably persistent, (ii) strongly related to expectations about future output growth, and (iii) positively affected by uncertainty about future output growth...... and inflation rates. Expectations about real macroeconomic variables seem to matter more than expectations about nominal factors. Additional findings on term structure factors suggest that the level and slope factor capture information related to uncertainty about real and nominal macroeconomic prospects...

  13. GRAPH THEORY APPROACH TO QUANTIFY UNCERTAINTY OF PERFORMANCE MEASURES

    Directory of Open Access Journals (Sweden)

    Sérgio D. Sousa

    2015-03-01

    Full Text Available In this work, the performance measurement process is studied to quantify the uncertainty induced in the resulting performance measure (PM). To that end, the causes of uncertainty are identified, analysing the activities undertaken in the three following stages of the performance measurement process: design and implementation, data collection and record, and determination and analysis. A quantitative methodology based on graph theory and on the sources of uncertainty of the performance measurement process is used to calculate an uncertainty index to evaluate the level of uncertainty of a given PM (or key performance indicator). An application example is presented. The quantification of PM uncertainty could contribute to better representing the risk associated with a given decision and also to improving the PM to increase its precision and reliability.

  14. Quantifying and Reducing Curve-Fitting Uncertainty in Isc

    Energy Technology Data Exchange (ETDEWEB)

    Campanelli, Mark; Duck, Benjamin; Emery, Keith

    2015-06-14

    Current-voltage (I-V) curve measurements of photovoltaic (PV) devices are used to determine performance parameters and to establish traceable calibration chains. Measurement standards specify localized curve fitting methods, e.g., straight-line interpolation/extrapolation of the I-V curve points near short-circuit current, Isc. By considering such fits as statistical linear regressions, uncertainties in the performance parameters are readily quantified. However, the legitimacy of such a computed uncertainty requires that the model be a valid (local) representation of the I-V curve and that the noise be sufficiently well characterized. Using more data points often has the advantage of lowering the uncertainty. However, more data points can make the uncertainty in the fit arbitrarily small, and this fit uncertainty misses the dominant residual uncertainty due to so-called model discrepancy. Using objective Bayesian linear regression for straight-line fits for Isc, we investigate an evidence-based method to automatically choose data windows of I-V points with reduced model discrepancy. We also investigate noise effects. Uncertainties, aligned with the Guide to the Expression of Uncertainty in Measurement (GUM), are quantified throughout.
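
    A simplified version of the straight-line fit near short circuit is sketched below, using ordinary least squares rather than the objective Bayesian regression of the paper. It shows how Isc and its fit uncertainty follow from the local regression; the model-discrepancy and window-selection issues discussed above are not addressed, and the I-V points are invented.

```python
# OLS sketch: fit I = a + b*V to points near V = 0, report Isc = a with a
# 1-sigma fit uncertainty. Synthetic data stand in for a measured curve.
import numpy as np

rng = np.random.default_rng(4)
V = np.linspace(-0.02, 0.08, 12)                   # voltages near short circuit (V)
I = 5.0 - 1.2 * V + rng.normal(0, 0.002, V.size)   # toy measured currents (A)

X = np.column_stack([np.ones_like(V), V])          # design matrix for I = a + b*V
beta, res, *_ = np.linalg.lstsq(X, I, rcond=None)
sigma2 = res[0] / (V.size - 2)                     # residual variance
cov = sigma2 * np.linalg.inv(X.T @ X)              # covariance of (a, b)

isc, u_isc = beta[0], np.sqrt(cov[0, 0])           # Isc is the intercept at V = 0
print(f"Isc = {isc:.4f} A +/- {u_isc:.4f} A (1-sigma fit uncertainty)")
```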

  15. Quantifying and Reducing Curve-Fitting Uncertainty in Isc: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Campanelli, Mark; Duck, Benjamin; Emery, Keith

    2015-09-28

    Current-voltage (I-V) curve measurements of photovoltaic (PV) devices are used to determine performance parameters and to establish traceable calibration chains. Measurement standards specify localized curve fitting methods, e.g., straight-line interpolation/extrapolation of the I-V curve points near short-circuit current, Isc. By considering such fits as statistical linear regressions, uncertainties in the performance parameters are readily quantified. However, the legitimacy of such a computed uncertainty requires that the model be a valid (local) representation of the I-V curve and that the noise be sufficiently well characterized. Using more data points often has the advantage of lowering the uncertainty. However, more data points can make the uncertainty in the fit arbitrarily small, and this fit uncertainty misses the dominant residual uncertainty due to so-called model discrepancy. Using objective Bayesian linear regression for straight-line fits for Isc, we investigate an evidence-based method to automatically choose data windows of I-V points with reduced model discrepancy. We also investigate noise effects. Uncertainties, aligned with the Guide to the Expression of Uncertainty in Measurement (GUM), are quantified throughout.

  16. Quantifying uncertainties in the structural response of SSME blades

    Science.gov (United States)

    Nagpal, Vinod K.

    1987-01-01

    To quantify the uncertainties associated with the geometry and material properties of a Space Shuttle Main Engine (SSME) turbopump blade, a computer code known as STAEBL was used. A finite element model of the blade used 80 triangular shell elements with 55 nodes and five degrees of freedom per node. The whole study was simulated on the computer and no real experiments were conducted. The structural response has been evaluated in terms of three variables which are natural frequencies, root (maximum) stress, and blade tip displacements. The results of the study indicate that only the geometric uncertainties have significant effects on the response. Uncertainties in material properties have insignificant effects.

  17. Quantifying phenomenological importance in best-estimate plus uncertainty analyses

    International Nuclear Information System (INIS)

    Martin, Robert P.

    2009-01-01

    This paper describes a general methodology for quantifying the importance of specific phenomenological elements to analysis measures evaluated from non-parametric best-estimate plus uncertainty evaluation methodologies. The principal objective of an importance analysis is to reveal those uncertainty contributors having the greatest influence on key analysis measures. This characterization supports the credibility of the uncertainty analysis, the applicability of the analytical tools, and even the generic evaluation methodology through the validation of the engineering judgments that guided the evaluation methodology development. A demonstration of the importance analysis is provided using data from a sample problem considered in the development of AREVA's Realistic LBLOCA methodology. The results are presented against the original large-break LOCA Phenomena Identification and Ranking Table developed by the Technical Program Group responsible for authoring the Code Scaling, Applicability and Uncertainty methodology. (author)

  18. Quantifying the measurement uncertainty of results from environmental analytical methods.

    Science.gov (United States)

    Moser, J; Wegscheider, W; Sperka-Gottlieb, C

    2001-07-01

    The Eurachem-CITAC Guide Quantifying Uncertainty in Analytical Measurement was put into practice in a public laboratory devoted to environmental analytical measurements. In doing so due regard was given to the provisions of ISO 17025 and an attempt was made to base the entire estimation of measurement uncertainty on available data from the literature or from previously performed validation studies. Most environmental analytical procedures laid down in national or international standards are the result of cooperative efforts and put into effect as part of a compromise between all parties involved, public and private, that also encompasses environmental standards and statutory limits. Central to many procedures is the focus on the measurement of environmental effects rather than on individual chemical species. In this situation it is particularly important to understand the measurement process well enough to produce a realistic uncertainty statement. Environmental analytical methods will be examined as far as necessary, but reference will also be made to analytical methods in general and to physical measurement methods where appropriate. This paper describes ways and means of quantifying uncertainty for frequently practised methods of environmental analysis. It will be shown that operationally defined measurands are no obstacle to the estimation process as described in the Eurachem/CITAC Guide if it is accepted that the dominating component of uncertainty comes from the actual practice of the method as a reproducibility standard deviation.
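
    For reference, the propagation rule at the core of both the GUM and the Eurachem/CITAC guide can be written, for uncorrelated input quantities, in its standard form (quoted from the GUM framework generally, not a formula specific to this paper):

```latex
y = f(x_1, \dots, x_N), \qquad
u_c^2(y) = \sum_{i=1}^{N} \left( \frac{\partial f}{\partial x_i} \right)^{2} u^{2}(x_i)
```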

  19. Probabilistic structural analysis to quantify uncertainties associated with turbopump blades

    Science.gov (United States)

    Nagpal, Vinod K.; Rubinstein, Robert; Chamis, Christos C.

    1987-01-01

    A probabilistic study of turbopump blades has been in progress at NASA Lewis Research Center over the last two years. The objectives of this study are to evaluate the effects of uncertainties in geometry and material properties on the structural response of the turbopump blades and to evaluate the tolerance limits on the design. A methodology based on a probabilistic approach has been developed to quantify the effects of the random uncertainties. The results of this study indicate that only the variations in geometry have significant effects.

  20. Quantifying chemical uncertainties in simulations of the ISM

    Science.gov (United States)

    Glover, Simon

    2018-06-01

    The ever-increasing power of large parallel computers now makes it possible to include increasingly sophisticated chemical models in three-dimensional simulations of the interstellar medium (ISM). This allows us to study the role that chemistry plays in the thermal balance of a realistically-structured, turbulent ISM, as well as enabling us to generate detailed synthetic observations of important atomic or molecular tracers. However, one major constraint on the accuracy of these models is the accuracy with which the input chemical rate coefficients are known. Uncertainties in these chemical rate coefficients inevitably introduce uncertainties into the model predictions. In this talk, I will review some of the methods we can use to quantify these uncertainties and to identify the key reactions where improved chemical data is most urgently required. I will also discuss a few examples, ranging from the local ISM to the high-redshift universe.

  1. Three-dimensional laser scanning technique to quantify aggregate and ballast shape properties

    CSIR Research Space (South Africa)

    Anochie-Boateng, Joseph

    2013-06-01

    Full Text Available methods towards more accurate and automated techniques to quantify aggregate shape properties. This paper validates a new flakiness index equation using three-dimensional (3-D) laser scanning data of aggregate and ballast materials obtained from...

  2. Quantifying uncertainty, variability and likelihood for ordinary differential equation models

    LENUS (Irish Health Repository)

    Weisse, Andrea Y

    2010-10-28

    Abstract. Background: In many applications, ordinary differential equation (ODE) models are subject to uncertainty or variability in initial conditions and parameters. Both uncertainty and variability can be quantified in terms of a probability density function on the state and parameter space. Results: The partial differential equation that describes the evolution of this probability density function has a form that is particularly amenable to application of the well-known method of characteristics. The value of the density at some point in time is directly accessible by the solution of the original ODE extended by a single extra dimension (for the value of the density). This leads to simple methods for studying uncertainty, variability and likelihood, with significant advantages over more traditional Monte Carlo and related approaches, especially when studying regions with low probability. Conclusions: While such approaches based on the method of characteristics are common practice in other disciplines, their advantages for the study of biological systems have so far remained unrecognized. Several examples illustrate performance and accuracy of the approach and its limitations.
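
    The characteristics construction described above can be sketched in a few lines: the original ODE is augmented with one extra state carrying the log-density, whose rate is minus the divergence of the vector field along the trajectory. The logistic-growth example and all numbers are assumptions for illustration.

```python
# Method-of-characteristics sketch for propagating a density through an ODE.
import numpy as np
from scipy.integrate import solve_ivp

r, K = 1.0, 10.0                              # toy logistic-growth parameters

def f(x):                                     # dx/dt = f(x)
    return r * x * (1 - x / K)

def div_f(x):                                 # df/dx (divergence in 1-D)
    return r * (1 - 2 * x / K)

def augmented(t, z):
    x, logp = z
    return [f(x), -div_f(x)]                  # d(log p)/dt = -div f along the path

# Propagate an initial Gaussian density on x(0) through the ODE
x0_grid = np.linspace(1.0, 5.0, 9)
p0 = np.exp(-0.5 * ((x0_grid - 3.0) / 0.5) ** 2) / (0.5 * np.sqrt(2 * np.pi))

for x0, p in zip(x0_grid, p0):
    sol = solve_ivp(augmented, (0.0, 2.0), [x0, np.log(p)], rtol=1e-8)
    xT, logpT = sol.y[:, -1]
    print(f"x(0)={x0:.2f} -> x(2)={xT:.2f}, density={np.exp(logpT):.4f}")
```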

  3. Quantifying uncertainties of seismic Bayesian inversion of Northern Great Plains

    Science.gov (United States)

    Gao, C.; Lekic, V.

    2017-12-01

    Elastic waves excited by earthquakes are the fundamental observations of seismological studies. Seismologists measure information such as travel time, amplitude, and polarization to infer the properties of earthquake source, seismic wave propagation, and subsurface structure. Across numerous applications, seismic imaging has been able to take advantage of complementary seismic observables to constrain profiles and lateral variations of Earth's elastic properties. Moreover, seismic imaging plays a unique role in multidisciplinary studies of geoscience by providing direct constraints on the unreachable interior of the Earth. Accurate quantification of uncertainties of inferences made from seismic observations is of paramount importance for interpreting seismic images and testing geological hypotheses. However, such quantification remains challenging and subjective due to the non-linearity and non-uniqueness of the geophysical inverse problem. In this project, we apply a reversible jump Markov chain Monte Carlo (rjMcMC) algorithm for a transdimensional Bayesian inversion of continental lithosphere structure. Such an inversion allows us to quantify the uncertainties of inversion results by inverting for an ensemble solution. It also yields an adaptive parameterization that enables simultaneous inversion of different elastic properties without imposing strong prior information on the relationship between them. We present retrieved profiles of shear velocity (Vs) and radial anisotropy in Northern Great Plains using measurements from USArray stations. We use both seismic surface wave dispersion and receiver function data due to their complementary constraints on lithosphere structure. Furthermore, we analyze the uncertainties of both individual and joint inversion of those two data types to quantify the benefit of doing joint inversion. As an application, we infer the variation of Moho depths and crustal layering across the northern Great Plains.
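
    The sketch below illustrates only the general principle of sampling an ensemble of models to quantify inversion uncertainty, using a fixed-dimension Metropolis sampler and a toy forward model. The transdimensional rjMcMC, the surface wave and receiver function forward problems, and the adaptive parameterization of the actual study are not reproduced here.

```python
# Heavily simplified Metropolis sketch: posterior uncertainty of two model
# parameters from noisy data. Forward model, priors and noise level are toy
# assumptions.
import numpy as np

rng = np.random.default_rng(5)

def forward(m, x):                            # toy forward model
    return m[0] * x + m[1] * np.sin(x)

x = np.linspace(0, 10, 50)
d_obs = forward([2.0, 1.5], x) + rng.normal(0, 0.3, x.size)
sigma = 0.3

def log_post(m):
    if np.any(np.abs(m) > 10):                # uniform prior on [-10, 10]^2
        return -np.inf
    r = d_obs - forward(m, x)
    return -0.5 * np.sum((r / sigma) ** 2)

m, samples = np.array([0.0, 0.0]), []
lp = log_post(m)
for _ in range(20000):
    prop = m + rng.normal(0, 0.05, 2)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:  # Metropolis acceptance step
        m, lp = prop, lp_prop
    samples.append(m.copy())

samples = np.array(samples[5000:])            # drop burn-in
print("posterior mean:", samples.mean(axis=0))
print("posterior std :", samples.std(axis=0))
```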

  4. Quantifying uncertainty and trade-offs in resilience assessments

    Directory of Open Access Journals (Sweden)

    Craig R. Allen

    2018-03-01

    Full Text Available Several frameworks have been developed to assess the resilience of social-ecological systems, but most require substantial data inputs, time, and technical expertise. Stakeholders and practitioners often lack the resources for such intensive efforts. Furthermore, most end with problem framing and fail to explicitly address trade-offs and uncertainty. To remedy this gap, we developed a rapid survey assessment that compares the relative resilience of social-ecological systems with respect to a number of resilience properties. This approach generates large amounts of information relative to stakeholder inputs. We targeted four stakeholder categories: government (policy, regulation, management), end users (farmers, ranchers, landowners, industry), agency/public science (research, university, extension), and NGOs (environmental, citizen, social justice) in four North American watersheds, to assess social-ecological resilience through surveys. Conceptually, social-ecological systems are comprised of components ranging from strictly human to strictly ecological, but that relate directly or indirectly to one another. They have soft boundaries and several important dimensions or axes that together describe the nature of social-ecological interactions, e.g., variability, diversity, modularity, slow variables, feedbacks, capital, innovation, redundancy, and ecosystem services. There is no absolute measure of resilience, so our design takes advantage of cross-watershed comparisons and therefore focuses on relative resilience. Our approach quantifies and compares the relative resilience across watershed systems and potential trade-offs among different aspects of the social-ecological system, e.g., between social, economic, and ecological contributions. This approach permits explicit assessment of several types of uncertainty (e.g., self-assigned uncertainty for stakeholders; uncertainty across respondents, watersheds, and subsystems, and subjectivity in

  5. Quantifying and predicting interpretational uncertainty in cross-sections

    Science.gov (United States)

    Randle, Charles; Bond, Clare; Monaghan, Alison; Lark, Murray

    2015-04-01

    Cross-sections are often constructed from data to create a visual impression of the geologist's interpretation of the sub-surface geology. However, as with all interpretations, this vision of the sub-surface geology is uncertain. We have designed and carried out an experiment with the aim of quantifying the uncertainty in geological cross-sections created by experts interpreting borehole data. By analysing different attributes of the data and interpretations, we reflect on the main controls on uncertainty. A group of ten expert modellers at the British Geological Survey were asked to interpret an 11.4 km long cross-section from south-east Glasgow, UK. The data provided consisted of map and borehole data of the superficial deposits and shallow bedrock. Each modeller had a unique set of 11 boreholes removed from their dataset, to which their interpretations of the top of the bedrock were compared. This methodology allowed quantification of how far from the 'correct answer' each interpretation is at 11 points along each interpreted cross-section line, through comparison of the interpreted and actual bedrock elevations in the boreholes. This resulted in the collection of 110 measurements of the error to use in further analysis. To determine the potential controls on uncertainty, various attributes relating to the modeller, the interpretation and the data were recorded. Modellers were asked to fill out a questionnaire asking for information such as how much 3D modelling experience they had and how long it took them to complete the interpretation. They were also asked to record their confidence in their interpretations graphically, in the form of a confidence level drawn onto the cross-section. Initial analysis showed that the majority of the experts' interpreted bedrock elevations were within 5 metres of those recorded in the withheld boreholes. Their distribution is peaked and symmetrical about a mean of zero, indicating that there was no tendency for the experts to either under

  6. Quantifying uncertainties of permafrost carbon–climate feedbacks

    Directory of Open Access Journals (Sweden)

    E. J. Burke

    2017-06-01

    Full Text Available The land surface models JULES (Joint UK Land Environment Simulator; two versions) and ORCHIDEE-MICT (Organizing Carbon and Hydrology in Dynamic Ecosystems), each with a revised representation of permafrost carbon, were coupled to the Integrated Model Of Global Effects of climatic aNomalies (IMOGEN) intermediate-complexity climate and ocean carbon uptake model. IMOGEN calculates atmospheric carbon dioxide (CO2) and local monthly surface climate for a given emission scenario with the land–atmosphere CO2 flux exchange from either JULES or ORCHIDEE-MICT. These simulations include feedbacks associated with permafrost carbon changes in a warming world. Both IMOGEN–JULES and IMOGEN–ORCHIDEE-MICT were forced by historical and three alternative future-CO2-emission scenarios. Those simulations were performed for different climate sensitivities and regional climate change patterns based on 22 different Earth system models (ESMs) used for CMIP3 (phase 3 of the Coupled Model Intercomparison Project), allowing us to explore climate uncertainties in the context of permafrost carbon–climate feedbacks. Three future emission scenarios consistent with three representative concentration pathways were used: RCP2.6, RCP4.5 and RCP8.5. Paired simulations with and without frozen carbon processes were required to quantify the impact of the permafrost carbon feedback on climate change. The additional warming from the permafrost carbon feedback is between 0.2 and 12 % of the change in the global mean temperature (ΔT) by the year 2100 and 0.5 and 17 % of ΔT by 2300, with these ranges reflecting differences in land surface models, climate models and emissions pathway. As a percentage of ΔT, the permafrost carbon feedback has a greater impact on the low-emissions scenario (RCP2.6) than on the higher-emissions scenarios, suggesting that permafrost carbon should be taken into account when evaluating scenarios of heavy mitigation and stabilization

  7. Quantifying uncertainty in Bayesian calibrated animal-to-human PBPK models with informative prior distributions

    Science.gov (United States)

    Understanding and quantifying the uncertainty of model parameters and predictions has gained more interest in recent years with the increased use of computational models in chemical risk assessment. Fully characterizing the uncertainty in risk metrics derived from linked quantita...

  8. Quantifying uncertainty in LCA-modelling of waste management systems

    DEFF Research Database (Denmark)

    Clavreul, Julie; Guyonnet, D.; Christensen, Thomas Højlund

    2012-01-01

    Uncertainty analysis in LCA studies has seen major progress over the last few years. In the context of waste management, various methods have been implemented but a systematic method for uncertainty analysis of waste-LCA studies is lacking. The objective of this paper is (1) to present...... the sources of uncertainty specifically inherent to waste-LCA studies, (2) to select and apply several methods for uncertainty analysis and (3) to develop a general framework for quantitative uncertainty assessment of LCA of waste management systems. The suggested method is a sequence of four steps combining...

  9. Ways forward in quantifying data uncertainty in geological databases

    Science.gov (United States)

    Kint, Lars; Chademenos, Vasileios; De Mol, Robin; Kapel, Michel; Lagring, Ruth; Stafleu, Jan; van Heteren, Sytze; Van Lancker, Vera

    2017-04-01

    Issues of compatibility of geological data resulting from the merging of many different data sources and time periods may jeopardize harmonization of data products. Important progress has been made due to increasing data standardization, e.g., at a European scale through the SeaDataNet and Geo-Seas data management infrastructures. Common geological data standards are unambiguously defined, avoiding semantic overlap in geological data and associated metadata. Quality flagging is also applied increasingly, though ways of further propagating this information into data products are still in their infancy. For the Belgian and southern Netherlands part of the North Sea, databases are now rigorously re-analyzed in view of quantifying quality flags in terms of uncertainty to be propagated through a 3D voxel model of the subsurface (https://odnature.naturalsciences.be/tiles/). An approach is worked out to consistently account for differences in positioning, sampling gear, analysis procedures and vintage. The flag scaling is used in the interpolation process of geological data, but will also be used when visualizing the suitability of geological resources in a decision support system. Expert knowledge is systematically revisited so as to avoid totally inappropriate use of the flag scaling process. The quality flagging is also important when communicating results to end-users. Therefore, an open data policy in combination with several processing tools will be at the heart of a new Belgian geological data portal as a platform for knowledge building (KB) and knowledge management (KM) serving the marine geoscience, the policy community and the public at large.

  10. Quantifying uncertainties in precipitation: a case study from Greece

    Directory of Open Access Journals (Sweden)

    C. Anagnostopoulou

    2008-04-01

    Full Text Available The main objective of the present study was the examination and the quantification of the uncertainties in the precipitation time series over the Greek area, for a 42-year time period. The uncertainty index applied to the rainfall data is a combination (total) of the departures of the rainfall season length, of the median data of the accumulated percentages and of the total amounts of rainfall. Results of the study indicated that all the stations are characterized, on an average basis, by medium to high uncertainty. The stations that presented an increasing rainfall uncertainty were the ones located mainly in the continental parts of the study region. From the temporal analysis of the uncertainty index, it was demonstrated that the greatest percentage of the years, for all the stations' time series, was characterized by low to high uncertainty (intermediate categories of the index). Most of the results of the uncertainty index for the Greek region are similar to the corresponding results of various stations all over the European region.

  11. Quantifying and managing uncertainty in operational modal analysis

    Science.gov (United States)

    Au, Siu-Kui; Brownjohn, James M. W.; Mottershead, John E.

    2018-03-01

    Operational modal analysis aims at identifying the modal properties (natural frequency, damping, etc.) of a structure using only the (output) vibration response measured under ambient conditions. Highly economical and feasible, it is becoming a common practice in full-scale vibration testing. In the absence of (input) loading information, however, the modal properties have significantly higher uncertainty than their counterparts identified from free or forced vibration (known input) tests. Mastering the relationship between identification uncertainty and test configuration is of great interest to both scientists and engineers, e.g., for achievable precision limits and test planning/budgeting. Addressing this challenge beyond the current state of the art, which is mostly concerned with identification algorithms, this work obtains closed-form analytical expressions for the identification uncertainty (variance) of modal parameters that fundamentally explain the effect of test configuration. Collectively referred to as 'uncertainty laws', these expressions are asymptotically correct for well-separated modes, small damping and long data, and are applicable under non-asymptotic situations. They provide a scientific basis for planning and standardization of ambient vibration tests, where factors such as channel noise, sensor number and location can be quantitatively accounted for. The work is reported comprehensively with verification through synthetic and experimental data (laboratory and field), scientific implications and practical guidelines for planning ambient vibration tests.

  12. Quantifying uncertainty in NDSHA estimates due to earthquake catalogue

    Science.gov (United States)

    Magrin, Andrea; Peresan, Antonella; Vaccari, Franco; Panza, Giuliano

    2014-05-01

    The procedure for the neo-deterministic seismic zoning, NDSHA, is based on the calculation of synthetic seismograms by the modal summation technique. This approach makes use of information about the space distribution of large magnitude earthquakes, which can be defined based on seismic history and seismotectonics, as well as incorporating information from a wide set of geological and geophysical data (e.g., morphostructural features and ongoing deformation processes identified by earth observations). Hence the method does not make use of attenuation models (GMPE), which may be unable to account for the complexity of the product between the seismic source tensor and the medium Green's function and are often poorly constrained by the available observations. NDSHA defines the hazard from the envelope of the values of ground motion parameters determined considering a wide set of scenario earthquakes; accordingly, the simplest outcome of this method is a map where the maximum of a given seismic parameter is associated to each site. In NDSHA uncertainties are not statistically treated as in PSHA, where aleatory uncertainty is traditionally handled with probability density functions (e.g., for magnitude and distance random variables) and epistemic uncertainty is considered by applying logic trees that allow the use of alternative models and alternative parameter values of each model; instead, the treatment of uncertainties is performed by sensitivity analyses for key modelling parameters. Fixing the uncertainty related to a particular input parameter is an important component of the procedure. The input parameters must account for the uncertainty in the prediction of fault radiation and in the use of Green's functions for a given medium. A key parameter is the magnitude of sources used in the simulation, which is based on catalogue information, seismogenic zones and seismogenic nodes. Because the largest part of the existing catalogues is based on macroseismic intensity, a rough estimate

  13. Towards quantifying uncertainty in predictions of Amazon 'dieback'.

    Science.gov (United States)

    Huntingford, Chris; Fisher, Rosie A; Mercado, Lina; Booth, Ben B B; Sitch, Stephen; Harris, Phil P; Cox, Peter M; Jones, Chris D; Betts, Richard A; Malhi, Yadvinder; Harris, Glen R; Collins, Mat; Moorcroft, Paul

    2008-05-27

    Simulations with the Hadley Centre general circulation model (HadCM3), including a carbon cycle model and forced by a 'business-as-usual' emissions scenario, predict a rapid loss of Amazonian rainforest from the middle of this century onwards. The robustness of this projection to both uncertainty in physical climate drivers and the formulation of the land surface scheme is investigated. We analyse how the modelled vegetation cover in Amazonia responds to (i) uncertainty in the parameters specified in the atmosphere component of HadCM3 and their associated influence on predicted surface climate. We then enhance the land surface description and (ii) implement a multilayer canopy light interception model and compare with the simple 'big-leaf' approach used in the original simulations. Finally, (iii) we investigate the effect of changing the method of simulating vegetation dynamics from an area-based model (TRIFFID) to a more complex size- and age-structured approximation of an individual-based model (ecosystem demography). We find that the loss of Amazonian rainforest is robust across the climate uncertainty explored by perturbed physics simulations covering a wide range of global climate sensitivity. The introduction of the refined light interception model leads to an increase in simulated gross plant carbon uptake for the present day, but, with altered respiration, the net effect is a decrease in net primary productivity. However, this does not significantly affect the carbon loss from vegetation and soil as a consequence of future simulated depletion in soil moisture; the Amazon forest is still lost. The introduction of the more sophisticated dynamic vegetation model reduces but does not halt the rate of forest dieback. The potential for human-induced climate change to trigger the loss of Amazon rainforest appears robust within the context of the uncertainties explored in this paper. Some further uncertainties should be explored, particularly with respect to the

  14. Different methodologies to quantify uncertainties of air emissions.

    Science.gov (United States)

    Romano, Daniela; Bernetti, Antonella; De Lauretis, Riccardo

    2004-10-01

    Characterization of the uncertainty associated with air emission estimates is of critical importance especially in the compilation of air emission inventories. In this paper, two different theories are discussed and applied to evaluate air emissions uncertainty. In addition to numerical analysis, which is also recommended in the framework of the United Nation Convention on Climate Change guidelines with reference to Monte Carlo and Bootstrap simulation models, fuzzy analysis is also proposed. The methodologies are discussed and applied to an Italian example case study. Air concentration values are measured from two electric power plants: a coal plant consisting of two boilers and a fuel oil plant consisting of four boilers; the pollutants considered are sulphur dioxide (SO2), nitrogen oxides (NOx), carbon monoxide (CO) and particulate matter (PM). Monte Carlo, Bootstrap and fuzzy methods have been applied to estimate uncertainty of these data. Regarding Monte Carlo, the most accurate results apply to Gaussian distributions; a good approximation is also observed for other distributions with almost regular features, either positively or negatively asymmetrical. Bootstrap, on the other hand, gives a good uncertainty estimation for irregular and asymmetrical distributions. Fuzzy analysis follows a different logic, in which data are represented as vague and indefinite, in opposition to the traditional conception of neatness, certain classification and exactness of the data. In addition to randomness (stochastic variability), fuzzy theory deals with imprecision (vagueness) of data. Fuzzy variance of the data set was calculated; the results cannot be directly compared with empirical data but the overall performance of the theory is analysed. Fuzzy theory may appear more suitable for qualitative reasoning than for a quantitative estimation of uncertainty, but it works well when little information and few measurements are available and when
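
    The two resampling routes mentioned above (parametric Monte Carlo and non-parametric bootstrap) can be sketched for a short series of measured concentrations; the SO2 values below are invented, and the normal assumption in the Monte Carlo branch is an illustrative choice.

```python
# Monte Carlo vs. bootstrap confidence intervals for a mean concentration.
import numpy as np

rng = np.random.default_rng(6)
so2 = np.array([210., 195., 230., 250., 205., 215., 240., 190., 225., 235.])  # mg/Nm3, toy

# Parametric Monte Carlo: resample from a fitted normal distribution
mc = rng.normal(so2.mean(), so2.std(ddof=1), size=(10000, so2.size)).mean(axis=1)

# Non-parametric bootstrap: resample the observations with replacement
bs = np.array([rng.choice(so2, so2.size, replace=True).mean() for _ in range(10000)])

for name, sample in (("Monte Carlo", mc), ("bootstrap", bs)):
    lo, hi = np.percentile(sample, [2.5, 97.5])
    print(f"{name:11s} mean SO2: {sample.mean():.1f}, 95% CI ({lo:.1f}, {hi:.1f})")
```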

  15. Quantifying Uncertainty in Near Surface Electromagnetic Imaging Using Bayesian Methods

    Science.gov (United States)

    Blatter, D. B.; Ray, A.; Key, K.

    2017-12-01

    Geoscientists commonly use electromagnetic methods to image the Earth's near surface. Field measurements of EM fields are made (often with the aid of an artificial EM source) and then used to infer near surface electrical conductivity via a process known as inversion. In geophysics, the standard inversion tool kit is robust and can provide an estimate of the Earth's near surface conductivity that is both geologically reasonable and compatible with the measured field data. However, standard inverse methods struggle to provide a sense of the uncertainty in the estimate they provide. This is because the task of finding an Earth model that explains the data to within measurement error is non-unique - that is, there are many, many such models; but the standard methods provide only one "answer." An alternative method, known as Bayesian inversion, seeks to explore the full range of Earth model parameters that can adequately explain the measured data, rather than attempting to find a single, "ideal" model. Bayesian inverse methods can therefore provide a quantitative assessment of the uncertainty inherent in trying to infer near surface conductivity from noisy, measured field data. This study applies a Bayesian inverse method (called trans-dimensional Markov chain Monte Carlo) to transient airborne EM data previously collected over Taylor Valley - one of the McMurdo Dry Valleys in Antarctica. Our results confirm the reasonableness of previous estimates (made using standard methods) of near surface conductivity beneath Taylor Valley. In addition, we demonstrate quantitatively the uncertainty associated with those estimates. We demonstrate that Bayesian inverse methods can attach quantitative uncertainty to estimates of near surface conductivity.

  16. Quantifying uncertainty on sediment loads using bootstrap confidence intervals

    Science.gov (United States)

    Slaets, Johanna I. F.; Piepho, Hans-Peter; Schmitter, Petra; Hilger, Thomas; Cadisch, Georg

    2017-01-01

    Load estimates are more informative than constituent concentrations alone, as they allow quantification of on- and off-site impacts of environmental processes concerning pollutants, nutrients and sediment, such as soil fertility loss, reservoir sedimentation and irrigation channel siltation. While statistical models used to predict constituent concentrations have been developed considerably over the last few years, measures of uncertainty on constituent loads are rarely reported. Loads are the product of two predictions, constituent concentration and discharge, integrated over a time period, which does not make it straightforward to produce a standard error or a confidence interval. In this paper, a linear mixed model is used to estimate sediment concentrations. A bootstrap method is then developed that accounts for the uncertainty in the concentration and discharge predictions, allowing temporal correlation in the constituent data, and can be used when data transformations are required. The method was tested for a small watershed in Northwest Vietnam for the period 2010-2011. The results showed that confidence intervals were asymmetric, with the highest uncertainty in the upper limit, and that a load of 6262 Mg year-1 had a 95 % confidence interval of (4331, 12 267) in 2010 and a load of 5543 Mg year-1 an interval of (3593, 8975) in 2011. Additionally, the approach demonstrated that direct estimates from the data were biased downwards compared to bootstrap median estimates. These results imply that constituent loads predicted from regression-type water quality models could frequently be underestimating sediment yields and their environmental impact.
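
    A minimal version of the load bootstrap is sketched below: a load is the time-integral of concentration times discharge, so each replicate refits the concentration model on resampled data and re-integrates. A plain log-linear rating curve stands in for the paper's linear mixed model, and temporal correlation is ignored, so this is only a schematic of the approach with invented numbers.

```python
# Bootstrap confidence interval on a sediment load (schematic).
import numpy as np

rng = np.random.default_rng(7)
n = 200
Q = np.exp(rng.normal(1.0, 0.6, n))              # discharge (m3/s)
C = 50 * Q**0.8 * np.exp(rng.normal(0, 0.3, n))  # sediment concentration (mg/L)
dt = 6 * 3600                                    # one sample every 6 hours (s)

# Fit a log-linear rating curve log C = a + b log Q
logC = np.log(C)
A = np.column_stack([np.ones(n), np.log(Q)])
coef, *_ = np.linalg.lstsq(A, logC, rcond=None)
resid = logC - A @ coef

def load(coef, resid_sample):
    c_pred = np.exp(A @ coef + resid_sample)     # mg/L
    return np.sum(c_pred * Q * dt) * 1e-6        # Mg (1 mg/L * 1 m3 = 1 g)

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)                  # resample observations
    b_coef, *_ = np.linalg.lstsq(A[idx], logC[idx], rcond=None)
    boot.append(load(b_coef, rng.choice(resid, n)))

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"load = {np.median(boot):.0f} Mg, 95% CI ({lo:.0f}, {hi:.0f})")
```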

  17. Quantifying uncertainty in Gulf of Mexico forecasts stemming from uncertain initial conditions

    KAUST Repository

    Iskandarani, Mohamed; Le Hénaff, Matthieu; Srinivasan, Ashwanth; Knio, Omar

    2016-01-01

    Polynomial Chaos (PC) methods are used to quantify the impacts of initial conditions uncertainties on oceanic forecasts of the Gulf of Mexico circulation. Empirical Orthogonal Functions are used as initial conditions perturbations with their modal

  18. Moving Beyond 2% Uncertainty: A New Framework for Quantifying Lidar Uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Newman, Jennifer F.; Clifton, Andrew

    2017-03-08

    Remote sensing of wind using lidar is revolutionizing wind energy. However, current generations of wind lidar are ascribed a climatic value of uncertainty, which is based on a poor description of lidar sensitivity to external conditions. In this presentation, we show how it is important to consider the complete lidar measurement process to define the measurement uncertainty, which in turn offers the ability to define a much more granular and dynamic measurement uncertainty. This approach is a progression from the 'white box' lidar uncertainty method.

  19. Quantifying the Effects of Historical Land Cover Conversion Uncertainty on Global Carbon and Climate Estimates

    Science.gov (United States)

    Di Vittorio, A. V.; Mao, J.; Shi, X.; Chini, L.; Hurtt, G.; Collins, W. D.

    2018-01-01

    Previous studies have examined land use change as a driver of global change, but the translation of land use change into land cover conversion has been largely unconstrained. Here we quantify the effects of land cover conversion uncertainty on the global carbon and climate system using the integrated Earth System Model. Our experiments use identical land use change data and vary land cover conversions to quantify associated uncertainty in carbon and climate estimates. Land cover conversion uncertainty is large, constitutes a 5 ppmv range in estimated atmospheric CO2 in 2004, and generates carbon uncertainty that is equivalent to 80% of the net effects of CO2 and climate and 124% of the effects of nitrogen deposition during 1850-2004. Additionally, land cover uncertainty generates differences in local surface temperature of over 1°C. We conclude that future studies addressing land use, carbon, and climate need to constrain and reduce land cover conversion uncertainties.

  20. Quantifying uncertainty of geological 3D layer models, constructed with a-priori geological expertise

    NARCIS (Netherlands)

    Gunnink, J.J.; Maljers, D.; Hummelman, J.

    2010-01-01

    Uncertainty quantification of geological models that are constructed with additional geological expert knowledge is not straightforward. To construct sound geological 3D layer models we use considerable additional knowledge, with an uncertainty that is hard to quantify. Examples of geological expert

  1. Quantifying the interplay between environmental and social effects on aggregated-fish dynamics.

    Directory of Open Access Journals (Sweden)

    Manuela Capello

    Full Text Available Demonstrating and quantifying the respective roles of social interactions and external stimuli governing fish dynamics is key to understanding fish spatial distribution. While seminal studies have contributed to our understanding of fish spatial organization in schools, little experimental information is available on fish in their natural environment, where aggregations often occur in the presence of spatial heterogeneities. Here, we applied novel modeling approaches coupled to accurate acoustic tracking for studying the dynamics of a group of gregarious fish in a heterogeneous environment. To this purpose, we acoustically tracked with submeter resolution the positions of twelve small pelagic fish (Selar crumenophthalmus) in the presence of an anchored floating object, constituting a point of attraction for several fish species. We constructed a field-based model for aggregated-fish dynamics, deriving effective interactions for both social and external stimuli from experiments. We tuned the model parameters that best fit the experimental data and quantified the importance of social interactions in the aggregation, providing an explanation for the spatial structure of fish aggregations found around floating objects. Our results can be generalized to other gregarious species and contexts as long as it is possible to observe the fine-scale movements of a subset of individuals.

  2. Testing methodologies for quantifying physical models uncertainties. A comparative exercise using CIRCE and IPREM (FFTBM)

    Energy Technology Data Exchange (ETDEWEB)

    Freixa, Jordi, E-mail: jordi.freixa-terradas@upc.edu; Alfonso, Elsa de, E-mail: elsa.de.alfonso@upc.edu; Reventós, Francesc, E-mail: francesc.reventos@upc.edu

    2016-08-15

    Highlights: • Uncertainties of physical models are a key issue in Best Estimate Plus Uncertainty analysis. • Estimation of uncertainties of physical models of thermal hydraulic system codes. • Comparison of the CIRCÉ and FFTBM methodologies. • Simulation of reflood experiments in order to evaluate the uncertainty of physical models related to the reflood scenario. - Abstract: The increasing importance of Best-Estimate Plus Uncertainty (BEPU) analyses in nuclear safety and licensing processes has led to several international activities. The latest findings highlighted the uncertainties of physical models as one of the most controversial aspects of BEPU. This type of uncertainty is an important contributor to the total uncertainty of NPP BE calculations. Due to the complexity of estimating this uncertainty, it is often assessed solely by engineering judgment. The present study comprises a comparison of two different state-of-the-art methodologies, CIRCÉ and IPREM (FFTBM), capable of quantifying the uncertainty of physical models. Similarities and differences of their results are discussed through the observation of probability distribution functions and envelope calculations. In particular, the analyzed scenario is core reflood. Experimental data from the FEBA and PERICLES test facilities are employed, while the thermal hydraulic simulations are carried out with RELAP5/mod3.3. This work is undertaken under the framework of the PREMIUM (Post-BEMUSE Reflood Model Input Uncertainty Methods) benchmark.

  3. Quantifying Uncertainty in Satellite-Retrieved Land Surface Temperature from Cloud Detection Errors

    Directory of Open Access Journals (Sweden)

    Claire E. Bulgin

    2018-04-01

    Full Text Available Clouds remain one of the largest sources of uncertainty in remote sensing of surface temperature in the infrared, but this uncertainty has not generally been quantified. We present a new approach to do so, applied here to the Advanced Along-Track Scanning Radiometer (AATSR). We use an ensemble of cloud masks based on independent methodologies to investigate the magnitude of cloud detection uncertainties in area-average Land Surface Temperature (LST) retrieval. We find that at a grid resolution of 625 km² (commensurate with a 0.25° grid size at the tropics), cloud detection uncertainties are positively correlated with cloud-cover fraction in the cell and are larger during the day than at night. Daytime cloud detection uncertainties range between 2.5 K for clear-sky fractions of 10–20% and 1.03 K for clear-sky fractions of 90–100%. Corresponding night-time uncertainties are 1.6 K and 0.38 K, respectively. Cloud detection uncertainty shows a weaker positive correlation with the number of biomes present within a grid cell, used as a measure of heterogeneity in the background against which the cloud detection must operate (e.g., surface temperature, emissivity and reflectance). Uncertainty due to cloud detection errors is strongly dependent on the dominant land cover classification. We find cloud detection uncertainties of a magnitude of 1.95 K over permanent snow and ice, 1.2 K over open forest, 0.9–1 K over bare soils and 0.09 K over mosaic cropland, for a standardised clear-sky fraction of 74.2%. As the uncertainties arising from cloud detection errors are of a significant magnitude for many surface types and spatially heterogeneous where land classification varies rapidly, LST data producers are encouraged to quantify cloud-related uncertainties in gridded products.

  4. A Bayesian statistical method for quantifying model form uncertainty and two model combination methods

    International Nuclear Information System (INIS)

    Park, Inseok; Grandhi, Ramana V.

    2014-01-01

    Apart from parametric uncertainty, model form uncertainty as well as prediction error may be involved in the analysis of an engineering system. Model form uncertainty, inherent in selecting the best approximation from a model set, cannot be ignored, especially when the predictions by competing models show significant differences. In this research, a methodology based on maximum likelihood estimation is presented to quantify model form uncertainty using the measured differences between experimental and model outcomes, and is compared with a fully Bayesian estimation to demonstrate its effectiveness. While a method called the adjustment factor approach is utilized to propagate model form uncertainty alone into the prediction of a system response, a method called model averaging is utilized to incorporate both model form uncertainty and prediction error into it. A numerical problem of concrete creep is used to demonstrate the processes for quantifying model form uncertainty and implementing the adjustment factor approach and model averaging. Finally, the presented methodology is applied to characterize the engineering benefits of a laser peening process
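
    As a minimal sketch of likelihood-based model combination in the spirit of the abstract above (illustrative only, not the authors' adjustment factor or model averaging formulation), the snippet below weights competing models by a Gaussian likelihood of their measured discrepancies and combines predictions so that the variance reflects both between-model spread and each model's own prediction error; the Gaussian error assumption and all names are hypothetical.

      import numpy as np

      def model_weights(residuals, sigma):
          """Gaussian-likelihood weights for competing models.

          residuals : array (n_models, n_data) of measured-minus-model differences
          sigma     : assumed measurement/prediction error standard deviation
          """
          loglik = -0.5 * np.sum((residuals / sigma) ** 2, axis=1)
          w = np.exp(loglik - loglik.max())            # stabilise before normalising
          return w / w.sum()

      def averaged_prediction(preds, pred_vars, weights):
          """Weighted mean prediction plus within-model and between-model variance."""
          mean = np.sum(weights * preds)
          within = np.sum(weights * pred_vars)                 # each model's own error
          between = np.sum(weights * (preds - mean) ** 2)      # model-form spread
          return mean, within + between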

  5. Fuzzy probability based fault tree analysis to propagate and quantify epistemic uncertainty

    International Nuclear Information System (INIS)

    Purba, Julwan Hendry; Sony Tjahyani, D.T.; Ekariansyah, Andi Sofrany; Tjahjono, Hendro

    2015-01-01

    Highlights: • Fuzzy probability based fault tree analysis is proposed to evaluate epistemic uncertainty in fuzzy fault tree analysis. • Fuzzy probabilities represent the likelihood of occurrence of all events in a fault tree. • A fuzzy multiplication rule quantifies the epistemic uncertainty of minimal cut sets. • A fuzzy complement rule estimates the epistemic uncertainty of the top event. • The proposed FPFTA has successfully evaluated the U.S. Combustion Engineering RPS. - Abstract: A number of fuzzy fault tree analysis approaches, which integrate fuzzy concepts into the quantitative phase of conventional fault tree analysis, have been proposed to study the reliabilities of engineering systems. Those new approaches apply expert judgments to overcome the limitation of conventional fault tree analysis when basic events do not have probability distributions. Since expert judgments might come with epistemic uncertainty, it is important to quantify the overall uncertainties of the fuzzy fault tree analysis. Monte Carlo simulation is commonly used to quantify the overall uncertainties of conventional fault tree analysis. However, since Monte Carlo simulation is based on probability distributions, this technique is not appropriate for fuzzy fault tree analysis, which is based on fuzzy probabilities. The objective of this study is to develop a fuzzy probability based fault tree analysis that overcomes this limitation of fuzzy fault tree analysis. To demonstrate the applicability of the proposed approach, a case study is performed and its results are then compared to the results analyzed by a conventional fault tree analysis. The results confirm that the proposed fuzzy probability based fault tree analysis is feasible for propagating and quantifying epistemic uncertainties in fault tree analysis
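
    A minimal sketch of the fuzzy arithmetic the highlights refer to, with triangular fuzzy numbers standing in for the likelihoods of basic events: an AND gate (minimal cut set) multiplies the fuzzy probabilities component-wise and an OR gate uses the fuzzy complement. This illustrates the general technique under common approximations, not the proposed FPFTA itself, and the example event values are hypothetical.

      import numpy as np

      # A triangular fuzzy probability is stored as (low, mode, high).

      def fuzzy_and(events):
          """AND gate / minimal cut set: component-wise product (standard approximation)."""
          tri = np.array(events, dtype=float)
          return tuple(np.prod(tri, axis=0))

      def fuzzy_or(events):
          """OR gate via the fuzzy complement: 1 - prod(1 - p), preserving (low, mode, high)."""
          tri = np.array(events, dtype=float)
          return tuple(1.0 - np.prod(1.0 - tri, axis=0))

      # Example top event = (E1 AND E2) OR E3, with fuzzy basic-event probabilities.
      e1, e2, e3 = (1e-3, 2e-3, 4e-3), (5e-4, 1e-3, 2e-3), (1e-4, 3e-4, 9e-4)
      cut_set = fuzzy_and([e1, e2])
      top = fuzzy_or([cut_set, e3])
      print("top event fuzzy probability (low, mode, high):", top)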

  6. Quantifying the uncertainty of wave energy conversion device cost for policy appraisal: An Irish case study

    International Nuclear Information System (INIS)

    Farrell, Niall; Donoghue, Cathal O’; Morrissey, Karyn

    2015-01-01

    Wave Energy Conversion (WEC) devices are at a pre-commercial stage of development with feasibility studies sensitive to uncertainties surrounding assumed input costs. This may affect decision making. This paper analyses the impact these uncertainties may have on investor, developer and policymaker decisions using an Irish case study. Calibrated to data present in the literature, a probabilistic methodology is shown to be an effective means to carry this out. Value at Risk (VaR) and Conditional Value at Risk (CVaR) metrics are used to quantify the certainty of achieving a given cost or return on investment. We analyse the certainty of financial return provided by the proposed Irish Feed-in Tariff (FiT) policy. The influence of cost reduction through bulk discount is also discussed, with cost reduction targets for developers identified. Uncertainty is found to have a greater impact on the profitability of smaller installations and those subject to lower rates of cost reduction. This paper emphasises that a premium is required to account for cost uncertainty when setting FiT rates. By quantifying uncertainty, a means to specify an efficient premium is presented. - Highlights: • Probabilistic model quantifies uncertainty for wave energy feasibility analyses. • Methodology presented and applied to an Irish case study. • A feed-in tariff premium of 3–4 c/kWh required to account for cost uncertainty. • Sensitivity of uncertainty and cost to rates of technological change analysed. • Use of probabilistic model for investors and developers also demonstrated
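
    A minimal sketch of how sampled cost uncertainty translates into VaR/CVaR-style certainty metrics; the distributions and parameter values below are hypothetical placeholders, not the calibrated inputs of the Irish case study.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 100_000

      # Hypothetical cost model: uncertain capital cost, capacity factor and O&M share.
      capex = rng.triangular(3000, 4500, 7000, n)      # EUR/kW installed
      cf = rng.normal(0.30, 0.04, n).clip(0.1, 0.5)    # capacity factor
      opex_frac = rng.uniform(0.03, 0.06, n)           # annual O&M as a fraction of capex

      energy_kwh = cf * 8760                           # kWh per kW per year
      annual_cost = capex / 20 + opex_frac * capex     # crude 20-year straight-line view
      lcoe = annual_cost / energy_kwh                  # EUR/kWh

      alpha = 0.95
      var = np.quantile(lcoe, alpha)                   # cost not exceeded with 95% certainty
      cvar = lcoe[lcoe >= var].mean()                  # expected cost in the worst 5% of cases
      print(f"median LCOE {np.median(lcoe):.3f}, VaR95 {var:.3f}, CVaR95 {cvar:.3f} EUR/kWh")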

  7. Methodology for quantifying uncertainty in coal assessments with an application to a Texas lignite deposit

    Energy Technology Data Exchange (ETDEWEB)

    Olea, Ricardo A.; Luppens, James A.; Tewalt, Susan J. [U.S. Geological Survey, Reston, VA (United States)]

    2011-01-01

    A common practice for characterizing uncertainty in coal resource assessments has been the itemization of tonnage at the mining unit level and the classification of such units according to distance to drilling holes. Distance criteria, such as those used in U.S. Geological Survey Circular 891, are still widely used for public disclosure. A major deficiency of distance methods is that they do not provide a quantitative measure of uncertainty. Additionally, relying on distance between data points alone does not take into consideration other factors known to have an influence on uncertainty, such as spatial correlation, type of probability distribution followed by the data, geological discontinuities, and boundary of the deposit. Several geostatistical methods have been combined to formulate a quantitative characterization for appraising uncertainty. Drill hole datasets ranging from widespread exploration drilling to detailed development drilling from a lignite deposit in Texas were used to illustrate the modeling. The results show that distance to the nearest drill hole is almost completely unrelated to uncertainty, which confirms the inadequacy of characterizing uncertainty based solely on a simple classification of resources by distance classes. The more complex statistical methods used in this study quantify uncertainty and show good agreement between confidence intervals in the uncertainty predictions and data from additional drilling. (author)

  8. Methodology for quantifying uncertainty in coal assessments with an application to a Texas lignite deposit

    Science.gov (United States)

    Olea, R.A.; Luppens, J.A.; Tewalt, S.J.

    2011-01-01

    A common practice for characterizing uncertainty in coal resource assessments has been the itemization of tonnage at the mining unit level and the classification of such units according to distance to drilling holes. Distance criteria, such as those used in U.S. Geological Survey Circular 891, are still widely used for public disclosure. A major deficiency of distance methods is that they do not provide a quantitative measure of uncertainty. Additionally, relying on distance between data points alone does not take into consideration other factors known to have an influence on uncertainty, such as spatial correlation, type of probability distribution followed by the data, geological discontinuities, and boundary of the deposit. Several geostatistical methods have been combined to formulate a quantitative characterization for appraising uncertainty. Drill hole datasets ranging from widespread exploration drilling to detailed development drilling from a lignite deposit in Texas were used to illustrate the modeling. The results show that distance to the nearest drill hole is almost completely unrelated to uncertainty, which confirms the inadequacy of characterizing uncertainty based solely on a simple classification of resources by distance classes. The more complex statistical methods used in this study quantify uncertainty and show good agreement between confidence intervals in the uncertainty predictions and data from additional drilling. © 2010.

  9. Quantifying measurement uncertainty and spatial variability in the context of model evaluation

    Science.gov (United States)

    Choukulkar, A.; Brewer, A.; Pichugina, Y. L.; Bonin, T.; Banta, R. M.; Sandberg, S.; Weickmann, A. M.; Djalalova, I.; McCaffrey, K.; Bianco, L.; Wilczak, J. M.; Newman, J. F.; Draxl, C.; Lundquist, J. K.; Wharton, S.; Olson, J.; Kenyon, J.; Marquis, M.

    2017-12-01

    In an effort to improve wind forecasts for the wind energy sector, the Department of Energy and NOAA funded the second Wind Forecast Improvement Project (WFIP2). As part of the WFIP2 field campaign, a large suite of in-situ and remote sensing instrumentation was deployed to the Columbia River Gorge in Oregon and Washington from October 2015 to March 2017. The array of instrumentation deployed included 915-MHz wind profiling radars, sodars, wind-profiling lidars, and scanning lidars. The role of these instruments was to provide wind measurements at high spatial and temporal resolution for model evaluation and improvement of model physics. To properly determine model errors, the uncertainties in instrument-model comparisons need to be quantified accurately. These uncertainties arise from several factors, such as measurement uncertainty, spatial variability, and interpolation of model output to instrument locations. In this presentation, we will introduce a formalism to quantify measurement uncertainty and spatial variability. The accuracy of this formalism will be tested using existing datasets such as the eXperimental Planetary boundary layer Instrumentation Assessment (XPIA) campaign. Finally, the uncertainties in wind measurement and the spatial variability estimates from the WFIP2 field campaign will be discussed to understand the challenges involved in model evaluation.

  10. Quantifying Uncertainty in Flood Inundation Mapping Using Streamflow Ensembles and Multiple Hydraulic Modeling Techniques

    Science.gov (United States)

    Hosseiny, S. M. H.; Zarzar, C.; Gomez, M.; Siddique, R.; Smith, V.; Mejia, A.; Demir, I.

    2016-12-01

    The National Water Model (NWM) provides a platform for operationalizing nationwide flood inundation forecasting and mapping. The ability to model flood inundation on a national scale will provide invaluable information to decision makers and local emergency officials. Often, forecast products use deterministic model output to provide a visual representation of a single inundation scenario, which is subject to uncertainty from various sources. While this provides a straightforward representation of the potential inundation, the inherent uncertainty associated with the model output should be considered to optimize this tool for decision making support. The goal of this study is to produce ensembles of future flood inundation conditions (i.e. extent, depth, and velocity) to spatially quantify and visually assess the uncertainties associated with the predicted flood inundation maps. The setting for this study is a highly urbanized watershed along Darby Creek in Pennsylvania. A forecasting framework coupling the NWM with multiple hydraulic models was developed to produce a suite of ensembles of future flood inundation predictions. Time-lagged ensembles from the NWM short range forecasts were used to account for uncertainty associated with the hydrologic forecasts. The forecasts from the NWM were input to the iRIC and HEC-RAS two-dimensional software packages, from which water extent, depth, and flow velocity were output. Quantifying the agreement between output ensembles for each forecast grid provided the uncertainty metrics for predicted flood water inundation extent, depth, and flow velocity. For visualization, a series of flood maps that display flood extent, water depth, and flow velocity, along with the underlying uncertainty associated with each of the forecasted variables, were produced. The results from this study demonstrate the potential to incorporate and visualize model uncertainties in flood inundation maps in order to identify high flood risk zones.
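
    A minimal sketch of the per-cell agreement idea: stack an ensemble of simulated flood-depth grids and report, for each grid cell, the fraction of members predicting inundation and the spread in depth. The arrays are hypothetical stand-ins, not NWM, iRIC or HEC-RAS output.

      import numpy as np

      def inundation_agreement(depth_ensemble, wet_threshold=0.01):
          """Summarise an ensemble of flood-depth grids.

          depth_ensemble : array (n_members, ny, nx) of simulated water depths [m]
          Returns per-cell probability of inundation and the ensemble depth spread.
          """
          wet = depth_ensemble > wet_threshold
          prob_wet = wet.mean(axis=0)                  # fraction of members flooding the cell
          depth_spread = depth_ensemble.std(axis=0)    # simple per-cell uncertainty metric
          return prob_wet, depth_spread

      # Hypothetical 3-member, 4x4 example grid of depths.
      ens = np.random.default_rng(1).gamma(1.0, 0.3, size=(3, 4, 4))
      prob, spread = inundation_agreement(ens)
      print(prob, spread, sep="\n")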

  11. Quantifying geological uncertainty for flow and transport modeling in multi-modal heterogeneous formations

    Science.gov (United States)

    Feyen, Luc; Caers, Jef

    2006-06-01

    In this work, we address the problem of characterizing the heterogeneity and uncertainty of hydraulic properties for complex geological settings. Hereby, we distinguish between two scales of heterogeneity, namely the hydrofacies structure and the intrafacies variability of the hydraulic properties. We employ multiple-point geostatistics to characterize the hydrofacies architecture. The multiple-point statistics are borrowed from a training image that is designed to reflect the prior geological conceptualization. The intrafacies variability of the hydraulic properties is represented using conventional two-point correlation methods, more precisely, spatial covariance models under a multi-Gaussian spatial law. We address the different levels and sources of uncertainty in characterizing the subsurface heterogeneity, and explore their effect on groundwater flow and transport predictions. Typically, uncertainty is assessed by way of many images, termed realizations, of a fixed statistical model. However, in many cases, sampling from a fixed stochastic model does not adequately represent the space of uncertainty. It neglects the uncertainty related to the selection of the stochastic model and the estimation of its input parameters. We acknowledge the uncertainty inherent in the definition of the prior conceptual model of aquifer architecture and in the estimation of global statistics, anisotropy, and correlation scales. Spatial bootstrap is used to assess the uncertainty of the unknown statistical parameters. As an illustrative example, we employ a synthetic field that represents a fluvial setting consisting of an interconnected network of channel sands embedded within finer-grained floodplain material. For this highly non-stationary setting we quantify the groundwater flow and transport model prediction uncertainty for various levels of hydrogeological uncertainty. Results indicate the importance of accurately describing the facies geometry, especially for transport

  12. The accountability imperative for quantifying the uncertainty of emission forecasts: evidence from Mexico

    DEFF Research Database (Denmark)

    Puig, Daniel; Morales-Nápoles, Oswaldo; Bakhtiari, Fatemeh

    2017-01-01

    Governmental climate change mitigation targets are typically developed with the aid of forecasts of greenhouse-gas (GHG) emissions. The robustness and credibility of such forecasts depend, among other issues, on the extent to which forecasting approaches can reflect prevailing uncertainties. We apply a transparent and replicable method to quantify the uncertainty associated with projections of gross domestic product growth rates for Mexico, a key driver of GHG emissions in the country. We use those projections to produce probabilistic forecasts of GHG emissions for Mexico. We contrast our probabilistic forecasts with Mexico's governmental deterministic forecasts. We show that, because they fail to reflect such key uncertainty, deterministic forecasts are ill-suited for use in target-setting processes. We argue that (i) guidelines should be agreed upon, to ensure that governmental forecasts meet certain minimum transparency and quality standards, and (ii) governments should be held accountable for the appropriateness of the forecasting approach applied to prepare governmental forecasts, especially when those forecasts are used to derive climate change mitigation targets.

  13. Quantifying uncertainty in Transcranial Magnetic Stimulation - A high resolution simulation study in ICBM space.

    Science.gov (United States)

    Toschi, Nicola; Keck, Martin E; Welt, Tobias; Guerrisi, Maria

    2012-01-01

    Transcranial Magnetic Stimulation offers enormous potential for noninvasive brain stimulation. While it is known that brain tissue significantly "reshapes" induced field and charge distributions, most modeling investigations to date have focused on single-subject data with limited generality. Further, the effects of the significant uncertainties which exist in the simulation (i.e. brain conductivity distributions) and stimulation (e.g. coil positioning and orientations) setup have not been quantified. In this study, we construct a high-resolution anisotropic head model in standard ICBM space, which can be used as a population-representative standard for bioelectromagnetic simulations. Further, we employ Monte-Carlo simulations in order to quantify how uncertainties in conductivity values propagate all the way to induced fields and currents, demonstrating significant, regionally dependent dispersions in values which are commonly assumed to be "ground truth". This framework can be leveraged in order to quantify the effect of any type of uncertainty in noninvasive brain stimulation and bears relevance in all applications of TMS, both investigative and therapeutic.
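
    A minimal sketch of the Monte-Carlo propagation step, reduced to a toy forward model (the study itself solves the full bioelectromagnetic problem on a high-resolution anisotropic head model): sample uncertain tissue conductivities from assumed distributions, evaluate the forward model for each draw, and summarise the dispersion of the induced-field metric. All distributions and the stand-in model are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(7)
      n_draws = 5000

      # Hypothetical conductivity uncertainties (S/m): grey matter, white matter, CSF.
      sigma_gm = rng.lognormal(mean=np.log(0.33), sigma=0.20, size=n_draws)
      sigma_wm = rng.lognormal(mean=np.log(0.14), sigma=0.25, size=n_draws)
      sigma_csf = rng.lognormal(mean=np.log(1.79), sigma=0.10, size=n_draws)

      def induced_field(s_gm, s_wm, s_csf):
          """Stand-in forward model: any deterministic map from conductivities to a
          field metric would be treated identically (here an arbitrary smooth function)."""
          return 100.0 * s_gm / (s_gm + 0.5 * s_wm + 0.1 * s_csf)

      e_field = induced_field(sigma_gm, sigma_wm, sigma_csf)
      print("mean %.2f, std %.2f, 95%% interval (%.2f, %.2f)" %
            (e_field.mean(), e_field.std(),
             np.quantile(e_field, 0.025), np.quantile(e_field, 0.975)))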

  14. Quantifying structural uncertainty on fault networks using a marked point process within a Bayesian framework

    Science.gov (United States)

    Aydin, Orhun; Caers, Jef Karel

    2017-08-01

    Faults are one of the building blocks for subsurface modeling studies. Incomplete observations of subsurface fault networks lead to uncertainty pertaining to the location, geometry and existence of faults. In practice, gaps in incomplete fault network observations are filled based on tectonic knowledge and the interpreter's intuition pertaining to fault relationships. Modeling fault network uncertainty with realistic models that represent tectonic knowledge is still a challenge. Although methods that address specific sources of fault network uncertainty and complexities of fault modeling exist, a unifying framework is still lacking. In this paper, we propose a rigorous approach to quantify fault network uncertainty. Fault pattern and intensity information are expressed by means of a marked point process, the marked Strauss point process. Fault network information is constrained to fault surface observations (complete or partial) within a Bayesian framework. A structural prior model is defined to quantitatively express fault patterns, geometries and relationships within the Bayesian framework. Structural relationships between faults, in particular fault abutting relations, are represented with a level-set based approach. A Markov Chain Monte Carlo sampler is used to sample posterior fault network realizations that reflect tectonic knowledge and honor fault observations. We apply the methodology to a field study from the Nankai Trough & Kumano Basin. The target for uncertainty quantification is a deep site with attenuated seismic data, with only partially visible faults and many faults missing from the survey or interpretation. A structural prior model is built from shallow analog sites that are believed to have undergone similar tectonics compared to the site of study. Fault network uncertainty for the field is quantified with fault network realizations that are conditioned to structural rules, tectonic information and partially observed fault surfaces. We show the proposed

  15. Quantifying uncertainty and resilience on coral reefs using a Bayesian approach

    International Nuclear Information System (INIS)

    Van Woesik, R

    2013-01-01

    Coral reefs are rapidly deteriorating globally. The contemporary management option favors managing for resilience to provide reefs with the capacity to tolerate human-induced disturbances. Yet resilience is most commonly defined as the capacity of a system to absorb disturbances without changing fundamental processes or functionality. Quantifying no change, or the uncertainty of a null hypothesis, is nonsensical using frequentist statistics, but is achievable using a Bayesian approach. This study outlines a practical Bayesian framework that quantifies the resilience of coral reefs using two inter-related models. The first model examines the functionality of coral reefs in the context of their reef-building capacity, whereas the second model examines the recovery rates of coral cover after disturbances. Quantifying intrinsic rates of increase in coral cover and habitat-specific, steady-state equilibria are useful proxies of resilience. A reduction in the intrinsic rate of increase following a disturbance, or the slowing of recovery over time, can be useful indicators of stress; a change in the steady-state equilibrium suggests a phase shift. Quantifying the uncertainty of key reef-building processes and recovery parameters, and comparing these parameters against benchmarks, facilitates the detection of loss of resilience and provides signals of imminent change. (letter)

  16. Quantifying uncertainty and resilience on coral reefs using a Bayesian approach

    Science.gov (United States)

    van Woesik, R.

    2013-12-01

    Coral reefs are rapidly deteriorating globally. The contemporary management option favors managing for resilience to provide reefs with the capacity to tolerate human-induced disturbances. Yet resilience is most commonly defined as the capacity of a system to absorb disturbances without changing fundamental processes or functionality. Quantifying no change, or the uncertainty of a null hypothesis, is nonsensical using frequentist statistics, but is achievable using a Bayesian approach. This study outlines a practical Bayesian framework that quantifies the resilience of coral reefs using two inter-related models. The first model examines the functionality of coral reefs in the context of their reef-building capacity, whereas the second model examines the recovery rates of coral cover after disturbances. Quantifying intrinsic rates of increase in coral cover and habitat-specific, steady-state equilibria are useful proxies of resilience. A reduction in the intrinsic rate of increase following a disturbance, or the slowing of recovery over time, can be useful indicators of stress; a change in the steady-state equilibrium suggests a phase shift. Quantifying the uncertainty of key reef-building processes and recovery parameters, and comparing these parameters against benchmarks, facilitates the detection of loss of resilience and provides signals of imminent change.
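
    A minimal sketch of the kind of Bayesian fit described in the two records above, using a brute-force grid posterior for a logistic recovery model of coral cover; the observations, priors and Gaussian noise model are hypothetical, and the authors' actual framework is richer.

      import numpy as np

      # Hypothetical post-disturbance coral-cover observations (proportion) over years.
      t = np.array([0, 1, 2, 3, 4, 5, 6])
      cover = np.array([0.05, 0.08, 0.13, 0.20, 0.28, 0.35, 0.40])

      def logistic(t, r, k, c0=0.05):
          """Logistic recovery: intrinsic rate r, steady-state equilibrium cover k."""
          return k / (1 + (k / c0 - 1) * np.exp(-r * t))

      # Grid posterior over (r, k) with flat priors and Gaussian observation error.
      r_grid = np.linspace(0.05, 2.0, 200)
      k_grid = np.linspace(0.2, 0.9, 200)
      R, K = np.meshgrid(r_grid, k_grid, indexing="ij")
      sigma_obs = 0.03

      loglik = np.zeros_like(R)
      for ti, ci in zip(t, cover):
          loglik += -0.5 * ((ci - logistic(ti, R, K)) / sigma_obs) ** 2
      post = np.exp(loglik - loglik.max())
      post /= post.sum()

      # Posterior means of the resilience proxies: intrinsic rate and equilibrium cover.
      print(f"posterior mean r = {np.sum(post * R):.2f} per yr, "
            f"equilibrium cover = {np.sum(post * K):.2f}")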

  17. Extending Ripley's K-Function to Quantify Aggregation in 2-D Grayscale Images.

    Directory of Open Access Journals (Sweden)

    Mohamed Amgad

    Full Text Available In this work, we describe the extension of Ripley's K-function to allow for overlapping events at very high event densities. We show that problematic edge effects introduce significant bias to the function at very high densities and small radii, and propose a simple correction method that successfully restores the function's centralization. Using simulations of homogeneous Poisson distributions of events, as well as simulations of event clustering under different conditions, we investigate various aspects of the function, including its shape-dependence and correspondence between true cluster radius and radius at which the K-function is maximized. Furthermore, we validate the utility of the function in quantifying clustering in 2-D grayscale images using three modalities: (i) Simulations of particle clustering; (ii) Experimental co-expression of soluble and diffuse protein at varying ratios; (iii) Quantifying chromatin clustering in the nuclei of wt and crwn1 crwn2 mutant Arabidopsis plant cells, using a previously-published image dataset. Overall, our work shows that Ripley's K-function is a valid abstract statistical measure whose utility extends beyond the quantification of clustering of non-overlapping events. Potential benefits of this work include the quantification of protein and chromatin aggregation in fluorescent microscopic images. Furthermore, this function has the potential to become one of various abstract texture descriptors that are utilized in computer-assisted diagnostics in anatomic pathology and diagnostic radiology.
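
    For concreteness, a minimal sketch of the classical (unweighted, uncorrected) Ripley's K-function for a 2-D point pattern, i.e. the quantity the paper extends; the grayscale-weighted, edge-corrected variant described in the abstract is not reproduced here, and the simulated points are illustrative.

      import numpy as np

      def ripley_k(points, radii, area):
          """Naive Ripley's K estimator (no edge correction).

          points : (n, 2) array of event coordinates
          radii  : radii at which to evaluate K
          area   : area of the observation window
          """
          n = len(points)
          lam = n / area                                   # event intensity
          d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
          np.fill_diagonal(d, np.inf)                      # exclude self-pairs
          return np.array([(d < r).sum() / (n * lam) for r in radii])

      # Under complete spatial randomness, K(r) is approximately pi * r^2.
      rng = np.random.default_rng(3)
      pts = rng.uniform(0.0, 1.0, size=(500, 2))
      r = np.linspace(0.01, 0.1, 10)
      print(ripley_k(pts, r, area=1.0) / (np.pi * r ** 2))  # < 1 at larger r: uncorrected edge bias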

  18. Data Fusion: A decision analysis tool that quantifies geological and parametric uncertainty

    International Nuclear Information System (INIS)

    Porter, D.W.

    1995-01-01

    Engineering projects such as siting waste facilities and performing remediation are often driven by geological and hydrogeological uncertainties. Geological understanding and hydrogeological parameters such as hydraulic conductivity are needed to achieve reliable engineering design. Information from non-invasive and minimally invasive data sets offers potential for reduction in uncertainty, but a single data type does not usually meet all needs. Data Fusion uses Bayesian statistics to update prior knowledge with information from diverse data sets as the data is acquired. Prior knowledge takes the form of first principles models (e.g., groundwater flow) and spatial continuity models for heterogeneous properties. The variability of heterogeneous properties is modeled in a form motivated by statistical physics as a Markov random field. A computer reconstruction of targets of interest is produced within a quantified statistical uncertainty. The computed uncertainty provides a rational basis for identifying data gaps for assessing data worth to optimize data acquisition. Further, the computed uncertainty provides a way to determine the confidence of achieving adequate safety margins in engineering design. Beyond design, Data Fusion provides the basis for real time computer monitoring of remediation. Working with the DOE Office of Technology (OTD), the authors have developed and patented a Data Fusion Workstation system that has been used on jobs at the Hanford, Savannah River, Pantex and Fernald DOE sites. Further applications include an army depot at Letterkenney, PA and commercial industrial sites

  19. Data Fusion: A decision analysis tool that quantifies geological and parametric uncertainty

    International Nuclear Information System (INIS)

    Porter, D.W.

    1996-01-01

    Engineering projects such as siting waste facilities and performing remediation are often driven by geological and hydrogeological uncertainties. Geological understanding and hydrogeological parameters such as hydraulic conductivity are needed to achieve reliable engineering design. Information from non-invasive and minimally invasive data sets offers potential for reduction in uncertainty, but a single data type does not usually meet all needs. Data Fusion uses Bayesian statistics to update prior knowledge with information from diverse data sets as the data is acquired. Prior knowledge takes the form of first principles models (e.g., groundwater flow) and spatial continuity models for heterogeneous properties. The variability of heterogeneous properties is modeled in a form motivated by statistical physics as a Markov random field. A computer reconstruction of targets of interest is produced within a quantified statistical uncertainty. The computed uncertainty provides a rational basis for identifying data gaps for assessing data worth to optimize data acquisition. Further, the computed uncertainty provides a way to determine the confidence of achieving adequate safety margins in engineering design. Beyond design, Data Fusion provides the basis for real time computer monitoring of remediation. Working with the DOE Office of Technology (OTD), the author has developed and patented a Data Fusion Workstation system that has been used on jobs at the Hanford, Savannah River, Pantex and Fernald DOE sites. Further applications include an army depot at Letterkenney, PA and commercial industrial sites

  20. New strategies for quantifying and propagating nuclear data uncertainty in CUSA

    International Nuclear Information System (INIS)

    Zhao, Qiang; Zhang, Chunyan; Hao, Chen; Li, Fu; Wang, Dongyong; Yu, Yan

    2016-01-01

    Highlights: • An efficient sampling method based on LHS combined with Cholesky decomposition conversion is proposed. • A code for generating multi-group covariance matrices has been developed. • The uncertainty and sensitivity results of CUSA agree well with TSUNAMI-1D. - Abstract: The uncertainties of nuclear cross sections are propagated to the key parameters of a nuclear reactor core through transport calculations. The statistical sampling method can be used to quantify and propagate nuclear data uncertainty in nuclear reactor physics calculations. In order to use the statistical sampling method, two key technical problems, the method of generating multi-group covariance matrices and the sampling method, should be considered reasonably and efficiently. In this paper, a method of transforming a nuclear cross section covariance matrix in multi-group form into users' group structures based on the flat-flux approximation has been studied in depth. Most notably, an efficient sampling method has been proposed, which is based on Latin Hypercube Sampling (LHS) combined with Cholesky decomposition conversion. Based on those methods, two modules named T-COCCO and GUIDE have been developed and successfully added to the code for uncertainty and sensitivity analysis (CUSA). The new modules have been verified respectively. Numerical results for the TMI-1 pin-cell case are presented and compared to TSUNAMI-1D. The comparison of the results further supports that the methods and the computational tool developed in this work can be used to conduct sensitivity and uncertainty analysis for nuclear cross sections.
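
    A minimal sketch of the sampling strategy named in the highlights: Latin Hypercube Sampling of independent standard normals followed by a Cholesky transformation to impose a target covariance on the sampled parameters. This is the generic technique only, not the T-COCCO/GUIDE implementation, and the three-group mean and covariance below are hypothetical.

      import numpy as np
      from scipy.stats import norm

      def lhs_correlated_normal(mean, cov, n_samples, rng=None):
          """Latin Hypercube samples from N(mean, cov) via a Cholesky conversion."""
          if rng is None:
              rng = np.random.default_rng()
          d = len(mean)
          # One stratum per sample and dimension, shuffled independently per dimension.
          strata = rng.permuted(np.tile(np.arange(n_samples), (d, 1)), axis=1).T
          u = (strata + rng.uniform(size=(n_samples, d))) / n_samples
          z = norm.ppf(u)                          # independent standard-normal LHS sample
          L = np.linalg.cholesky(np.asarray(cov))  # impose the target covariance structure
          return np.asarray(mean) + z @ L.T

      # Hypothetical 3-group cross-section means and covariance matrix.
      mu = np.array([1.0, 0.8, 0.5])
      cov = np.array([[0.010, 0.004, 0.001],
                      [0.004, 0.008, 0.002],
                      [0.001, 0.002, 0.006]])
      samples = lhs_correlated_normal(mu, cov, n_samples=200)
      print(np.cov(samples, rowvar=False))         # should approximate cov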

  1. New strategies for quantifying and propagating nuclear data uncertainty in CUSA

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Qiang; Zhang, Chunyan [Fundamental Science on Nuclear Safety and Simulation Technology Laboratory, College of Nuclear Science and Technology, Harbin Engineering University, Harbin (China)]; Hao, Chen, E-mail: haochen.heu@163.com [Fundamental Science on Nuclear Safety and Simulation Technology Laboratory, College of Nuclear Science and Technology, Harbin Engineering University, Harbin (China)]; Li, Fu [Institute of Nuclear and New Energy Technology (INET), Collaborative Innovation Center of Advanced Nuclear Energy Technology, Key Laboratory of Advanced Reactor Engineering and Safety of Ministry of Education, Tsinghua University, Beijing (China)]; Wang, Dongyong [Fundamental Science on Nuclear Safety and Simulation Technology Laboratory, College of Nuclear Science and Technology, Harbin Engineering University, Harbin (China); School of Nuclear Science and Technology, Xi’an Jiaotong University, Xi’an (China)]; Yu, Yan [Fundamental Science on Nuclear Safety and Simulation Technology Laboratory, College of Nuclear Science and Technology, Harbin Engineering University, Harbin (China)]

    2016-10-15

    Highlights: • An efficient sampling method based on LHS combined with Cholesky decomposition conversion is proposed. • A code for generating multi-group covariance matrices has been developed. • The uncertainty and sensitivity results of CUSA agree well with TSUNAMI-1D. - Abstract: The uncertainties of nuclear cross sections are propagated to the key parameters of a nuclear reactor core through transport calculations. The statistical sampling method can be used to quantify and propagate nuclear data uncertainty in nuclear reactor physics calculations. In order to use the statistical sampling method, two key technical problems, the method of generating multi-group covariance matrices and the sampling method, should be considered reasonably and efficiently. In this paper, a method of transforming a nuclear cross section covariance matrix in multi-group form into users' group structures based on the flat-flux approximation has been studied in depth. Most notably, an efficient sampling method has been proposed, which is based on Latin Hypercube Sampling (LHS) combined with Cholesky decomposition conversion. Based on those methods, two modules named T-COCCO and GUIDE have been developed and successfully added to the code for uncertainty and sensitivity analysis (CUSA). The new modules have been verified respectively. Numerical results for the TMI-1 pin-cell case are presented and compared to TSUNAMI-1D. The comparison of the results further supports that the methods and the computational tool developed in this work can be used to conduct sensitivity and uncertainty analysis for nuclear cross sections.

  2. A framework to quantify uncertainty in simulations of oil transport in the ocean

    KAUST Repository

    Gonçalves, Rafael C.

    2016-03-02

    An uncertainty quantification framework is developed for the DeepC Oil Model based on a nonintrusive polynomial chaos method. This allows the model's output to be presented in a probabilistic framework so that the model's predictions reflect the uncertainty in the model's input data. The new capability is illustrated by simulating the far-field dispersal of oil in a Deepwater Horizon blowout scenario. The uncertain input consisted of ocean current and oil droplet size data and the main model output analyzed is the ensuing oil concentration in the Gulf of Mexico. A 1331 member ensemble was used to construct a surrogate for the model which was then mined for statistical information. The mean and standard deviations in the oil concentration were calculated for up to 30 days, and the total contribution of each input parameter to the model's uncertainty was quantified at different depths. Also, probability density functions of oil concentration were constructed by sampling the surrogate and used to elaborate probabilistic hazard maps of oil impact. The performance of the surrogate was constantly monitored in order to demarcate the space-time zones where its estimates are reliable. © 2016. American Geophysical Union.
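
    A minimal sketch of the non-intrusive surrogate idea: run an ensemble of an expensive model at sampled inputs, fit an inexpensive polynomial surrogate by least squares, then mine the surrogate for statistics. Ordinary polynomial regression stands in here for the full polynomial chaos construction, and the stand-in model, inputs and threshold are hypothetical.

      import numpy as np
      from itertools import combinations_with_replacement

      def poly_features(x, degree=2):
          """Total-degree polynomial basis (constant, linear and cross terms)."""
          n, d = x.shape
          cols = [np.ones(n)]
          for deg in range(1, degree + 1):
              for idx in combinations_with_replacement(range(d), deg):
                  cols.append(np.prod(x[:, idx], axis=1))
          return np.column_stack(cols)

      rng = np.random.default_rng(5)
      xi = rng.uniform(-1, 1, size=(300, 3))           # ensemble of uncertain inputs

      def expensive_model(xi):
          """Stand-in for the ocean model: any scalar output of interest."""
          return 2 + xi[:, 0] + 0.5 * xi[:, 1] ** 2 - 0.3 * xi[:, 0] * xi[:, 2]

      y = expensive_model(xi)
      A = poly_features(xi, degree=2)
      coef, *_ = np.linalg.lstsq(A, y, rcond=None)     # fit the surrogate

      # Mine the surrogate with a large cheap sample: mean, std, exceedance probability.
      xi_big = rng.uniform(-1, 1, size=(200_000, 3))
      y_big = poly_features(xi_big, degree=2) @ coef
      print(y_big.mean(), y_big.std(), (y_big > 2.5).mean())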

  3. Quantifying natural delta variability using a multiple-point geostatistics prior uncertainty model

    Science.gov (United States)

    Scheidt, Céline; Fernandes, Anjali M.; Paola, Chris; Caers, Jef

    2016-10-01

    We address the question of quantifying uncertainty associated with autogenic pattern variability in a channelized transport system by means of a modern geostatistical method. This question has considerable relevance for practical subsurface applications as well, particularly those related to uncertainty quantification relying on Bayesian approaches. Specifically, we show how the autogenic variability in a laboratory experiment can be represented and reproduced by a multiple-point geostatistical prior uncertainty model. The latter geostatistical method requires selection of a limited set of training images from which a possibly infinite set of geostatistical model realizations, mimicking the training image patterns, can be generated. To that end, we investigate two methods to determine how many training images and what training images should be provided to reproduce natural autogenic variability. The first method relies on distance-based clustering of overhead snapshots of the experiment; the second method relies on a rate of change quantification by means of a computer vision algorithm termed the demon algorithm. We show quantitatively that with either training image selection method, we can statistically reproduce the natural variability of the delta formed in the experiment. In addition, we study the nature of the patterns represented in the set of training images as a representation of the "eigenpatterns" of the natural system. The eigenpatterns in the training image sets display patterns consistent with previous physical interpretations of the fundamental modes of this type of delta system: a highly channelized, incisional mode; a poorly channelized, depositional mode; and an intermediate mode between the two.

  4. Quantifying uncertainty due to internal variability using high-resolution regional climate model simulations

    Science.gov (United States)

    Gutmann, E. D.; Ikeda, K.; Deser, C.; Rasmussen, R.; Clark, M. P.; Arnold, J. R.

    2015-12-01

    The uncertainty in future climate predictions is as large as or larger than the mean climate change signal. As such, any predictions of future climate need to incorporate and quantify the sources of this uncertainty. One of the largest sources comes from the internal, chaotic variability within the climate system itself. This variability has been approximated using the 30 ensemble members of the Community Earth System Model (CESM) large ensemble. Here we examine the wet and dry end members of this ensemble for cool-season precipitation in the Colorado Rocky Mountains with a set of high-resolution regional climate model simulations. We have used the Weather Research and Forecasting model (WRF) to simulate the periods 1990-2000, 2025-2035, and 2070-2080 on a 4 km grid. These simulations show that the broad patterns of change depicted in CESM are inherited by the high-resolution simulations; however, the differences in the height and location of the mountains in the WRF simulation, relative to the CESM simulation, mean that the location and magnitude of the precipitation changes are very different. We further show that high-resolution simulations with the Intermediate Complexity Atmospheric Research model (ICAR) predict a similar spatial pattern in the change signal as WRF for these ensemble members. We then use ICAR to examine the rest of the CESM Large Ensemble as well as the uncertainty in the regional climate model due to the choice of physics parameterizations.

  5. The accountability imperative for quantifying the uncertainty of emission forecasts: evidence from Mexico

    International Nuclear Information System (INIS)

    Puig, Daniel; Bakhtiari, Fatemeh; Morales-Napoles, Oswaldo; Landa Rivera, Gissela

    2017-01-01

    Governmental climate change mitigation targets are typically developed with the aid of forecasts of greenhouse-gas emissions. The robustness and credibility of such forecasts depends, among other issues, on the extent to which forecasting approaches can reflect prevailing uncertainties. We apply a transparent and replicable method to quantify the uncertainty associated with projections of gross domestic product growth rates for Mexico, a key driver of greenhouse-gas emissions in the country. We use those projections to produce probabilistic forecasts of greenhouse-gas emissions for Mexico. We contrast our probabilistic forecasts with Mexico's governmental deterministic forecasts. We show that, because they fail to reflect such key uncertainty, deterministic forecasts are ill-suited for use in target-setting processes. We argue that (i) guidelines should be agreed upon, to ensure that governmental forecasts meet certain minimum transparency and quality standards, and (ii) governments should be held accountable for the appropriateness of the forecasting approach applied to prepare governmental forecasts, especially when those forecasts are used to derive climate change mitigation targets. (authors)

  6. A framework to quantify uncertainty in simulations of oil transport in the ocean

    KAUST Repository

    Gonçalves, Rafael C.; Iskandarani, Mohamed; Srinivasan, Ashwanth; Thacker, W. Carlisle; Chassignet, Eric; Knio, Omar

    2016-01-01

    An uncertainty quantification framework is developed for the DeepC Oil Model based on a nonintrusive polynomial chaos method. This allows the model's output to be presented in a probabilistic framework so that the model's predictions reflect the uncertainty in the model's input data. The new capability is illustrated by simulating the far-field dispersal of oil in a Deepwater Horizon blowout scenario. The uncertain input consisted of ocean current and oil droplet size data and the main model output analyzed is the ensuing oil concentration in the Gulf of Mexico. A 1331 member ensemble was used to construct a surrogate for the model which was then mined for statistical information. The mean and standard deviations in the oil concentration were calculated for up to 30 days, and the total contribution of each input parameter to the model's uncertainty was quantified at different depths. Also, probability density functions of oil concentration were constructed by sampling the surrogate and used to elaborate probabilistic hazard maps of oil impact. The performance of the surrogate was constantly monitored in order to demarcate the space-time zones where its estimates are reliable. © 2016. American Geophysical Union.

  7. Quantifying the sources of uncertainty in an ensemble of hydrological climate-impact projections

    Science.gov (United States)

    Aryal, Anil; Shrestha, Sangam; Babel, Mukand S.

    2018-01-01

    The objective of this paper is to quantify the various sources of uncertainty in the assessment of climate change impact on hydrology in the Tamakoshi River Basin, located in the north-eastern part of Nepal. Multiple climate and hydrological models were used to simulate future climate conditions and discharge in the basin. The simulated results of future climate and river discharge were analysed to quantify the sources of uncertainty using two-way and three-way ANOVA. The results showed that temperature and precipitation in the study area are projected to change in the near- (2010-2039), mid- (2040-2069) and far-future (2070-2099) periods. Maximum temperature is likely to rise by 1.75 °C under Representative Concentration Pathway (RCP) 4.5 and by 3.52 °C under RCP 8.5. Similarly, the minimum temperature is expected to rise by 2.10 °C under RCP 4.5 and by 3.73 °C under RCP 8.5 by the end of the twenty-first century. Similarly, the precipitation in the study area is expected to change by -2.15% under RCP 4.5 and -2.44% under RCP 8.5 scenarios. The future discharge in the study area was projected using two hydrological models, viz. the Soil and Water Assessment Tool (SWAT) and the Hydrologic Engineering Center's Hydrologic Modelling System (HEC-HMS). The SWAT-projected discharge is expected to change by a small amount, whereas the HEC-HMS model projected considerably lower discharge in future compared to the baseline period. The results also show that future climate variables and river hydrology contain uncertainty due to the choice of climate models, RCP scenarios, bias correction methods and hydrological models. During wet days, more uncertainty is observed due to the use of different climate models, whereas during dry days, the use of different hydrological models has a greater effect on uncertainty. Inter-comparison of the impacts of different climate models reveals that the REMO climate model shows higher uncertainty in the prediction of precipitation and
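
    A minimal sketch of the ANOVA-style partitioning used to attribute projection spread to its sources, reduced to two factors with one projection per combination; the numbers are hypothetical, whereas the study considers climate models, RCP scenarios, bias-correction methods and hydrological models as factors.

      import numpy as np

      def two_way_anova_fractions(y):
          """Partition the variance of y[i, j] into factor A (rows), factor B (columns)
          and a remainder (interaction confounded with residual, since there is one
          value per cell), returning fractional contributions."""
          grand = y.mean()
          a_eff = y.mean(axis=1) - grand                 # e.g. climate-model effect
          b_eff = y.mean(axis=0) - grand                 # e.g. hydrological-model effect
          ss_a = y.shape[1] * np.sum(a_eff ** 2)
          ss_b = y.shape[0] * np.sum(b_eff ** 2)
          ss_tot = np.sum((y - grand) ** 2)
          ss_rest = ss_tot - ss_a - ss_b
          return ss_a / ss_tot, ss_b / ss_tot, ss_rest / ss_tot

      # Hypothetical change in mean annual discharge (%) for 4 GCMs x 2 hydrological models.
      delta_q = np.array([[-3.1, -9.8],
                          [ 1.2, -6.5],
                          [-0.4, -8.9],
                          [ 2.6, -5.2]])
      print(two_way_anova_fractions(delta_q))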

  8. Quantifying uncertainty in Gulf of Mexico forecasts stemming from uncertain initial conditions

    KAUST Repository

    Iskandarani, Mohamed

    2016-06-09

    Polynomial Chaos (PC) methods are used to quantify the impacts of initial conditions uncertainties on oceanic forecasts of the Gulf of Mexico circulation. Empirical Orthogonal Functions are used as initial conditions perturbations with their modal amplitudes considered as uniformly distributed uncertain random variables. These perturbations impact primarily the Loop Current system and several frontal eddies located in its vicinity. A small ensemble is used to sample the space of the modal amplitudes and to construct a surrogate for the evolution of the model predictions via a nonintrusive Galerkin projection. The analysis of the surrogate yields verification measures for the surrogate's reliability and statistical information for the model output. A variance analysis indicates that the sea surface height predictability in the vicinity of the Loop Current is limited to about 20 days. © 2016. American Geophysical Union. All Rights Reserved.

  9. Quantifying the contribution of the root system of alpine vegetation in the soil aggregate stability of moraine

    Directory of Open Access Journals (Sweden)

    Csilla Hudek

    2017-03-01

    Full Text Available One fifth of the world's population is living in mountains or in their surrounding areas. This anthropogenic pressure continues to grow with the increasing number of settlements, especially in areas connected to touristic activities, such as the Italian Alps. The process of soil formation on high mountains is particularly slow and these soils are particularly vulnerable to soil degradation. In alpine regions, extreme meteorological events are increasingly frequent due to climate change, speeding up the process of soil degradation and increasing the number of severe erosion processes, shallow landslides and debris flows. Vegetation cover plays a crucial role in the stabilization of mountain soils, thereby reducing the risk of natural hazards affecting downslope areas. Soil aggregate stability is one of the main soil properties that can be linked to soil loss processes. Soils developed on moraines in recently deglaciated areas typically have low levels of soil aggregation and a limited or discontinuous vegetation cover, making them more susceptible to degradation. However, soil structure can be influenced by the root system of the vegetation. Roots are actively involved in the formation of water-stable soil aggregates, increasing the stability of the soil and its nutrient content. In the present study, we aim to quantify the effect of the root system of alpine vegetation on the soil aggregate stability of the forefield of the Lys glacier, in the Aosta Valley (NW Italy). This proglacial area provides the opportunity to study how the root system of ten pioneer alpine species from different successional stages can contribute to soil development and soil stabilization. To quantify the aggregate stability of root-permeated soils, a modified wet sieving method was employed. The root length per soil volume of the different species was also determined and later correlated with the aggregate stability results. The results showed that soil aggregate

  10. Quantifying Surface Energy Flux Estimation Uncertainty Using Land Surface Temperature Observations

    Science.gov (United States)

    French, A. N.; Hunsaker, D.; Thorp, K.; Bronson, K. F.

    2015-12-01

    Remote sensing with thermal infrared is widely recognized as a good way to estimate surface heat fluxes, map crop water use, and detect water-stressed vegetation. When combined with net radiation and soil heat flux data, observations of sensible heat fluxes derived from surface temperatures (LST) are indicative of instantaneous evapotranspiration (ET). There are, however, substantial reasons LST data may not provide the best way to estimate ET. For example, it is well known that observations and models of LST, air temperature, or estimates of transport resistances may be so inaccurate that physically based models nevertheless yield non-meaningful results. Furthermore, using visible and near infrared remote sensing observations collected at the same time as LST often yields physically plausible results because they are constrained by less dynamic surface conditions such as green fractional cover. Although sensitivity studies exist that help identify likely sources of error and uncertainty, ET studies typically do not provide a way to assess the relative importance of modeling ET with and without LST inputs. To better quantify model benefits and degradations due to LST observational inaccuracies, a Bayesian uncertainty study was undertaken using data collected in remote sensing experiments at Maricopa, Arizona. Visible, near infrared and thermal infrared data were obtained from an airborne platform. The prior probability distribution of ET estimates was modeled using fractional cover, local weather data and a Penman-Monteith model, while the likelihood of LST data was modeled from a two-source energy balance model. Thus the posterior probabilities of ET represented the value added by using LST data. Results from an ET study over cotton grown in 2014 and 2015 showed significantly reduced ET confidence intervals when LST data were incorporated.
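
    A minimal one-dimensional sketch of the prior/likelihood/posterior logic described above (not the actual Penman-Monteith or two-source energy balance computations): a prior ET distribution from cover fraction and weather is updated with a Gaussian likelihood derived from an LST-based estimate, and the narrowing of the credible interval measures the value added by the LST observation. All numbers are illustrative assumptions.

      import numpy as np

      et_grid = np.linspace(0.0, 12.0, 1201)            # candidate daily ET values (mm/day)
      dx = et_grid[1] - et_grid[0]

      def gaussian(x, mu, sd):
          return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

      # Hypothetical prior from fractional cover + weather (Penman-Monteith-style estimate).
      prior = gaussian(et_grid, mu=6.0, sd=1.5)
      prior /= prior.sum() * dx

      # Hypothetical likelihood from an LST-based (two-source energy balance) estimate.
      likelihood = gaussian(et_grid, mu=5.0, sd=1.0)

      posterior = prior * likelihood
      posterior /= posterior.sum() * dx

      def credible_interval(pdf, x, mass=0.95):
          cdf = np.cumsum(pdf) * (x[1] - x[0])
          return np.interp([(1 - mass) / 2, (1 + mass) / 2], cdf, x)

      print("prior 95% interval (mm/day):    ", credible_interval(prior, et_grid))
      print("posterior 95% interval (mm/day):", credible_interval(posterior, et_grid))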

  11. Aggregate surface areas quantified through laser measurements for South African asphalt mixtures

    CSIR Research Space (South Africa)

    Anochie-Boateng, Joseph

    2012-02-01

    Full Text Available design. This paper introduces the use of a three-dimensional (3D) laser scanning method to directly measure the surface area of aggregates used in road pavements in South Africa. As an application of the laser-based measurements, the asphalt film...

  12. What do recent advances in quantifying climate and carbon cycle uncertainties mean for climate policy?

    International Nuclear Information System (INIS)

    House, Joanna I; Knorr, Wolfgang; Cornell, Sarah E; Prentice, I Colin; Huntingford, Chris; Cox, Peter M; Harris, Glen R; Jones, Chris D; Lowe, Jason A

    2008-01-01

    Global policy targets for greenhouse gas emissions reductions are being negotiated. The amount of emitted carbon dioxide remaining in the atmosphere is controlled by carbon cycle processes in the ocean and on land. These processes are themselves affected by climate. The resulting 'climate-carbon cycle feedback' has recently been quantified, but the policy implications have not. Using a scheme to emulate the range of state-of-the-art model results for climate feedback strength, including the modelled range of climate sensitivity and other key uncertainties, we analyse recent global targets. The G8 target of a 50% cut in emissions by 2050 leaves CO2 concentrations rising rapidly, approaching 1000 ppm by 2300. The Stern Review's proposed 25% cut in emissions by 2050, continuing to an 80% cut, does in fact approach stabilization of CO2 concentration on a policy-relevant (century) timescale, with most models projecting concentrations between 500 and 600 ppm by 2100. However concentrations continue to rise gradually. Long-term stabilization at 550 ppm CO2 requires cuts in emissions of 81 to 90% by 2300, and more beyond as a portion of the CO2 emitted persists for centuries to millennia. Reductions of other greenhouse gases cannot compensate for the long-term effects of emitting CO2.

  13. On quantifying uncertainty for project selection: the case of renewable energy sources' investment

    International Nuclear Information System (INIS)

    Kirytopoulos, Konstantinos; Rentizelas, Athanassios; Tziralis, Georgios

    2006-01-01

    The selection of a project among different alternatives, considering the limited resources of a company (organisation), is an added value process that determines the prosperity of an undertaken project (investment). This applies also to the 'booming' Renewable Energy Sector, especially under the circumstances established by the recent activation of the Kyoto protocol and by the plethora of available choices for renewable energy sources (RES) projects. The need for a reliable project selection method among the various alternatives is, therefore, highlighted and, in this context, the paper proposes the NPV function as one of the possible criteria for the selection of a RES project. Furthermore, it differentiates from the typical NPV calculation process by adding the concept of a probabilistic NPV approach through Monte Carlo simulation. Reality is non-deterministic, so any attempt to model it with a deterministic approach is by definition erroneous. The paper ultimately proposes a process of substituting the point estimate with a range estimate, capable of quantifying the various uncertainty factors and thereby elucidating the accomplishment possibilities of eligible scenarios. The paper is enhanced by a case study showing how the proposed method can be practically applied to support the investment decision, thus enabling the decision makers to judge its effectiveness and usefulness. (Author)
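
    The probabilistic NPV idea can be illustrated with a short Monte Carlo sketch. All inputs below (cash-flow distributions, discount rate, capital cost, project lifetime) are hypothetical placeholders, not figures from the paper.

      import numpy as np

      rng = np.random.default_rng(42)
      n_sims, years = 10_000, 15
      discount_rate = 0.08          # hypothetical
      capex = 1_200_000             # hypothetical up-front investment (EUR)

      # Hypothetical uncertain annual net cash flow for a RES project:
      # energy yield and price are drawn independently each year.
      energy_mwh = rng.normal(loc=2_500, scale=300, size=(n_sims, years))
      price_eur = rng.triangular(left=60, mode=75, right=95, size=(n_sims, years))
      cash_flows = energy_mwh * price_eur - 40_000  # fixed O&M cost, hypothetical

      discount = (1 + discount_rate) ** -np.arange(1, years + 1)
      npv = cash_flows @ discount - capex

      print(f"mean NPV: {npv.mean():,.0f} EUR")
      print(f"5th-95th percentile: {np.percentile(npv, 5):,.0f} to {np.percentile(npv, 95):,.0f} EUR")
      print(f"P(NPV < 0): {np.mean(npv < 0):.1%}")

    The spread of simulated NPVs, rather than a single point value, is what supports comparing alternative RES projects under uncertainty.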

  14. A review on the CIRCE methodology to quantify the uncertainty of the physical models of a code

    International Nuclear Information System (INIS)

    Jeon, Seong Su; Hong, Soon Joon; Bang, Young Seok

    2012-01-01

    In the field of nuclear engineering, recent regulatory audit calculations of large break loss of coolant accident (LBLOCA) have been performed with best estimate codes such as MARS, RELAP5 and CATHARE. Since a credible regulatory audit calculation is very important in the evaluation of the safety of a nuclear power plant (NPP), there has been much research into developing rules and methodologies for the use of best estimate codes. One of the major points is to develop the best estimate plus uncertainty (BEPU) method for uncertainty analysis. As a representative BEPU method, the NRC proposes the CSAU (Code Scaling, Applicability and Uncertainty) methodology, which clearly identifies the different steps necessary for an uncertainty analysis. The general idea is 1) to determine all the sources of uncertainty in the code, also called basic uncertainties, 2) to quantify them and 3) to combine them in order to obtain the final uncertainty for the studied application. Using an uncertainty analysis such as the CSAU methodology, an uncertainty band for the code response (calculation result) that is important from the safety point of view is calculated, and the safety margin of the NPP is quantified. An example of such a response is the peak cladding temperature (PCT) for a LBLOCA. However, there is a problem in the uncertainty analysis with best estimate codes. Generally, it is very difficult to determine the uncertainties due to the empiricism of closure laws (also called correlations or constitutive relationships). So far the only proposed approach has been based on expert judgment. For this case, the uncertainty range of important parameters can be wide and inaccurate, so that the confidence level of the BEPU calculation results can be decreased. In order to solve this problem, CEA (France) has recently proposed a statistical method of data analysis, called CIRCE. The CIRCE method is intended to quantify the uncertainties of the correlations of a code. It may replace the expert judgment
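
    The CIRCE idea of replacing expert judgment with data-driven estimates can be caricatured as follows: given experiment-versus-code residuals attributable to one closure law, estimate the bias and spread of a multiplicative correction factor. The sketch below is a deliberate simplification (a plain log-ratio analysis, not the actual CIRCE expectation-maximization algorithm), and all values are synthetic.

      import numpy as np

      rng = np.random.default_rng(0)

      # Synthetic 'experimental' and 'code-predicted' values of a quantity governed
      # by one closure law (e.g. an interfacial heat transfer coefficient).
      exp_vals = rng.uniform(1e3, 5e3, size=50)
      code_vals = exp_vals / (1.15 * rng.lognormal(mean=0.0, sigma=0.10, size=50))

      # Multiplicative correction factor alpha such that experiment ~ alpha * code.
      log_alpha = np.log(exp_vals / code_vals)
      bias = np.exp(log_alpha.mean())        # systematic bias of the closure law
      spread = log_alpha.std(ddof=1)         # log-scale uncertainty

      print(f"estimated bias factor: {bias:.2f}")
      print(f"approx. 95% range: {bias * np.exp(-1.96 * spread):.2f} "
            f"to {bias * np.exp(1.96 * spread):.2f}")
      # Such statistics could then feed a BEPU propagation instead of an expert-judged range.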

  15. Quantifying uncertainties of climate signals related to the 11-year solar cycle

    Science.gov (United States)

    Kruschke, T.; Kunze, M.; Matthes, K. B.; Langematz, U.; Wahl, S.

    2017-12-01

    Although state-of-the-art reconstructions based on proxies and (semi-)empirical models converge in terms of total solar irradiance, they still differ significantly in terms of spectral solar irradiance (SSI) with respect to the mean spectral distribution of energy input and its temporal variability. This study aims at quantifying uncertainties for the Earth's climate related to the 11-year solar cycle by forcing two chemistry-climate models (CCMs) - CESM1(WACCM) and EMAC - with five different SSI reconstructions (NRLSSI1, NRLSSI2, SATIRE-T, SATIRE-S, CMIP6-SSI) and the reference spectrum RSSV1-ATLAS3, derived from observations. We conduct a unique set of time-slice experiments. External forcings and boundary conditions are fixed and identical for all experiments, except for the solar forcing. The set of analyzed simulations consists of one solar minimum simulation, employing RSSV1-ATLAS3, and five solar maximum experiments. The latter are a result of adding the amplitude of solar cycle 22 according to the five reconstructions to RSSV1-ATLAS3. Our results show that the climate response to the 11-year solar cycle is generally robust across CCMs and SSI forcings. However, analyzing the variance of the solar maximum ensemble by means of ANOVA statistics reveals additional information on the uncertainties of the mean climate signals. The annual mean response agrees very well between the two CCMs for most parts of the lower and middle atmosphere. Only the upper mesosphere is subject to significant differences related to the choice of the model. However, the different SSI forcings lead to significant differences in ozone concentrations, shortwave heating rates, and temperature throughout large parts of the mesosphere and upper stratosphere. Regarding the seasonal evolution of the climate signals, our findings for shortwave heating rates and temperature are similar to the annual means with respect to the relative importance of the choice of the model or the SSI forcing for the
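
    The ANOVA-type variance partitioning mentioned above can be sketched for a small synthetic ensemble with one run per (CCM, SSI forcing) combination, so the residual term absorbs the interaction. The response values below are invented for illustration only.

      import numpy as np

      # Synthetic solar-maximum response (e.g. an annual-mean upper-stratospheric
      # temperature anomaly in K) for 2 CCMs x 5 SSI reconstructions.
      y = np.array([[0.42, 0.55, 0.38, 0.47, 0.60],    # hypothetical CCM A
                    [0.45, 0.58, 0.35, 0.50, 0.57]])   # hypothetical CCM B

      grand = y.mean()
      ss_model = y.shape[1] * ((y.mean(axis=1) - grand) ** 2).sum()
      ss_forcing = y.shape[0] * ((y.mean(axis=0) - grand) ** 2).sum()
      ss_total = ((y - grand) ** 2).sum()
      ss_resid = ss_total - ss_model - ss_forcing

      for name, ss in [("CCM", ss_model), ("SSI forcing", ss_forcing), ("residual", ss_resid)]:
          print(f"{name:12s} explains {ss / ss_total:6.1%} of the ensemble variance")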

  16. Quantifying human behavior uncertainties in a coupled agent-based model for water resources management

    Science.gov (United States)

    Hyun, J. Y.; Yang, Y. C. E.; Tidwell, V. C.; Macknick, J.

    2017-12-01

    Modeling human behaviors and decisions in water resources management is a challenging issue due to its complexity and its uncertain characteristics, which are affected by both internal factors (such as stakeholders' beliefs about external information) and external factors (such as future policies and weather/climate forecasts). Stakeholders' decisions regarding how much water they need are usually not entirely rational in real-world cases, so it is not suitable to model their decisions with a centralized (top-down) approach that assumes everyone in a watershed follows the same order or pursues the same objective. Agent-based modeling (ABM) uses a decentralized (bottom-up) approach that allows each stakeholder to make his/her own decision based on his/her own objective and the belief of information acquired. In this study, we develop an ABM which incorporates the psychological human decision process via the theory of risk perception. The theory of risk perception quantifies uncertainties in human behaviors and decisions using two sequential methodologies: Bayesian inference and the cost-loss problem. The developed ABM is coupled with a regulation-based water system model, Riverware (RW), to evaluate different human decision uncertainties in water resources management. The San Juan River Basin in New Mexico (Figure 1) is chosen as the case study area, and we define 19 major irrigation districts as water use agents whose primary decision is the irrigated area on an annual basis. This decision is affected by three external factors: 1) the upstream precipitation forecast (potential amount of water availability), 2) violation of the downstream minimum flow (required to support ecosystems), and 3) enforcement of a shortage sharing plan (a policy currently undertaken in the region for drought years). Three beliefs (as internal factors) that correspond to these three external factors are also considered in the modeling framework. The objective of this study is
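
    The two-step logic named above (Bayesian inference followed by a cost-loss decision) can be sketched for a single hypothetical irrigation agent; the forecast skill, prior belief, cost and loss values below are invented for illustration.

      # Step 1: Bayesian update of an agent's belief that the coming season is dry,
      # given a seasonal forecast of "dry". All probabilities are hypothetical.
      prior_dry = 0.30                      # agent's prior belief
      p_dry_forecast_given_dry = 0.80       # hypothetical forecast hit rate
      p_dry_forecast_given_wet = 0.25       # hypothetical false-alarm rate

      evidence = (p_dry_forecast_given_dry * prior_dry
                  + p_dry_forecast_given_wet * (1.0 - prior_dry))
      belief_dry = p_dry_forecast_given_dry * prior_dry / evidence

      # Step 2: cost-loss rule. Reducing the irrigated area costs C (forgone revenue);
      # not reducing it and then hitting a dry year costs L (crop loss plus shortage penalty).
      C, L = 40_000.0, 150_000.0            # hypothetical, in USD
      reduce_area = belief_dry > C / L

      print(f"posterior P(dry) = {belief_dry:.2f}, threshold C/L = {C / L:.2f}")
      print("decision:", "reduce irrigated area" if reduce_area else "keep planned area")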

  17. Quantifying uncertainty for predictions with model error in non-Gaussian systems with intermittency

    International Nuclear Information System (INIS)

    Branicki, Michal; Majda, Andrew J

    2012-01-01

    This paper discusses a range of important mathematical issues arising in applications of a newly emerging stochastic-statistical framework for quantifying and mitigating uncertainties associated with prediction of partially observed and imperfectly modelled complex turbulent dynamical systems. The need for such a framework is particularly severe in climate science where the true climate system is vastly more complicated than any conceivable model; however, applications in other areas, such as neural networks and materials science, are just as important. The mathematical tools employed here rely on empirical information theory and fluctuation–dissipation theorems (FDTs) and it is shown that they seamlessly combine into a concise systematic framework for measuring and optimizing consistency and sensitivity of imperfect models. Here, we utilize a simple statistically exactly solvable ‘perfect’ system with intermittent hidden instabilities and with time-periodic features to address a number of important issues encountered in prediction of much more complex dynamical systems. These problems include the role and mitigation of model error due to coarse-graining, moment closure approximations, and the memory of initial conditions in producing short, medium and long-range predictions. Importantly, based on a suite of increasingly complex imperfect models of the perfect test system, we show that the predictive skill of the imperfect models and their sensitivity to external perturbations is improved by ensuring their consistency on the statistical attractor (i.e. the climate) with the perfect system. Furthermore, the discussed link between climate fidelity and sensitivity via the FDT opens up an enticing prospect of developing techniques for improving imperfect model sensitivity based on specific tests carried out in the training phase of the unperturbed statistical equilibrium/climate. (paper)

  18. Quantifying Key Climate Parameter Uncertainties Using an Earth System Model with a Dynamic 3D Ocean

    Science.gov (United States)

    Olson, R.; Sriver, R. L.; Goes, M. P.; Urban, N.; Matthews, D.; Haran, M.; Keller, K.

    2011-12-01

    Climate projections hinge critically on uncertain climate model parameters such as climate sensitivity, vertical ocean diffusivity and anthropogenic sulfate aerosol forcings. Climate sensitivity is defined as the equilibrium global mean temperature response to a doubling of atmospheric CO2 concentrations. Vertical ocean diffusivity parameterizes sub-grid scale ocean vertical mixing processes. These parameters are typically estimated using Intermediate Complexity Earth System Models (EMICs) that lack a full 3D representation of the oceans, thereby neglecting the effects of mixing on ocean dynamics and meridional overturning. We improve on these studies by employing an EMIC with a dynamic 3D ocean model to estimate these parameters. We carry out historical climate simulations with the University of Victoria Earth System Climate Model (UVic ESCM), varying parameters that affect climate sensitivity, vertical ocean mixing, and the effects of anthropogenic sulfate aerosols. We use a Bayesian approach whereby the likelihood of each parameter combination depends on how well the model simulates surface air temperature and upper ocean heat content. We use a Gaussian process emulator to interpolate the model output to an arbitrary parameter setting. We use a Markov chain Monte Carlo method to estimate the posterior probability distribution function (pdf) of these parameters. We explore the sensitivity of the results to prior assumptions about the parameters. In addition, we estimate the relative skill of different observations to constrain the parameters. We quantify the uncertainty in parameter estimates stemming from climate variability, model and observational errors. We explore the sensitivity of key decision-relevant climate projections to these parameters. We find that climate sensitivity and vertical ocean diffusivity estimates are consistent with previously published results. The climate sensitivity pdf is strongly affected by the prior assumptions, and by the scaling
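
    The Bayesian machinery described above (a likelihood built from data-model misfit, sampled with MCMC) can be sketched in miniature for a single parameter. The toy linear response below merely stands in for the emulated UVic ESCM, and all observations are synthetic.

      import numpy as np

      rng = np.random.default_rng(1)

      # Synthetic 'observations': warming (K) that a toy model maps from climate
      # sensitivity S as warming = 0.45 * S, plus observational noise.
      true_S, obs_sd = 3.0, 0.15
      obs = 0.45 * true_S + rng.normal(0, obs_sd, size=20)

      def log_post(S):
          if not (0.5 < S < 10.0):            # uniform prior bounds (hypothetical)
              return -np.inf
          resid = obs - 0.45 * S
          return -0.5 * np.sum((resid / obs_sd) ** 2)

      # Random-walk Metropolis sampling of the posterior of S.
      samples, S_cur, lp_cur = [], 2.0, log_post(2.0)
      for _ in range(20_000):
          S_prop = S_cur + rng.normal(0, 0.2)
          lp_prop = log_post(S_prop)
          if np.log(rng.uniform()) < lp_prop - lp_cur:
              S_cur, lp_cur = S_prop, lp_prop
          samples.append(S_cur)

      post = np.array(samples[5_000:])        # discard burn-in
      print(f"posterior mean S = {post.mean():.2f} K, 90% CI = "
            f"[{np.percentile(post, 5):.2f}, {np.percentile(post, 95):.2f}] K")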

  19. Quantifying measurement uncertainties in ADCP measurements in non-steady, inhomogeneous flow

    Science.gov (United States)

    Schäfer, Stefan

    2017-04-01

    The author presents a laboratory study of fixed-platform four-beam ADCP and three-beam ADV measurements in the tailrace of a micro hydro power setup with a 35 kW Kaplan turbine and 2.5 m head. The datasets discussed quantify measurement uncertainties of the ADCP measurement technique coming from non-steady, inhomogeneous flow. For a constant discharge of 1.5 m3/s, two different flow scenarios were investigated: one being the regular tailrace flow downstream of the draft tube and the second being a straightened, less inhomogeneous flow, which was generated by the use of a flow straightening device: a rack of 40 mm diameter pipe sections was mounted right behind the draft tube. ADCP measurements (sampling rate 1.35 Hz) were conducted at three distances behind the draft tube and compared bin-wise to measurements of three simultaneously measuring ADV probes (sampling rate 64 Hz). The ADV probes were aligned horizontally and the ADV bins were placed in the centers of two facing ADCP bins and in the vertical under the ADCP probe at the corresponding depth. Rotating the ADV probes by 90° allowed for measurements of the other two facing ADCP bins. For reasons of mutual probe interaction, ADCP and ADV measurements were not conducted at the same time. The datasets were evaluated by using mean and fluctuation velocities. Turbulence parameters were calculated and compared as far as applicable. Uncertainties coming from non-steady flow were estimated with the normalized mean square error and evaluated by comparing long-term measurements of 60 minutes to shorter measurement intervals. Uncertainties coming from inhomogeneous flow were evaluated by comparison of ADCP with ADV data along the ADCP beams where ADCP data were effectively measured and in the vertical under the ADCP probe where velocities of the ADCP measurements were displayed. Errors coming from non-steady flow could be compensated for through sufficiently long measurement intervals with high enough sampling rates depending on the
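
    The normalized-mean-square-error comparison of short averaging windows against a long-term reference can be sketched as follows; the synthetic velocity record stands in for a single ADCP bin and all numbers are illustrative.

      import numpy as np

      rng = np.random.default_rng(7)
      fs, minutes = 1.35, 60                       # ADCP-like sampling rate (Hz), record length
      n = int(fs * 60 * minutes)
      t = np.arange(n) / fs

      # Synthetic bin velocity: mean flow + slow unsteadiness + turbulence-like noise.
      u = 0.80 + 0.05 * np.sin(2 * np.pi * t / 300.0) + rng.normal(0, 0.10, n)
      u_ref = u.mean()                             # 60-minute reference mean

      for window_min in (1, 5, 10, 30):
          m = int(fs * 60 * window_min)
          k = n // m
          means = u[:k * m].reshape(k, m).mean(axis=1)
          nmse = np.mean((means - u_ref) ** 2) / u_ref ** 2
          print(f"{window_min:2d}-minute windows: NMSE = {nmse:.2e}")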

  20. Future of Earthquake Early Warning: Quantifying Uncertainty and Making Fast Automated Decisions for Applications

    Science.gov (United States)

    Wu, Stephen

    can capture the uncertainties in EEW information and the decision process is used. This approach is called the Performance-Based Earthquake Early Warning, which is based on the PEER Performance-Based Earthquake Engineering method. Use of surrogate models is suggested to improve computational efficiency. Also, new models are proposed to add the influence of lead time into the cost-benefit analysis. For example, a value of information model is used to quantify the potential value of delaying the activation of a mitigation action for a possible reduction of the uncertainty of EEW information in the next update. Two practical examples, evacuation alert and elevator control, are studied to illustrate the ePAD framework. Potential advanced EEW applications, such as the case of multiple-action decisions and the synergy of EEW and structural health monitoring systems, are also discussed.

  1. Quantified Uncertainties in Comparative Life Cycle Assessment : What Can Be Concluded?

    NARCIS (Netherlands)

    Mendoza Beltran, Angelica; Prado, Valentina; Font Vivanco, David; Henriksson, Patrik J.G.; Guinée, Jeroen B.; Heijungs, Reinout

    2018-01-01

    Interpretation of comparative Life Cycle Assessment (LCA) results can be challenging in the presence of uncertainty. To aid in interpreting such results under the goal of any comparative LCA, we aim to provide guidance to practitioners by gaining insights into uncertainty-statistics methods (USMs).

  2. Climate induced changes on the hydrology of Mediterranean basins - assessing uncertainties and quantifying risks

    Science.gov (United States)

    Ludwig, Ralf

    2014-05-01

    According to current climate projections, the Mediterranean area is at high risk for severe changes in the hydrological budget and extremes. With innovative scientific measures, integrated hydrological modeling and novel geophysical field monitoring techniques, the FP7 project CLIMB (Climate Induced Changes on the Hydrology of Mediterranean Basins; GA: 244151) assessed the impacts of climate change on the hydrology of seven basins in the Mediterranean area, in Italy, France, Turkey, Tunisia, Egypt and the Gaza Strip, and quantified uncertainties and risks for the main stakeholders of each test site. Intensive climate model auditing selected four regional climate models, whose data were bias corrected and downscaled to serve as climate forcing for a set of hydrological models at each site. The results of the multi-model hydro-climatic ensemble and socio-economic factor analysis were applied to develop a risk model building upon spatial vulnerability and risk assessment. Findings generally reveal an increasing risk for water resources management in the test sites, yet at different rates and severity in the investigated sectors, with the highest impacts likely to occur in the transition months. The most important elements of this research include the following aspects: • Climate change contributes, yet with strong regional variation, to water scarcity in the Mediterranean; other factors, e.g. pollution or poor management practices, are regionally still the dominant pressures on water resources. • Rain-fed agriculture needs to adapt to seasonal changes; stable or increasing productivity likely depends on additional irrigation. • Tourism could benefit in shoulder seasons, but may expect income losses in the summer peak season due to increasing heat stress. • Local and regional water managers and water users lack, as yet, awareness of climate change induced risks; emerging focus areas are supplies of domestic drinking water, irrigation, hydropower and livestock. • Data

  3. Advanced probabilistic methods for quantifying the effects of various uncertainties in structural response

    Science.gov (United States)

    Nagpal, Vinod K.

    1988-01-01

    The effects of actual variations, also called uncertainties, in geometry and material properties on the structural response of a space shuttle main engine turbopump blade are evaluated. A normal distribution was assumed to represent the uncertainties statistically. Uncertainties were assumed to be totally random, partially correlated, and fully correlated. The magnitudes of these uncertainties were represented in terms of mean and variance. Blade responses, recorded in terms of displacements, natural frequencies, and maximum stress, were evaluated and plotted in the form of probabilistic distributions under combined uncertainties. These distributions provide an estimate of the range of magnitudes of the response and the probability of occurrence of a given response. Most importantly, these distributions provide the information needed to estimate quantitatively the risk in a structural design.

  4. A framework to quantify uncertainties of seafloor backscatter from swath mapping echosounders

    Science.gov (United States)

    Malik, Mashkoor; Lurton, Xavier; Mayer, Larry

    2018-06-01

    Multibeam echosounders (MBES) have become a widely used acoustic remote sensing tool to map and study the seafloor, providing co-located bathymetry and seafloor backscatter. Although the uncertainty associated with MBES-derived bathymetric data has been studied extensively, the question of backscatter uncertainty has been addressed only minimally and hinders the quantitative use of MBES seafloor backscatter. This paper explores approaches to identifying uncertainty sources associated with MBES-derived backscatter measurements. The major sources of uncertainty are catalogued and the magnitudes of their relative contributions to the backscatter uncertainty budget are evaluated. These major uncertainty sources include seafloor insonified area (1-3 dB), absorption coefficient (up to > 6 dB), random fluctuations in echo level (5.5 dB for a Rayleigh distribution), and sonar calibration (device dependent). The magnitudes of these uncertainty sources vary based on how these effects are compensated for during data acquisition and processing. Various cases (no compensation, partial compensation and full compensation) for seafloor insonified area, transmission losses and random fluctuations were modeled to estimate their uncertainties in different scenarios. Uncertainty related to the seafloor insonified area can be reduced significantly by accounting for seafloor slope during backscatter processing while transmission losses can be constrained by collecting full water column absorption coefficient profiles (temperature and salinity profiles). To reduce random fluctuations to below 1 dB, at least 20 samples are recommended to be used while computing mean values. The estimation of uncertainty in backscatter measurements is constrained by the fact that not all instrumental components are characterized and documented sufficiently for commercially available MBES. Further involvement from manufacturers in providing this essential information is critically required.

  5. Helium Mass Spectrometer Leak Detection: A Method to Quantify Total Measurement Uncertainty

    Science.gov (United States)

    Mather, Janice L.; Taylor, Shawn C.

    2015-01-01

    In applications where leak rates of components or systems are evaluated against a leak rate requirement, the uncertainty of the measured leak rate must be included in the reported result. However, in the helium mass spectrometer leak detection method, the sensitivity, or resolution, of the instrument is often the only component of the total measurement uncertainty noted when reporting results. To address this shortfall, a measurement uncertainty analysis method was developed that includes the leak detector unit's resolution, repeatability, hysteresis, and drift, along with the uncertainty associated with the calibration standard. In a step-wise process, the method identifies the bias and precision components of the calibration standard, the measurement correction factor (K-factor), and the leak detector unit. Together these individual contributions to error are combined and the total measurement uncertainty is determined using the root-sum-square method. It was found that the precision component contributes more to the total uncertainty than the bias component, but the bias component is not insignificant. For helium mass spectrometer leak rate tests where unit sensitivity alone is not enough, a thorough evaluation of the measurement uncertainty such as the one presented herein should be performed and reported along with the leak rate value.
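
    The root-sum-square combination described above can be sketched as follows; the component values (resolution, repeatability, hysteresis, drift, calibration-standard uncertainty) are placeholders rather than values from the paper.

      import math

      # Hypothetical uncertainty components for a helium leak rate measurement,
      # all expressed as standard uncertainties in the same relative units (%).
      components = {
          "resolution":           1.0,
          "repeatability":        2.5,
          "hysteresis":           0.8,
          "drift":                1.2,
          "calibration_standard": 1.5,   # bias-type contribution from the standard
      }

      u_combined = math.sqrt(sum(u**2 for u in components.values()))
      U_expanded = 2.0 * u_combined      # coverage factor k = 2 (~95% confidence)

      for name, u in components.items():
          print(f"{name:22s} {u:4.1f} %")
      print(f"combined standard uncertainty: {u_combined:.1f} %")
      print(f"expanded uncertainty (k=2):    {U_expanded:.1f} %")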

  6. SU-E-J-159: Intra-Patient Deformable Image Registration Uncertainties Quantified Using the Distance Discordance Metric

    International Nuclear Information System (INIS)

    Saleh, Z; Thor, M; Apte, A; Deasy, J; Sharp, G; Muren, L

    2014-01-01

    Purpose: The quantitative evaluation of deformable image registration (DIR) is currently challenging due to the lack of a ground truth. In this study we test a new method proposed for quantifying multiple-image based DIR-related uncertainties, for DIR of pelvic images. Methods: 19 patients who had previously had radiotherapy for prostate cancer were analyzed, each with 6 CT scans. Manually delineated structures for the rectum and bladder, which served as ground truth structures, were delineated on the planning CT and each subsequent scan. For each patient, voxel-by-voxel DIR-related uncertainties were evaluated, following B-spline based DIR, by applying a previously developed metric, the distance discordance metric (DDM; Saleh et al., PMB (2014) 59:733). The DDM map was superimposed on the first acquired CT scan and DDM statistics were assessed, also relative to two metrics estimating the agreement between the propagated and the manually delineated structures. Results: The highest DDM values, which correspond to the greatest spatial uncertainties, were observed near the body surface and in the bowel due to the presence of gas. The mean rectal and bladder DDM values ranged from 1.1–11.1 mm and 1.5–12.7 mm, respectively. There was a strong correlation in the DDMs between the rectum and bladder (Pearson R = 0.68 for the max DDM). For both structures, the DDM was correlated with the ratio between the DIR-propagated and manually delineated volumes (R = 0.74 for the max rectal DDM). The maximum rectal DDM was negatively correlated with the Dice Similarity Coefficient between the propagated and the manually delineated volumes (R = −0.52). Conclusion: The multiple-image based DDM map quantified considerable DIR variability across different structures and among patients. Besides using the DDM for quantifying DIR-related uncertainties, it could potentially be used to adjust for uncertainties in DIR-based accumulated dose distributions

  7. A new system to quantify uncertainties in LEO satellite position determination due to space weather events

    Data.gov (United States)

    National Aeronautics and Space Administration — We propose to develop a new system for quantitative assessment of uncertainties in LEO satellite position caused by storm time changes in space environmental...

  8. Quantifying uncertainty in the measurement of arsenic in suspended particulate matter by Atomic Absorption Spectrometry with hydride generator

    Directory of Open Access Journals (Sweden)

    Ahuja Tarushee

    2011-04-01

    Full Text Available Arsenic is a toxic element that creates several problems in human beings, especially when inhaled through air. Accurate and precise measurement of arsenic in suspended particulate matter (SPM) is therefore of prime importance, as it gives information about the level of toxicity in the environment so that preventive measures can be taken in the affected areas. Quality assurance is equally important in the measurement of arsenic in SPM samples before making any decision. The quality and reliability of the data for such volatile elements depend upon the measurement of the uncertainty of each step involved, from sampling to analysis. Analytical results that quantify uncertainty give a measure of the confidence level of the concerned laboratory. The main objective of this study was therefore to determine the arsenic content in SPM samples with an uncertainty budget and to find out the various potential sources of uncertainty that affect the results. Keeping these facts in mind, we selected seven diverse sites of Delhi (National Capital of India) for quantification of the arsenic content in SPM samples with an uncertainty budget, from sampling by HVS to analysis by Atomic Absorption Spectrometer-Hydride Generator (AAS-HG). Many steps are involved in the measurement of arsenic in SPM samples, from sampling to the final result, and we have considered the various potential sources of uncertainty. The calculation of uncertainty is based on the ISO/IEC 17025:2005 document and the EURACHEM guideline. It was found that the final results mostly depend on the uncertainty in measurement, mainly due to repeatability, the final volume prepared for analysis, the weighing balance and sampling by HVS. After the analysis of the data from the seven diverse sites of Delhi, it was concluded that during the period from 31st Jan. 2008 to 7th Feb. 2008 the arsenic concentration varied from 1.44 ± 0.25 to 5.58 ± 0.55 ng/m3 at the 95% confidence level (k = 2).

  9. Quantifying remarks to the question of uncertainties of the 'general dose assessment fundamentals'

    International Nuclear Information System (INIS)

    Brenk, H.D.; Vogt, K.J.

    1982-12-01

    Dose prediction models are always subject to uncertainties due to a number of factors, including deficiencies in the model structure and uncertainties in the model input parameter values. In lieu of validation experiments, the evaluation of these uncertainties is restricted to scientific judgement. Several attempts have been made in the literature to evaluate the uncertainties of the current dose assessment models resulting from uncertainties in the model input parameter values using stochastic approaches. Less attention, however, has been paid to potential sources of systematic over- and underestimation of the predicted doses due to deficiencies in the model structure. The present study addresses this aspect with regard to dose assessment models currently used for regulatory purposes. The influence of a number of basic simplifications and conservative assumptions has been investigated. Our systematic approach is exemplified by a comparison of doses evaluated on the basis of the regulatory guide model and a more realistic model, respectively. This is done for 3 critical exposure pathways. As a result of this comparison it can be concluded that the currently used regulatory-type models include significant safety factors, resulting in a systematic overprediction of dose to man of up to two orders of magnitude. For this reason there are some indications that these models usually more than compensate for the bulk of the stochastic uncertainties caused by the variability of the input parameter values. (orig.)

  10. Quantifying the relative contributions of different solute carriers to aggregate substrate transport

    Science.gov (United States)

    Taslimifar, Mehdi; Oparija, Lalita; Verrey, Francois; Kurtcuoglu, Vartan; Olgac, Ufuk; Makrides, Victoria

    2017-01-01

    Determining the contributions of different transporter species to overall cellular transport is fundamental for understanding the physiological regulation of solutes. We calculated the relative activities of Solute Carrier (SLC) transporters using the Michaelis-Menten equation and global fitting to estimate the normalized maximum transport rate (Vmax) for each transporter. The input data were the normalized measured uptake of the essential neutral amino acid (AA) L-leucine (Leu) from concentration-dependence assays performed using Xenopus laevis oocytes. Our methodology was verified by calculating Leu and L-phenylalanine (Phe) data in the presence of competitive substrates and/or inhibitors. Among 9 potentially expressed endogenous X. laevis oocyte Leu transporter species, the activities of only the uniporters SLC43A2/LAT4 (and/or SLC43A1/LAT3) and the sodium symporter SLC6A19/B0AT1 were required to account for total uptake. Furthermore, Leu and Phe uptake by heterologously expressed human SLC6A14/ATB0,+ and SLC43A2/LAT4 was accurately calculated. This versatile systems biology approach is useful for analyses where the kinetics of each active protein species can be represented by the Hill equation. Furthermore, it is applicable even in the absence of protein expression data. It could potentially be applied, for example, to quantify drug transporter activities in target cells to improve specificity. PMID:28091567
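
    The global-fitting idea can be sketched as follows: normalized Leu uptake over a concentration range is modelled as the sum of two Michaelis-Menten components with assumed Km values, and the normalized Vmax of each is estimated by least squares. The Km values and data below are synthetic, not those of the paper.

      import numpy as np
      from scipy.optimize import curve_fit

      # Assumed (hypothetical) Michaelis constants for two transporter species, in uM.
      KM_LAT4, KM_B0AT1 = 180.0, 1200.0

      def total_uptake(conc, vmax_lat4, vmax_b0at1):
          """Sum of two Michaelis-Menten components sharing the same substrate concentration."""
          return (vmax_lat4 * conc / (KM_LAT4 + conc)
                  + vmax_b0at1 * conc / (KM_B0AT1 + conc))

      # Synthetic concentration-dependence data (normalized uptake of Leu).
      rng = np.random.default_rng(3)
      conc = np.array([10, 30, 100, 300, 1000, 3000, 10000], dtype=float)
      obs = total_uptake(conc, 0.6, 0.4) * rng.normal(1.0, 0.05, conc.size)

      popt, pcov = curve_fit(total_uptake, conc, obs, p0=[0.5, 0.5], bounds=(0, np.inf))
      perr = np.sqrt(np.diag(pcov))
      for name, v, e in zip(["LAT4-like", "B0AT1-like"], popt, perr):
          print(f"{name:11s} normalized Vmax = {v:.2f} +/- {e:.2f}")
      print(f"LAT4-like share of total Vmax: {popt[0] / popt.sum():.0%}")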

  11. Quantifying Uncertainties in Mass-Dimensional Relationships Through a Comparison Between CloudSat and SPartICus Reflectivity Factors

    Science.gov (United States)

    Mascio, J.; Mace, G. G.

    2015-12-01

    CloudSat and CALIPSO, two of the satellites in the A-Train constellation, use algorithms, such as the T-matrix method, to calculate the scattering properties of small cloud particles. Ice clouds (i.e. cirrus) cause problems for these cloud property retrieval algorithms because of the variability of ice mass as a function of particle size. Assumptions regarding the microphysical properties, such as mass-dimensional (m-D) relationships, are often necessary in retrieval algorithms for simplification, but these assumptions create uncertainties of their own. Therefore, ice cloud property retrieval uncertainties can be substantial and are often not well known. To investigate these uncertainties, reflectivity factors measured by CloudSat are compared to those calculated from particle size distributions (PSDs) to which different m-D relationships are applied. These PSDs are from data collected in situ during three flights of the Small Particles in Cirrus (SPartICus) campaign. We find that no specific habit emerges as preferred and instead conclude that the microphysical characteristics of ice crystal populations tend to be distributed over a continuum and, therefore, cannot be categorized easily. To quantify the uncertainties in the mass-dimensional relationships, an optimal estimation inversion was run to retrieve the m-D relationship for each SPartICus flight, as well as to calculate the uncertainties of the m-D power law.

  12. Practical Markov Logic Containing First-Order Quantifiers With Application to Identity Uncertainty

    National Research Council Canada - National Science Library

    Culotta, Aron; McCallum, Andrew

    2005-01-01

    .... In this paper, we present approximate inference and training methods that incrementally instantiate portions of the network as needed to enable first-order existential and universal quantifiers in Markov logic networks...

  13. Use of Paired Simple and Complex Models to Reduce Predictive Bias and Quantify Uncertainty

    DEFF Research Database (Denmark)

    Doherty, John; Christensen, Steen

    2011-01-01

    -constrained uncertainty analysis. Unfortunately, however, many system and process details on which uncertainty may depend are, by design, omitted from simple models. This can lead to underestimation of the uncertainty associated with many predictions of management interest. The present paper proposes a methodology...... of these details born of the necessity for model outputs to replicate observations of historical system behavior. In contrast, the rapid run times and general numerical reliability of simple models often promulgates good calibration and ready implementation of sophisticated methods of calibration...... that attempts to overcome the problems associated with complex models on the one hand and simple models on the other hand, while allowing access to the benefits each of them offers. It provides a theoretical analysis of the simplification process from a subspace point of view, this yielding insights...

  14. Quantifying and Reducing Uncertainty in Correlated Multi-Area Short-Term Load Forecasting

    Energy Technology Data Exchange (ETDEWEB)

    Sun, Yannan; Hou, Zhangshuan; Meng, Da; Samaan, Nader A.; Makarov, Yuri V.; Huang, Zhenyu

    2016-07-17

    In this study, we represent and reduce the uncertainties in short-term electric load forecasting by integrating time series analysis tools including ARIMA modeling, sequential Gaussian simulation, and principal component analysis. The approaches focus mainly on maintaining the inter-dependency between multiple geographically related areas. These approaches are applied to cross-correlated load time series as well as to their forecast errors. Multiple short-term prediction realizations are then generated from the reduced uncertainty ranges, which are useful for power system risk analyses.
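
    One ingredient named above, ARIMA modelling of a single area's load with forecast intervals, can be sketched as follows; the series is synthetic, the cross-area correlation and principal component steps are omitted, and the statsmodels package is assumed to be available.

      import numpy as np
      from statsmodels.tsa.arima.model import ARIMA

      rng = np.random.default_rng(5)

      # Synthetic hourly load (MW) for one area: a daily cycle plus AR(1) noise.
      hours = np.arange(24 * 28)
      noise = np.zeros(hours.size)
      for t in range(1, hours.size):
          noise[t] = 0.7 * noise[t - 1] + rng.normal(0, 15)
      load = 900 + 120 * np.sin(2 * np.pi * hours / 24) + noise

      # Remove the known daily cycle and fit ARIMA(1,0,1) to the residual for brevity.
      deseason = load - 120 * np.sin(2 * np.pi * hours / 24)
      res = ARIMA(deseason, order=(1, 0, 1)).fit()

      fc = res.get_forecast(steps=24)
      future_hours = hours[-1] + 1 + np.arange(24)
      mean = np.asarray(fc.predicted_mean) + 120 * np.sin(2 * np.pi * future_hours / 24)
      lo, hi = np.asarray(fc.conf_int(alpha=0.05)).T
      print(f"next-hour forecast: {mean[0]:.0f} MW "
            f"(95% interval roughly +/- {(hi[0] - lo[0]) / 2:.0f} MW)")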

  15. Simulating and quantifying legacy topographic data uncertainty: an initial step to advancing topographic change analyses

    Science.gov (United States)

    Wasklewicz, Thad; Zhu, Zhen; Gares, Paul

    2017-12-01

    Rapid technological advances, sustained funding, and a greater recognition of the value of topographic data have helped develop an increasing archive of topographic data sources. Advances in basic and applied research related to Earth surface changes require researchers to integrate recent high-resolution topography (HRT) data with legacy datasets. Several technical challenges and data uncertainty issues persist to date when integrating legacy datasets with more recent HRT data. The disparate data sources required to extend the topographic record back in time are often stored in formats that are not readily compatible with more recent HRT data. Legacy data may also contain unknown or unreported error that makes accounting for data uncertainty difficult. There are also cases of known deficiencies in legacy datasets, which can significantly bias results. Finally, scientists are faced with the daunting challenge of definitively deriving the extent to which a landform or landscape has changed or will continue to change in response to natural and/or anthropogenic processes. Here, we examine the question: how do we evaluate and portray data uncertainty from the varied topographic legacy sources and combine this uncertainty with current spatial data collection techniques to detect meaningful topographic changes? We view topographic uncertainty as a stochastic process that takes into consideration spatial and temporal variations from a numerical simulation and a physical modeling experiment. The numerical simulation incorporates numerous topographic data sources typically found across a range of legacy data to present high-resolution data, while the physical model focuses on more recent HRT data acquisition techniques. Elevation uncertainties observed from anchor points in the digital terrain models are modeled using "states" in a stochastic estimator. Stochastic estimators trace the temporal evolution of the uncertainties and are natively capable of incorporating sensor

  16. Quantifying the uncertainties of advection and boundary layer dynamics on the diurnal carbon dioxide budget

    NARCIS (Netherlands)

    Pino, D.; Kaikkonen, J.P.; Vilà-Guerau de Arellano, J.

    2013-01-01

    [1] We investigate the uncertainties in the carbon dioxide (CO2) mixing ratio and inferred surface flux associated with boundary layer processes and advection by using mixed-layer theory. By extending the previous analysis presented by Pino et al. (2012), new analytical expressions are derived to

  17. Quantifying Uncertainty in Instantaneous Orbital Data Products of TRMM over Indian Subcontinent

    Science.gov (United States)

    Jayaluxmi, I.; Nagesh, D.

    2013-12-01

    In the last 20 years, microwave radiometers have taken satellite images of the Earth's weather, proving to be a valuable tool for quantitative estimation of precipitation from space. However, along with the widespread acceptance of microwave based precipitation products, it has also been recognized that they contain large uncertainties. While most uncertainty evaluation studies focus on the accuracy of rainfall accumulated over time (e.g., season/year), evaluations of instantaneous rainfall intensities from satellite orbital data products are relatively rare. These instantaneous products are known to potentially cause large uncertainties during real time flood forecasting studies at the watershed scale, especially over land regions, where the highly varying land surface emissivity offers a myriad of complications hindering accurate rainfall estimation. The error components of orbital data products also tend to interact nonlinearly with hydrologic modeling uncertainty. Keeping these in mind, the present study fosters the development of uncertainty analysis using instantaneous satellite orbital data products (version 7 of 1B11, 2A25, 2A23) derived from the passive and active sensors onboard the Tropical Rainfall Measuring Mission (TRMM) satellite, namely the TRMM microwave imager (TMI) and the Precipitation Radar (PR). The study utilizes 11 years of orbital data from 2002 to 2012 over the Indian subcontinent and examines the influence of various error sources on the convective and stratiform precipitation types. The analysis conducted over the land regions of India investigates three sources of uncertainty in detail. These include 1) errors due to improper delineation of the rainfall signature within the microwave footprint (rain/no rain classification), 2) uncertainty introduced by the transfer function linking rainfall with TMI low frequency channels and 3) sampling errors owing to the narrow swath and infrequent visits of the TRMM sensors. Case study results obtained during the Indian summer

  18. Method for quantifying the uncertainty with the extraction of the raw data of a gamma ray spectrum by deconvolution software

    International Nuclear Information System (INIS)

    Vigineix, Thomas; Guillot, Nicolas; Saurel, Nicolas

    2013-06-01

    Gamma ray spectrometry is a passive non-destructive assay most commonly used to identify and quantify the radionuclides present in complex, large objects such as nuclear waste packages. The treatment of spectra from the measurement of nuclear waste is done in two steps: the first step is to extract the raw data from the spectra (energies and net photoelectric absorption peak areas) and the second step is to determine the detection efficiency of the measuring scene. Commercial software packages use different methods to extract the raw data from the spectrum, but none is optimal in the treatment of spectra containing actinides. Spectra must be handled individually and require settings and substantial feedback from the operator, which prevents automatic processing of spectra and increases the risk of human error. In this context the Nuclear Measurement and Valuation Laboratory (LMNE) at the Atomic Energy Commission Valduc (CEA Valduc) has developed a new methodology for quantifying the uncertainty associated with the extraction of the raw data from the spectrum. This methodology was applied with raw data and commercial software that needs configuration by the operator (GENIE2000, Interwinner...). This robust and fully automated methodology for uncertainty calculation is applied to the entire processing chain of the software. The methodology ensures, for all peaks processed by the deconvolution software, an extraction of peak energies to within 2 channels and an extraction of net areas with an uncertainty of less than 5 percent. The methodology was tested experimentally with actinide spectra. (authors)

  19. Planning Under Uncertainty for Aggregated Electric Vehicle Charging with Renewable Energy Supply

    NARCIS (Netherlands)

    Walraven, E.M.P.; Spaan, M.T.J.; Kaminka, Gal A.; Fox, Maria; Bouquet, Paolo; Hüllermeier, Eyke; Dignum, Virginia; Dignum, Frank; van Harmelen, Frank

    2016-01-01

    Renewable energy sources introduce uncertainty regarding generated power in smart grids. For instance, power that is generated by wind turbines is time-varying and dependent on the weather. Electric vehicles will become increasingly important in the development of smart grids with a high penetration

  20. Quantifying reactor safety margins: Part 1: An overview of the code scaling, applicability, and uncertainty evaluation methodology

    International Nuclear Information System (INIS)

    Boyack, B.E.; Duffey, R.B.; Griffith, P.

    1988-01-01

    In August 1988, the Nuclear Regulatory Commission (NRC) approved the final version of a revised rule on the acceptance of emergency core cooling systems (ECCS) entitled "Emergency Core Cooling System; Revisions to Acceptance Criteria." The revised rule states that an alternate ECCS performance analysis, based on best-estimate methods, may be used to provide more realistic estimates of plant safety margins, provided the licensee quantifies the uncertainty of the estimates and includes that uncertainty when comparing the calculated results with prescribed acceptance limits. To support the revised ECCS rule, the NRC and its contractors and consultants have developed and demonstrated a method called the Code Scaling, Applicability, and Uncertainty (CSAU) evaluation methodology. It is an auditable, traceable, and practical method for combining quantitative analyses and expert opinions to arrive at computed values of uncertainty. This paper provides an overview of the CSAU evaluation methodology and its application to a postulated cold-leg, large-break loss-of-coolant accident in a Westinghouse four-loop pressurized water reactor with 17 × 17 fuel. The code selected for this demonstration of the CSAU methodology was TRAC-PF1/MOD1, Version 14.3. 23 refs., 5 figs., 1 tab

  1. Characterisation of a reference site for quantifying uncertainties related to soil sampling

    International Nuclear Information System (INIS)

    Barbizzi, Sabrina; Zorzi, Paolo de; Belli, Maria; Pati, Alessandra; Sansone, Umberto; Stellato, Luisa; Barbina, Maria; Deluisa, Andrea; Menegon, Sandro; Coletti, Valter

    2004-01-01

    An integrated approach to quality assurance in soil sampling remains to be accomplished. - The paper reports a methodology adopted to address problems related to quality assurance in soil sampling. The SOILSAMP project, funded by the Environmental Protection Agency of Italy (APAT), is aimed at (i) establishing protocols for soil sampling in different environments; (ii) assessing the uncertainties associated with different soil sampling methods in order to select the 'fit-for-purpose' method; and (iii) qualifying, in terms of trace element spatial variability, a reference site for national and international inter-comparison exercises. Preliminary results and considerations are illustrated.

  2. Quantifying the Contribution of Post-Processing in Computed Tomography Measurement Uncertainty

    DEFF Research Database (Denmark)

    Stolfi, Alessandro; Thompson, Mary Kathryn; Carli, Lorenzo

    2016-01-01

    by calculating the standard deviation of 10 repeated measurement evaluations on the same data set. The evaluations were performed on an industrial assembly. Each evaluation includes several dimensional and geometrical measurands that were expected to have different responses to the various post-processing settings. It was found that the definition of the datum system had the largest impact on the uncertainty with a standard deviation of a few microns. The surface determination and data fitting had smaller contributions with sub-micron repeatability....

  3. Application of probability distributions for quantifying uncertainty in radionuclide source terms for Seabrook risk assessment

    International Nuclear Information System (INIS)

    Walker, D.H.; Savin, N.L.

    1985-01-01

    The calculational models developed for the Reactor Safety Study (RSS) have traditionally been used to generate 'point estimate values' for radionuclide release to the environment for nuclear power plant risk assessments. The point estimate values so calculated are acknowledged by most knowledgeable individuals to be conservatively high. Further, recent evaluations of the overall uncertainties in the various components that make up risk estimates for nuclear electric generating stations show that one of the large uncertainties is associated with the magnitude of the radionuclide release to the environment. In the approach developed for the RSS, values for fission product release from the fuel are derived from data obtained from small experiments. A reappraisal of the RSS release fractions was published in 1981 in NUREG-0772. Estimates of fractional releases from fuel are similar to those of the RSS. In the RSS approach, depletion during transport from the core (where the fission products are released) to the containment is assumed to be zero for calculation purposes. In the containment, the CORRAL code is applied to calculate radioactivity depletion by containment processes and to calculate the quantity and timing of release to the environment

  4. Quantifying uncertainties in the estimation of safety parameters by using bootstrapped artificial neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Secchi, Piercesare [MOX, Department of Mathematics, Polytechnic of Milan (Italy); Zio, Enrico [Department of Energy, Polytechnic of Milan, Via Ponzio 34/3, 20133 Milano (Italy)], E-mail: enrico.zio@polimi.it; Di Maio, Francesco [Department of Energy, Polytechnic of Milan, Via Ponzio 34/3, 20133 Milano (Italy)

    2008-12-15

    For licensing purposes, safety cases of Nuclear Power Plants (NPPs) must be presented at the Regulatory Authority with the necessary confidence on the models used to describe the plant safety behavior. In principle, this requires the repetition of a large number of model runs to account for the uncertainties inherent in the model description of the true plant behavior. The present paper propounds the use of bootstrapped Artificial Neural Networks (ANNs) for performing the numerous model output calculations needed for estimating safety margins with appropriate confidence intervals. Account is given both to the uncertainties inherent in the plant model and to those introduced by the ANN regression models used for performing the repeated safety parameter evaluations. The proposed framework of analysis is first illustrated with reference to a simple analytical model and then to the estimation of the safety margin on the maximum fuel cladding temperature reached during a complete group distribution header blockage scenario in a RBMK-1500 nuclear reactor. The results are compared with those obtained by a traditional parametric approach.
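
    The bootstrap idea can be sketched with a small neural-network regressor acting as a fast surrogate of a plant code: resample the training set, fit one network per resample, and read a confidence interval for the safety parameter off the ensemble spread. The data and network settings below are synthetic, and scikit-learn's MLPRegressor merely stands in for the ANNs of the paper.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(11)

      # Synthetic training set: plant-code runs mapping 2 uncertain inputs
      # (e.g. power peaking factor, blockage fraction) to peak cladding temperature (K).
      X = rng.uniform([1.0, 0.5], [1.6, 1.0], size=(200, 2))
      T = 800 + 350 * (X[:, 0] - 1.0) + 420 * (X[:, 1] - 0.5) ** 2 + rng.normal(0, 10, 200)

      def fit_one(seed):
          idx = rng.integers(0, len(X), len(X))          # bootstrap resample
          net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=seed)
          return net.fit(X[idx], T[idx])

      ensemble = [fit_one(s) for s in range(30)]

      x_new = np.array([[1.45, 0.85]])                   # hypothetical scenario of interest
      preds = np.array([net.predict(x_new)[0] for net in ensemble])
      print(f"PCT estimate: {preds.mean():.0f} K, 95% bootstrap interval: "
            f"[{np.percentile(preds, 2.5):.0f}, {np.percentile(preds, 97.5):.0f}] K")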

  5. Uncertainty of soil erosion modelling using open source high resolution and aggregated DEMs

    Directory of Open Access Journals (Sweden)

    Arun Mondal

    2017-05-01

    Full Text Available A Digital Elevation Model (DEM) is one of the important parameters for soil erosion assessment. Notable uncertainties are observed in this study while using three high resolution open source DEMs. The Revised Universal Soil Loss Equation (RUSLE) model has been applied to analyse the uncertainty of soil erosion assessment using open source DEMs (SRTM, ASTER and CARTOSAT) at their original and increased grid spacing (pixel size). The study area is a part of the Narmada river basin in Madhya Pradesh state, located in the central part of India, and covers 20,558 km2. The actual resolution of the DEMs is 30 m and the increased grid spacings are taken as 90, 150, 210, 270 and 330 m for this study. The vertical accuracy of the DEMs has been assessed using actual heights of sample points taken from a planimetric-survey-based map (toposheet). Elevations of the DEMs are converted to the same vertical datum, from WGS 84 to MSL (Mean Sea Level), before the accuracy assessment and modelling. Results indicate that the accuracy of the SRTM DEM, with RMSEs of 13.31, 14.51, and 18.19 m at 30, 150 and 330 m resolution respectively, is better than that of the ASTER and CARTOSAT DEMs. When the grid spacing of the DEMs increases, the accuracy of the elevation and of the calculated soil erosion decreases. This study presents the potential uncertainty introduced by open source high resolution DEMs into the accuracy of soil erosion assessment models. The research provides an analysis of errors in selecting DEMs using the original and increased grid spacing for soil erosion modelling.

  6. The effect of uncertainty and aggregate investments on crude oil price dynamics

    International Nuclear Information System (INIS)

    Tvedt, Jostein

    2002-01-01

    This paper is a study of the dynamics of the oil industry in which we derive a mean reverting process for the crude oil price. Oil is supplied by a market leader, OPEC, and by an aggregate that represents non-OPEC producers. The non-OPEC producers take the oil price as given. The cost of non-OPEC producers depends on past investments. Shifts in these investments are influenced by the costs of structural change in the construction industry. A drop in the oil price to below a given level triggers lower investments, but if the oil price reverts back to a high level investments may not immediately expand. In an uncertain oil demand environment, the cost of structural change creates a value of waiting to invest. This investment behaviour influences the oil price process.
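
    A mean-reverting price process of the general kind derived in the paper can be sketched as an Ornstein-Uhlenbeck process on the log price; the reversion level, speed and volatility below are arbitrary illustrative values, not parameters from the paper.

      import numpy as np

      rng = np.random.default_rng(2)

      # Mean-reverting log price: dX = kappa * (mu - X) dt + sigma dW (parameters hypothetical).
      kappa, mu, sigma = 0.8, np.log(25.0), 0.30     # per year; long-run level ~25 USD/bbl
      dt, years, n_paths = 1 / 252, 5, 2000
      n_steps = int(years / dt)

      X = np.full(n_paths, np.log(18.0))             # start below the long-run level
      for _ in range(n_steps):
          X += kappa * (mu - X) * dt + sigma * np.sqrt(dt) * rng.normal(size=n_paths)

      prices = np.exp(X)
      print(f"price after {years} years: median {np.median(prices):.1f} USD/bbl, "
            f"5th-95th pct [{np.percentile(prices, 5):.1f}, {np.percentile(prices, 95):.1f}]")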

  7. The sensitivity analysis as a method of quantifying the degree of uncertainty

    Directory of Open Access Journals (Sweden)

    Manole Tatiana

    2013-01-01

    Full Text Available In this article the author discusses the uncertainty inherent in any proposed investment or government policy. Given this uncertainty, it is necessary to analyse the projects proposed for implementation and, from the multiple options, to choose the project that is most advantageous. This is a general principle. Financial science provides researchers with a set of tools with which the best project can be identified. The author examines three projects that have the same features, applying to them various methods of financial analysis, such as net present value (NPV), the discount rate (SAR), recovery time (TR), additional income (VS) and return on investment (RR). All these tools of financial analysis belong to cost-benefit analysis (CBA) and aim to make efficient use of the public money invested in order to achieve successful performance.

  8. Applying an animal model to quantify the uncertainties of an image-based 4D-CT algorithm

    International Nuclear Information System (INIS)

    Pierce, Greg; Battista, Jerry; Wang, Kevin; Lee, Ting-Yim

    2012-01-01

    The purpose of this paper is to use an animal model to quantify the spatial displacement uncertainties and test the fundamental assumptions of an image-based 4D-CT algorithm in vivo. Six female Landrace cross pigs were ventilated and imaged using a 64-slice CT scanner (GE Healthcare) operating in axial cine mode. The breathing amplitude pattern of the pigs was varied by periodically crimping the ventilator gas return tube during the image acquisition. The image data were used to determine the displacement uncertainties that result from matching CT images at the same respiratory phase using normalized cross correlation (NCC) as the matching criterion. Additionally, the ability to match the respiratory phase of a 4.0 cm subvolume of the thorax to a reference subvolume using only a single overlapping 2D slice from the two subvolumes was tested by varying the location of the overlapping matching image within the subvolume and examining the effect this had on the displacement relative to the reference volume. The displacement uncertainty resulting from matching two respiratory images using NCC ranged from 0.54 ± 0.10 mm per match to 0.32 ± 0.16 mm per match in the lung of the animal. The uncertainty was found to propagate in quadrature, increasing with the number of NCC matches performed. In comparison, the minimum displacement achievable if two respiratory images were matched perfectly in phase ranged from 0.77 ± 0.06 to 0.93 ± 0.06 mm in the lung. The assumption that subvolumes from separate cine scans could be matched by matching a single overlapping 2D image between the two subvolumes was validated. An in vivo animal model was developed to test an image-based 4D-CT algorithm. The uncertainties associated with using NCC to match the respiratory phase of two images were quantified, and the assumption that a 4.0 cm 3D subvolume can be matched in respiratory phase by matching a single 2D image from the 3D subvolume was validated. The work in this paper shows the image-based 4D
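
    The normalized cross correlation criterion used for phase matching can be sketched as follows: given a reference slice and candidate slices from another acquisition, pick the candidate with the highest NCC. The images below are synthetic arrays standing in for CT slices.

      import numpy as np

      def ncc(a, b):
          """Normalized cross correlation between two equally sized images."""
          a = (a - a.mean()) / a.std()
          b = (b - b.mean()) / b.std()
          return float(np.mean(a * b))

      rng = np.random.default_rng(4)

      # Synthetic 'reference' slice and candidate slices at different respiratory phases:
      # a bright disc whose radius varies with phase, plus noise.
      def slice_at_phase(radius):
          yy, xx = np.mgrid[0:128, 0:128]
          img = ((xx - 64) ** 2 + (yy - 64) ** 2 < radius ** 2).astype(float)
          return img + rng.normal(0, 0.2, img.shape)

      reference = slice_at_phase(30.0)
      candidates = {phase: slice_at_phase(r) for phase, r in
                    zip(range(8), [22, 25, 28, 30, 32, 29, 26, 23])}

      scores = {phase: ncc(reference, img) for phase, img in candidates.items()}
      best = max(scores, key=scores.get)
      print(f"best-matching phase: {best} (NCC = {scores[best]:.3f})")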

  9. Using MERRA Gridded Innovation for Quantifying Uncertainties in Analysis Fields and Diagnosing Observing System Inhomogeneities

    Science.gov (United States)

    da Silva, A.; Redder, C. R.

    2010-12-01

    MERRA is a NASA reanalysis for the satellite era using a major new version of the Goddard Earth Observing System Data Assimilation System Version 5 (GEOS-5). The Project focuses on historical analyses of the hydrological cycle on a broad range of weather and climate time scales and places the NASA EOS suite of observations in a climate context. The characterization of uncertainty in reanalysis fields is a feature commonly requested by users of such data. While intercomparison with reference data sets is common practice for ascertaining the realism of the datasets, such studies are typically restricted to long term climatological statistics and seldom provide state-dependent measures of the uncertainties involved. In principle, variational data assimilation algorithms have the ability to produce error estimates for the analysis variables (typically surface pressure, winds, temperature, moisture and ozone) consistent with the assumed background and observation error statistics. However, these "perceived error estimates" are expensive to obtain and are limited by the somewhat simplistic errors assumed in the algorithm. The observation-minus-forecast residuals (innovations), a by-product of any assimilation system, constitute a powerful tool for estimating the systematic and random errors in the analysis fields. Unfortunately, such data are usually not readily available with reanalysis products, often requiring the tedious decoding of large datasets and not-so-user-friendly file formats. With MERRA we have introduced a gridded version of the observations/innovations used in the assimilation process, using the same grid and data formats as the regular datasets. Such a dataset empowers the user to conveniently perform observing-system-related analyses and error estimates. The scope of this dataset will be briefly described. We will present a systematic analysis of MERRA innovation time series for the conventional observing system, including maximum

  10. Quantifying Uncertainty in Estimation of Potential Recharge in Tropical and Temperate Catchments using a Crop Model and Microwave Remote Sensing

    Science.gov (United States)

    Krishnan Kutty, S.; Sekhar, M.; Ruiz, L.; Tomer, S. K.; Bandyopadhyay, S.; Buis, S.; Guerif, M.; Gascuel-odoux, C.

    2012-12-01

    Groundwater recharge in a semi-arid region is generally low, but can exhibit high spatial variability depending on the soil type and plant cover. The potential recharge (the drainage flux just beneath the root zone) is found to be sensitive to water holding capacity and rooting depth (Rushton, 2003). Simple water balance approaches for recharge estimation often fail to consider the effect of plant cover, growth phases and rooting depth. Hence a crop model based approach might be better suited to assess the sensitivity of recharge for various crop-soil combinations in agricultural catchments. Martinez et al. (2009), using a root zone modelling approach to estimate groundwater recharge, stressed that future studies should focus on quantifying the uncertainty in recharge estimates due to uncertainty in soil water parameters such as soil layering, field capacity, rooting depth, etc. Uncertainty in the parameters may also arise from the uncertainties in the variables retrieved from satellite (surface soil moisture and leaf area index). Hence a good estimate of the parameters as well as their uncertainty is essential for a reliable estimate of the potential recharge. In this study we focus on assessing the sensitivity of crop and soil types on the potential recharge by using the generic crop model STICS. The effect of uncertainty in the soil parameters on the estimates of recharge and its uncertainty is investigated. The multi-layer soil water parameters and their uncertainty are estimated by inversion of the STICS model using the GLUE approach. Surface soil moisture and LAI, either retrieved from microwave remote sensing data or measured in field plots (Sreelash et al., 2012), were found to provide good estimates of the soil water properties, and therefore both these data sets were used in this study to estimate the parameters and the potential recharge for a combination of soil-crop systems. These investigations were made in two field experimental catchments. The first one is in the tropical semi
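
    The GLUE inversion mentioned in this record can be illustrated generically: sample candidate soil-water parameters from broad priors, score each set against an observed surface soil moisture with an informal likelihood, retain the behavioral sets, and propagate them to a drainage (potential recharge) estimate. The bucket-type drainage model, the moisture-to-parameter link, and the thresholds below are toy assumptions for the sketch and are unrelated to STICS.

```python
import numpy as np

rng = np.random.default_rng(42)

def toy_drainage(field_capacity, rooting_depth, rain=700.0, et=500.0):
    """Illustrative bucket model: drainage = rain - ET - a storage term (mm/yr)."""
    storage = field_capacity * rooting_depth  # mm of plant-available water
    return max(rain - et - 0.1 * storage, 0.0)

def toy_soil_moisture(field_capacity):
    """Illustrative link between field capacity and mean surface moisture."""
    return 0.5 * field_capacity

observed_moisture = 0.15          # e.g. retrieved from microwave remote sensing
n_samples, threshold = 5000, 0.8  # informal likelihood cut-off (assumption)

# 1. Sample candidate parameter sets from broad priors.
fc = rng.uniform(0.10, 0.40, n_samples)      # field capacity (m3/m3)
rd = rng.uniform(300.0, 1500.0, n_samples)   # rooting depth (mm)

# 2. Informal likelihood: Gaussian-shaped score on the moisture misfit.
score = np.exp(-0.5 * ((toy_soil_moisture(fc) - observed_moisture) / 0.02) ** 2)

# 3. Keep "behavioral" sets and propagate them to potential recharge.
behavioral = score > threshold * score.max()
recharge = np.array([toy_drainage(f, r) for f, r in zip(fc[behavioral], rd[behavioral])])
print(f"{behavioral.sum()} behavioral sets, recharge {recharge.mean():.0f} "
      f"+/- {recharge.std():.0f} mm/yr")
```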

  11. A hierarchical bayesian model to quantify uncertainty of stream water temperature forecasts.

    Directory of Open Access Journals (Sweden)

    Guillaume Bal

    Full Text Available Providing generic and cost-effective modelling approaches to reconstruct and forecast freshwater temperature using predictors such as air temperature and water discharge is a prerequisite to understanding the ecological processes underlying the impact of water temperature and of global warming on continental aquatic ecosystems. Using air temperature as a simple linear predictor of water temperature can lead to significant bias in forecasts, as it does not disentangle seasonality and long-term trends in the signal. Here, we develop an alternative approach based on hierarchical Bayesian statistical time series modelling of water temperature, air temperature and water discharge using seasonal sinusoidal periodic signals and time-varying means and amplitudes. Fitting and forecasting performances of this approach are compared with those of simple linear regression between water and air temperatures using (i) a simulated example and (ii) an application to three French coastal streams with contrasting bio-geographical conditions and sizes. The time series modelling approach provides a better fit to the data and, contrary to the linear regression, does not exhibit forecasting bias in long-term trends. This new model also allows for more accurate forecasts of water temperature than linear regression, together with a fair assessment of the uncertainty around the forecasts. Warming of water temperature forecast by our hierarchical Bayesian model was slower and more uncertain than that expected with the classical regression approach. These new forecasts are in a form that is readily usable in further ecological analyses and will allow weighting of outcomes from different scenarios to manage climate change impacts on freshwater wildlife.
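
    The core building block of the time-series model described here is a seasonal sinusoid superimposed on a slowly varying mean. A minimal, non-hierarchical and non-Bayesian sketch of that component, fitted by ordinary least squares to synthetic daily water temperatures, is shown below; all values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(0, 10, 1 / 365)                       # time in years (daily steps)
water_temp = (10.0 + 0.05 * t                       # slow warming trend
              + 5.0 * np.sin(2 * np.pi * t - 0.6)   # seasonal cycle
              + rng.normal(scale=0.8, size=t.size)) # noise

# Design matrix: mean, linear trend, and one annual harmonic.
X = np.column_stack([np.ones_like(t), t,
                     np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
coef, *_ = np.linalg.lstsq(X, water_temp, rcond=None)
mean0, trend, s, c = coef
amplitude = np.hypot(s, c)
print(f"trend = {trend:.3f} degC/yr, seasonal amplitude = {amplitude:.2f} degC")
```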

  12. CLIMB - Climate induced changes on the hydrology of mediterranean basins - Reducing uncertainties and quantifying risk

    Science.gov (United States)

    Ludwig, Ralf

    2010-05-01

    According to future climate projections, Mediterranean countries are at high risk of an even more pronounced susceptibility to changes in the hydrological budget and extremes. These changes are expected to have severe direct impacts on the management of water resources. Threats include severe droughts and extreme flooding, salinization of coastal aquifers, degradation of fertile soils and desertification due to poor and unsustainable water management practices. It can be foreseen that, unless appropriate adaptation measures are undertaken, the changes in the hydrologic cycle will give rise to an increasing potential for tension and conflict among the political and economic actors in this vulnerable region. The project initiative CLIMB, funded under the EC's 7th Framework Programme (FP7-ENV-2009-1), started in January 2010. Over its 4-year design, it will analyze ongoing and future climate-induced changes in hydrological budgets and extremes across the Mediterranean and neighboring regions. This is undertaken in study sites located in Sardinia, Northern Italy, Southern France, Tunisia, Egypt and the Palestinian-administered area of Gaza. The work plan is targeted to selected river or aquifer catchments, where the consortium will employ a combination of novel field monitoring and remote sensing concepts, data assimilation, integrated hydrologic (and biophysical) modeling and socioeconomic factor analyses to reduce existing uncertainties in climate change impact analysis. Advanced climate scenario analysis will be employed and available ensembles of regional climate model simulations will be downscaled. This process will provide the drivers for an ensemble of hydro(-geo)logical models with different degrees of complexity in terms of process description and level of integration. The results of hydrological modeling and socio-economic factor analysis will enable the development of a GIS-based Vulnerability and Risk Assessment Tool. This tool will serve as a platform

  13. An uncertainty-based framework to quantifying climate change impacts on coastal flood vulnerability: case study of New York City.

    Science.gov (United States)

    Zahmatkesh, Zahra; Karamouz, Mohammad

    2017-10-17

    The continued development efforts around the world, growing population, and the increased probability of occurrence of extreme hydrologic events have adversely affected natural and built environments. Flood damages and loss of lives from devastating storms, such as Irene and Sandy on the East Coast of the USA, are examples of the vulnerability to flooding that even developed countries have to face. The odds of coastal flooding disasters have increased due to accelerated sea level rise, climate change impacts, and communities' interest in living near the coastlines. Climate change, for instance, is becoming a major threat to sustainable development because of its adverse impacts on the hydrologic cycle. Effective management strategies are thus required for flood vulnerability reduction and disaster preparedness. This paper is an extension of the flood resilience studies in the New York City coastal watershed. Here, a framework is proposed to quantify coastal flood vulnerability while accounting for climate change impacts. To do so, a multi-criteria decision making (MCDM) approach that combines watershed characteristics (factors) and their weights is proposed to quantify flood vulnerability. Among the watershed characteristics, potential variation in the hydrologic factors under climate change impacts is modeled utilizing the general circulation models' (GCMs) outputs. The considered factors include rainfall, extreme water level, and sea level rise, which exacerbate flood vulnerability through increasing exposure and susceptibility to flooding. Uncertainty in the weights as well as in the values of factors is incorporated in the analysis using the Monte Carlo (MC) sampling method by selecting the best-fitted distributions to the parameters with random nature. A number of low impact development (LID) measures are then proposed to improve watershed adaptive capacity to deal with coastal flooding. Potential range of current and future vulnerability to flooding is

  14. Quantifying geomorphic change and characterizing uncertainty in repeat aerial lidar over an enormous area: Blue Earth County, MN

    Science.gov (United States)

    Schaffrath, K. R.; Belmont, P.; Wheaton, J. M.

    2013-12-01

    High-resolution topography data (lidar) are being collected over increasingly larger geographic areas. These data contain an immense amount of information regarding the topography of bare-earth and vegetated surfaces. Repeat lidar data (collected at multiple times for the same location) enables extraction of an unprecedented level of detailed information about landscape form and function and provides an opportunity to quantify volumetric change and identify hot spots of erosion and deposition. However, significant technological and scientific challenges remain in the analysis of repeat lidar data over enormous areas (>1000 square kilometers), not the least of which involves robust quantification of uncertainty. Excessive sedimentation has been documented in the Minnesota River and many reaches of the mainstem and tributaries are listed as impaired for turbidity and eutrophication under the Clean Water Act of 1972. The Blue Earth River and its tributaries (Greater Blue Earth basin) have been identified as one of the main sources of sediment to the Minnesota River. Much of the Greater Blue Earth basin is located in Blue Earth County (1,982 square kilometers) where airborne lidar data were collected in 2005 and 2012, with average bare-earth point densities of 1 point per square meter and closer to 2 points per square meter, respectively. One of the largest floods on record (100-year recurrence interval) occurred in September 2010. A sediment budget for the Greater Blue Earth basin is being developed to inform strategies to reduce current sediment loads and better predict how the basin may respond to changing climate and management practices. Here we evaluate the geomorphic changes that occurred between 2005 and 2012 to identify hotspots of erosion and deposition, and to quantify some of the terms in the sediment budget. To make meaningful interpretations of the differences between the 2005 and 2012 lidar digital elevation models (DEMs), total uncertainty must be

  15. Quantifying uncertainty in soot volume fraction estimates using Bayesian inference of auto-correlated laser-induced incandescence measurements

    Science.gov (United States)

    Hadwin, Paul J.; Sipkens, T. A.; Thomson, K. A.; Liu, F.; Daun, K. J.

    2016-01-01

    Auto-correlated laser-induced incandescence (AC-LII) infers the soot volume fraction (SVF) of soot particles by comparing the spectral incandescence from laser-energized particles to the pyrometrically inferred peak soot temperature. This calculation requires detailed knowledge of model parameters such as the absorption function of soot, which may vary with combustion chemistry, soot age, and the internal structure of the soot. This work presents a Bayesian methodology to quantify such uncertainties. This technique treats the additional "nuisance" model parameters, including the soot absorption function, as stochastic variables and incorporates the current state of knowledge of these parameters into the inference process through maximum entropy priors. While standard AC-LII analysis provides a point estimate of the SVF, Bayesian techniques infer the posterior probability density, which will allow scientists and engineers to better assess the reliability of AC-LII inferred SVFs in the context of environmental regulations and competing diagnostics.

  16. Quantifying the uncertainty of regional and national estimates of soil carbon stocks

    Science.gov (United States)

    Papritz, Andreas

    2013-04-01

    At regional and national scales, carbon (C) stocks are frequently estimated by means of regression models. Such statistical models link measurements of carbon stocks, recorded for a set of soil profiles or soil cores, to covariates that characterize soil formation conditions and land management. A prerequisite is that these covariates are available for any location within a region of interest G because they are used along with the fitted regression coefficients to predict the carbon stocks at the nodes of a fine-meshed grid that is laid over G. The mean C stock in G is then estimated by the arithmetic mean of the stock predictions for the grid nodes. Apart from the mean stock, the precision of the estimate is often also of interest, for example to judge whether the mean C stock has changed significantly between two inventories. The standard error of the estimated mean stock in G can be computed from the regression results as well. Two issues are thereby important: (i) How large is the area of G relative to the support of the measurements? (ii) Are the residuals of the regression model spatially auto-correlated or is the assumption of statistical independence tenable? Both issues are correctly handled if one adopts a geostatistical block kriging approach for estimating the mean C stock within a region and its standard error. In the presentation I shall summarize the main ideas of external drift block kriging. To compute the standard error of the mean stock, one has in principle to sum the elements of a potentially very large covariance matrix of point prediction errors, but I shall show that the required term can be approximated very well by Monte Carlo techniques. I shall further illustrate with a few examples how the standard error of the mean stock estimate changes with the size of G and with the strength of the auto-correlation of the regression residuals. As an application, a robust variant of block kriging is used to quantify the mean carbon stock stored in the
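
    The Monte Carlo approximation referred to above amounts to estimating the average of a very large covariance matrix of point prediction errors by sampling random pairs of grid nodes and averaging a covariance function of their separation. The exponential covariance model and the grid below are assumptions for illustration, not the presenter's actual model.

```python
import numpy as np

rng = np.random.default_rng(7)

def exp_cov(h, sill=1.0, range_par=5000.0):
    """Assumed exponential covariance of point prediction errors (illustrative)."""
    return sill * np.exp(-h / range_par)

# Fine grid of prediction nodes covering region G (here a 50 km x 50 km square).
x, y = np.meshgrid(np.arange(0, 50_000, 500), np.arange(0, 50_000, 500))
nodes = np.column_stack([x.ravel(), y.ravel()])     # 10,000 nodes -> 10^8 pairs

# Monte Carlo: average the covariance over random pairs instead of all pairs.
n_pairs = 200_000
i = rng.integers(0, len(nodes), n_pairs)
j = rng.integers(0, len(nodes), n_pairs)
h = np.linalg.norm(nodes[i] - nodes[j], axis=1)
mean_cov = exp_cov(h).mean()

standard_error_of_mean_stock = np.sqrt(mean_cov)
print(f"approx. standard error of the regional mean: {standard_error_of_mean_stock:.3f}")
```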

  17. Space-based retrieval of NO2 over biomass burning regions: quantifying and reducing uncertainties

    Science.gov (United States)

    Bousserez, N.

    2014-10-01

    The accuracy of space-based nitrogen dioxide (NO2) retrievals from solar backscatter radiances critically depends on a priori knowledge of the vertical profiles of NO2 and aerosol optical properties. This information is used to calculate an air mass factor (AMF), which accounts for atmospheric scattering and is used to convert the measured line-of-sight "slant" columns into vertical columns. In this study we investigate the impact of biomass burning emissions on the AMF in order to quantify NO2 retrieval errors in the Ozone Monitoring Instrument (OMI) products over these sources. Sensitivity analyses are conducted using the Linearized Discrete Ordinate Radiative Transfer (LIDORT) model. The NO2 and aerosol profiles are obtained from a 3-D chemistry-transport model (GEOS-Chem), which uses the Fire Locating and Monitoring of Burning Emissions (FLAMBE) daily biomass burning emission inventory. Aircraft in situ data collected during two field campaigns, the Arctic Research of the Composition of the Troposphere from Aircraft and Satellites (ARCTAS) and the Dust and Biomass-burning Experiment (DABEX), are used to evaluate the modeled aerosol optical properties and NO2 profiles over Canadian boreal fires and West African savanna fires, respectively. Over both domains, the effect of biomass burning emissions on the AMF through the modified NO2 shape factor can be as high as -60%. A sensitivity analysis also revealed that the effect of aerosol and shape factor perturbations on the AMF is very sensitive to surface reflectance and clouds. As an illustration, the aerosol correction can range from -20 to +100% for different surface reflectances, while the shape factor correction varies from -70 to -20%. Although previous studies have shown that in clear-sky conditions the effect of aerosols on the AMF was in part implicitly accounted for by the modified cloud parameters, here it is suggested that when clouds are present above a surface layer of scattering aerosols, an explicit

  18. An inexact fuzzy two-stage stochastic model for quantifying the efficiency of nonpoint source effluent trading under uncertainty

    International Nuclear Information System (INIS)

    Luo, B.; Maqsood, I.; Huang, G.H.; Yin, Y.Y.; Han, D.J.

    2005-01-01

    Reduction of nonpoint source (NPS) pollution from agricultural lands is a major concern in most countries. One method to reduce NPS pollution is through land retirement programs. This method, however, may result in enormous economic costs, especially when large areas of cropland need to be retired. To reduce the cost, effluent trading can be coupled with land retirement programs. However, the trading efforts can also become inefficient due to various uncertainties existing in stochastic, interval, and fuzzy formats in agricultural systems. Thus, it is desirable to develop improved methods to effectively quantify the efficiency of potential trading efforts by considering those uncertainties. In this respect, this paper presents an inexact fuzzy two-stage stochastic programming model to tackle such problems. The proposed model can facilitate decision-making to implement trading efforts for agricultural NPS pollution reduction through land retirement programs. The applicability of the model is demonstrated through a hypothetical effluent trading program within a subcatchment of the Lake Tai Basin in China. The study results indicate that the efficiency of the trading program is significantly influenced by precipitation amount, agricultural activities, and the level of discharge limits of pollutants. The results also show that the trading program will be more effective in low-precipitation years and with stricter discharge limits.

  19. Uncertainties

    Indian Academy of Sciences (India)

    To reflect this uncertainty in the climate scenarios, the use of AOGCMs that explicitly simulate the carbon cycle and the chemistry of all the substances is needed. The Hadley Centre has developed a version of the climate model that allows the effect of climate change on the carbon cycle and its feedback into climate, to be ...

  20. Uncertainty

    International Nuclear Information System (INIS)

    Silva, T.A. da

    1988-01-01

    The comparison between the uncertainty methods recommended by the International Atomic Energy Agency (IAEA) and by the International Committee for Weights and Measures (CIPM) is shown for the calibration of clinical dosimeters in the Secondary Standard Dosimetry Laboratory (SSDL). (C.G.C.) [pt

  1. Quantifying reactor safety margins: Application of code scaling, applicability, and uncertainty evaluation methodology to a large-break, loss-of-coolant accident

    International Nuclear Information System (INIS)

    Boyack, B.; Duffey, R.; Wilson, G.; Griffith, P.; Lellouche, G.; Levy, S.; Rohatgi, U.; Wulff, W.; Zuber, N.

    1989-12-01

    The US Nuclear Regulatory Commission (NRC) has issued a revised rule for loss-of-coolant accident/emergency core cooling system (ECCS) analysis of light water reactors to allow the use of best-estimate computer codes in safety analysis as an option. A key feature of this option requires the licensee to quantify the uncertainty of the calculations and include that uncertainty when comparing the calculated results with acceptance limits provided in 10 CFR Part 50. To support the revised ECCS rule and illustrate its application, the NRC and its contractors and consultants have developed and demonstrated an uncertainty evaluation methodology called code scaling, applicability, and uncertainty (CSAU). The CSAU methodology and an example application described in this report demonstrate that uncertainties in complex phenomena can be quantified. The methodology is structured, traceable, and practical, as is needed in the regulatory arena. The methodology is systematic and comprehensive as it addresses and integrates the scenario, experiments, code, and plant to resolve questions concerned with: (a) code capability to scale up processes from test facility to full-scale nuclear power plants; (b) code applicability to safety studies of a postulated accident scenario in a specified nuclear power plant; and (c) quantifying uncertainties of calculated results. 127 refs., 55 figs., 40 tabs

  2. Aggregating and Communicating Uncertainty.

    Science.gov (United States)

    1980-04-01

    accomplished within a given time. We may, Hamlet-like, continue to debate about the pros and cons, the ifs and the buts, but if we fail to make a ... Washington, D.C., September 1977. Slovic, Paul; "From Shakespeare to Simon: Speculations - And Some Evidence - About Man's Ability to Process

  3. Managing Groundwater Recharge and Pumping for Late Summer Streamflow Increases: Quantifying Uncertainty Using Null Space Monte Carlo

    Science.gov (United States)

    Tolley, D. G., III; Foglia, L.; Harter, T.

    2017-12-01

    Late summer and early fall streamflow decreases caused by climate change and agricultural pumping contribute to increased water temperatures and result in large disconnected sections during dry years in many semi-arid regions with Mediterranean climate. This negatively impacts aquatic habitat of fish species such as coho and fall-run Chinook salmon. In collaboration with local stakeholders, the Scott Valley Integrated Hydrologic Model (SVIHMv3) was developed to assess future water management scenarios with the goal of improving aquatic species habitat while maintaining agricultural production in the valley. The Null Space Monte Carlo (NSMC) method available in PEST was used to quantify the range of predicted streamflow changes for three conjunctive use scenarios: 1) managed aquifer recharge (MAR), 2) in lieu recharge (ILR, substituting surface-water irrigation for irrigation with groundwater while flows are available), and 3) MAR + ILR. Random parameter sets were generated using the calibrated covariance matrix of the model, which were then recalibrated if the sum of squared residuals was greater than 10% of the original sum of squared weighted residuals. These calibration-constrained stochastic parameter sets were then used to obtain a distribution of streamflow changes resulting from implementing the conjunctive use scenarios. Preliminary results show that while the range of streamflow increases using managed aquifer recharge is much narrower (i.e., greater degree of certainty) than in lieu recharge, there are potentially much greater benefits to streamflow by implementing in lieu recharge (although also greater costs). Combining the two scenarios provides the greatest benefit for increasing late summer and early fall streamflow, as most of the MAR streamflow increases are during the spring and early summer which ILR is able to take advantage of. Incorporation of uncertainty into model predictions is critical for establishing and maintaining stakeholder trust
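
    A stripped-down illustration of calibration-constrained parameter sampling in the spirit of NSMC (without PEST's null-space projection or recalibration step): draw candidate parameter sets from a multivariate normal built on the calibrated covariance matrix, keep only those whose objective function stays within 10% of the calibrated value, and use the retained sets to bound a prediction. The two-parameter linear model and all numbers are assumptions of the sketch.

```python
import numpy as np

rng = np.random.default_rng(3)

def objective(params, obs):
    """Sum of squared residuals of a toy 'model' y = p0 + p1 * x."""
    x = np.arange(obs.size)
    return float(np.sum((obs - (params[0] + params[1] * x)) ** 2))

# Synthetic observations and a 'calibrated' parameter set with its covariance.
obs = 2.0 + 0.5 * np.arange(20) + rng.normal(scale=0.3, size=20)
p_cal = np.array([2.0, 0.5])
cov_cal = np.array([[0.02, -0.001], [-0.001, 0.0005]])
phi_cal = objective(p_cal, obs)

# Draw candidate sets and keep those within 10% of the calibrated objective.
candidates = rng.multivariate_normal(p_cal, cov_cal, size=5000)
kept = [p for p in candidates if objective(p, obs) <= 1.10 * phi_cal]

# Use the retained sets to bound a prediction (here the model value at x = 30).
predictions = np.array([p[0] + p[1] * 30 for p in kept])
print(f"{len(kept)} retained sets; prediction range "
      f"{predictions.min():.2f} to {predictions.max():.2f}")
```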

  4. Methods for Quantifying the Uncertainties of LSIT Test Parameters, Test Results, and Full-Scale Mixing Performance Using Models Developed from Scaled Test Data

    International Nuclear Information System (INIS)

    Piepel, Gregory F.; Cooley, Scott K.; Kuhn, William L.; Rector, David R.; Heredia-Langner, Alejandro

    2015-01-01

    This report discusses the statistical methods for quantifying uncertainties in 1) test responses and other parameters in the Large Scale Integrated Testing (LSIT), and 2) estimates of coefficients and predictions of mixing performance from models that relate test responses to test parameters. Testing at a larger scale has been committed to by Bechtel National, Inc. and the U.S. Department of Energy (DOE) to "address uncertainties and increase confidence in the projected, full-scale mixing performance and operations" in the Waste Treatment and Immobilization Plant (WTP).

  5. Methods for Quantifying the Uncertainties of LSIT Test Parameters, Test Results, and Full-Scale Mixing Performance Using Models Developed from Scaled Test Data

    Energy Technology Data Exchange (ETDEWEB)

    Piepel, Gregory F. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Cooley, Scott K. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Kuhn, William L. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Rector, David R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Heredia-Langner, Alejandro [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2015-05-01

    This report discusses the statistical methods for quantifying uncertainties in 1) test responses and other parameters in the Large Scale Integrated Testing (LSIT), and 2) estimates of coefficients and predictions of mixing performance from models that relate test responses to test parameters. Testing at a larger scale has been committed to by Bechtel National, Inc. and the U.S. Department of Energy (DOE) to “address uncertainties and increase confidence in the projected, full-scale mixing performance and operations” in the Waste Treatment and Immobilization Plant (WTP).

  6. Aggregation in environmental systems - Part 1: Seasonal tracer cycles quantify young water fractions, but not mean transit times, in spatially heterogeneous catchments

    Science.gov (United States)

    Kirchner, J. W.

    2016-01-01

    Environmental heterogeneity is ubiquitous, but environmental systems are often analyzed as if they were homogeneous instead, resulting in aggregation errors that are rarely explored and almost never quantified. Here I use simple benchmark tests to explore this general problem in one specific context: the use of seasonal cycles in chemical or isotopic tracers (such as Cl-, δ18O, or δ2H) to estimate timescales of storage in catchments. Timescales of catchment storage are typically quantified by the mean transit time, meaning the average time that elapses between parcels of water entering as precipitation and leaving again as streamflow. Longer mean transit times imply greater damping of seasonal tracer cycles. Thus, the amplitudes of tracer cycles in precipitation and streamflow are commonly used to calculate catchment mean transit times. Here I show that these calculations will typically be wrong by several hundred percent, when applied to catchments with realistic degrees of spatial heterogeneity. This aggregation bias arises from the strong nonlinearity in the relationship between tracer cycle amplitude and mean travel time. I propose an alternative storage metric, the young water fraction in streamflow, defined as the fraction of runoff with transit times of less than roughly 0.2 years. I show that this young water fraction (not to be confused with event-based "new water" in hydrograph separations) is accurately predicted by seasonal tracer cycles within a precision of a few percent, across the entire range of mean transit times from almost zero to almost infinity. Importantly, this relationship is also virtually free from aggregation error. That is, seasonal tracer cycles also accurately predict the young water fraction in runoff from highly heterogeneous mixtures of subcatchments with strongly contrasting transit-time distributions. Thus, although tracer cycle amplitudes yield biased and unreliable estimates of catchment mean travel times in heterogeneous
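
    The young water fraction discussed here is, to a good approximation, estimated from the ratio of fitted seasonal tracer cycle amplitudes in streamflow and precipitation. A minimal sketch with synthetic, evenly weighted δ18O series follows; the sampling frequency, amplitudes, and noise levels are illustrative.

```python
import numpy as np

def seasonal_amplitude(t_years, tracer):
    """Amplitude of the annual sine/cosine fit to a tracer time series."""
    X = np.column_stack([np.ones_like(t_years),
                         np.sin(2 * np.pi * t_years), np.cos(2 * np.pi * t_years)])
    coef, *_ = np.linalg.lstsq(X, tracer, rcond=None)
    return np.hypot(coef[1], coef[2])

rng = np.random.default_rng(5)
t = np.arange(0, 6, 7 / 365)                      # ~weekly sampling over 6 years
d18o_precip = -10 + 3.0 * np.sin(2 * np.pi * t) + rng.normal(0, 0.5, t.size)
d18o_stream = -10 + 1.0 * np.sin(2 * np.pi * t - 0.4) + rng.normal(0, 0.2, t.size)

amp_p = seasonal_amplitude(t, d18o_precip)
amp_s = seasonal_amplitude(t, d18o_stream)
young_water_fraction = amp_s / amp_p              # damping of the seasonal cycle
print(f"estimated young water fraction ~ {young_water_fraction:.2f}")
```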

  7. Quantifying the uncertainty in discharge data using hydraulic knowledge and uncertain gaugings: a Bayesian method named BaRatin

    Science.gov (United States)

    Le Coz, Jérôme; Renard, Benjamin; Bonnifait, Laurent; Branger, Flora; Le Boursicaud, Raphaël; Horner, Ivan; Mansanarez, Valentin; Lang, Michel; Vigneau, Sylvain

    2015-04-01

    River discharge is a crucial variable for Hydrology: as the output variable of most hydrologic models, it is used for sensitivity analyses, model structure identification, parameter estimation, data assimilation, prediction, etc. A major difficulty stems from the fact that river discharge is not measured continuously. Instead, discharge time series used by hydrologists are usually based on simple stage-discharge relations (rating curves) calibrated using a set of direct stage-discharge measurements (gaugings). In this presentation, we describe a Bayesian approach (cf. Le Coz et al., 2014) to build such hydrometric rating curves, to estimate the associated uncertainty and to propagate this uncertainty to discharge time series. The three main steps of this approach are: (1) Hydraulic analysis: identification of the hydraulic controls that govern the stage-discharge relation, identification of the rating curve equation and specification of prior distributions for the rating curve parameters; (2) Rating curve estimation: Bayesian inference of the rating curve parameters, accounting for the individual uncertainties of available gaugings, which often differ according to the discharge measurement procedure and the flow conditions; (3) Uncertainty propagation: quantification of the uncertainty in discharge time series, accounting for both the rating curve uncertainties and the uncertainty of recorded stage values. The rating curve uncertainties combine the parametric uncertainties and the remnant uncertainties that reflect the limited accuracy of the mathematical model used to simulate the physical stage-discharge relation. In addition, we also discuss current research activities, including the treatment of non-univocal stage-discharge relationships (e.g. due to hydraulic hysteresis, vegetation growth, sudden change of the geometry of the section, etc.). An operational version of the BaRatin software and its graphical interface are made available free of charge on
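
    A heavily simplified sketch of the Bayesian rating-curve idea is given below: a single power-law control Q = a(h - b)^c, a plain Metropolis random walk, and synthetic gaugings with multiplicative errors. It illustrates the general approach of propagating parametric rating-curve uncertainty to an unobserved stage and is not the BaRatin software itself.

```python
import numpy as np

rng = np.random.default_rng(11)

# Synthetic gaugings: stage h (m) and discharge Q (m3/s) with measurement noise.
a_true, b_true, c_true = 30.0, 0.20, 1.7
h_obs = rng.uniform(0.5, 3.0, 25)
q_obs = a_true * (h_obs - b_true) ** c_true * np.exp(rng.normal(0, 0.05, 25))

def log_post(theta):
    """Log posterior: weak priors + Gaussian likelihood on log discharge."""
    a, b, c, sigma = theta
    if a <= 0 or c <= 0 or sigma <= 0 or np.any(h_obs - b <= 0):
        return -np.inf
    resid = np.log(q_obs) - np.log(a * (h_obs - b) ** c)
    loglik = -0.5 * np.sum((resid / sigma) ** 2) - q_obs.size * np.log(sigma)
    logprior = -0.5 * ((b - 0.0) / 1.0) ** 2      # vague prior on the offset b
    return loglik + logprior

# Plain Metropolis random walk (an illustration, not BaRatin's actual sampler).
theta = np.array([20.0, 0.0, 1.5, 0.1])
step = np.array([1.0, 0.05, 0.05, 0.02])
chain, lp = [], log_post(theta)
for _ in range(20000):
    prop = theta + rng.normal(0, step)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta.copy())
chain = np.array(chain[5000:])                    # discard burn-in

# Propagate parametric uncertainty to the discharge at an unobserved stage.
h_new = 3.5
q_new = chain[:, 0] * (h_new - chain[:, 1]) ** chain[:, 2]
print(np.percentile(q_new, [2.5, 50, 97.5]))
```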

  8. A Bayesian analysis of sensible heat flux estimation: Quantifying uncertainty in meteorological forcing to improve model prediction

    KAUST Repository

    Ershadi, Ali; McCabe, Matthew; Evans, Jason P.; Mariethoz, Gregoire; Kavetski, Dmitri

    2013-01-01

    The influence of uncertainty in land surface temperature, air temperature, and wind speed on the estimation of sensible heat flux is analyzed using a Bayesian inference technique applied to the Surface Energy Balance System (SEBS) model

  9. Quantifying and reducing model-form uncertainties in Reynolds-averaged Navier–Stokes simulations: A data-driven, physics-informed Bayesian approach

    International Nuclear Information System (INIS)

    Xiao, H.; Wu, J.-L.; Wang, J.-X.; Sun, R.; Roy, C.J.

    2016-01-01

    Despite their well-known limitations, Reynolds-Averaged Navier–Stokes (RANS) models are still the workhorse tools for turbulent flow simulations in today's engineering analysis, design and optimization. While the predictive capability of RANS models depends on many factors, for many practical flows the turbulence models are by far the largest source of uncertainty. As RANS models are used in the design and safety evaluation of many mission-critical systems such as airplanes and nuclear power plants, quantifying their model-form uncertainties has significant implications in enabling risk-informed decision-making. In this work we develop a data-driven, physics-informed Bayesian framework for quantifying model-form uncertainties in RANS simulations. Uncertainties are introduced directly to the Reynolds stresses and are represented with compact parameterization accounting for empirical prior knowledge and physical constraints (e.g., realizability, smoothness, and symmetry). An iterative ensemble Kalman method is used to assimilate the prior knowledge and observation data in a Bayesian framework, and to propagate them to posterior distributions of velocities and other Quantities of Interest (QoIs). We use two representative cases, the flow over periodic hills and the flow in a square duct, to evaluate the performance of the proposed framework. Both cases are challenging for standard RANS turbulence models. Simulation results suggest that, even with very sparse observations, the obtained posterior mean velocities and other QoIs have significantly better agreement with the benchmark data compared to the baseline results. At most locations the posterior distribution adequately captures the true model error within the developed model form uncertainty bounds. The framework is a major improvement over existing black-box, physics-neutral methods for model-form uncertainty quantification, where prior knowledge and details of the models are not exploited. This approach has
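
    The iterative ensemble Kalman step at the heart of this framework can be illustrated on a toy problem: a two-parameter discrepancy vector, a stand-in forward model, and a handful of noisy observations. The forward model, dimensions, and iteration count below are assumptions; the authors' actual implementation perturbs Reynolds stresses in a RANS solver.

```python
import numpy as np

rng = np.random.default_rng(9)

def forward(params):
    """Stand-in forward model mapping uncertainty parameters to observables."""
    return np.array([params[0] + 0.5 * params[1], 2.0 * params[1]])

# Prior ensemble of the (here 2-dimensional) discrepancy parameters.
n_ens = 200
prior = rng.normal(loc=[0.0, 0.0], scale=[1.0, 1.0], size=(n_ens, 2))

obs = np.array([1.2, 0.8])                  # sparse observations (e.g. velocities)
obs_err = np.array([0.1, 0.1])              # observation error std. dev.
R = np.diag(obs_err ** 2)

ens = prior.copy()
for _ in range(5):                          # a few iterations of the EnKF update
    pred = np.array([forward(p) for p in ens])
    X = ens - ens.mean(axis=0)
    Y = pred - pred.mean(axis=0)
    C_xy = X.T @ Y / (n_ens - 1)
    C_yy = Y.T @ Y / (n_ens - 1)
    K = C_xy @ np.linalg.inv(C_yy + R)      # Kalman gain
    perturbed_obs = obs + rng.normal(0, obs_err, size=(n_ens, 2))
    ens = ens + (perturbed_obs - pred) @ K.T

print("posterior mean:", ens.mean(axis=0), "posterior std:", ens.std(axis=0))
```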

  10. Quantifying and reducing model-form uncertainties in Reynolds-averaged Navier–Stokes simulations: A data-driven, physics-informed Bayesian approach

    Energy Technology Data Exchange (ETDEWEB)

    Xiao, H., E-mail: hengxiao@vt.edu; Wu, J.-L.; Wang, J.-X.; Sun, R.; Roy, C.J.

    2016-11-01

    Despite their well-known limitations, Reynolds-Averaged Navier–Stokes (RANS) models are still the workhorse tools for turbulent flow simulations in today's engineering analysis, design and optimization. While the predictive capability of RANS models depends on many factors, for many practical flows the turbulence models are by far the largest source of uncertainty. As RANS models are used in the design and safety evaluation of many mission-critical systems such as airplanes and nuclear power plants, quantifying their model-form uncertainties has significant implications in enabling risk-informed decision-making. In this work we develop a data-driven, physics-informed Bayesian framework for quantifying model-form uncertainties in RANS simulations. Uncertainties are introduced directly to the Reynolds stresses and are represented with compact parameterization accounting for empirical prior knowledge and physical constraints (e.g., realizability, smoothness, and symmetry). An iterative ensemble Kalman method is used to assimilate the prior knowledge and observation data in a Bayesian framework, and to propagate them to posterior distributions of velocities and other Quantities of Interest (QoIs). We use two representative cases, the flow over periodic hills and the flow in a square duct, to evaluate the performance of the proposed framework. Both cases are challenging for standard RANS turbulence models. Simulation results suggest that, even with very sparse observations, the obtained posterior mean velocities and other QoIs have significantly better agreement with the benchmark data compared to the baseline results. At most locations the posterior distribution adequately captures the true model error within the developed model form uncertainty bounds. The framework is a major improvement over existing black-box, physics-neutral methods for model-form uncertainty quantification, where prior knowledge and details of the models are not exploited. This approach

  11. Estimating uncertainty and its temporal variation related to global climate models in quantifying climate change impacts on hydrology

    Science.gov (United States)

    Shen, Mingxi; Chen, Jie; Zhuan, Meijia; Chen, Hua; Xu, Chong-Yu; Xiong, Lihua

    2018-01-01

    Uncertainty estimation of climate change impacts on hydrology has received much attention in the research community. The choice of a global climate model (GCM) is usually considered the largest contributor to the uncertainty of climate change impacts. The temporal variation of GCM uncertainty needs to be investigated for making long-term decisions to deal with climate change. Accordingly, this study investigated the temporal variation (mainly long-term) of uncertainty related to the choice of a GCM in predicting climate change impacts on hydrology by using multiple GCMs over multiple continuous future periods. Specifically, twenty CMIP5 GCMs under the RCP4.5 and RCP8.5 emission scenarios were adopted to adequately represent this uncertainty envelope, and fifty-one 30-year future periods moving from 2021 to 2100 with a 1-year interval were produced to express the temporal variation. Future climatic and hydrological regimes over all future periods were compared to those in the reference period (1971-2000) using a set of metrics, including mean and extremes. The periodicity of climatic and hydrological changes and their uncertainty were analyzed using wavelet analysis, while the trend was analyzed using the Mann-Kendall trend test and regression analysis. The results showed that both future climate change (precipitation and temperature) and hydrological response predicted by the twenty GCMs were highly uncertain, and the uncertainty increased significantly over time. For example, the change of mean annual precipitation increased from 1.4% in 2021-2050 to 6.5% in 2071-2100 for RCP4.5 in terms of the median value of multi-models, but the projected uncertainty reached 21.7% in 2021-2050 and 25.1% in 2071-2100 for RCP4.5. The uncertainty under a high emission scenario (RCP8.5) was much larger than that under a relatively low emission scenario (RCP4.5). Almost all climatic and hydrological regimes and their uncertainty did not show significant periodicity at the P = .05 significance

  12. Quantifying the uncertainties of China's emission inventory for industrial sources: From national to provincial and city scales

    Science.gov (United States)

    Zhao, Yu; Zhou, Yaduan; Qiu, Liping; Zhang, Jie

    2017-09-01

    A comprehensive uncertainty analysis was conducted on emission inventories for industrial sources at the national (China), provincial (Jiangsu), and city (Nanjing) scales for 2012. Based on various methods and data sources, Monte Carlo simulation was applied at the sector level for the national inventory, and at the plant level (whenever possible) for the provincial and city inventories. The uncertainties of the national inventory were estimated at -17-37% (expressed as 95% confidence intervals, CIs), -21-35%, -19-34%, -29-40%, -22-47%, -21-54%, -33-84%, and -32-92% for SO2, NOX, CO, TSP (total suspended particles), PM10, PM2.5, black carbon (BC), and organic carbon (OC) emissions, respectively, for the whole country. At the provincial and city levels, the uncertainties of the corresponding pollutant emissions were estimated at -15-18%, -18-33%, -16-37%, -20-30%, -23-45%, -26-50%, -33-79%, and -33-71% for Jiangsu, and -17-22%, -10-33%, -23-75%, -19-36%, -23-41%, -28-48%, -45-82%, and -34-96% for Nanjing, respectively. Emission factors (or associated parameters) were identified as the biggest contributors to the uncertainties of emissions for most source categories except iron & steel production in the national inventory. Compared to the national inventory, the uncertainties of total emissions in the provincial and city-scale inventories were not significantly reduced for most species, with the exception of SO2. For power and other industrial boilers, the uncertainties were reduced, and the plant-specific parameters played more important roles in the uncertainties. Much larger PM10 and PM2.5 emissions for Jiangsu were estimated in this provincial inventory than in other studies, implying big discrepancies in the data sources of emission factors and activity data between local and national inventories. Although the uncertainty analysis of bottom-up emission inventories at national and local scales partly supported the "top-down" estimates using observations and/or chemistry transport models, detailed investigations and
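
    A generic sketch of the sector-level Monte Carlo calculation described above: emissions are the product of uncertain activity data and uncertain emission factors, and the percentile spread of the simulated total gives the confidence interval reported as a percentage of the central estimate. The two sectors, central values, and uncertainty ranges below are invented for illustration and are not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(2024)
n = 100_000

# Illustrative two-sector inventory: emission = activity data x emission factor.
sectors = {
    "power plants":       {"activity": 1.9e9, "act_cv": 0.05, "ef": 0.8, "ef_cv": 0.20},
    "industrial boilers": {"activity": 6.0e8, "act_cv": 0.10, "ef": 1.5, "ef_cv": 0.35},
}

total = np.zeros(n)
for s in sectors.values():
    act = rng.normal(s["activity"], s["act_cv"] * s["activity"], n)
    ef = rng.lognormal(np.log(s["ef"]), s["ef_cv"], n)   # skewed emission factors
    total += act * ef

central = np.median(total)
lo, hi = np.percentile(total, [2.5, 97.5])
print(f"uncertainty: {100*(lo/central - 1):.0f}% to +{100*(hi/central - 1):.0f}%")
```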

  13. Quantifying uncertainties in radar forward models through a comparison between CloudSat and SPartICus reflectivity factors

    Science.gov (United States)

    Mascio, Jeana; Mace, Gerald G.

    2017-02-01

    Interpretations of remote sensing measurements collected in sample volumes containing ice-phase hydrometeors are very sensitive to assumptions regarding the distributions of mass with ice crystal dimension, otherwise known as mass-dimensional or m-D relationships. How these microphysical characteristics vary in nature is highly uncertain, resulting in significant uncertainty in algorithms that attempt to derive bulk microphysical properties from remote sensing measurements. This uncertainty extends to radar reflectivity factors forward calculated from model output, because the statistics of the actual m-D relationships in nature are not known. To investigate the variability in m-D relationships in cirrus clouds, reflectivity factors measured by CloudSat are combined with particle size distributions (PSDs) collected by coincident in situ aircraft, using an optimal estimation-based (OE) retrieval of the m-D power law. The PSDs were collected during 12 flights of the Stratton Park Engineering Company Learjet during the Small Particles in Cirrus campaign. We find that no specific habit emerges as preferred; instead, the microphysical characteristics of ice crystal populations tend to be distributed over a continuum, defying simple categorization. With the uncertainties derived from the OE algorithm, the uncertainties in the forward-modeled backscatter cross section and, in turn, radar reflectivity are calculated by using a bootstrapping technique, allowing us to infer the uncertainties in forward-modeled radar reflectivity that would be appropriately applied to remote sensing simulator algorithms.

  14. How to quantify uncertainty and variability in life cycle assessment: the case of greenhouse gas emissions of gas power generation in the US

    Science.gov (United States)

    Hauck, M.; Steinmann, Z. J. N.; Laurenzi, I. J.; Karuppiah, R.; Huijbregts, M. A. J.

    2014-07-01

    This study quantified the contributions of uncertainty and variability to the range of life-cycle greenhouse gas (LCGHG) emissions associated with conventional gas-fired electricity generation in the US. Whereas uncertainty is defined as lack of knowledge and can potentially be reduced by additional research, variability is an inherent characteristic of supply chains and cannot be reduced without physically modifying the system. The life cycle included four stages: production, processing, transmission and power generation, and utilized a functional unit of 1 kWh of electricity generated at the plant. Technological variability requires analyses of the life cycles of individual power plants, e.g. combined cycle plants or boilers. Parameter uncertainty was modeled via Monte Carlo simulation. Our approach reveals that technological differences are the predominant cause of the range of LCGHG emissions associated with gas power, primarily due to variability in plant efficiencies. Uncertainties in model parameters played a minor role for the 100-year time horizon. Variability in LCGHG emissions was a factor of 1.4 for combined cycle plants, and a factor of 1.3 for simple cycle plants (95% CI, 100-year horizon). The results can be used to assist decision-makers in assessing factors that contribute to LCGHG emissions despite uncertainties in the parameters employed to estimate those emissions.

  15. How to quantify uncertainty and variability in life cycle assessment: the case of greenhouse gas emissions of gas power generation in the US

    International Nuclear Information System (INIS)

    Hauck, M; Steinmann, Z J N; Huijbregts, M A J; Laurenzi, I J; Karuppiah, R

    2014-01-01

    This study quantified the contributions of uncertainty and variability to the range of life-cycle greenhouse gas (LCGHG) emissions associated with conventional gas-fired electricity generation in the US. Whereas uncertainty is defined as lack of knowledge and can potentially be reduced by additional research, variability is an inherent characteristic of supply chains and cannot be reduced without physically modifying the system. The life cycle included four stages: production, processing, transmission and power generation, and utilized a functional unit of 1 kWh of electricity generated at the plant. Technological variability requires analyses of the life cycles of individual power plants, e.g. combined cycle plants or boilers. Parameter uncertainty was modeled via Monte Carlo simulation. Our approach reveals that technological differences are the predominant cause of the range of LCGHG emissions associated with gas power, primarily due to variability in plant efficiencies. Uncertainties in model parameters played a minor role for the 100-year time horizon. Variability in LCGHG emissions was a factor of 1.4 for combined cycle plants, and a factor of 1.3 for simple cycle plants (95% CI, 100-year horizon). The results can be used to assist decision-makers in assessing factors that contribute to LCGHG emissions despite uncertainties in the parameters employed to estimate those emissions. (letter)

  16. A probabilistic approach to quantify the uncertainties in internal dose assessment using response surface and neural network

    International Nuclear Information System (INIS)

    Baek, M.; Lee, S.K.; Lee, U.C.; Kang, C.S.

    1996-01-01

    A probabilistic approach is formulated to assess the internal radiation exposure following the intake of radioisotopes. This probabilistic approach consists of 4 steps: (1) screening, (2) quantification of uncertainties, (3) propagation of uncertainties, and (4) analysis of output. The approach has been applied to Pu-induced internal dose assessment, and a multi-compartment dosimetric model is used for internal transport. In this approach, surrogate models of the original system are constructed using response surface and neural network methods, and the results of these surrogate models are compared with those of the original model. Each surrogate model approximates the original model well. Uncertainty and sensitivity analyses of the model parameters are carried out in this process. Dominant contributors to the dose in each organ are identified, and the results show that this approach could serve as a good tool for assessing the internal radiation exposure

  17. Quantifying Trace Amounts of Aggregates in Biopharmaceuticals Using Analytical Ultracentrifugation Sedimentation Velocity: Bayesian Analyses and F Statistics.

    Science.gov (United States)

    Wafer, Lucas; Kloczewiak, Marek; Luo, Yin

    2016-07-01

    Analytical ultracentrifugation-sedimentation velocity (AUC-SV) is often used to quantify high molar mass species (HMMS) present in biopharmaceuticals. Although these species are often present in trace quantities, they have received significant attention due to their potential immunogenicity. Commonly, AUC-SV data is analyzed as a diffusion-corrected, sedimentation coefficient distribution, or c(s), using SEDFIT to numerically solve Lamm-type equations. SEDFIT also utilizes maximum entropy or Tikhonov-Phillips regularization to further allow the user to determine relevant sample information, including the number of species present, their sedimentation coefficients, and their relative abundance. However, this methodology has several, often unstated, limitations, which may impact the final analysis of protein therapeutics. These include regularization-specific effects, artificial "ripple peaks," and spurious shifts in the sedimentation coefficients. In this investigation, we experimentally verified that an explicit Bayesian approach, as implemented in SEDFIT, can largely correct for these effects. Clear guidelines on how to implement this technique and interpret the resulting data, especially for samples containing micro-heterogeneity (e.g., differential glycosylation), are also provided. In addition, we demonstrated how the Bayesian approach can be combined with F statistics to draw more accurate conclusions and rigorously exclude artifactual peaks. Numerous examples with an antibody and an antibody-drug conjugate were used to illustrate the strengths and drawbacks of each technique.

  18. Harnessing the theoretical foundations of the exponential and beta-Poisson dose-response models to quantify parameter uncertainty using Markov Chain Monte Carlo.

    Science.gov (United States)

    Schmidt, Philip J; Pintar, Katarina D M; Fazil, Aamir M; Topp, Edward

    2013-09-01

    Dose-response models are the essential link between exposure assessment and computed risk values in quantitative microbial risk assessment, yet the uncertainty that is inherent to computed risks because the dose-response model parameters are estimated using limited epidemiological data is rarely quantified. Second-order risk characterization approaches incorporating uncertainty in dose-response model parameters can provide more complete information to decisionmakers by separating variability and uncertainty to quantify the uncertainty in computed risks. Therefore, the objective of this work is to develop procedures to sample from posterior distributions describing uncertainty in the parameters of exponential and beta-Poisson dose-response models using Bayes's theorem and Markov Chain Monte Carlo (in OpenBUGS). The theoretical origins of the beta-Poisson dose-response model are used to identify a decomposed version of the model that enables Bayesian analysis without the need to evaluate Kummer confluent hypergeometric functions. Herein, it is also established that the beta distribution in the beta-Poisson dose-response model cannot address variation among individual pathogens, criteria to validate use of the conventional approximation to the beta-Poisson model are proposed, and simple algorithms to evaluate actual beta-Poisson probabilities of infection are investigated. The developed MCMC procedures are applied to analysis of a case study data set, and it is demonstrated that an important region of the posterior distribution of the beta-Poisson dose-response model parameters is attributable to the absence of low-dose data. This region includes beta-Poisson models for which the conventional approximation is especially invalid and in which many beta distributions have an extreme shape with questionable plausibility. © Her Majesty the Queen in Right of Canada 2013. Reproduced with the permission of the Minister of the Public Health Agency of Canada.
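
    For the simpler exponential dose-response model, P(infection | dose d) = 1 - exp(-r d), the same idea can be sketched without OpenBUGS: a plain Metropolis sampler over log r under a binomial likelihood and a flat prior. The trial data, prior, and tuning below are assumptions of the sketch, not the case study analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(17)

# Hypothetical dosing-trial data (dose, number exposed, number infected).
dose = np.array([1e1, 1e2, 1e3, 1e4, 1e5])
n_exposed = np.array([20, 20, 20, 20, 20])
n_infected = np.array([1, 3, 9, 15, 20])

def log_post(log_r):
    """Binomial log likelihood of the exponential model P = 1 - exp(-r * dose)."""
    r = np.exp(log_r)
    p = np.clip(1.0 - np.exp(-r * dose), 1e-12, 1 - 1e-12)
    loglik = np.sum(n_infected * np.log(p) + (n_exposed - n_infected) * np.log(1 - p))
    return loglik          # flat prior on log r (an assumption of this sketch)

# Metropolis random walk on log r.
log_r, lp = np.log(1e-3), log_post(np.log(1e-3))
samples = []
for _ in range(20000):
    prop = log_r + rng.normal(0, 0.2)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        log_r, lp = prop, lp_prop
    samples.append(log_r)
r_samples = np.exp(np.array(samples[5000:]))       # discard burn-in
print("posterior r (2.5%, 50%, 97.5%):", np.percentile(r_samples, [2.5, 50, 97.5]))
```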

  19. Quantifying type I and type II errors in decision-making under uncertainty : The case of GM crops

    NARCIS (Netherlands)

    Ansink, Erik; Wesseler, Justus

    2009-01-01

    In a recent paper, Hennessy and Moschini (American Journal of Agricultural Economics 88(2): 308-323, 2006) analyse the interactions between scientific uncertainty and costly regulatory actions. We use their model to analyse the costs of making type I and type II errors, in the context of the

  20. Quantifying type I and type II errors in decision-making under uncertainty: the case of GM crops

    NARCIS (Netherlands)

    Ansink, E.J.H.; Wesseler, J.H.H.

    2009-01-01

    In a recent paper, Hennessy and Moschini (American Journal of Agricultural Economics 88(2): 308-323, 2006) analyse the interactions between scientific uncertainty and costly regulatory actions. We use their model to analyse the costs of making type I and type II errors, in the context of the

  1. Quantifying uncertainties influencing the long-term impacts of oil prices on energy markets and carbon emissions

    Science.gov (United States)

    McCollum, David L.; Jewell, Jessica; Krey, Volker; Bazilian, Morgan; Fay, Marianne; Riahi, Keywan

    2016-07-01

    Oil prices have fluctuated remarkably in recent years. Previous studies have analysed the impacts of future oil prices on the energy system and greenhouse gas emissions, but none have quantitatively assessed how the broader, energy-system-wide impacts of diverging oil price futures depend on a suite of critical uncertainties. Here we use the MESSAGE integrated assessment model to study several factors potentially influencing this interaction, thereby shedding light on which future unknowns hold the most importance. We find that sustained low or high oil prices could have a major impact on the global energy system over the next several decades; and depending on how the fuel substitution dynamics play out, the carbon dioxide consequences could be significant (for example, between 5 and 20% of the budget for staying below the internationally agreed 2 °C target). Whether or not oil and gas prices decouple going forward is found to be the biggest uncertainty.

  2. Quantifying soil carbon loss and uncertainty from a peatland wildfire using multi-temporal LiDAR

    Science.gov (United States)

    Reddy, Ashwan D.; Hawbaker, Todd J.; Wurster, F.; Zhu, Zhiliang; Ward, S.; Newcomb, Doug; Murray, R.

    2015-01-01

    Peatlands are a major reservoir of global soil carbon, yet account for just 3% of global land cover. Human impacts like draining can hinder the ability of peatlands to sequester carbon and expose their soils to fire under dry conditions. Estimating soil carbon loss from peat fires can be challenging due to uncertainty about pre-fire surface elevations. This study uses multi-temporal LiDAR to obtain pre- and post-fire elevations and estimate soil carbon loss caused by the 2011 Lateral West fire in the Great Dismal Swamp National Wildlife Refuge, VA, USA. We also determine how LiDAR elevation error affects uncertainty in our carbon loss estimate by randomly perturbing the LiDAR point elevations and recalculating elevation change and carbon loss, iterating this process 1000 times. We calculated a total loss using LiDAR of 1.10 Tg C across the 25 km2 burned area. The fire burned an average of 47 cm deep, equivalent to 44 kg C/m2, a value larger than the 1997 Indonesian peat fires (29 kg C/m2). Carbon loss via the First-Order Fire Effects Model (FOFEM) was estimated to be 0.06 Tg C. Propagating the LiDAR elevation error to the carbon loss estimates, we calculated a standard deviation of 0.00009 Tg C, equivalent to 0.008% of total carbon loss. We conclude that LiDAR elevation error is not a significant contributor to uncertainty in soil carbon loss under severe fire conditions with substantial peat consumption. However, uncertainties may be more substantial when soil elevation loss is of a similar or smaller magnitude than the reported LiDAR error.
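
    The error-propagation procedure described above can be sketched generically: perturb the pre- and post-fire elevation surfaces with Gaussian noise at an assumed LiDAR error level, recompute the carbon loss each time, and take the standard deviation across iterations. The synthetic DEMs, bulk density, carbon fraction, and error level below are illustrative assumptions, not the study's values.

```python
import numpy as np

rng = np.random.default_rng(2011)

# Synthetic pre- and post-fire DEMs (metres) over a small burned area.
ncell, cell_area = 10_000, 1.0                 # 1 m grid cells (illustrative)
pre = rng.normal(2.0, 0.1, ncell)
post = pre - rng.uniform(0.3, 0.6, ncell)      # ~0.3-0.6 m of peat consumed

bulk_density = 100.0    # kg soil per m3 (assumed)
carbon_frac = 0.5       # kg C per kg soil (assumed)
lidar_sigma = 0.10      # per-point vertical error, m (assumed)

def carbon_loss(pre_dem, post_dem):
    burned_depth = np.clip(pre_dem - post_dem, 0.0, None)
    return np.sum(burned_depth * cell_area) * bulk_density * carbon_frac  # kg C

base = carbon_loss(pre, post)
losses = np.empty(1000)
for k in range(1000):   # perturb both surfaces and recompute the loss
    losses[k] = carbon_loss(pre + rng.normal(0, lidar_sigma, ncell),
                            post + rng.normal(0, lidar_sigma, ncell))
print(f"loss = {base/1e3:.1f} t C, sd from elevation error = {losses.std()/1e3:.2f} t C")
```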

  3. A Bayesian analysis of sensible heat flux estimation: Quantifying uncertainty in meteorological forcing to improve model prediction

    KAUST Repository

    Ershadi, Ali

    2013-05-01

    The influence of uncertainty in land surface temperature, air temperature, and wind speed on the estimation of sensible heat flux is analyzed using a Bayesian inference technique applied to the Surface Energy Balance System (SEBS) model. The Bayesian approach allows for an explicit quantification of the uncertainties in input variables: a source of error generally ignored in surface heat flux estimation. An application using field measurements from the Soil Moisture Experiment 2002 is presented. The spatial variability of selected input meteorological variables in a multitower site is used to formulate the prior estimates for the sampling uncertainties, and the likelihood function is formulated assuming Gaussian errors in the SEBS model. Land surface temperature, air temperature, and wind speed were estimated by sampling their posterior distribution using a Markov chain Monte Carlo algorithm. Results verify that Bayesian-inferred air temperature and wind speed were generally consistent with those observed at the towers, suggesting that local observations of these variables were spatially representative. Uncertainties in the land surface temperature appear to have the strongest effect on the estimated sensible heat flux, with Bayesian-inferred values differing by up to ±5°C from the observed data. These differences suggest that the footprint of the in situ measured land surface temperature is not representative of the larger-scale variability. As such, these measurements should be used with caution in the calculation of surface heat fluxes and highlight the importance of capturing the spatial variability in the land surface temperature: particularly, for remote sensing retrieval algorithms that use this variable for flux estimation.

  4. Bootstrap and Order Statistics for Quantifying Thermal-Hydraulic Code Uncertainties in the Estimation of Safety Margins

    Directory of Open Access Journals (Sweden)

    Enrico Zio

    2008-01-01

    Full Text Available In the present work, the uncertainties affecting the safety margins estimated from thermal-hydraulic code calculations are captured quantitatively by resorting to the order statistics and the bootstrap technique. The proposed framework of analysis is applied to the estimation of the safety margin, with its confidence interval, of the maximum fuel cladding temperature reached during a complete group distribution blockage scenario in a RBMK-1500 nuclear reactor.
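
    A minimal sketch of combining order statistics with the bootstrap for a safety margin: with 59 code runs, the sample maximum is a standard one-sided 95/95 (Wilks) tolerance bound for the peak cladding temperature, and resampling the runs gives a confidence interval for the estimated margin. The temperatures and the safety limit below are synthetic, not RBMK-1500 results.

```python
import numpy as np

rng = np.random.default_rng(59)

# Peak cladding temperatures (degC) from N = 59 code runs (synthetic values).
pct = rng.normal(loc=620.0, scale=25.0, size=59)
safety_limit = 700.0

# Order statistics: with 59 runs, the sample maximum is a one-sided 95/95
# tolerance bound (Wilks), so the estimated margin is the limit minus that maximum.
margin_hat = safety_limit - pct.max()

# Bootstrap the sampling distribution of the estimated margin.
boot = np.array([safety_limit - rng.choice(pct, size=pct.size, replace=True).max()
                 for _ in range(5000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"margin ~ {margin_hat:.1f} degC, bootstrap 95% CI [{lo:.1f}, {hi:.1f}] degC")
```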

  5. Quantifying geological uncertainty in metamorphic phase equilibria modelling; a Monte Carlo assessment and implications for tectonic interpretations

    Directory of Open Access Journals (Sweden)

    Richard M. Palin

    2016-07-01

    Full Text Available Pseudosection modelling is rapidly becoming an essential part of a petrologist's toolkit and often forms the basis of interpreting the tectonothermal evolution of a rock sample, outcrop, or geological region. Of the several factors that can affect the accuracy and precision of such calculated phase diagrams, “geological” uncertainty related to natural petrographic variation at the hand sample- and/or thin section-scale is rarely considered. Such uncertainty influences the sample's bulk composition, which is the primary control on its equilibrium phase relationships and thus the interpreted pressure–temperature (P–T) conditions of formation. Two case study examples—a garnet–cordierite granofels and a garnet–staurolite–kyanite schist—are used to compare the relative importance that geological uncertainty has on bulk compositions determined via (1) X-ray fluorescence (XRF) or (2) point counting techniques. We show that only minor mineralogical variation at the thin-section scale propagates through the phase equilibria modelling procedure and affects the absolute P–T conditions at which key assemblages are stable. Absolute displacements of equilibria can approach ±1 kbar for only a moderate degree of modal proportion uncertainty, thus being essentially similar to the magnitudes reported for analytical uncertainties in conventional thermobarometry. Bulk compositions determined from multiple thin sections of a heterogeneous garnet–staurolite–kyanite schist show a wide range in major-element oxides, owing to notable variation in mineral proportions. Pseudosections constructed for individual point count-derived bulks accurately reproduce this variability on a case-by-case basis, though averaged proportions do not correlate with those calculated at equivalent peak P–T conditions for a whole-rock XRF-derived bulk composition. The main discrepancies relate to varying proportions of matrix phases (primarily mica relative to

  6. A method for extending stage-discharge relationships using a hydrodynamic model and quantifying the associated uncertainty

    Science.gov (United States)

    Shao, Quanxi; Dutta, Dushmanta; Karim, Fazlul; Petheram, Cuan

    2018-01-01

    Streamflow discharge is a fundamental dataset required to effectively manage water and land resources. However, developing robust stage-discharge relationships, called rating curves, from which streamflow discharge is derived, is time consuming and costly, particularly in remote areas and especially at high stage levels. As a result, stage-discharge relationships are often heavily extrapolated. Hydrodynamic (HD) models are physically based models used to simulate the flow of water along river channels and over adjacent floodplains. In this paper we demonstrate a method by which an HD model can be used to generate a 'synthetic' stage-discharge relationship at high stages. The method uses a both-side Box-Cox transformation to calibrate the synthetic rating curve such that the regression residuals are as close to the normal distribution as possible. This both-side transformation allows the statistical uncertainty in the synthetically derived stage-discharge relationship to be calculated, enabling decision-makers to determine whether the uncertainty in the synthetically generated rating curve at high stage levels is acceptable for their purposes. The proposed method is demonstrated at two streamflow gauging stations in north Queensland, Australia.
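
    As an illustration of the calibration idea, the sketch below fits a power-law rating curve to synthetic HD-model output with both sides of the regression Box-Cox transformed, choosing the transformation parameter that makes the residuals most nearly Gaussian. The synthetic data, the power-law form Q = a(h - h0)^b, and the use of the Shapiro-Wilk statistic are assumptions for illustration, not the authors' exact procedure.

      import numpy as np
      from scipy import optimize, stats

      rng = np.random.default_rng(1)

      # Assumed synthetic stage-discharge pairs from an HD model (stage h in m, discharge Q in m3/s)
      h = np.linspace(2.0, 10.0, 40)
      Q = 5.0 * (h - 1.0) ** 1.8 * rng.lognormal(0.0, 0.05, h.size)

      def boxcox(x, lam):
          return np.log(x) if abs(lam) < 1e-8 else (x ** lam - 1.0) / lam

      def fit_for_lambda(lam):
          """Fit rating curve Q = a*(h - h0)^b with both sides Box-Cox transformed."""
          def residuals(params):
              a, b, h0 = params
              pred = a * np.clip(h - h0, 1e-6, None) ** b
              return boxcox(Q, lam) - boxcox(pred, lam)
          sol = optimize.least_squares(residuals, x0=[1.0, 1.5, 0.5],
                                       bounds=([1e-3, 0.1, 0.0], [1e3, 5.0, 1.9]))
          return sol, residuals(sol.x)

      # Choose the lambda whose transformed residuals look most Gaussian (Shapiro-Wilk statistic)
      lams = np.linspace(-0.5, 1.0, 16)
      best = max(lams, key=lambda lam: stats.shapiro(fit_for_lambda(lam)[1]).statistic)
      sol, res = fit_for_lambda(best)
      print(f"lambda = {best:.2f}, params = {sol.x.round(3)}, residual sd = {res.std():.4f}")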

  7. The use of expert elicitation to quantify uncertainty in incomplete sorption data bases for Waste Isolation Pilot Plant performance assessment

    International Nuclear Information System (INIS)

    Anderson, D.R.; Trauth, K.M.; Hora, S.C.

    1991-01-01

    Iterative, annual performance-assessment calculations are being performed for the Waste Isolation Pilot Plant (WIPP), a planned underground repository in southeastern New Mexico, USA, for the disposal of transuranic waste. The performance-assessment calculations estimate the long-term radionuclide releases from the disposal system to the accessible environment. Because direct experimental data in some areas are presently of insufficient quantity to form the basis for the required distributions, expert judgment was used to estimate the concentrations of specific radionuclides in a brine exiting a repository room or drift as it migrates up an intruding borehole, and also the distribution coefficients that describe the retardation of radionuclides in the overlying Culebra Dolomite. The variables representing these concentrations and coefficients have been shown by 1990 sensitivity analyses to be among the set of parameters making the greatest contribution to the uncertainty in WIPP performance-assessment predictions. Utilizing available information, the experts (one expert panel addressed concentrations and a second panel addressed retardation) developed an understanding of the problem and were formally elicited to obtain probability distributions that characterize the uncertainty in fixed, but unknown, quantities. The probability distributions developed by the experts are being incorporated into the 1991 performance-assessment calculations. 16 refs., 4 tabs

  8. Sequential Test Selection by Quantifying of the Reduction in Diagnostic Uncertainty for the Diagnosis of Proximal Caries

    Directory of Open Access Journals (Sweden)

    Umut Arslan

    2013-06-01

    Full Text Available Background: In order to determine the presence or absence of a certain disease, multiple diagnostic tests may be necessary. The performance of these tests can be evaluated sequentially. Aims: The aim of the study is to determine the contribution of the test at each step to reducing diagnostic uncertainty when multiple tests are used sequentially for the diagnosis. Study Design: Diagnostic accuracy study. Methods: Radiographs of seventy-three patients of the Department of Dento-Maxillofacial Radiology of Hacettepe University Faculty of Dentistry were assessed. Panoramic (PAN), full mouth intraoral (FM), and bitewing (BW) radiographs were used for the diagnosis of proximal caries in the maxillary and mandibular molar regions. The diagnostic performance of radiography was evaluated sequentially using the reduction in diagnostic uncertainty. Results: FM provided the maximum diagnostic information for ruling in caries in the maxillary and mandibular molar regions in the first step. FM provided more diagnostic information than BW radiographs for ruling in caries in the mandibular region in the second step. In the mandibular region, BW radiographs provided more diagnostic information than FM for ruling out caries in the first step. Conclusion: The method presented in this study provides clinicians with a basis for the sequential selection of diagnostic tests for the correct diagnosis of the presence or absence of a certain disease.
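
    One common information-theoretic way to express the reduction in diagnostic uncertainty contributed by a test is the expected drop in Shannon entropy of the disease probability (the mutual information between test and disease). The sketch below uses that measure with assumed sensitivities, specificities, and pre-test probability; the study's exact measure and values may differ.

      import numpy as np

      def entropy(p):
          """Shannon entropy (bits) of a binary probability."""
          p = np.clip(p, 1e-12, 1 - 1e-12)
          return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

      def expected_entropy_reduction(prior, sens, spec):
          """Expected reduction in diagnostic uncertainty from one test (mutual information)."""
          p_pos = sens * prior + (1 - spec) * (1 - prior)      # P(test positive)
          post_pos = sens * prior / p_pos                      # P(disease | positive)
          post_neg = (1 - sens) * prior / (1 - p_pos)          # P(disease | negative)
          return entropy(prior) - (p_pos * entropy(post_pos) + (1 - p_pos) * entropy(post_neg))

      # Assumed sensitivities/specificities for illustration only (not the study's estimates)
      prior = 0.40
      for name, sens, spec in [("FM", 0.85, 0.90), ("BW", 0.80, 0.95), ("PAN", 0.65, 0.85)]:
          print(f"{name}: expected uncertainty reduction = "
                f"{expected_entropy_reduction(prior, sens, spec):.3f} bits")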

  9. Quantifying the Effects of Spatial Uncertainty in Fracture Permeability on CO2 Leakage through Columbia River Basalt Flow Interiors

    Science.gov (United States)

    Gierzynski, A.; Pollyea, R.

    2016-12-01

    Recent studies suggest that continental flood basalts may be suitable for geologic carbon sequestration, due to fluid-rock reactions that mineralize injected CO2 on relatively short time-scales. Flood basalts also possess a morphological structure conducive to injection, with alternating high-permeability (flow margin) and low-permeability (flow interior) layers. However, little information exists on the behavior of CO2 migration within field-scale fracture networks, particularly within flow interiors and at conditions near the critical point for CO2. In this study, numerical simulation is used to investigate the influence of fracture permeability uncertainty during gravity-driven CO2 migration within a jointed basalt flow interior as CO2 undergoes phase change from supercritical fluid to a subcritical phase. The model domain comprises a 2D fracture network mapped with terrestrial LiDAR scans of Columbia River Basalt acquired near Starbuck, WA. The model domain is 5 m × 5 m with bimodal heterogeneity (fracture and matrix), and initial conditions corresponding to a hydrostatic pressure gradient between 750 and 755 m depth. Under these conditions, the critical point for CO2 occurs 1.5 m above the bottom of the domain. For this model scenario, CO2 enters the base of the fracture network at 0.5 MPa overpressure, and matrix permeability is assumed constant. Fracture permeability follows a lognormal distribution on the basis of fracture aperture values from literature. In order to account for spatial uncertainty, the lognormal fracture permeability distribution is randomly located in the model domain and CO2 migration is simulated within the same fracture network for 50 equally probable realizations. Model results suggest that fracture connectivity, which is independent of permeability distribution, governs the path taken by buoyant CO2 as it rises through the flow interior; however, the permeability distribution strongly governs the CO2 flux magnitude. In particular
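
    A minimal sketch of how equally probable permeability realizations of this kind can be generated: lognormal permeabilities are drawn and randomly assigned to fracture cells while matrix cells stay fixed. The mask, matrix permeability, and lognormal parameters below are assumed placeholders, not the values mapped from the LiDAR scans.

      import numpy as np

      rng = np.random.default_rng(7)

      # Assumed 2D domain: True = fracture cell, False = matrix cell (stand-in for the mapped network)
      fracture_mask = rng.random((50, 50)) < 0.15
      K_MATRIX = 1e-18                      # assumed constant matrix permeability (m2)
      MU_LOG10, SIGMA_LOG10 = -13.0, 0.8    # assumed lognormal fracture-permeability parameters

      def realization():
          """One equally probable permeability field: lognormal values drawn over fracture cells."""
          k = np.full(fracture_mask.shape, K_MATRIX)
          n_frac = fracture_mask.sum()
          k[fracture_mask] = 10.0 ** rng.normal(MU_LOG10, SIGMA_LOG10, n_frac)
          return k

      fields = [realization() for _ in range(50)]   # 50 realizations, as in the study design
      print("median fracture permeability of first field:",
            np.median(fields[0][fracture_mask]))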

  10. Collaborative Research: Quantifying the Uncertainties of Aerosol Indirect Effects and Impacts on Decadal-Scale Climate Variability in NCAR CAM5 and CESM1

    Energy Technology Data Exchange (ETDEWEB)

    Nenes, Athanasios [Georgia Inst. of Technology, Atlanta, GA (United States)

    2017-06-23

    The goal of this proposed project is to assess the climatic importance and sensitivity of the aerosol indirect effect (AIE) to cloud and aerosol processes and feedbacks, which include organic aerosol hygroscopicity, cloud condensation nuclei (CCN) activation kinetics, giant CCN, cloud-scale entrainment, ice nucleation in mixed-phase and cirrus clouds, and treatment of subgrid variability of vertical velocity. A key objective was to link aerosol, cloud microphysics and dynamics feedbacks in CAM5 with a suite of internally consistent and integrated parameterizations that provide the appropriate degrees of freedom to capture the various aspects of the aerosol indirect effect. The proposal integrated new parameterization elements into the cloud microphysics, moist turbulence and aerosol modules used by the NCAR Community Atmospheric Model version 5 (CAM5). The CAM5 model was then used to systematically quantify the uncertainties of aerosol indirect effects through a series of sensitivity tests with present-day and preindustrial aerosol emissions. New parameterization elements were developed as a result of these efforts, and new diagnostic tools and methodologies were also developed to quantify the impacts of aerosols on clouds and climate within fully coupled models. Observations were used to constrain key uncertainties in the aerosol-cloud links. Advanced sensitivity tools were developed and implemented to probe the drivers of cloud microphysical variability at unprecedented temporal and spatial scales. All these results have been published in top and high impact journals (or are in the final stages of publication). This proposal has also supported a number of outstanding graduate students.

  11. Bayesian integration of flux tower data into a process-based simulator for quantifying uncertainty in simulated output

    Science.gov (United States)

    Raj, Rahul; van der Tol, Christiaan; Hamm, Nicholas Alexander Samuel; Stein, Alfred

    2018-01-01

    Parameters of a process-based forest growth simulator are difficult or impossible to obtain from field observations. Reliable estimates can be obtained using calibration against observations of output and state variables. In this study, we present a Bayesian framework to calibrate the widely used process-based simulator Biome-BGC against estimates of gross primary production (GPP) data. We used GPP partitioned from flux tower measurements of a net ecosystem exchange over a 55-year-old Douglas fir stand as an example. The uncertainties of both the Biome-BGC parameters and the simulated GPP values were estimated. The calibrated parameters leaf and fine root turnover (LFRT), ratio of fine root carbon to leaf carbon (FRC : LC), ratio of carbon to nitrogen in leaf (C : Nleaf), canopy water interception coefficient (Wint), fraction of leaf nitrogen in RuBisCO (FLNR), and effective soil rooting depth (SD) characterize the photosynthesis and carbon and nitrogen allocation in the forest. The calibration improved the root mean square error and enhanced Nash-Sutcliffe efficiency between simulated and flux tower daily GPP compared to the uncalibrated Biome-BGC. Nevertheless, the seasonal cycle for flux tower GPP was not reproduced exactly and some overestimation in spring and underestimation in summer remained after calibration. We hypothesized that the phenology exhibited a seasonal cycle that was not accurately reproduced by the simulator. We investigated this by calibrating the Biome-BGC to each month's flux tower GPP separately. As expected, the simulated GPP improved, but the calibrated parameter values suggested that the seasonal cycle of state variables in the simulator could be improved. It was concluded that the Bayesian framework for calibration can reveal features of the modelled physical processes and identify aspects of the process simulator that are too rigid.
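
    The Bayesian calibration step can be illustrated with a Metropolis random-walk sampler. Because Biome-BGC itself cannot be run here, a toy one-parameter GPP model stands in for the simulator, and the observation error, prior range, and proposal width are assumed values.

      import numpy as np

      rng = np.random.default_rng(3)

      # Toy forward model standing in for Biome-BGC: daily GPP as a function of one parameter
      days = np.arange(365)
      def simulator(flnr):
          return flnr * 50.0 * np.clip(np.sin(2 * np.pi * (days - 80) / 365), 0.0, None)

      true_flnr, sigma_obs = 0.09, 0.5
      gpp_obs = simulator(true_flnr) + rng.normal(0.0, sigma_obs, days.size)

      def log_post(flnr):
          if not 0.01 <= flnr <= 0.30:          # assumed uniform prior range
              return -np.inf
          resid = gpp_obs - simulator(flnr)
          return -0.5 * np.sum((resid / sigma_obs) ** 2)

      # Metropolis random-walk sampling of the parameter posterior
      chain, current = [], 0.15
      lp = log_post(current)
      for _ in range(5000):
          prop = current + rng.normal(0.0, 0.005)
          lp_prop = log_post(prop)
          if np.log(rng.random()) < lp_prop - lp:
              current, lp = prop, lp_prop
          chain.append(current)

      post = np.array(chain[1000:])             # discard burn-in
      print(f"posterior mean {post.mean():.4f}, "
            f"95% CI [{np.percentile(post, 2.5):.4f}, {np.percentile(post, 97.5):.4f}]")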

  12. Bayesian integration of flux tower data into a process-based simulator for quantifying uncertainty in simulated output

    Directory of Open Access Journals (Sweden)

    R. Raj

    2018-01-01

    Full Text Available Parameters of a process-based forest growth simulator are difficult or impossible to obtain from field observations. Reliable estimates can be obtained using calibration against observations of output and state variables. In this study, we present a Bayesian framework to calibrate the widely used process-based simulator Biome-BGC against estimates of gross primary production (GPP) data. We used GPP partitioned from flux tower measurements of a net ecosystem exchange over a 55-year-old Douglas fir stand as an example. The uncertainties of both the Biome-BGC parameters and the simulated GPP values were estimated. The calibrated parameters leaf and fine root turnover (LFRT), ratio of fine root carbon to leaf carbon (FRC : LC), ratio of carbon to nitrogen in leaf (C : Nleaf), canopy water interception coefficient (Wint), fraction of leaf nitrogen in RuBisCO (FLNR), and effective soil rooting depth (SD) characterize the photosynthesis and carbon and nitrogen allocation in the forest. The calibration improved the root mean square error and enhanced Nash–Sutcliffe efficiency between simulated and flux tower daily GPP compared to the uncalibrated Biome-BGC. Nevertheless, the seasonal cycle for flux tower GPP was not reproduced exactly and some overestimation in spring and underestimation in summer remained after calibration. We hypothesized that the phenology exhibited a seasonal cycle that was not accurately reproduced by the simulator. We investigated this by calibrating the Biome-BGC to each month's flux tower GPP separately. As expected, the simulated GPP improved, but the calibrated parameter values suggested that the seasonal cycle of state variables in the simulator could be improved. It was concluded that the Bayesian framework for calibration can reveal features of the modelled physical processes and identify aspects of the process simulator that are too rigid.

  13. Using LIDAR and Quickbird Data to Model Plant Production and Quantify Uncertainties Associated with Wetland Detection and Land Cover Generalizations

    Science.gov (United States)

    Cook, Bruce D.; Bolstad, Paul V.; Naesset, Erik; Anderson, Ryan S.; Garrigues, Sebastian; Morisette, Jeffrey T.; Nickeson, Jaime; Davis, Kenneth J.

    2009-01-01

    Spatiotemporal data from satellite remote sensing and surface meteorology networks have made it possible to continuously monitor global plant production, and to identify global trends associated with land cover/use and climate change. Gross primary production (GPP) and net primary production (NPP) are routinely derived from the MOderate Resolution Imaging Spectroradiometer (MODIS) onboard satellites Terra and Aqua, and estimates generally agree with independent measurements at validation sites across the globe. However, the accuracy of GPP and NPP estimates in some regions may be limited by the quality of model input variables and heterogeneity at fine spatial scales. We developed new methods for deriving model inputs (i.e., land cover, leaf area, and photosynthetically active radiation absorbed by plant canopies) from airborne laser altimetry (LiDAR) and Quickbird multispectral data at resolutions ranging from about 30 m to 1 km. In addition, LiDAR-derived biomass was used as a means for computing carbon-use efficiency. Spatial variables were used with temporal data from ground-based monitoring stations to compute a six-year GPP and NPP time series for a 3600 ha study site in the Great Lakes region of North America. Model results compared favorably with independent observations from a 400 m flux tower and a process-based ecosystem model (BIOME-BGC), but only after removing vapor pressure deficit as a constraint on photosynthesis from the MODIS global algorithm. Fine resolution inputs captured more of the spatial variability, but estimates were similar to coarse-resolution data when integrated across the entire study site, because wetland vegetation structure, composition, and conversion efficiencies were similar to those of upland plant communities. Plant productivity estimates were noticeably improved using LiDAR-derived variables, while uncertainties associated with land cover generalizations and wetlands in this largely forested landscape were considered less important.

  14. Quantifying uncertainty in coral Sr/Ca-based SST estimates from Orbicella faveolata: A basis for multi-colony SST reconstructions

    Science.gov (United States)

    Richey, J. N.; Flannery, J. A.; Toth, L. T.; Kuffner, I. B.; Poore, R. Z.

    2017-12-01

    The Sr/Ca in massive corals can be used as a proxy for sea surface temperature (SST) in shallow tropical to sub-tropical regions; however, the relationship between Sr/Ca and SST varies throughout the ocean, between different species of coral, and often between different colonies of the same species. We aimed to quantify the uncertainty associated with the Sr/Ca-SST proxy due to sample handling (e.g., micro-drilling or analytical error), vital effects (e.g., among-colony differences in coral growth), and local-scale variability in microhabitat. We examine the intra- and inter-colony reproducibility of Sr/Ca records extracted from five modern Orbicella faveolata colonies growing in the Dry Tortugas, Florida, USA. The average intra-colony absolute difference (AD) in Sr/Ca of the five colonies during an overlapping interval (1997-2008) was 0.055 ± 0.044 mmol mol-1 (0.96 °C) and the average inter-colony Sr/Ca AD was 0.039 ± 0.01 mmol mol-1 (0.51 °C). All available Sr/Ca-SST data pairs from 1997-2008 were combined and regressed against the HadISST1 gridded SST data set (24 °N and 82 °W) to produce a calibration equation that could be applied to O. faveolata specimens from throughout the Gulf of Mexico/Caribbean/Atlantic region after accounting for the potential uncertainties in Sr/Ca-derived SSTs. We quantified a combined error term for O. faveolata using the root-sum-square (RMS) of the analytical, intra-, and inter-colony uncertainties and suggest that an overall uncertainty of 0.046 mmol mol-1 (0.81 °C, 1σ), should be used to interpret Sr/Ca records from O. faveolata specimens of unknown age or origin to reconstruct SST. We also explored how uncertainty is affected by the number of corals used in a reconstruction by iteratively calculating the RMS error for composite coral time-series using two, three, four, and five overlapping coral colonies. Our results indicate that maximum RMS error at the 95% confidence interval on mean annual SST estimates is 1.4 °
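
    A root-sum-square combination of independent uncertainty components, and its reduction when several colonies are averaged, can be sketched as below. All component values, the Sr/Ca-SST slope, and the 1/sqrt(n) scaling are illustrative assumptions; they are not intended to reproduce the numbers reported above, which were derived from the full data set by iterative resampling.

      import numpy as np

      # Assumed uncertainty components (mmol/mol, 1 sigma) for illustration only
      analytical = 0.020
      intra_colony = 0.030
      inter_colony = 0.025

      components = np.array([analytical, intra_colony, inter_colony])
      combined = np.sqrt(np.sum(components ** 2))   # root-sum-square combination
      SLOPE = 0.06                                  # assumed Sr/Ca-SST slope (mmol/mol per degC)
      print(f"combined Sr/Ca uncertainty: {combined:.3f} mmol/mol (~{combined / SLOPE:.2f} degC)")

      # Effect of averaging n overlapping colonies: here the colony-related spread is
      # assumed to scale as 1/sqrt(n), while the analytical term does not.
      for n in range(1, 6):
          sigma_n = np.sqrt(analytical**2 + (intra_colony**2 + inter_colony**2) / n)
          print(f"n = {n}: combined uncertainty {sigma_n:.3f} mmol/mol")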

  15. Quantifying uncertainty in pest risk maps and assessments: adopting a risk-averse decision maker’s perspective

    Directory of Open Access Journals (Sweden)

    Denys Yemshanov

    2013-09-01

    Full Text Available Pest risk maps are important decision support tools when devising strategies to minimize introductions of invasive organisms and mitigate their impacts. When possible management responses to an invader include costly or socially sensitive activities, decision-makers tend to follow a more certain (i.e., risk-averse) course of action. We presented a new mapping technique that assesses pest invasion risk from the perspective of a risk-averse decision maker. We demonstrated the method by evaluating the likelihood that an invasive forest pest will be transported to one of the U.S. states or Canadian provinces in infested firewood by visitors to U.S. federal campgrounds. We tested the impact of the risk aversion assumption using distributions of plausible pest arrival scenarios generated with a geographically explicit model developed from data documenting camper travel across the study area. Next, we prioritized regions of high and low pest arrival risk via application of two stochastic ordering techniques that employed, respectively, first- and second-degree stochastic dominance rules, the latter of which incorporated the notion of risk aversion. We then identified regions in the study area where the pest risk value changed considerably after incorporating risk aversion. While both methods identified similar areas of highest and lowest risk, they differed in how they demarcated moderate-risk areas. In general, the second-order stochastic dominance method assigned lower risk rankings to moderate-risk areas. Overall, this new method offers a better strategy to deal with the uncertainty typically associated with risk assessments and provides a tractable way to incorporate decision-making preferences into final risk estimates, and thus helps to better align these estimates with particular decision-making scenarios about a pest organism of concern. Incorporation of risk aversion also helps prioritize the set of locations to target for inspections and
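
    The two stochastic ordering rules can be checked numerically from the empirical CDFs of arrival-rate ensembles. In the sketch below the two gamma-distributed ensembles are assumed stand-ins for model output, and the orientation of the comparison (which region is ranked riskier) depends on how risk is defined in the application.

      import numpy as np

      rng = np.random.default_rng(5)

      # Assumed ensembles of plausible pest-arrival rates for two regions
      region_a = rng.gamma(shape=3.0, scale=1.0, size=5000)
      region_b = rng.gamma(shape=2.0, scale=1.3, size=5000)

      grid = np.linspace(0.0, max(region_a.max(), region_b.max()), 500)

      def ecdf(sample, x):
          return np.searchsorted(np.sort(sample), x, side="right") / sample.size

      Fa, Fb = ecdf(region_a, grid), ecdf(region_b, grid)

      # First-degree stochastic dominance: A dominates B if Fa <= Fb everywhere
      fsd_a_over_b = np.all(Fa <= Fb)

      # Second-degree dominance (the risk-averse ordering): cumulative integrals of the CDFs
      dx = grid[1] - grid[0]
      ssd_a_over_b = np.all(np.cumsum(Fa) * dx <= np.cumsum(Fb) * dx)

      print("A FSD B:", fsd_a_over_b, " A SSD B:", ssd_a_over_b)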

  16. Quantifying uncertainty in the impacts of climate change on river discharge in sub-catchments of the Yangtze and Yellow River Basins, China

    Directory of Open Access Journals (Sweden)

    H. Xu

    2011-01-01

    Full Text Available Quantitative evaluations of the impacts of climate change on water resources are primarily constrained by uncertainty in climate projections from GCMs. In this study we assess uncertainty in the impacts of climate change on river discharge in two catchments of the Yangtze and Yellow River Basins that feature contrasting climate regimes (humid and semi-arid). Specifically, we quantify uncertainty associated with GCM structure from a subset of CMIP3 AR4 GCMs (HadCM3, HadGEM1, CCSM3.0, IPSL, ECHAM5, CSIRO, CGCM3.1), SRES emissions scenarios (A1B, A2, B1, B2), and prescribed increases in global mean air temperature (1 °C to 6 °C). Climate projections, applied to semi-distributed hydrological models (SWAT 2005) in both catchments, indicate trends toward warmer and wetter conditions. For prescribed warming scenarios of 1 °C to 6 °C, linear increases in mean annual river discharge, relative to baseline (1961–1990), for the River Xiangxi and River Huangfuchuan are +9% and +11% per +1 °C, respectively. Intra-annual changes include increases in flood (Q05) discharges for both rivers as well as a shift in the timing of flood discharges from summer to autumn and a rise (24 to 93%) in dry season (Q95) discharge for the River Xiangxi. Differences in projections of mean annual river discharge between SRES emission scenarios using HadCM3 are comparatively minor for the River Xiangxi (13 to 17% rise from baseline) but substantial (73 to 121%) for the River Huangfuchuan. With one minor exception of a slight (−2%) decrease in river discharge projected using HadGEM1 for the River Xiangxi, mean annual river discharge is projected to increase in both catchments under both the SRES A1B emission scenario and a 2 °C rise in global mean air temperature using all AR4 GCMs in the CMIP3 subset. For the River Xiangxi, there is substantial uncertainty associated with GCM structure in the magnitude of the rise in flood (Q05) discharges (−1 to 41% under SRES A1B and −3 to 41% under 2
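
    The delta-change approach to constructing perturbed climate scenarios can be sketched as follows; the baseline forcing, the per-GCM delta factors, and the crude runoff proxy are all assumed placeholders (the study itself used SWAT 2005).

      import numpy as np

      rng = np.random.default_rng(11)

      # Assumed baseline daily forcing (stand-in for an observed 1961-1990 series)
      rain_base = rng.gamma(shape=0.6, scale=8.0, size=365) * (rng.random(365) < 0.4)
      temp_base = 15.0 + 10.0 * np.sin(2 * np.pi * np.arange(365) / 365)

      def delta_change(rain, temp, d_precip_pct, d_temp_c):
          """Perturb baseline forcing with delta-change factors taken from a GCM scenario."""
          return rain * (1.0 + d_precip_pct / 100.0), temp + d_temp_c

      def toy_runoff(rain, temp):
          """Very crude runoff proxy: rainfall minus a temperature-driven evaporative loss."""
          return np.sum(np.clip(rain - 0.1 * np.clip(temp, 0.0, None), 0.0, None))

      q_base = toy_runoff(rain_base, temp_base)
      # Assumed (precipitation %, temperature degC) deltas for three hypothetical GCMs
      for gcm, (dp, dt) in {"GCM-1": (5.0, 1.0), "GCM-2": (12.0, 2.0), "GCM-3": (-3.0, 1.5)}.items():
          q = toy_runoff(*delta_change(rain_base, temp_base, dp, dt))
          print(f"{gcm}: mean-annual runoff change {100.0 * (q / q_base - 1.0):+.1f}%")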

  17. Information Aggregation in Organizations

    OpenAIRE

    Schulte, Elisabeth

    2006-01-01

    This dissertation contributes to the analysis of information aggregation procedures within organizations. Facing uncertainty about the consequences of a collective decision, information has to be aggregated before making a choice. Two main questions are addressed. Firstly, how well is an organization suited for the aggregation of decision-relevant information? Secondly, how should an organization be designed in order to aggregate information efficiently? The main part deals with information a...

  18. A multicenter study to quantify systematic variations and associated uncertainties in source positioning with commonly used HDR afterloaders and ring applicators for the treatment of cervical carcinomas

    Energy Technology Data Exchange (ETDEWEB)

    Awunor, O., E-mail: onuora.awunor@stees.nhs.uk [The Medical Physics Department, The James Cook University Hospital, Marton Road, Middlesbrough TS4 3BW, England (United Kingdom); Berger, D. [Department of Radiotherapy, General Hospital of Vienna, Vienna A-1090 (Austria); Kirisits, C. [Department of Radiotherapy, Comprehensive Cancer Center, Medical University of Vienna, Vienna A-1090 (Austria)

    2015-08-15

    Purpose: The reconstruction of radiation source position in the treatment planning system is a key part of the applicator reconstruction process in high dose rate (HDR) brachytherapy treatment of cervical carcinomas. The steep dose gradients, of as much as 12%/mm, associated with typical cervix treatments emphasize the importance of accurate and precise determination of source positions. However, a variety of methodologies with a range in associated measurement uncertainties, of up to ±2.5 mm, are currently employed by various centers to do this. In addition, a recent pilot study by Awunor et al. [“Direct reconstruction and associated uncertainties of {sup 192}Ir source dwell positions in ring applicators using gafchromic film in the treatment planning of HDR brachytherapy cervix patients,” Phys. Med. Biol. 58, 3207–3225 (2013)] reported source positional differences of up to 2.6 mm between ring sets of the same type and geometry. This suggests a need for a comprehensive study to assess and quantify systematic source position variations between commonly used ring applicators and HDR afterloaders across multiple centers. Methods: Eighty-six rings from 20 European brachytherapy centers were audited in the form of a postal audit with each center collecting the data independently. The data were collected by setting up the rings using a bespoke jig and irradiating gafchromic films at predetermined dwell positions using four afterloader types, MicroSelectron, Flexitron, GammaMed, and MultiSource, from three manufacturers, Nucletron, Varian, and Eckert & Ziegler BEBIG. Five different ring types in six sizes (Ø25–Ø35 mm) and two angles (45° and 60°) were used. Coordinates of irradiated positions relative to the ring center were determined and collated, and source position differences quantified by ring type, size, and angle. Results: The mean expanded measurement uncertainty (k = 2) along the direction of source travel was ±1.4 mm. The standard deviation

  19. Quantifying the Uncertainty in High Spatial and Temporal Resolution Synthetic Land Surface Reflectance at Pixel Level Using Ground-Based Measurements

    Science.gov (United States)

    Kong, J.; Ryu, Y.

    2017-12-01

    Algorithms for fusing high temporal frequency and high spatial resolution satellite images are widely used to develop dense time-series land surface observations. While many studies have shown that synthesized, frequent, high spatial resolution images can be successfully applied in vegetation mapping and monitoring, the validation and correction of fused images have received less attention than they deserve. To evaluate the precision of fused images at the pixel level, in-situ reflectance measurements that account for pixel-level heterogeneity are necessary. In this study, synthetic images of land surface reflectance were predicted from coarse, high-frequency MODIS images and high spatial resolution Landsat-8 OLI images using the Flexible Spatiotemporal Data Fusion (FSDAF) method. Ground-based reflectance was measured with a JAZ spectrometer (Ocean Optics, Dunedin, FL, USA) over a rice paddy during five main growth stages in Cheorwon-gun, Republic of Korea, where landscape heterogeneity changes through the growing season. After analyzing the spatial heterogeneity and seasonal variation of land surface reflectance based on the ground measurements, the uncertainties of the fused images were quantified at the pixel level. Finally, this relationship was applied to correct the fused reflectance images and build a seasonal time series of rice paddy surface reflectance. This dataset could be valuable for rice planting area extraction, phenological stage detection, and variable estimation.

  20. Magnetic vs. non-magnetic colloids - A comparative adsorption study to quantify the effect of dye-induced aggregation on the binding affinity of an organic dye.

    Science.gov (United States)

    Williams, Tyler A; Lee, Jenny; Diemler, Cory A; Subir, Mahamud

    2016-11-01

    Due to attractive magnetic forces, magnetic particles (MPs) can exhibit colloidal instability upon molecular adsorption. Thus, by comparing the dye adsorption isotherms of MPs and non-magnetic particles of the same size, shape and functional group, it should be possible to characterize the influence of magnetic attraction on MP aggregation. For a range of particle densities, a comparative adsorption study of malachite green (MG(+)) onto magnetic and non-magnetic colloids was carried out using a combination of a separation technique coupled with UV-vis spectroscopy, optical microscopy, and polarization dependent second harmonic generation (SHG) spectroscopy. Significant MP aggregation occurs in aqueous solution due to MG(+) adsorption. This alters the adsorption isotherm and challenges the determination of the adsorption equilibrium constant, Kads. The dye-induced aggregation is directly related to the MG(+) concentration, [MG(+)]. A modified Langmuir equation, which incorporates loss of surface sites due to this aggregation, accurately describes the resulting adsorption isotherms. A Kads of 1.1 (±0.3) × 10^7 and a loss of maximum MP surface capacity of 2.8 (±0.7) × 10^3 M^-1 per [MG(+)] were obtained. Additionally, SHG has been established as an effective tool to detect aggregation in nanoparticles. Copyright © 2016 Elsevier Inc. All rights reserved.
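
    The abstract does not give the exact form of the modified Langmuir equation, so the sketch below assumes one plausible version, in which the maximum surface capacity decreases linearly with dye concentration, and fits it to synthetic isotherm data; all parameter values are illustrative.

      import numpy as np
      from scipy.optimize import curve_fit

      rng = np.random.default_rng(9)

      def modified_langmuir(c, q_max, k_ads, k_loss):
          """Langmuir isotherm with a capacity that shrinks linearly with dye concentration.
          This functional form is an assumption for illustration, not the paper's exact model."""
          return q_max * (1.0 - k_loss * c) * k_ads * c / (1.0 + k_ads * c)

      # Assumed synthetic isotherm (equilibrium concentration c in M, adsorbed amount q in a.u.)
      c = np.linspace(2e-8, 1e-6, 25)
      q_true = modified_langmuir(c, 1.0, 1.1e7, 3e5)
      q_obs = q_true * (1.0 + rng.normal(0.0, 0.03, c.size))

      popt, pcov = curve_fit(modified_langmuir, c, q_obs, p0=[0.8, 5e6, 1e5], maxfev=10000)
      perr = np.sqrt(np.diag(pcov))
      print("q_max, K_ads, k_loss =", popt, "+/-", perr)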

  1. Evaluation of uncertainties in benefit-cost studies of electrical power plants. II. Development and application of a procedure for quantifying environmental uncertainties of a nuclear power plant. Final report

    International Nuclear Information System (INIS)

    Sullivan, W.G.

    1977-07-01

    Steam-electric generation plants are evaluated on a benefit-cost basis. Non-economic factors in the development and application of a procedure for quantifying environmental uncertainties of a nuclear power plant are discussed. By comparing monetary costs of a particular power plant assessed in Part 1 with non-monetary values arrived at in Part 2 and using an evaluation procedure developed in this study, a proposed power plant can be selected as a preferred alternative. This procedure enables policymakers to identify the incremental advantages and disadvantages of different power plants in view of their geographic locations. The report presents the evaluation procedure on a task by task basis and shows how it can be applied to a particular power plant. Because of the lack of objective data, it draws heavily on subjectively-derived inputs of individuals who are knowledgeable about the plant being investigated. An abbreviated study at another power plant demonstrated the transferability of the general evaluation procedure. Included in the appendices are techniques for developing scoring functions and a user's manual for the Fortran IV Program

  2. Where do uncertainties reside within environmental risk assessments? Expert opinion on uncertainty distributions for pesticide risks to surface water organisms.

    Science.gov (United States)

    Skinner, Daniel J C; Rocks, Sophie A; Pollard, Simon J T

    2016-12-01

    A reliable characterisation of uncertainties can aid uncertainty identification during environmental risk assessments (ERAs). However, typologies can be implemented inconsistently, causing uncertainties to go unidentified. We present an approach based on nine structured elicitations, in which subject-matter experts on pesticide risks to surface water organisms validate and assess three dimensions of uncertainty: its level (the severity of uncertainty, ranging from determinism to ignorance); nature (whether the uncertainty is epistemic or aleatory); and location (the data source or area in which the uncertainty arises). Risk characterisation contains the highest median levels of uncertainty, associated with estimating, aggregating and evaluating the magnitude of risks. Regarding the locations in which uncertainty is manifest, data uncertainty is dominant in problem formulation, exposure assessment and effects assessment. The comprehensive description of uncertainty presented here will enable risk analysts to prioritise the required phases, groups of tasks, or individual tasks within a risk analysis according to the highest levels of uncertainty, the potential for uncertainty to be reduced or quantified, or the types of location-based uncertainty, thus aiding uncertainty prioritisation during environmental risk assessments. In turn, it is expected to inform investment in uncertainty reduction or targeted risk management action. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  3. APPLICATION AND EVALUATION OF AN AGGREGATE PHYSICALLY-BASED TWO-STAGE MONTE CARLO PROBABILISTIC MODEL FOR QUANTIFYING CHILDREN'S RESIDENTIAL EXPOSURE AND DOSE TO CHLORPYRIFOS

    Science.gov (United States)

    Critical voids in exposure data and models lead risk assessors to rely on conservative assumptions. Risk assessors and managers need improved tools beyond the screening level analysis to address aggregate exposures to pesticides as required by the Food Quality Protection Act o...

  4. Uncertainty, joint uncertainty, and the quantum uncertainty principle

    International Nuclear Information System (INIS)

    Narasimhachar, Varun; Poostindouz, Alireza; Gour, Gilad

    2016-01-01

    Historically, the element of uncertainty in quantum mechanics has been expressed through mathematical identities called uncertainty relations, a great many of which continue to be discovered. These relations use diverse measures to quantify uncertainty (and joint uncertainty). In this paper we use operational information-theoretic principles to identify the common essence of all such measures, thereby defining measure-independent notions of uncertainty and joint uncertainty. We find that most existing entropic uncertainty relations use measures of joint uncertainty that yield themselves to a small class of operational interpretations. Our notion relaxes this restriction, revealing previously unexplored joint uncertainty measures. To illustrate the utility of our formalism, we derive an uncertainty relation based on one such new measure. We also use our formalism to gain insight into the conditions under which measure-independent uncertainty relations can be found. (paper)

  5. A statistical approach to quantify uncertainty in carbon monoxide measurements at the Izaña global GAW station: 2008–2011

    Directory of Open Access Journals (Sweden)

    A. J. Gomez-Pelaez

    2013-03-01

    Full Text Available Atmospheric CO in situ measurements are carried out at the Izaña (Tenerife) global GAW (Global Atmosphere Watch Programme of the World Meteorological Organization – WMO) mountain station using a Reduction Gas Analyser (RGA). In situ measurements at Izaña are representative of the subtropical Northeast Atlantic free troposphere, especially during nighttime. We present the measurement system configuration, the response function, the calibration scheme, the data processing, the Izaña 2008–2011 CO nocturnal time series, and the mean diurnal cycle by months. We have developed a rigorous uncertainty analysis for carbon monoxide measurements carried out at the Izaña station, which could be applied to other GAW stations. We determine the combined standard measurement uncertainty taking into consideration four contributing components: uncertainty of the WMO standard gases interpolated over the range of measurement, the uncertainty that takes into account the agreement between the standard gases and the response function used, the uncertainty due to the repeatability of the injections, and the propagated uncertainty related to the temporal consistency of the response function parameters (which also takes into account the covariance between the parameters). The mean value of the combined standard uncertainty decreased significantly after March 2009, from 2.37 nmol mol−1 to 1.66 nmol mol−1, due to improvements in the measurement system. A fifth type of uncertainty we call representation uncertainty is considered when some of the data necessary to compute the temporal mean are absent. Any computed mean has also a propagated uncertainty arising from the uncertainties of the data used to compute the mean. The law of propagation depends on the type of uncertainty component (random or systematic). In situ hourly means are compared with simultaneous and collocated NOAA flask samples. The uncertainty of the differences is computed and used to determine
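
    Combining the four named components in quadrature, and propagating a repeatability component to a mean of several injections, can be sketched as follows. The component magnitudes and the split into random versus systematic parts are assumed for illustration only.

      import numpy as np

      # Assumed example values (nmol/mol) for the four contributing components named above
      u_standards = 0.9      # WMO standard gases interpolated over the measurement range
      u_response = 0.8       # agreement between standards and the fitted response function
      u_repeat = 0.7         # repeatability of the injections
      u_params = 0.6         # temporal consistency of response-function parameters (incl. covariance)

      u_combined = np.sqrt(u_standards**2 + u_response**2 + u_repeat**2 + u_params**2)
      print(f"combined standard uncertainty: {u_combined:.2f} nmol/mol")

      # Propagating to a mean of n injections: the random part shrinks with sqrt(n),
      # the systematic parts do not (a simplifying assumption for illustration).
      n = 4
      u_mean = np.sqrt(u_standards**2 + u_response**2 + u_params**2 + (u_repeat**2) / n)
      print(f"standard uncertainty of a mean of {n} injections: {u_mean:.2f} nmol/mol")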

  6. A statistical approach to quantify uncertainty in carbon monoxide measurements at the Izaña global GAW station: 2008-2011

    Science.gov (United States)

    Gomez-Pelaez, A. J.; Ramos, R.; Gomez-Trueba, V.; Novelli, P. C.; Campo-Hernandez, R.

    2013-03-01

    Atmospheric CO in situ measurements are carried out at the Izaña (Tenerife) global GAW (Global Atmosphere Watch Programme of the World Meteorological Organization - WMO) mountain station using a Reduction Gas Analyser (RGA). In situ measurements at Izaña are representative of the subtropical Northeast Atlantic free troposphere, especially during nighttime. We present the measurement system configuration, the response function, the calibration scheme, the data processing, the Izaña 2008-2011 CO nocturnal time series, and the mean diurnal cycle by months. We have developed a rigorous uncertainty analysis for carbon monoxide measurements carried out at the Izaña station, which could be applied to other GAW stations. We determine the combined standard measurement uncertainty taking into consideration four contributing components: uncertainty of the WMO standard gases interpolated over the range of measurement, the uncertainty that takes into account the agreement between the standard gases and the response function used, the uncertainty due to the repeatability of the injections, and the propagated uncertainty related to the temporal consistency of the response function parameters (which also takes into account the covariance between the parameters). The mean value of the combined standard uncertainty decreased significantly after March 2009, from 2.37 nmol mol-1 to 1.66 nmol mol-1, due to improvements in the measurement system. A fifth type of uncertainty we call representation uncertainty is considered when some of the data necessary to compute the temporal mean are absent. Any computed mean has also a propagated uncertainty arising from the uncertainties of the data used to compute the mean. The law of propagation depends on the type of uncertainty component (random or systematic). In situ hourly means are compared with simultaneous and collocated NOAA flask samples. The uncertainty of the differences is computed and used to determine whether the differences are

  7. Quantifying reactor safety margins: Application of CSAU [Code Scalability, Applicability and Uncertainty] methodology to LBLOCA: Part 3, Assessment and ranging of parameters for the uncertainty analysis of LBLOCA codes

    International Nuclear Information System (INIS)

    Wulff, W.; Boyack, B.E.; Duffey, R.B.

    1988-01-01

    Comparisons of results from TRAC-PF1/MOD1 code calculations with measurements from Separate Effects Tests, and published experimental data for modeling parameters have been used to determine the uncertainty ranges of code input and modeling parameters which dominate the uncertainty in predicting the Peak Clad Temperature for a postulated Large Break Loss of Coolant Accident (LBLOCA) in a four-loop Westinghouse Pressurized Water Reactor. The uncertainty ranges are used for a detailed statistical analysis to calculate the probability distribution function for the TRAC code-predicted Peak Clad Temperature, as is described in an attendant paper. Measurements from Separate Effects Tests and Integral Effects Tests have been compared with results from corresponding TRAC-PF1/MOD1 code calculations to determine globally the total uncertainty in predicting the Peak Clad Temperature for LBLOCAs. This determination is in support of the detailed statistical analysis mentioned above. The analyses presented here account for uncertainties in input parameters, in modeling and scaling, in computing and in measurements. The analyses are an important part of the work needed to implement the Code Scalability, Applicability and Uncertainty (CSAU) methodology. CSAU is needed to determine the suitability of a computer code for reactor safety analyses and the uncertainty in computer predictions. The results presented here are used to estimate the safety margin of a particular nuclear reactor power plant for a postulated accident. 25 refs., 10 figs., 11 tabs

  8. Using Statistical Downscaling to Quantify the GCM-Related Uncertainty in Regional Climate Change Scenarios: A Case Study of Swedish Precipitation

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    There are a number of sources of uncertainty in regional climate change scenarios. When statistical downscaling is used to obtain regional climate change scenarios, the uncertainty may originate from the uncertainties in the global climate models used, the skill of the statistical model, and the forcing scenarios applied to the global climate model. The uncertainty associated with global climate models can be evaluated by examining the differences in the predictors and in the downscaled climate change scenarios based on a set of different global climate models. When standardized global climate model simulations such as the second phase of the Coupled Model Intercomparison Project (CMIP2) are used, the difference in the downscaled variables mainly reflects differences in the climate models and the natural variability in the simulated climates. It is proposed that the spread of the estimates can be taken as a measure of the uncertainty associated with global climate models. The proposed method is applied to the estimation of global-climate-model-related uncertainty in regional precipitation change scenarios in Sweden. Results from statistical downscaling based on 17 global climate models show that there is an overall increase in annual precipitation all over Sweden although a considerable spread of the changes in the precipitation exists. The general increase can be attributed to the increased large-scale precipitation and the enhanced westerly wind. The estimated uncertainty is nearly independent of region. However, there is a seasonal dependence. The estimates for winter show the highest level of confidence, while the estimates for summer show the least.

  9. Quantifying uncertainty in measurement of mercury in suspended particulate matter by cold vapor technique using atomic absorption spectrometry with hydride generator.

    Science.gov (United States)

    Singh, Nahar; Ahuja, Tarushee; Ojha, Vijay Narain; Soni, Daya; Tripathy, S Swarupa; Leito, Ivo

    2013-01-01

    As a result of rapid industrialization, several chemical forms of organic and inorganic mercury are constantly introduced to the environment and affect humans and animals directly. All forms of mercury have toxic effects; therefore, accurate measurement of mercury is of prime importance, especially in suspended particulate matter (SPM) collected with a high volume sampler (HVS). The quantification of mercury in SPM samples involves several steps, from sampling to the final result. The quality, reliability and confidence level of the analyzed data depend upon the measurement uncertainty of the whole process. Evaluation of measurement uncertainty of results is one of the requirements of the standard ISO/IEC 17025:2005 (European Standard EN IS/ISO/IEC 17025:2005, issue1:1-28, 2006). In the presented study, the uncertainty estimation for mercury determination in suspended particulate matter (SPM) has been carried out using the cold vapor Atomic Absorption Spectrometer-Hydride Generator (AAS-HG) technique following a wet chemical digestion process. For the calculation of uncertainty, we have considered many general potential sources of uncertainty. After analysis of data from seven diverse sites in Delhi, it has been concluded that the mercury concentration varies from 1.59 ± 0.37 to 14.5 ± 2.9 ng/m3 with a 95% confidence level (k = 2).

  10. Quantifying Carbon Financial Risk in the International Greenhouse Gas Market: An Application Using Remotely-Sensed Data to Align Scientific Uncertainty with Financial Decisions

    Science.gov (United States)

    Hultman, N. E.

    2002-12-01

    A common complaint about environmental policy is that regulations inadequately reflect scientific uncertainty and scientific consensus. While the causes of this phenomenon are complex and hard to discern, we know that corporations are the primary implementers of environmental regulations; therefore, focusing on how policy relates scientific knowledge to corporate decisions can provide valuable insights. Within the context of the developing international market for greenhouse gas emissions, I examine how corporations would apply finance theory to their investment decisions for carbon abatement projects. Using remotely sensed, ecosystem-scale carbon flux measurements, I show how to determine how much of the financial risk of carbon is diversifiable. I also discuss alternative, scientifically sound methods for hedging the non-diversifiable risks in carbon abatement projects. In providing a quantitative common language for scientific and corporate uncertainties, the concept of carbon financial risk provides an opportunity for expanding communication between these elements essential to successful climate policy.

  11. Quantifying uncertainties on the solution model of seismic tomography; How much confidence can be placed in the solution model of 3D reflection tomography?

    Energy Technology Data Exchange (ETDEWEB)

    Duffet, C.

    2004-12-01

    Reflection tomography allows the determination of a velocity model that fits the travel time data associated with reflections of seismic waves propagating in the subsurface. A least-squares formulation is used to compare the observed travel times with the travel times computed by the forward operator, which is based on ray tracing. This non-linear optimization problem is classically solved by a Gauss-Newton method based on successive linearizations of the forward operator. The solution obtained is only one among many possible models. Indeed, the uncertainties on the observed travel times (resulting from interpretative event picking on seismic records) and, more generally, the under-determination of the inverse problem lead to uncertainties on the solution. An a posteriori uncertainty analysis is then crucial to delimit the range of possible solutions that fit, with the expected accuracy, the data and the a priori information. A linearized a posteriori analysis is possible through analysis of the a posteriori covariance matrix, the inverse of the Gauss-Newton approximation of the Hessian matrix. The computation of this matrix is generally expensive (the matrix is huge for 3D problems) and the physical interpretation of the results is difficult. We therefore propose a formalism that allows uncertainties on relevant geological quantities to be computed at reduced computational cost. Nevertheless, this approach is only valid in the vicinity of the solution model (linearized framework) and complex cases may require a non-linear approach. An experimental approach consists of solving the inverse problem under constraints to test different geological scenarios. (author)

  12. Quantifying the uncertainties of climate change effects on the storage-yield and performance characteristics of the Pong multi-purpose reservoir, India

    Directory of Open Access Journals (Sweden)

    B. Soundharajan

    2015-06-01

    Full Text Available Climate change is predicted to affect water resources infrastructure due to its effect on rainfall, temperature and evapotranspiration. However, there are huge uncertainties in both the magnitude and direction of these effects. The Pong reservoir on the Beas River in northern India serves irrigation and hydropower needs. The hydrology of the catchment is highly influenced by Himalayan seasonal snow and glaciers, and monsoon rainfall; the changing pattern of the latter and the predicted disappearance of the former will have profound effects on the performance of the reservoir. This study employed a Monte-Carlo simulation approach to characterise the uncertainties in the future storage requirements and performance of the reservoir. Using a calibrated rainfall-runoff (R-R) model, the baseline runoff scenario was first simulated. The R-R inputs (rainfall and temperature) were then perturbed using plausible delta-changes to produce simulated climate change runoff scenarios. Stochastic models of the runoff were developed and used to generate ensembles of both the current and climate-change perturbed future scenarios. The resulting runoff ensembles were used to simulate the behaviour of the reservoir and determine "populations" of reservoir storage capacity and performance characteristics. Comparing these parameters between the current and the perturbed scenarios provided the population of climate change effects, which was then analysed to determine the uncertainties. The results show that, contrary to the usual practice of using single records, there is wide variability in the assessed impacts. This variability or uncertainty will, no doubt, complicate the development of climate change adaptation measures; however, knowledge of its sheer magnitude as demonstrated in this study will help in the formulation of appropriate policy and technical interventions for sustaining and possibly enhancing water security for irrigation and other uses served by Pong reservoir.
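
    A minimal sketch of the Monte-Carlo behaviour analysis: stochastic inflow ensembles are generated for a baseline and a perturbed climate, the reservoir storage balance is stepped through each series, and the resulting "population" of reliability values is summarized. The capacity, demand, inflow statistics, and climate deltas below are assumed placeholders, not values for the Pong reservoir.

      import numpy as np

      rng = np.random.default_rng(21)

      DEMAND = 80.0      # assumed annual draft (volume units)
      CAPACITY = 150.0   # assumed active storage capacity

      def simulate_reliability(inflows, capacity=CAPACITY, demand=DEMAND):
          """Behaviour analysis: step the storage balance through an inflow series
          and return the time-based reliability (fraction of periods demand is met)."""
          storage, met = capacity, 0
          for q in inflows:
              storage = min(storage + q, capacity)   # add inflow, spill above capacity
              supplied = min(storage, demand)
              met += supplied >= demand
              storage -= supplied
          return met / len(inflows)

      def inflow_ensemble(mean, cv, n_years=50, n_series=1000):
          """Stochastic lognormal inflow ensemble (a simple stand-in for the fitted runoff model)."""
          sigma = np.sqrt(np.log(1.0 + cv**2))
          mu = np.log(mean) - 0.5 * sigma**2
          return rng.lognormal(mu, sigma, size=(n_series, n_years))

      baseline = [simulate_reliability(s) for s in inflow_ensemble(100.0, 0.4)]
      perturbed = [simulate_reliability(s) for s in inflow_ensemble(90.0, 0.5)]   # assumed deltas
      print(f"reliability: baseline {np.mean(baseline):.2f} +/- {np.std(baseline):.2f}, "
            f"perturbed {np.mean(perturbed):.2f} +/- {np.std(perturbed):.2f}")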

  13. Quantifying Parameter and Structural Uncertainty of Dynamic Disease Transmission Models Using MCMC: An Application to Rotavirus Vaccination in England and Wales.

    Science.gov (United States)

    Bilcke, Joke; Chapman, Ruth; Atchison, Christina; Cromer, Deborah; Johnson, Helen; Willem, Lander; Cox, Martin; Edmunds, William John; Jit, Mark

    2015-07-01

    Two vaccines (Rotarix and RotaTeq) are highly effective at preventing severe rotavirus disease. Rotavirus vaccination has been introduced in the United Kingdom and other countries partly based on modeling and cost-effectiveness results. However, most of these models fail to account for the uncertainty about several vaccine characteristics and the mechanism of vaccine action. A deterministic dynamic transmission model of rotavirus vaccination in the United Kingdom was developed. This improves on previous models by 1) allowing for 2 different mechanisms of action for Rotarix and RotaTeq, 2) using clinical trial data to understand these mechanisms, and 3) accounting for uncertainty by using Markov Chain Monte Carlo. In the long run, Rotarix and RotaTeq are predicted to reduce the overall rotavirus incidence by 50% (39%-63%) and 44% (30%-62%), respectively, but with an increase in incidence in primary school children and adults up to 25 y of age. The vaccines are estimated to give more protection than 1 or 2 natural infections. The duration of protection is highly uncertain but has an impact on the predicted reduction in rotavirus burden only for values lower than 10 y. The 2 vaccine mechanism structures fit equally well with the clinical trial data. Long-term postvaccination dynamics cannot be predicted reliably with the data available. Accounting for the joint uncertainty of several vaccine characteristics resulted in more insight into which of these are crucial for determining the impact of rotavirus vaccination. Data for up to at least 10 y postvaccination and covering older children and adults are crucial to address remaining questions on the impact of widespread rotavirus vaccination. © The Author(s) 2015.

  14. Uncertainties in Forecasting Streamflow using Entropy Theory

    Science.gov (United States)

    Cui, H.; Singh, V. P.

    2017-12-01

    Streamflow forecasting is essential in river restoration, reservoir operation, power generation, irrigation, navigation, and water management. However, uncertainties always accompany forecasts, and they may affect the forecasting results and lead to large variations. Therefore, uncertainties must be considered and assessed properly when forecasting streamflow for water management. The aim of our work is to quantify the uncertainties involved in forecasting streamflow and to provide reliable streamflow forecasts. Although streamflow time series are stochastic, they exhibit seasonal and periodic patterns. Therefore, streamflow forecasting entails modeling seasonality, periodicity, and the correlation structure, and assessing uncertainties. This study applies entropy theory to forecast streamflow and to measure uncertainties during the forecasting process. To apply entropy theory to streamflow forecasting, spectral analysis is combined with time series analysis, as spectral analysis can be employed to characterize patterns of streamflow variation and identify the periodicity of streamflow. That is, it permits the extraction of significant information for understanding the streamflow process and its prediction. Application of entropy theory to streamflow forecasting involves determination of the spectral density, determination of parameters, and extension of the autocorrelation function. The uncertainties introduced by the precipitation input, the forecasting model, and the forecast results are measured separately using entropy. With information theory, how these uncertainties are transported and aggregated during these processes will be described.

  15. 87Sr/86Sr as a quantitative geochemical proxy for 14C reservoir age in dynamic, brackish waters: assessing applicability and quantifying uncertainties.

    Science.gov (United States)

    Lougheed, Bryan; van der Lubbe, Jeroen; Davies, Gareth

    2016-04-01

    Accurate geochronologies are crucial for reconstructing the sensitivity of brackish and estuarine environments to rapidly changing past external impacts. A common geochronological method used for such studies is radiocarbon (14C) dating, but its application in brackish environments is severely limited by an inability to quantify spatiotemporal variations in 14C reservoir age, or R(t), due to dynamic interplay between river runoff and marine water. Additionally, old carbon effects and species-specific behavioural processes also influence 14C ages. Using the world's largest brackish water body (the estuarine Baltic Sea) as a test-bed, combined with a comprehensive approach that objectively excludes both old carbon and species-specific effects, we demonstrate that it is possible to use 87Sr/86Sr ratios to quantify R(t) in ubiquitous mollusc shell material, leading to almost one order of magnitude increase in Baltic Sea 14C geochronological precision over the current state-of-the-art. We propose that this novel proxy method can be developed for other brackish water bodies worldwide, thereby improving geochronological control in these climate sensitive, near-coastal environments.

  16. A review of different perspectives on uncertainty and risk and an alternative modeling paradigm

    International Nuclear Information System (INIS)

    Samson, Sundeep; Reneke, James A.; Wiecek, Margaret M.

    2009-01-01

    The literature in economics, finance, operations research, engineering and in general mathematics is first reviewed on the subject of defining uncertainty and risk. The review goes back to 1901. Different perspectives on uncertainty and risk are examined and a new paradigm to model uncertainty and risk is proposed using relevant ideas from this study. This new paradigm is used to represent, aggregate and propagate uncertainty and interpret the resulting variability in a challenge problem developed by Oberkampf et al. [2004, Challenge problems: uncertainty in system response given uncertain parameters. Reliab Eng Syst Safety 2004; 85(1): 11-9]. The challenge problem is further extended into a decision problem that is treated within a multicriteria decision making framework to illustrate how the new paradigm yields optimal decisions under uncertainty. The accompanying risk is defined as the probability of an unsatisfactory system response quantified by a random function of the uncertainty

  17. A Bayesian method to quantify azimuthal anisotropy model uncertainties: application to global azimuthal anisotropy in the upper mantle and transition zone

    Science.gov (United States)

    Yuan, K.; Beghein, C.

    2018-04-01

    Seismic anisotropy is a powerful tool to constrain mantle deformation, but its existence in the deep upper mantle and topmost lower mantle is still uncertain. Recent results from higher mode Rayleigh waves have, however, revealed the presence of 1 per cent azimuthal anisotropy between 300 and 800 km depth, and changes in azimuthal anisotropy across the mantle transition zone boundaries. This has important consequences for our understanding of mantle convection patterns and deformation of deep mantle material. Here, we propose a Bayesian method to model depth variations in azimuthal anisotropy and to obtain quantitative uncertainties on the fast seismic direction and anisotropy amplitude from phase velocity dispersion maps. We applied this new method to existing global fundamental and higher mode Rayleigh wave phase velocity maps to assess the likelihood of azimuthal anisotropy in the deep upper mantle and to determine whether previously detected changes in anisotropy at the transition zone boundaries are robustly constrained by those data. Our results confirm that deep upper-mantle azimuthal anisotropy is favoured and well constrained by the higher mode data employed. The fast seismic directions are in agreement with our previously published model. The data favour a model characterized, on average, by changes in azimuthal anisotropy at the top and bottom of the transition zone. However, this change in fast axes is not a global feature as there are regions of the model where the azimuthal anisotropy direction is unlikely to change across depths in the deep upper mantle. We were, however, unable to detect any clear pattern or connection with surface tectonics. Future studies will be needed to further improve the lateral resolution of this type of model at transition zone depths.

  18. Uncertainty analysis of moderate- versus coarse-scale satellite fire products for quantifying agricultural burning: Implications for Air Quality in European Russia, Belarus, and Lithuania

    Science.gov (United States)

    McCarty, J. L.; Krylov, A.; Prishchepov, A. V.; Banach, D. M.; Potapov, P.; Tyukavina, A.; Rukhovitch, D.; Koroleva, P.; Turubanova, S.; Romanenkov, V.

    2015-12-01

    Cropland and pasture burning are common agricultural management practices that negatively impact air quality at a local and regional scale, including contributing to short-lived climate pollutants (SLCPs). This research focuses on both cropland and pasture burning in European Russia, Lithuania, and Belarus. Burned area and fire detections were derived from 500 m and 1 km Moderate Resolution Imaging Spectroradiometer (MODIS), 30 m Landsat 7 Enhanced Thematic Mapper Plus (ETM+), and Landsat 8 Operational Land Imager (OLI) data. Carbon, particulate matter, volatile organic compound (VOC), and hazardous air pollutant (HAP) emissions were then calculated using MODIS and Landsat-based estimates of fire and land-cover and land-use. Agricultural burning in Belarus, Lithuania, and European Russia showed a strong and consistent seasonal geographic pattern from 2002 to 2012, with the majority of fire detections occurring in March-June and a smaller peak in July and August. Over this 11-year period, there was a decrease in both cropland and pasture burning throughout this region. For Smolensk Oblast, a Russian administrative region with comparable agro-environmental conditions to Belarus and Lithuania, a detailed analysis of Landsat-based burned area estimations for croplands and pastures and field data collected in summer 2014 showed that the agricultural burning area can be up to 10 times higher than the 1 km MODIS active fire estimates. In general, European Russia is the main source of agricultural burning emissions compared to Lithuania and Belarus. On average, all cropland burning in European Russia as detected by the MCD45A1 MODIS Burned Area Product emitted 17.66 Gg of PM10, while annual burning of pasture in Smolensk Oblast, Russia as detected by Landsat burn scars emitted 494.85 Gg of PM10, a 96% difference. This highlights that quantifying the contribution of pasture burning and burned area versus cropland burning in agricultural regions is important for accurately

  19. Graph Aggregation

    NARCIS (Netherlands)

    Endriss, U.; Grandi, U.

    Graph aggregation is the process of computing a single output graph that constitutes a good compromise between several input graphs, each provided by a different source. One needs to perform graph aggregation in a wide variety of situations, e.g., when applying a voting rule (graphs as preference

  20. Quantifying and Visualizing Uncertainties in Molecular Models

    OpenAIRE

    Rasheed, Muhibur; Clement, Nathan; Bhowmick, Abhishek; Bajaj, Chandrajit

    2015-01-01

    Computational molecular modeling and visualization has seen significant progress in recent years with several molecular modeling and visualization software systems in use today. Nevertheless the molecular biology community lacks techniques and tools for the rigorous analysis, quantification and visualization of the associated errors in molecular structure and its associated properties. This paper attempts at filling this vacuum with the introduction of a systematic statistical framework whe...

  1. Quantifying Uncertainty in Expert Judgment: Initial Results

    Science.gov (United States)

    2013-03-01

  2. Quantifying Uncertainty in Soil Volume Estimates

    International Nuclear Information System (INIS)

    Roos, A.D.; Hays, D.C.; Johnson, R.L.; Durham, L.A.; Winters, M.

    2009-01-01

    Proper planning and design for remediating contaminated environmental media require an adequate understanding of the types of contaminants and the lateral and vertical extent of contamination. In the case of contaminated soils, this generally takes the form of volume estimates that are prepared as part of a Feasibility Study for Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) sites and/or as part of the remedial design. These estimates are typically single values representing what is believed to be the most likely volume of contaminated soil present at the site. These single-value estimates, however, do not convey the level of confidence associated with the estimates. Unfortunately, experience has been that pre-remediation soil volume estimates often significantly underestimate the actual volume of contaminated soils that are encountered during the course of remediation. This underestimation has significant implications, both technically (e.g., inappropriate remedial designs) and programmatically (e.g., establishing technically defensible budget and schedule baselines). Argonne National Laboratory (Argonne) has developed a joint Bayesian/geostatistical methodology for estimating contaminated soil volumes based on sampling results, which also provides upper and lower probabilistic bounds on those volumes. This paper evaluates the performance of this method in a retrospective study that compares volume estimates derived using this technique with actual excavated soil volumes for select Formerly Utilized Sites Remedial Action Program (FUSRAP) Maywood properties for which remedial action has been completed by the U.S. Army Corps of Engineers (USACE) New York District. (authors)

  3. Rydberg aggregates

    Science.gov (United States)

    Wüster, S.; Rost, J.-M.

    2018-02-01

    We review Rydberg aggregates, assemblies of a few Rydberg atoms exhibiting energy transport through collective eigenstates, considering isolated atoms or assemblies embedded within clouds of cold ground-state atoms. We classify Rydberg aggregates, and provide an overview of their possible applications as quantum simulators for phenomena from chemical or biological physics. Our main focus is on flexible Rydberg aggregates, in which atomic motion is an essential feature. In these, simultaneous control over Rydberg-Rydberg interactions, external trapping and electronic energies, allows Born-Oppenheimer surfaces for the motion of the entire aggregate to be tailored as desired. This is illustrated with theory proposals towards the demonstration of joint motion and excitation transport, conical intersections and non-adiabatic effects. Additional flexibility for quantum simulations is enabled by the use of dressed dipole-dipole interactions or the embedding of the aggregate in a cold gas or Bose-Einstein condensate environment. Finally we provide some guidance regarding the parameter regimes that are most suitable for the realization of either static or flexible Rydberg aggregates based on Li or Rb atoms. The current status of experimental progress towards enabling Rydberg aggregates is also reviewed.

  4. Quantifying Matter

    CERN Document Server

    Angelo, Joseph A

    2011-01-01

    Quantifying Matter explains how scientists learned to measure matter and quantify some of its most fascinating and useful properties. It presents many of the most important intellectual achievements and technical developments that led to the scientific interpretation of substance. Complete with full-color photographs, this exciting new volume describes the basic characteristics and properties of matter. Chapters include: Exploring the Nature of Matter; The Origin of Matter; The Search for Substance; Quantifying Matter During the Scientific Revolution; Understanding Matter's Electromagnet

  5. Error Analysis of CM Data Products Sources of Uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Hunt, Brian D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Eckert-Gallup, Aubrey Celia [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Cochran, Lainy Dromgoole [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Kraus, Terrence D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Allen, Mark B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Beal, Bill [National Security Technologies, Joint Base Andrews, MD (United States); Okada, Colin [National Security Technologies, LLC. (NSTec), Las Vegas, NV (United States); Simpson, Mathew [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-02-01

    The goal of this project is to address the current inability to assess the overall error and uncertainty of data products developed and distributed by DOE’s Consequence Management (CM) Program. This is a widely recognized shortfall, the resolution of which would provide a great deal of value and defensibility to the analysis results, data products, and the decision-making process that follows this work. A global approach to this problem is necessary because multiple sources of error and uncertainty contribute to the ultimate production of CM data products. Therefore, this project will require collaboration with subject matter experts across a wide range of FRMAC skill sets in order to quantify the types of uncertainty that each area of the CM process might contain and to understand how variations in these uncertainty sources contribute to the aggregated uncertainty present in CM data products. The ultimate goal of this project is to quantify the confidence level of CM products to ensure that appropriate public and worker protection decisions are supported by defensible analysis.

  6. Adjoint-Based Uncertainty Quantification with MCNP

    Energy Technology Data Exchange (ETDEWEB)

    Seifried, Jeffrey E. [Univ. of California, Berkeley, CA (United States)

    2011-09-01

    This work serves to quantify the instantaneous uncertainties in neutron transport simulations born from nuclear data and statistical counting uncertainties. Perturbation and adjoint theories are used to derive implicit sensitivity expressions. These expressions are transformed into forms that are convenient for construction with MCNP6, creating the ability to perform adjoint-based uncertainty quantification with MCNP6. These new tools are exercised on the depleted-uranium hybrid LIFE blanket, quantifying its sensitivities and uncertainties to important figures of merit. Overall, these uncertainty estimates are small (< 2%). Having quantified the sensitivities and uncertainties, physical understanding of the system is gained and some confidence in the simulation is acquired.
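
    The abstract does not reproduce the sensitivity expressions. As a hedged illustration of how sensitivity coefficients and a nuclear-data covariance matrix are typically combined into a response uncertainty, the sketch below applies the generic "sandwich rule"; the sensitivities, covariance values, and figure of merit are invented for illustration and are not taken from the MCNP6 work.

```python
import numpy as np

# Hypothetical relative sensitivities of a figure of merit (e.g., k-eff)
# to three nuclear-data parameters, S_i = (dR/R) / (dx_i/x_i).
S = np.array([0.45, -0.12, 0.08])

# Hypothetical relative covariance matrix of those parameters (variances on
# the diagonal, covariances off-diagonal); values are illustrative only.
C = np.array([
    [4.0e-4, 1.0e-4, 0.0],
    [1.0e-4, 9.0e-4, 0.0],
    [0.0,    0.0,    2.5e-4],
])

# Sandwich rule: the relative variance of the response is S^T C S.
rel_var = S @ C @ S
print(f"relative uncertainty in response: {np.sqrt(rel_var) * 100:.2f} %")
```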

  7. Enhanced Named Entity Extraction via Error-Driven Aggregation

    Energy Technology Data Exchange (ETDEWEB)

    Lemmond, T D; Perry, N C; Guensche, J W; Nitao, J J; Glaser, R E; Kidwell, P; Hanley, W G

    2010-02-22

    Despite recent advances in named entity extraction technologies, state-of-the-art extraction tools achieve insufficient accuracy rates for practical use in many operational settings. However, they are not generally prone to the same types of error, suggesting that substantial improvements may be achieved via appropriate combinations of existing tools, provided their behavior can be accurately characterized and quantified. In this paper, we present an inference methodology for the aggregation of named entity extraction technologies that is founded upon a black-box analysis of their respective error processes. This method has been shown to produce statistically significant improvements in extraction relative to standard performance metrics and to mitigate the weak performance of entity extractors operating under suboptimal conditions. Moreover, this approach provides a framework for quantifying uncertainty and has demonstrated the ability to reconstruct the truth when majority voting fails.
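
    As a loose stand-in for the black-box inference methodology described above, the sketch below aggregates entity labels by weighting each extractor's vote with its historical precision for that label and accepting a label only when the weighted score clears a threshold; the extractor names, precision values, and threshold are hypothetical, and the paper's statistical error-process model is not reproduced here.

```python
from collections import defaultdict

def aggregate_entities(extractor_outputs, extractor_precision, threshold=1.0):
    """Precision-weighted voting over entity labels from several extractors."""
    scores = defaultdict(lambda: defaultdict(float))
    for name, entities in extractor_outputs.items():
        for span, label in entities.items():
            # Weight each vote by the extractor's past precision for this label.
            scores[span][label] += extractor_precision[name].get(label, 0.5)
    decisions = {}
    for span, label_scores in scores.items():
        label, score = max(label_scores.items(), key=lambda kv: kv[1])
        if score >= threshold:             # acceptance threshold (assumed)
            decisions[span] = label
    return decisions

# Hypothetical outputs of three extractors and their per-label precisions.
outputs = {
    "A": {"Acme Corp": "ORG", "Jordan": "PER"},
    "B": {"Acme Corp": "ORG", "Jordan": "LOC"},
    "C": {"Jordan": "PER"},
}
precision = {"A": {"ORG": 0.9, "PER": 0.6}, "B": {"ORG": 0.8, "LOC": 0.4}, "C": {"PER": 0.7}}
print(aggregate_entities(outputs, precision))
```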

  8. Uncertainty Management of Dynamic Tariff Method for Congestion Management in Distribution Networks

    DEFF Research Database (Denmark)

    Huang, Shaojun; Wu, Qiuwei; Cheng, Lin

    2016-01-01

    The dynamic tariff (DT) method is designed for the distribution system operator (DSO) to alleviate congestions that might occur in a distribution network with high penetration of distributed energy resources (DERs). Uncertainty management is required for the decentralized DT method because the DT...... is determined based on optimal day-ahead energy planning with forecasted parameters such as day-ahead energy prices and energy needs which might be different from the parameters used by aggregators. The uncertainty management is to quantify and mitigate the risk of the congestion when employing...

  9. Uncertainty in oil projects

    International Nuclear Information System (INIS)

    Limperopoulos, G.J.

    1995-01-01

    This report presents an oil project valuation under uncertainty by means of two well-known financial techniques: the Capital Asset Pricing Model (CAPM) and the Black-Scholes option pricing formula. CAPM gives a linear positive relationship between expected rate of return and risk but does not take into consideration the aspect of flexibility, which is crucial for an irreversible investment such as an oil project. Introducing investment decision flexibility by using real options can increase the oil project value substantially. Some simple tests of the importance of stock market uncertainty for oil investments are performed. Uncertainty in stock returns is correlated with aggregate product market uncertainty according to Pindyck (1991). The results of the tests are not satisfactory due to the short data series, but introducing two other explanatory variables, the interest rate and Gross Domestic Product, improves the situation. 36 refs., 18 figs., 6 tabs

  10. Oil price uncertainty in Canada

    Energy Technology Data Exchange (ETDEWEB)

    Elder, John [Department of Finance and Real Estate, 1272 Campus Delivery, Colorado State University, Fort Collins, CO 80523 (United States); Serletis, Apostolos [Department of Economics, University of Calgary, Calgary, Alberta (Canada)

    2009-11-15

    Bernanke [Bernanke, Ben S. Irreversibility, uncertainty, and cyclical investment. Quarterly Journal of Economics 98 (1983), 85-106.] shows how uncertainty about energy prices may induce optimizing firms to postpone investment decisions, thereby leading to a decline in aggregate output. Elder and Serletis [Elder, John and Serletis, Apostolos. Oil price uncertainty.] find empirical evidence that uncertainty about oil prices has tended to depress investment in the United States. In this paper we assess the robustness of these results by investigating the effects of oil price uncertainty in Canada. Our results are remarkably similar to existing results for the United States, providing additional evidence that uncertainty about oil prices may provide another explanation for why the sharp oil price declines of 1985 failed to produce rapid output growth. Impulse-response analysis suggests that uncertainty about oil prices may tend to reinforce the negative response of output to positive oil shocks. (author)

  11. Advanced LOCA code uncertainty assessment

    International Nuclear Information System (INIS)

    Wickett, A.J.; Neill, A.P.

    1990-11-01

    This report describes a pilot study that identified, quantified and combined uncertainties for the LOBI BL-02 3% small break test. A ''dials'' version of TRAC-PF1/MOD1, called TRAC-F, was used. (author)

  12. Planejamento agregado da produção ótimo com limite mínimo de estoque influenciado pelas incertezas de demanda Optimal aggregate production planning with minimum inventory limit affected by demand uncertainties

    Directory of Open Access Journals (Sweden)

    Oscar S. Silva Filho

    1995-04-01

    Full Text Available This paper deals with the determination of an optimal decision policy for a production planning problem with inventory and production constraints. The planning horizon is finite, approximately 1 to 2 years, discretized into monthly periods. The problem data are totally aggregated, and the demand fluctuation over the periods of the horizon is random, with a probability distribution assumed to be Gaussian. Thus, the problem studied is a stochastic planning problem with a probabilistic constraint on the inventory variable. It is shown that, through appropriate transformations, an equivalent deterministic formulation can be obtained, for which an open-loop solution (an approximate solution to the original problem) can be generated. It is also shown that the uncertainties related to future demand fluctuations are made explicit in the deterministic formulation through a constraint function on the minimum inventory level. This function is essentially concave and increasing, and depends on the variance of the inventory variable and on a probability measure fixed a priori by the user. To illustrate the theoretical developments, a simple example of a single-product production system is proposed and solved by deterministic dynamic programming. The open-loop (i.e., approximate) solution generated by the equivalent problem is then compared with the true solution of the stochastic problem, obtained via a stochastic programming algorithm.
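
    The paper's exact transformation is not given in the abstract. Assuming independent Gaussian demands, the sketch below shows the standard chance-constraint argument: requiring the inventory to remain non-negative with probability alpha in every period is equivalent to a deterministic minimum bound on expected inventory that grows with the accumulated demand variance, which is why the bound is concave and increasing over the horizon. All numbers are illustrative.

```python
from math import sqrt
from statistics import NormalDist

def min_inventory_bound(demand_std, alpha=0.95):
    """Deterministic minimum-inventory bounds equivalent to P(I_t >= 0) >= alpha.

    demand_std: per-period standard deviations of independent Gaussian demands.
    The inventory variance accumulates over time, so the required safety margin
    z_alpha * sigma_I(t) is a concave, increasing function of the period t.
    """
    z = NormalDist().inv_cdf(alpha)
    bounds, var = [], 0.0
    for s in demand_std:
        var += s ** 2          # accumulated demand variance up to period t
        bounds.append(z * sqrt(var))
    return bounds

# 12 monthly periods with a demand standard deviation of 20 units per month.
print([round(b, 1) for b in min_inventory_bound([20.0] * 12)])
```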

  13. Forecast Accuracy Uncertainty and Momentum

    OpenAIRE

    Bing Han; Dong Hong; Mitch Warachka

    2009-01-01

    We demonstrate that stock price momentum and earnings momentum can result from uncertainty surrounding the accuracy of cash flow forecasts. Our model has multiple information sources issuing cash flow forecasts for a stock. The investor combines these forecasts into an aggregate cash flow estimate that has minimal mean-squared forecast error. This aggregate estimate weights each cash flow forecast by the estimated accuracy of its issuer, which is obtained from their past forecast errors. Mome...
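
    As a minimal sketch of the weighting scheme described above (inverse mean-squared-error weights, which minimize the MSE of a combination of independent, unbiased forecasts), assuming hypothetical issuers and error histories:

```python
import numpy as np

def combine_forecasts(forecasts, past_errors):
    """Combine forecasts with weights proportional to the inverse of each
    issuer's past mean-squared forecast error."""
    mse = np.array([np.mean(np.square(e)) for e in past_errors])
    w = (1.0 / mse) / np.sum(1.0 / mse)
    return float(np.dot(w, forecasts)), w

# Three issuers forecast next-period cash flow; the histories are illustrative.
forecasts = [10.2, 9.5, 11.0]
past_errors = [[0.5, -0.3, 0.4], [1.5, -2.0, 1.8], [0.9, 1.1, -0.8]]
estimate, weights = combine_forecasts(forecasts, past_errors)
print(estimate, weights.round(3))
```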

  14. Treatment of uncertainties in the IPCC: a philosophical analysis

    Science.gov (United States)

    Jebeile, J.; Drouet, I.

    2014-12-01

    The IPCC produces scientific reports out of findings on climate and climate change. Because the findings are uncertain in many respects, the production of reports requires aggregating assessments of uncertainties of different kinds. This difficult task is currently regulated by the Guidance note for lead authors of the IPCC fifth assessment report on consistent treatment of uncertainties. The note recommends that two metrics, confidence and likelihood, be used for communicating the degree of certainty in findings. Confidence is expressed qualitatively "based on the type, amount, quality, and consistency of evidence […] and the degree of agreement", while likelihood is expressed probabilistically "based on statistical analysis of observations or model results, or expert judgment". Therefore, depending on the evidence evaluated, authors have the choice to present either an assigned level of confidence or a quantified measure of likelihood. But aggregating assessments of these two different kinds of uncertainty means combining distinct and conflicting methodologies. So the question arises whether the treatment of uncertainties in the IPCC is rationally justified. In order to answer the question, it is worth comparing the IPCC procedures with the formal normative theories of epistemic rationality which have been developed by philosophers. These theories, which include contributions to the philosophy of probability and to Bayesian probabilistic confirmation theory, are relevant for our purpose because they are commonly used to assess the rationality of common collective judgement formation based on uncertain knowledge. In this paper we make the comparison and pursue the following objectives: (i) we determine whether the IPCC confidence and likelihood can be compared with the notions of uncertainty targeted by or underlying the formal normative theories of epistemic rationality; (ii) we investigate whether the formal normative theories of epistemic rationality justify

  15. Quantifying Transmission.

    Science.gov (United States)

    Woolhouse, Mark

    2017-07-01

    Transmissibility is the defining characteristic of infectious diseases. Quantifying transmission matters for understanding infectious disease epidemiology and designing evidence-based disease control programs. Tracing individual transmission events can be achieved by epidemiological investigation coupled with pathogen typing or genome sequencing. Individual infectiousness can be estimated by measuring pathogen loads, but few studies have directly estimated the ability of infected hosts to transmit to uninfected hosts. Individuals' opportunities to transmit infection are dependent on behavioral and other risk factors relevant given the transmission route of the pathogen concerned. Transmission at the population level can be quantified through knowledge of risk factors in the population or phylogeographic analysis of pathogen sequence data. Mathematical model-based approaches require estimation of the per capita transmission rate and basic reproduction number, obtained by fitting models to case data and/or analysis of pathogen sequence data. Heterogeneities in infectiousness, contact behavior, and susceptibility can have substantial effects on the epidemiology of an infectious disease, so estimates of only mean values may be insufficient. For some pathogens, super-shedders (infected individuals who are highly infectious) and super-spreaders (individuals with more opportunities to transmit infection) may be important. Future work on quantifying transmission should involve integrated analyses of multiple data sources.
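
    As a hedged illustration of the model-fitting route mentioned above (estimating the per capita transmission rate and the basic reproduction number by fitting a model to case data), the sketch below fits the transmission rate of a simple SIR model to synthetic prevalence data and reports R0 = beta/gamma; the population size, recovery rate, and "observed" data are assumptions, and real analyses use richer models and likelihoods.

```python
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import minimize_scalar

GAMMA = 1.0 / 7.0          # assumed recovery rate (1 / infectious period, per day)
N = 1_000_000              # assumed population size

def sir(y, t, beta):
    s, i, r = y
    return [-beta * s * i / N, beta * s * i / N - GAMMA * i, GAMMA * i]

def prevalence(beta, t):
    y0 = [N - 10, 10, 0]                      # 10 initial infections (assumed)
    return odeint(sir, y0, t, args=(beta,))[:, 1]

# Synthetic "observed" prevalence generated with beta = 0.35 plus noise.
t_obs = np.arange(0, 60, 1.0)
rng = np.random.default_rng(1)
obs = prevalence(0.35, t_obs) * rng.lognormal(0.0, 0.05, t_obs.size)

# Least-squares fit of the per capita transmission rate beta to the case data.
loss = lambda beta: np.sum((np.log(prevalence(beta, t_obs)) - np.log(obs)) ** 2)
beta_hat = minimize_scalar(loss, bounds=(0.05, 1.0), method="bounded").x
print(f"beta = {beta_hat:.3f} per day, R0 = {beta_hat / GAMMA:.2f}")
```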

  16. Uncertainty Management and Sensitivity Analysis

    DEFF Research Database (Denmark)

    Rosenbaum, Ralph K.; Georgiadis, Stylianos; Fantke, Peter

    2018-01-01

    Uncertainty is always there and LCA is no exception to that. The presence of uncertainties of different types and from numerous sources in LCA results is a fact, but managing them allows to quantify and improve the precision of a study and the robustness of its conclusions. LCA practice sometimes...... suffers from an imbalanced perception of uncertainties, justifying modelling choices and omissions. Identifying prevalent misconceptions around uncertainties in LCA is a central goal of this chapter, aiming to establish a positive approach focusing on the advantages of uncertainty management. The main...... objectives of this chapter are to learn how to deal with uncertainty in the context of LCA, how to quantify it, interpret and use it, and how to communicate it. The subject is approached more holistically than just focusing on relevant statistical methods or purely mathematical aspects. This chapter...

  17. The Uncertainty Multiplier and Business Cycles

    OpenAIRE

    Saijo, Hikaru

    2013-01-01

    I study a business cycle model where agents learn about the state of the economy by accumulating capital. During recessions, agents invest less, and this generates noisier estimates of macroeconomic conditions and an increase in uncertainty. The endogenous increase in aggregate uncertainty further reduces economic activity, which in turn leads to more uncertainty, and so on. Thus, through changes in uncertainty, learning gives rise to a multiplier effect that amplifies business cycles. I use ...

  18. Understanding uncertainty

    CERN Document Server

    Lindley, Dennis V

    2013-01-01

    Praise for the First Edition "...a reference for everyone who is interested in knowing and handling uncertainty." -Journal of Applied Statistics The critically acclaimed First Edition of Understanding Uncertainty provided a study of uncertainty addressed to scholars in all fields, showing that uncertainty could be measured by probability, and that probability obeyed three basic rules that enabled uncertainty to be handled sensibly in everyday life. These ideas were extended to embrace the scientific method and to show how decisions, containing an uncertain element, could be rationally made.

  19. Uncertainty in hydrological signatures

    Science.gov (United States)

    McMillan, Hilary; Westerberg, Ida

    2015-04-01

    Information that summarises the hydrological behaviour or flow regime of a catchment is essential for comparing responses of different catchments to understand catchment organisation and similarity, and for many other modelling and water-management applications. Such information types derived as an index value from observed data are known as hydrological signatures, and can include descriptors of high flows (e.g. mean annual flood), low flows (e.g. mean annual low flow, recession shape), the flow variability, flow duration curve, and runoff ratio. Because the hydrological signatures are calculated from observed data such as rainfall and flow records, they are affected by uncertainty in those data. Subjective choices in the method used to calculate the signatures create a further source of uncertainty. Uncertainties in the signatures may affect our ability to compare different locations, to detect changes, or to compare future water resource management scenarios. The aim of this study was to contribute to the hydrological community's awareness and knowledge of data uncertainty in hydrological signatures, including typical sources, magnitude and methods for its assessment. We proposed a generally applicable method to calculate these uncertainties based on Monte Carlo sampling and demonstrated it for a variety of commonly used signatures. The study was made for two data rich catchments, the 50 km2 Mahurangi catchment in New Zealand and the 135 km2 Brue catchment in the UK. For rainfall data the uncertainty sources included point measurement uncertainty, the number of gauges used in calculation of the catchment spatial average, and uncertainties relating to lack of quality control. For flow data the uncertainty sources included uncertainties in stage/discharge measurement and in the approximation of the true stage-discharge relation by a rating curve. The resulting uncertainties were compared across the different signatures and catchments, to quantify uncertainty
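
    A minimal sketch of the Monte Carlo approach described above, applied to a single signature (the runoff ratio): the flow record, catchment area, and the multiplicative rating-curve and rainfall error models below are assumptions chosen for illustration, not the error models quantified in the study.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic 10-year daily flow record (m3/s); in practice this is the gauged series.
flow = rng.gamma(shape=2.0, scale=1.0, size=3650)

def runoff_ratio(q, p_total_mm, area_km2):
    """Runoff ratio = total runoff depth / total precipitation depth."""
    runoff_mm = q.sum() * 86400 / (area_km2 * 1e6) * 1000
    return runoff_mm / p_total_mm

n_mc, signatures = 2000, []
for _ in range(n_mc):
    # Rating-curve error: a systematic bias plus per-day noise (assumed forms).
    q_mc = flow * rng.normal(1.0, 0.05) * rng.normal(1.0, 0.03, size=flow.size)
    # Precipitation error: +/-10% on the 10-year catchment-average total (assumed).
    p_mc = 10000.0 * rng.normal(1.0, 0.10)
    signatures.append(runoff_ratio(q_mc, p_mc, area_km2=135.0))

lo, med, hi = np.percentile(signatures, [2.5, 50, 97.5])
print(f"runoff ratio: {med:.2f} (95% interval {lo:.2f}-{hi:.2f})")
```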

  20. Measurement Uncertainty

    Science.gov (United States)

    Koch, Michael

    Measurement uncertainty is one of the key issues in quality assurance. It has become increasingly important for analytical chemistry laboratories with accreditation to ISO/IEC 17025. The uncertainty of a measurement is the most important criterion for deciding whether a measurement result is fit for purpose. It also helps in deciding whether a specification limit is exceeded or not. Estimating measurement uncertainty is often not trivial. Several strategies have been developed for this purpose and are described briefly in this chapter. In addition, the different possibilities for taking uncertainty into account in compliance assessment are explained.

  1. Quantify Risk to Manage Cost and Schedule

    National Research Council Canada - National Science Library

    Raymond, Fred

    1999-01-01

    Too many projects suffer from unachievable budget and schedule goals, caused by unrealistic estimates and the failure to quantify and communicate the uncertainty of these estimates to managers and sponsoring executives...

  2. An uncertainty inventory demonstration - a primary step in uncertainty quantification

    Energy Technology Data Exchange (ETDEWEB)

    Langenbrunner, James R. [Los Alamos National Laboratory; Booker, Jane M [Los Alamos National Laboratory; Hemez, Francois M [Los Alamos National Laboratory; Salazar, Issac F [Los Alamos National Laboratory; Ross, Timothy J [UNM

    2009-01-01

    Tools, methods, and theories for assessing and quantifying uncertainties vary by application. Uncertainty quantification tasks have unique desiderata and circumstances. To realistically assess uncertainty requires the engineer/scientist to specify mathematical models, the physical phenomena of interest, and the theory or framework for assessments. For example, Probabilistic Risk Assessment (PRA) specifically identifies uncertainties using probability theory, and therefore, PRAs lack formal procedures for quantifying uncertainties that are not probabilistic. The Phenomena Identification and Ranking Technique (PIRT) proceeds by ranking phenomena using scoring criteria that result in linguistic descriptors, such as importance ranked with the words 'High/Medium/Low.' The use of words allows PIRT to be flexible, but the analysis may then be difficult to combine with other uncertainty theories. We propose that a necessary step for the development of a procedure or protocol for uncertainty quantification (UQ) is the application of an Uncertainty Inventory. An Uncertainty Inventory should be considered and performed in the earliest stages of UQ.

  3. Potential Carbon Transport: Linking Soil Aggregate Stability and Sediment Enrichment for Updating the Soil Active Layer within Intensely Managed Landscapes

    Science.gov (United States)

    Wacha, K.; Papanicolaou, T.; Abban, B. K.; Wilson, C. G.

    2014-12-01

    Currently, many biogeochemical models lack the mechanistic capacity to accurately simulate soil organic carbon (SOC) dynamics, especially within intensely managed landscapes (IMLs) such as those found in the U.S. Midwest. These modeling limitations originate from not accounting for downslope connectivity of flowpathways initiated and governed by landscape processes and hydrologic forcing, which induce dynamic updates to the soil active layer (generally the top 20-30 cm of soil) with various sediment size fractions and aggregates being transported and deposited along the downslope. These hydro-geomorphic processes, often amplified in IMLs by tillage events and seasonal canopy, can greatly impact biogeochemical cycles (e.g., enhanced mineralization during aggregate breakdown) and, in turn, introduce substantial uncertainty when determining SOC budgets. In this study, some of these limitations were addressed through a new concept, Potential Carbon Transport (PCT), a term which quantifies the maximum amount of material available for transport at various positions of the landscape, which was used to further refine a coupled modeling framework focused on SOC redistribution through downslope/lateral connectivity. Specifically, the size fractions slaked from large and small aggregates during raindrop-induced aggregate stability tests were used in conjunction with rainfall-simulated sediment enrichment ratio (ER) experiments to quantify the PCT under various management practices, soil types and landscape positions. Field samples used in determining aggregate stability and the ER experiments were collected/performed within the historic Clear Creek Watershed, home of the IML Critical Zone Observatory, located in Southeastern Iowa.

  4. Uncertainty theory

    CERN Document Server

    Liu, Baoding

    2015-01-01

    When no samples are available to estimate a probability distribution, we have to invite some domain experts to evaluate the belief degree that each event will happen. Perhaps some people think that the belief degree should be modeled by subjective probability or fuzzy set theory. However, it is usually inappropriate because both of them may lead to counterintuitive results in this case. In order to rationally deal with belief degrees, uncertainty theory was founded in 2007 and subsequently studied by many researchers. Nowadays, uncertainty theory has become a branch of axiomatic mathematics for modeling belief degrees. This is an introductory textbook on uncertainty theory, uncertain programming, uncertain statistics, uncertain risk analysis, uncertain reliability analysis, uncertain set, uncertain logic, uncertain inference, uncertain process, uncertain calculus, and uncertain differential equation. This textbook also shows applications of uncertainty theory to scheduling, logistics, networks, data mining, c...

  5. Some illustrative examples of model uncertainty

    International Nuclear Information System (INIS)

    Bier, V.M.

    1994-01-01

    In this paper, we first discuss the view of model uncertainty proposed by Apostolakis. We then present several illustrative examples related to model uncertainty, some of which are not well handled by this formalism. Thus, Apostolakis' approach seems to be well suited to describing some types of model uncertainty, but not all. Since a comprehensive approach for characterizing and quantifying model uncertainty is not yet available, it is hoped that the examples presented here will serve as a springboard for further discussion

  6. Single particle detection and characterization of synuclein co-aggregation

    International Nuclear Information System (INIS)

    Giese, Armin; Bader, Benedikt; Bieschke, Jan; Schaffar, Gregor; Odoy, Sabine; Kahle, Philipp J.; Haass, Christian; Kretzschmar, Hans

    2005-01-01

    Protein aggregation is the key event in a number of human diseases such as Alzheimer's and Parkinson's disease. We present a general method to quantify and characterize protein aggregates by dual-colour scanning for intensely fluorescent targets (SIFT). In addition to high sensitivity, this approach offers a unique opportunity to study co-aggregation processes. As the ratio of two fluorescently labelled components can be analysed for each aggregate separately in a homogeneous assay, the molecular composition of aggregates can be studied even in samples containing a mixture of different types of aggregates. Using this method, we could show that wild-type α-synuclein forms co-aggregates with a mutant variant found in familial Parkinson's disease. Moreover, we found a striking increase in aggregate formation at non-equimolar mixing ratios, which may have important therapeutic implications, as lowering the relative amount of aberrant protein may cause an increase of protein aggregation leading to adverse effects

  7. Critical loads - assessment of uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Barkman, A.

    1998-10-01

    The effects of data uncertainty in applications of the critical loads concept were investigated on different spatial resolutions in Sweden and northern Czech Republic. Critical loads of acidity (CL) were calculated for Sweden using the biogeochemical model PROFILE. Three methods with different structural complexity were used to estimate the adverse effects of SO{sub 2} concentrations in northern Czech Republic. Data uncertainties in the calculated critical loads/levels and exceedances (EX) were assessed using Monte Carlo simulations. Uncertainties within cumulative distribution functions (CDF) were aggregated by accounting for the overlap between site specific confidence intervals. Aggregation of data uncertainties within CDFs resulted in lower CL and higher EX best estimates in comparison with percentiles represented by individual sites. Data uncertainties were consequently found to advocate larger deposition reductions to achieve non-exceedance based on low critical loads estimates on 150 x 150 km resolution. Input data were found to impair the level of differentiation between geographical units at all investigated resolutions. Aggregation of data uncertainty within CDFs involved more constrained confidence intervals for a given percentile. Differentiation as well as identification of grid cells on 150 x 150 km resolution subjected to EX was generally improved. Calculation of the probability of EX was shown to preserve the possibility to differentiate between geographical units. Re-aggregation of the 95%-ile EX on 50 x 50 km resolution generally increased the confidence interval for each percentile. Significant relationships were found between forest decline and the three methods addressing risks induced by SO{sub 2} concentrations. Modifying SO{sub 2} concentrations by accounting for the length of the vegetation period was found to constitute the most useful trade-off between structural complexity, data availability and effects of data uncertainty. Data

  8. How to live with uncertainties?

    International Nuclear Information System (INIS)

    Michel, R.

    2012-01-01

    In a short introduction, the problem of uncertainty as a general consequence of incomplete information as well as the approach to quantify uncertainty in metrology are addressed. A little history of the more than 30 years of the working group AK SIGMA is followed by an appraisal of its up-to-now achievements. Then, the potential future of the AK SIGMA is discussed based on its actual tasks and on open scientific questions and future topics. (orig.)

  9. Quantification of uncertainties of modeling and simulation

    International Nuclear Information System (INIS)

    Ma Zhibo; Yin Jianwei

    2012-01-01

    The principles of Modeling and Simulation (M and S) are interpreted through a functional relation, from which the total uncertainties of M and S are identified and sorted into three parts considered to vary along with the conceptual models' parameters. According to the idea of verification and validation, the parameter space is partitioned into verified and applied domains; uncertainties in the verified domain are quantified by comparison between numerical and standard results, and those in the applied domain are quantified by a newly developed extrapolating method. Examples are presented to demonstrate and qualify the ideas, which aim to build a framework for quantifying the uncertainties of M and S. (authors)

  10. Reliability analysis under epistemic uncertainty

    International Nuclear Information System (INIS)

    Nannapaneni, Saideep; Mahadevan, Sankaran

    2016-01-01

    This paper proposes a probabilistic framework to include both aleatory and epistemic uncertainty within model-based reliability estimation of engineering systems for individual limit states. Epistemic uncertainty is considered due to both data and model sources. Sparse point and/or interval data regarding the input random variables leads to uncertainty regarding their distribution types, distribution parameters, and correlations; this statistical uncertainty is included in the reliability analysis through a combination of likelihood-based representation, Bayesian hypothesis testing, and Bayesian model averaging techniques. Model errors, which include numerical solution errors and model form errors, are quantified through Gaussian process models and included in the reliability analysis. The probability integral transform is used to develop an auxiliary variable approach that facilitates a single-level representation of both aleatory and epistemic uncertainty. This strategy results in an efficient single-loop implementation of Monte Carlo simulation (MCS) and FORM/SORM techniques for reliability estimation under both aleatory and epistemic uncertainty. Two engineering examples are used to demonstrate the proposed methodology. - Highlights: • Epistemic uncertainty due to data and model included in reliability analysis. • A novel FORM-based approach proposed to include aleatory and epistemic uncertainty. • A single-loop Monte Carlo approach proposed to include both types of uncertainties. • Two engineering examples used for illustration.
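
    The paper's auxiliary-variable formulation enables an efficient single-loop implementation; the brute-force nested (double-loop) Monte Carlo sketch below conveys the same separation of epistemic and aleatory uncertainty in a simpler form, using an assumed load-capacity limit state and invented parameter uncertainties rather than anything from the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

def limit_state(load, capacity):
    """g <= 0 denotes failure."""
    return capacity - load

n_epistemic, n_aleatory = 200, 20_000
pf_samples = []
for _ in range(n_epistemic):
    # Epistemic layer: the distribution parameters are themselves uncertain
    # (the values below are assumptions, e.g. inferred from sparse data).
    load_mean = rng.normal(10.0, 0.5)
    load_std = rng.uniform(1.5, 2.5)
    cap_mean = rng.normal(15.0, 0.5)

    # Aleatory layer: natural variability given those parameters.
    load = rng.normal(load_mean, load_std, n_aleatory)
    capacity = rng.normal(cap_mean, 1.0, n_aleatory)
    pf_samples.append(np.mean(limit_state(load, capacity) <= 0))

pf = np.array(pf_samples)
print(f"failure probability: median {np.median(pf):.4f}, "
      f"95% epistemic interval [{np.percentile(pf, 2.5):.4f}, {np.percentile(pf, 97.5):.4f}]")
```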

  11. Teaching Uncertainties

    Science.gov (United States)

    Duerdoth, Ian

    2009-01-01

    The subject of uncertainties (sometimes called errors) is traditionally taught (to first-year science undergraduates) towards the end of a course on statistics that defines probability as the limit of many trials, and discusses probability distribution functions and the Gaussian distribution. We show how to introduce students to the concepts of…

  12. Calibration uncertainty

    DEFF Research Database (Denmark)

    Heydorn, Kaj; Anglov, Thomas

    2002-01-01

    Methods recommended by the International Standardization Organisation and Eurachem are not satisfactory for the correct estimation of calibration uncertainty. A novel approach is introduced and tested on actual calibration data for the determination of Pb by ICP-AES. The improved calibration...

  13. Quantifying climate risk - the starting point

    International Nuclear Information System (INIS)

    Fairweather, Helen; Luo, Qunying; Liu, De Li; Wiles, Perry

    2007-01-01

    Full text: All natural systems have evolved to their current state as a result, inter alia, of the climate in which they developed. Similarly, man-made systems (such as agricultural production) have developed to suit the climate experienced over the last 100 or so years. The capacity of different systems to adapt to changes in climate that are outside those that have been experienced previously is largely unknown. This results in considerable uncertainty when predicting climate change impacts. However, it is possible to quantify the relative probabilities of a range of potential impacts of climate change. Quantifying current climate risks is an effective starting point for analysing the probable impacts of future climate change and guiding the selection of appropriate adaptation strategies. For a farming system to be viable within the current climate, its profitability must be sustained and, therefore, possible adaptation strategies need to be tested for continued viability in a changed climate. The methodology outlined in this paper examines historical patterns of key climate variables (rainfall and temperature) across the season and their influence on the productivity of wheat growing in NSW. This analysis is used to identify the time of year that the system is most vulnerable to climate variation, within the constraints of the current climate. Wheat yield is used as a measure of productivity, which is also assumed to be a surrogate for profitability. A time series of wheat yields is sorted into ascending order and categorised into five percentile groupings (bounded by the 20th, 40th, 60th and 80th percentiles) for each shire across NSW (~100 years). Five time series of climate data (which are aggregated daily data from the years in each percentile grouping) are analysed to determine the period that provides the greatest climate risk to the production system. Once this period has been determined, this risk is quantified in terms of the degree of separation of the time series

  14. Analysis of uncertainty in modeling perceived risks

    International Nuclear Information System (INIS)

    Melnyk, R.; Sandquist, G.M.

    2005-01-01

    Expanding on a mathematical model developed for quantifying and assessing perceived risks, the distribution functions, variances, and uncertainties associated with estimating the model parameters are quantified. The analytical model permits the identification and assignment of any number of quantifiable risk perception factors that can be incorporated within standard risk methodology. Those risk perception factors associated with major technical issues are modeled using lognormal probability density functions to span the potentially large uncertainty variations associated with these risk perceptions. The model quantifies the logic of public risk perception and provides an effective means for measuring and responding to perceived risks. (authors)

  15. Uncertainty management in Real Estate Development: Studying the potential of SCRUM design methodology

    NARCIS (Netherlands)

    Blokpoel, S.B.; Reymen, Isabelle; Dewulf, Geert P.M.R.; Sariyildiz, S.; Tuncer, B.

    2005-01-01

    Real estate development is all about assessing and controlling risks and uncertainties. Risk management implies making decisions based on quantified risks to execute risk-response measures. Uncertainties, on the other hand, cannot be quantified and are therefore unpredictable. In literature, much

  16. Quantifying interspecific coagulation efficiency of phytoplankton

    DEFF Research Database (Denmark)

    Hansen, J.L.S.; Kiørboe, Thomas

    1997-01-01

    . nordenskjoeldii. Mutual coagulation between Skeletonema costatum and the non-sticky cells of Ditylum brightwellii also proceeded with half the efficiency of S. costatum alone. The latex beads were suitable to be used as 'standard particles' to quantify the ability of phytoplankton to prime aggregation...

  17. Demand Uncertainty

    DEFF Research Database (Denmark)

    Nguyen, Daniel Xuyen

    This paper presents a model of trade that explains why firms wait to export and why many exporters fail. Firms face uncertain demands that are only realized after the firm enters the destination. The model retools the timing of uncertainty resolution found in productivity heterogeneity models....... This retooling addresses several shortcomings. First, the imperfect correlation of demands reconciles the sales variation observed in and across destinations. Second, since demands for the firm's output are correlated across destinations, a firm can use previously realized demands to forecast unknown demands...... in untested destinations. The option to forecast demands causes firms to delay exporting in order to gather more information about foreign demand. Third, since uncertainty is resolved after entry, many firms enter a destination and then exit after learning that they cannot profit. This prediction reconciles...

  18. Uncertainty budget for k0-NAA

    International Nuclear Information System (INIS)

    Robouch, P.; Arana, G.; Eguskiza, M.; Etxebarria, N.

    2000-01-01

    The concepts of the Guide to the expression of Uncertainties in Measurements for chemical measurements (GUM) and the recommendations of the Eurachem document 'Quantifying Uncertainty in Analytical Methods' are applied to set up the uncertainty budget for k0-NAA. The 'universally applicable spreadsheet technique', described by KRAGTEN, is applied to the k0-NAA basic equations for the computation of uncertainties. The variance components - individual standard uncertainties - highlight the contribution and the importance of the different parameters to be taken into account. (author)
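
    Kragten's spreadsheet technique itself is straightforward to reproduce: perturb each input by its standard uncertainty, record the change in the result, and combine the changes in quadrature. The sketch below applies it to a generic measurement equation with invented values, not to the actual k0-NAA equations.

```python
import numpy as np

def kragten_uncertainty(f, x, u):
    """Kragten's numerical propagation for a result y = f(x).

    Perturbs each input by its standard uncertainty, records the change in the
    result, and combines the changes in quadrature. Returns the result, the
    combined standard uncertainty, and the per-input contributions.
    """
    x, u = np.asarray(x, float), np.asarray(u, float)
    y0 = f(x)
    contrib = np.empty_like(x)
    for i in range(x.size):
        xp = x.copy()
        xp[i] += u[i]
        contrib[i] = f(xp) - y0        # sensitivity-weighted contribution of input i
    return y0, float(np.sqrt(np.sum(contrib ** 2))), contrib

# Illustrative measurement equation (not the k0-NAA formula): c = a * b / d.
model = lambda v: v[0] * v[1] / v[2]
values = [12.3, 0.85, 2.10]            # best estimates (invented)
uncerts = [0.20, 0.02, 0.05]           # standard uncertainties (invented)
y, u_c, parts = kragten_uncertainty(model, values, uncerts)
print(y, u_c, parts)
```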

  19. Towards General Temporal Aggregation

    DEFF Research Database (Denmark)

    Boehlen, Michael H.; Gamper, Johann; Jensen, Christian Søndergaard

    2008-01-01

    associated with the management of temporal data. Indeed, temporal aggregation is complex and among the most difficult, and thus interesting, temporal functionality to support. This paper presents a general framework for temporal aggregation that accommodates existing kinds of aggregation, and it identifies...

  20. Development of Property Models with Uncertainty Estimate for Process Design under Uncertainty

    DEFF Research Database (Denmark)

    Hukkerikar, Amol; Sarup, Bent; Abildskov, Jens

    more reliable predictions with a new and improved set of model parameters for GC (group contribution) based and CI (atom connectivity index) based models and to quantify the uncertainties in the estimated property values from a process design point-of-view. This includes: (i) parameter estimation using....... The comparison of model prediction uncertainties with reported range of measurement uncertainties is presented for the properties with related available data. The application of the developed methodology to quantify the effect of these uncertainties on the design of different unit operations (distillation column......, the developed methodology can be used to quantify the sensitivity of process design to uncertainties in property estimates; obtain rationally the risk/safety factors in process design; and identify additional experimentation needs in order to reduce most critical uncertainties....

  1. Piecewise Polynomial Aggregation as Preprocessing for Data Numerical Modeling

    Science.gov (United States)

    Dobronets, B. S.; Popova, O. A.

    2018-05-01

    Data aggregation issues for numerical modeling are reviewed in the present study. The authors discuss data aggregation procedures as preprocessing for subsequent numerical modeling. To calculate the data aggregation, the authors propose using numerical probabilistic analysis (NPA). An important feature of this study is how the authors represent the aggregated data. The study shows that the proposed approach to data aggregation can be interpreted as constructing the frequency distribution of a variable. To study its properties, the density function is used. For this purpose, the authors propose using piecewise polynomial models. A suitable example of such an approach is the spline. The authors show that their approach to data aggregation reduces the level of data uncertainty and significantly increases the efficiency of numerical calculations. To demonstrate how well the proposed methods correspond to reality, the authors developed a theoretical framework and considered numerical examples devoted to time series aggregation.
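
    As a small illustration of the general idea (representing aggregated data by a piecewise polynomial model of its frequency distribution), the sketch below aggregates raw observations into a histogram and fits a cubic spline to the empirical density; the data, bin count, and spline choice are assumptions for illustration, and the paper's numerical probabilistic analysis is not reproduced.

```python
import numpy as np
from scipy.interpolate import CubicSpline

rng = np.random.default_rng(3)
data = rng.normal(loc=5.0, scale=1.5, size=10_000)   # raw data to be aggregated

# Aggregate the raw data into a frequency distribution (normalized histogram).
counts, edges = np.histogram(data, bins=30, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

# Piecewise polynomial (cubic spline) model of the density over the bin centers.
density = CubicSpline(centers, counts)

# Downstream numerical modelling can now work with the compact spline instead
# of the 10,000 raw observations, e.g. evaluating the density on a fine grid.
grid = np.linspace(centers[0], centers[-1], 200)
approx_area = np.sum(density(grid)) * (grid[1] - grid[0])
print(f"peak density {density(grid).max():.3f}, area under spline ~ {approx_area:.3f}")
```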

  2. Uncertainty quantification theory, implementation, and applications

    CERN Document Server

    Smith, Ralph C

    2014-01-01

    The field of uncertainty quantification is evolving rapidly because of increasing emphasis on models that require quantified uncertainties for large-scale applications, novel algorithm development, and new computational architectures that facilitate implementation of these algorithms. Uncertainty Quantification: Theory, Implementation, and Applications provides readers with the basic concepts, theory, and algorithms necessary to quantify input and response uncertainties for simulation models arising in a broad range of disciplines. The book begins with a detailed discussion of applications where uncertainty quantification is critical for both scientific understanding and policy. It then covers concepts from probability and statistics, parameter selection techniques, frequentist and Bayesian model calibration, propagation of uncertainties, quantification of model discrepancy, surrogate model construction, and local and global sensitivity analysis. The author maintains a complementary web page where readers ca...

  3. Treatment of uncertainty in low-level waste performance assessment

    International Nuclear Information System (INIS)

    Kozak, M.W.; Olague, N.E.; Gallegos, D.P.; Rao, R.R.

    1991-01-01

    Uncertainties arise from a number of different sources in low-level waste performance assessment. In this paper the types of uncertainty are reviewed, and existing methods for quantifying and reducing each type of uncertainty are discussed. These approaches are examined in the context of the current low-level radioactive waste regulatory performance objectives, which are deterministic. The types of uncertainty discussed in this paper are model uncertainty, uncertainty about future conditions, and parameter uncertainty. The advantages and disadvantages of available methods for addressing uncertainty in low-level waste performance assessment are presented. 25 refs

  4. Quantifying the Adaptive Cycle.

    Directory of Open Access Journals (Sweden)

    David G Angeler

    Full Text Available The adaptive cycle was proposed as a conceptual model to portray patterns of change in complex systems. Despite the model having potential for elucidating change across systems, it has been used mainly as a metaphor, describing system dynamics qualitatively. We use a quantitative approach for testing premises (reorganisation, conservatism, adaptation) in the adaptive cycle, using Baltic Sea phytoplankton communities as an example of such complex system dynamics. Phytoplankton organizes in recurring spring and summer blooms, a well-established paradigm in planktology and succession theory, with characteristic temporal trajectories during blooms that may be consistent with adaptive cycle phases. We used long-term (1994-2011) data and multivariate analysis of community structure to assess key components of the adaptive cycle. Specifically, we tested predictions about: reorganisation: spring and summer blooms comprise distinct community states; conservatism: community trajectories during individual adaptive cycles are conservative; and adaptation: phytoplankton species during blooms change in the long term. All predictions were supported by our analyses. Results suggest that traditional ecological paradigms such as phytoplankton successional models have potential for moving the adaptive cycle from a metaphor to a framework that can improve our understanding of how complex systems organize and reorganize following collapse. Quantifying reorganization, conservatism and adaptation provides opportunities to cope with the intricacies and uncertainties associated with fast ecological change, driven by shifting system controls. Ultimately, combining traditional ecological paradigms with heuristics of complex system dynamics using quantitative approaches may help refine ecological theory and improve our understanding of the resilience of ecosystems.

  5. Stereo-particle image velocimetry uncertainty quantification

    International Nuclear Information System (INIS)

    Bhattacharya, Sayantan; Vlachos, Pavlos P; Charonko, John J

    2017-01-01

    Particle image velocimetry (PIV) measurements are subject to multiple elemental error sources and thus estimating overall measurement uncertainty is challenging. Recent advances have led to a posteriori uncertainty estimation methods for planar two-component PIV. However, no complete methodology exists for uncertainty quantification in stereo PIV. In the current work, a comprehensive framework is presented to quantify the uncertainty stemming from stereo registration error and combine it with the underlying planar velocity uncertainties. The disparity in particle locations of the dewarped images is used to estimate the positional uncertainty of the world coordinate system, which is then propagated to the uncertainty in the calibration mapping function coefficients. Next, the calibration uncertainty is combined with the planar uncertainty fields of the individual cameras through an uncertainty propagation equation and uncertainty estimates are obtained for all three velocity components. The methodology was tested with synthetic stereo PIV data for different light sheet thicknesses, with and without registration error, and also validated with an experimental vortex ring case from the 2014 PIV challenge. A thorough sensitivity analysis was performed to assess the relative impact of the various parameters on the overall uncertainty. The results suggest that in the absence of any disparity, the stereo PIV uncertainty prediction method is more sensitive to the planar uncertainty estimates than to the angle uncertainty, although the latter is not negligible for non-zero disparity. Overall the presented uncertainty quantification framework showed excellent agreement between the error and uncertainty RMS values for both the synthetic and the experimental data and demonstrated reliable uncertainty prediction coverage. This stereo PIV uncertainty quantification framework provides the first comprehensive treatment on the subject and potentially lays foundations applicable to volumetric
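
    As a rough illustration of the kind of propagation the framework describes, the sketch below applies first-order uncertainty propagation to a simplified symmetric stereo reconstruction of the out-of-plane component, w = (u1 - u2) / (2 tan theta). The geometry, the assumption of uncorrelated camera errors, and all numerical values are assumptions for illustration, not the paper's actual equations.

    ```python
    import numpy as np

    def stereo_w_uncertainty(u1, u2, sigma_u1, sigma_u2, theta, sigma_theta):
        """First-order propagation for the out-of-plane component
        w = (u1 - u2) / (2 tan(theta)) of an idealised symmetric stereo setup."""
        dw_du1 = 1.0 / (2.0 * np.tan(theta))
        dw_du2 = -dw_du1
        dw_dtheta = -(u1 - u2) / (2.0 * np.sin(theta) ** 2)
        var_w = ((dw_du1 * sigma_u1) ** 2 + (dw_du2 * sigma_u2) ** 2
                 + (dw_dtheta * sigma_theta) ** 2)
        return np.sqrt(var_w)

    # Example: 0.1 px planar uncertainty per camera, 0.5 deg calibration-angle uncertainty.
    print(stereo_w_uncertainty(u1=5.0, u2=3.0, sigma_u1=0.1, sigma_u2=0.1,
                               theta=np.radians(35.0), sigma_theta=np.radians(0.5)))
    ```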

  6. The Stock Market: Risk vs. Uncertainty.

    Science.gov (United States)

    Griffitts, Dawn

    2002-01-01

    This economics education publication focuses on the U.S. stock market and the risk and uncertainty that an individual faces when investing in the market. The material explains that risk and uncertainty relate to the same underlying concept: randomness. It defines and discusses both concepts and notes that although risk is quantifiable, uncertainty…

  7. Photometric Uncertainties

    Science.gov (United States)

    Zou, Xiao-Duan; Li, Jian-Yang; Clark, Beth Ellen; Golish, Dathon

    2018-01-01

    The OSIRIS-REx spacecraft, launched in September 2016, will study the asteroid Bennu and return a sample from its surface to Earth in 2023. Bennu is a near-Earth carbonaceous asteroid which will provide insight into the formation and evolution of the solar system. OSIRIS-REx will first approach Bennu in August 2018 and will study the asteroid for approximately two years before sampling. OSIRIS-REx will develop its photometric model (including Lommel-Seeliger, ROLO, McEwen, Minnaert and Akimov) of Bennu with OCAM and OVIRS during the Detailed Survey mission phase. The model developed during this phase will be used to photometrically correct the OCAM and OVIRS data. Here we present an analysis of the errors in the photometric corrections. Based on our test data sets, we find: (1) the model uncertainties are only meaningful when computed with the full covariance matrix, because the parameters are highly correlated; (2) no single parameter dominates in any of the models; (3) the model error and the data error contribute comparably to the final correction error; (4) tests of the uncertainty module on synthetic and real data sets show that model performance depends on data coverage and data quality, and these tests gave us a better understanding of how the different models behave in different cases; (5) the Lommel-Seeliger model is more reliable than the others, perhaps because the simulated data are based on it, although the test on real data (SPDIF) also shows a slight advantage for Lommel-Seeliger; ROLO is not reliable for calculating Bond albedo, the uncertainty of the McEwen model is large in most cases, and Akimov behaves unphysically on the SOPIE 1 data; (6) we therefore adopt Lommel-Seeliger as the default choice, a conclusion based mainly on our tests on the SOPIE and IPDIF data.
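
    The point about needing the full parameter covariance can be shown with a generic least-squares fit; the toy phase function, parameter values and noise level below are assumptions standing in for the actual Bennu photometric models, but the propagation step (the full J Sigma J^T versus using only the diagonal of Sigma) illustrates the same idea.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Toy stand-in for a photometric model (not the actual OSIRIS-REx models):
    # reflectance as an exponential phase function with amplitude A and slope b.
    def model(alpha, A, b):
        return A * np.exp(-b * alpha)

    rng = np.random.default_rng(1)
    alpha = np.linspace(5, 90, 40)                       # phase angle, degrees (assumed)
    y = model(alpha, 0.05, 0.02) + rng.normal(0, 0.001, alpha.size)

    popt, pcov = curve_fit(model, alpha, y, p0=[0.04, 0.01])
    A, b = popt

    # Jacobian of the model with respect to (A, b) at each phase angle.
    J = np.column_stack([np.exp(-b * alpha), -A * alpha * np.exp(-b * alpha)])

    # Prediction variance with the full covariance (correlations kept) ...
    var_full = np.einsum('ij,jk,ik->i', J, pcov, J)
    # ... versus discarding the off-diagonal terms (correlations ignored).
    var_diag = np.einsum('ij,jk,ik->i', J, np.diag(np.diag(pcov)), J)

    print(np.sqrt(var_full[:3]), np.sqrt(var_diag[:3]))
    ```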

  8. Suspensions of colloidal particles and aggregates

    CERN Document Server

    Babick, Frank

    2016-01-01

    This book addresses the properties of particles in colloidal suspensions. It has a focus on particle aggregates and the dependency of their physical behaviour on morphological parameters. For this purpose, relevant theories and methodological tools are reviewed and applied to selected examples. The book is divided into four main chapters. The first of them introduces important measurement techniques for the determination of particle size and interfacial properties in colloidal suspensions. A further chapter is devoted to the physico-chemical properties of colloidal particles—highlighting the interfacial phenomena and the corresponding interactions between particles. The book’s central chapter examines the structure-property relations of colloidal aggregates. This comprises concepts to quantify size and structure of aggregates, models and numerical tools for calculating the (light) scattering and hydrodynamic properties of aggregates, and a discussion on van-der-Waals and double layer interactions between ...

  9. Mechanisms and rates of bacterial colonization of sinking aggregates

    DEFF Research Database (Denmark)

    Kiørboe, Thomas; Grossart, H.P.; Ploug, H.

    2002-01-01

    Quantifying the rate at which bacteria colonize aggregates is a key to understanding microbial turnover of aggregates. We used encounter models based on random walk and advection-diffusion considerations to predict colonization rates from the bacteria's motility patterns (swimming speed, tumbling...
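
    For orientation only, a generic diffusion-limited (Smoluchowski-type) estimate of the colonization rate can be written down from a swimmer's motility parameters. The run-and-tumble diffusivity D = v^2 tau / 3, the aggregate radius and the cell concentration below are assumed illustrative values, not the encounter kernels actually used in the paper.

    ```python
    import numpy as np

    # Generic diffusion-limited encounter estimate (a sketch, not the paper's exact model).
    v   = 25e-6      # swimming speed [m/s] (~25 um/s, assumed)
    tau = 1.0        # mean run time between tumbles [s] (assumed)
    D   = v**2 * tau / 3.0          # random-walk diffusivity of a run-tumble swimmer
    a   = 0.5e-3     # aggregate radius [m] (0.5 mm, assumed)
    C   = 1e9        # ambient bacterial concentration [cells/m^3] (10^6 cells/L, assumed)

    beta = 4.0 * np.pi * D * a      # Smoluchowski encounter kernel [m^3/s]
    print(f"D = {D:.2e} m^2/s, kernel = {beta:.2e} m^3/s, "
          f"colonization ~ {beta * C * 3600:.1f} cells per hour")
    ```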

  10. Climate Projections and Uncertainty Communication.

    Science.gov (United States)

    Joslyn, Susan L; LeClerc, Jared E

    2016-01-01

    Lingering skepticism about climate change might be due in part to the way climate projections are perceived by members of the public. Variability between scientists' estimates might give the impression that scientists disagree about the fact of climate change rather than about details concerning the extent or timing. Providing uncertainty estimates might clarify that the variability is due in part to quantifiable uncertainty inherent in the prediction process, thereby increasing people's trust in climate projections. This hypothesis was tested in two experiments. Results suggest that including uncertainty estimates along with climate projections leads to an increase in participants' trust in the information. Analyses explored the roles of time, place, demographic differences (e.g., age, gender, education level, political party affiliation), and initial belief in climate change. Implications are discussed in terms of the potential benefit of adding uncertainty estimates to public climate projections. Copyright © 2015 Cognitive Science Society, Inc.

  11. Uncertainty analysis

    International Nuclear Information System (INIS)

    Thomas, R.E.

    1982-03-01

    An evaluation is made of the suitability of analytical and statistical sampling methods for making uncertainty analyses. The adjoint method is found to be well-suited for obtaining sensitivity coefficients for computer programs involving large numbers of equations and input parameters. For this purpose the Latin Hypercube Sampling method is found to be inferior to conventional experimental designs. The Latin hypercube method can be used to estimate output probability density functions, but requires supplementary rank transformations followed by stepwise regression to obtain uncertainty information on individual input parameters. A simple Cork and Bottle problem is used to illustrate the efficiency of the adjoint method relative to certain statistical sampling methods. For linear models of the form Ax=b it is shown that a complete adjoint sensitivity analysis can be made without formulating and solving the adjoint problem. This can be done either by using a special type of statistical sampling or by reformulating the primal problem and using suitable linear programming software
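
    The remark about linear models Ax = b can be illustrated directly: for a scalar response R = c^T x, a single adjoint solve A^T lambda = c yields all sensitivities dR/db = lambda and dR/dA_ij = -lambda_i x_j. The sketch below uses a random, well-conditioned test matrix (assumed, purely for demonstration) and checks one sensitivity by finite differences.

    ```python
    import numpy as np

    # Adjoint sensitivities for a linear model A x = b with scalar response R = c^T x.
    rng = np.random.default_rng(0)
    n = 5
    A = rng.normal(size=(n, n)) + n * np.eye(n)   # well-conditioned test matrix (assumed)
    b = rng.normal(size=n)
    c = rng.normal(size=n)

    x   = np.linalg.solve(A, b)        # primal solve
    lam = np.linalg.solve(A.T, c)      # one adjoint solve gives all sensitivities

    dR_db = lam                        # dR/db_i = lambda_i
    dR_dA = -np.outer(lam, x)          # dR/dA_ij = -lambda_i * x_j

    # Finite-difference check on one entry of b.
    eps = 1e-6
    b2 = b.copy()
    b2[0] += eps
    fd = (c @ np.linalg.solve(A, b2) - c @ x) / eps
    print(dR_db[0], fd)
    ```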

  12. Model uncertainty in safety assessment

    International Nuclear Information System (INIS)

    Pulkkinen, U.; Huovinen, T.

    1996-01-01

    Uncertainty analyses are an essential part of any risk assessment. Usually the uncertainties of reliability model parameter values are described by probability distributions and the uncertainty is propagated through the whole risk model. In addition to the parameter uncertainties, the assumptions behind the risk models may be based on insufficient experimental observations and the models themselves may not be exact descriptions of the phenomena under analysis. The description and quantification of this type of uncertainty, model uncertainty, is the topic of this report. The model uncertainty is characterized and some approaches to model and quantify it are discussed. The emphasis is on so-called mixture models, which have been applied in PSAs. Some of the possible disadvantages of the mixture model are addressed. In addition to quantitative analyses, qualitative analysis is also discussed briefly. To illustrate the models, two simple case studies on failure intensity and human error modeling are described. In both examples, the analysis is based on simple mixture models, which are observed to apply in PSA analyses. (orig.) (36 refs., 6 figs., 2 tabs.)

  13. Model uncertainty in safety assessment

    Energy Technology Data Exchange (ETDEWEB)

    Pulkkinen, U; Huovinen, T [VTT Automation, Espoo (Finland). Industrial Automation

    1996-01-01

    Uncertainty analyses are an essential part of any risk assessment. Usually the uncertainties of reliability model parameter values are described by probability distributions and the uncertainty is propagated through the whole risk model. In addition to the parameter uncertainties, the assumptions behind the risk models may be based on insufficient experimental observations and the models themselves may not be exact descriptions of the phenomena under analysis. The description and quantification of this type of uncertainty, model uncertainty, is the topic of this report. The model uncertainty is characterized and some approaches to model and quantify it are discussed. The emphasis is on so-called mixture models, which have been applied in PSAs. Some of the possible disadvantages of the mixture model are addressed. In addition to quantitative analyses, qualitative analysis is also discussed briefly. To illustrate the models, two simple case studies on failure intensity and human error modeling are described. In both examples, the analysis is based on simple mixture models, which are observed to apply in PSA analyses. (orig.) (36 refs., 6 figs., 2 tabs.).
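
    A minimal sketch of the mixture idea for a failure intensity is given below: two candidate models, each summarised by a lognormal state-of-knowledge distribution, are combined with subjective weights and the mixture is sampled. The weights, medians and error factors are invented for illustration and are not taken from the report.

    ```python
    import numpy as np

    # Mixture-model sketch for model uncertainty in a failure intensity (illustrative
    # values): two candidate models, each giving a lognormal state of knowledge for
    # the intensity, combined with weights expressing confidence in each model.
    rng = np.random.default_rng(42)

    weights = np.array([0.7, 0.3])            # P(model 1), P(model 2) (assumed)
    medians = np.array([1e-4, 5e-4])          # intensity medians [1/h] (assumed)
    error_factors = np.array([3.0, 10.0])     # 95th/50th percentile ratios (assumed)

    n = 100_000
    which = rng.choice(2, size=n, p=weights)
    sigma = np.log(error_factors) / 1.645     # lognormal sigma from the error factor
    samples = rng.lognormal(np.log(medians[which]), sigma[which])

    print("mixture mean  :", samples.mean())
    print("5th/50th/95th :", np.percentile(samples, [5, 50, 95]))
    ```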

  14. Quantifiers and working memory

    NARCIS (Netherlands)

    Szymanik, J.; Zajenkowski, M.

    2010-01-01

    The paper presents a study examining the role of working memory in quantifier verification. We created situations similar to the span task to compare numerical quantifiers of low and high rank, parity quantifiers and proportional quantifiers. The results enrich and support the data obtained

  15. Quantifiers and working memory

    NARCIS (Netherlands)

    Szymanik, J.; Zajenkowski, M.

    2009-01-01

    The paper presents a study examining the role of working memory in quantifier verification. We created situations similar to the span task to compare numerical quantifiers of low and high rank, parity quantifiers and proportional quantifiers. The results enrich and support the data obtained

  16. Needs of the CSAU uncertainty method

    International Nuclear Information System (INIS)

    Prosek, A.; Mavko, B.

    2000-01-01

    The use of best estimate codes for safety analysis requires quantification of the uncertainties. These uncertainties are inherently linked to the chosen safety analysis methodology. Worldwide, various methods were proposed for this quantification. The purpose of this paper was to identify the needs of the Code Scaling, Applicability, and Uncertainty (CSAU) methodology and then to answer the needs. The specific procedural steps were combined from other methods for uncertainty evaluation and new tools and procedures were proposed. The uncertainty analysis approach and tools were then utilized for confirmatory study. The uncertainty was quantified for the RELAP5/MOD3.2 thermalhydraulic computer code. The results of the adapted CSAU approach to the small-break loss-of-coolant accident (SB LOCA) show that the adapted CSAU can be used for any thermal-hydraulic safety analysis with uncertainty evaluation. However, it was indicated that there are still some limitations in the CSAU approach that need to be resolved. (author)

  17. Methods to Quantify Uncertainty in Human Health Risk Assessment

    National Research Council Canada - National Science Library

    Aurelius, Lea

    1998-01-01

    ...) and other health professionals, such as the Bioenvironmental Engineer, to identify the appropriate use of probabilistic techniques for a site, and the methods by which probabilistic risk assessment...

  18. Quantifying Correlation Uncertainty Risk in Credit Derivatives Pricing

    Directory of Open Access Journals (Sweden)

    Colin Turfus

    2018-04-01

    Full Text Available We propose a simple but practical methodology for the quantification of correlation risk in the context of credit derivatives pricing and credit valuation adjustment (CVA), where the correlation between rates and credit is often uncertain or unmodelled. We take the rates model to be Hull–White (normal) and the credit model to be Black–Karasinski (lognormal). We summarise recent work furnishing highly accurate analytic pricing formulae for credit default swaps (CDS), including with defaultable Libor flows, extending this to the situation where they are capped and/or floored. We also consider the pricing of contingent CDS with an interest rate swap underlying. We derive therefrom explicit expressions showing how the dependence of model prices on the uncertain parameter(s) can be captured in analytic formulae that are readily amenable to computation without recourse to Monte Carlo or lattice-based computation. In so doing, we crucially take into account the impact on model calibration of the uncertain (or unmodelled) parameters.

  19. Quantifying uncertainty and sensitivity in sea ice models

    Energy Technology Data Exchange (ETDEWEB)

    Urrego Blanco, Jorge Rolando [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Hunke, Elizabeth Clare [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Urban, Nathan Mark [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-07-15

    The Los Alamos Sea Ice model has a number of input parameters for which accurate values are not always well established. We conduct a variance-based sensitivity analysis of hemispheric sea ice properties to 39 input parameters. The method accounts for non-linear and non-additive effects in the model.
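
    A variance-based analysis of this type can be sketched with a Saltelli-style Monte Carlo estimator of first-order Sobol indices; the toy Ishigami-like test function below replaces the sea ice model, and the sample sizes are chosen only for a quick demonstration.

    ```python
    import numpy as np

    def sobol_first_order(f, d, n=20_000, seed=0):
        """Saltelli-style estimator of first-order Sobol indices for f on [0,1]^d."""
        rng = np.random.default_rng(seed)
        A = rng.random((n, d))
        B = rng.random((n, d))
        fA, fB = f(A), f(B)
        var = np.var(np.concatenate([fA, fB]))
        S = np.empty(d)
        for i in range(d):
            ABi = A.copy()
            ABi[:, i] = B[:, i]                       # swap one column at a time
            S[i] = np.mean(fB * (f(ABi) - fA)) / var  # first-order index estimator
        return S

    # Toy test function (Ishigami-like, inputs rescaled from [0,1] to [-pi, pi]):
    def toy(x):
        x = -np.pi + 2 * np.pi * x
        return np.sin(x[:, 0]) + 7 * np.sin(x[:, 1]) ** 2 + 0.1 * x[:, 2] ** 4 * np.sin(x[:, 0])

    print(sobol_first_order(toy, d=3))
    ```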

  20. Quantifying uncertainty in estimation of global arthropod species richness

    Czech Academy of Sciences Publication Activity Database

    Hamilton, J. A.; Basset, Y.; Benke, K. K.; Grimbacher, P. S.; Miller, S. E.; Novotný, Vojtěch; Samuelson, G. A.; Stork, N. E.; Weiblen, G. D.; Yen, J. D. L.

    2010-01-01

    Roč. 176, č. 1 (2010), s. 90-95 ISSN 0003-0147 Institutional research plan: CEZ:AV0Z50070508 Keywords : Coleoptera * host specificity * Latin hypercube sampling Subject RIV: EH - Ecology, Behaviour Impact factor: 4.736, year: 2010

  1. Neural network stochastic simulation applied for quantifying uncertainties

    Directory of Open Access Journals (Sweden)

    N Foudil-Bey

    2016-09-01

    Full Text Available Generally, geostatistical simulation methods are used to generate several realizations of physical properties in the sub-surface; these methods are based on variogram analysis and are limited to measuring the correlation between variables at two locations only. In this paper, we propose a simulation of properties based on a supervised neural network trained on the existing drilling data set. The major advantage is that this method does not require a preliminary geostatistical study and takes several points into account. As a result, the geological information and the diverse geophysical data can be combined easily. To do this, we used a feed-forward neural network with a multi-layer perceptron architecture, and then applied the back-propagation algorithm with a conjugate gradient technique to minimize the error of the network output. The learning process can create links between different variables; this relationship can be used for interpolation of the properties on the one hand, or to generate several possible distributions of the physical properties on the other, by changing at each run the random value of an input neuron that was kept constant during the learning period. This method was tested on real data to simulate multiple realizations of the density and the magnetic susceptibility in three dimensions at the mining camp of Val d'Or, Québec (Canada).
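
    A rough sketch of the approach is shown below using scikit-learn's MLPRegressor (its solvers stand in for the paper's conjugate-gradient back-propagation): an extra input neuron is held constant during training and then redrawn at prediction time to generate alternative realizations. The synthetic drill-hole data and network sizes are assumptions for illustration.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)

    # Synthetic "drill-hole" samples: coordinates plus a density-like property (assumed).
    n_obs = 300
    coords = rng.random((n_obs, 3))
    density = 2.7 + 0.3 * np.sin(6 * coords[:, 0]) + 0.1 * rng.normal(size=n_obs)

    noise_col = np.full((n_obs, 1), 0.5)          # extra input held constant during training
    X_train = np.hstack([coords, noise_col])

    net = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=3000, random_state=0)
    net.fit(X_train, density)

    # Vary the extra input at prediction time to draw alternative realizations.
    grid = rng.random((1000, 3))
    realizations = []
    for _ in range(5):
        z = np.full((grid.shape[0], 1), rng.random())
        realizations.append(net.predict(np.hstack([grid, z])))

    print(np.std(np.vstack(realizations), axis=0)[:5])   # spread across realizations
    ```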

  2. Quantifying uncertainty contributions for fibre optic power meter calibrations

    CSIR Research Space (South Africa)

    Nel, M

    2009-09-01

    Full Text Available ...-tight” situation discussed above should therefore not be interpreted as gross “looseness” of the connection. It is possible that the connector-tightening effect contains a small contribution accounted for as part of the overall repeatability of the optical...

  3. Aggregate Measures of Watershed Health from Reconstructed ...

    Science.gov (United States)

    Risk-based indices such as reliability, resilience, and vulnerability (R-R-V) have the potential to serve as watershed health assessment tools. Recent research has demonstrated the applicability of such indices for water quality (WQ) constituents such as total suspended solids and nutrients on an individual basis. However, the calculations can become tedious when time-series data for several WQ constituents have to be evaluated individually. Also, comparisons between locations with different sets of constituent data can prove difficult. In this study, data reconstruction using a relevance vector machine algorithm was combined with dimensionality reduction via variational Bayesian noisy principal component analysis to reconstruct and condense sparse multidimensional WQ data sets into a single time series. The methodology allows incorporation of uncertainty in both the reconstruction and dimensionality-reduction steps. The R-R-V values were calculated using the aggregate time series at multiple locations within two Indiana watersheds, which serve as motivating examples. Results showed that uncertainty present in the reconstructed WQ data set propagates to the aggregate time series and subsequently to the aggregate R-R-V values as well. Locations with different WQ constituents and different standards for impairment were successfully combined to provide aggregate measures of R-R-V values. Comparisons with individual constituent R-R-V values showed that v
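
    One common set of R-R-V definitions (reliability as the fraction of time in compliance, resilience as the chance of recovering from a failing step, vulnerability as the mean exceedance) can be computed from a single aggregate time series as in the sketch below. The synthetic TSS series and the 30 mg/L standard are assumed values, and the exact definitions used in the study may differ.

    ```python
    import numpy as np

    def rrv(series, threshold):
        """Reliability, resilience, vulnerability of a WQ time series against a
        not-to-exceed standard (one common convention; definitions vary by study)."""
        fail = series > threshold
        reliability = 1.0 - fail.mean()
        if fail.any():
            # resilience: chance a failing step is followed by a compliant one
            recoveries = np.sum(fail[:-1] & ~fail[1:])
            resilience = recoveries / fail[:-1].sum() if fail[:-1].any() else 1.0
            vulnerability = np.mean(series[fail] - threshold)   # mean exceedance
        else:
            resilience, vulnerability = 1.0, 0.0
        return reliability, resilience, vulnerability

    rng = np.random.default_rng(3)
    tss = np.exp(rng.normal(np.log(20), 0.6, 365))   # synthetic daily TSS [mg/L] (assumed)
    print(rrv(tss, threshold=30.0))                  # assumed 30 mg/L standard
    ```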

  4. Platelet activation and aggregation

    DEFF Research Database (Denmark)

    Jensen, Maria Sander; Larsen, O H; Christiansen, Kirsten

    2013-01-01

    This study introduces a new laboratory model of whole blood platelet aggregation stimulated by endogenously generated thrombin, and explores this aspect in haemophilia A in which impaired thrombin generation is a major hallmark. The method was established to measure platelet aggregation initiated...

  5. Aggregates from mineral wastes

    Directory of Open Access Journals (Sweden)

    Baic Ireneusz

    2016-01-01

    Full Text Available The growing demand for natural aggregates and the need to limit costs, including the cost of transportation from remote deposits, have increased interest in aggregates from mineral wastes as well as in technologies for their production and recovery. The paper addresses the group of aggregates other than natural ones. A common name is proposed for such material: “alternative aggregates”. The name seems fully justified by the origin and role of these raw materials in comparison with natural aggregates based on gravel and sand as well as crushed stone. The paper presents the characteristics of the market and the basic applications of aggregates produced from mineral wastes generated in the mining, power and metallurgical industries, as well as from material from demolished objects.

  6. Macro Expectations, Aggregate Uncertainty, and Expected Term Premia

    DEFF Research Database (Denmark)

    Dick, Christian D.; Schmeling, Maik; Schrimpf, Andreas

    Based on individual expectations from the Survey of Professional Forecasters, we construct a realtime proxy for expected term premium changes on long-term bonds. We empirically investigate the relation of these bond term premium expectations with expectations about key macroeconomic variables as ...

  7. How to: understanding SWAT model uncertainty relative to measured results

    Science.gov (United States)

    Watershed models are being relied upon to contribute to most policy-making decisions of watershed management, and the demand for an accurate accounting of complete model uncertainty is rising. Generalized likelihood uncertainty estimation (GLUE) is a widely used method for quantifying uncertainty i...
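
    A bare-bones GLUE sketch is shown below with a toy recession model standing in for SWAT: sample parameter sets from prior ranges, score each with an informal likelihood (here Nash-Sutcliffe efficiency), keep the behavioural sets above a threshold, and form likelihood-weighted prediction quantiles. All ranges, thresholds and the toy model are assumptions for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    obs_t = np.arange(50)

    def toy_model(a, k, t=obs_t):                  # stand-in for simulated streamflow
        return a * np.exp(-k * t) + 0.2

    obs = toy_model(0.8, 0.05) + rng.normal(0, 0.02, obs_t.size)   # synthetic observations

    # 1. sample parameter sets from prior ranges
    n = 5000
    a = rng.uniform(0.2, 1.5, n)
    k = rng.uniform(0.01, 0.2, n)
    sims = np.array([toy_model(ai, ki) for ai, ki in zip(a, k)])

    # 2. informal likelihood: Nash-Sutcliffe efficiency
    nse = 1 - np.sum((sims - obs) ** 2, axis=1) / np.sum((obs - obs.mean()) ** 2)

    # 3. keep "behavioural" sets above a threshold and weight by likelihood
    behavioural = nse > 0.7
    w = nse[behavioural] - 0.7
    w /= w.sum()

    # 4. likelihood-weighted prediction quantiles = GLUE uncertainty bounds
    def weighted_quantile(values, weights, q):
        idx = np.argsort(values)
        cw = np.cumsum(weights[idx])
        return np.interp(q, cw / cw[-1], values[idx])

    bounds = np.array([[weighted_quantile(sims[behavioural][:, t], w, q)
                        for q in (0.05, 0.95)] for t in range(obs_t.size)])
    print(bounds[:3])
    ```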

  8. Marine Synechococcus Aggregation

    Science.gov (United States)

    Neuer, S.; Deng, W.; Cruz, B. N.; Monks, L.

    2016-02-01

    Cyanobacteria are considered to play an important role in the oceanic biological carbon pump, especially in oligotrophic regions. But as single cells are too small to sink, their carbon export has to be mediated by aggregate formation and possible consumption by zooplankton producing sinking fecal pellets. Here we report results on the aggregation of the ubiquitous marine pico-cyanobacterium Synechococcus as a model organism. We first investigated the mechanism behind such aggregation by studying the potential role of transparent exopolymeric particles (TEP) and the effects of nutrient (nitrogen or phosphorus) limitation on the TEP production and aggregate formation of these pico-cyanobacteria. We further studied the aggregation and subsequent settling in roller tanks and investigated the effects of the clays kaolinite and bentonite in a series of concentrations. Our results show that despite the lowered growth rates, Synechococcus in nutrient-limited cultures had larger cell-normalized TEP production, formed a greater volume of aggregates, and resulted in higher settling velocities compared to results from replete cultures. In addition, we found that despite their small size and lack of natural ballasting minerals, Synechococcus cells could still form aggregates and sink at measurable velocities in seawater. Clay minerals increased the number and reduced the size of aggregates, and their ballasting effects increased the sinking velocity and carbon export potential of aggregates. For comparison with Synechococcus, we will also present results on the aggregation of the pico-cyanobacterium Prochlorococcus in roller tanks. These results contribute to our understanding of the physiology of marine Synechococcus as well as of their role in the ecology and biogeochemistry of oligotrophic oceans.

  9. Reusing recycled aggregates in structural concrete

    Science.gov (United States)

    Kou, Shicong

    The utilization of recycled aggregates in concrete can minimize environmental impact and reduce the consumption of natural resources in concrete applications. The aim of this thesis is to provide a scientific basis for the possible use of recycled aggregates in structural concrete by conducting a comprehensive programme of laboratory study to gain a better understanding of the mechanical, microstructure and durability properties of concrete produced with recycled aggregates. The study also explored possible techniques to improve the properties of recycled aggregate concrete that is produced with high percentages (≧ 50%) of recycled aggregates. These techniques included: (a) using lower water-to-cement ratios in the concrete mix design; (b) using fly ash as a cement replacement or as an additional mineral admixture in the concrete mixes, and (c) precasting recycled aggregate concrete with steam curing regimes. The characteristics of the recycled aggregates produced both in the laboratory and at a commercially operated pilot construction and demolition (C&D) waste recycling plant were first studied. A mix proportioning procedure was then established to produce six series of concrete mixtures using different percentages of recycled coarse aggregates with and without the use of fly ash. Water-to-cement (binder) ratios of 0.55, 0.50, 0.45 and 0.40 were used. The fresh properties (including slump and bleeding) of recycled aggregate concrete (RAC) were then quantified. The effects of fly ash on the fresh and hardened properties of RAC were then studied and compared with those of RAC prepared without fly ash addition. Furthermore, the effects of steam curing on the hardened properties of RAC were investigated. For micro-structural properties, the interfacial transition zones of the aggregates and the mortar/cement paste were analyzed by SEM and EDX-mapping. Moreover, a detailed set of results on the fracture properties of RAC were obtained. Based on the experimental

  10. Do oil shocks predict economic policy uncertainty?

    Science.gov (United States)

    Rehman, Mobeen Ur

    2018-05-01

    Oil price fluctuations have an influential role in global economic policies for developed as well as emerging countries. I investigate the role of international oil prices, decomposed into structural (i) oil supply shocks, (ii) aggregate demand shocks and (iii) oil-market-specific demand shocks following Kilian (2009), on the economic policy uncertainty of the sampled markets within a structural VAR framework. Economic policy uncertainty, due to its non-linear behavior, is modeled in a regime-switching framework with the decomposed structural oil shocks. Our results highlight that Indian, Spanish and Japanese economic policy uncertainty responds to the global oil price shocks, whereas aggregate demand shocks fail to induce any change. Oil-specific demand shocks are significant only for China and India in the high-volatility state.

  11. LOFT differential pressure uncertainty analysis

    International Nuclear Information System (INIS)

    Evans, R.P.; Biladeau, G.L.; Quinn, P.A.

    1977-03-01

    A performance analysis of the LOFT differential pressure (ΔP) measurement is presented. Along with completed descriptions of test programs and theoretical studies that have been conducted on the ΔP, specific sources of measurement uncertainty are identified, quantified, and combined to provide an assessment of the ability of this measurement to satisfy the SDD 1.4.1C (June 1975) requirement of measurement of differential pressure

  12. Assessing scenario and parametric uncertainties in risk analysis: a model uncertainty audit

    International Nuclear Information System (INIS)

    Tarantola, S.; Saltelli, A.; Draper, D.

    1999-01-01

    In the present study a process of model audit is addressed on a computational model used for predicting maximum radiological doses to humans in the field of nuclear waste disposal. Global uncertainty and sensitivity analyses are employed to assess output uncertainty and to quantify the contribution of parametric and scenario uncertainties to the model output. These tools are of fundamental importance for risk analysis and decision making purposes

  13. Aggregated Computational Toxicology Online Resource

    Data.gov (United States)

    U.S. Environmental Protection Agency — Aggregated Computational Toxicology Online Resource (ACToR) is EPA's online aggregator of all the public sources of chemical toxicity data. ACToR aggregates data...

  14. Recycled aggregates concrete: aggregate and mix properties

    Directory of Open Access Journals (Sweden)

    González-Fonteboa, B.

    2005-09-01

    Full Text Available This study of structural concrete made with recycled concrete aggregate focuses on two issues: 1. The characterization of such aggregate on the Spanish market. This involved conducting standard tests to determine density, water absorption, grading, shape, flakiness and hardness. The results obtained show that, despite the considerable differences with respect to density and water absorption between these and natural aggregates, on the whole recycled aggregate is apt for use in concrete production. 2. Testing to determine the values of basic concrete properties: mix design parameters were established for structural concrete in non-aggressive environments. These parameters were used to produce conventional concrete, and then adjusted to manufacture recycled concrete aggregate (RCA) concrete, in which 50% of the coarse aggregate was replaced by the recycled material. Tests were conducted to determine the physical (density of the fresh and hardened material, water absorption) and mechanical (compressive strength, splitting tensile strength and modulus of elasticity) properties. The results showed that, from the standpoint of its physical and mechanical properties, concrete in which RCA accounted for 50% of the coarse aggregate compared favourably to conventional concrete.


  15. Protein Colloidal Aggregation Project

    Science.gov (United States)

    Oliva-Buisson, Yvette J. (Compiler)

    2014-01-01

    To investigate the pathways and kinetics of protein aggregation to allow accurate predictive modeling of the process and evaluation of potential inhibitors to prevalent diseases including cataract formation, chronic traumatic encephalopathy, Alzheimer's Disease, Parkinson's Disease and others.

  16. Sensitivity-Uncertainty Techniques for Nuclear Criticality Safety

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Forrest B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Rising, Michael Evan [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Alwin, Jennifer Louise [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-08-07

    The sensitivity and uncertainty analysis course will introduce students to keff sensitivity data, cross-section uncertainty data, how keff sensitivity data and keff uncertainty data are generated and how they can be used. Discussion will include how sensitivity/uncertainty data can be used to select applicable critical experiments, to quantify a defensible margin to cover validation gaps and weaknesses, and in development of upper subcritical limits.

  17. Uncertainty and measurement

    International Nuclear Information System (INIS)

    Landsberg, P.T.

    1990-01-01

    This paper explores how the quantum mechanics uncertainty relation can be considered to result from measurements. A distinction is drawn between the uncertainties obtained by scrutinising experiments and the standard deviation type of uncertainty definition used in quantum formalism. (UK)

  18. Human perception and the uncertainty principle

    International Nuclear Information System (INIS)

    Harney, R.C.

    1976-01-01

    The concept of the uncertainty principle that position and momentum cannot be simultaneously specified to arbitrary accuracy is somewhat difficult to reconcile with experience. This note describes order-of-magnitude calculations which quantify the inadequacy of human perception with regards to direct observation of the breakdown of the trajectory concept implied by the uncertainty principle. Even with the best optical microscope, human vision is inadequate by three orders of magnitude. 1 figure
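
    The flavour of such an order-of-magnitude argument can be reproduced with a few lines of arithmetic; the resolution limit, particle mass and the use of the bound dx * dp >= hbar/2 below are generic illustrative choices, not necessarily the figures used in the note.

    ```python
    import numpy as np

    # Order-of-magnitude check (illustrative numbers): the smallest separation an
    # optical microscope resolves is ~0.2 um, so the best position uncertainty an
    # observer can claim is dx ~ 2e-7 m. For a ~1 um dust grain (m ~ 1e-15 kg,
    # assumed), the quantum limit on momentum and velocity uncertainty is tiny:
    hbar = 1.055e-34        # J s
    dx   = 2e-7             # m, optical resolution limit (assumed)
    m    = 1e-15            # kg, assumed grain mass

    dp = hbar / (2 * dx)    # Heisenberg lower bound on momentum uncertainty
    dv = dp / m
    print(f"dp >= {dp:.1e} kg m/s  ->  dv >= {dv:.1e} m/s")
    ```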

  19. Evaluation of uncertainty of adaptive radiation therapy

    International Nuclear Information System (INIS)

    Garcia Molla, R.; Gomez Martin, C.; Vidueira, L.; Juan-Senabre, X.; Garcia Gomez, R.

    2013-01-01

    This work is part of the tests to be performed prior to acceptance into clinical practice. The uncertainties of adaptive radiotherapy, into which the study is divided, can be split into two large parts: dosimetry on the CBCT and RDI. At each stage their uncertainties are quantified and, from the total, a level of action may be obtained from which it would be reasonable to adapt the plan. (Author)

  20. Aiding alternatives assessment with an uncertainty-tolerant hazard scoring method.

    Science.gov (United States)

    Faludi, Jeremy; Hoang, Tina; Gorman, Patrick; Mulvihill, Martin

    2016-11-01

    This research developed a single-score system to simplify and clarify decision-making in chemical alternatives assessment, accounting for uncertainty. Today, assessing alternatives to hazardous constituent chemicals is a difficult task: rather than comparing alternatives by a single definitive score, many independent toxicological variables must be considered at once, and data gaps are rampant. Thus, most hazard assessments are only comprehensible to toxicologists, but business leaders and politicians need simple scores to make decisions. In addition, they must balance hazard against other considerations, such as product functionality, and they must be aware of the high degrees of uncertainty in chemical hazard data. This research proposes a transparent, reproducible method to translate eighteen hazard endpoints into a simple numeric score with quantified uncertainty, alongside a similar product functionality score, to aid decisions between alternative products. The scoring method uses Clean Production Action's GreenScreen as a guide, but with a different method of score aggregation. It provides finer differentiation between scores than GreenScreen's four-point scale, and it displays uncertainty quantitatively in the final score. Displaying uncertainty also illustrates which alternatives are early in product development versus well-defined commercial products. This paper tested the proposed assessment method through a case study in the building industry, assessing alternatives to spray polyurethane foam insulation containing methylene diphenyl diisocyanate (MDI). The new hazard scoring method successfully identified trade-offs between different alternatives, showing finer resolution than GreenScreen Benchmarking. Sensitivity analysis showed that different weighting schemes in hazard scores had almost no effect on alternatives ranking, compared to uncertainty from data gaps. Copyright © 2016 Elsevier Ltd. All rights reserved.
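
    The idea of a single score with quantified uncertainty can be sketched as a Monte Carlo aggregation over endpoint score ranges, with data gaps represented as maximally wide ranges. The endpoints, ranges and equal weights below are invented for illustration and do not reproduce the paper's GreenScreen-based scoring rules.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)

    # (low, high) plausible score range per endpoint, each on a 0-1 scale (assumed).
    endpoints = {
        "carcinogenicity":      (0.6, 0.8),
        "acute_toxicity":       (0.2, 0.3),
        "endocrine_disruption": (0.0, 1.0),   # data gap -> maximal uncertainty
        "persistence":          (0.4, 0.5),
        "bioaccumulation":      (0.1, 0.2),
    }
    weights = np.ones(len(endpoints)) / len(endpoints)   # equal weights (assumed)

    n = 50_000
    draws = np.column_stack([rng.uniform(lo, hi, n) for lo, hi in endpoints.values()])
    total = draws @ weights

    print(f"hazard score = {total.mean():.2f} +/- {total.std():.2f} "
          f"(90% interval {np.percentile(total, 5):.2f}-{np.percentile(total, 95):.2f})")
    ```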

  1. Quantifiers for quantum logic

    OpenAIRE

    Heunen, Chris

    2008-01-01

    We consider categorical logic on the category of Hilbert spaces. More generally, in fact, any pre-Hilbert category suffices. We characterise closed subobjects, and prove that they form orthomodular lattices. This shows that quantum logic is just an incarnation of categorical logic, enabling us to establish an existential quantifier for quantum logic, and conclude that there cannot be a universal quantifier.

  2. The uncertainties in estimating measurement uncertainties

    International Nuclear Information System (INIS)

    Clark, J.P.; Shull, A.H.

    1994-01-01

    All measurements include some error. Whether measurements are used for accountability, environmental programs or process support, they are of little value unless accompanied by an estimate of the measurement's uncertainty. This fact is often overlooked by the individuals who need measurements to make decisions. This paper will discuss the concepts of measurement, measurement errors (accuracy or bias, and precision or random error), physical and error models, measurement control programs, examples of measurement uncertainty, and uncertainty as related to measurement quality. Measurements are comparisons of unknowns to knowns, estimates of some true value plus uncertainty, and are no better than the standards to which they are compared. Direct comparisons of unknowns that match the composition of known standards will normally have small uncertainties. In the real world, measurements usually involve indirect comparisons of significantly different materials (e.g., measuring a physical property of a chemical element in a sample having a matrix that is significantly different from the calibration standards' matrix). Consequently, there are many sources of error involved in measurement processes that can affect the quality of a measurement and its associated uncertainty. How the uncertainty estimates are determined and what they mean is as important as the measurement. The process of calculating the uncertainty of a measurement itself has uncertainties that must be handled correctly. Examples of chemistry laboratory measurements will be reviewed in this report and recommendations made for improving measurement uncertainties

  3. Forecasting Uncertainty in Electricity Smart Meter Data by Boosting Additive Quantile Regression

    KAUST Repository

    Taieb, Souhaib Ben

    2016-03-02

    Smart electricity meters are currently deployed in millions of households to collect detailed individual electricity consumption data. Compared with traditional electricity data based on aggregated consumption, smart meter data are much more volatile and less predictable. There is a need within the energy industry for probabilistic forecasts of household electricity consumption to quantify the uncertainty of future electricity demand in order to undertake appropriate planning of generation and distribution. We propose to estimate an additive quantile regression model for a set of quantiles of the future distribution using a boosting procedure. By doing so, we can benefit from flexible and interpretable models, which include an automatic variable selection. We compare our approach with three benchmark methods on both aggregated and disaggregated scales using a smart meter data set collected from 3639 households in Ireland at 30-min intervals over a period of 1.5 years. The empirical results demonstrate that our approach based on quantile regression provides better forecast accuracy for disaggregated demand, while the traditional approach based on a normality assumption (possibly after an appropriate Box-Cox transformation) is a better approximation for aggregated demand. These results are particularly useful since more energy data will become available at the disaggregated level in the future.
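
    The quantile-regression idea can be sketched with scikit-learn's gradient boosting, which supports a pinball (quantile) loss; this stands in for the boosted additive quantile regression used in the paper, and the synthetic half-hourly load data below are assumptions rather than the Irish smart meter set.

    ```python
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(5)
    n = 5000
    hour = rng.uniform(0, 24, n)
    temp = rng.normal(12, 6, n)
    X = np.column_stack([hour, temp])
    # Skewed synthetic household demand [kW] driven by time of day and temperature.
    load = (0.6 + 0.4 * np.sin(2 * np.pi * hour / 24)
            + 0.02 * np.maximum(0, 15 - temp) + rng.gamma(2.0, 0.1, n))

    quantiles = [0.1, 0.5, 0.9]
    models = {q: GradientBoostingRegressor(loss="quantile", alpha=q,
                                           n_estimators=300, max_depth=3).fit(X, load)
              for q in quantiles}

    X_new = np.array([[18.0, 5.0]])    # 6 pm, 5 degrees C
    print({q: float(m.predict(X_new)[0]) for q, m in models.items()})
    ```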

  4. Observing Convective Aggregation

    Science.gov (United States)

    Holloway, Christopher E.; Wing, Allison A.; Bony, Sandrine; Muller, Caroline; Masunaga, Hirohiko; L'Ecuyer, Tristan S.; Turner, David D.; Zuidema, Paquita

    2017-11-01

    Convective self-aggregation, the spontaneous organization of initially scattered convection into isolated convective clusters despite spatially homogeneous boundary conditions and forcing, was first recognized and studied in idealized numerical simulations. While there is a rich history of observational work on convective clustering and organization, there have been only a few studies that have analyzed observations to look specifically for processes related to self-aggregation in models. Here we review observational work in both of these categories and motivate the need for more of this work. We acknowledge that self-aggregation may appear to be far removed from observed convective organization in terms of time scales, initial conditions, initiation processes, and mean state extremes, but we argue that these differences vary greatly across the diverse range of model simulations in the literature and that these comparisons are already offering important insights into real tropical phenomena. Some preliminary new findings are presented, including results showing that a self-aggregation simulation with square geometry has too broad a distribution of humidity and is too dry in the driest regions when compared with radiosonde records from Nauru, while an elongated channel simulation has realistic representations of atmospheric humidity and its variability. We discuss recent work increasing our understanding of how organized convection and climate change may interact, and how model discrepancies related to this question are prompting interest in observational comparisons. We also propose possible future directions for observational work related to convective aggregation, including novel satellite approaches and a ground-based observational network.

  5. Fly ash aggregates. Vliegaskunstgrind

    Energy Technology Data Exchange (ETDEWEB)

    1983-03-01

    A study has been carried out into artificial aggregates made from fly ash, 'fly ash aggregates'. Attention has been drawn to the production of fly ash aggregates in the Netherlands as a way to obviate the need for disposal of fly ash. Typical process steps for the manufacturing of fly ash aggregates are the agglomeration and the bonding of fly ash particles. Agglomeration techniques are subdivided into agitation and compaction, bonding methods into sintering, hydrothermal and 'cold' bonding. In sintering no bonding agent is used; the fly ash particles are more or less welded together. Sintering is in general performed at a temperature higher than 900 deg C. In hydrothermal processes lime reacts with fly ash to form a crystalline hydrate at temperatures between 100 and 250 deg C at saturated steam pressure. Not only lime as such but also portland cement can be used as a lime source. Cold bonding processes rely on the reaction of fly ash with lime or cement at temperatures between 0 and 100 deg C; the pozzolanic properties of fly ash are used. Where cement is applied, this bonding agent itself also contributes to the strength development of the artificial aggregate. Besides the use of lime and cement, several processes are known which make use of lime-containing wastes such as spray dry absorption desulfurization residues or fluid bed coal combustion residues. (In Dutch)

  6. Managing project risks and uncertainties

    Directory of Open Access Journals (Sweden)

    Mike Mentis

    2015-01-01

    Full Text Available This article considers threats to a project slipping on budget, schedule and fit-for-purpose. Threat is used here as the collective for risks (quantifiable bad things that can happen) and uncertainties (poorly or not quantifiable bad possible events). Based on experience with projects in developing countries, this review considers that (a) project slippage is due to uncertainties rather than risks, (b) while eventuation of some bad things is beyond control, managed execution and oversight are still the primary means to keeping within budget, on time and fit-for-purpose, (c) improving project delivery is less about bigger and more complex and more about coordinated focus, effectiveness and developing thought-out heuristics, and (d) projects take longer and cost more partly because threat identification is inaccurate, the scope of identified threats is too narrow, and the threat assessment product is not integrated into overall project decision-making and execution. Almost by definition, what is poorly known is likely to cause problems. Yet it is not just the unquantifiability and intangibility of uncertainties causing project slippage, but that they are insufficiently taken into account in project planning and execution that causes budget and time overruns. Improving project performance requires purpose-driven and managed deployment of scarce seasoned professionals. This can be aided with independent oversight by deeply experienced panelists who contribute technical insights and can potentially show that diligence is seen to be done.

  7. Habitable zone dependence on stellar parameter uncertainties

    International Nuclear Information System (INIS)

    Kane, Stephen R.

    2014-01-01

    An important property of exoplanetary systems is the extent of the Habitable Zone (HZ), defined as that region where water can exist in a liquid state on the surface of a planet with sufficient atmospheric pressure. Both ground- and space-based observations have revealed a plethora of confirmed exoplanets and exoplanetary candidates, most notably from the Kepler mission using the transit detection technique. Many of these detected planets lie within the predicted HZ of their host star. However, as is the case with the derived properties of the planets themselves, the HZ boundaries depend on how well we understand the host star. Here we quantify the dependence of HZ boundary uncertainties on the parameter uncertainties of the host star. We examine the distribution of stellar parameter uncertainties from confirmed exoplanet hosts and Kepler candidate hosts and translate these into HZ boundary uncertainties. We apply this to several known systems with an HZ planet to determine the uncertainty in their HZ status.

  8. Habitable zone dependence on stellar parameter uncertainties

    Energy Technology Data Exchange (ETDEWEB)

    Kane, Stephen R., E-mail: skane@sfsu.edu [Department of Physics and Astronomy, San Francisco State University, 1600 Holloway Avenue, San Francisco, CA 94132 (United States)

    2014-02-20

    An important property of exoplanetary systems is the extent of the Habitable Zone (HZ), defined as that region where water can exist in a liquid state on the surface of a planet with sufficient atmospheric pressure. Both ground- and space-based observations have revealed a plethora of confirmed exoplanets and exoplanetary candidates, most notably from the Kepler mission using the transit detection technique. Many of these detected planets lie within the predicted HZ of their host star. However, as is the case with the derived properties of the planets themselves, the HZ boundaries depend on how well we understand the host star. Here we quantify the dependence of HZ boundary uncertainties on the parameter uncertainties of the host star. We examine the distribution of stellar parameter uncertainties from confirmed exoplanet hosts and Kepler candidate hosts and translate these into HZ boundary uncertainties. We apply this to several known systems with an HZ planet to determine the uncertainty in their HZ status.
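
    A Monte Carlo version of this propagation is easy to sketch: draw stellar effective temperature and radius from their quoted uncertainties, convert to luminosity, and map to HZ boundary distances d = sqrt(L / S_eff). The fixed effective-flux coefficients and the stellar values below are assumed, roughly solar-type numbers rather than the temperature-dependent boundaries used in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    n = 100_000

    teff = rng.normal(5600, 100, n)          # K, assumed +/- 100 K
    rad  = rng.normal(0.95, 0.05, n)         # R_sun, assumed +/- 5%

    lum = rad**2 * (teff / 5772.0)**4        # L/L_sun from Stefan-Boltzmann scaling
    s_inner, s_outer = 1.1, 0.36             # effective stellar fluxes (assumed constants)

    d_inner = np.sqrt(lum / s_inner)         # AU
    d_outer = np.sqrt(lum / s_outer)

    for name, d in (("inner", d_inner), ("outer", d_outer)):
        print(f"{name}: {d.mean():.3f} AU +/- {d.std():.3f} AU")
    ```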

  9. Fuzzy Uncertainty Evaluation for Fault Tree Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Ki Beom; Shim, Hyung Jin [Seoul National University, Seoul (Korea, Republic of); Jae, Moo Sung [Hanyang University, Seoul (Korea, Republic of)

    2015-05-15

    The traditional probabilistic approach can calculate relatively accurate results. However, it requires a long time because of the repetitive computation of the MC method. In addition, when informative data for statistical analysis are not sufficient or some events are mainly caused by human error, the probabilistic approach may not be possible because the uncertainties of these events are difficult to express by probabilistic distributions. In order to reduce the computation time and to quantify the uncertainties of top events when there are basic events whose uncertainties are difficult to express by probabilistic distributions, fuzzy uncertainty propagation based on fuzzy set theory can be applied. In this paper, we develop a fuzzy uncertainty propagation code and apply it to the fault tree of the core damage accident after the large loss of coolant accident (LLOCA). The fuzzy uncertainty propagation code is implemented and tested for the fault tree of the radiation release accident. We apply this code to the fault tree of the core damage accident after the LLOCA in three cases and compare the results with those computed by probabilistic uncertainty propagation using the MC method. The results obtained by the fuzzy uncertainty propagation can be calculated in a relatively short time and cover the results obtained by the probabilistic uncertainty propagation.
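
    The alpha-cut mechanics behind fuzzy propagation can be illustrated on a tiny fault tree (TOP = A AND (B OR C)) with triangular fuzzy probabilities for the basic events; because the gate functions of a coherent tree are monotone, each alpha-cut interval maps end-to-end. The tree structure and the numbers are illustrative and not taken from the LLOCA model.

    ```python
    import numpy as np

    def alpha_cut(tri, alpha):
        """Interval of a triangular fuzzy number (low, mode, high) at a given alpha."""
        lo, mode, hi = tri
        return lo + alpha * (mode - lo), hi - alpha * (hi - mode)

    def top(a, b, c):                      # TOP = A AND (B OR C), independent events
        return a * (1 - (1 - b) * (1 - c))

    # Triangular fuzzy basic-event probabilities (illustrative values).
    A = (1e-3, 2e-3, 5e-3)
    B = (1e-2, 2e-2, 4e-2)
    C = (5e-3, 1e-2, 3e-2)

    for alpha in (0.0, 0.5, 1.0):
        (a_lo, a_hi), (b_lo, b_hi), (c_lo, c_hi) = (alpha_cut(x, alpha) for x in (A, B, C))
        # Monotone gate functions: interval ends map directly to interval ends.
        print(f"alpha={alpha:.1f}: top event in "
              f"[{top(a_lo, b_lo, c_lo):.2e}, {top(a_hi, b_hi, c_hi):.2e}]")
    ```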

  10. Platelet aggregation following trauma

    DEFF Research Database (Denmark)

    Windeløv, Nis A; Sørensen, Anne M; Perner, Anders

    2014-01-01

    We aimed to elucidate platelet function in trauma patients, as it is pivotal for hemostasis yet remains scarcely investigated in this population. We conducted a prospective observational study of platelet aggregation capacity in 213 adult trauma patients on admission to an emergency department (ED...... severity score (ISS) was 17; 14 (7%) patients received 10 or more units of red blood cells in the ED (massive transfusion); 24 (11%) patients died within 28 days of trauma: 17 due to cerebral injuries, four due to exsanguination, and three from other causes. No significant association was found between...... aggregation response and ISS. Higher TRAP values were associated with death due to cerebral injuries (P 

  11. Optical network scaling: roles of spectral and spatial aggregation.

    Science.gov (United States)

    Arık, Sercan Ö; Ho, Keang-Po; Kahn, Joseph M

    2014-12-01

    As the bit rates of routed data streams exceed the throughput of single wavelength-division multiplexing channels, spectral and spatial traffic aggregation become essential for optical network scaling. These aggregation techniques reduce network routing complexity by increasing spectral efficiency to decrease the number of fibers, and by increasing switching granularity to decrease the number of switching components. Spectral aggregation yields a modest decrease in the number of fibers but a substantial decrease in the number of switching components. Spatial aggregation yields a substantial decrease in both the number of fibers and the number of switching components. To quantify routing complexity reduction, we analyze the number of multi-cast and wavelength-selective switches required in a colorless, directionless and contentionless reconfigurable optical add-drop multiplexer architecture. Traffic aggregation has two potential drawbacks: reduced routing power and increased switching component size.

  12. Results from the Application of Uncertainty Methods in the CSNI Uncertainty Methods Study (UMS)

    International Nuclear Information System (INIS)

    Glaeser, H.

    2008-01-01

    Within licensing procedures there is the incentive to replace the conservative requirements for code application by a 'best estimate' concept supplemented by an uncertainty analysis to account for predictive uncertainties of code results. Methods have been developed to quantify these uncertainties. The Uncertainty Methods Study (UMS) Group, following a mandate from CSNI, has compared five methods for calculating the uncertainty in the predictions of advanced 'best estimate' thermal-hydraulic codes. Most of the methods identify and combine input uncertainties. The major differences between the predictions of the methods came from the choice of uncertain parameters and the quantification of the input uncertainties, i.e. the wideness of the uncertainty ranges. Therefore, suitable experimental and analytical information has to be selected to specify these uncertainty ranges or distributions. After the closure of the Uncertainty Method Study (UMS) and after the report was issued, comparison calculations of experiment LSTF-SB-CL-18 were performed by the University of Pisa using different versions of the RELAP5 code. It turned out that the version used by two of the participants calculated a 170 K higher peak clad temperature compared with other versions using the same input deck. This may contribute to the differences in the upper limit of the uncertainty ranges.

  13. Uncertainty in social dilemmas

    OpenAIRE

    Kwaadsteniet, Erik Willem de

    2007-01-01

    This dissertation focuses on social dilemmas, and more specifically, on environmental uncertainty in these dilemmas. Real-life social dilemma situations are often characterized by uncertainty. For example, fishermen mostly do not know the exact size of the fish population (i.e., resource size uncertainty). Several researchers have therefore asked themselves the question as to how such uncertainty influences people’s choice behavior. These researchers have repeatedly concluded that uncertainty...

  14. Connected Car: Quantified Self becomes Quantified Car

    Directory of Open Access Journals (Sweden)

    Melanie Swan

    2015-02-01

    Full Text Available The automotive industry could be facing a situation of profound change and opportunity in the coming decades. There are a number of influencing factors such as increasing urban and aging populations, self-driving cars, 3D parts printing, energy innovation, and new models of transportation service delivery (Zipcar, Uber). The connected car means that vehicles are now part of the connected world, continuously Internet-connected, generating and transmitting data, which on the one hand can be helpfully integrated into applications, like real-time traffic alerts broadcast to smartwatches, but also raises security and privacy concerns. This paper explores the automotive connected world, and describes five killer QS (Quantified Self)-auto sensor applications that link quantified-self sensors (sensors that measure the personal biometrics of individuals, like heart rate) and automotive sensors (sensors that measure driver and passenger biometrics or quantitative automotive performance metrics, like speed and braking activity). The applications are fatigue detection, real-time assistance for parking and accidents, anger management and stress reduction, keyless authentication and digital identity verification, and DIY diagnostics. These kinds of applications help to demonstrate the benefit of connected world data streams in the automotive industry and beyond where, more fundamentally for human progress, the automation of both physical and now cognitive tasks is underway.

  15. Aggregation and pH-temperature phase behavior for aggregates of an IgG2 antibody.

    Science.gov (United States)

    Sahin, Erinc; Weiss, William F; Kroetsch, Andrew M; King, Kevin R; Kessler, R Kendall; Das, Tapan K; Roberts, Christopher J

    2012-05-01

    Monomer unfolding and thermally accelerated aggregation kinetics to produce soluble oligomers or insoluble macroscopic aggregates were characterized as a function of pH for an IgG2 antibody using differential scanning calorimetry (DSC) and size-exclusion chromatography (SEC). Aggregate size was quantified via laser light scattering, and aggregate solubility via turbidity and visual inspection. Interestingly, nonnative oligomers were soluble at pH 5.5 above approximately 15°C, but converted reversibly to visible/insoluble particles at lower temperatures. Lower pH values yielded only soluble aggregates, whereas higher pH resulted in insoluble aggregates, regardless of the solution temperature. Unlike the growing body of literature that supports the three-endotherm model of IgG1 unfolding in DSC, the results here also illustrate limitations of that model for other monoclonal antibodies. Comparison of DSC with monomer loss (via SEC) from samples during thermal scanning indicates that the least conformationally stable domain is not the most aggregation prone, and that a number of the domains remain intact within the constituent monomers of the resulting aggregates. This highlights continued challenges with predicting a priori which domain(s) or thermal transition(s) is(are) most relevant for product stability with respect to aggregation. Copyright © 2012 Wiley Periodicals, Inc.

  16. Explicitly integrating parameter, input, and structure uncertainties into Bayesian Neural Networks for probabilistic hydrologic forecasting

    KAUST Repository

    Zhang, Xuesong; Liang, Faming; Yu, Beibei; Zong, Ziliang

    2011-01-01

    Estimating the uncertainty of hydrologic forecasts is valuable to water resources and other relevant decision-making processes. Recently, Bayesian Neural Networks (BNNs) have proven to be powerful tools for quantifying uncertainty in streamflow

  17. Erosion of dust aggregates

    NARCIS (Netherlands)

    Seizinger, A.; Krijt, S.; Kley, W.

    2013-01-01

    Aims: The aim of this work is to gain a deeper insight into how strongly different aggregate types are affected by erosion. In particular, it is important to study the influence of the velocity of the impacting projectiles. We also want to provide models for dust growth in protoplanetary disks with simple

  18. Aggregates, broccoli and cauliflower

    Science.gov (United States)

    Grey, Francois; Kjems, Jørgen K.

    1989-09-01

    Naturally grown structures with fractal characters like broccoli and cauliflower are discussed and compared with DLA-type aggregates. It is suggested that the branching density can be used to characterize the growth process and an experimental method to determine this parameter is proposed.

  19. Evaluating Predictive Uncertainty of Hyporheic Exchange Modelling

    Science.gov (United States)

    Chow, R.; Bennett, J.; Dugge, J.; Wöhling, T.; Nowak, W.

    2017-12-01

    This complex model then serves as the basis to compare simpler model structures. Through this approach, predictive uncertainty can be quantified relative to a known reference solution.

  20. Uncertainty vs. Information (Invited)

    Science.gov (United States)

    Nearing, Grey

    2017-04-01

    Information theory is the branch of logic that describes how rational epistemic states evolve in the presence of empirical data (Knuth, 2005), and any logic of science is incomplete without such a theory. Developing a formal philosophy of science that recognizes this fact results in essentially trivial solutions to several longstanding problems that are generally considered intractable, including: • Alleviating the need for any likelihood function or error model. • Derivation of purely logical falsification criteria for hypothesis testing. • Specification of a general quantitative method for process-level model diagnostics. More generally, I make the following arguments: 1. Model evaluation should not proceed by quantifying and/or reducing error or uncertainty, and instead should be approached as a problem of ensuring that our models contain as much information as our experimental data. I propose that the latter is the only question a scientist actually has the ability to ask. 2. Instead of building geophysical models as solutions to differential equations that represent conservation laws, we should build models as maximum entropy distributions constrained by conservation symmetries. This will allow us to derive predictive probabilities directly from first principles. Knuth, K. H. (2005) 'Lattice duality: The origin of probability and entropy', Neurocomputing, 67, pp. 245-274.
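
    As a minimal, hedged illustration of the maximum-entropy idea invoked above (not Nearing's actual method), the sketch below builds a discrete maximum-entropy distribution subject to a single mean constraint; the support, target mean, and solver bracket are assumptions chosen only for demonstration.

```python
import numpy as np
from scipy.optimize import brentq

# Support of a discrete variable (illustrative) and a target mean constraint.
x = np.arange(1, 7)          # e.g., faces of a die
target_mean = 4.5            # assumed constraint value

# A maximum-entropy distribution under a mean constraint has exponential form
# p_i proportional to exp(lam * x_i); solve for the lam that matches the mean.
def mean_given_lam(lam):
    w = np.exp(lam * x)
    p = w / w.sum()
    return p @ x - target_mean

lam = brentq(mean_given_lam, -10.0, 10.0)
p = np.exp(lam * x)
p /= p.sum()
print("lambda =", round(lam, 4))
print("maxent probabilities:", np.round(p, 4), "mean =", round(p @ x, 4))
```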

  1. Aggregated wind power generation probabilistic forecasting based on particle filter

    International Nuclear Information System (INIS)

    Li, Pai; Guan, Xiaohong; Wu, Jiang

    2015-01-01

    Highlights: • A new method for probabilistic forecasting of aggregated wind power generation. • A dynamic system is established based on a numerical weather prediction model. • The new method handles the non-Gaussian and time-varying wind power uncertainties. • Particle filter is applied to forecast predictive densities of wind generation. - Abstract: The probability distribution of aggregated wind power generation in a region is one of the important issues for daily power system operation. This paper presents a novel method to forecast the predictive densities of the aggregated wind power generation from several geographically distributed wind farms, considering the non-Gaussian and non-stationary characteristics in wind power uncertainties. Based on a mesoscale numerical weather prediction model, a dynamic system is established to formulate the relationship between the atmospheric and near-surface wind fields of geographically distributed wind farms. A recursively backtracking framework based on the particle filter is applied to estimate the atmospheric state with the near-surface wind power generation measurements, and to forecast the possible samples of the aggregated wind power generation. The predictive densities of the aggregated wind power generation are then estimated from these predicted samples by a kernel density estimator. In case studies, the new method is tested on a system of nine wind farms in the Midwestern United States. The testing results show that the new method can provide competitive interval forecasts for the aggregated wind power generation compared with conventional statistics-based models, which validates the effectiveness of the new method.
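
    A minimal sketch of the final step described above, turning predicted samples of aggregated wind power into a predictive density and an interval forecast with a kernel density estimator; the sample values, units, and bandwidth choice are synthetic placeholders, not the paper's data or model.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
# Synthetic predicted samples of aggregated wind power (MW), e.g. from a particle filter
samples = np.concatenate([rng.normal(120, 15, 400), rng.normal(180, 10, 100)])

kde = gaussian_kde(samples)                     # predictive density estimate
grid = np.linspace(samples.min(), samples.max(), 200)
density = kde(grid)
mode = grid[np.argmax(density)]                 # most likely aggregated output

# Central 90% interval forecast taken directly from the predicted samples
lo, hi = np.percentile(samples, [5, 95])
print(f"density mode ≈ {mode:.1f} MW, 90% interval forecast: [{lo:.1f}, {hi:.1f}] MW")
```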

  2. How should epistemic uncertainty in modelling water resources management problems shape evaluations of their operations?

    Science.gov (United States)

    Dobson, B.; Pianosi, F.; Reed, P. M.; Wagener, T.

    2017-12-01

    In previous work, we have found that water supply companies are typically hesitant to use reservoir operation tools to inform their release decisions. We believe that this is, in part, due to a lack of faith in the fidelity of the optimization exercise with regards to its ability to represent the real world. In an attempt to quantify this, recent literature has studied the impact on performance from uncertainty arising in: forcing (e.g. reservoir inflows), parameters (e.g. parameters for the estimation of evaporation rate) and objectives (e.g. worst first percentile or worst case). We suggest that there is also epistemic uncertainty in the choices made during model creation, for example in the formulation of an evaporation model or aggregating regional storages. We create `rival framings' (a methodology originally developed to demonstrate the impact of uncertainty arising from alternate objective formulations), each with different modelling choices, and determine their performance impacts. We identify the Pareto approximate set of policies for several candidate formulations and then make them compete with one another in a large ensemble re-evaluation in each other's modelled spaces. This enables us to distinguish the impacts of different structural changes in the model used to evaluate system performance in an effort to generalize the validity of the optimized performance expectations.

  3. Sans study of asphaltene aggregation

    Energy Technology Data Exchange (ETDEWEB)

    Overfield, R.E.; Sheu, E.Y.; Sinha, S.K.; Liang, K.S. (Esso Resources Canada Ltd., 339-50 Avenue S.E., Calgary, Alberta T2G 2B3 (CA))

    1988-06-01

    The colloidal properties of asphaltenes have long been recognized from peculiarities in their solubility and colligative properties. A layered micellar model of asphaltenes was proposed by others in which a highly condensed alkyl aromatic formed the central part, and molecules of decreasingly aromatic character (resins) clustered around them. Numerous studies, based on a variety of techniques such as ultracentrifugation and electron microscopy, indicated a particulate nature for asphaltenes with sizes of 20-40 A in diameter. Others have proposed a refined model based on x-ray diffraction and small angle scattering. In this model, interactions between flat sheets of condensed aromatic rings form the central "crystallite" part of a spherical particle, with the outer part being comprised of the aliphatic portions of the same molecules. These particles are bunched together with some degree of entanglement into "micelles". Concentration- and solvent-dependent radii of gyration, ranging from 30-50 A, were reported. The aggregation creates a good deal of uncertainty as to the true molecular size or weight of asphaltenes. Neutron scattering offers novel contrast relative to light scattering (refractive index) and x-ray scattering (electron density). This is because the scattering length of the proton is negative, whereas those of deuterium and other nuclei such as C, S, O, and N are positive. Thus by replacing hydrogen with deuterium in either the solvent or the scatterer the contrast can be varied, and different parts of the molecule can be highlighted.

  4. Uncertainty and sampling issues in tank characterization

    International Nuclear Information System (INIS)

    Liebetrau, A.M.; Pulsipher, B.A.; Kashporenko, D.M.

    1997-06-01

    A defensible characterization strategy must recognize that uncertainties are inherent in any measurement or estimate of interest and must employ statistical methods for quantifying and managing those uncertainties. Estimates of risk and therefore key decisions must incorporate knowledge about uncertainty. This report focuses on statistical methods that should be employed to ensure confident decision making and appropriate management of uncertainty. Sampling is a major source of uncertainty that deserves special consideration in the tank characterization strategy. The question of whether sampling will ever provide the reliable information needed to resolve safety issues is explored. The issue of sample representativeness must be resolved before sample information is reliable. Representativeness is a relative term but can be defined in terms of bias and precision. Currently, precision can be quantified and managed through an effective sampling and statistical analysis program. Quantifying bias is more difficult and is not being addressed under the current sampling strategies. Bias could be bounded by (1) employing new sampling methods that can obtain samples from other areas in the tanks, (2) putting in new risers on some worst case tanks and comparing the results from existing risers with new risers, or (3) sampling tanks through risers under which no disturbance or activity has previously occurred. With some bound on bias and estimates of precision, various sampling strategies could be determined and shown to be either cost-effective or infeasible

  5. Quantification of Uncertainty in Thermal Building Simulation

    DEFF Research Database (Denmark)

    Brohus, Henrik; Haghighat, F.; Frier, Christian

    In order to quantify uncertainty in thermal building simulation stochastic modelling is applied on a building model. An application of stochastic differential equations is presented in Part 1 comprising a general heat balance for an arbitrary number of loads and zones in a building to determine...

  6. Sustainable aggregates production : green applications for aggregate by-products.

    Science.gov (United States)

    2015-06-01

    Increased emphasis in the construction industry on sustainability and recycling requires production of aggregate gradations with lower dust (cleaner aggregates) and smaller maximum sizes; hence, an increased amount of quarry by-products (QBs). QBs ...

  7. A fuzzy stochastic framework for managing hydro-environmental and socio-economic interactions under uncertainty

    Science.gov (United States)

    Subagadis, Yohannes Hagos; Schütze, Niels; Grundmann, Jens

    2014-05-01

    The amplified interconnectedness between hydro-environmental and socio-economic systems brings about profound challenges for water management decision making. In this contribution, we present a fuzzy stochastic approach to solve a set of decision making problems, which involve hydrologically, environmentally, and socio-economically motivated criteria subjected to uncertainty and ambiguity. The proposed methodological framework combines objective and subjective criteria in a decision making procedure for obtaining an acceptable ranking of water resources management alternatives under different types of uncertainty (subjective/objective) and heterogeneous information (quantitative/qualitative) simultaneously. The first step of the proposed approach involves evaluating the performance of alternatives with respect to different types of criteria. The ratings of alternatives with respect to objective and subjective criteria are evaluated by simulation-based optimization and fuzzy linguistic quantifiers, respectively. Subjective and objective uncertainties related to the input information are handled through linking fuzziness and randomness together. Fuzzy decision making helps capture the linguistic uncertainty, and a Monte Carlo simulation process is used to map the stochastic uncertainty. With this framework, the overall performance of each alternative is calculated using an Ordered Weighted Averaging (OWA) aggregation operator accounting for decision makers' experience and opinions. Finally, ranking is achieved by conducting pair-wise comparison of management alternatives. This has been done on the basis of the risk defined by the probability of obtaining an acceptable ranking and mean difference in total performance for the pair of management alternatives. The proposed methodology is tested in a real-world hydrosystem, to find effective and robust intervention strategies for the management of a coastal aquifer system affected by saltwater intrusion due to excessive groundwater
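
    A minimal sketch of an Ordered Weighted Averaging (OWA) aggregation of criterion ratings, as named above; the ratings, weight vector, and single-alternative setting are illustrative assumptions, not the paper's case study.

```python
import numpy as np

def owa(scores, weights):
    """Ordered Weighted Averaging: the weights apply to the scores sorted in
    descending order, not to particular criteria."""
    scores = np.sort(np.asarray(scores, dtype=float))[::-1]
    weights = np.asarray(weights, dtype=float)
    assert len(weights) == len(scores) and np.isclose(weights.sum(), 1.0)
    return float(scores @ weights)

# Illustrative ratings of one management alternative on four criteria (0..1)
ratings = [0.7, 0.4, 0.9, 0.6]
# Optimism-leaning weights that emphasise the better-performing criteria (assumed)
weights = [0.4, 0.3, 0.2, 0.1]
print("OWA score:", owa(ratings, weights))
```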

  8. Methodology for characterizing modeling and discretization uncertainties in computational simulation

    Energy Technology Data Exchange (ETDEWEB)

    ALVIN,KENNETH F.; OBERKAMPF,WILLIAM L.; RUTHERFORD,BRIAN M.; DIEGERT,KATHLEEN V.

    2000-03-01

    This research effort focuses on methodology for quantifying the effects of model uncertainty and discretization error on computational modeling and simulation. The work is directed towards developing methodologies which treat model form assumptions within an overall framework for uncertainty quantification, for the purpose of developing estimates of total prediction uncertainty. The present effort consists of work in three areas: framework development for sources of uncertainty and error in the modeling and simulation process which impact model structure; model uncertainty assessment and propagation through Bayesian inference methods; and discretization error estimation within the context of non-deterministic analysis.

  9. Accounting for uncertainty in marine reserve design.

    Science.gov (United States)

    Halpern, Benjamin S; Regan, Helen M; Possingham, Hugh P; McCarthy, Michael A

    2006-01-01

    Ecosystems and the species and communities within them are highly complex systems that defy predictions with any degree of certainty. Managing and conserving these systems in the face of uncertainty remains a daunting challenge, particularly with respect to developing networks of marine reserves. Here we review several modelling frameworks that explicitly acknowledge and incorporate uncertainty, and then use these methods to evaluate reserve spacing rules given increasing levels of uncertainty about larval dispersal distances. Our approach finds similar spacing rules as have been proposed elsewhere - roughly 20-200 km - but highlights several advantages provided by uncertainty modelling over more traditional approaches to developing these estimates. In particular, we argue that uncertainty modelling can allow for (1) an evaluation of the risk associated with any decision based on the assumed uncertainty; (2) a method for quantifying the costs and benefits of reducing uncertainty; and (3) a useful tool for communicating to stakeholders the challenges in managing highly uncertain systems. We also argue that incorporating rather than avoiding uncertainty will increase the chances of successfully achieving conservation and management goals.

  10. Concrete produced with recycled aggregates

    Directory of Open Access Journals (Sweden)

    J. J. L. Tenório

    Full Text Available This paper presents an analysis of the mechanical and durability properties of recycled aggregate concrete (RAC) for use in concrete. The porosity of recycled coarse aggregates is known to influence the fresh and hardened concrete properties, and these properties are related to the specific mass of the recycled coarse aggregates, which directly influences the mechanical properties of the concrete. The recycled aggregates were obtained from construction and demolition wastes (CDW), which were divided into recycled sand (fine) and coarse aggregates. Besides this, a recycled coarse aggregate with a greater specific mass was obtained by mixing the recycled aggregates of the CDW with the recycled aggregates of concrete wastes (CW). The concrete was produced in the laboratory by combining three water-cement ratios, chosen in agreement with NBR 6118 for structural concretes, with each recycled coarse aggregate and either recycled sand or river sand; the reference concrete was produced with natural aggregates. It was observed that recycled aggregates can be used to produce concrete with properties suitable for structural use. In general, the use of recycled coarse aggregate in combination with recycled sand did not provide good results; but when the less porous aggregate was used, i.e. the recycled coarse aggregate with a greater specific mass, the concrete properties showed better results. Some RAC mixes reached higher strengths than the reference concrete.

  11. Sources of uncertainty in flood inundation maps

    Science.gov (United States)

    Bales, J.D.; Wagner, C.R.

    2009-01-01

    Flood inundation maps typically have been used to depict inundated areas for floods having specific exceedance levels. The uncertainty associated with the inundation boundaries is seldom quantified, in part, because all of the sources of uncertainty are not recognized and because data available to quantify uncertainty seldom are available. Sources of uncertainty discussed in this paper include hydrologic data used for hydraulic model development and validation, topographic data, and the hydraulic model. The assumption of steady flow, which typically is made to produce inundation maps, has less of an effect on predicted inundation at lower flows than for higher flows because more time typically is required to inundate areas at high flows than at low flows. Difficulties with establishing reasonable cross sections that do not intersect and that represent water-surface slopes in tributaries contribute additional uncertainties in the hydraulic modelling. As a result, uncertainty in the flood inundation polygons simulated with a one-dimensional model increases with distance from the main channel.

  12. An index for quantifying flocking behavior.

    Science.gov (United States)

    Quera, Vicenç; Herrando, Salvador; Beltran, Francesc S; Salas, Laura; Miñano, Meritxell

    2007-12-01

    One of the classic research topics in adaptive behavior is the collective displacement of groups of organisms such as flocks of birds, schools of fish, herds of mammals, and crowds of people. However, most agent-based simulations of group behavior do not provide a quantitative index for determining the point at which the flock emerges. An index of the aggregation of moving individuals in a flock was developed, and an example was provided of how it can be used to quantify the degree to which a group of moving individuals actually forms a flock.
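
    The paper's specific index is not reproduced here; as a generic, hedged illustration of quantifying the aggregation of moving individuals, the sketch below computes a Clark-Evans-style ratio of the observed mean nearest-neighbour distance to the value expected for a spatially random pattern (values well below 1 indicate clustering). The positions and domain size are synthetic.

```python
import numpy as np
from scipy.spatial import cKDTree

def aggregation_index(positions, area):
    """Clark-Evans ratio: <1 indicates clustering (flock-like), ~1 randomness."""
    tree = cKDTree(positions)
    # k=2: the first neighbour is the point itself, the second is its nearest other point
    d, _ = tree.query(positions, k=2)
    observed = d[:, 1].mean()
    expected = 0.5 / np.sqrt(len(positions) / area)   # expectation under complete spatial randomness
    return observed / expected

rng = np.random.default_rng(3)
clustered = rng.normal([50.0, 50.0], 3.0, size=(100, 2))   # a tight "flock"
scattered = rng.uniform(0.0, 100.0, size=(100, 2))         # randomly scattered individuals
print("flock   :", round(aggregation_index(clustered, 100 * 100), 2))
print("scatter :", round(aggregation_index(scattered, 100 * 100), 2))
```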

  13. Instrument uncertainty predictions

    International Nuclear Information System (INIS)

    Coutts, D.A.

    1991-07-01

    The accuracy of measurements and correlations should normally be provided for most experimental activities. The uncertainty is a measure of the accuracy of a stated value or equation. The uncertainty term reflects a combination of instrument errors, modeling limitations, and phenomena understanding deficiencies. This report provides several methodologies to estimate an instrument's uncertainty when used in experimental work. Methods are shown to predict both the pretest and post-test uncertainty
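
    One standard way to combine independent instrument error sources, shown as a hedged sketch below, is root-sum-square propagation; the error components and their magnitudes are placeholders and are not taken from the report.

```python
import math

# Illustrative independent 1-sigma error components for a flow measurement (assumed values)
components = {
    "sensor accuracy": 0.8,       # percent of reading
    "calibration drift": 0.5,
    "data acquisition": 0.3,
    "installation effects": 0.6,
}

# Root-sum-square combination, valid for independent error sources
combined = math.sqrt(sum(v ** 2 for v in components.values()))
print(f"combined 1-sigma uncertainty: {combined:.2f} % of reading")
```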

  14. Aggregation Algorithms in Heterogeneous Tables

    Directory of Open Access Journals (Sweden)

    Titus Felix FURTUNA

    2006-01-01

    Full Text Available Heterogeneous tables are the ones most often encountered in aggregation problems. One solution to this problem is to standardize these tables of figures. In this paper, we propose some aggregation methods based on hierarchical algorithms.

  15. Familial Aggregation of Insomnia.

    Science.gov (United States)

    Jarrin, Denise C; Morin, Charles M; Rochefort, Amélie; Ivers, Hans; Dauvilliers, Yves A; Savard, Josée; LeBlanc, Mélanie; Merette, Chantal

    2017-02-01

    There is little information about familial aggregation of insomnia; however, this type of information is important to (1) improve our understanding of insomnia risk factors and (2) to design more effective treatment and prevention programs. This study aimed to investigate evidence of familial aggregation of insomnia among first-degree relatives of probands with and without insomnia. Cases (n = 134) and controls (n = 145) enrolled in a larger epidemiological study were solicited to invite their first-degree relatives and spouses to complete a standardized sleep/insomnia survey. In total, 371 first-degree relatives (Mage = 51.9 years, SD = 18.0; 34.3% male) and 138 spouses (Mage = 55.5 years, SD = 12.2; 68.1% male) completed the survey assessing the nature, severity, and frequency of sleep disturbances. The dependent variable was insomnia in first-degree relatives and spouses. Familial aggregation was claimed if the risk of insomnia was significantly higher in the exposed (relatives of cases) compared to the unexposed cohort (relatives of controls). The risk of insomnia was also compared between spouses in the exposed (spouses of cases) and unexposed cohort (spouses of controls). The risk of insomnia in exposed and unexposed biological relatives was 18.6% and 10.4%, respectively, yielding a relative risk (RR) of 1.80 (p = .04) after controlling for age and sex. The risk of insomnia in exposed and unexposed spouses was 9.1% and 4.2%, respectively; however, corresponding RR of 2.13 (p = .28) did not differ significantly. Results demonstrate evidence of strong familial aggregation of insomnia. Additional research is warranted to further clarify and disentangle the relative contribution of genetic and environmental factors in insomnia. © Sleep Research Society 2016. Published by Oxford University Press on behalf of the Sleep Research Society. All rights reserved. For permissions, please e-mail journals.permissions@oup.com.
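
    A quick check of the unadjusted relative risk implied by the proportions reported above (the published value of 1.80 additionally adjusts for age and sex):

```python
# Risk of insomnia reported above for exposed vs. unexposed first-degree relatives
risk_exposed = 0.186
risk_unexposed = 0.104

rr = risk_exposed / risk_unexposed
print(f"unadjusted relative risk ≈ {rr:.2f}")   # ≈ 1.79, close to the adjusted RR of 1.80
```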

  16. Is Time Predictability Quantifiable?

    DEFF Research Database (Denmark)

    Schoeberl, Martin

    2012-01-01

    Computer architects and researchers in the real-time domain have started to investigate processors and architectures optimized for real-time systems. Optimized for real-time systems means time predictable, i.e., architectures where it is possible to statically derive a tight bound of the worst-case execution time. To compare different approaches we would like to quantify time predictability. That means we need to measure time predictability. In this paper we discuss the different approaches for these measurements and conclude that time predictability is practically not quantifiable. We can only compare the worst-case execution time bounds of different architectures.

  17. Uncertainty in hydrological signatures for gauged and ungauged catchments

    Science.gov (United States)

    Westerberg, Ida K.; Wagener, Thorsten; Coxon, Gemma; McMillan, Hilary K.; Castellarin, Attilio; Montanari, Alberto; Freer, Jim

    2016-03-01

    Reliable information about hydrological behavior is needed for water-resource management and scientific investigations. Hydrological signatures quantify catchment behavior as index values, and can be predicted for ungauged catchments using a regionalization procedure. The prediction reliability is affected by data uncertainties for the gauged catchments used in prediction and by uncertainties in the regionalization procedure. We quantified signature uncertainty stemming from discharge data uncertainty for 43 UK catchments and propagated these uncertainties in signature regionalization, while accounting for regionalization uncertainty with a weighted-pooling-group approach. Discharge uncertainty was estimated using Monte Carlo sampling of multiple feasible rating curves. For each sampled rating curve, a discharge time series was calculated and used in deriving the gauged signature uncertainty distribution. We found that the gauged uncertainty varied with signature type, local measurement conditions and catchment behavior, with the highest uncertainties (median relative uncertainty ±30-40% across all catchments) for signatures measuring high- and low-flow magnitude and dynamics. Our regionalization method allowed assessing the role and relative magnitudes of the gauged and regionalized uncertainty sources in shaping the signature uncertainty distributions predicted for catchments treated as ungauged. We found that (1) if the gauged uncertainties were neglected there was a clear risk of overconditioning the regionalization inference, e.g., by attributing catchment differences resulting from gauged uncertainty to differences in catchment behavior, and (2) uncertainty in the regionalization results was lower for signatures measuring flow distribution (e.g., mean flow) than flow dynamics (e.g., autocorrelation), and for average flows (and then high flows) compared to low flows.
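
    A minimal sketch of the gauged-uncertainty step described above: sample feasible rating curves, convert a stage record to discharge, and build a distribution for one signature (mean flow here). The power-law rating form, parameter ranges, and stage series are synthetic assumptions, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(1)
stage = rng.gamma(shape=2.0, scale=0.4, size=365)      # synthetic daily stage record (m)

n_curves = 1000
signature = np.empty(n_curves)
for i in range(n_curves):
    # One feasible power-law rating curve Q = a*(h - h0)^b with sampled parameters (assumed ranges)
    a = rng.uniform(4.0, 6.0)
    b = rng.uniform(1.4, 1.8)
    h0 = rng.uniform(0.0, 0.1)
    q = a * np.clip(stage - h0, 0.0, None) ** b        # discharge time series (m3/s)
    signature[i] = q.mean()                            # signature value: mean flow

lo, med, hi = np.percentile(signature, [2.5, 50, 97.5])
print(f"mean-flow signature: median {med:.2f}, 95% band [{lo:.2f}, {hi:.2f}] m3/s")
```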

  18. Uncertainty quantification in nanomechanical measurements using the atomic force microscope

    Science.gov (United States)

    Ryan Wagner; Robert Moon; Jon Pratt; Gordon Shaw; Arvind Raman

    2011-01-01

    Quantifying uncertainty in measured properties of nanomaterials is a prerequisite for the manufacture of reliable nanoengineered materials and products. Yet, rigorous uncertainty quantification (UQ) is rarely applied for material property measurements with the atomic force microscope (AFM), a widely used instrument that can measure properties at nanometer scale...

  19. Uncertainty in simulating wheat yields under climate change : Letter

    NARCIS (Netherlands)

    Asseng, S.; Ewert, F.; Rosenzweig, C.; Jones, J.W.; Supit, I.

    2013-01-01

    Projections of climate change impacts on crop yields are inherently uncertain [1]. Uncertainty is often quantified when projecting future greenhouse gas emissions and their influence on climate [2]. However, multi-model uncertainty analysis of crop responses to climate change is rare because systematic

  20. Kinetic Behaviors of Catalysis-Driven Growth of Three-Species Aggregates on Base of Exchange-Driven Aggregations

    International Nuclear Information System (INIS)

    Sun Yunfei; Chen Dan; Lin Zhenquan; Ke Jianhong

    2009-01-01

    We propose a solvable aggregation model to mimic the evolution of population A, asset B, and the quantifiable resource C in a society. In this system, the population and asset aggregates themselves grow through self-exchanges with the rate kernels K_1(k, j) = K_1 kj and K_2(k, j) = K_2 kj, respectively. The actions of the population and asset aggregations on the aggregation evolution of resource aggregates are described by the population-catalyzed monomer death of resource aggregates and asset-catalyzed monomer birth of resource aggregates with the rate kernels J_1(k, j) = J_1 k and J_2(k, j) = J_2 k, respectively. Meanwhile, the asset and resource aggregates conjunctly catalyze the monomer birth of population aggregates with the rate kernel I_1(k, i, j) = I_1 k i^μ j^η, and population and resource aggregates conjunctly catalyze the monomer birth of asset aggregates with the rate kernel I_2(k, i, j) = I_2 k i^ν j^η. The kinetic behaviors of species A, B, and C are investigated by means of the mean-field rate equation approach. The effects of the population-catalyzed death and asset-catalyzed birth on the evolution of resource aggregates based on the self-exchanges of population and asset appear in effective forms. The coefficients of the effective population-catalyzed death and the asset-catalyzed birth are expressed as J_1e = J_1/K_1 and J_2e = J_2/K_2, respectively. The aggregate size distribution of C species is found to be crucially dominated by the competition between the effective death and the effective birth. It satisfies the conventional scaling form, generalized scaling form, and modified scaling form in the cases of J_1e < J_2e, J_1e = J_2e, and J_1e > J_2e, respectively. Meanwhile, we also find the aggregate size distributions of populations and assets both fall into two distinct categories for different parameters μ, ν, and η: (i) When μ = ν = η = 0 and μ = ν = 0, η = 1, the population and asset aggregates obey the generalized

  1. GENERAL: Kinetic Behaviors of Catalysis-Driven Growth of Three-Species Aggregates on Base of Exchange-Driven Aggregations

    Science.gov (United States)

    Sun, Yun-Fei; Chen, Dan; Lin, Zhen-Quan; Ke, Jian-Hong

    2009-06-01

    We propose a solvable aggregation model to mimic the evolution of population A, asset B, and the quantifiable resource C in a society. In this system, the population and asset aggregates themselves grow through self-exchanges with the rate kernels K1(k, j) = K1kj and K2(k, j) = K2kj, respectively. The actions of the population and asset aggregations on the aggregation evolution of resource aggregates are described by the population-catalyzed monomer death of resource aggregates and asset-catalyzed monomer birth of resource aggregates with the rate kernels J1(k, j) = J1k and J2(k, j) = J2k, respectively. Meanwhile, the asset and resource aggregates conjunctly catalyze the monomer birth of population aggregates with the rate kernel I1(k, i, j) = I1kiμjη, and population and resource aggregates conjunctly catalyze the monomer birth of asset aggregates with the rate kernel I2(k, i, j) = I2kiνjη. The kinetic behaviors of species A, B, and C are investigated by means of the mean-field rate equation approach. The effects of the population-catalyzed death and asset-catalyzed birth on the evolution of resource aggregates based on the self-exchanges of population and asset appear in effective forms. The coefficients of the effective population-catalyzed death and the asset-catalyzed birth are expressed as J1e = J1/K1 and J2e = J2/K2, respectively. The aggregate size distribution of C species is found to be crucially dominated by the competition between the effective death and the effective birth. It satisfies the conventional scaling form, generalized scaling form, and modified scaling form in the cases of J1e < J2e, J1e = J2e, and J1e > J2e, respectively. Meanwhile, we also find the aggregate size distributions of populations and assets both fall into two distinct categories for different parameters μ, ν, and η: (i) When μ = ν = η = 0 and μ = ν = 0, η = 1, the population and asset aggregates obey the generalized scaling forms; and (ii) When μ = ν = 1, η = 0, and μ = ν = η = 1, the

  2. Proteins aggregation and human diseases

    Science.gov (United States)

    Hu, Chin-Kun

    2015-04-01

    Many human diseases and the death of most supercentenarians are related to protein aggregation. Neurodegenerative diseases include Alzheimer's disease (AD), Huntington's disease (HD), Parkinson's disease (PD), frontotemporal lobar degeneration, etc. Such diseases are due to progressive loss of structure or function of neurons caused by protein aggregation. For example, AD is considered to be related to aggregation of Aβ40 (peptide with 40 amino acids) and Aβ42 (peptide with 42 amino acids) and HD is considered to be related to aggregation of polyQ (polyglutamine) peptides. In this paper, we briefly review our recent discovery of key factors for protein aggregation. We used a lattice model to study the aggregation rates of proteins and found that the probability for a protein sequence to appear in the conformation of the aggregated state can be used to determine the temperature at which proteins can aggregate most quickly. We used molecular dynamics and simple models of polymer chains to study relaxation and aggregation of proteins under various conditions and found that when the bending-angle dependent and torsion-angle dependent interactions are zero or very small, then protein chains tend to aggregate at lower temperatures. All-atom models were used to identify a key peptide chain for the aggregation of insulin chains and to find that two polyQ chains prefer an anti-parallel conformation. It is pointed out that in many cases, protein aggregation does not result from protein mis-folding. A potential drug from Chinese medicine was found for Alzheimer's disease.

  3. The characterisation and evaluation of uncertainty in probabilistic risk analysis

    International Nuclear Information System (INIS)

    Parry, G.W.; Winter, P.W.

    1980-10-01

    The sources of uncertainty in probabilistic risk analysis are discussed using the event/fault tree methodology as an example. The role of statistics in quantifying these uncertainties is investigated. A class of uncertainties is identified which is, at present, unquantifiable, using either classical or Bayesian statistics. It is argued that Bayesian statistics is the more appropriate vehicle for the probabilistic analysis of rare events and a short review is given with some discussion on the representation of ignorance. (author)

  4. Characterization uncertainty and its effects on models and performance

    International Nuclear Information System (INIS)

    Rautman, C.A.; Treadway, A.H.

    1991-01-01

    Geostatistical simulation is being used to develop multiple geologic models of rock properties at the proposed Yucca Mountain repository site. Because each replicate model contains the same known information, and is thus essentially indistinguishable statistically from others, the differences between models may be thought of as representing the uncertainty in the site description. The variability among performance measures, such as ground water travel time, calculated using these replicate models therefore quantifies the uncertainty in performance that arises from uncertainty in site characterization

  5. Thermosensory reversal effect quantified

    NARCIS (Netherlands)

    Bergmann Tiest, W.M.; Kappers, A.M.L.

    2008-01-01

    At room temperature, some materials feel colder than others due to differences in thermal conductivity, heat capacity and geometry. When the ambient temperature is well above skin temperature, the roles of 'cold' and 'warm' materials are reversed. In this paper, this effect is quantified by

  6. Thermosensory reversal effect quantified

    NARCIS (Netherlands)

    Bergmann Tiest, W.M.; Kappers, A.M.L.

    2008-01-01

    At room temperature, some materials feel colder than others due to differences in thermal conductivity, heat capacity and geometry. When the ambient temperature is well above skin temperature, the roles of ‘cold’ and ‘warm’ materials are reversed. In this paper, this effect is quantified by

  7. Quantifying requirements volatility effects

    NARCIS (Netherlands)

    Kulk, G.P.; Verhoef, C.

    2008-01-01

    In an organization operating in the bancassurance sector we identified a low-risk IT subportfolio of 84 IT projects comprising together 16,500 function points, each project varying in size and duration, for which we were able to quantify its requirements volatility. This representative portfolio

  8. The quantified relationship

    NARCIS (Netherlands)

    Danaher, J.; Nyholm, S.R.; Earp, B.

    2018-01-01

    The growth of self-tracking and personal surveillance has given rise to the Quantified Self movement. Members of this movement seek to enhance their personal well-being, productivity, and self-actualization through the tracking and gamification of personal data. The technologies that make this

  9. Quantifying IT estimation risks

    NARCIS (Netherlands)

    Kulk, G.P.; Peters, R.J.; Verhoef, C.

    2009-01-01

    A statistical method is proposed for quantifying the impact of factors that influence the quality of the estimation of costs for IT-enabled business projects. We call these factors risk drivers as they influence the risk of the misestimation of project costs. The method can effortlessly be

  10. Quantification of aggregate grain shape characteristics using 3-D laser scanning technology

    CSIR Research Space (South Africa)

    Mgangira, Martin B

    2013-07-01

    Full Text Available to identify the differences between individual aggregates. It was possible to quantify differences in particle shape characteristics at the small particle scale. The study has demonstrated the advantages of the innovative 3-D laser scanning technology...

  11. Aggregations of brittle stars can perform similar ecological roles as mussel reefs

    KAUST Repository

    Geraldi, NR; Bertolini, C; Emmerson, MC; Roberts, D; Sigwart, JD; O’ Connor, NE

    2016-01-01

    considered. We quantified the abundance of sessile horse mussels Modiolus modiolus and aggregating brittle stars Ophiothrix fragilis and tested for correlations between the density of mussels (live and dead) and brittle stars each with (1) abundance, biomass

  12. Uncertainty analysis guide

    International Nuclear Information System (INIS)

    Andres, T.H.

    2002-05-01

    This guide applies to the estimation of uncertainty in quantities calculated by scientific, analysis and design computer programs that fall within the scope of AECL's software quality assurance (SQA) manual. The guide weaves together rational approaches from the SQA manual and three other diverse sources: (a) the CSAU (Code Scaling, Applicability, and Uncertainty) evaluation methodology; (b) the ISO Guide for the Expression of Uncertainty in Measurement; and (c) the SVA (Systems Variability Analysis) method of risk analysis. This report describes the manner by which random and systematic uncertainties in calculated quantities can be estimated and expressed. Random uncertainty in model output can be attributed to uncertainties of inputs. The propagation of these uncertainties through a computer model can be represented in a variety of ways, including exact calculations, series approximations and Monte Carlo methods. Systematic uncertainties emerge from the development of the computer model itself, through simplifications and conservatisms, for example. These must be estimated and combined with random uncertainties to determine the combined uncertainty in a model output. This report also addresses the method by which uncertainties should be employed in code validation, in order to determine whether experiments and simulations agree, and whether or not a code satisfies the required tolerance for its application. (author)
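
    A minimal sketch of the Monte Carlo option mentioned above for propagating random input uncertainty through a model; the toy model function and the input distributions are assumptions for illustration only.

```python
import numpy as np

def model(k, area, dT):
    """Toy heat-transfer model standing in for a computer-code output (assumption)."""
    return k * area * dT

rng = np.random.default_rng(42)
n = 10_000
k = rng.normal(0.6, 0.05, n)        # conductivity with assumed random uncertainty
area = rng.normal(2.0, 0.02, n)
dT = rng.normal(30.0, 1.5, n)

y = model(k, area, dT)
print(f"output mean = {y.mean():.1f}, std = {y.std(ddof=1):.1f}")
print("95% interval:", np.percentile(y, [2.5, 97.5]).round(1))
```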

  13. Uncertainty analysis guide

    Energy Technology Data Exchange (ETDEWEB)

    Andres, T.H

    2002-05-01

    This guide applies to the estimation of uncertainty in quantities calculated by scientific, analysis and design computer programs that fall within the scope of AECL's software quality assurance (SQA) manual. The guide weaves together rational approaches from the SQA manual and three other diverse sources: (a) the CSAU (Code Scaling, Applicability, and Uncertainty) evaluation methodology; (b) the ISO Guide for the Expression of Uncertainty in Measurement; and (c) the SVA (Systems Variability Analysis) method of risk analysis. This report describes the manner by which random and systematic uncertainties in calculated quantities can be estimated and expressed. Random uncertainty in model output can be attributed to uncertainties of inputs. The propagation of these uncertainties through a computer model can be represented in a variety of ways, including exact calculations, series approximations and Monte Carlo methods. Systematic uncertainties emerge from the development of the computer model itself, through simplifications and conservatisms, for example. These must be estimated and combined with random uncertainties to determine the combined uncertainty in a model output. This report also addresses the method by which uncertainties should be employed in code validation, in order to determine whether experiments and simulations agree, and whether or not a code satisfies the required tolerance for its application. (author)

  14. Uncertainty and Cognitive Control

    Directory of Open Access Journals (Sweden)

    Faisal eMushtaq

    2011-10-01

    Full Text Available A growing trend of neuroimaging, behavioural and computational research has investigated the topic of outcome uncertainty in decision-making. Although evidence to date indicates that humans are very effective in learning to adapt to uncertain situations, the nature of the specific cognitive processes involved in the adaptation to uncertainty is still a matter of debate. In this article, we reviewed evidence suggesting that cognitive control processes are at the heart of uncertainty in decision-making contexts. Available evidence suggests that: (1) There is a strong conceptual overlap between the constructs of uncertainty and cognitive control; (2) There is a remarkable overlap between the neural networks associated with uncertainty and the brain networks subserving cognitive control; (3) The perception and estimation of uncertainty might play a key role in monitoring processes and the evaluation of the need for control; (4) Potential interactions between uncertainty and cognitive control might play a significant role in several affective disorders.

  15. Uncertainty quantification for environmental models

    Science.gov (United States)

    Hill, Mary C.; Lu, Dan; Kavetski, Dmitri; Clark, Martyn P.; Ye, Ming

    2012-01-01

    Environmental models are used to evaluate the fate of fertilizers in agricultural settings (including soil denitrification), the degradation of hydrocarbons at spill sites, and water supply for people and ecosystems in small to large basins and cities—to mention but a few applications of these models. They also play a role in understanding and diagnosing potential environmental impacts of global climate change. The models are typically mildly to extremely nonlinear. The persistent demand for enhanced dynamics and resolution to improve model realism [17] means that lengthy individual model execution times will remain common, notwithstanding continued enhancements in computer power. In addition, high-dimensional parameter spaces are often defined, which increases the number of model runs required to quantify uncertainty [2]. Some environmental modeling projects have access to extensive funding and computational resources; many do not. The many recent studies of uncertainty quantification in environmental model predictions have focused on uncertainties related to data error and sparsity of data, expert judgment expressed mathematically through prior information, poorly known parameter values, and model structure (see, for example, [1,7,9,10,13,18]). Approaches for quantifying uncertainty include frequentist (potentially with prior information [7,9]), Bayesian [13,18,19], and likelihood-based. A few of the numerous methods, including some sensitivity and inverse methods with consequences for understanding and quantifying uncertainty, are as follows: Bayesian hierarchical modeling and Bayesian model averaging; single-objective optimization with error-based weighting [7] and multi-objective optimization [3]; methods based on local derivatives [2,7,10]; screening methods like OAT (one at a time) and the method of Morris [14]; FAST (Fourier amplitude sensitivity testing) [14]; the Sobol' method [14]; randomized maximum likelihood [10]; Markov chain Monte Carlo (MCMC) [10

  16. Uncertainty analysis in Monte Carlo criticality computations

    International Nuclear Information System (INIS)

    Qi Ao

    2011-01-01

    Highlights: ► Two types of uncertainty methods for k_eff Monte Carlo computations are examined. ► Sampling method has the least restrictions on perturbation but computing resources. ► Analytical method is limited to small perturbation on material properties. ► Practicality relies on efficiency, multiparameter applicability and data availability. - Abstract: Uncertainty analysis is imperative for nuclear criticality risk assessments when using Monte Carlo neutron transport methods to predict the effective neutron multiplication factor (k_eff) for fissionable material systems. For the validation of Monte Carlo codes for criticality computations against benchmark experiments, code accuracy and precision are measured by both the computational bias and uncertainty in the bias. The uncertainty in the bias accounts for known or quantified experimental, computational and model uncertainties. For the application of Monte Carlo codes for criticality analysis of fissionable material systems, an administrative margin of subcriticality must be imposed to provide additional assurance of subcriticality for any unknown or unquantified uncertainties. Because of a substantial impact of the administrative margin of subcriticality on economics and safety of nuclear fuel cycle operations, recently increasing interests in reducing the administrative margin of subcriticality make the uncertainty analysis in criticality safety computations more risk-significant. This paper provides an overview of the two most popular k_eff uncertainty analysis methods for Monte Carlo criticality computations: (1) sampling-based methods, and (2) analytical methods. Examples are given to demonstrate their usage in the k_eff uncertainty analysis due to uncertainties in both neutronic and non-neutronic parameters of fissionable material systems.
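
    A minimal sketch of the sampling-based idea: perturb nuclear-data parameters according to assumed uncertainties, re-evaluate k_eff, and read off the spread. A linear surrogate stands in for a real Monte Carlo transport run, and every number (nominal k_eff, sensitivities, sigmas) is a placeholder.

```python
import numpy as np

rng = np.random.default_rng(7)

def surrogate_keff(params):
    """Stand-in for a Monte Carlo transport calculation (assumption):
    a linear response of k_eff to three nuclear-data perturbations."""
    nominal = 0.985
    sensitivities = np.array([0.30, -0.12, 0.05])   # dk/dp values, assumed
    return nominal + sensitivities @ params

# Assumed 1-sigma relative uncertainties of the three perturbed parameters
sigma = np.array([0.010, 0.020, 0.015])
samples = rng.normal(0.0, sigma, size=(5000, 3))

keff = np.array([surrogate_keff(p) for p in samples])
print(f"k_eff mean = {keff.mean():.5f}, 1-sigma = {keff.std(ddof=1):.5f}")
```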

  17. Uncertainties in fission-product decay-heat calculations

    Energy Technology Data Exchange (ETDEWEB)

    Oyamatsu, K.; Ohta, H.; Miyazono, T.; Tasaka, K. [Nagoya Univ. (Japan)

    1997-03-01

    The present precision of the aggregate decay heat calculations is studied quantitatively for 50 fissioning systems. In this evaluation, nuclear data and their uncertainty data are taken from ENDF/B-VI nuclear data library and those which are not available in this library are supplemented by a theoretical consideration. An approximate method is proposed to simplify the evaluation of the uncertainties in the aggregate decay heat calculations so that we can point out easily nuclei which cause large uncertainties in the calculated decay heat values. In this paper, we attempt to clarify the justification of the approximation which was not very clear at the early stage of the study. We find that the aggregate decay heat uncertainties for minor actinides such as Am and Cm isotopes are 3-5 times as large as those for 235U and 239Pu. The recommended values by Atomic Energy Society of Japan (AESJ) were given for 3 major fissioning systems, 235U(t), 239Pu(t) and 238U(f). The present results are consistent with the AESJ values for these systems although the two evaluations used different nuclear data libraries and approximations. Therefore, the present results can also be considered to supplement the uncertainty values for the remaining 17 fissioning systems in JNDC2, which were not treated in the AESJ evaluation. Furthermore, we attempt to list nuclear data which cause large uncertainties in decay heat calculations for the future revision of decay and yield data libraries. (author)

  18. Probabilistic Accident Consequence Uncertainty Analysis of the Food Chain Module in the COSYMA Package (invited paper)

    International Nuclear Information System (INIS)

    Brown, J.; Jones, J.A.

    2000-01-01

    This paper describes the uncertainty analysis of the food chain module of COSYMA and the uncertainty distributions on the input parameter values for the food chain model provided by the expert panels that were used for the analysis. Two expert panels were convened, covering the areas of soil and plant transfer processes and transfer to and through animals. The aggregated uncertainty distributions from the experts for the elicited variables were used in an uncertainty analysis of the food chain module of COSYMA. The main aim of the module analysis was to identify those parameters whose uncertainty makes large contributions to the overall uncertainty and so should be included in the overall analysis. (author)

  19. A model for bacterial colonization of sinking aggregates.

    Science.gov (United States)

    Bearon, R N

    2007-01-01

    Sinking aggregates provide important nutrient-rich environments for marine bacteria. Quantifying the rate at which motile bacteria colonize such aggregations is important in understanding the microbial loop in the pelagic food web. In this paper, a simple analytical model is presented to predict the rate at which bacteria undergoing a random walk encounter a sinking aggregate. The model incorporates the flow field generated by the sinking aggregate, the swimming behavior of the bacteria, and the interaction of the flow with the swimming behavior. An expression for the encounter rate is computed in the limit of large Péclet number when the random walk can be approximated by a diffusion process. Comparison with an individual-based numerical simulation is also given.
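
    A back-of-envelope sketch of a diffusion-limited encounter rate to a sinking aggregate, combining the Smoluchowski rate 4*pi*D*a*C with a large-Peclet Sherwood-number enhancement of order Pe^(1/3); the parameter values and the 0.62 prefactor (solid sphere in Stokes flow) are assumptions, not the paper's analytical result.

```python
import math

# Illustrative values (assumptions)
D = 5e-10          # effective bacterial diffusivity from run-and-tumble motility, m^2/s
a = 0.5e-3         # aggregate radius, m
U = 1e-3           # sinking speed, m/s
C = 1e11           # ambient bacterial concentration, cells/m^3

Pe = U * a / D                          # Peclet number (advection vs. diffusion)
Sh = 0.62 * Pe ** (1.0 / 3.0)           # assumed large-Pe scaling for a solid sphere
encounter_rate = 4 * math.pi * D * a * C * max(Sh, 1.0)   # cells encountered per second

print(f"Pe = {Pe:.0f}, Sh ≈ {Sh:.1f}, encounter rate ≈ {encounter_rate:.1f} cells/s")
```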

  20. Concrete = aggregate, cement, water?

    International Nuclear Information System (INIS)

    Jelinek, J.

    1990-01-01

    Concrete for the Temelin nuclear power plant is produced to about 70 different formulae. For quality production, homogeneous properties of aggregates, accurate proportioning devices, technological discipline and systematic inspections and tests should be assured. The results of measuring compressive strength after 28 days for different concrete samples are reported. The results of such tests allow reducing the proportion of cement, which brings about considerable savings. Reduction in cement quantities can also be achieved by adding ash to the concrete mixes. Ligoplast, a plasticizing admixture, is used for improving workability. (M.D). 8 figs

  1. Quantifying light pollution

    International Nuclear Information System (INIS)

    Cinzano, P.; Falchi, F.

    2014-01-01

    In this paper we review new available indicators useful to quantify and monitor light pollution, defined as the alteration of the natural quantity of light in the night environment due to introduction of manmade light. With the introduction of recent radiative transfer methods for the computation of light pollution propagation, several new indicators become available. These indicators represent a primary step in light pollution quantification, beyond the bare evaluation of the night sky brightness, which is an observational effect integrated along the line of sight and thus lacking the three-dimensional information. - Highlights: • We review new available indicators useful to quantify and monitor light pollution. • These indicators are a primary step in light pollution quantification. • These indicators allow to improve light pollution mapping from a 2D to a 3D grid. • These indicators allow carrying out a tomography of light pollution. • We show an application of this technique to an Italian region

  2. Taurine and platelet aggregation

    International Nuclear Information System (INIS)

    Nauss-Karol, C.; VanderWende, C.; Gaut, Z.N.

    1986-01-01

    Taurine is a putative neurotransmitter or neuromodulator. The endogenous taurine concentration in human platelets, determined by amino acid analysis, is 15 μM/g. In spite of this high level, taurine is actively accumulated. Uptake is saturable, Na+ and temperature dependent, and suppressed by metabolic inhibitors, structural analogues, and several classes of centrally active substances. High, medium and low affinity transport processes have been characterized, and the platelet may represent a model system for taurine transport in the CNS. When platelets were incubated with 14C-taurine for 30 minutes, then resuspended in fresh medium and reincubated for one hour, essentially all of the taurine was retained within the cells. Taurine, at concentrations ranging from 10-1000 μM, had no effect on platelet aggregation induced by ADP or epinephrine. However, taurine may have a role in platelet aggregation since 35-39% of the taurine taken up by human platelets appears to be secreted during the release reaction induced by low concentrations of either epinephrine or ADP, respectively. This release phenomenon would imply that part of the taurine taken up is stored directly in the dense bodies of the platelet

  3. Research on Judgment Aggregation Based on Logic

    Directory of Open Access Journals (Sweden)

    Li Dai

    2014-05-01

    Full Text Available Preference aggregation and judgment aggregation are two basic models of group decision making. Preference aggregation has been studied in depth in social choice theory. However, research in social choice theory has gradually shifted toward judgment aggregation, which emerged more recently. Judgment aggregation concerns how to aggregate many individually consistent logical formulas into a single collective one, from the perspective of logic. We start from a judgment aggregation model based on logic and then explore different solutions to the judgment aggregation problem.

  4. DS02 uncertainty analysis

    International Nuclear Information System (INIS)

    Kaul, Dean C.; Egbert, Stephen D.; Woolson, William A.

    2005-01-01

    In order to avoid the pitfalls that so discredited DS86 and its uncertainty estimates, and to provide DS02 uncertainties that are both defensible and credible, this report not only presents the ensemble uncertainties assembled from uncertainties in individual computational elements and radiation dose components but also describes how these relate to comparisons between observed and computed quantities at critical intervals in the computational process. These comparisons include those between observed and calculated radiation free-field components, where observations include thermal- and fast-neutron activation and gamma-ray thermoluminescence, which are relevant to the estimated systematic uncertainty for DS02. The comparisons also include those between calculated and observed survivor shielding, where the observations consist of biodosimetric measurements for individual survivors, which are relevant to the estimated random uncertainty for DS02. (J.P.N.)

  5. Return Predictability, Model Uncertainty, and Robust Investment

    DEFF Research Database (Denmark)

    Lukas, Manuel

    Stock return predictability is subject to great uncertainty. In this paper we use the model confidence set approach to quantify uncertainty about expected utility from investment, accounting for potential return predictability. For monthly US data and six representative return prediction models, we find that confidence sets are very wide, change significantly with the predictor variables, and frequently include expected utilities for which the investor prefers not to invest. The latter motivates a robust investment strategy maximizing the minimal element of the confidence set. The robust investor allocates a much lower share of wealth to stocks compared to a standard investor.

  6. Model uncertainty and probability

    International Nuclear Information System (INIS)

    Parry, G.W.

    1994-01-01

    This paper discusses the issue of model uncertainty. The use of probability as a measure of an analyst's uncertainty as well as a means of describing random processes has caused some confusion, even though the two uses are representing different types of uncertainty with respect to modeling a system. The importance of maintaining the distinction between the two types is illustrated with a simple example

  7. Uncertainty in artificial intelligence

    CERN Document Server

    Kanal, LN

    1986-01-01

    How to deal with uncertainty is a subject of much controversy in Artificial Intelligence. This volume brings together a wide range of perspectives on uncertainty, many of the contributors being the principal proponents in the controversy. Some of the notable issues which emerge from these papers revolve around an interval-based calculus of uncertainty, the Dempster-Shafer Theory, and probability as the best numeric model for uncertainty. There remain strong dissenting opinions not only about probability but even about the utility of any numeric method in this context.

  8. Uncertainties in hydrogen combustion

    International Nuclear Information System (INIS)

    Stamps, D.W.; Wong, C.C.; Nelson, L.S.

    1988-01-01

    Three important areas of hydrogen combustion with uncertainties are identified: high-temperature combustion, flame acceleration and deflagration-to-detonation transition, and aerosol resuspension during hydrogen combustion. The uncertainties associated with high-temperature combustion may affect at least three different accident scenarios: the in-cavity oxidation of combustible gases produced by core-concrete interactions, the direct containment heating hydrogen problem, and the possibility of local detonations. How these uncertainties may affect the sequence of various accident scenarios is discussed and recommendations are made to reduce these uncertainties. 40 references

  9. Exploring Heterogeneous Multicore Architectures for Advanced Embedded Uncertainty Quantification.

    Energy Technology Data Exchange (ETDEWEB)

    Phipps, Eric T.; Edwards, Harold C.; Hu, Jonathan J.

    2014-09-01

    We explore rearrangements of classical uncertainty quantification methods with the aim of achieving higher aggregate performance for uncertainty quantification calculations on emerging multicore and manycore architectures. We show that a rearrangement of the stochastic Galerkin method leads to improved performance and scalability on several computational architectures, whereby uncertainty information is propagated at the lowest levels of the simulation code, improving memory access patterns, exposing new dimensions of fine-grained parallelism, and reducing communication. We also develop a general framework for implementing such rearrangements for a diverse set of uncertainty quantification algorithms as well as the computational simulation codes to which they are applied.
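    As a rough sketch of the general idea of propagating uncertainty information at the lowest levels of a code (this is not the authors' stochastic Galerkin implementation; the toy solver and sample count are assumptions), an extra sample dimension can be carried through every elementary operation of a solver loop, so that one pass over the code propagates a whole ensemble with contiguous memory access:

        import numpy as np

        # Illustrative embedded-ensemble propagation: a fixed-point iteration
        # x <- 0.5 * (x + a / x) (square-root solve) applied to all samples at once.
        rng = np.random.default_rng(0)
        n_samples = 1024
        a = rng.normal(loc=2.0, scale=0.1, size=n_samples)  # uncertain input ensemble

        x = np.ones(n_samples)      # the sample dimension rides along every operation
        for _ in range(20):         # the solver loop is executed once for all samples
            x = 0.5 * (x + a / x)

        print("mean of sqrt(a):", x.mean(), " std:", x.std())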

  10. A Multi-objective Model for Transmission Planning Under Uncertainties

    DEFF Research Database (Denmark)

    Zhang, Chunyu; Wang, Qi; Ding, Yi

    2014-01-01

    The significant growth of distributed energy resources (DERs) associated with smart grid technologies has introduced substantial uncertainties into the transmission system. The most representative is the novel notion of the commercial aggregator, which has opened a promising way for DERs to participate in power...

  11. Uncertainty quantification in nanomechanical measurements using the atomic force microscope

    International Nuclear Information System (INIS)

    Wagner, Ryan; Raman, Arvind; Moon, Robert; Pratt, Jon; Shaw, Gordon

    2011-01-01

    Quantifying uncertainty in measured properties of nanomaterials is a prerequisite for the manufacture of reliable nanoengineered materials and products. Yet, rigorous uncertainty quantification (UQ) is rarely applied for material property measurements with the atomic force microscope (AFM), a widely used instrument that can measure properties at nanometer scale resolution of both inorganic and biological surfaces and nanomaterials. We present a framework to ascribe uncertainty to local nanomechanical properties of any nanoparticle or surface measured with the AFM by taking into account the main uncertainty sources inherent in such measurements. We demonstrate the framework by quantifying uncertainty in AFM-based measurements of the transverse elastic modulus of cellulose nanocrystals (CNCs), an abundant, plant-derived nanomaterial whose mechanical properties are comparable to Kevlar fibers. For a single, isolated CNC the transverse elastic modulus was found to have a mean of 8.1 GPa and a 95% confidence interval of 2.7–20 GPa. A key result is that multiple replicates of force–distance curves do not sample the important sources of uncertainty, which are systematic in nature. The dominant source of uncertainty is the nondimensional photodiode sensitivity calibration rather than the cantilever stiffness or Z-piezo calibrations. The results underscore the great need for, and open a path towards, quantifying and minimizing uncertainty in AFM-based material property measurements of nanoparticles, nanostructured surfaces, thin films, polymers and biomaterials.
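    A hedged sketch of the kind of calculation such a framework involves (not the authors' method; the multiplicative sensitivity model and the relative uncertainties of the calibration factors below are invented for illustration): systematic calibration uncertainties can be combined by Monte Carlo into a confidence interval on the reported modulus.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 100_000

        # Hypothetical relative (1-sigma) uncertainties of the main calibration factors.
        sensitivity = rng.normal(1.0, 0.10, n)   # photodiode sensitivity (systematic, dominant)
        stiffness   = rng.normal(1.0, 0.05, n)   # cantilever spring-constant calibration
        z_piezo     = rng.normal(1.0, 0.02, n)   # Z-piezo calibration

        E_nominal = 8.1  # GPa, nominal transverse modulus quoted in the abstract
        # Illustrative sensitivity model: modulus scales with the product of the factors.
        E_samples = E_nominal * sensitivity * stiffness * z_piezo

        lo, hi = np.percentile(E_samples, [2.5, 97.5])
        print(f"mean = {E_samples.mean():.1f} GPa, 95% interval = [{lo:.1f}, {hi:.1f}] GPa")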

  12. Quantifying linguistic coordination

    DEFF Research Database (Denmark)

    Fusaroli, Riccardo; Tylén, Kristian

    task (Bahrami et al 2010, Fusaroli et al. 2012) we extend to linguistic coordination dynamical measures of recurrence employed in the analysis of sensorimotor coordination (such as heart-rate (Konvalinka et al 2011), postural sway (Shockley 2005) and eye-movements (Dale, Richardson and Kirkham 2012......). We employ nominal recurrence analysis (Orsucci et al 2005, Dale et al 2011) on the decision-making conversations between the participants. We report strong correlations between various indexes of recurrence and collective performance. We argue this method allows us to quantify the qualities...

  13. Proteins aggregation and human diseases

    International Nuclear Information System (INIS)

    Hu, Chin-Kun

    2015-01-01

    Many human diseases and the death of most supercentenarians are related to protein aggregation. Neurodegenerative diseases include Alzheimer's disease (AD), Huntington's disease (HD), Parkinson's disease (PD), frontotemporal lobar degeneration, etc. Such diseases are due to progressive loss of structure or function of neurons caused by protein aggregation. For example, AD is considered to be related to aggregation of Aβ40 (peptide with 40 amino acids) and Aβ42 (peptide with 42 amino acids) and HD is considered to be related to aggregation of polyQ (polyglutamine) peptides. In this paper, we briefly review our recent discovery of key factors for protein aggregation. We used a lattice model to study the aggregation rates of proteins and found that the probability for a protein sequence to appear in the conformation of the aggregated state can be used to determine the temperature at which proteins can aggregate most quickly. We used molecular dynamics and simple models of polymer chains to study relaxation and aggregation of proteins under various conditions and found that when the bending-angle-dependent and torsion-angle-dependent interactions are zero or very small, protein chains tend to aggregate at lower temperatures. All-atom models were used to identify a key peptide chain for the aggregation of insulin chains and to find that two polyQ chains prefer anti-parallel conformation. It is pointed out that in many cases, protein aggregation does not result from protein mis-folding. A potential drug from Chinese medicine was found for Alzheimer's disease. (paper)

  14. Information Aggregation and Investment Decisions

    OpenAIRE

    Christian Hellwig; Aleh Tsyvinski; Elias Albagli

    2010-01-01

    This paper studies an environment in which information aggregation interacts with investment decisions. The first contribution of the paper is to develop a tractable model of such interactions. The second contribution is to solve the model in closed form and derive a series of implications that result from the interplay between information aggregation and the value of market information for the firms' decision problem. We show that the model generates an information aggregation wedge between ...

  15. Uncertainty Propagation in Monte Carlo Depletion Analysis

    International Nuclear Information System (INIS)

    Shim, Hyung Jin; Kim, Yeong-il; Park, Ho Jin; Joo, Han Gyu; Kim, Chang Hyo

    2008-01-01

    A new formulation is presented that quantifies uncertainties of Monte Carlo (MC) tallies, such as k_eff, the microscopic reaction rates of nuclides, and nuclide number densities in MC depletion analysis, and examines their propagation behaviour as a function of depletion time step (DTS). It is shown that the variance of a given MC tally, used as a measure of its uncertainty in this formulation, arises from four sources: the statistical uncertainty of the MC tally, uncertainties of microscopic cross sections, uncertainties of nuclide number densities, and the cross correlations between them; the contribution of the latter three sources can be determined by computing the correlation coefficients between the uncertain variables. It is also shown that the variance of any given nuclide number density at the end of each DTS stems from uncertainties of the nuclide number densities (NND) and microscopic reaction rates (MRR) of nuclides at the beginning of that DTS, and that these contributions are determined by computing correlation coefficients between the two uncertain variables. To test the viability of the formulation, we conducted MC depletion analyses for two sample depletion problems involving a simplified 7x7 fuel assembly (FA) and a 17x17 PWR FA, determined the number densities of uranium and plutonium isotopes and their variances as well as k_∞ and its variance as a function of DTS, and demonstrated the applicability of the new formulation to the uncertainty propagation analysis that needs to be followed in MC depletion computations. (authors)
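    The variance decomposition described above can be illustrated with a toy calculation (not the authors' formulation; the covariance values are invented): for a tally built from two correlated uncertain quantities, the total variance splits into the individual variances plus a covariance term expressed through the correlation coefficient.

        import numpy as np

        rng = np.random.default_rng(2)
        n = 200_000

        # Toy "tally" t = x + y from two correlated uncertain inputs, standing in
        # for e.g. a nuclide number density and a microscopic reaction rate.
        cov = np.array([[1.0, 0.6],
                        [0.6, 2.0]])
        x, y = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
        t = x + y

        rho = np.corrcoef(x, y)[0, 1]
        var_decomposed = x.var() + y.var() + 2.0 * rho * x.std() * y.std()
        print(f"Var(t) sampled = {t.var():.3f}, decomposed = {var_decomposed:.3f}")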

  16. Incorporating uncertainty in predictive species distribution modelling.

    Science.gov (United States)

    Beale, Colin M; Lennon, Jack J

    2012-01-19

    Motivated by the need to solve ecological problems (climate change, habitat fragmentation and biological invasions), there has been increasing interest in species distribution models (SDMs). Predictions from these models inform conservation policy, invasive species management and disease-control measures. However, predictions are subject to uncertainty, the degree and source of which is often unrecognized. Here, we review the SDM literature in the context of uncertainty, focusing on three main classes of SDM: niche-based models, demographic models and process-based models. We identify sources of uncertainty for each class and discuss how uncertainty can be minimized or included in the modelling process to give realistic measures of confidence around predictions. Because this has typically not been performed, we conclude that uncertainty in SDMs has often been underestimated and a false precision assigned to predictions of geographical distribution. We identify areas where development of new statistical tools will improve predictions from distribution models, notably the development of hierarchical models that link different types of distribution model and their attendant uncertainties across spatial scales. Finally, we discuss the need to develop more defensible methods for assessing predictive performance, quantifying model goodness-of-fit and for assessing the significance of model covariates.

  17. Uncertainties and reliability theories for reactor safety

    International Nuclear Information System (INIS)

    Veneziano, D.

    1975-01-01

    What makes the safety problem of nuclear reactors particularly challenging is the demand for high levels of reliability and the limitation of statistical information. The latter is an unfortunate circumstance, which forces deductive theories of reliability to use models and parameter values with weak factual support. The uncertainty about probabilistic models and parameters which are inferred from limited statistical evidence can be quantified and incorporated rationally into inductive theories of reliability. In such theories, the starting point is the information actually available, as opposed to an estimated probabilistic model. But, while the necessity of introducing inductive uncertainty into reliability theories has been recognized by many authors, no satisfactory inductive theory is presently available. The paper presents: a classification of uncertainties and of reliability models for reactor safety; a general methodology to include these uncertainties into reliability analysis; a discussion about the relative advantages and the limitations of various reliability theories (specifically, of inductive and deductive, parametric and nonparametric, second-moment and full-distribution theories). For example, it is shown that second-moment theories, which were originally suggested to cope with the scarcity of data, and which have been proposed recently for the safety analysis of secondary containment vessels, are the least capable of incorporating statistical uncertainty. The focus is on reliability models for external threats (seismic accelerations and tornadoes). As an application example, the effect of statistical uncertainty on seismic risk is studied using parametric full-distribution models

  18. Uncertainty in social dilemmas

    NARCIS (Netherlands)

    Kwaadsteniet, Erik Willem de

    2007-01-01

    This dissertation focuses on social dilemmas, and more specifically, on environmental uncertainty in these dilemmas. Real-life social dilemma situations are often characterized by uncertainty. For example, fishermen mostly do not know the exact size of the fish population (i.e., resource size

  19. Uncertainty and Climate Change

    OpenAIRE

    Berliner, L. Mark

    2003-01-01

    Anthropogenic, or human-induced, climate change is a critical issue in science and in the affairs of humankind. Though the target of substantial research, the conclusions of climate change studies remain subject to numerous uncertainties. This article presents a very brief review of the basic arguments regarding anthropogenic climate change with particular emphasis on uncertainty.

  20. Deterministic uncertainty analysis

    International Nuclear Information System (INIS)

    Worley, B.A.

    1987-01-01

    Uncertainties of computer results are of primary interest in applications such as high-level waste (HLW) repository performance assessment in which experimental validation is not possible or practical. This work presents an alternate deterministic approach for calculating uncertainties that has the potential to significantly reduce the number of computer runs required for conventional statistical analysis. 7 refs., 1 fig

  1. Uncertainty and simulation

    International Nuclear Information System (INIS)

    Depres, B.; Dossantos-Uzarralde, P.

    2009-01-01

    More than 150 researchers and engineers from universities and the industrial world met to discuss on the new methodologies developed around assessing uncertainty. About 20 papers were presented and the main topics were: methods to study the propagation of uncertainties, sensitivity analysis, nuclear data covariances or multi-parameter optimisation. This report gathers the contributions of CEA researchers and engineers

  2. Flocculation kinetics and aggregate structure of kaolinite mixtures in laminar tube flow.

    Science.gov (United States)

    Vaezi G, Farid; Sanders, R Sean; Masliyah, Jacob H

    2011-03-01

    Flocculation is commonly used in various solid-liquid separation processes in chemical and mineral industries to separate desired products or to treat waste streams. This paper presents an experimental technique to study flocculation processes in laminar tube flow. This approach allows for more realistic estimation of the shear rate to which an aggregate is exposed, as compared to more complicated shear fields (e.g. stirred tanks). A direct sampling method is used to minimize the effect of sampling on the aggregate structure. A combination of aggregate settling velocity and image analysis was used to quantify the structure of the aggregate. Aggregate size, density, and fractal dimension were found to be the most important aggregate structural parameters. The two methods used to determine aggregate fractal dimension were in good agreement. The effects of advective flow through an aggregate's porous structure and transition-regime drag coefficient on the evaluation of aggregate density were considered. The technique was applied to investigate the flocculation kinetics and the evolution of the aggregate structure of kaolin particles with an anionic flocculant under conditions similar to those of oil sands fine tailings. Aggregates were formed using a well-controlled two-stage aggregation process. Detailed statistical analysis was performed to investigate the establishment of a dynamic equilibrium condition in terms of aggregate size and density evolution. An equilibrium steady-state condition was obtained within 90 s of the start of flocculation, after which no further change in aggregate structure was observed. Although longer flocculation times inside the shear field could conceivably cause the aggregate structure to change, statistical analysis indicated that this did not occur for the studied conditions. The results show that the technique and experimental conditions employed here produce aggregates having a well-defined, reproducible structure.
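    A minimal sketch of one common way a mass fractal dimension can be extracted from measurements of aggregate size and effective density of the kind described (the power-law model is standard, but the numbers below are invented and this is not the authors' analysis): fit the scaling relation rho_eff ~ d**(d_f - 3) on logarithmic axes.

        import numpy as np

        # Hypothetical aggregate data: sizes (micrometres) and effective excess densities (kg/m^3).
        size = np.array([20., 35., 60., 90., 140., 200.])
        rho_eff = np.array([310., 190., 110., 72., 45., 30.])

        # Mass-fractal scaling: rho_eff ~ size**(d_f - 3), so the log-log slope gives d_f - 3.
        slope, intercept = np.polyfit(np.log(size), np.log(rho_eff), 1)
        d_f = slope + 3.0
        print(f"estimated fractal dimension d_f = {d_f:.2f}")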

  3. CSAU (Code Scaling, Applicability and Uncertainty)

    International Nuclear Information System (INIS)

    Wilson, G.E.; Boyack, B.E.

    1989-01-01

    Best Estimate computer codes have been accepted by the U.S. Nuclear Regulatory Commission as an optional tool for performing safety analysis related to the licensing and regulation of current nuclear reactors producing commercial electrical power, providing their uncertainty is quantified. In support of this policy change, the NRC and its contractors and consultants have developed and demonstrated an uncertainty quantification methodology called CSAU. The primary use of the CSAU methodology is to quantify safety margins for existing designs; however, the methodology can also serve an equally important role in advanced reactor research for plants not yet built. This paper describes the CSAU methodology, at the generic process level, and provides the general principles whereby it may be applied to evaluations of advanced reactor designs

  4. Conditional uncertainty principle

    Science.gov (United States)

    Gour, Gilad; Grudka, Andrzej; Horodecki, Michał; Kłobus, Waldemar; Łodyga, Justyna; Narasimhachar, Varun

    2018-04-01

    We develop a general operational framework that formalizes the concept of conditional uncertainty in a measure-independent fashion. Our formalism is built upon a mathematical relation which we call conditional majorization. We define conditional majorization and, for the case of classical memory, we provide its thorough characterization in terms of monotones, i.e., functions that preserve the partial order under conditional majorization. We demonstrate the application of this framework by deriving two types of memory-assisted uncertainty relations, (1) a monotone-based conditional uncertainty relation and (2) a universal measure-independent conditional uncertainty relation, both of which set a lower bound on the minimal uncertainty that Bob has about Alice's pair of incompatible measurements, conditioned on arbitrary measurement that Bob makes on his own system. We next compare the obtained relations with their existing entropic counterparts and find that they are at least independent.

  5. Physical Uncertainty Bounds (PUB)

    Energy Technology Data Exchange (ETDEWEB)

    Vaughan, Diane Elizabeth [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Preston, Dean L. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-03-19

    This paper introduces and motivates the need for a new methodology for determining upper bounds on the uncertainties in simulations of engineered systems due to limited fidelity in the composite continuum-level physics models needed to simulate the systems. We show that traditional uncertainty quantification methods provide, at best, a lower bound on this uncertainty. We propose to obtain bounds on the simulation uncertainties by first determining bounds on the physical quantities or processes relevant to system performance. By bounding these physics processes, as opposed to carrying out statistical analyses of the parameter sets of specific physics models or simply switching out the available physics models, one can obtain upper bounds on the uncertainties in simulated quantities of interest.

  6. Measurement uncertainty and probability

    CERN Document Server

    Willink, Robin

    2013-01-01

    A measurement result is incomplete without a statement of its 'uncertainty' or 'margin of error'. But what does this statement actually tell us? By examining the practical meaning of probability, this book discusses what is meant by a '95 percent interval of measurement uncertainty', and how such an interval can be calculated. The book argues that the concept of an unknown 'target value' is essential if probability is to be used as a tool for evaluating measurement uncertainty. It uses statistical concepts, such as a conditional confidence interval, to present 'extended' classical methods for evaluating measurement uncertainty. The use of the Monte Carlo principle for the simulation of experiments is described. Useful for researchers and graduate students, the book also discusses other philosophies relating to the evaluation of measurement uncertainty. It employs clear notation and language to avoid the confusion that exists in this controversial field of science.

  7. Estimating uncertainty of inference for validation

    Energy Technology Data Exchange (ETDEWEB)

    Booker, Jane M [Los Alamos National Laboratory; Langenbrunner, James R [Los Alamos National Laboratory; Hemez, Francois M [Los Alamos National Laboratory; Ross, Timothy J [UNM

    2010-09-30

    We present a validation process based upon the concept that validation is an inference-making activity. This has always been true, but the association has not been as important before as it is now. Previously, theory had been confirmed by more data, and predictions were possible based on data. The process today is to infer from theory to code and from code to prediction, making the role of prediction somewhat automatic, and a machine function. Validation is defined as determining the degree to which a model and code is an accurate representation of experimental test data. Imbedded in validation is the intention to use the computer code to predict. To predict is to accept the conclusion that an observable final state will manifest; therefore, prediction is an inference whose goodness relies on the validity of the code. Quantifying the uncertainty of a prediction amounts to quantifying the uncertainty of validation, and this involves the characterization of uncertainties inherent in theory/models/codes and the corresponding data. An introduction to inference making and its associated uncertainty is provided as a foundation for the validation problem. A mathematical construction for estimating the uncertainty in the validation inference is then presented, including a possibility distribution constructed to represent the inference uncertainty for validation under uncertainty. The estimation of inference uncertainty for validation is illustrated using data and calculations from Inertial Confinement Fusion (ICF). The ICF measurements of neutron yield and ion temperature were obtained for direct-drive inertial fusion capsules at the Omega laser facility. The glass capsules, containing the fusion gas, were systematically selected with the intent of establishing a reproducible baseline of high-yield 10{sup 13}-10{sup 14} neutron output. The deuterium-tritium ratio in these experiments was varied to study its influence upon yield. This paper on validation inference is the

  8. Understanding Climate Uncertainty with an Ocean Focus

    Science.gov (United States)

    Tokmakian, R. T.

    2009-12-01

    Uncertainty in climate simulations arises from various aspects of the end-to-end process of modeling the Earth’s climate. First, there is uncertainty from the structure of the climate model components (e.g. ocean/ice/atmosphere). Even the most complex models are deficient, not only in the complexity of the processes they represent, but in which processes are included in a particular model. Next, uncertainties arise from the inherent error in the initial and boundary conditions of a simulation. Initial conditions describe the state of the weather or climate at the beginning of the simulation and typically come from observations. Finally, there is the uncertainty associated with the values of parameters in the model. These parameters may represent physical constants or effects, such as ocean mixing, or non-physical aspects of modeling and computation. The uncertainty in these input parameters propagates through the non-linear model to give uncertainty in the outputs. The models in 2020 will no doubt be better than today’s models, but they will still be imperfect, and development of uncertainty analysis technology is a critical aspect of understanding model realism and prediction capability. Smith [2002] and Cox and Stephenson [2007] discuss the need for methods to quantify the uncertainties within complicated systems so that limitations or weaknesses of the climate model can be understood. In making climate predictions, we need to have available both the most reliable model or simulation and methods to quantify the reliability of a simulation. If quantitative uncertainty questions of the internal model dynamics are to be answered with complex simulations such as AOGCMs, then the only known path forward is based on model ensembles that characterize behavior with alternative parameter settings [e.g. Rougier, 2007]. The relevance and feasibility of using "Statistical Analysis of Computer Code Output" (SACCO) methods for examining uncertainty in

  9. Long Memory, Fractional Integration, and Cross-Sectional Aggregation

    DEFF Research Database (Denmark)

    Haldrup, Niels; Vera-Valdés, Eduardo

    under certain conditions and that the aggregated series will have an autocorrelation function that exhibits hyperbolic decay. In this paper, we further analyze this phenomenon. We demonstrate that the aggregation argument leading to long memory is consistent with a wide range of definitions of long...... memory. In a simulation study we seek to quantify Granger's result and find that indeed both the time series and cross-sectional dimensions have to be rather significant to reflect the theoretical asymptotic results. Long memory can result even for moderate T,N dimensions but can vary considerably from...

  10. Characterization of Diesel Soot Aggregates by Scattering and Extinction Methods

    Science.gov (United States)

    Kamimoto, Takeyuki

    2006-07-01

    Characteristics of diesel soot particles sampled from the exhaust of a common-rail turbo-charged diesel engine are quantified by scattering and extinction diagnostics using two newly built laser-based instruments. The radius of gyration representing the aggregate size is measured from the angular distribution of scattering intensity, while the soot mass concentration is measured by a two-wavelength extinction method. An approach to estimate the refractive index of diesel soot by an analysis of the extinction and scattering data using an aggregate scattering theory is proposed.

  11. Characterization of Diesel Soot Aggregates by Scattering and Extinction Methods

    International Nuclear Information System (INIS)

    Kamimoto, Takeyuki

    2006-01-01

    Characteristics of diesel soot particles sampled from the exhaust of a common-rail turbo-charged diesel engine are quantified by scattering and extinction diagnostics using two newly built laser-based instruments. The radius of gyration representing the aggregate size is measured from the angular distribution of scattering intensity, while the soot mass concentration is measured by a two-wavelength extinction method. An approach to estimate the refractive index of diesel soot by an analysis of the extinction and scattering data using an aggregate scattering theory is proposed.
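    As a hedged sketch of the standard Guinier-type analysis implied by an angular scattering measurement (not necessarily the authors' exact procedure; the wavelength, angles, and aggregate size below are synthetic), the radius of gyration follows from the small-angle slope of ln I versus q^2, with q = (4*pi/lambda)*sin(theta/2):

        import numpy as np

        wavelength = 532e-9                        # m, laser wavelength (illustrative)
        theta = np.deg2rad(np.arange(5, 26, 5))    # scattering angles, degrees -> radians
        q = 4.0 * np.pi / wavelength * np.sin(theta / 2.0)

        Rg_true = 150e-9                           # m, synthetic radius of gyration
        I = np.exp(-(q * Rg_true) ** 2 / 3.0)      # Guinier regime: ln I = const - q^2 Rg^2 / 3

        slope, _ = np.polyfit(q ** 2, np.log(I), 1)
        Rg_est = np.sqrt(-3.0 * slope)
        print(f"estimated radius of gyration = {Rg_est * 1e9:.0f} nm")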

  12. Development and application of objective uncertainty measures for nuclear power plant transient analysis [Dissertation 3897]

    Energy Technology Data Exchange (ETDEWEB)

    Vinai, P

    2007-10-15

    ) plant transient in which the void feedback mechanism plays an important role. In all three cases, it has been shown that a more detailed, realistic and accurate representation of output uncertainty can be achieved with the proposed methodology than is possible with an 'expert-opinion' approach. Moreover, the importance of state-space partitioning has been clearly brought out by comparing results with those obtained assuming a single pdf for the entire database. The analysis of the Omega integral test has demonstrated that the drift-flux model's uncertainty remains important even when other representative uncertainties are introduced. The developed methodology retains its advantageous features when different uncertainty sources are considered. The Peach Bottom turbine trip study represents a valuable demonstration of the applicability of the developed methodology to NPP transient analysis. In this application, the novel density estimator was also employed for estimating the pdf that underlies the uncertainty of the maximum power during the transient. The results obtained provide more detailed insights than can be gained from the 'classical' approach. Another feature of the turbine trip analysis has been a qualitative study of the impact of possible neutronics cross-section uncertainties on the power calculation. Besides the important influence of the uncertainty in void fraction predictions on the accuracy of the coupled transient's simulation, the uncertainties in neutronics parameters and models can be crucial as well. This points to the need to quantify uncertainties in neutronics calculations and to aggregate them with those assessed for the thermal-hydraulic phenomena when simulating such multi-physics transients.

  13. Development and application of objective uncertainty measures for nuclear power plant transient analysis

    International Nuclear Information System (INIS)

    Vinai, P.

    2007-10-01

    an important role. In all three cases, it has been shown that a more detailed, realistic and accurate representation of output uncertainty can be achieved with the proposed methodology than is possible with an 'expert-opinion' approach. Moreover, the importance of state-space partitioning has been clearly brought out by comparing results with those obtained assuming a single pdf for the entire database. The analysis of the Omega integral test has demonstrated that the drift-flux model's uncertainty remains important even when other representative uncertainties are introduced. The developed methodology retains its advantageous features when different uncertainty sources are considered. The Peach Bottom turbine trip study represents a valuable demonstration of the applicability of the developed methodology to NPP transient analysis. In this application, the novel density estimator was also employed for estimating the pdf that underlies the uncertainty of the maximum power during the transient. The results obtained provide more detailed insights than can be gained from the 'classical' approach. Another feature of the turbine trip analysis has been a qualitative study of the impact of possible neutronics cross-section uncertainties on the power calculation. Besides the important influence of the uncertainty in void fraction predictions on the accuracy of the coupled transient's simulation, the uncertainties in neutronics parameters and models can be crucial as well. This points to the need to quantify uncertainties in neutronics calculations and to aggregate them with those assessed for the thermal-hydraulic phenomena when simulating such multi-physics transients.

  14. Uncertainty of block estimates introduced by mis-allocation of point samples: on the example of spatial indoor Radon data

    Energy Technology Data Exchange (ETDEWEB)

    Bossew, P. [European Commission (Ecuador), Joint Research Centre (JRC), Institute for Environment and Sustainability (IES), TP441, Via Fermi 1, I-21020 Ispra (Italy)], E-mail: peter.bossew@jrc.it

    2009-03-15

    The European indoor Radon map, which is currently under production, is based on gridded data supplied by the contributing countries. Each grid node represents the arithmetic mean (among other statistics) of the individual measurements within 10 x 10 km{sup 2}, called cells, pixels or blocks, which are aligned to a common metric coordinate system. During this work the question emerged whether uncertainty in the geo-referencing of individual data might affect the aggregated 'block' statistics to such an extent that they carry an unpredictably high additional uncertainty, making them unusable. In this note we try to quantify the effect, based on simulations. The overall result is that the relevant statistics should not be affected too badly in most cases, in particular if the rate of mis-allocations and the mean coordinate uncertainty are not too high, so that even cell statistics that are somewhat distorted by mis-allocated data can still be used for the purpose of the European Radon map.
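    A simplified sketch in the spirit of such a simulation (all settings here are invented: a synthetic lognormal radon field, a 5% mis-allocation rate, and 3 km Gaussian coordinate errors): perturb the coordinates of a fraction of the points and compare the resulting 10 x 10 km cell means with the true ones.

        import numpy as np

        rng = np.random.default_rng(3)
        cell = 10_000.0                       # 10 x 10 km cells, metres
        extent = 100_000.0                    # 100 km x 100 km synthetic domain
        n_pts = 20_000

        x = rng.uniform(0, extent, n_pts)
        y = rng.uniform(0, extent, n_pts)
        conc = rng.lognormal(mean=4.0, sigma=0.8, size=n_pts)   # synthetic indoor radon, Bq/m^3

        def cell_means(px, py, values):
            """Arithmetic mean of the point values falling into each grid cell."""
            ix = (px // cell).astype(int)
            iy = (py // cell).astype(int)
            n_cells = int(extent // cell)
            sums = np.zeros((n_cells, n_cells))
            counts = np.zeros((n_cells, n_cells))
            np.add.at(sums, (ix, iy), values)
            np.add.at(counts, (ix, iy), 1)
            return sums / np.maximum(counts, 1)

        true_means = cell_means(x, y, conc)

        # Mis-allocate 5% of the points with a 3 km coordinate error (illustrative values).
        bad = rng.random(n_pts) < 0.05
        x_err = np.clip(x + np.where(bad, rng.normal(0, 3_000.0, n_pts), 0.0), 0, extent - 1)
        y_err = np.clip(y + np.where(bad, rng.normal(0, 3_000.0, n_pts), 0.0), 0, extent - 1)

        distorted_means = cell_means(x_err, y_err, conc)
        rel_err = np.abs(distorted_means - true_means) / true_means
        print(f"median relative error of cell means: {np.median(rel_err):.3%}")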

  15. Marketable pollution permits with uncertainty and transaction costs

    International Nuclear Information System (INIS)

    Montero, Juan-Pablo

    1998-01-01

    Increasing interest in the use of marketable permits for pollution control has become evident in recent years. Concern regarding their performance still remains because empirical evidence has shown transaction costs and uncertainty to be significant in past and existing marketable permits programs. In this paper we develop theoretical and numerical models that include transaction costs and uncertainty (in trade approval) to show their effects on market performance (i.e., equilibrium price of permits and trading volume) and aggregate control costs. We also show that in the presence of transaction costs and uncertainty the initial allocation of permits may not be neutral in terms of efficiency. Furthermore, using a numerical model for a hypothetical NO x trading program in which participants have discrete control technology choices, we find that aggregate control costs and the equilibrium price of permits are sensitive to the initial allocation of permits, even for constant marginal transaction costs and certainty

  16. Managing uncertainty in flood protection planning with climate projections

    Science.gov (United States)

    Dittes, Beatrice; Špačková, Olga; Schoppa, Lukas; Straub, Daniel

    2018-04-01

    Technical flood protection is a necessary part of integrated strategies to protect riverine settlements from extreme floods. Many technical flood protection measures, such as dikes and protection walls, are costly to adapt after their initial construction. This poses a challenge to decision makers as there is large uncertainty in how the required protection level will change during the measure lifetime, which is typically many decades long. Flood protection requirements should account for multiple future uncertain factors: socioeconomic, e.g., whether the population and with it the damage potential grows or falls; technological, e.g., possible advancements in flood protection; and climatic, e.g., whether extreme discharge will become more frequent or not. This paper focuses on climatic uncertainty. Specifically, we devise methodology to account for uncertainty associated with the use of discharge projections, ultimately leading to planning implications. For planning purposes, we categorize uncertainties as either visible, if they can be quantified from available catchment data, or hidden, if they cannot be quantified from catchment data and must be estimated, e.g., from the literature. It is vital to consider the hidden uncertainty, since in practical applications only a limited amount of information (e.g., a finite projection ensemble) is available. We use a Bayesian approach to quantify the visible uncertainties and combine them with an estimate of the hidden uncertainties to learn a joint probability distribution of the parameters of extreme discharge. The methodology is integrated into an optimization framework and applied to a pre-alpine case study to give a quantitative, cost-optimal recommendation on the required amount of flood protection. The results show that hidden uncertainty ought to be considered in planning, but the larger the uncertainty already present, the smaller the impact of adding more. The recommended planning is robust to moderate changes in
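    As a highly simplified sketch of the idea of combining the 'visible' spread of a finite projection ensemble with an estimate of the 'hidden' uncertainty (this is not the paper's Bayesian model; the ensemble values, the hidden-uncertainty magnitude, and the normal assumption are all invented): the ensemble-based variance of a design discharge is inflated by the assumed hidden variance before a protection level is chosen.

        import numpy as np
        from scipy.stats import norm

        # Visible uncertainty: spread of a small ensemble of projected 100-year discharges.
        ensemble_q100 = np.array([950., 1020., 1100., 880., 1005.])   # m^3/s, illustrative
        mean_q = ensemble_q100.mean()
        var_visible = ensemble_q100.var(ddof=1)

        # Hidden uncertainty: variance that cannot be quantified from the ensemble itself,
        # e.g. estimated from the literature (purely assumed value here).
        var_hidden = 80.0 ** 2

        sigma_total = np.sqrt(var_visible + var_hidden)

        # Plan for, e.g., the 90th percentile of the combined (assumed normal) distribution.
        design_q = norm.ppf(0.90, loc=mean_q, scale=sigma_total)
        print(f"design discharge = {design_q:.0f} m^3/s")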

  17. Small break LOCA RELAP5/MOD3 uncertainty quantification: Bias and uncertainty evaluation for important phenomena

    International Nuclear Information System (INIS)

    Ortiz, M.G.; Ghan, L.S.; Vogl, J.

    1991-01-01

    The Nuclear Regulatory Commission (NRC) revised the Emergency Core Cooling System (ECCS) licensing rule to allow the use of Best Estimate (BE) computer codes, provided the uncertainty of the calculations are quantified and used in the licensing and regulation process. The NRC developed a generic methodology called Code Scaling, Applicability and Uncertainty (CSAU) to evaluate BE code uncertainties. The CSAU methodology was demonstrated with a specific application to a pressurized water reactor (PWR), experiencing a postulated large break loss-of-coolant accident (LBLOCA). The current work is part of an effort to adapt and demonstrate the CSAU methodology to a small break (SB) LOCA in a PWR of B and W design using RELAP5/MOD3 as the simulation tool. The subject of this paper is the Assessment and Ranging of Parameters (Element 2 of the CSAU methodology), which determines the contribution to uncertainty of specific models in the code

  18. Summary from the epistemic uncertainty workshop: consensus amid diversity

    International Nuclear Information System (INIS)

    Ferson, Scott; Joslyn, Cliff A.; Helton, Jon C.; Oberkampf, William L.; Sentz, Kari

    2004-01-01

    The 'Epistemic Uncertainty Workshop' sponsored by Sandia National Laboratories was held in Albuquerque, New Mexico, on 6-7 August 2002. The workshop was organized around a set of Challenge Problems involving both epistemic and aleatory uncertainty that the workshop participants were invited to solve and discuss. This concluding article in a special issue of Reliability Engineering and System Safety based on the workshop discusses the intent of the Challenge Problems, summarizes some discussions from the workshop, and provides a technical comparison among the papers in this special issue. The Challenge Problems were computationally simple models that were intended as vehicles for the illustration and comparison of conceptual and numerical techniques for use in analyses that involve: (i) epistemic uncertainty, (ii) aggregation of multiple characterizations of epistemic uncertainty, (iii) combination of epistemic and aleatory uncertainty, and (iv) models with repeated parameters. There was considerable diversity of opinion at the workshop about both methods and fundamental issues, and yet substantial consensus about what the answers to the problems were, and even about how each of the four issues should be addressed. Among the technical approaches advanced were probability theory, Dempster-Shafer evidence theory, random sets, sets of probability measures, imprecise coherent probabilities, coherent lower previsions, probability boxes, possibility theory, fuzzy sets, joint distribution tableaux, polynomial chaos expansions, and info-gap models. Although some participants maintained that a purely probabilistic approach is fully capable of accounting for all forms of uncertainty, most agreed that the treatment of epistemic uncertainty introduces important considerations and that the issues underlying the Challenge Problems are legitimate and significant. Topics identified as meriting additional research include elicitation of uncertainty representations, aggregation of

  19. Quantifying global exergy resources

    International Nuclear Information System (INIS)

    Hermann, Weston A.

    2006-01-01

    Exergy is used as a common currency to assess and compare the reservoirs of theoretically extractable work we call energy resources. Resources consist of matter or energy with properties different from the predominant conditions in the environment. These differences can be classified as physical, chemical, or nuclear exergy. This paper identifies the primary exergy reservoirs that supply exergy to the biosphere and quantifies the intensive and extensive exergy of their derivative secondary reservoirs, or resources. The interconnecting accumulations and flows among these reservoirs are illustrated to show the path of exergy through the terrestrial system from input to its eventual natural or anthropogenic destruction. The results are intended to assist in evaluation of current resource utilization, help guide fundamental research to enable promising new energy technologies, and provide a basis for comparing the resource potential of future energy options that is independent of technology and cost

  20. Uncertainty Detection for NIF Normal Pointing Images

    International Nuclear Information System (INIS)

    Awwal, A S; Law, C; Ferguson, S W

    2007-01-01

    The National Ignition Facility at Lawrence Livermore National Laboratory, when completed in 2009, will deliver 192 beams aligned precisely at the center of the target chamber, producing extreme energy densities and pressures. Video images of laser beams along the beam path are used by automatic alignment algorithms to determine the position of the beams for alignment purposes. However, noise and other optical effects may affect the accuracy of the calculated beam location. Realistic estimation of the uncertainty is necessary to assure that the beam is monitored within the clear optical path. When the uncertainty is above a certain threshold, the automated alignment operation is suspended and control of the beam is transferred to a human operator. This work describes our effort to quantify the uncertainty of measurement of the most common alignment beam
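    A hedged illustration of one way a beam-position uncertainty can be quantified from image noise alone (this is not the NIF algorithm; the synthetic spot, noise level, and threshold are invented): estimate the spread of an intensity-weighted centroid under repeated realizations of the noise model, and flag the measurement when that spread exceeds a threshold.

        import numpy as np

        rng = np.random.default_rng(4)

        # Synthetic beam image: Gaussian spot on a 64x64 detector plus additive noise.
        ny, nx = 64, 64
        yy, xx = np.mgrid[0:ny, 0:nx]
        spot = np.exp(-((xx - 30.3) ** 2 + (yy - 25.7) ** 2) / (2.0 * 4.0 ** 2))

        def centroid(img):
            """Intensity-weighted centroid of a (non-negative) image."""
            w = np.clip(img, 0.0, None)
            return (w * xx).sum() / w.sum(), (w * yy).sum() / w.sum()

        # Monte Carlo over the noise model to estimate the centroid uncertainty.
        estimates = np.array([centroid(spot + rng.normal(0, 0.05, spot.shape))
                              for _ in range(200)])
        sigma_x, sigma_y = estimates.std(axis=0)
        print(f"centroid uncertainty = ({sigma_x:.3f}, {sigma_y:.3f}) pixels")

        threshold = 0.5  # pixels; hand control to an operator if exceeded (illustrative)
        print("suspend automatic alignment:", max(sigma_x, sigma_y) > threshold)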

  1. Development of a Dynamic Lidar Uncertainty Framework

    Energy Technology Data Exchange (ETDEWEB)

    Newman, Jennifer [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Clifton, Andrew [WindForS; Bonin, Timothy [CIRES/NOAA ESRL; Choukulkar, Aditya [CIRES/NOAA ESRL; Brewer, W. Alan [NOAA ESRL; Delgado, Ruben [University of Maryland Baltimore County

    2017-08-07

    As wind turbine sizes increase and wind energy expands to more complex and remote sites, remote-sensing devices such as lidars are expected to play a key role in wind resource assessment and power performance testing. The switch to remote-sensing devices represents a paradigm shift in the way the wind industry typically obtains and interprets measurement data for wind energy. For example, the measurement techniques and sources of uncertainty for a remote-sensing device are vastly different from those associated with a cup anemometer on a meteorological tower. Current IEC standards for quantifying remote sensing device uncertainty for power performance testing consider uncertainty due to mounting, calibration, and classification of the remote sensing device, among other parameters. Values of the uncertainty are typically given as a function of the mean wind speed measured by a reference device and are generally fixed, leading to climatic uncertainty values that apply to the entire measurement campaign. However, real-world experience and a consideration of the fundamentals of the measurement process have shown that lidar performance is highly dependent on atmospheric conditions, such as wind shear, turbulence, and aerosol content. At present, these conditions are not directly incorporated into the estimated uncertainty of a lidar device. In this presentation, we describe the development of a new dynamic lidar uncertainty framework that adapts to current flow conditions and more accurately represents the actual uncertainty inherent in lidar measurements under different conditions. In this new framework, sources of uncertainty are identified for estimation of the line-of-sight wind speed and reconstruction of the three-dimensional wind field. These sources are then related to physical processes caused by the atmosphere and lidar operating conditions. The framework is applied to lidar data from a field measurement site to assess the ability of the framework to predict

  2. Uncertainty information in climate data records from Earth observation

    Science.gov (United States)

    Merchant, Christopher J.; Paul, Frank; Popp, Thomas; Ablain, Michael; Bontemps, Sophie; Defourny, Pierre; Hollmann, Rainer; Lavergne, Thomas; Laeng, Alexandra; de Leeuw, Gerrit; Mittaz, Jonathan; Poulsen, Caroline; Povey, Adam C.; Reuter, Max; Sathyendranath, Shubha; Sandven, Stein; Sofieva, Viktoria F.; Wagner, Wolfgang

    2017-07-01

    The question of how to derive and present uncertainty information in climate data records (CDRs) has received sustained attention within the European Space Agency Climate Change Initiative (CCI), a programme to generate CDRs addressing a range of essential climate variables (ECVs) from satellite data. Here, we review the nature, mathematics, practicalities, and communication of uncertainty information in CDRs from Earth observations. This review paper argues that CDRs derived from satellite-based Earth observation (EO) should include rigorous uncertainty information to support the application of the data in contexts such as policy, climate modelling, and numerical weather prediction reanalysis. Uncertainty, error, and quality are distinct concepts, and the case is made that CDR products should follow international metrological norms for presenting quantified uncertainty. As a baseline for good practice, total standard uncertainty should be quantified per datum in a CDR, meaning that uncertainty estimates should clearly discriminate more and less certain data. In this case, flags for data quality should not duplicate uncertainty information, but instead describe complementary information (such as the confidence in the uncertainty estimate provided or indicators of conditions violating the retrieval assumptions). The paper discusses the many sources of error in CDRs, noting that different errors may be correlated across a wide range of timescales and space scales. Error effects that contribute negligibly to the total uncertainty in a single-satellite measurement can be the dominant sources of uncertainty in a CDR on the large space scales and long timescales that are highly relevant for some climate applications. For this reason, identifying and characterizing the relevant sources of uncertainty for CDRs is particularly challenging. The characterization of uncertainty caused by a given error effect involves assessing the magnitude of the effect, the shape of the

  3. Topics in Probabilistic Judgment Aggregation

    Science.gov (United States)

    Wang, Guanchun

    2011-01-01

    This dissertation is a compilation of several studies that are united by their relevance to probabilistic judgment aggregation. In the face of complex and uncertain events, panels of judges are frequently consulted to provide probabilistic forecasts, and aggregation of such estimates in groups often yield better results than could have been made…

  4. Collective Rationality in Graph Aggregation

    NARCIS (Netherlands)

    Endriss, U.; Grandi, U.; Schaub, T.; Friedrich, G.; O'Sullivan, B.

    2014-01-01

    Suppose a number of agents each provide us with a directed graph over a common set of vertices. Graph aggregation is the problem of computing a single “collective” graph that best represents the information inherent in this profile of individual graphs. We consider this aggregation problem from the

  5. Probability and uncertainty in nuclear safety decisions

    International Nuclear Information System (INIS)

    Pate-Cornell, M.E.

    1986-01-01

    In this paper, we examine some problems posed by the use of probabilities in nuclear safety decisions. We discuss some of the theoretical difficulties due to the collective nature of regulatory decisions, and, in particular, the calibration and the aggregation of risk information (e.g., expert opinions). We argue that, if one chooses numerical safety goals as a regulatory basis, one can reduce the constraints to an individual safety goal and a cost-benefit criterion. We show the relevance of risk uncertainties in this kind of regulatory framework. We conclude that, whereas expected values of future failure frequencies are adequate to show compliance with economic constraints, the use of a fractile (e.g., 95%) to be specified by the regulatory agency is justified to treat hazard uncertainties for the individual safety goal. (orig.)
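    A small numerical illustration of the distinction drawn above (the lognormal parameters are assumed, not taken from the paper): for a right-skewed uncertainty distribution of a failure frequency, the 95th fractile can exceed the expected value by a large factor, so which statistic the regulator specifies matters.

        import numpy as np

        rng = np.random.default_rng(5)
        # Assumed lognormal uncertainty on a failure frequency (per reactor-year).
        freq = rng.lognormal(mean=np.log(1e-5), sigma=1.2, size=1_000_000)

        print(f"expected value: {freq.mean():.2e} per year")
        print(f"median        : {np.median(freq):.2e} per year")
        print(f"95th fractile : {np.percentile(freq, 95):.2e} per year")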

  6. Uncertainty Propagation in OMFIT

    Science.gov (United States)

    Smith, Sterling; Meneghini, Orso; Sung, Choongki

    2017-10-01

    A rigorous comparison of power balance fluxes and turbulent model fluxes requires the propagation of uncertainties in the kinetic profiles and their derivatives. Making extensive use of the python uncertainties package, the OMFIT framework has been used to propagate covariant uncertainties to provide an uncertainty in the power balance calculation from the ONETWO code, as well as through the turbulent fluxes calculated by the TGLF code. The covariant uncertainties arise from fitting 1D (constant on flux surface) density and temperature profiles and associated random errors with parameterized functions such as a modified tanh. The power balance and model fluxes can then be compared with quantification of the uncertainties. No effort is made at propagating systematic errors. A case study will be shown for the effects of resonant magnetic perturbations on the kinetic profiles and fluxes at the top of the pedestal. A separate attempt at modeling the random errors with Monte Carlo sampling will be compared to the method of propagating the fitting function parameter covariant uncertainties. Work supported by US DOE under DE-FC02-04ER54698, DE-FG2-95ER-54309, DE-SC 0012656.
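    The abstract refers to the python uncertainties package; the short standalone sketch below (not OMFIT code; the fit parameters, covariance matrix, and the derived 'gradient' are invented) shows the basic mechanism that package provides, namely propagation of covariant fit-parameter uncertainties through derived quantities.

        import numpy as np
        from uncertainties import correlated_values, umath

        # Suppose a tanh-like profile fit returned two parameters with a covariance matrix
        # (numbers are purely illustrative).
        nominal = [1.2, 0.05]                          # e.g. a pedestal height and width
        cov = np.array([[0.0100, 0.0004],
                        [0.0004, 0.0001]])
        height, width = correlated_values(nominal, cov)

        # Propagate to a derived quantity; correlations are carried along automatically.
        gradient = height / width
        print(gradient)                                 # nominal value +/- 1-sigma uncertainty
        print(umath.log(height))                        # nonlinear functions propagate as well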

  7. Uncertainty Propagation in Hypersonic Vehicle Aerothermoelastic Analysis

    Science.gov (United States)

    Lamorte, Nicolas Etienne

    Hypersonic vehicles face a challenging flight environment. The aerothermoelastic analysis of their components requires numerous simplifying approximations. Identifying and quantifying the effect of uncertainties pushes the limits of the existing deterministic models, and is pursued in this work. An uncertainty quantification framework is used to propagate the effects of identified uncertainties on the stability margins and performance of the different systems considered. First, the aeroelastic stability of a typical section representative of a control surface on a hypersonic vehicle is examined. Variability in the uncoupled natural frequencies of the system is modeled to mimic the effect of aerodynamic heating. Next, the stability of an aerodynamically heated panel representing a component of the skin of a generic hypersonic vehicle is considered. Uncertainty in the location of transition from laminar to turbulent flow and the heat flux prediction is quantified using CFD. In both cases significant reductions of the stability margins are observed. A loosely coupled airframe-integrated scramjet engine is considered next. The elongated body and cowl of the engine flow path are subject to harsh aerothermodynamic loading which causes them to deform. Uncertainty associated with deformation prediction is propagated to the engine performance analysis. The cowl deformation is the main contributor to the sensitivity of the propulsion system performance. Finally, a framework for aerothermoelastic stability boundary calculation for hypersonic vehicles using CFD is developed. The usage of CFD enables one to consider different turbulence conditions, laminar or turbulent, and different models of the air mixture, in particular a real gas model which accounts for dissociation of molecules at high temperature. The system is found to be sensitive to turbulence modeling as well as the location of the transition from laminar to turbulent flow. Real gas effects play a minor role in the

  8. Orthogonal flexible Rydberg aggregates

    Science.gov (United States)

    Leonhardt, K.; Wüster, S.; Rost, J. M.

    2016-02-01

    We study the link between atomic motion and exciton transport in flexible Rydberg aggregates, assemblies of highly excited light alkali-metal atoms, for which motion due to dipole-dipole interaction becomes relevant. In two one-dimensional atom chains crossing at a right angle adiabatic exciton transport is affected by a conical intersection of excitonic energy surfaces, which induces controllable nonadiabatic effects. A joint exciton-motion pulse that is initially governed by a single energy surface is coherently split into two modes after crossing the intersection. The modes induce strongly different atomic motion, leading to clear signatures of nonadiabatic effects in atomic density profiles. We have shown how this scenario can be exploited as an exciton switch, controlling direction and coherence properties of the joint pulse on the second of the chains [K. Leonhardt et al., Phys. Rev. Lett. 113, 223001 (2014), 10.1103/PhysRevLett.113.223001]. In this article we discuss the underlying complex dynamics in detail, characterize the switch, and derive our isotropic interaction model from a realistic anisotropic one with the addition of a magnetic bias field.

  9. Verification of uncertainty budgets

    DEFF Research Database (Denmark)

    Heydorn, Kaj; Madsen, B.S.

    2005-01-01

    , and therefore it is essential that the applicability of the overall uncertainty budget to actual measurement results be verified on the basis of current experimental data. This should be carried out by replicate analysis of samples taken in accordance with the definition of the measurand, but representing...... the full range of matrices and concentrations for which the budget is assumed to be valid. In this way the assumptions made in the uncertainty budget can be experimentally verified, both as regards sources of variability that are assumed negligible, and dominant uncertainty components. Agreement between...

  10. Uncertainties in projecting climate-change impacts in marine ecosystems

    DEFF Research Database (Denmark)

    Payne, Mark; Barange, Manuel; Cheung, William W. L.

    2016-01-01

    with a projection and building confidence in its robustness. We review how uncertainties in such projections are handled in marine science. We employ an approach developed in climate modelling by breaking uncertainty down into (i) structural (model) uncertainty, (ii) initialization and internal variability......Projections of the impacts of climate change on marine ecosystems are a key prerequisite for the planning of adaptation strategies, yet they are inevitably associated with uncertainty. Identifying, quantifying, and communicating this uncertainty is key to both evaluating the risk associated...... and highlight the opportunities and challenges associated with doing a better job. We find that even within a relatively small field such as marine science, there are substantial differences between subdisciplines in the degree of attention given to each type of uncertainty. We find that initialization...

  11. A Unified Approach for Reporting ARM Measurement Uncertainties Technical Report

    Energy Technology Data Exchange (ETDEWEB)

    Campos, E [Argonne National Lab. (ANL), Argonne, IL (United States); Sisterson, Douglas [Argonne National Lab. (ANL), Argonne, IL (United States)

    2016-12-01

    The U.S. Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Climate Research Facility is observationally based, and quantifying the uncertainty of its measurements is critically important. With over 300 widely differing instruments providing over 2,500 datastreams, concise expression of measurement uncertainty is quite challenging. The ARM Facility currently provides data and supporting metadata (information about the data or data quality) to its users through a number of sources. Because the continued success of the ARM Facility depends on the known quality of its measurements, the Facility relies on instrument mentors and the ARM Data Quality Office (DQO) to ensure, assess, and report measurement quality. Therefore, an easily accessible, well-articulated estimate of ARM measurement uncertainty is needed. Note that some of the instrument observations require mathematical algorithms (retrievals) to convert a measured engineering variable into a useful geophysical measurement. While those types of retrieval measurements are identified, this study does not address particular methods for retrieval uncertainty. As well, the ARM Facility also provides engineered data products, or value-added products (VAPs), based on multiple instrument measurements. This study does not include uncertainty estimates for those data products. We propose here that a total measurement uncertainty should be calculated as a function of the instrument uncertainty (calibration factors), the field uncertainty (environmental factors), and the retrieval uncertainty (algorithm factors). The study will not expand on methods for computing these uncertainties. Instead, it will focus on the practical identification, characterization, and inventory of the measurement uncertainties already available in the ARM community through the ARM instrument mentors and their ARM instrument handbooks. As a result, this study will address the first steps towards reporting ARM measurement uncertainty
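    The proposed combination can be written down directly. A minimal sketch (the partitioning into instrument, field, and retrieval terms follows the text; the numerical values and the independence assumption are invented, and this is not ARM policy) combines the component standard uncertainties in quadrature:

        import math

        # Illustrative component standard uncertainties for one measurement (same units).
        u_instrument = 0.30   # calibration factors
        u_field      = 0.20   # environmental factors
        u_retrieval  = 0.45   # algorithm factors, if a retrieval is involved

        # Independent components combine in quadrature (an assumption, not an ARM rule).
        u_total = math.sqrt(u_instrument**2 + u_field**2 + u_retrieval**2)
        print(f"total measurement uncertainty = {u_total:.2f}")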

  12. Uncertainties in risk assessment and decision making

    International Nuclear Information System (INIS)

    Starzec, Peter; Purucker, Tom; Stewart, Robert

    2008-02-01

    The general concept for risk assessment in accordance with the Swedish model for contaminated soil implies that the toxicological reference value for a given receptor is first back-calculated to a corresponding concentration of a compound in soil and (if applicable) then modified with respect to e.g. background levels, acute toxicity, and factor of safety. This results in a guideline value that is subsequently compared to the observed concentration levels. Many sources of uncertainty exist when assessing whether the risk for a receptor is significant or not. In this study, the uncertainty aspects have been addressed from three standpoints: 1. Uncertainty in the comparison between the level of contamination (source) and a given risk criterion (e.g. a guideline value) and possible implications on subsequent decisions. This type of uncertainty is considered to be most important in situations where a contaminant is expected to be spatially heterogeneous without any tendency to form isolated clusters (hotspots) that can be easily delineated, i.e. where mean values are appropriate to compare to the risk criterion. 2. Uncertainty in spatial distribution of a contaminant. Spatial uncertainty should be accounted for when hotspots are to be delineated and the volume of soil contaminated with levels above a stated decision criterion has to be assessed (quantified). 3. Uncertainty in an ecological exposure model with regard to the movement pattern of a receptor in relation to the spatial distribution of the contaminant in question. The study points out that the choice of methodology to characterize the relation between contaminant concentration and a pre-defined risk criterion is governed by a conceptual perception of the contaminant's spatial distribution and also depends on the structure of collected data (observations). How uncertainty in the transition from contaminant concentration to risk criterion can be quantified was demonstrated by applying hypothesis tests and the concept of
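
    The record mentions hypothesis tests for deciding whether a mean contaminant concentration exceeds a risk criterion. A minimal sketch of that idea with hypothetical concentration data and a hypothetical guideline value (the study's actual tests and data are not reproduced here):

```python
import numpy as np
from scipy import stats

# Hypothetical soil concentrations (mg/kg) and guideline value.
concentrations = np.array([42.0, 55.0, 61.0, 38.0, 70.0, 49.0, 58.0, 44.0])
guideline = 50.0

# One-sample t-test of the site mean against the guideline; the one-sided
# p-value asks how strong the evidence is that the mean exceeds the criterion.
t_stat, p_two_sided = stats.ttest_1samp(concentrations, popmean=guideline)
p_one_sided = p_two_sided / 2 if t_stat > 0 else 1 - p_two_sided / 2
print(f"mean = {concentrations.mean():.1f} mg/kg, t = {t_stat:.2f}, one-sided p = {p_one_sided:.3f}")
```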

  13. Evaluating prediction uncertainty

    International Nuclear Information System (INIS)

    McKay, M.D.

    1995-03-01

    The probability distribution of a model prediction is presented as a proper basis for evaluating the uncertainty in a model prediction that arises from uncertainty in input values. Determination of important model inputs and subsets of inputs is made through comparison of the prediction distribution with conditional prediction probability distributions. Replicated Latin hypercube sampling and variance ratios are used in estimation of the distributions and in construction of importance indicators. The assumption of a linear relation between model output and inputs is not necessary for the indicators to be effective. A sequential methodology which includes an independent validation step is applied in two analysis applications to select subsets of input variables which are the dominant causes of uncertainty in the model predictions. Comparison with results from methods which assume linearity shows how those methods may fail. Finally, suggestions for treating structural uncertainty for submodels are presented
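
    A minimal sketch of the core idea above, comparing the prediction distribution with conditional distributions to rank input importance. The test model, Latin hypercube sample size, and binned variance-ratio estimator are illustrative assumptions; the report's replicated sampling and independent validation step are not reproduced:

```python
import numpy as np
from scipy.stats import qmc

def model(x):
    # Hypothetical test model with three inputs on [0, 1].
    return np.sin(np.pi * x[:, 0]) + 0.5 * x[:, 1] ** 2 + 0.1 * x[:, 2]

sampler = qmc.LatinHypercube(d=3, seed=1)   # Latin hypercube sample of the inputs
x = sampler.random(n=5000)
y = model(x)

def variance_ratio(xi, y, bins=20):
    """Crude estimate of Var(E[Y | X_i]) / Var(Y) obtained by binning one input."""
    edges = np.quantile(xi, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.digitize(xi, edges[1:-1]), 0, bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(bins)])
    return cond_means.var() / y.var()

for i in range(3):
    print(f"input {i}: variance ratio ~ {variance_ratio(x[:, i], y):.2f}")
```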

  14. Quantifying Anthropogenic Dust Emissions

    Science.gov (United States)

    Webb, Nicholas P.; Pierre, Caroline

    2018-02-01

    Anthropogenic land use and land cover change, including local environmental disturbances, moderate rates of wind-driven soil erosion and dust emission. These human-dust cycle interactions impact ecosystems and agricultural production, air quality, human health, biogeochemical cycles, and climate. While the impacts of land use activities and land management on aeolian processes can be profound, the interactions are often complex and assessments of anthropogenic dust loads at all scales remain highly uncertain. Here, we critically review the drivers of anthropogenic dust emission and current evaluation approaches. We then identify and describe opportunities to: (1) develop new conceptual frameworks and interdisciplinary approaches that draw on ecological state-and-transition models to improve the accuracy and relevance of assessments of anthropogenic dust emissions; (2) improve model fidelity and capacity for change detection to quantify anthropogenic impacts on aeolian processes; and (3) enhance field research and monitoring networks to support dust model applications to evaluate the impacts of disturbance processes on local to global-scale wind erosion and dust emissions.

  15. Quantifying loopy network architectures.

    Directory of Open Access Journals (Sweden)

    Eleni Katifori

    Full Text Available Biology presents many examples of planar distribution and structural networks having dense sets of closed loops. An archetype of this form of network organization is the vasculature of dicotyledonous leaves, which showcases a hierarchically-nested architecture containing closed loops at many different levels. Although a number of approaches have been proposed to measure aspects of the structure of such networks, a robust metric to quantify their hierarchical organization is still lacking. We present an algorithmic framework, the hierarchical loop decomposition, that allows mapping loopy networks to binary trees, preserving in the connectivity of the trees the architecture of the original graph. We apply this framework to investigate computer generated graphs, such as artificial models and optimal distribution networks, as well as natural graphs extracted from digitized images of dicotyledonous leaves and vasculature of rat cerebral neocortex. We calculate various metrics based on the asymmetry, the cumulative size distribution and the Strahler bifurcation ratios of the corresponding trees and discuss the relationship of these quantities to the architectural organization of the original graphs. This algorithmic framework decouples the geometric information (exact location of edges and nodes) from the metric topology (connectivity and edge weight), and it ultimately allows us to perform a quantitative statistical comparison between predictions of theoretical models and naturally occurring loopy graphs.

  16. Uncertainties and climatic change

    International Nuclear Information System (INIS)

    De Gier, A.M.; Opschoor, J.B.; Van de Donk, W.B.H.J.; Hooimeijer, P.; Jepma, J.; Lelieveld, J.; Oerlemans, J.; Petersen, A.

    2008-01-01

    Which processes in the climate system are misunderstood? How are scientists dealing with uncertainty about climate change? What will be done with the conclusions of the recently published synthesis report of the IPCC? These and other questions were answered during the meeting 'Uncertainties and climate change' that was held on Monday 26 November 2007 at the KNAW in Amsterdam. This report is a compilation of all the presentations and provides some conclusions resulting from the discussions during this meeting.

  17. Mechanics and uncertainty

    CERN Document Server

    Lemaire, Maurice

    2014-01-01

    Science is a quest for certainty, but lack of certainty is the driving force behind all of its endeavors. This book, specifically, examines the uncertainty of technological and industrial science. Uncertainty and Mechanics studies the concepts of mechanical design in an uncertain setting and explains engineering techniques for inventing cost-effective products. Though it references practical applications, this is a book about ideas and potential advances in mechanical science.

  18. Uncertainty: lotteries and risk

    OpenAIRE

    Ávalos, Eloy

    2011-01-01

    In this paper we develop the theory of uncertainty in a context where the risks assumed by the individual are measurable and manageable. We primarily use the definition of lottery to formulate the axioms of the individual's preferences, and its representation through the von Neumann-Morgenstern utility function. We study the expected utility theorem and its properties, the paradoxes of choice under uncertainty, and finally the measures of risk aversion with monetary lotteries.

  19. Uncertainty calculations made easier

    International Nuclear Information System (INIS)

    Hogenbirk, A.

    1994-07-01

    The results are presented of a neutron cross section sensitivity/uncertainty analysis performed in a complicated 2D model of the NET shielding blanket design inside the ITER torus design, surrounded by the cryostat/biological shield as planned for ITER. The calculations were performed with a code system developed at ECN Petten, with which sensitivity/uncertainty calculations become relatively simple. In order to check the deterministic neutron transport calculations (performed with DORT), calculations were also performed with the Monte Carlo code MCNP. Care was taken to model the 2.0 cm wide gaps between two blanket segments, as the neutron flux behind the vacuum vessel is largely determined by neutrons streaming through these gaps. The resulting neutron flux spectra are in excellent agreement up to the end of the cryostat. It is noted that at this position the attenuation of the neutron flux is about 11 orders of magnitude. The uncertainty in the energy integrated flux at the beginning of the vacuum vessel and at the beginning of the cryostat was determined in the calculations. The uncertainty appears to be strongly dependent on the exact geometry: if the gaps are filled with stainless steel, the neutron spectrum changes strongly, which results in an uncertainty of 70% in the energy integrated flux at the beginning of the cryostat in the no-gap-geometry, compared to an uncertainty of only 5% in the gap-geometry. Therefore, it is essential to take into account the exact geometry in sensitivity/uncertainty calculations. Furthermore, this study shows that an improvement of the covariance data is urgently needed in order to obtain reliable estimates of the uncertainties in response parameters in neutron transport calculations. (orig./GL)

  20. Kinetics of aggregation with choice.

    Science.gov (United States)

    Ben-Naim, E; Krapivsky, P L

    2016-12-01

    We generalize the ordinary aggregation process to allow for choice. In ordinary aggregation, two random clusters merge and form a larger aggregate. In our implementation of choice, a target cluster and two candidate clusters are randomly selected and the target cluster merges with the larger of the two candidate clusters. We study the long-time asymptotic behavior and find that as in ordinary aggregation, the size density adheres to the standard scaling form. However, aggregation with choice exhibits a number of different features. First, the density of the smallest clusters exhibits anomalous scaling. Second, both the small-size and the large-size tails of the density are overpopulated, at the expense of the density of moderate-size clusters. We also study the complementary case where the smaller candidate cluster participates in the aggregation process and find an abundance of moderate clusters at the expense of small and large clusters. Additionally, we investigate aggregation processes with choice among multiple candidate clusters and a symmetric implementation where the choice is between two pairs of clusters.
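
    A minimal Monte Carlo sketch of the merging rule described above (a target cluster merges with the larger of two randomly chosen candidates); the cluster count and number of steps are arbitrary, and no attempt is made to reproduce the paper's asymptotic scaling analysis:

```python
import random

def aggregate_with_choice(n_monomers=10000, steps=9000, seed=0):
    """Repeatedly merge a random target cluster with the larger of two
    randomly selected candidate clusters."""
    rng = random.Random(seed)
    clusters = [1] * n_monomers
    for _ in range(steps):
        if len(clusters) < 3:
            break
        target, cand1, cand2 = rng.sample(range(len(clusters)), 3)
        chosen = cand1 if clusters[cand1] >= clusters[cand2] else cand2
        clusters[target] += clusters[chosen]
        clusters.pop(chosen)
    return clusters

sizes = aggregate_with_choice()
print("clusters remaining:", len(sizes), "largest cluster:", max(sizes))
```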

  1. Economic impacts of climate change: Methods of estimating impacts at an aggregate level using Hordaland as an illustration

    International Nuclear Information System (INIS)

    Aaheim, Asbjoern

    2003-01-01

    This report discusses methods for calculating economic impacts of climate change, and uses Hordaland county in Norway as an illustrative example. The calculations are based on estimated climate changes from the RegClim project. This study draws from knowledge of the relationship between economic activity and climate at a disaggregate level and calculates changes in production of and demand for goods and services within aggregate sectors, which are specified in the county budget for Hordaland. Total impacts for the county thus are expressed through known values from the national budget, such as the county's 'national product', total consumption, and investments. The estimates of impacts of climate changes at a disaggregate level in Hordaland are quantified only to a small degree. The calculations made in this report can thus only be considered appropriate for illustrating methods and interpretations. In terms of relative economic significance for the county, however, it is likely that the hydropower sector will be the most affected. Increased precipitation will result in greater production potential, but profitability will largely depend on projected energy prices and investment costs associated with expansion. Agriculture and forestry will increase their production potential, but they are relatively small sectors in the county. Compared with the uncertainty about how climate change will affect production, however, the uncertainty about changes in demand is far greater. The demand for personal transportation and construction in particular can have significant consequences for the county's economy. (author)

  2. Hail formation triggers rapid ash aggregation in volcanic plumes.

    Science.gov (United States)

    Van Eaton, Alexa R; Mastin, Larry G; Herzog, Michael; Schwaiger, Hans F; Schneider, David J; Wallace, Kristi L; Clarke, Amanda B

    2015-08-03

    During explosive eruptions, airborne particles collide and stick together, accelerating the fallout of volcanic ash and climate-forcing aerosols. This aggregation process remains a major source of uncertainty both in ash dispersal forecasting and interpretation of eruptions from the geological record. Here we illuminate the mechanisms and timescales of particle aggregation from a well-characterized 'wet' eruption. The 2009 eruption of Redoubt Volcano, Alaska, incorporated water from the surface (in this case, a glacier), which is a common occurrence during explosive volcanism worldwide. Observations from C-band weather radar, fall deposits and numerical modelling demonstrate that hail-forming processes in the eruption plume triggered aggregation of ∼95% of the fine ash and stripped much of the erupted mass out of the atmosphere within 30 min. Based on these findings, we propose a mechanism of hail-like ash aggregation that contributes to the anomalously rapid fallout of fine ash and occurrence of concentrically layered aggregates in volcanic deposits.

  3. Uncertainty quantification using evidence theory in multidisciplinary design optimization

    International Nuclear Information System (INIS)

    Agarwal, Harish; Renaud, John E.; Preston, Evan L.; Padmanabhan, Dhanesh

    2004-01-01

    Advances in computational performance have led to the development of large-scale simulation tools for design. Systems generated using such simulation tools can fail in service if the uncertainty of the simulation tool's performance predictions is not accounted for. In this research an investigation of how uncertainty can be quantified in multidisciplinary systems analysis subject to epistemic uncertainty associated with the disciplinary design tools and input parameters is undertaken. Evidence theory is used to quantify uncertainty in terms of the uncertain measures of belief and plausibility. To illustrate the methodology, multidisciplinary analysis problems are introduced as an extension to the epistemic uncertainty challenge problems identified by Sandia National Laboratories. After uncertainty has been characterized mathematically the designer seeks the optimum design under uncertainty. The measures of uncertainty provided by evidence theory are discontinuous functions. Such non-smooth functions cannot be used in traditional gradient-based optimizers because the sensitivities of the uncertain measures are not properly defined. In this research surrogate models are used to represent the uncertain measures as continuous functions. A sequential approximate optimization approach is used to drive the optimization process. The methodology is illustrated in application to multidisciplinary example problems
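
    For readers unfamiliar with the belief and plausibility measures mentioned above, a minimal sketch of how they are computed from a basic probability assignment in evidence (Dempster-Shafer) theory; the frame and masses are hypothetical, and the paper's surrogate-model optimization is not shown:

```python
# Basic probability assignment over the frame {a, b, c}; hypothetical masses.
bpa = {
    frozenset("a"): 0.3,
    frozenset("ab"): 0.4,
    frozenset("abc"): 0.3,
}

def belief(event, bpa):
    # Total mass of focal elements entirely contained in the event.
    return sum(m for s, m in bpa.items() if s <= event)

def plausibility(event, bpa):
    # Total mass of focal elements that intersect the event.
    return sum(m for s, m in bpa.items() if s & event)

event = frozenset("ab")
print(belief(event, bpa), plausibility(event, bpa))  # 0.7 1.0
```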

  4. Uncertainty Quantification in High Throughput Screening ...

    Science.gov (United States)

    Using uncertainty quantification, we aim to improve the quality of modeling data from high throughput screening assays for use in risk assessment. ToxCast is a large-scale screening program that analyzes thousands of chemicals using over 800 assays representing hundreds of biochemical and cellular processes, including endocrine disruption, cytotoxicity, and zebrafish development. Over 2.6 million concentration response curves are fit to models to extract parameters related to potency and efficacy. Models built on ToxCast results are being used to rank and prioritize the toxicological risk of tested chemicals and to predict the toxicity of tens of thousands of chemicals not yet tested in vivo. However, the data size also presents challenges. When fitting the data, the choice of models, model selection strategy, and hit call criteria must reflect the need for computational efficiency and robustness, requiring hard and somewhat arbitrary cutoffs. When coupled with unavoidable noise in the experimental concentration response data, these hard cutoffs cause uncertainty in model parameters and the hit call itself. The uncertainty will then propagate through all of the models built on the data. Left unquantified, this uncertainty makes it difficult to fully interpret the data for risk assessment. We used bootstrap resampling methods to quantify the uncertainty in fitting models to the concentration response data. Bootstrap resampling determines confidence intervals for
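
    A minimal sketch of bootstrap resampling applied to a concentration-response fit; the Hill-type model, synthetic data, and resampling scheme are illustrative assumptions and do not reproduce the ToxCast model suite or hit-call logic:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

def hill(conc, top, ac50):
    # Simple saturating (Hill-type, unit-slope) concentration-response model.
    return top * conc / (ac50 + conc)

# Hypothetical concentration-response data with noise.
conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])
resp = hill(conc, top=80.0, ac50=5.0) + rng.normal(0, 6, size=conc.size)

# Resample (concentration, response) pairs with replacement, refit,
# and collect the potency (AC50) estimates.
ac50_samples = []
for _ in range(500):
    idx = rng.integers(0, conc.size, conc.size)
    try:
        popt, _ = curve_fit(hill, conc[idx], resp[idx], p0=[80.0, 5.0], maxfev=2000)
        ac50_samples.append(popt[1])
    except RuntimeError:
        continue  # skip resamples where the fit does not converge

lo, hi = np.percentile(ac50_samples, [2.5, 97.5])
print(f"AC50 95% bootstrap interval: [{lo:.1f}, {hi:.1f}]")
```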

  5. Information theoretic quantification of diagnostic uncertainty.

    Science.gov (United States)

    Westover, M Brandon; Eiseman, Nathaniel A; Cash, Sydney S; Bianchi, Matt T

    2012-01-01

    Diagnostic test interpretation remains a challenge in clinical practice. Most physicians receive training in the use of Bayes' rule, which specifies how the sensitivity and specificity of a test for a given disease combine with the pre-test probability to quantify the change in disease probability incurred by a new test result. However, multiple studies demonstrate physicians' deficiencies in probabilistic reasoning, especially with unexpected test results. Information theory, a branch of probability theory dealing explicitly with the quantification of uncertainty, has been proposed as an alternative framework for diagnostic test interpretation, but is even less familiar to physicians. We have previously addressed one key challenge in the practical application of Bayes theorem: the handling of uncertainty in the critical first step of estimating the pre-test probability of disease. This essay aims to present the essential concepts of information theory to physicians in an accessible manner, and to extend previous work regarding uncertainty in pre-test probability estimation by placing this type of uncertainty within a principled information theoretic framework. We address several obstacles hindering physicians' application of information theoretic concepts to diagnostic test interpretation. These include issues of terminology (mathematical meanings of certain information theoretic terms differ from clinical or common parlance) as well as the underlying mathematical assumptions. Finally, we illustrate how, in information theoretic terms, one can understand the effect on diagnostic uncertainty of considering ranges instead of simple point estimates of pre-test probability.
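
    A minimal sketch of the two ingredients described above: Bayes' rule for the post-test probability and the Shannon entropy of the disease probability as a measure of diagnostic uncertainty. The pre-test probability, sensitivity, and specificity are hypothetical:

```python
import math

def post_test_probability(pretest, sensitivity, specificity, positive=True):
    """Bayes' rule for a dichotomous test result."""
    if positive:
        num = sensitivity * pretest
        den = num + (1 - specificity) * (1 - pretest)
    else:
        num = (1 - sensitivity) * pretest
        den = num + specificity * (1 - pretest)
    return num / den

def binary_entropy(p):
    """Diagnostic uncertainty (in bits) associated with disease probability p."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

pre = 0.30  # hypothetical pre-test probability
post = post_test_probability(pre, sensitivity=0.9, specificity=0.8, positive=True)
print(f"post-test probability: {post:.2f}")
print(f"uncertainty before: {binary_entropy(pre):.2f} bits, after: {binary_entropy(post):.2f} bits")
```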

  6. Nickel aggregates produced by radiolysis

    International Nuclear Information System (INIS)

    Marignier, J.L.; Belloni, J.

    1988-01-01

    Nickel aggregates with subcolloidal size and stable in water have been synthesized by inhibiting the corrosion by the medium. The protective effect of the surfactant is discussed in relation with the characteristics of the various types of polyvinyl alcohol studied. The reactivity of the aggregates towards oxidizing compounds (nitro blue tetrazolium, methylene blue, silver ions, oxygen, methylviologen) enables an estimation of the redox potential of nickel aggregates (E = -0.4 ± 0.05 V). It has been applied to quantitative analysis of the particles in the presence of nickel ions. 55 refs

  7. Aggregating and Disaggregating Flexibility Objects

    DEFF Research Database (Denmark)

    Siksnys, Laurynas; Valsomatzis, Emmanouil; Hose, Katja

    2015-01-01

    In many scientific and commercial domains we encounter flexibility objects, i.e., objects with explicit flexibilities in a time and an amount dimension (e.g., energy or product amount). Applications of flexibility objects require novel and efficient techniques capable of handling large amounts...... and aiming at energy balancing during aggregation. In more detail, this paper considers the complete life cycle of flex-objects: aggregation, disaggregation, associated requirements, efficient incremental computation, and balance aggregation techniques. Extensive experiments based on real-world data from...

  8. Justification for recommended uncertainties

    International Nuclear Information System (INIS)

    Pronyaev, V.G.; Badikov, S.A.; Carlson, A.D.

    2007-01-01

    The uncertainties obtained in an earlier standards evaluation were considered to be unrealistically low by experts of the US Cross Section Evaluation Working Group (CSEWG). Therefore, the CSEWG Standards Subcommittee replaced the covariance matrices of evaluated uncertainties by expanded percentage errors that were assigned to the data over wide energy groups. There are a number of reasons that might lead to low uncertainties of the evaluated data: Underestimation of the correlations existing between the results of different measurements; The presence of unrecognized systematic uncertainties in the experimental data can lead to biases in the evaluated data as well as to underestimations of the resulting uncertainties; Uncertainties for correlated data cannot be characterized by percentage uncertainties or variances alone. Covariances between the evaluated value at 0.2 MeV and other points obtained in model (RAC R matrix and PADE2 analytical expansion) and non-model (GMA) fits of the 6Li(n,t) TEST1 data and the correlation coefficients are presented, and covariances between the evaluated value at 0.045 MeV and other points (along the line or column of the matrix) as obtained in EDA and RAC R matrix fits of the data available for reactions that pass through the formation of the 7Li system are discussed. The GMA fit with the GMA database is shown for comparison. The following diagrams are discussed: Percentage uncertainties of the evaluated cross section for the 6Li(n,t) reaction and for the 235U(n,f) reaction; estimation given by CSEWG experts; GMA result with full GMA database, including experimental data for the 6Li(n,t), 6Li(n,n) and 6Li(n,total) reactions; uncertainties in the GMA combined fit for the standards; EDA and RAC R matrix results, respectively. Uncertainties of absolute and 252Cf fission spectrum averaged cross section measurements, and deviations between measured and evaluated values for 235U(n,f) cross-sections in the neutron energy range 1

  9. Uncertainties in life cycle assessment of waste management systems

    DEFF Research Database (Denmark)

    Clavreul, Julie; Christensen, Thomas Højlund

    2011-01-01

    Life cycle assessment has been used to assess environmental performances of waste management systems in many studies. The uncertainties inherent to its results are often pointed out but not always quantified, which should be the case to ensure a good decision-making process. This paper proposes...... a method to assess all parameter uncertainties and quantify the overall uncertainty of the assessment. The method is exemplified in a case study, where the goal is to determine if anaerobic digestion of organic waste is more beneficial than incineration in Denmark, considering only the impact on global...... warming. The sensitivity analysis pointed out ten parameters with a particularly strong influence on the result of the study. In the uncertainty analysis, the distributions of these ten parameters were used in a Monte Carlo analysis, which concluded that incineration appeared more favourable than anaerobic...
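
    A minimal sketch of the Monte Carlo propagation step: parameter distributions are sampled, the impact of each alternative is computed, and the probability that one option outperforms the other is read off the resulting distribution. The distributions and numbers below are hypothetical placeholders, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Hypothetical net global-warming impacts (kg CO2-eq per tonne of organic waste);
# the study instead samples its ten influential parameters and runs the LCA model.
digestion = rng.normal(loc=-50, scale=40, size=n)
incineration = rng.normal(loc=-30, scale=25, size=n)

difference = digestion - incineration            # negative = digestion has lower impact
prob_digestion_better = np.mean(difference < 0)
print(f"P(anaerobic digestion more favourable) = {prob_digestion_better:.2f}")
```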

  10. Optimal planning and operation of aggregated distributed energy resources with market participation

    International Nuclear Information System (INIS)

    Calvillo, C.F.; Sánchez-Miralles, A.; Villar, J.; Martín, F.

    2016-01-01

    Highlights: • Price-maker optimization model for planning and operation of aggregated DER. • 3 Case studies are proposed, considering different electricity pricing scenarios. • Analysis of benefits and effect on electricity prices produced by DER aggregation. • Results showed considerable benefits even for relatively small aggregations. • Results suggest that the impact on prices should not be overlooked. - Abstract: This paper analyzes the optimal planning and operation of aggregated distributed energy resources (DER) with participation in the electricity market. Aggregators manage their portfolio of resources in order to obtain the maximum benefit from the grid, while participating in the day-ahead wholesale electricity market. The goal of this paper is to propose a model for aggregated DER systems planning, considering its participation in the electricity market and its impact on the market price. The results are the optimal planning and management of DER systems, and the appropriate energy transactions for the aggregator in the wholesale day-ahead market according to the size of its aggregated resources. A price-maker approach based on representing the market competitors with residual demand curves is followed, and the impact on the price is assessed to help in the decision of using price-maker or price-taker approaches depending on the size of the aggregated resources. A deterministic programming problem with two case studies (the average scenario and the most likely scenario from the stochastic ones), and a stochastic one with a case study to account for the market uncertainty are described. For both models, market scenarios have been built from historical data of the Spanish system. The results suggest that when the aggregated resources have enough size to follow a price-maker approach and the uncertainty of the markets is considered in the planning process, the DER systems can achieve up to 50% extra economic benefits, depending on the market

  11. Probabilistic accident consequence uncertainty analysis: Food chain uncertainty assessment. Volume 1: Main report

    Energy Technology Data Exchange (ETDEWEB)

    Brown, J. [National Radiological Protection Board (United Kingdom); Goossens, L.H.J.; Kraan, B.C.P. [Delft Univ. of Technology (Netherlands)] [and others

    1997-06-01

    This volume is the first of a two-volume document that summarizes a joint project conducted by the US Nuclear Regulatory Commission and the European Commission to assess uncertainties in the MACCS and COSYMA probabilistic accident consequence codes. These codes were developed primarily for estimating the risks presented by nuclear reactors based on postulated frequencies and magnitudes of potential accidents. This document reports on an ongoing project to assess uncertainty in the MACCS and COSYMA calculations for the offsite consequences of radionuclide releases by hypothetical nuclear power plant accidents. A panel of sixteen experts was formed to compile credible and traceable uncertainty distributions for food chain variables that affect calculations of offsite consequences. The expert judgment elicitation procedure and its outcomes are described in these volumes. Other panels were formed to consider uncertainty in other aspects of the codes. Their results are described in companion reports. Volume 1 contains background information and a complete description of the joint consequence uncertainty study. Volume 2 contains appendices that include (1) a summary of the MACCS and COSYMA consequence codes, (2) the elicitation questionnaires and case structures for both panels, (3) the rationales and results for the panels on soil and plant transfer and animal transfer, (4) short biographies of the experts, and (5) the aggregated results of their responses.

  12. Probabilistic accident consequence uncertainty analysis: Food chain uncertainty assessment. Volume 1: Main report

    International Nuclear Information System (INIS)

    Brown, J.; Goossens, L.H.J.; Kraan, B.C.P.

    1997-06-01

    This volume is the first of a two-volume document that summarizes a joint project conducted by the US Nuclear Regulatory Commission and the European Commission to assess uncertainties in the MACCS and COSYMA probabilistic accident consequence codes. These codes were developed primarily for estimating the risks presented by nuclear reactors based on postulated frequencies and magnitudes of potential accidents. This document reports on an ongoing project to assess uncertainty in the MACCS and COSYMA calculations for the offsite consequences of radionuclide releases by hypothetical nuclear power plant accidents. A panel of sixteen experts was formed to compile credible and traceable uncertainty distributions for food chain variables that affect calculations of offsite consequences. The expert judgment elicitation procedure and its outcomes are described in these volumes. Other panels were formed to consider uncertainty in other aspects of the codes. Their results are described in companion reports. Volume 1 contains background information and a complete description of the joint consequence uncertainty study. Volume 2 contains appendices that include (1) a summary of the MACCS and COSYMA consequence codes, (2) the elicitation questionnaires and case structures for both panels, (3) the rationales and results for the panels on soil and plant transfer and animal transfer, (4) short biographies of the experts, and (5) the aggregated results of their responses

  13. Synchronization as Aggregation: Cluster Kinetics of Pulse-Coupled Oscillators.

    Science.gov (United States)

    O'Keeffe, Kevin P; Krapivsky, P L; Strogatz, Steven H

    2015-08-07

    We consider models of identical pulse-coupled oscillators with global interactions. Previous work showed that under certain conditions such systems always end up in sync, but did not quantify how small clusters of synchronized oscillators progressively coalesce into larger ones. Using tools from the study of aggregation phenomena, we obtain exact results for the time-dependent distribution of cluster sizes as the system evolves from disorder to synchrony.

  14. Quantifying multi-dimensional attributes of human activities at various geographic scales based on smartphone tracking.

    Science.gov (United States)

    Zhou, Xiaolu; Li, Dongying

    2018-05-09

    Advances in location-aware technologies and in information and communication technology over the past decades have furthered our knowledge of the interaction between human activities and the built environment. An increasing number of studies have collected data regarding individual activities to better understand how the environment shapes human behavior. Despite this growing interest, some challenges exist in collecting and processing individuals' activity data, e.g., capturing people's precise environmental contexts and analyzing data at multiple spatial scales. In this study, we propose and implement an innovative system that integrates smartphone-based step tracking with an app and sequential tile scan techniques to collect and process activity data. We apply the OpenStreetMap tile system to aggregate positioning points at various scales. We also propose duration, step and probability surfaces to quantify the multi-dimensional attributes of activities. Results show that, by running the app in the background, smartphones can measure multi-dimensional attributes of human activities, including space, duration, step, and location uncertainty at various spatial scales. By coordinating the Global Positioning System (GPS) sensor with the accelerometer, the app saves battery power that the GPS sensor would otherwise drain quickly. Based on a test dataset, we were able to detect the recreational center and sports center as the space where the user was most active, among other places visited. The methods provide techniques to address key issues in analyzing human activity data. The system can support future studies on behavioral and health consequences related to individuals' environmental exposure.
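
    A minimal sketch of aggregating positioning points with the standard OpenStreetMap (slippy map) tile indexing; the GPS fixes below are hypothetical, and the study's duration, step and probability surfaces are not reproduced:

```python
import math
from collections import Counter

def latlon_to_tile(lat_deg, lon_deg, zoom):
    """Standard OpenStreetMap tile indices (x, y) for a point at a given zoom level."""
    lat_rad = math.radians(lat_deg)
    n = 2 ** zoom
    x = int((lon_deg + 180.0) / 360.0 * n)
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

# Hypothetical GPS fixes aggregated to tile counts at zoom level 16.
fixes = [(40.7128, -74.0060), (40.7130, -74.0055), (40.7580, -73.9855)]
counts = Counter(latlon_to_tile(lat, lon, zoom=16) for lat, lon in fixes)
print(counts)
```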

  15. Modelling and propagation of uncertainties in the German Risk Study

    International Nuclear Information System (INIS)

    Hofer, E.; Krzykacz, B.

    1982-01-01

    Risk assessments are generally subject to uncertainty considerations because of the various estimates that are involved. The paper points out those estimates in the so-called phase A of the German Risk Study, for which uncertainties were quantified. It explains the probabilistic models applied in the assessment and their impact on the findings of the study. Finally, the resulting subjective confidence intervals of the study results are presented and their sensitivity to these probabilistic models is investigated.

  16. Uncertainty analysis of thermal quantities measurement in a centrifugal compressor

    Science.gov (United States)

    Hurda, Lukáš; Matas, Richard

    2017-09-01

    Compressor performance characteristics evaluation process based on the measurement of pressure, temperature and other quantities is examined to find uncertainties for directly measured and derived quantities. CFD is used as a tool to quantify the influences of different sources of uncertainty of measurements for single- and multi-thermocouple total temperature probes. The heat conduction through the body of the thermocouple probe and the heat-up of the air in the intake piping are the main phenomena of interest.

  17. Aggregated Computational Toxicology Resource (ACTOR)

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Aggregated Computational Toxicology Resource (ACTOR) is a database on environmental chemicals that is searchable by chemical name and other identifiers, and by...

  18. Laser characterization of fine aggregate.

    Science.gov (United States)

    2012-12-01

    This report describes the results of a research effort to establish the feasibility of using a laser monitoring system to provide real-time data to characterize aggregate properties in a laboratory or field environment. This was accomplished by using...

  19. Investigation of laboratory test procedures for assessing the structural capacity of geogrid-reinforced aggregate base materials.

    Science.gov (United States)

    2015-04-01

    The objective of this research was to identify a laboratory test method that can be used to quantify improvements in structural capacity of aggregate base materials reinforced with geogrid. For this research, National Cooperative Highway Research Pro...

  20. Assessing uncertainty in SRTM elevations for global flood modelling

    Science.gov (United States)

    Hawker, L. P.; Rougier, J.; Neal, J. C.; Bates, P. D.

    2017-12-01

    The SRTM DEM is widely used as the topography input to flood models in data-sparse locations. Understanding spatial error in the SRTM product is crucial in constraining uncertainty about elevations and assessing the impact of these errors upon flood prediction. Assessment of SRTM error was carried out by Rodriguez et al. (2006), but this did not explicitly quantify the spatial structure of vertical errors in the DEM, nor did it distinguish between errors over different types of landscape. As a result, there is a lack of information about the spatial structure of vertical errors of the SRTM in the landscape that matters most to flood models - the floodplain. Therefore, this study attempts this task by comparing SRTM, an error-corrected SRTM product (the MERIT DEM of Yamazaki et al., 2017) and near-truth LIDAR elevations for 3 deltaic floodplains (Mississippi, Po, Wax Lake) and a large lowland region (the Fens, UK). Using the error covariance function, calculated by comparing SRTM elevations to the near-truth LIDAR, perturbations of the 90m SRTM DEM were generated, producing a catalogue of plausible DEMs. This allows modellers to simulate a suite of plausible DEMs at any aggregated block size above native SRTM resolution. Finally, the generated DEMs were input into a hydrodynamic model of the Mekong Delta, built using the LISFLOOD-FP hydrodynamic model, to assess how DEM error affects the hydrodynamics and inundation extent across the domain. The end product of this is an inundation map with the probability of each pixel being flooded based on the catalogue of DEMs. In a world of increasing computer power, but a lack of detailed datasets, this powerful approach can be used throughout natural hazard modelling to understand how errors in the SRTM DEM can impact the hazard assessment.
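
    A minimal sketch of the perturbation idea: draw spatially correlated vertical errors from a covariance model and add them to the DEM, repeating the draw to build a catalogue of plausible DEMs. The flat toy DEM, exponential covariance form, and parameter values are assumptions for illustration, not the fitted error structure from the study:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy 20 x 20 DEM on a 90 m grid (a real application would use an SRTM tile).
ny, nx, cell = 20, 20, 90.0
dem = np.zeros((ny, nx))

# Assumed exponential covariance of the vertical error: 3 m standard deviation
# and a 500 m correlation length (plus a tiny nugget for numerical stability).
sigma, corr_len = 3.0, 500.0
yy, xx = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
coords = np.column_stack([yy.ravel(), xx.ravel()]) * cell
dists = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
cov = sigma**2 * np.exp(-dists / corr_len) + 1e-8 * np.eye(ny * nx)

# One plausible error surface; repeated draws give a catalogue of plausible DEMs.
error = rng.multivariate_normal(np.zeros(ny * nx), cov).reshape(ny, nx)
perturbed_dem = dem + error
print(f"error field std: {perturbed_dem.std():.2f} m")
```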

  1. Aggregate Supply and Potential Output

    OpenAIRE

    Razin, Assaf

    2004-01-01

    The New-Keynesian aggregate supply derives from micro-foundations an inflation-dynamics model very much like the tradition in the monetary literature. Inflation is primarily affected by: (i) economic slack; (ii) expectations; (iii) supply shocks; and (iv) inflation persistence. This paper extends the New Keynesian aggregate supply relationship to include also fluctuations in potential output, as an additional determinant of the relationship. Implications for monetary rules and to the estimati...

  2. CHARACTERIZATION OF BIOGENIC, INTERMEDIATE AND PHYSICOGENIC SOIL AGGREGATES OF AREAS IN THE BRAZILIAN ATLANTIC FOREST

    Directory of Open Access Journals (Sweden)

    JÚLIO CÉSAR FEITOSA FERNANDES

    2017-01-01

    Full Text Available Aggregate formation and stability are related to soil quality, contributing significantly to the carbon storage and nutrient maintenance capacities of the soil. Soil aggregates are formed by two different processes: physicogenic, related to moistening and drying cycles and input of organic matter; and biogenic, related to the action of macrofauna organisms and roots. The objective of this work was to classify aggregates according to their formation process, quantify and compare organic carbon contents in humic substances and assess the stability of aggregates formed by different processes, in areas with different coverage in the Mid Paraiba Valley, Pinheiral, State of Rio de Janeiro, Brazil. Aggregated soil samples were collected at a depth of 0-10 cm, in a Cambisol (Cambissolo Háplico Tb Distrófico) under four plant covers: secondary forest in advanced (SFAS), medium (SFMS) and initial (SFIS) successional stages and managed mixed pasture (MMP). Aggregates were classified and identified into three morphological classes (physicogenic, biogenic and intermediate). The variables evaluated were mean weight diameter (MWD) and geometric mean diameter (GMD) of aggregates, chemical fractions of organic matter, total organic carbon (TOC) and humic substances: humin (C-HUM), humic acid (C-FAH) and fulvic acid (C-FAF). Biogenic aggregates were found in smaller quantities and showed higher TOC, C-HUM and C-FAH, compared to intermediate and physicogenic aggregates. Thus, biogenic aggregates have potential to be used as soil quality indicators for structured environments, which are able to maintain their intrinsic formation processes.

  3. Integrating uncertainties for climate change mitigation

    Science.gov (United States)

    Rogelj, Joeri; McCollum, David; Reisinger, Andy; Meinshausen, Malte; Riahi, Keywan

    2013-04-01

    The target of keeping the global average temperature increase below 2°C emerged in the international climate debate more than a decade ago. In response, the scientific community has tried to estimate the costs of reaching such a target through modelling and scenario analysis. Producing such estimates remains a challenge, particularly because of relatively well-known, but ill-quantified uncertainties, and owing to limited integration of scientific knowledge across disciplines. The integrated assessment community, on one side, has extensively assessed the influence of technological and socio-economic uncertainties on low-carbon scenarios and associated costs. The climate modelling community, on the other side, has worked on achieving an increasingly better understanding of the geophysical response of the Earth system to emissions of greenhouse gases (GHG). This geophysical response remains a key uncertainty for the cost of mitigation scenarios but has only been integrated with assessments of other uncertainties in a rudimentary manner, i.e., for equilibrium conditions. To bridge this gap between the two research communities, we generate distributions of the costs associated with limiting transient global temperature increase to below specific temperature limits, taking into account uncertainties in multiple dimensions: geophysical, technological, social and political. In other words, uncertainties resulting from our incomplete knowledge about how the climate system precisely reacts to GHG emissions (geophysical uncertainties), about how society will develop (social uncertainties and choices), which technologies will be available (technological uncertainty and choices), when we choose to start acting globally on climate change (political choices), and how much money we are or are not willing to spend to achieve climate change mitigation. We find that political choices that delay mitigation have the largest effect on the cost-risk distribution, followed by

  4. Glycation precedes lens crystallin aggregation

    International Nuclear Information System (INIS)

    Swamy, M.S.; Perry, R.E.; Abraham, E.C.

    1987-01-01

    Non-enzymatic glycosylation (glycation) seems to have the potential to alter the structure of crystallins and make them susceptible to thiol oxidation leading to disulfide-linked high molecular weight (HMW) aggregate formation. The authors used streptozotocin diabetic rats during precataract and cataract stages and long-term cell-free glycation of bovine lens crystallins to study the relationship between glycation and lens crystallin aggregation. HMW aggregates and other protein components of the water-soluble (WS) and urea-soluble (US) fractions were separated by molecular sieve high performance liquid chromatography. Glycation was estimated by both [3H]NaBH4 reduction and phenylboronate agarose affinity chromatography. Levels of total glycated protein (GP) in the US fractions were about 2-fold higher than in the WS fractions and there was a linear increase in GP in both WS and US fractions. This increase was paralleled by a corresponding increase in HMW aggregates. Total GP extracted by the affinity method from the US fraction showed a predominance of HMW aggregates and vice versa. Cell-free glycation studies with bovine crystallins confirmed the results of the animal studies. Increasing glycation caused a corresponding increase in protein insolubilization and the insoluble fraction thus formed also contained more glycated protein. It appears that lens protein glycation, HMW aggregate formation, and protein insolubilization are interrelated.

  5. Model for amorphous aggregation processes

    Science.gov (United States)

    Stranks, Samuel D.; Ecroyd, Heath; van Sluyter, Steven; Waters, Elizabeth J.; Carver, John A.; von Smekal, Lorenz

    2009-11-01

    The amorphous aggregation of proteins is associated with many phenomena, ranging from the formation of protein wine haze to the development of cataract in the eye lens and the precipitation of recombinant proteins during their expression and purification. While much literature exists describing models for linear protein aggregation, such as amyloid fibril formation, there are few reports of models which address amorphous aggregation. Here, we propose a model to describe the amorphous aggregation of proteins which is also more widely applicable to other situations where a similar process occurs, such as in the formation of colloids and nanoclusters. As first applications of the model, we have tested it against experimental turbidimetry data of three proteins relevant to the wine industry and biochemistry, namely, thaumatin, a thaumatin-like protein, and α-lactalbumin. The model is very robust and describes amorphous experimental data to a high degree of accuracy. Details about the aggregation process, such as shape parameters of the aggregates and rate constants, can also be extracted.

  6. Web-based access, aggregation, and visualization of future climate projections with emphasis on agricultural assessments

    Science.gov (United States)

    Villoria, Nelson B.; Elliott, Joshua; Müller, Christoph; Shin, Jaewoo; Zhao, Lan; Song, Carol

    2018-01-01

    Access to climate and spatial datasets by non-specialists is restricted by technical barriers involving hardware, software and data formats. We discuss an open-source online tool that facilitates downloading the climate data from the global circulation models used by the Inter-Sectoral Impacts Model Intercomparison Project. The tool also offers temporal and spatial aggregation capabilities for incorporating future climate scenarios in applications where spatial aggregation is important. We hope that streamlined access to these data facilitates analysis of climate related issues while considering the uncertainties derived from future climate projections and temporal aggregation choices.

  7. Dealing with exploration uncertainties

    International Nuclear Information System (INIS)

    Capen, E.

    1992-01-01

    Exploration for oil and gas should fulfill the most adventurous in their quest for excitement and surprise. This paper tries to cover that tall order. The authors will touch on the magnitude of the uncertainty (which is far greater than in most other businesses), the effects of not knowing target sizes very well, how to build uncertainty into analyses naturally, how to tie reserves and chance estimates to economics, and how to look at the portfolio effect of an exploration program. With no apologies, the authors will be using a different language for some readers - the language of uncertainty, which means probability and statistics. These tools allow one to combine largely subjective exploration information with the more analytical data from the engineering and economic side

  8. The Fallacy of Quantifying Risk

    Science.gov (United States)

    2012-09-01

    David E. Frick, Ph.D. Frick is a 35-year veteran of the Department of...a key to risk analysis was “choosing the right technique” of quantifying risk. The weakness in this argument stems not from the assertion that one...of information about the enemy), yet achieving great outcomes. Attempts at quantifying risk are not, in and of themselves, objectionable. Prudence...

  9. On Hesitant Fuzzy Reducible Weighted Bonferroni Mean and Its Generalized Form for Multicriteria Aggregation

    Directory of Open Access Journals (Sweden)

    Wei Zhou

    2014-01-01

    Full Text Available Due to their convenience and power in dealing with the vagueness and uncertainty of real situations, hesitant fuzzy sets have received more and more attention and have been a hot research topic recently. To differently process and effectively aggregate hesitant fuzzy information and capture their interrelationship, in this paper, we propose the hesitant fuzzy reducible weighted Bonferroni mean (HFRWBM) and present its four prominent characteristics, namely, reducibility, monotonicity, boundedness, and idempotency. Then, we further investigate its generalized form, that is, the generalized hesitant fuzzy reducible weighted Bonferroni mean (GHFRWBM). Based on the discussion of model parameters, some special cases of the HFRWBM and GHFRWBM are studied in detail. In addition, to deal with situations in which multiple criteria are interrelated in hesitant fuzzy information aggregation, a three-step aggregation approach has been proposed on the basis of the HFRWBM and GHFRWBM. In the end, we apply the proposed aggregation operators to multicriteria aggregation and give an example to illustrate our results.
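
    The hesitant fuzzy operators above build on the classical Bonferroni mean, which captures interrelationships between criteria by averaging pairwise products. A minimal sketch of that underlying operator only (the hesitant fuzzy and weighted extensions developed in the paper are not reproduced); the membership degrees are hypothetical:

```python
import itertools

def bonferroni_mean(values, p=1.0, q=1.0):
    """Classical Bonferroni mean BM^{p,q} over a list of values in [0, 1]."""
    n = len(values)
    pair_sum = sum(a**p * b**q for a, b in itertools.permutations(values, 2))
    return (pair_sum / (n * (n - 1))) ** (1.0 / (p + q))

# Hypothetical satisfaction degrees for three interrelated criteria.
print(f"{bonferroni_mean([0.6, 0.8, 0.5], p=1, q=1):.3f}")
```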

  10. Statistical Uncertainty Quantification of Physical Models during Reflood of LBLOCA

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Deog Yeon; Seul, Kwang Won; Woo, Sweng Woong [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)

    2015-05-15

    The use of best-estimate (BE) computer codes in safety analysis for loss-of-coolant accidents (LOCA) is the major trend in many countries to reduce the significant conservatism. A key feature of this BE evaluation requires the licensee to quantify the uncertainty of the calculations. It is therefore very important to determine the uncertainty distributions before conducting the uncertainty evaluation. Uncertainties include those of physical models and correlations, plant operational parameters, and so forth. The quantification process is often performed mainly by subjective expert judgment or obtained from reference documents of the computer code. In this respect, more mathematical methods are needed to reasonably determine the uncertainty ranges. The first uncertainty quantification is performed with various increments for two influential uncertainty parameters to get the calculated responses and their derivatives. Different data sets with two influential uncertainty parameters for the FEBA tests are chosen by applying stricter criteria for selecting responses and their derivatives, which may be considered as the user's effect in the CIRCÉ applications. Finally, three influential uncertainty parameters are considered to study the effect of the number of uncertainty parameters, given the limitations of the CIRCÉ method. With the determined uncertainty ranges, uncertainty evaluations for the FEBA tests are performed to check whether the experimental responses such as the cladding temperature or pressure drop are inside the limits of the calculated uncertainty bounds. A confirmation step will be performed to evaluate the quality of the information in the case of the different reflooding PERICLES experiments. The uncertainty ranges of the physical models in the MARS-KS thermal-hydraulic code during reflooding were quantified by the CIRCÉ method using the FEBA experiment tests, instead of expert judgment. Also, through the uncertainty evaluation for the FEBA and PERICLES tests, it was confirmed

  11. Uncertainty in artificial intelligence

    CERN Document Server

    Levitt, TS; Lemmer, JF; Shachter, RD

    1990-01-01

    Clearly illustrated in this volume is the current relationship between Uncertainty and AI. It has been said that research in AI revolves around five basic questions asked relative to some particular domain: What knowledge is required? How can this knowledge be acquired? How can it be represented in a system? How should this knowledge be manipulated in order to provide intelligent behavior? How can the behavior be explained? In this volume, all of these questions are addressed. From the perspective of the relationship of uncertainty to the basic questions of AI, the book divides naturally i

  12. Sensitivity and uncertainty analysis

    CERN Document Server

    Cacuci, Dan G; Navon, Ionel Michael

    2005-01-01

    As computer-assisted modeling and analysis of physical processes have continued to grow and diversify, sensitivity and uncertainty analyses have become indispensable scientific tools. Sensitivity and Uncertainty Analysis. Volume I: Theory focused on the mathematical underpinnings of two important methods for such analyses: the Adjoint Sensitivity Analysis Procedure and the Global Adjoint Sensitivity Analysis Procedure. This volume concentrates on the practical aspects of performing these analyses for large-scale systems. The applications addressed include two-phase flow problems, a radiative c

  13. Delayed neutron spectra and their uncertainties in fission product summation calculations

    Energy Technology Data Exchange (ETDEWEB)

    Miyazono, T.; Sagisaka, M.; Ohta, H.; Oyamatsu, K.; Tamaki, M. [Nagoya Univ. (Japan)

    1997-03-01

    Uncertainties in delayed neutron summation calculations are evaluated with ENDF/B-VI for 50 fissioning systems. As the first step, uncertainty calculations are performed for the aggregate delayed neutron activity with the same approximate method as proposed previously for the decay heat uncertainty analyses. Typical uncertainty values are about 6-14% for 238U(F) and about 13-23% for 243Am(F) at cooling times 0.1-100 s. These values are typically 2-3 times larger than those in decay heat at the same cooling times. For aggregate delayed neutron spectra, the uncertainties would be larger than those for the delayed neutron activity because much more information about the nuclear structure is still necessary. (author)

  14. Mapping Soil Transmitted Helminths and Schistosomiasis under Uncertainty: A Systematic Review and Critical Appraisal of Evidence.

    Directory of Open Access Journals (Sweden)

    Andrea L Araujo Navas

    2016-12-01

    Full Text Available Spatial modelling of STH and schistosomiasis epidemiology is now commonplace. Spatial epidemiological studies help inform decisions regarding the number of people at risk as well as the geographic areas that need to be targeted with mass drug administration; however, limited attention has been given to propagated uncertainties, their interpretation, and consequences for the mapped values. Using currently published literature on the spatial epidemiology of helminth infections we identified: (1) the main uncertainty sources, their definition and quantification, and (2) how uncertainty is informative for STH programme managers and scientists working in this domain. We performed a systematic literature search using the Preferred Reporting Items for Systematic reviews and Meta-Analysis (PRISMA) protocol. We searched Web of Knowledge and PubMed using a combination of uncertainty, geographic and disease terms. A total of 73 papers fulfilled the inclusion criteria for the systematic review. Only 9% of the studies did not address any element of uncertainty, while 91% of studies quantified uncertainty in the predicted morbidity indicators and 23% of studies mapped it. In addition, 57% of the studies quantified uncertainty in the regression coefficients but only 7% incorporated it in the regression response variable (morbidity indicator). Fifty percent of the studies discussed uncertainty in the covariates but did not quantify it. Uncertainty was mostly defined as precision, and quantified using credible intervals by means of Bayesian approaches. None of the studies adequately considered all sources of uncertainty. We highlighted the need for uncertainty in the morbidity indicator and predictor variable to be incorporated into the modelling framework. Study design and spatial support require further attention and uncertainty associated with Earth observation data should be quantified. Finally, more attention should be given to mapping and interpreting

  15. Uncertainty in simulating wheat yields under climate change

    DEFF Research Database (Denmark)

    Asseng, A; Ewert, F; Rosenzweig, C

    2013-01-01

    of environments, particularly if the input information is sufficient. However, simulated climate change impacts vary across models owing to differences in model structures and parameter values. A greater proportion of the uncertainty in climate change impact projections was due to variations among crop models...... than to variations among downscaled general circulation models. Uncertainties in simulated impacts increased with CO2 concentrations and associated warming. These impact uncertainties can be reduced by improving temperature and CO2 relationships in models and better quantified through use of multi...

  16. Uncertainty Analyses and Strategy

    International Nuclear Information System (INIS)

    Kevin Coppersmith

    2001-01-01

    The DOE identified a variety of uncertainties, arising from different sources, during its assessment of the performance of a potential geologic repository at the Yucca Mountain site. In general, the number and detail of process models developed for the Yucca Mountain site, and the complex coupling among those models, make the direct incorporation of all uncertainties difficult. The DOE has addressed these issues in a number of ways using an approach to uncertainties that is focused on producing a defensible evaluation of the performance of a potential repository. The treatment of uncertainties oriented toward defensible assessments has led to analyses and models with so-called 'conservative' assumptions and parameter bounds, where conservative implies lower performance than might be demonstrated with a more realistic representation. The varying maturity of the analyses and models, and uneven level of data availability, result in total system level analyses with a mix of realistic and conservative estimates (for both probabilistic representations and single values). That is, some inputs have realistically represented uncertainties, and others are conservatively estimated or bounded. However, this approach is consistent with the 'reasonable assurance' approach to compliance demonstration, which was called for in the U.S. Nuclear Regulatory Commission's (NRC) proposed 10 CFR Part 63 regulation (64 FR 8640 [DIRS 101680]). A risk analysis that includes conservatism in the inputs will result in conservative risk estimates. Therefore, the approach taken for the Total System Performance Assessment for the Site Recommendation (TSPA-SR) provides a reasonable representation of processes and conservatism for purposes of site recommendation. However, mixing unknown degrees of conservatism in models and parameter representations reduces the transparency of the analysis and makes the development of coherent and consistent probability statements about projected repository

  17. Avoiding climate change uncertainties in Strategic Environmental Assessment

    Energy Technology Data Exchange (ETDEWEB)

    Larsen, Sanne Vammen, E-mail: sannevl@plan.aau.dk [The Danish Centre for Environmental Assessment, Aalborg University-Copenhagen, A.C. Meyers Vænge 15, 2450 København SV (Denmark); Kørnøv, Lone, E-mail: lonek@plan.aau.dk [The Danish Centre for Environmental Assessment, Aalborg University, Skibbrogade 5, 1. Sal, 9000 Aalborg (Denmark); Driscoll, Patrick, E-mail: patrick@plan.aau.dk [The Danish Centre for Environmental Assessment, Aalborg University-Copenhagen, A.C. Meyers Vænge 15, 2450 København SV (Denmark)

    2013-11-15

    This article is concerned with how Strategic Environmental Assessment (SEA) practice handles climate change uncertainties within the Danish planning system. First, a hypothetical model is set up for how uncertainty is handled and not handled in decision-making. The model incorporates the strategies ‘reduction’ and ‘resilience’, ‘denying’, ‘ignoring’ and ‘postponing’. Second, 151 Danish SEAs are analysed with a focus on the extent to which climate change uncertainties are acknowledged and presented, and the empirical findings are discussed in relation to the model. The findings indicate that despite incentives to do so, climate change uncertainties were systematically avoided or downplayed in all but 5 of the 151 SEAs that were reviewed. Finally, two possible explanatory mechanisms are proposed to explain this: conflict avoidance and a need to quantify uncertainty.

  18. Economic uncertainty and its impact on the Croatian economy

    Directory of Open Access Journals (Sweden)

    Petar Soric

    2017-12-01

    Full Text Available The aim of this paper is to quantify institutional (political and fiscal) and non-institutional uncertainty (economic policy uncertainty, Economists’ recession index, natural disasters-related uncertainty, and several disagreement measures). The stated indicators are based on articles from highly popular Croatian news portals, the repository of law amendments (Narodne novine), and Business and Consumer Surveys. We also introduce a composite uncertainty indicator, obtained by the principal components method. The analysis of a structural VAR model of the Croatian economy (both with fixed and time-varying parameters) has shown that a large share of the analysed indicators are significant predictors of economic activity. It is demonstrated that their impact on industrial production is the strongest at the onset of a crisis. On the other hand, the influence of fiscal uncertainty exhibits just the opposite tendency. It strengthens with the intensification of economic activity, which partly justifies the possible use of fiscal expansion as a counter-crisis tool.
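
    To make the composite-indicator step concrete, here is a minimal sketch (not the authors' code) of extracting a composite uncertainty index as the first principal component of several standardized uncertainty indicators; the column names and the synthetic data are assumptions for illustration only.

```python
# Sketch: composite uncertainty index as the first principal component of
# several standardized uncertainty indicators (column names are hypothetical).
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def composite_uncertainty_index(indicators: pd.DataFrame) -> pd.Series:
    """Return the first principal component of the standardized indicators."""
    z = StandardScaler().fit_transform(indicators.values)
    pc1 = PCA(n_components=1).fit_transform(z).ravel()
    # The sign of a principal component is arbitrary; orient the index so it
    # correlates positively with the mean of the standardized inputs.
    if np.corrcoef(pc1, z.mean(axis=1))[0, 1] < 0:
        pc1 = -pc1
    return pd.Series(pc1, index=indicators.index, name="composite_uncertainty")

# Example with synthetic monthly data
rng = np.random.default_rng(0)
df = pd.DataFrame(
    rng.normal(size=(120, 3)),
    columns=["policy_uncertainty", "fiscal_uncertainty", "disagreement"],
)
print(composite_uncertainty_index(df).head())
```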

  19. Uncertainties in extreme precipitation under climate change conditions

    DEFF Research Database (Denmark)

    Sunyer Pinya, Maria Antonia

    of adaptation strategies, but these changes are subject to uncertainties. The focus of this PhD thesis is the quantification of uncertainties in changes in extreme precipitation. It addresses two of the main sources of uncertainty in climate change impact studies: regional climate models (RCMs) and statistical...... downscaling methods (SDMs). RCMs provide information on climate change at the regional scale. SDMs are used to bias-correct and downscale the outputs of the RCMs to the local scale of interest in adaptation strategies. In the first part of the study, a multi-model ensemble of RCMs from the European ENSEMBLES...... project was used to quantify the uncertainty in RCM projections over Denmark. Three aspects of the RCMs relevant for the uncertainty quantification were first identified and investigated. These are: the interdependency of the RCMs; the performance in current climate; and the change in the performance...

  20. Avoiding climate change uncertainties in Strategic Environmental Assessment

    International Nuclear Information System (INIS)

    Larsen, Sanne Vammen; Kørnøv, Lone; Driscoll, Patrick

    2013-01-01

    This article is concerned with how Strategic Environmental Assessment (SEA) practice handles climate change uncertainties within the Danish planning system. First, a hypothetical model is set up for how uncertainty is handled and not handled in decision-making. The model incorporates the strategies ‘reduction’ and ‘resilience’, ‘denying’, ‘ignoring’ and ‘postponing’. Second, 151 Danish SEAs are analysed with a focus on the extent to which climate change uncertainties are acknowledged and presented, and the empirical findings are discussed in relation to the model. The findings indicate that despite incentives to do so, climate change uncertainties were systematically avoided or downplayed in all but 5 of the 151 SEAs that were reviewed. Finally, two possible explanatory mechanisms are proposed to explain this: conflict avoidance and a need to quantify uncertainty

  1. Estimating the measurement uncertainty in forensic blood alcohol analysis.

    Science.gov (United States)

    Gullberg, Rod G

    2012-04-01

    For many reasons, forensic toxicologists are being asked to determine and report their measurement uncertainty in blood alcohol analysis. While understood conceptually, the elements and computations involved in determining measurement uncertainty are generally foreign to most forensic toxicologists. Several established and well-documented methods are available to determine and report the uncertainty in blood alcohol measurement. A straightforward bottom-up approach is presented that includes: (1) specifying the measurand, (2) identifying the major components of uncertainty, (3) quantifying the components, (4) statistically combining the components and (5) reporting the results. A hypothetical example is presented that employs reasonable estimates for forensic blood alcohol analysis assuming headspace gas chromatography. These computations are easily employed in spreadsheet programs as well. Determining and reporting measurement uncertainty is an important element in establishing fitness-for-purpose. Indeed, the demand for such computations and information from the forensic toxicologist will continue to increase.
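
    As an illustration of the bottom-up approach, the following sketch combines a handful of relative standard uncertainty components in quadrature and expands the result with a coverage factor of k = 2; the component names and numbers are hypothetical, not laboratory values.

```python
# Sketch of a bottom-up combination of uncertainty components (illustrative
# numbers only, not real laboratory data): standard uncertainties are combined
# in quadrature and expanded with a coverage factor k = 2 (about 95% coverage).
import math

components = {                # relative standard uncertainties (fractions)
    "calibrator": 0.010,
    "analytical_repeatability": 0.015,
    "sampling_and_dilution": 0.008,
    "bias_correction": 0.005,
}

mean_bac = 0.085              # measured blood alcohol concentration, g/100 mL

u_rel = math.sqrt(sum(u**2 for u in components.values()))  # combined relative u
U = 2 * u_rel * mean_bac                                   # expanded uncertainty
print(f"Result: {mean_bac:.3f} +/- {U:.3f} g/100 mL (k=2)")
```

    The same arithmetic transfers directly to a spreadsheet, as the abstract notes: one column of standard uncertainties, a root-sum-of-squares cell, and a multiplication by the coverage factor.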

  2. Assessing Groundwater Model Uncertainty for the Central Nevada Test Area

    International Nuclear Information System (INIS)

    Pohll, Greg; Pohlmann, Karl; Hassan, Ahmed; Chapman, Jenny; Mihevc, Todd

    2002-01-01

    The purpose of this study is to quantify the flow and transport model uncertainty for the Central Nevada Test Area (CNTA). Six parameters were identified as uncertain, including the specified head boundary conditions used in the flow model, the spatial distribution of the underlying welded tuff unit, effective porosity, sorption coefficients, matrix diffusion coefficient, and the geochemical release function which describes nuclear glass dissolution. The parameter uncertainty was described by assigning prior statistical distributions for each of these parameters. Standard Monte Carlo techniques were used to sample from the parameter distributions to determine the full prediction uncertainty. Additional analysis is performed to determine the most cost-beneficial characterization activities. The maximum radius of the tritium and strontium-90 contaminant boundary was used as the output metric for evaluation of prediction uncertainty. The results indicate that combining all of the uncertainty in the parameters listed above propagates to a prediction uncertainty in the maximum radius of the contaminant boundary of 234 to 308 m and 234 to 302 m, for tritium and strontium-90, respectively. Although the uncertainty in the input parameters is large, the prediction uncertainty in the contaminant boundary is relatively small. The relatively small prediction uncertainty is primarily due to the small transport velocities, such that large changes in the uncertain input parameters cause small changes in the contaminant boundary. This suggests that the model is suitable in terms of predictive capability for the contaminant boundary delineation
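
    A minimal sketch of this kind of Monte Carlo propagation is shown below; the prior distributions and the surrogate function standing in for the flow-and-transport model are illustrative assumptions, not the actual CNTA model.

```python
# Minimal Monte Carlo propagation sketch: sample uncertain inputs from prior
# distributions and summarize the spread of a model output. The "model" here
# is a placeholder surrogate, not the actual CNTA flow-and-transport code.
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

porosity  = rng.uniform(0.05, 0.25, n)                     # effective porosity
sorption  = rng.lognormal(mean=0.0, sigma=0.5, size=n)     # sorption coefficient
diffusion = rng.lognormal(mean=-2.0, sigma=0.3, size=n)    # matrix diffusion

def contaminant_radius(porosity, sorption, diffusion):
    """Placeholder surrogate for the maximum contaminant-boundary radius (m)."""
    return 250.0 * (0.15 / porosity) ** 0.3 / (1.0 + sorption) ** 0.1 + 50.0 * diffusion

radius = contaminant_radius(porosity, sorption, diffusion)
lo, hi = np.percentile(radius, [2.5, 97.5])
print(f"95% prediction interval for the boundary radius: {lo:.0f}-{hi:.0f} m")
```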

  3. Facing uncertainty in ecosystem services-based resource management.

    Science.gov (United States)

    Grêt-Regamey, Adrienne; Brunner, Sibyl H; Altwegg, Jürg; Bebi, Peter

    2013-09-01

    The concept of ecosystem services is increasingly used as a support for natural resource management decisions. While the science for assessing ecosystem services is improving, appropriate methods to address uncertainties in a quantitative manner are missing. Ignoring parameter uncertainties, modeling uncertainties and uncertainties related to human-environment interactions can modify decisions and lead to overlooking important management possibilities. In this contribution, we present a new approach for mapping the uncertainties in the assessment of multiple ecosystem services. The spatially explicit risk approach links Bayesian networks to a Geographic Information System for forecasting the value of a bundle of ecosystem services and quantifies the uncertainties related to the outcomes in a spatially explicit manner. We demonstrate that mapping uncertainties in ecosystem services assessments provides key information for decision-makers seeking critical areas in the delivery of ecosystem services in a case study in the Swiss Alps. The results suggest that not only the total value of the bundle of ecosystem services is highly dependent on uncertainties, but the spatial pattern of the ecosystem services values changes substantially when considering uncertainties. This is particularly important for the long-term management of mountain forest ecosystems, which have long rotation stands and are highly sensitive to pressing climate and socio-economic changes. Copyright © 2012 Elsevier Ltd. All rights reserved.

  4. Uncertainties in repository modeling

    Energy Technology Data Exchange (ETDEWEB)

    Wilson, J.R.

    1996-12-31

    The distant future is very difficult to predict. Unfortunately, our regulators are being encouraged to extend their regulatory period from the standard 10,000 years to 1 million years. Such overconfidence is not justified due to uncertainties in dating, calibration, and modeling.

  5. Uncertainties in repository modeling

    International Nuclear Information System (INIS)

    Wilson, J.R.

    1996-01-01

    The distant future is very difficult to predict. Unfortunately, our regulators are being encouraged to extend their regulatory period from the standard 10,000 years to 1 million years. Such overconfidence is not justified due to uncertainties in dating, calibration, and modeling

  6. Risks, uncertainty, vagueness

    International Nuclear Information System (INIS)

    Haefele, W.; Renn, O.; Erdmann, G.

    1990-01-01

    The notion of 'risk' is discussed in its social and technological contexts, leading to an investigation of the terms factuality, hypotheticality, uncertainty, and vagueness, and to the problems of acceptance and acceptability especially in the context of political decision finding. (DG) [de]

  7. Uncertainty visualisation in the Model Web

    Science.gov (United States)

    Gerharz, L. E.; Autermann, C.; Hopmann, H.; Stasch, C.; Pebesma, E.

    2012-04-01

    Visualisation of geospatial data as maps is a common way to communicate spatially distributed information. If temporal and, furthermore, uncertainty information are included in the data, efficient visualisation methods are required. For uncertain spatial and spatio-temporal data, numerous visualisation methods have been developed and proposed, but only a few tools for visualisation of data in a standardised way exist. Furthermore, usually they are realised as thick clients, and lack functionality for handling data coming from web services as it is envisaged in the Model Web. We present an interactive web tool for visualisation of uncertain spatio-temporal data developed in the UncertWeb project. The client is based on the OpenLayers JavaScript library. OpenLayers provides standard map windows and navigation tools, i.e. pan, zoom in/out, to allow interactive control for the user. Further interactive methods are implemented using jStat, a JavaScript library for statistics plots developed in UncertWeb, and flot. To integrate the uncertainty information into existing standards for geospatial data, the Uncertainty Markup Language (UncertML) was applied in combination with OGC Observations&Measurements 2.0 and JavaScript Object Notation (JSON) encodings for vector and NetCDF for raster data. The client offers methods to visualise uncertain vector and raster data with temporal information. The uncertainty information considered for the tool comprises probabilistic and quantified attribute uncertainties which can be provided as realisations or samples, full probability distribution functions and statistics. Visualisation is supported for uncertain continuous and categorical data. In the client, the visualisation is realised using a combination of different methods. Based on previously conducted usability studies, a differentiation between expert (in statistics or mapping) and non-expert users has been indicated as useful. Therefore, two different modes are realised together in the tool

  8. Multidominance, ellipsis, and quantifier scope

    NARCIS (Netherlands)

    Temmerman, Tanja Maria Hugo

    2012-01-01

    This dissertation provides a novel perspective on the interaction between quantifier scope and ellipsis. It presents a detailed investigation of the scopal interaction between English negative indefinites, modals, and quantified phrases in ellipsis. One of the crucial observations is that a negative

  9. Development of a Prototype Model-Form Uncertainty Knowledge Base

    Science.gov (United States)

    Green, Lawrence L.

    2016-01-01

    Uncertainties are generally classified as either aleatory or epistemic. Aleatory uncertainties are those attributed to random variation, either naturally or through manufacturing processes. Epistemic uncertainties are generally attributed to a lack of knowledge. One type of epistemic uncertainty is called model-form uncertainty. The term model-form means that among the choices to be made during a design process within an analysis, there are different forms of the analysis process, which each give different results for the same configuration at the same flight conditions. Examples of model-form uncertainties include the grid density, grid type, and solver type used within a computational fluid dynamics code, or the choice of the number and type of model elements within a structures analysis. The objectives of this work are to identify and quantify a representative set of model-form uncertainties and to make this information available to designers through an interactive knowledge base (KB). The KB can then be used during probabilistic design sessions, so as to enable the possible reduction of uncertainties in the design process through resource investment. An extensive literature search has been conducted to identify and quantify typical model-form uncertainties present within aerospace design. An initial attempt has been made to assemble the results of this literature search into a searchable KB, usable in real time during probabilistic design sessions. A concept of operations and the basic structure of a model-form uncertainty KB are described. Key operations within the KB are illustrated. Current limitations in the KB, and possible workarounds are explained.

  10. Aggregated recommendation through random forests.

    Science.gov (United States)

    Zhang, Heng-Ru; Min, Fan; He, Xu

    2014-01-01

    Aggregated recommendation refers to the process of suggesting one kind of items to a group of users. Compared to user-oriented or item-oriented approaches, it is more general and, therefore, more appropriate for cold-start recommendation. In this paper, we propose a random forest approach to create aggregated recommender systems. The approach is used to predict the rating of a group of users to a kind of items. In the preprocessing stage, we merge user, item, and rating information to construct an aggregated decision table, where rating information serves as the decision attribute. We also model the data conversion process corresponding to the new user, new item, and both new problems. In the training stage, a forest is built for the aggregated training set, where each leaf is assigned a distribution of discrete rating. In the testing stage, we present four predicting approaches to compute evaluation values based on the distribution of each tree. Experimental results on the well-known MovieLens dataset show that the aggregated approach maintains an acceptable level of accuracy.
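
    A minimal sketch of the general recipe, assuming a hypothetical feature layout and synthetic ratings rather than the paper's exact aggregated decision table:

```python
# Sketch of the general idea: merge user and item attributes into one
# "aggregated decision table" and let a random forest predict the rating.
# The feature layout and data below are hypothetical, not the paper's encoding.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 2_000
X = np.column_stack([
    rng.integers(0, 3, n),     # user group (e.g. age band)
    rng.integers(0, 2, n),     # user gender
    rng.integers(0, 5, n),     # item genre
])
y = rng.integers(1, 6, n)      # discrete rating 1-5 (synthetic)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Each tree's leaf yields a rating distribution; predict_proba averages them,
# which loosely mirrors the "distribution of discrete rating" per leaf.
print(forest.predict_proba(X_te[:1]))
```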

  11. Strategy under uncertainty.

    Science.gov (United States)

    Courtney, H; Kirkland, J; Viguerie, P

    1997-01-01

    At the heart of the traditional approach to strategy lies the assumption that by applying a set of powerful analytic tools, executives can predict the future of any business accurately enough to allow them to choose a clear strategic direction. But what happens when the environment is so uncertain that no amount of analysis will allow us to predict the future? What makes for a good strategy in highly uncertain business environments? The authors, consultants at McKinsey & Company, argue that uncertainty requires a new way of thinking about strategy. All too often, they say, executives take a binary view: either they underestimate uncertainty to come up with the forecasts required by their companies' planning or capital-budgeting processes, or they overestimate it, abandon all analysis, and go with their gut instinct. The authors outline a new approach that begins by making a crucial distinction among four discrete levels of uncertainty that any company might face. They then explain how a set of generic strategies--shaping the market, adapting to it, or reserving the right to play at a later time--can be used in each of the four levels. And they illustrate how these strategies can be implemented through a combination of three basic types of actions: big bets, options, and no-regrets moves. The framework can help managers determine which analytic tools can inform decision making under uncertainty--and which cannot. At a broader level, it offers executives a discipline for thinking rigorously and systematically about uncertainty and its implications for strategy.

  12. Three-dimensional shape analysis of coarse aggregates: New techniques for and preliminary results on several different coarse aggregates and reference rocks

    International Nuclear Information System (INIS)

    Erdogan, S.T.; Quiroga, P.N.; Fowler, D.W.; Saleh, H.A.; Livingston, R.A.; Garboczi, E.J.; Ketcham, P.M.; Hagedorn, J.G.; Satterfield, S.G.

    2006-01-01

    The shape of aggregates used in concrete is an important parameter that helps determine many concrete properties, especially the rheology of fresh concrete and early-age mechanical properties. This paper discusses the sample preparation and image analysis techniques necessary for obtaining an aggregate particle image in 3-D, using X-ray computed tomography, which is then suitable for spherical harmonic analysis. The shapes of three reference rocks are analyzed for uncertainty determination via direct comparison to the geometry of their reconstructed images. A Virtual Reality Modeling Language technique is demonstrated that can give quick and accurate 3-D views of aggregates. Shape data on several different kinds of coarse aggregates are compared and used to illustrate potential mathematical shape analyses made possible by the spherical harmonic information

  13. Fractal Aggregates in Tennis Ball Systems

    Science.gov (United States)

    Sabin, J.; Bandin, M.; Prieto, G.; Sarmiento, F.

    2009-01-01

    We present a new practical exercise to explain the mechanisms of aggregation of some colloids which are otherwise not easy to understand. We have used tennis balls to simulate, in a visual way, the aggregation of colloids under reaction-limited colloid aggregation (RLCA) and diffusion-limited colloid aggregation (DLCA) regimes. We have used the…

  14. A Functional Reference Architecture for Aggregators

    DEFF Research Database (Denmark)

    Bondy, Daniel Esteban Morales; Heussen, Kai; Gehrke, Oliver

    2015-01-01

    Aggregators are considered to be a key enabling technology for harvesting power system services from distributed energy resources (DER). As a precondition for more widespread use of aggregators in power systems, methods for comparing and validating aggregator designs must be established. This paper...... proposes a functional reference architecture for aggregators to address this requirement....

  15. Aggregation of carbon dioxide sequestration storage assessment units

    Science.gov (United States)

    Blondes, Madalyn S.; Schuenemeyer, John H.; Olea, Ricardo A.; Drew, Lawrence J.

    2013-01-01

    The U.S. Geological Survey is currently conducting a national assessment of carbon dioxide (CO2) storage resources, mandated by the Energy Independence and Security Act of 2007. Pre-emission capture and storage of CO2 in subsurface saline formations is one potential method to reduce greenhouse gas emissions and the negative impact of global climate change. Like many large-scale resource assessments, the area under investigation is split into smaller, more manageable storage assessment units (SAUs), which must be aggregated with correctly propagated uncertainty to the basin, regional, and national scales. The aggregation methodology requires two types of data: marginal probability distributions of storage resource for each SAU, and a correlation matrix obtained by expert elicitation describing interdependencies between pairs of SAUs. Dependencies arise because geologic analogs, assessment methods, and assessors often overlap. The correlation matrix is used to induce rank correlation, using a Cholesky decomposition, among the empirical marginal distributions representing individually assessed SAUs. This manuscript presents a probabilistic aggregation method tailored to the correlations and dependencies inherent to a CO2 storage assessment. Aggregation results must be presented at the basin, regional, and national scales. A single stage approach, in which one large correlation matrix is defined and subsets are used for different scales, is compared to a multiple stage approach, in which new correlation matrices are created to aggregate intermediate results. Although the single-stage approach requires determination of significantly more correlation coefficients, it captures geologic dependencies among similar units in different basins and it is less sensitive to fluctuations in low correlation coefficients than the multiple stage approach. Thus, subsets of one single-stage correlation matrix are used to aggregate to basin, regional, and national scales.
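
    The following sketch illustrates the core mechanism of inducing rank correlation with a Cholesky factor before summing the marginals (an Iman-Conover-style reordering); the three marginal distributions and the correlation matrix are synthetic stand-ins, not assessed SAU data.

```python
# Sketch of aggregation with induced rank correlation: reorder independent
# samples from each SAU's marginal distribution so their rank correlation
# follows a target matrix, then sum across SAUs. All inputs are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 20_000

# Independent samples from three hypothetical SAU storage-resource marginals
marginals = [rng.lognormal(3.0, 0.6, n),
             rng.lognormal(2.5, 0.4, n),
             rng.lognormal(3.2, 0.8, n)]

target_corr = np.array([[1.0, 0.6, 0.3],
                        [0.6, 1.0, 0.4],
                        [0.3, 0.4, 1.0]])

# Correlated normal scores via a Cholesky factor, used only for their ranks
z = rng.standard_normal((n, 3)) @ np.linalg.cholesky(target_corr).T
correlated = np.column_stack([
    np.sort(m)[stats.rankdata(z[:, j]).astype(int) - 1]
    for j, m in enumerate(marginals)
])

aggregate = correlated.sum(axis=1)          # basin/regional/national total
print(np.percentile(aggregate, [5, 50, 95]))
```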

  16. Uncertainty contributions to low flow projections in Austria

    Science.gov (United States)

    Parajka, J.; Blaschke, A. P.; Blöschl, G.; Haslinger, K.; Hepp, G.; Laaha, G.; Schöner, W.; Trautvetter, H.; Viglione, A.; Zessner, M.

    2015-11-01

    The main objective of the paper is to understand the contributions to the uncertainty in low flow projections resulting from hydrological model uncertainty and climate projection uncertainty. Model uncertainty is quantified by different parameterizations of a conceptual semi-distributed hydrologic model (TUWmodel) using 11 objective functions in three different decades (1976-1986, 1987-1997, 1998-2008), which allows disentangling the effect of modeling uncertainty and temporal stability of model parameters. Climate projection uncertainty is quantified by four future climate scenarios (ECHAM5-A1B, A2, B1 and HADCM3-A1B) using a delta change approach. The approach is tested for 262 basins in Austria. The results indicate that the seasonality of the low flow regime is an important factor affecting the performance of model calibration in the reference period and the uncertainty of Q95 low flow projections in the future period. In Austria, the calibration uncertainty in terms of Q95 is larger in basins with summer low flow regime than in basins with winter low flow regime. Using different calibration periods may result in a range of up to 60 % in simulated Q95 low flows. The low flow projections show an increase of low flows in the Alps, typically in the range of 10-30 % and a decrease in the south-eastern part of Austria mostly in the range -5 to -20 % for the period 2021-2050 relative to the reference period 1976-2008. The change in seasonality varies between scenarios, but there is a tendency for earlier low flows in the Northern Alps and later low flows in Eastern Austria. In 85 % of the basins, the uncertainty in Q95 from model calibration is larger than the uncertainty from different climate scenarios. The total uncertainty of Q95 projections is the largest in basins with winter low flow regime and, in some basins, exceeds 60 %. In basins with summer low flows, the total uncertainty is mostly less than 20 %. While the calibration uncertainty dominates over climate

  17. Risk Assessment Uncertainties in Cybersecurity Investments

    Directory of Open Access Journals (Sweden)

    Andrew Fielder

    2018-06-01

    Full Text Available When undertaking cybersecurity risk assessments, it is important to be able to assign numeric values to metrics to compute the final expected loss that represents the risk that an organization is exposed to due to cyber threats. Even if risk assessment is motivated by real-world observations and data, there is always a high chance of assigning inaccurate values due to different uncertainties involved (e.g., evolving threat landscape, human errors and the natural difficulty of quantifying risk). Existing models empower organizations to compute optimal cybersecurity strategies given their financial constraints, i.e., available cybersecurity budget. Further, a general game-theoretic model with uncertain payoffs (probability-distribution-valued payoffs) shows that such uncertainty can be incorporated in the game-theoretic model by allowing payoffs to be random. This paper extends previous work in the field to tackle uncertainties in risk assessment that affect cybersecurity investments. The findings from simulated examples indicate that although uncertainties in cybersecurity risk assessment lead, on average, to different cybersecurity strategies, they do not play a significant role in the final expected loss of the organization when utilising a game-theoretic model and methodology to derive these strategies. The model determines robust defending strategies even when knowledge regarding risk assessment values is not accurate. As a result, it is possible to show that the cybersecurity investments’ tool is capable of providing effective decision support.

  18. Parametric uncertainty in optical image modeling

    Science.gov (United States)

    Potzick, James; Marx, Egon; Davidson, Mark

    2006-10-01

    Optical photomask feature metrology and wafer exposure process simulation both rely on optical image modeling for accurate results. While it is fair to question the accuracies of the available models, model results also depend on several input parameters describing the object and imaging system. Errors in these parameter values can lead to significant errors in the modeled image. These parameters include wavelength, illumination and objective NA's, magnification, focus, etc. for the optical system, and topography, complex index of refraction n and k, etc. for the object. In this paper each input parameter is varied over a range about its nominal value and the corresponding images simulated. Second order parameter interactions are not explored. Using the scenario of the optical measurement of photomask features, these parametric sensitivities are quantified by calculating the apparent change of the measured linewidth for a small change in the relevant parameter. Then, using reasonable values for the estimated uncertainties of these parameters, the parametric linewidth uncertainties can be calculated and combined to give a lower limit to the linewidth measurement uncertainty for those parameter uncertainties.
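
    A hedged sketch of the combination step described above: each parametric contribution is the product of a sensitivity (the apparent linewidth change per unit change in the parameter) and that parameter's standard uncertainty, and the contributions are combined in quadrature. The sensitivities and uncertainties below are placeholders, not values from the paper.

```python
# Sketch of combining parametric sensitivities into a linewidth uncertainty.
# Each term is (linewidth change per unit parameter change) * (parameter
# standard uncertainty); terms are combined in quadrature. Numbers are
# illustrative, not measured sensitivities.
import math

# (sensitivity dL/dp in nm per unit of p, standard uncertainty of p)
terms = {
    "wavelength (nm)":    (0.30, 0.5),
    "illumination NA":    (25.0, 0.01),
    "objective NA":       (40.0, 0.01),
    "focus (um)":         (8.0,  0.05),
    "refractive index n": (15.0, 0.02),
}

u_linewidth = math.sqrt(sum((s * u) ** 2 for s, u in terms.values()))
print(f"Parametric contribution to linewidth uncertainty: {u_linewidth:.2f} nm")
```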

  19. Uncertainty and validation. Effect of user interpretation on uncertainty estimates

    International Nuclear Information System (INIS)

    Kirchner, G.; Peterson, R.

    1996-11-01

    Uncertainty in predictions of environmental transfer models arises from, among other sources, the adequacy of the conceptual model, the approximations made in coding the conceptual model, the quality of the input data, the uncertainty in parameter values, and the assumptions made by the user. In recent years efforts to quantify the confidence that can be placed in predictions have been increasing, but have concentrated on a statistical propagation of the influence of parameter uncertainties on the calculational results. The primary objective of this Working Group of BIOMOVS II was to test user's influence on model predictions on a more systematic basis than has been done before. The main goals were as follows: To compare differences between predictions from different people all using the same model and the same scenario description with the statistical uncertainties calculated by the model. To investigate the main reasons for different interpretations by users. To create a better awareness of the potential influence of the user on the modeling results. Terrestrial food chain models driven by deposition of radionuclides from the atmosphere were used. Three codes were obtained and run with three scenarios by a maximum of 10 users. A number of conclusions can be drawn some of which are general and independent of the type of models and processes studied, while others are restricted to the few processes that were addressed directly: For any set of predictions, the variation in best estimates was greater than one order of magnitude. Often the range increased from deposition to pasture to milk probably due to additional transfer processes. The 95% confidence intervals about the predictions calculated from the parameter distributions prepared by the participants did not always overlap the observations; similarly, sometimes the confidence intervals on the predictions did not overlap. Often the 95% confidence intervals of individual predictions were smaller than the

  20. Uncertainty and validation. Effect of user interpretation on uncertainty estimates

    Energy Technology Data Exchange (ETDEWEB)

    Kirchner, G. [Univ. of Bremen (Germany)]; Peterson, R. [AECL, Chalk River, ON (Canada)] [and others]

    1996-11-01

    Uncertainty in predictions of environmental transfer models arises from, among other sources, the adequacy of the conceptual model, the approximations made in coding the conceptual model, the quality of the input data, the uncertainty in parameter values, and the assumptions made by the user. In recent years efforts to quantify the confidence that can be placed in predictions have been increasing, but have concentrated on a statistical propagation of the influence of parameter uncertainties on the calculational results. The primary objective of this Working Group of BIOMOVS II was to test user's influence on model predictions on a more systematic basis than has been done before. The main goals were as follows: To compare differences between predictions from different people all using the same model and the same scenario description with the statistical uncertainties calculated by the model. To investigate the main reasons for different interpretations by users. To create a better awareness of the potential influence of the user on the modeling results. Terrestrial food chain models driven by deposition of radionuclides from the atmosphere were used. Three codes were obtained and run with three scenarios by a maximum of 10 users. A number of conclusions can be drawn some of which are general and independent of the type of models and processes studied, while others are restricted to the few processes that were addressed directly: For any set of predictions, the variation in best estimates was greater than one order of magnitude. Often the range increased from deposition to pasture to milk probably due to additional transfer processes. The 95% confidence intervals about the predictions calculated from the parameter distributions prepared by the participants did not always overlap the observations; similarly, sometimes the confidence intervals on the predictions did not overlap. Often the 95% confidence intervals of individual predictions were smaller than the

  1. An exact approach for aggregated formulations

    DEFF Research Database (Denmark)

    Gamst, Mette; Spoorendonk, Simon; Røpke, Stefan

    Aggregating formulations is a powerful approach for problems to take on tractable forms. Aggregation may lead to loss of information, i.e. the aggregated formulation may be an approximation of the original problem. In branch-and-bound context, aggregation can also complicate branching, e.g. when...... optimality cannot be guaranteed by branching on aggregated variables. We present a generic exact solution method to remedy the drawbacks of aggregation. It combines the original and aggregated formulations and applies Benders' decomposition. We apply the method to the Split Delivery Vehicle Routing Problem....

  2. Balancing energy flexibilities through aggregation

    DEFF Research Database (Denmark)

    Valsomatzis, Emmanouil; Hose, Katja; Pedersen, Torben Bach

    2014-01-01

    One of the main goals of recent developments in the Smart Grid area is to increase the use of renewable energy sources. These sources are characterized by energy fluctuations that might lead to energy imbalances and congestions in the electricity grid. Exploiting inherent flexibilities, which exist...... in both energy production and consumption, is the key to solving these problems. Flexibilities can be expressed as flex-offers, which due to their high number need to be aggregated to reduce the complexity of energy scheduling. In this paper, we discuss balance aggregation techniques that already during...... aggregation aim at balancing flexibilities in production and consumption to reduce the probability of congestions and reduce the complexity of scheduling. We present results of our extensive experiments....

  3. Turbulent breakage of ductile aggregates.

    Science.gov (United States)

    Marchioli, Cristian; Soldati, Alfredo

    2015-05-01

    In this paper we study breakage rate statistics of small colloidal aggregates in nonhomogeneous anisotropic turbulence. We use pseudospectral direct numerical simulation of turbulent channel flow and Lagrangian tracking to follow the motion of the aggregates, modeled as sub-Kolmogorov massless particles. We focus specifically on the effects produced by ductile rupture: This rupture is initially activated when fluctuating hydrodynamic stresses exceed a critical value, σ>σ(cr), and is brought to completion when the energy absorbed by the aggregate meets the critical breakage value. We show that ductile rupture breakage rates are significantly reduced with respect to the case of instantaneous brittle rupture (i.e., breakage occurs as soon as σ>σ(cr)). These discrepancies are due to the different energy values at play as well as to the statistical features of energy distribution in the anisotropic turbulence case examined.

  4. A method for minimum risk portfolio optimization under hybrid uncertainty

    Science.gov (United States)

    Egorova, Yu E.; Yazenin, A. V.

    2018-03-01

    In this paper, we investigate a minimum risk portfolio model under hybrid uncertainty when the profitability of financial assets is described by fuzzy random variables. According to Feng, the variance of a portfolio is defined as a crisp value. To aggregate fuzzy information the weakest (drastic) t-norm is used. We construct an equivalent stochastic problem of the minimum risk portfolio model and specify the stochastic penalty method for solving it.
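
    For reference, the drastic (weakest) t-norm used to aggregate the fuzzy information can be written as follows; this is the standard textbook definition, included here only as a reminder.

```latex
% Drastic (weakest) t-norm T_D on [0,1]
T_D(a,b) =
\begin{cases}
a, & b = 1,\\
b, & a = 1,\\
0, & \text{otherwise.}
\end{cases}
```

    It is the pointwise-smallest t-norm, which makes it the most conservative choice for aggregating fuzzy information.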

  5. Estimates of bias and uncertainty in recorded external dose

    International Nuclear Information System (INIS)

    Fix, J.J.; Gilbert, E.S.; Baumgartner, W.V.

    1994-10-01

    A study is underway to develop an approach to quantify bias and uncertainty in recorded dose estimates for workers at the Hanford Site based on personnel dosimeter results. This paper focuses on selected experimental studies conducted to better define response characteristics of Hanford dosimeters. The study is more extensive than the experimental studies presented in this paper and includes detailed consideration and evaluation of other sources of bias and uncertainty. Hanford worker dose estimates are used in epidemiologic studies of nuclear workers. A major objective of these studies is to provide a direct assessment of the carcinogenic risk of exposure to ionizing radiation at low doses and dose rates. Considerations of bias and uncertainty in the recorded dose estimates are important in the conduct of this work. The method developed for use with Hanford workers can be considered an elaboration of the approach used to quantify bias and uncertainty in estimated doses for personnel exposed to radiation as a result of atmospheric testing of nuclear weapons between 1945 and 1962. This approach was first developed by a National Research Council (NRC) committee examining uncertainty in recorded film badge doses during atmospheric tests (NRC 1989). It involved quantifying both bias and uncertainty from three sources (i.e., laboratory, radiological, and environmental) and then combining them to obtain an overall assessment. Sources of uncertainty have been evaluated for each of three specific Hanford dosimetry systems (i.e., the Hanford two-element film dosimeter, 1944-1956; the Hanford multi-element film dosimeter, 1957-1971; and the Hanford multi-element TLD, 1972-1993) used to estimate personnel dose throughout the history of Hanford operations. Laboratory, radiological, and environmental sources of bias and uncertainty have been estimated based on historical documentation and, for angular response, on selected laboratory measurements

  6. Quantifying the efficiency of river regulation

    Directory of Open Access Journals (Sweden)

    R. Rödel

    2005-01-01

    Full Text Available Dam-affected hydrologic time series give rise to uncertainties when they are used for calibrating large-scale hydrologic models or for analysing runoff records. It is therefore necessary to identify and to quantify the impact of impoundments on runoff time series. Two different approaches were employed. The first, classic approach compares the volume of the dams that are located upstream from a station with the annual discharge. The catchment areas of the stations are calculated and then related to geo-referenced dam attributes. The paper introduces a data set of geo-referenced dams linked with 677 gauging stations in Europe. Second, the intensity of the impoundment impact on runoff time series can be quantified more exactly and directly when long-term runoff records are available. Dams cause a change in the variability of flow regimes. This effect can be measured using the model of linear single storage. The dam-caused storage change ΔS can be assessed through the volume of the emptying process between two flow regimes. As an example, the storage change ΔS is calculated for regulated long-term series of the Luleälven in northern Sweden.
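
    A minimal sketch of the linear single-storage relations assumed here (the paper's exact derivation of ΔS may differ in detail): storage is taken proportional to outflow, recession is exponential, and the storage change is the difference between the emptying volumes of the two flow regimes.

```latex
S(t) = k\,Q(t), \qquad Q(t) = Q_0\,e^{-t/k}, \qquad
\Delta S = \int_0^{\infty} \bigl(Q_{\mathrm{reg}}(t) - Q_{\mathrm{nat}}(t)\bigr)\,\mathrm{d}t
         = k_{\mathrm{reg}}\,Q_{0,\mathrm{reg}} - k_{\mathrm{nat}}\,Q_{0,\mathrm{nat}} .
```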

  7. SHAPE CHARACTERIZATION OF CONCRETE AGGREGATE

    Directory of Open Access Journals (Sweden)

    Jing Hu

    2011-05-01

    Full Text Available As a composite material, the performance of concrete materials can be expected to depend on the properties of the interfaces between its two major components, aggregate and cement paste. The microstructure at the interfacial transition zone (ITZ) is assumed to be different from the bulk material. In general, properties of conventional concrete have been found to be favoured by optimum packing density of the aggregate. Particle size is a common denominator in such studies. Size segregation in the ITZ among the binder particles in the fresh state, observed in simulation studies by the concurrent algorithm-based SPACE system, additionally governs density as well as physical bonding capacity inside these shell-like zones around aggregate particles. These characteristics have been demonstrated qualitatively to persist after maturation of the concrete. Such properties of the ITZs have direct impact on composite properties. Although experimental approaches have revealed effects of aggregate grain shape on different features of material structure (among which density), and as a consequence on mechanical properties, shape is still an underrated factor in laboratory studies, probably due to the general feeling that a suitable methodology for shape characterization is not available. A scientific argument hindering progress is the interconnected nature of size and shape. Presently, a practical problem preventing shape effects from being emphasized is the limitation of most computer simulation systems in concrete technology to spherical particles. New developments at Delft University of Technology will make it possible in the near future to generate jammed states, or other high-density fresh particle mixtures of non-spherical particles, which thereupon can be subjected to hydration algorithms. This paper will sketch the outlines of a methodological approach for shape assessment of loose (non-embedded) aggregate grains, and demonstrate its use for two types of aggregate, allowing

  8. Customer Aggregation: An Opportunity for Green Power?

    Energy Technology Data Exchange (ETDEWEB)

    Holt, E.; Bird, L.

    2001-02-26

    We undertook research into the experience of aggregation groups to determine whether customer aggregation offers an opportunity to bring green power choices to more customers. The objectives of this report, therefore, are to (1) identify the different types of aggregation that are occurring today, (2) learn whether aggregation offers an opportunity to advance sales of green power, and (3) share these concepts and approaches with potential aggregators and green power advocates.

  9. Environmentalism and natural aggregate mining

    Science.gov (United States)

    Drew, L.J.; Langer, W.H.; Sachs, J.S.

    2002-01-01

    Sustaining a developed economy and expanding a developing one require the use of large volumes of natural aggregate. Almost all human activity (commercial, recreational, or leisure) is transacted in or on facilities constructed from natural aggregate. In our urban and suburban worlds, we are almost totally dependent on supplies of water collected behind dams and transported through aqueducts made from concrete. Natural aggregate is essential to the facilities that produce energy: hydroelectric dams and coal-fired powerplants. Ironically, the utility created for mankind by the use of natural aggregate is rarely compared favorably with the environmental impacts of mining it. Instead, the empty quarries and pits are seen as large negative environmental consequences. At the root of this disassociation is the philosophy of environmentalism, which flavors our perceptions of the excavation, processing, and distribution of natural aggregate. The two end-member ideas in this philosophy are ecocentrism and anthropocentrism. Ecocentrism takes the position that the natural world is an organism whose arteries are the rivers; their flow must not be altered. The soil is another vital organ and must not be covered with concrete and asphalt. The motto of the ecocentrist is "man must live more lightly on the land." The anthropocentrist wants clean water and air and an uncluttered landscape for human use. Mining is allowed and even encouraged, but dust and noise from quarry and pit operations must be minimized. The large volume of truck traffic is viewed as a real menace to human life and should be regulated and isolated. The environmental problems that the producers of natural aggregate (crushed stone and sand and gravel) face today are mostly difficult social and political concerns associated with the large holes dug in the ground and the large volume of heavy truck traffic associated with quarry and pit operations. These concerns have increased in recent years as society's demand for

  10. Characterization of dispersed and aggregated Al2O3 morphologies for predicting nanofluid thermal conductivities

    International Nuclear Information System (INIS)

    Feng Xuemei; Johnson, Drew W.

    2013-01-01

    Nanofluids are reported to have enhanced thermal conductivities resulting from nanoparticle aggregation. The goal of this study was to explore, through experimental measurements, dispersed and aggregated morphology effects on enhanced thermal conductivities for Al2O3 nanoparticles with a primary size of 54.2 ± 2.0 nm. Aggregation effects were investigated by measuring thermal conductivity of different particle morphologies that occurred under different aggregation conditions. Fractal dimensions and aspect ratios were used to quantify the aggregation morphologies. Fractal dimensions were measured using static light scattering and imaging techniques. Aspect ratios were measured using dynamic light scattering, scanning electron microscopy, and atomic force microscopy. Results showed that the enhancements in thermal conductivity can be predicted with effective medium theory when aspect ratio was considered.

  11. Aggregate size and architecture determine biomass activity for one-stage partial nitritation and anammox

    DEFF Research Database (Denmark)

    Vlaeminck, S.; Terada, Akihiko; Smets, Barth F.

    2010-01-01

    to the inoculation and operation of the reactors. Fluorescent in-situ hybridization (FISH) was applied on aggregate sections to quantify AerAOB and AnAOB, as well as to visualize the aggregate architecture. The activity balance of the aggregates was calculated as the nitrite accumulation rate ratio (NARR), i...... and nitrite sources (NARR, > 1.7). Large A and C aggregates were granules capable of autonomous nitrogen removal (NARR, 0.6 to 1.1) with internal AnAOB zones surrounded by an AerAOB rim. Around 50% of the autotrophic space in these granules consisted of AerAOB- and AnAOB-specific EPS. Large B aggregates were...... thin film-like nitrite sinks (NARR,

  12. Uncertainty quantification for PZT bimorph actuators

    Science.gov (United States)

    Bravo, Nikolas; Smith, Ralph C.; Crews, John

    2018-03-01

    In this paper, we discuss the development of a high-fidelity model for a PZT bimorph actuator used for micro-air vehicles, which include the Robobee. We developed the model for the actuator using the homogenized energy model (HEM) framework, which quantifies the nonlinear, hysteretic, and rate-dependent behavior inherent to PZT in dynamic operating regimes. We then discuss an inverse problem on the model. We include local and global sensitivity analysis of the parameters in the high-fidelity model. Finally, we discuss the results of Bayesian inference and uncertainty quantification on the HEM.

  13. Drivers and uncertainties of forecasted range shifts for warm-water fishes under climate and land cover change

    Science.gov (United States)

    Bouska, Kristen; Whitledge, Gregory W.; Lant, Christopher; Schoof, Justin

    2018-01-01

    Land cover is an important determinant of aquatic habitat and is projected to shift with climate changes, yet climate-driven land cover changes are rarely factored into climate assessments. To quantify impacts and uncertainty of coupled climate and land cover change on warm-water fish species’ distributions, we used an ensemble model approach to project distributions of 14 species. For each species, current range projections were compared to 27 scenario-based projections and aggregated to visualize uncertainty. Multiple regression and model selection techniques were used to identify drivers of range change. Novel, or no-analogue, climates were assessed to evaluate transferability of models. Changes in total probability of occurrence ranged widely across species, from a 63% increase to a 65% decrease. Distributional gains and losses were largely driven by temperature and flow variables and underscore the importance of habitat heterogeneity and connectivity to facilitate adaptation to changing conditions. Finally, novel climate conditions were driven by mean annual maximum temperature, which stresses the importance of understanding the role of temperature on fish physiology and the role of temperature-mitigating management practices.

  14. Role of uncertainty in the basalt waste isolation project

    International Nuclear Information System (INIS)

    Knepp, A.J.; Dahlem, D.H.

    1989-01-01

    The current national Civilian Radioactive Waste Management (CRWM) Program to select a mined geologic repository will likely require the extensive use of probabilistic techniques to quantify uncertainty in predictions of repository isolation performance. Performance of nonhomogeneous geologic, hydrologic, and chemical systems must be predicted over time frames of thousands of years and therefore will likely contain significant uncertainty. A qualitative assessment of our limited ability to interrogate the site in a nondestructive manner, coupled with the early stage of development in the pertinent geosciences, supports this statement. The success of the approach to incorporate what currently appears to be an appreciable element of uncertainty into the predictions of repository performance will play an important role in acquiring a license to operate and in establishing the level of safety associated with the concept of long-term geologic storage of nuclear waste. This paper presents a brief background on the Hanford Site and the repository program, references the sources that establish the legislative requirement to quantify uncertainties in performance predictions, and summarizes the present and future program at the Hanford Site in this area. The decision to quantify significant sources of uncertainties has had a major impact on the direction of the site characterization program here at Hanford. The paper concludes with a number of observations on the impacts of this decision

  15. Uncertainty quantification in Rothermel's Model using an efficient sampling method

    Science.gov (United States)

    Edwin Jimenez; M. Yousuff Hussaini; Scott L. Goodrick

    2007-01-01

    The purpose of the present work is to quantify parametric uncertainty in Rothermel’s wildland fire spread model (implemented in software such as BehavePlus3 and FARSITE), which is undoubtedly among the most widely used fire spread models in the United States. This model consists of a nonlinear system of equations that relates environmental variables (input parameter...

  16. Uncertainty in adaptive capacity

    International Nuclear Information System (INIS)

    Neil Adger, W.; Vincent, K.

    2005-01-01

    The capacity to adapt is a critical element of the process of adaptation: it is the vector of resources that represent the asset base from which adaptation actions can be made. Adaptive capacity can in theory be identified and measured at various scales, from the individual to the nation. The assessment of uncertainty within such measures comes from the contested knowledge domain and theories surrounding the nature of the determinants of adaptive capacity and the human action of adaptation. While generic adaptive capacity at the national level, for example, is often postulated as being dependent on health, governance and political rights, and literacy, and economic well-being, the determinants of these variables at national levels are not widely understood. We outline the nature of this uncertainty for the major elements of adaptive capacity and illustrate these issues with the example of a social vulnerability index for countries in Africa. (authors)

  17. Uncertainties about climate

    International Nuclear Information System (INIS)

    Laval, Katia; Laval, Guy

    2013-01-01

    Like meteorology, climatology is not an exact science: climate change forecasts necessarily include a share of uncertainty. It is precisely this uncertainty which is brandished and exploited by the opponents of the global warming theory to put into question the estimations of its future consequences. Is it legitimate to predict the future using the past climate data (well documented up to 100000 years BP) or the climates of other planets, taking into account the imprecision of the measurements and the intrinsic complexity of the Earth's machinery? How is it possible to model such a huge and interwoven system for which any exact description has become impossible? Why do water and precipitation play such an important role in local and global forecasts, and how should they be treated? This book, written by two physicists, answers these delicate questions simply, in order to give anyone the possibility of forming their own opinion about global warming and the need to act rapidly

  18. Exchange rate uncertainty and deviations from Purchasing Power Parity: evidence from the G7 area

    OpenAIRE

    Arghyrou, Michael; Gregoriou, Andros; Pourpourides, Panayiotis; Cardiff University

    2009-01-01

    Arghyrou, Gregoriou and Pourpourides (2009) argue that exchange rate uncertainty causes deviations from the law of one price. We test this hypothesis on aggregate data from the G7-area. We find that exchange rate uncertainty explains to a significant degree deviations from Purchasing Power Parity.

  19. The uncertainty principle

    International Nuclear Information System (INIS)

    Martens, Hans.

    1991-01-01

    The subject of this thesis is the uncertainty principle (UP). The UP is one of the most characteristic points of difference between quantum and classical mechanics. The starting point of this thesis is the work of Niels Bohr, which is both discussed and analyzed. For the discussion of the different aspects of the UP the formalism of Davies and Ludwig is used instead of the more commonly used formalism of Neumann and Dirac. (author). 214 refs.; 23 figs

  20. Uncertainty in artificial intelligence

    CERN Document Server

    Shachter, RD; Henrion, M; Lemmer, JF

    1990-01-01

    This volume, like its predecessors, reflects the cutting edge of research on the automation of reasoning under uncertainty. A more pragmatic emphasis is evident, for although some papers address fundamental issues, the majority address practical issues. Topics include the relations between alternative formalisms (including possibilistic reasoning), Dempster-Shafer belief functions, non-monotonic reasoning, Bayesian and decision theoretic schemes, and new inference techniques for belief nets. New techniques are applied to important problems in medicine, vision, robotics, and natural language und

  1. Decision Making Under Uncertainty

    Science.gov (United States)

    2010-11-01

    A sound approach to rational decision making requires a decision maker to establish decision objectives, identify alternatives, and evaluate those...often violate the axioms of rationality when making decisions under uncertainty. The systematic description of such observations may lead to the...which leads to “anchoring” on the initial value. The fact that individuals have been shown to deviate from rationality when making decisions

  2. Economic uncertainty principle?

    OpenAIRE

    Alexander Harin

    2006-01-01

    The economic principle of (hidden) uncertainty is presented. New probability formulas are offered. Examples of solutions of three types of fundamental problems are reviewed.

  3. Citizen Candidates Under Uncertainty

    OpenAIRE

    Eguia, Jon X.

    2005-01-01

    In this paper we make two contributions to the growing literature on "citizen-candidate" models of representative democracy. First, we add uncertainty about the total vote count. We show that in a society with a large electorate, where the outcome of the election is uncertain and where winning candidates receive a large reward from holding office, there will be a two-candidate equilibrium and no equilibria with a single candidate. Second, we introduce a new concept of equilibrium, which we te...

  4. Forecasting Uncertainty in Electricity Smart Meter Data by Boosting Additive Quantile Regression

    KAUST Repository

    Taieb, Souhaib Ben; Huser, Raphaël; Hyndman, Rob J.; Genton, Marc G.

    2016-01-01

    volatile and less predictable. There is a need within the energy industry for probabilistic forecasts of household electricity consumption to quantify the uncertainty of future electricity demand in order to undertake appropriate planning of generation
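
    The record above concerns probabilistic forecasts of household demand via boosted quantile regression. The sketch below is a generic stand-in, not the boosted additive quantile regression of Taieb et al.: it fits gradient-boosted quantile regressors in scikit-learn to synthetic data, with hypothetical features (hour, temperature) and a made-up demand model, to show how a set of conditional quantiles expresses forecast uncertainty.

```python
# Illustrative sketch only: gradient-boosted quantile regression for a
# synthetic household load; all features, numbers and the demand model are
# assumptions, not the method or data of the cited study.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
hour = rng.uniform(0, 24, 2000)
temp = rng.normal(18, 6, 2000)
X = np.column_stack([hour, temp])
# Hypothetical demand: daily cycle + heating load + noise that grows on cold
# days, so predictive intervals widen where demand is more volatile.
y = (0.5 + 0.3 * np.sin(2 * np.pi * hour / 24)
     + 0.05 * np.maximum(15 - temp, 0)
     + rng.normal(0.0, 0.05 + 0.02 * np.abs(15 - temp)))

quantiles = [0.1, 0.5, 0.9]
models = {q: GradientBoostingRegressor(loss="quantile", alpha=q,
                                       n_estimators=200, max_depth=3).fit(X, y)
          for q in quantiles}

x_new = np.array([[18.0, 2.0]])  # 6 pm on a cold day
for q in quantiles:
    print(f"q{int(q * 100):02d} forecast: {models[q].predict(x_new)[0]:.3f} kWh")
```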

  5. UNCERTAINTIES IN GALACTIC CHEMICAL EVOLUTION MODELS

    International Nuclear Information System (INIS)

    Côté, Benoit; Ritter, Christian; Herwig, Falk; O’Shea, Brian W.; Pignatari, Marco; Jones, Samuel; Fryer, Chris L.

    2016-01-01

    We use a simple one-zone galactic chemical evolution model to quantify the uncertainties generated by the input parameters in numerical predictions for a galaxy with properties similar to those of the Milky Way. We compiled several studies from the literature to gather the current constraints for our simulations regarding the typical value and uncertainty of the following seven basic parameters: the lower and upper mass limits of the stellar initial mass function (IMF), the slope of the high-mass end of the stellar IMF, the slope of the delay-time distribution function of Type Ia supernovae (SNe Ia), the number of SNe Ia per M ⊙ formed, the total stellar mass formed, and the final mass of gas. We derived a probability distribution function to express the range of likely values for every parameter, which were then included in a Monte Carlo code to run several hundred simulations with randomly selected input parameters. This approach enables us to analyze the predicted chemical evolution of 16 elements in a statistical manner by identifying the most probable solutions, along with their 68% and 95% confidence levels. Our results show that the overall uncertainties are shaped by several input parameters that individually contribute at different metallicities, and thus at different galactic ages. The level of uncertainty then depends on the metallicity and is different from one element to another. Among the seven input parameters considered in this work, the slope of the IMF and the number of SNe Ia are currently the two main sources of uncertainty. The thicknesses of the uncertainty bands bounded by the 68% and 95% confidence levels are generally within 0.3 and 0.6 dex, respectively. When looking at the evolution of individual elements as a function of galactic age instead of metallicity, those same thicknesses range from 0.1 to 0.6 dex for the 68% confidence levels and from 0.3 to 1.0 dex for the 95% confidence levels. The uncertainty in our chemical evolution model
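
    The Monte Carlo procedure described above (draw each input parameter from its probability distribution, run the model, and summarize the spread of predictions with 68% and 95% bands) can be sketched as follows. The "chemical evolution model" here is a toy stand-in, and the parameter distributions are assumptions for illustration only.

```python
# Minimal Monte Carlo sketch of parameter-driven uncertainty bands; the toy
# abundance model and all distributions are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)
n_runs = 500
ages = np.linspace(0.1, 13.0, 50)  # Gyr

def toy_abundance(age, imf_slope, n_snia_per_msun):
    # Stand-in for a one-zone model: enrichment grows with age, modulated by
    # the assumed IMF slope and SN Ia rate (purely illustrative scaling).
    return -1.5 + 0.8 * np.log10(age) + 0.3 * (imf_slope + 2.35) + 50.0 * n_snia_per_msun

samples = []
for _ in range(n_runs):
    imf_slope = rng.normal(-2.35, 0.2)    # assumed PDF for the IMF slope
    n_snia = rng.normal(2e-3, 5e-4)       # assumed PDF, SNe Ia per solar mass
    samples.append(toy_abundance(ages, imf_slope, n_snia))
samples = np.array(samples)

lo68, hi68 = np.percentile(samples, [16, 84], axis=0)
lo95, hi95 = np.percentile(samples, [2.5, 97.5], axis=0)
print(f"68% band thickness at final age: {hi68[-1] - lo68[-1]:.2f} dex")
print(f"95% band thickness at final age: {hi95[-1] - lo95[-1]:.2f} dex")
```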

  6. Calibration Under Uncertainty.

    Energy Technology Data Exchange (ETDEWEB)

    Swiler, Laura Painton; Trucano, Timothy Guy

    2005-03-01

    This report is a white paper summarizing the literature and different approaches to the problem of calibrating computer model parameters in the face of model uncertainty. Model calibration is often formulated as finding the parameters that minimize the squared difference between the model-computed data (the predicted data) and the actual experimental data. This approach does not allow for explicit treatment of uncertainty or error in the model itself: the model is considered the "true" deterministic representation of reality. While this approach does have utility, it is far from an accurate mathematical treatment of the true model calibration problem in which both the computed data and experimental data have error bars. This year, we examined methods to perform calibration accounting for the error in both the computer model and the data, as well as improving our understanding of its meaning for model predictability. We call this approach Calibration under Uncertainty (CUU). This talk presents our current thinking on CUU. We outline some current approaches in the literature, and discuss the Bayesian approach to CUU in detail.
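
    To make the contrast concrete, the sketch below compares a plain least-squares calibration with a simple grid-based Bayesian reweighting in which the misfit variance includes both an observation error and an assumed model-discrepancy term. It is a minimal illustration under assumed Gaussian errors, not the CUU formulation of the report; the model, data and error magnitudes are invented.

```python
# Illustrative sketch: deterministic calibration vs. calibration that carries
# uncertainty from both data error and an assumed model-discrepancy term.
import numpy as np

def model(theta, x):
    return theta * x**2                       # hypothetical computer model

x_obs = np.array([1.0, 2.0, 3.0, 4.0])
y_obs = np.array([2.2, 8.5, 19.0, 35.5])      # synthetic "experimental" data
sigma_obs, sigma_model = 0.5, 1.0             # assumed error bars

thetas = np.linspace(1.5, 2.5, 1001)
sse = np.array([np.sum((y_obs - model(t, x_obs))**2) for t in thetas])

# 1) Deterministic calibration: minimize the squared misfit.
theta_ls = thetas[np.argmin(sse)]

# 2) Calibration under uncertainty: Gaussian likelihood with total variance.
var_tot = sigma_obs**2 + sigma_model**2
log_like = -0.5 * sse / var_tot
post = np.exp(log_like - log_like.max())
post /= np.trapz(post, thetas)
mean = np.trapz(thetas * post, thetas)
sd = np.sqrt(np.trapz((thetas - mean)**2 * post, thetas))
print(f"least-squares theta = {theta_ls:.3f}")
print(f"posterior theta     = {mean:.3f} +/- {sd:.3f}")
```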

  7. Participation under Uncertainty

    International Nuclear Information System (INIS)

    Boudourides, Moses A.

    2003-01-01

    This essay reviews a number of theoretical perspectives about uncertainty and participation in the present-day knowledge-based society. After discussing the on-going reconfigurations of science, technology and society, we examine how appropriate for policy studies are various theories of social complexity. Post-normal science is such an example of a complexity-motivated approach, which justifies civic participation as a policy response to an increasing uncertainty. But there are different categories and models of uncertainties implying a variety of configurations of policy processes. A particular role in all of them is played by expertise whose democratization is an often-claimed imperative nowadays. Moreover, we discuss how different participatory arrangements are shaped into instruments of policy-making and framing regulatory processes. As participation necessitates and triggers deliberation, we proceed to examine the role and the barriers of deliberativeness. Finally, we conclude by referring to some critical views about the ultimate assumptions of recent European policy frameworks and the conceptions of civic participation and politicization that they invoke

  8. Uncertainty analysis techniques

    International Nuclear Information System (INIS)

    Marivoet, J.; Saltelli, A.; Cadelli, N.

    1987-01-01

    The origin of the uncertainty affecting Performance Assessments, as well as its propagation to dose and risk results, is discussed. The analysis is focused essentially on the uncertainties introduced by the input parameters, the values of which may range over some orders of magnitude and may be given as probability distribution functions. The paper briefly reviews the existing sampling techniques used for Monte Carlo simulations and the methods for characterizing the output curves, determining their convergence and confidence limits. Annual doses, expectation values of the doses and risks are computed for a particular case of a possible repository in clay, in order to illustrate the significance of such output characteristics as the mean, the logarithmic mean and the median as well as their ratios. The report concludes that, provisionally, due to its better robustness, an estimate such as the 90th percentile may be substituted for the arithmetic mean when comparing the estimated doses with acceptance criteria. In any case, the results obtained through Uncertainty Analyses must be interpreted with caution as long as input data distribution functions are not derived from experiments reasonably reproducing the situation in a well characterized repository and site
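
    The output summaries discussed above (arithmetic mean, logarithmic mean, median, and a high percentile of a skewed Monte Carlo output) can be illustrated with a few lines of code. The dose samples below are synthetic lognormal draws, used only to show why the 90th percentile is a more robust summary than the mean for heavily skewed outputs.

```python
# Sketch: summary statistics of a skewed Monte Carlo output; numbers are synthetic.
import numpy as np

rng = np.random.default_rng(1)
doses = rng.lognormal(mean=np.log(1e-6), sigma=1.5, size=10_000)  # Sv/yr, synthetic

arithmetic_mean = doses.mean()
logarithmic_mean = np.exp(np.log(doses).mean())   # i.e. the geometric mean
median = np.median(doses)
p90 = np.percentile(doses, 90)

print(f"mean={arithmetic_mean:.2e}  log-mean={logarithmic_mean:.2e}  "
      f"median={median:.2e}  90th percentile={p90:.2e}")
print(f"mean / median ratio = {arithmetic_mean / median:.1f}")
```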

  9. Deterministic uncertainty analysis

    International Nuclear Information System (INIS)

    Worley, B.A.

    1987-12-01

    This paper presents a deterministic uncertainty analysis (DUA) method for calculating uncertainties that has the potential to significantly reduce the number of computer runs compared to conventional statistical analysis. The method is based upon the availability of derivative and sensitivity data such as that calculated using the well known direct or adjoint sensitivity analysis techniques. Formation of response surfaces using derivative data and the propagation of input probability distributions are discussed relative to their role in the DUA method. A sample problem that models the flow of water through a borehole is used as a basis to compare the cumulative distribution function of the flow rate as calculated by the standard statistical methods and the DUA method. Propagation of uncertainties by the DUA method is compared for ten cases in which the number of reference model runs was varied from one to ten. The DUA method gives a more accurate representation of the true cumulative distribution of the flow rate based upon as few as two model executions compared to fifty model executions using a statistical approach. 16 refs., 4 figs., 5 tabs
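
    The core idea, propagating input uncertainty through derivative (sensitivity) information with only a handful of model runs, can be sketched as a first-order (delta-method) calculation and checked against brute-force Monte Carlo. The model below is a simple stand-in, not the borehole problem of the report, and the derivatives come from finite differences where an adjoint code would normally supply them.

```python
# Sketch: first-order ("deterministic") uncertainty propagation from
# sensitivities vs. Monte Carlo; model and input statistics are assumptions.
import numpy as np

def model(p):
    k, h = p                         # hypothetical permeability and head difference
    return 2.0 * np.pi * k * h / np.log(100.0)

p0 = np.array([5e-5, 1000.0])        # nominal inputs
sigma = np.array([1e-5, 50.0])       # input standard deviations (assumed independent)

# Sensitivities df/dp at the nominal point (central finite differences,
# i.e. only a few model runs are needed).
grad = np.empty_like(p0)
for i in range(len(p0)):
    dp = np.zeros_like(p0)
    dp[i] = 1e-6 * p0[i]
    grad[i] = (model(p0 + dp) - model(p0 - dp)) / (2 * dp[i])

var_first_order = np.sum((grad * sigma) ** 2)   # delta method
print(f"DUA std: {np.sqrt(var_first_order):.3e}")

# Reference answer: Monte Carlo with many model runs.
rng = np.random.default_rng(0)
samples = rng.normal(p0, sigma, size=(20_000, 2))
print(f"MC  std: {np.array([model(s) for s in samples]).std():.3e}")
```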

  10. Aggregation and fusion of modified low density lipoprotein.

    Science.gov (United States)

    Pentikäinen, M O; Lehtonen, E M; Kovanen, P T

    1996-12-01

    In atherogenesis, low density lipoprotein (LDL, diameter 22 nm) accumulates in the extracellular space of the arterial intima in the form of aggregates of lipid droplets (droplet diameter up to 400 nm). Here we studied the effects of various established in vitro LDL modifications on LDL aggregation and fusion. LDL was subjected to vortexing, oxidation by copper ions, proteolysis by alpha-chymotrypsin, lipolysis by sphingomyelinase, and nonenzymatic glycosylation, and was induced to form adducts with malondialdehyde or complexes with anti-apoB-100 antibodies. To assess the amount of enlarged LDL-derived structures formed (due to aggregation or fusion), we measured the turbidity of solutions containing modified LDL, and quantified the proportion of modified LDL that 1) sedimented at low-speed centrifugation (14,000 g), 2) floated at an increased rate at high-speed centrifugation (rate zonal flotation at 285,000 gmax), 3) were excluded in size-exclusion column chromatography (exclusion limit 40 MDa), or 4) failed to enter into 0.5% Fast Lane agarose gel during electrophoresis. To detect whether particle fusion had contributed to the formation of the enlarged LDL-derived structures, particle morphology was examined using negative staining and thin-section transmission electron microscopy. We found that 1) aggregation was induced by the formation of LDL-antibody complexes, malondialdehyde treatment, and glycosylation of LDL; 2) fusion of LDL was induced by proteolysis of LDL by alpha-chymotrypsin; and 3) aggregation and fusion of LDL were induced by vortexing, oxidation by copper ions, and lipolysis by sphingomyelinase of LDL. The various modifications of LDL differed in their ability to induce aggregation and fusion.

  11. Soil Aggregate Stability and Grassland Productivity Associations in a Northern Mixed-Grass Prairie.

    Directory of Open Access Journals (Sweden)

    Kurt O Reinhart

    Soil aggregate stability data are often predicted to be positively associated with measures of plant productivity, rangeland health, and ecosystem functioning. Here we revisit the hypothesis that soil aggregate stability is positively associated with plant productivity. We measured local (plot-to-plot) variation in grassland community composition, plant (aboveground) biomass, root biomass, % water-stable soil aggregates, and topography. After accounting for spatial autocorrelation, we observed a negative association between % water-stable soil aggregates (0.25-1 and 1-2 mm size classes of macroaggregates) and dominant graminoid biomass, and negative associations between the % water-stable aggregates and the root biomass of a dominant sedge (Carex filifolia). However, variation in total root biomass (0-10 or 0-30 cm depths) was either negatively or not appreciably associated with soil aggregate stabilities. Overall, regression slope coefficients were consistently negative, thereby indicating the general absence of a positive association between measures of plant productivity and soil aggregate stability for the study area. The predicted positive association between factors was likely confounded by variation in plant species composition. Specifically, sampling spanned a local gradient in plant community composition which was likely driven by niche partitioning along a subtle gradient in elevation. Our results suggest an apparent trade-off between some measures of plant biomass production and soil aggregate stability, both known to affect the land's capacity to resist erosion. These findings further highlight the uncertainty of plant biomass-soil stability associations.

  12. Shape characterization of concrete aggregate

    NARCIS (Netherlands)

    Stroeven, P.; Hu, J.

    2006-01-01

    As a composite material, the performance of concrete materials can be expected to depend on the properties of the interfaces between its two major components, aggregate and cement paste. The microstructure at the interfacial transition zone (ITZ) is assumed to be different from the bulk material. In

  13. Aggregate modeling of manufacturing systems

    NARCIS (Netherlands)

    Lefeber, A.A.J.; Armbruster, H.D.; Kempf, K.G.; Keskinocak, P.; Uzsoy, R.

    2011-01-01

    In this chapter we will present three approaches to model manufacturing systems in an aggregate way leading to fast and effective (i.e., scalable) simulations that allow the development of simulation tools for rapid exploration of different production scenarios in a factory as well as in a whole

  14. The Aggregate Dutch Historical Censuses

    NARCIS (Netherlands)

    Ashkpour, Ashkan; Meroño-Peñuela, Albert; Mandemakers, Kees

    2015-01-01

    Historical censuses have an enormous potential for research. In order to fully use this potential, harmonization of these censuses is essential. During the last decades, enormous efforts have been undertaken in digitizing the published aggregated outcomes of the Dutch historical censuses

  15. The Aggregate Dutch Historical Censuses

    NARCIS (Netherlands)

    A. Ashkpour (Ashkan); A. Meroño-Peñuela (Albert); C.A. Mandemakers (Kees)

    2015-01-01

    Historical censuses have an enormous potential for research. In order to fully use this potential, harmonization of these censuses is essential. During the last decades, enormous efforts have been undertaken in digitizing the published aggregated outcomes of the Dutch historical censuses

  16. Aggregate modeling of manufacturing systems

    NARCIS (Netherlands)

    Lefeber, A.A.J.; Armbruster, H.D.

    2007-01-01

    In this report we will present three approaches to model manufacturing systems in an aggregate way leading to fast and effective (i.e., scalable) simulations that allow the development of simulation tools for rapid exploration of different production scenarios in a factory as well as in a whole

  17. Diversity, intent, and aggregated search

    NARCIS (Netherlands)

    de Rijke, M.

    2014-01-01

    Diversity, intent and aggregated search are three core retrieval concepts that receive significant attention. In search result diversification one typically considers the relevance of a document in light of other retrieved documents. The goal is to identify the probable "aspects" of an ambiguous

  18. Quantifiers in Russian Sign Language

    NARCIS (Netherlands)

    Kimmelman, V.; Paperno, D.; Keenan, E.L.

    2017-01-01

    After presenting some basic genetic, historical and typological information about Russian Sign Language, this chapter outlines the quantification patterns it expresses. It illustrates various semantic types of quantifiers, such as generalized existential, generalized universal, proportional,

  19. Quantified Self in de huisartsenpraktijk

    NARCIS (Netherlands)

    de Groot, Martijn; Timmers, Bart; Kooiman, Thea; van Ittersum, Miriam

    2015-01-01

    Quantified Self stands for the self-measuring person. The number of people entering the care process with self-generated health data is set to grow in the coming years. Various kinds of activity trackers and health applications for the smartphone make it relatively easy to ... personal

  20. Uncertainty Analysis with Considering Resonance Self-shielding Effect

    International Nuclear Information System (INIS)

    Han, Tae Young

    2016-01-01

    If infinitely diluted multi-group cross sections were used for the sensitivity, the covariance data from the evaluated nuclear data library (ENDL) were directly applied. However, when a self-shielded multi-group cross section is used, the covariance data should be corrected for the self-shielding effect. Usually, the implicit uncertainty can be defined as the uncertainty change caused by the resonance self-shielding effect as described above. MUSAD (Modules of Uncertainty and Sensitivity Analysis for DeCART) has been developed for multiplication factor and cross section uncertainty based on the generalized perturbation theory; however, it can only quantify the explicit uncertainty from the self-shielded multi-group cross sections without considering the implicit effect. Thus, this paper addresses the implementation of an implicit uncertainty analysis module into the code, and numerical results for its verification are provided. The implicit uncertainty analysis module has been implemented into MUSAD based on the infinitely-diluted cross-section-based consistent method. The verification calculation was performed on MHTGR 350 Ex.I-1a, and the differences from the McCARD results decrease from 40% to 1% in the CZP case and 3% in the HFP case. From this study, it is expected that the MUSAD code can reasonably produce the complete uncertainty for VHTR or LWR cases where the resonance self-shielding effect must be significantly considered

  1. Uncertainty Analysis with Considering Resonance Self-shielding Effect

    Energy Technology Data Exchange (ETDEWEB)

    Han, Tae Young [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2016-10-15

    If infinitely diluted multi-group cross sections were used for the sensitivity, the covariance data from the evaluated nuclear data library (ENDL) were directly applied. However, when a self-shielded multi-group cross section is used, the covariance data should be corrected for the self-shielding effect. Usually, the implicit uncertainty can be defined as the uncertainty change caused by the resonance self-shielding effect as described above. MUSAD (Modules of Uncertainty and Sensitivity Analysis for DeCART) has been developed for multiplication factor and cross section uncertainty based on the generalized perturbation theory; however, it can only quantify the explicit uncertainty from the self-shielded multi-group cross sections without considering the implicit effect. Thus, this paper addresses the implementation of an implicit uncertainty analysis module into the code, and numerical results for its verification are provided. The implicit uncertainty analysis module has been implemented into MUSAD based on the infinitely-diluted cross-section-based consistent method. The verification calculation was performed on MHTGR 350 Ex.I-1a, and the differences from the McCARD results decrease from 40% to 1% in the CZP case and 3% in the HFP case. From this study, it is expected that the MUSAD code can reasonably produce the complete uncertainty for VHTR or LWR cases where the resonance self-shielding effect must be significantly considered.

  2. Studies on recycled aggregates-based concrete.

    Science.gov (United States)

    Rakshvir, Major; Barai, Sudhirkumar V

    2006-06-01

    Reduced extraction of raw materials, reduced transportation costs, improved profits, reduced environmental impact and fast-depleting reserves of conventional natural aggregates have necessitated the use of recycling, in order to conserve conventional natural aggregate. In this study various physical and mechanical properties of recycled concrete aggregates were examined. Recycled concrete aggregates are different from natural aggregates, and concrete made from them has specific properties. The percentage of recycled concrete aggregates was varied, and it was observed that properties such as compressive strength showed a decrease of up to 10% as the percentage of recycled concrete aggregates increased. Water absorption of recycled aggregates was found to be greater than that of natural aggregates, and this needs to be compensated for during mix design.

  3. How incorporating more data reduces uncertainty in recovery predictions

    Energy Technology Data Exchange (ETDEWEB)

    Campozana, F.P.; Lake, L.W.; Sepehrnoori, K. [Univ. of Texas, Austin, TX (United States)

    1997-08-01

    From the discovery to the abandonment of a petroleum reservoir, there are many decisions that involve economic risks because of uncertainty in the production forecast. This uncertainty may be quantified by performing stochastic reservoir modeling (SRM); however, it is not practical to apply SRM every time the model is updated to account for new data. This paper suggests a novel procedure to estimate reservoir uncertainty (and its reduction) as a function of the amount and type of data used in the reservoir modeling. Two types of data are analyzed: conditioning data and well-test data. However, the same procedure can be applied to any other data type. Three performance parameters are suggested to quantify uncertainty. SRM is performed for the following typical stages: discovery, primary production, secondary production, and infill drilling. From those results, a set of curves is generated that can be used to estimate (1) the uncertainty for any other situation and (2) the uncertainty reduction caused by the introduction of new wells (with and without well-test data) into the description.

  4. Uncertainty estimation and risk prediction in air quality

    International Nuclear Information System (INIS)

    Garaud, Damien

    2011-01-01

    This work is about uncertainty estimation and risk prediction in air quality. Firstly, we build a multi-model ensemble of air quality simulations which can take into account all uncertainty sources related to air quality modeling. Ensembles of photochemical simulations at continental and regional scales are automatically generated. Then, these ensembles are calibrated with a combinatorial optimization method. It selects a sub-ensemble which is representative of uncertainty or shows good resolution and reliability for probabilistic forecasting. This work shows that it is possible to estimate and forecast uncertainty fields related to ozone and nitrogen dioxide concentrations or to improve the reliability of threshold exceedance predictions. The approach is compared with Monte Carlo simulations, calibrated or not. The Monte Carlo approach appears to be less representative of the uncertainties than the multi-model approach. Finally, we quantify the observational error, the representativeness error and the modeling errors. The work is applied to the impact of thermal power plants, in order to quantify the uncertainty on the impact estimates. (author) [fr
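
    The sub-ensemble selection idea can be illustrated with a very small brute-force stand-in: among all candidate subsets of a fixed size, keep the one whose rank histogram over past observations is flattest, i.e. the most reliable for probabilistic forecasting. This is only a sketch of the concept with synthetic data, not the combinatorial optimization algorithm of the thesis.

```python
# Sketch: pick a sub-ensemble with the flattest rank histogram; synthetic data.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
n_models, n_times = 8, 300
truth = rng.normal(40.0, 10.0, n_times)                      # e.g. ozone, ug/m3
bias = rng.normal(0.0, 6.0, n_models)                        # some members are biased
members = truth + bias[:, None] + rng.normal(0, 8, (n_models, n_times))

def rank_histogram_misfit(subset):
    sub = members[list(subset)]                              # (k, n_times)
    ranks = (sub < truth).sum(axis=0)                        # rank of obs in ensemble
    hist = np.bincount(ranks, minlength=len(subset) + 1) / n_times
    return np.sum((hist - 1.0 / (len(subset) + 1)) ** 2)     # 0 = perfectly flat

best = min(combinations(range(n_models), 4), key=rank_histogram_misfit)
print("selected sub-ensemble members:", best)
```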

  5. Uncertainty in Simulating Wheat Yields Under Climate Change

    Science.gov (United States)

    Asseng, S.; Ewert, F.; Rosenzweig, Cynthia; Jones, J. W.; Hatfield, J. W.; Ruane, A. C.; Boote, K. J.; Thornburn, P. J.; Rotter, R. P.; Cammarano, D.; hide

    2013-01-01

    Projections of climate change impacts on crop yields are inherently uncertain [1]. Uncertainty is often quantified when projecting future greenhouse gas emissions and their influence on climate [2]. However, multi-model uncertainty analysis of crop responses to climate change is rare because systematic and objective comparisons among process-based crop simulation models [1,3] are difficult [4]. Here we present the largest standardized model intercomparison for climate change impacts so far. We found that individual crop models are able to simulate measured wheat grain yields accurately under a range of environments, particularly if the input information is sufficient. However, simulated climate change impacts vary across models owing to differences in model structures and parameter values. A greater proportion of the uncertainty in climate change impact projections was due to variations among crop models than to variations among downscaled general circulation models. Uncertainties in simulated impacts increased with CO2 concentrations and associated warming. These impact uncertainties can be reduced by improving temperature and CO2 relationships in models and better quantified through use of multi-model ensembles. Less uncertainty in describing how climate change may affect agricultural productivity will aid adaptation strategy development and policymaking.
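
    The central finding, that more projection variance comes from crop models than from climate models, rests on partitioning the spread of a crop-model x climate-model matrix of simulated impacts. The sketch below does a simple two-way decomposition on a synthetic matrix; it is not the study's analysis, and the effect sizes are invented purely to show the bookkeeping.

```python
# Sketch: partition projection variance between crop models and GCMs; data synthetic.
import numpy as np

rng = np.random.default_rng(7)
n_crop, n_gcm = 25, 15
crop_effect = rng.normal(0.0, 8.0, n_crop)      # %, assumed spread across crop models
gcm_effect = rng.normal(0.0, 4.0, n_gcm)        # %, assumed spread across GCMs
impact = (-10.0 + crop_effect[:, None] + gcm_effect[None, :]
          + rng.normal(0.0, 2.0, (n_crop, n_gcm)))   # yield change matrix

grand = impact.mean()
var_crop = np.var(impact.mean(axis=1))          # variance of crop-model means
var_gcm = np.var(impact.mean(axis=0))           # variance of GCM means
var_resid = np.var(impact - impact.mean(axis=1, keepdims=True)
                   - impact.mean(axis=0, keepdims=True) + grand)

total = var_crop + var_gcm + var_resid
for name, v in [("crop models", var_crop), ("GCMs", var_gcm), ("residual", var_resid)]:
    print(f"{name:12s}: {100 * v / total:5.1f}% of variance")
```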

  6. Managing structural uncertainty in health economic decision models: a discrepancy approach

    OpenAIRE

    Strong, M.; Oakley, J.; Chilcott, J.

    2012-01-01

    Healthcare resource allocation decisions are commonly informed by computer model predictions of population mean costs and health effects. It is common to quantify the uncertainty in the prediction due to uncertain model inputs, but methods for quantifying uncertainty due to inadequacies in model structure are less well developed. We introduce an example of a model that aims to predict the costs and health effects of a physical activity promoting intervention. Our goal is to develop a framewor...

  7. Uncertainty Evaluation of Best Estimate Calculation Results

    International Nuclear Information System (INIS)

    Glaeser, H.

    2006-01-01

    Efforts are underway in Germany to perform analysis using best estimate computer codes and to include uncertainty evaluation in licensing. The German Reactor Safety Commission (RSK) issued a recommendation to perform uncertainty analysis in loss of coolant accident safety analyses (LOCA), recently. A more general requirement is included in a draft revision of the German Nuclear Regulation which is an activity of the German Ministry of Environment and Reactor Safety (BMU). According to the recommendation of the German RSK to perform safety analyses for LOCA in licensing the following deterministic requirements have still to be applied: Most unfavourable single failure, Unavailability due to preventive maintenance, Break location, Break size and break type, Double ended break, 100 percent through 200 percent, Large, medium and small break, Loss of off-site power, Core power (at accident initiation the most unfavourable conditions and values have to be assumed which may occur under normal operation taking into account the set-points of integral power and power density control. Measurement and calibration errors can be considered statistically), Time of fuel cycle. Analysis using best estimate codes with evaluation of uncertainties is the only way to quantify conservatisms with regard to code models and uncertainties of plant, fuel parameters and decay heat. This is especially the case for approaching licensing limits, e.g. due to power up-rates, higher burn-up and higher enrichment. Broader use of best estimate analysis is therefore envisaged in the future. Since some deterministic unfavourable assumptions regarding availability of NPP systems are still used, some conservatism in best-estimate analyses remains. Methods of uncertainty analyses have been developed and applied by the vendor Framatome ANP as well as by GRS in Germany. The GRS development was sponsored by the German Ministry of Economy and Labour (BMWA). (author)

  8. Methodologies of Uncertainty Propagation Calculation

    International Nuclear Information System (INIS)

    Chojnacki, Eric

    2002-01-01

    After recalling the theoretical principle and the practical difficulties of the methodologies of uncertainty propagation calculation, the author discussed how to propagate input uncertainties. He said there were two kinds of input uncertainty: variability (uncertainty due to heterogeneity) and lack of knowledge (uncertainty due to ignorance). It was therefore necessary to use two different propagation methods. He demonstrated this in a simple example, which he generalised, treating the variability uncertainty with probability theory and the lack-of-knowledge uncertainty with fuzzy theory. He cautioned, however, against the systematic use of probability theory, which may lead to unjustifiably and illegitimately precise answers. Mr Chojnacki's conclusions were that the importance of distinguishing variability and lack of knowledge increased as the problem became more and more complex in terms of the number of parameters or time steps, and that it was necessary to develop uncertainty propagation methodologies combining probability theory and fuzzy theory
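
    The distinction can be made concrete with a toy example: a variable input with a defensible probability distribution is sampled by Monte Carlo, while a poorly known input is only bounded, and the bounds are swept to give an envelope of probability statements rather than a single, falsely precise number. The interval sweep below is a crude stand-in for a fuzzy or possibilistic treatment, and the load/strength model is hypothetical.

```python
# Sketch: Monte Carlo for variability, interval sweep for lack of knowledge.
import numpy as np

def margin(variable_load, uncertain_strength):
    return uncertain_strength - variable_load     # failure if negative

rng = np.random.default_rng(11)
loads = rng.normal(100.0, 15.0, 50_000)           # variability: assumed known PDF

# Lack of knowledge: only bounds on the strength are defensible. Because the
# failure probability is monotone in the strength, evaluating at the two
# bounds gives the envelope of possible answers.
strength_bounds = (115.0, 140.0)
p_fail = [np.mean(margin(loads, s) < 0.0) for s in strength_bounds]
print(f"failure probability lies in [{min(p_fail):.3f}, {max(p_fail):.3f}]")
```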

  9. LOFT uncertainty-analysis methodology

    International Nuclear Information System (INIS)

    Lassahn, G.D.

    1983-01-01

    The methodology used for uncertainty analyses of measurements in the Loss-of-Fluid Test (LOFT) nuclear-reactor-safety research program is described and compared with other methodologies established for performing uncertainty analyses

  10. LOFT uncertainty-analysis methodology

    International Nuclear Information System (INIS)

    Lassahn, G.D.

    1983-01-01

    The methodology used for uncertainty analyses of measurements in the Loss-of-Fluid Test (LOFT) nuclear reactor safety research program is described and compared with other methodologies established for performing uncertainty analyses

  11. Uncertainty of a hydrological climate change impact assessment - Is it really all about climate uncertainty?

    Science.gov (United States)

    Honti, Mark; Reichert, Peter; Scheidegger, Andreas; Stamm, Christian

    2013-04-01

    climate change impact assessment and estimated the relative importance of the uncertainty sources. The study was performed on 2 small catchments in the Swiss Plateau with a lumped conceptual rainfall runoff model. In the climatic part we applied the standard ensemble approach to quantify uncertainty, but in hydrology we used a formal Bayesian uncertainty assessment method with 2 different likelihood functions. One was a time-series error model that was able to deal with the complicated statistical properties of hydrological model residuals. The second was a likelihood function for the flow quantiles directly. Due to the better data coverage and smaller hydrological complexity in one of our test catchments, we had better performance from the hydrological model and thus could observe that the relative importance of different uncertainty sources varied between sites, boundary conditions and flow indicators. The uncertainty of future climate was important, but not dominant. The deficiencies of the hydrological model were on the same scale, especially for the sites and flow components where model performance for the past observations was further from optimal (Nash-Sutcliffe index = 0.5 - 0.7). The overall uncertainty of predictions was well beyond the expected change signal even for the best performing site and flow indicator.
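
    As an illustration of what a "time-series error model" for hydrological residuals can look like, the sketch below writes down the Gaussian AR(1) log-likelihood that is often used to account for autocorrelated model errors. It is a generic example under assumed parameter values and synthetic residuals, not the specific likelihood functions of the study above.

```python
# Sketch: AR(1) Gaussian log-likelihood for autocorrelated model residuals.
import numpy as np

def ar1_log_likelihood(residuals, sigma, phi):
    """Log-likelihood of r_t = phi * r_(t-1) + e_t, with e_t ~ N(0, sigma^2)."""
    r = np.asarray(residuals, dtype=float)
    innov = r[1:] - phi * r[:-1]                     # one-step innovations
    var_innov = sigma ** 2
    var_marg = sigma ** 2 / (1.0 - phi ** 2)         # stationary variance of r[0]
    ll = -0.5 * (np.log(2 * np.pi * var_marg) + r[0] ** 2 / var_marg)
    ll += -0.5 * np.sum(np.log(2 * np.pi * var_innov) + innov ** 2 / var_innov)
    return ll

rng = np.random.default_rng(5)
obs_minus_sim = rng.normal(0.0, 0.3, 365)            # stand-in daily residuals (m3/s)
print(ar1_log_likelihood(obs_minus_sim, sigma=0.3, phi=0.6))
```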

  12. The role of spatial aggregation in forensic entomology.

    Science.gov (United States)

    Fiene, Justin G; Sword, Gregory A; Van Laerhoven, Sherah L; Tarone, Aaron M

    2014-01-01

    A central concept in forensic entomology is that arthropod succession on carrion is predictable and can be used to estimate the postmortem interval (PMI) of human remains. However, most studies have reported significant variation in successional patterns, particularly among replicate carcasses, which has complicated estimates of PMIs. Several forensic entomology researchers have proposed that further integration of ecological and evolutionary theory in forensic entomology could help advance the application of succession data for producing PMI estimates. The purpose of this essay is to draw attention to the role of spatial aggregation of arthropods among carrion resources as a potentially important aspect to consider for understanding and predicting the assembly of arthropods on carrion over time. We review ecological literature related to spatial aggregation of arthropods among patchy and ephemeral resources, such as carrion, and when possible integrate these results with published forensic literature. We show that spatial aggregation of arthropods across resources is commonly reported and has been used to provide fundamental insight for understanding regional and local patterns of arthropod diversity and coexistence. Moreover, two suggestions are made for conducting future research. First, because intraspecific aggregation affects species frequency distributions across carcasses, data from replicate carcasses should not be combined, but rather statistically quantified to generate occurrence probabilities. Second, we identify a need for studies that tease apart the degree to which community assembly on carrion is spatially versus temporally structured, which will aid in developing mechanistic hypotheses on the ecological factors shaping community assembly on carcasses.

  13. A Bayesian Framework of Uncertainties Integration in 3D Geological Model

    Science.gov (United States)

    Liang, D.; Liu, X.

    2017-12-01

    3D geological models can describe complicated geological phenomena in an intuitive way, but their application may be limited by uncertain factors. Great progress has been made over the years, yet many studies decompose the uncertainties of a geological model and analyze them item by item from each source, ignoring the comprehensive impact of multi-source uncertainties. To evaluate this synthetical uncertainty, we choose probability distributions to quantify uncertainty and propose a Bayesian framework for uncertainty integration. With this framework, we integrate data errors, spatial randomness, and cognitive information into a posterior distribution to evaluate the synthetical uncertainty of the geological model. Uncertainties propagate and accumulate in the modeling process, and the gradual integration of multi-source uncertainty is a kind of simulation of this uncertainty propagation. Bayesian inference accomplishes uncertainty updating in the modeling process. The maximum entropy principle works well for estimating the prior probability distribution, ensuring that the prior is subject to the constraints supplied by the given information with minimum prejudice. In the end, we obtain a posterior distribution that evaluates the synthetical uncertainty of the geological model. This posterior distribution represents the combined impact of all the uncertain factors on the spatial structure of the geological model. The framework provides a solution for evaluating the synthetical impact of multi-source uncertainties on a geological model and a way to study the uncertainty propagation mechanism in geological modeling.
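
    For a single geological quantity, the Bayesian updating idea reads as follows: a maximum-entropy prior given only bounds is uniform, and noisy observations then update it to a posterior whose spread expresses the integrated uncertainty. The sketch below does this on a grid for a hypothetical horizon depth; it is a one-dimensional illustration of the principle, not the paper's full 3D framework, and all numbers are invented.

```python
# Sketch: uniform (max-entropy) prior + Gaussian data errors -> grid posterior.
import numpy as np

depth_grid = np.linspace(200.0, 300.0, 2001)          # m, assumed admissible range
prior = np.full_like(depth_grid, 1.0 / (depth_grid[-1] - depth_grid[0]))

observations = [248.0, 252.5, 250.5]                  # m, hypothetical borehole picks
sigma_obs = 3.0                                       # m, assumed data error

log_post = np.log(prior)
for z in observations:
    log_post += -0.5 * ((z - depth_grid) / sigma_obs) ** 2   # Gaussian likelihood
post = np.exp(log_post - log_post.max())
post /= np.trapz(post, depth_grid)                    # normalize on the grid

mean = np.trapz(depth_grid * post, depth_grid)
sd = np.sqrt(np.trapz((depth_grid - mean) ** 2 * post, depth_grid))
print(f"posterior depth: {mean:.1f} m +/- {sd:.1f} m")
```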

  14. Uncertainty and inference in the world of paleoecological data

    Science.gov (United States)

    McLachlan, J. S.; Dawson, A.; Dietze, M.; Finley, M.; Hooten, M.; Itter, M.; Jackson, S. T.; Marlon, J. R.; Raiho, A.; Tipton, J.; Williams, J.

    2017-12-01

    Proxy data in paleoecology and paleoclimatology share a common set of biases and uncertainties: spatiotemporal error associated with the taphonomic processes of deposition, preservation, and dating; calibration error between proxy data and the ecosystem states of interest; and error in the interpolation of calibrated estimates across space and time. Researchers often account for this daunting suite of challenges by applying qualitative expert judgment: inferring the past states of ecosystems and assessing the level of uncertainty in those states subjectively. The effectiveness of this approach can be seen by the extent to which future observations confirm previous assertions. Hierarchical Bayesian (HB) statistical approaches allow an alternative approach to accounting for multiple uncertainties in paleo data. HB estimates of ecosystem state formally account for each of the common uncertainties listed above. HB approaches can readily incorporate additional data, and data of different types, into estimates of ecosystem state. And HB estimates of ecosystem state, with associated uncertainty, can be used to constrain forecasts of ecosystem dynamics based on mechanistic ecosystem models using data assimilation. Decisions about how to structure an HB model are also subjective, which creates a parallel framework for deciding how to interpret data from the deep past. Our group, the Paleoecological Observatory Network (PalEON), has applied hierarchical Bayesian statistics to formally account for uncertainties in proxy based estimates of past climate, fire, primary productivity, biomass, and vegetation composition. Our estimates often reveal new patterns of past ecosystem change, which is an unambiguously good thing, but we also often estimate a level of uncertainty that is uncomfortably high for many researchers. High levels of uncertainty are due to several features of the HB approach: spatiotemporal smoothing, the formal aggregation of multiple types of uncertainty, and a

  15. Do Orthopaedic Surgeons Acknowledge Uncertainty?

    NARCIS (Netherlands)

    Teunis, Teun; Janssen, Stein; Guitton, Thierry G.; Ring, David; Parisien, Robert

    2016-01-01

    Much of the decision-making in orthopaedics rests on uncertain evidence. Uncertainty is therefore part of our normal daily practice, and yet physician uncertainty regarding treatment could diminish patients' health. It is not known if physician uncertainty is a function of the evidence alone or if

  16. Optimal trading of plug-in electric vehicle aggregation agents in a market environment for sustainability

    International Nuclear Information System (INIS)

    Shafie-khah, M.; Heydarian-Forushani, E.; Golshan, M.E.H.; Siano, P.; Moghaddam, M.P.; Sheikh-El-Eslami, M.K.; Catalão, J.P.S.

    2016-01-01

    Highlights: • Proposing a multi-stage stochastic model of a PEV aggregation agent. • Reflecting several uncertainties using a stochastic model and appropriate scenarios. • Updating bids/offers of PEV aggregation agents by taking part in the intraday market. • Taking part in Demand Response eXchange (DRX) markets. - Abstract: As energy sustainability has become an emerging concern, Plug-in Electric Vehicles (PEVs) significantly affect the approaching smart grids. Indeed, Demand Response (DR) helps to mitigate the uncertainties of renewable energy sources, improving market efficiency and enhancing system reliability. This paper proposes a multi-stage stochastic model of a PEV aggregation agent to participate in day-ahead and intraday electricity markets. The stochastic model reflects several uncertainties such as the behaviour of PEV owners, electricity market prices, and the quantity of reserve activated by the system operator. For this purpose, appropriate scenarios are utilized to realize the uncertain feature of the problem. Furthermore, in the proposed model, the PEV aggregation agents can update their bids/offers by taking part in the intraday market. To this end, these aggregation agents take part in Demand Response eXchange (DRX) markets designed in the intraday session by employing DR resources. The numerical results show that DR provides a perfect opportunity for PEV aggregation agents to increase their profit. In addition, the results reveal that the PEV aggregation agent not only can increase its profit by participating in the DRX market, but also can become an important player in the mentioned market.

  17. The One or the Many: Quantified Subjectivity and Aggregated Uniqueness in Qualitative Rehabilitation Research.

    Science.gov (United States)

    Juritzen, Truls I; Soberg, Helene L; Røe, Cecilie; Saebu, Martin; Engen, Grace; Bliksvaer, Trond; Engebretsen, Eivind

    2017-01-01

    This article aims to identify and critically assess qualitative intervention studies of rehabilitation processes that target young adults. By applying a meta-epistemological approach inspired by the works of Michel Foucault and Julia Kristeva, we examine how the included studies present qualitative knowledge and whether they adhere to their own stated principles of qualitative knowledge. Through their stated aims and theoretical framing, the articles draw attention to individual processes of meaning making. Nonetheless, we find that the articles to a great extent emphasize frequencies of the qualitative data they present. Individual processes and experiences are subject to subdivisions and categorization and transformed into manageable objects of knowledge. In conclusion, these studies, with one important exception, contribute to self-marginalization of the knowledge they themselves promote: They undermine the uniqueness of the qualitative knowledge they proclaim by focusing on frequency and the general patterns and categories encompassing the unique. © The Author(s) 2016.

  18. Uncertainty propagation analysis of an N2O emission model at the plot and landscape scale

    NARCIS (Netherlands)

    Nol, L.; Heuvelink, G.B.M.; Veldkamp, A.; Vries, de W.; Kros, J.

    2010-01-01

    Nitrous oxide (N2O) emission from agricultural land is an important component of the total annual greenhouse gas (GHG) budget. In addition, uncertainties associated with agricultural N2O emissions are large. The goals of this work were (i) to quantify the uncertainties of modelled N2O emissions

  19. Leaf area index uncertainty estimates for model-data fusion applications

    Science.gov (United States)

    Andrew D. Richardson; D. Bryan Dail; D.Y. Hollinger

    2011-01-01

    Estimates of data uncertainties are required to integrate different observational data streams as model constraints using model-data fusion. We describe an approach with which random and systematic uncertainties in optical measurements of leaf area index [LAI] can be quantified. We use data from a measurement campaign at the spruce-dominated Howland Forest AmeriFlux...
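
    The record above concerns quantifying random and systematic components of a measurement uncertainty so that the total can serve as a data constraint in model-data fusion. A minimal sketch of the usual bookkeeping is given below, with made-up LAI readings and an assumed systematic (calibration) term; it is not the authors' specific procedure.

```python
# Sketch: combine random and systematic measurement uncertainty in quadrature.
import numpy as np

# Hypothetical repeated LAI readings from one plot (random error) plus an
# assumed systematic (calibration) uncertainty for the optical instrument.
lai_readings = np.array([4.1, 4.4, 3.9, 4.3, 4.2, 4.0])
sigma_systematic = 0.25          # assumed bias uncertainty, LAI units

n = lai_readings.size
lai_mean = lai_readings.mean()
sigma_random = lai_readings.std(ddof=1) / np.sqrt(n)   # standard error of the mean

# Random and systematic components assumed independent, so they add in quadrature.
sigma_total = np.hypot(sigma_random, sigma_systematic)
print(f"LAI = {lai_mean:.2f} +/- {sigma_total:.2f} "
      f"(random {sigma_random:.2f}, systematic {sigma_systematic:.2f})")
```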

  20. Application of Uncertainty and Sensitivity Analysis to a Kinetic Model for Enzymatic Biodiesel Production

    DEFF Research Database (Denmark)

    Price, Jason Anthony; Nordblad, Mathias; Woodley, John

    2014-01-01

    This paper demonstrates the added benefits of using uncertainty and sensitivity analysis in the kinetics of enzymatic biodiesel production. For this study, a kinetic model by Fedosov and co-workers is used. For the uncertainty analysis the Monte Carlo procedure was used to statistically quantify...