Uncertainty quantified trait predictions
Fazayeli, Farideh; Kattge, Jens; Banerjee, Arindam; Schrodt, Franziska; Reich, Peter
2015-04-01
Functional traits of organisms are key to understanding and predicting biodiversity and ecological change, which motivates continuous collection of traits and their integration into global databases. Such composite trait matrices are inherently sparse, severely limiting their usefulness for further analyses. On the other hand, traits are characterized by the phylogenetic trait signal, trait-trait correlations and environmental constraints, all of which provide information that could be used to statistically fill gaps. We propose the application of probabilistic models which, for the first time, utilize all three characteristics to fill gaps in trait databases and predict trait values at larger spatial scales. For this purpose we introduce BHPMF, a hierarchical Bayesian extension of Probabilistic Matrix Factorization (PMF). PMF is a machine learning technique which exploits the correlation structure of sparse matrices to impute missing entries. BHPMF additionally utilizes the taxonomic hierarchy for trait prediction. Implemented in the context of a Gibbs sampler MCMC approach, BHPMF provides uncertainty estimates for each trait prediction. We present comprehensive experimental results on the problem of plant trait prediction using the largest database of plant traits, where BHPMF shows strong empirical performance in uncertainty quantified trait prediction, outperforming the state-of-the-art based on point estimates. Further, we show that BHPMF is more accurate when it is confident, whereas its error is high when its uncertainty is high.
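The core PMF idea the abstract builds on (latent-factor completion of a sparse matrix) can be sketched on a toy trait matrix. The dimensions, learning rate, and regularization below are illustrative, and BHPMF's taxonomic hierarchy and Gibbs-sampled uncertainty estimates are omitted; this is only the basic imputation mechanism:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy trait matrix: 20 species x 5 traits with rank-2 latent structure.
U_true = rng.normal(size=(20, 2))
V_true = rng.normal(size=(5, 2))
X = U_true @ V_true.T

# Randomly hide ~30% of entries to mimic a sparse trait database.
mask = rng.random(X.shape) > 0.3  # True = observed

# PMF as MAP estimation: gradient descent on observed cells only,
# with Gaussian priors appearing as L2 regularization.
k, lam, lr = 2, 0.05, 0.02
U = 0.1 * rng.normal(size=(20, k))
V = 0.1 * rng.normal(size=(5, k))
for _ in range(2000):
    R = (U @ V.T - X) * mask          # residual on observed entries only
    U -= lr * (R @ V + lam * U)
    V -= lr * (R.T @ U + lam * V)

pred = U @ V.T                         # imputed values fill the hidden cells
rmse_obs = np.sqrt(((pred - X)[mask] ** 2).mean())
rmse_gap = np.sqrt(((pred - X)[~mask] ** 2).mean())
print(rmse_obs, rmse_gap)
```

BHPMF replaces this point estimate with posterior samples, which is what yields per-prediction uncertainties.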
Aggregating and Communicating Uncertainty.
1980-04-01
means for identifying and communicating uncertainty. The Appendix A bibliography includes: Icek Ajzen, "Intuitive Theories of Events and the Effects of Base-Rate Information on Prediction" (on attending to the criterion while disregarding valid but noncausal information).
Quantifying uncertainty from material inhomogeneity.
Energy Technology Data Exchange (ETDEWEB)
Battaile, Corbett Chandler; Emery, John M.; Brewer, Luke N.; Boyce, Brad Lee
2009-09-01
Most engineering materials are inherently inhomogeneous in their processing, internal structure, properties, and performance. Their properties are therefore statistical rather than deterministic. These inhomogeneities manifest across multiple length and time scales, leading to variabilities, i.e. statistical distributions, that are necessary to accurately describe each stage in the process-structure-properties hierarchy, and are ultimately the primary source of uncertainty in performance of the material and component. When localized events are responsible for component failure, or when component dimensions are on the order of microstructural features, this uncertainty is particularly important. For ultra-high reliability applications, the uncertainty is compounded by a lack of data describing the extremely rare events. Hands-on testing alone cannot supply sufficient data for this purpose. To date, there is no robust or coherent method to quantify this uncertainty so that it can be used in a predictive manner at the component length scale. The research presented in this report begins to address this lack of capability through a systematic study of the effects of microstructure on the strain concentration at a hole. To achieve the strain concentration, small circular holes (approximately 100 μm in diameter) were machined into brass tensile specimens using a femto-second laser. The brass was annealed at 450 °C, 600 °C, and 800 °C to produce three hole-to-grain size ratios of approximately 7, 1, and 1/7. Electron backscatter diffraction experiments were used to guide the construction of digital microstructures for finite element simulations of uniaxial tension. Digital image correlation experiments were used to qualitatively validate the numerical simulations. The simulations were performed iteratively to generate statistics describing the distribution of plastic strain at the hole in varying microstructural environments. In both the experiments and simulations, the
Multicandidate Elections: Aggregate Uncertainty in the Laboratory*
Bouton, Laurent; Castanheira, Micael; Llorente-Saguer, Aniol
2015-01-01
The rational-voter model is often criticized on the grounds that two of its central predictions (the paradox of voting and Duverger’s Law) are at odds with reality. Recent theoretical advances suggest that these empirically unsound predictions might be an artifact of an (arguably unrealistic) assumption: the absence of aggregate uncertainty about the distribution of preferences in the electorate. In this paper, we provide direct empirical evidence of the effect of aggregate uncertainty in multicandidate elections. Adopting a theory-based experimental approach, we explore whether aggregate uncertainty indeed favors the emergence of non-Duverger’s law equilibria in plurality elections. Our experimental results support the main theoretical predictions: sincere voting is a predominant strategy under aggregate uncertainty, whereas without aggregate uncertainty, voters massively coordinate their votes behind one candidate, who wins almost surely.
Quantifying reliability uncertainty : a proof of concept.
Energy Technology Data Exchange (ETDEWEB)
Diegert, Kathleen V.; Dvorack, Michael A.; Ringland, James T.; Mundt, Michael Joseph; Huzurbazar, Aparna (Los Alamos National Laboratory, Los Alamos, NM); Lorio, John F.; Fatherley, Quinn (Los Alamos National Laboratory, Los Alamos, NM); Anderson-Cook, Christine (Los Alamos National Laboratory, Los Alamos, NM); Wilson, Alyson G. (Los Alamos National Laboratory, Los Alamos, NM); Zurn, Rena M.
2009-10-01
This paper develops Classical and Bayesian methods for quantifying the uncertainty in reliability for a system of mixed series and parallel components for which both go/no-go and variables data are available. Classical methods focus on uncertainty due to sampling error. Bayesian methods can explore both sampling error and other knowledge-based uncertainties. To date, the reliability community has focused on qualitative statements about uncertainty because there was no consensus on how to quantify them. This paper provides a proof of concept that workable, meaningful quantification methods can be constructed. In addition, the application of the methods demonstrated that the results from the two fundamentally different approaches can be quite comparable. In both approaches, results are sensitive to the details of how one handles components for which no failures have been seen in relatively few tests.
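A minimal Bayesian sketch of go/no-go reliability quantification for a series system: each component gets a Beta posterior from its pass/fail counts, and sampling from those posteriors propagates the uncertainty to system reliability. The test counts and flat Beta(1,1) priors are invented for illustration; the report's actual priors, data types, and mixed series/parallel structure are richer:

```python
import numpy as np

rng = np.random.default_rng(6)

# Two components in series with go/no-go test data: (n trials, failures).
tests = {"A": (50, 0), "B": (20, 1)}

# System reliability is the product of component reliabilities; sample each
# component's Beta(1 + successes, 1 + failures) posterior and multiply.
samples = np.ones(100_000)
for n, f in tests.values():
    samples *= rng.beta(1 + n - f, 1 + f, size=samples.size)

lo, med, hi = np.quantile(samples, [0.05, 0.5, 0.95])
print(lo, med, hi)
```

Note how the zero-failure component A still contributes real uncertainty, echoing the abstract's point that results are sensitive to components with no observed failures in few tests.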
Quantifying uncertainty in stable isotope mixing models
Davis, Paul; Syme, James; Heikoop, Jeffrey; Fessenden-Rahn, Julianna; Perkins, George; Newman, Brent; Chrystal, Abbey E.; Hagerty, Shannon B.
2015-05-01
Mixing models are powerful tools for identifying biogeochemical sources and determining mixing fractions in a sample. However, identification of actual source contributors is often not simple, and source compositions typically vary or even overlap, significantly increasing model uncertainty in calculated mixing fractions. This study compares three probabilistic methods: Stable Isotope Analysis in R (SIAR), a pure Monte Carlo technique (PMC), and the Stable Isotope Reference Source (SIRS) mixing model, a new technique that estimates mixing in systems with more than three sources and/or uncertain source compositions. In this paper, we use nitrate stable isotope examples (δ15N and δ18O), but all methods tested are applicable to other tracers. In Phase I of a three-phase blind test, we compared methods for a set of six-source nitrate problems. PMC was unable to find solutions for two of the target water samples. The Bayesian method, SIAR, experienced anchoring problems, and SIRS calculated mixing fractions that most closely approximated the known mixing fractions. For that reason, SIRS was the only approach used in the next phase of testing. In Phase II, the problem was broadened so that any subset of the six sources could be a possible solution to the mixing problem. Results showed a high rate of Type I errors, where solutions included sources that were not contributing to the sample. In Phase III, eliminating some sources based on assumed site knowledge and assumed nitrate concentrations substantially reduced mixing fraction uncertainties and lowered the Type I error rate. These results demonstrate that valuable insights into stable isotope mixing problems result from probabilistic mixing model approaches like SIRS. The results also emphasize the importance of identifying a minimal set of potential sources and quantifying uncertainties in source isotopic composition, as well as demonstrating the value of additional information in reducing the uncertainty in calculated
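The pure Monte Carlo (PMC) style of mixing model can be sketched as rejection sampling: draw candidate mixing fractions and perturbed source signatures, and keep those whose predicted mixture matches the measured sample. The three source end-members, their spread, and the tolerance below are invented for illustration (the study's problems used six nitrate sources):

```python
import numpy as np

rng = np.random.default_rng(1)

# Three hypothetical sources in (d15N, d18O) space, each with sd = 1 permil.
sources = np.array([[0.0, 0.0], [10.0, 2.0], [5.0, 15.0]])
sd = 1.0
measured = np.array([5.0, 5.0])   # isotopic signature of the mixed sample
tol = 1.0                         # acceptance tolerance (permil)

n = 100_000
f = rng.dirichlet(np.ones(3), size=n)                  # candidate fractions
src = sources + rng.normal(scale=sd, size=(n, 3, 2))   # perturbed sources
mix = np.einsum('ns,nsi->ni', f, src)                  # predicted mixtures
keep = np.linalg.norm(mix - measured, axis=1) < tol
post = f[keep]                                         # accepted fractions
print(post.mean(axis=0), keep.sum())
```

The spread of the accepted fractions is the uncertainty estimate; widening the source distributions or adding overlapping sources inflates it, which is the failure mode the abstract describes.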
Quantifying uncertainty in the chemical master equation
Bayati, Basil S.
2017-06-01
We describe a novel approach to quantifying the uncertainty inherent in the chemical kinetic master equation with stochastic coefficients. A stochastic collocation method is coupled to an analytical expansion of the master equation to analyze the effects of both extrinsic and intrinsic noise. The method consists of an analytical moment-closure method resulting in a large set of differential equations with stochastic coefficients that are in turn solved via a Smolyak sparse grid collocation method. We discuss the error of the method relative to the dimension of the model and clarify which methods are most suitable for the problem. We apply the method to two typical problems arising in chemical kinetics with time-independent extrinsic noise. Additionally, we show agreement with classical Monte Carlo simulations and calculate the variance over time as the sum of two expectations. The method presented here has better convergence properties for low to moderate dimensions than standard Monte Carlo methods and is therefore a superior alternative in this regime.
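The stochastic collocation idea, evaluating the moment equations at quadrature nodes of the uncertain coefficient rather than at random samples, can be illustrated on a single birth-death moment equation with an uncertain birth rate. This is a deliberately simple one-dimensional stand-in for the paper's moment-closure system and Smolyak sparse grids:

```python
import numpy as np

# Birth-death process: the mean molecule count obeys dm/dt = k - g*m,
# with extrinsic uncertainty in the birth rate, k ~ Normal(mu, sigma^2).
g, mu, sigma, T, m0 = 1.0, 10.0, 1.0, 2.0, 0.0

def mean_at_T(k):
    # Closed-form solution of the moment equation at time T.
    return k / g + (m0 - k / g) * np.exp(-g * T)

# Stochastic collocation: probabilists' Gauss-Hermite nodes in k.
nodes, weights = np.polynomial.hermite_e.hermegauss(5)
ks = mu + sigma * nodes
E_colloc = np.sum(weights * mean_at_T(ks)) / np.sum(weights)

# Monte Carlo reference for comparison.
rng = np.random.default_rng(2)
E_mc = mean_at_T(mu + sigma * rng.normal(size=100_000)).mean()
print(E_colloc, E_mc)
```

Five deterministic solves match a 100,000-sample Monte Carlo estimate here, illustrating the convergence advantage the abstract claims for low to moderate stochastic dimensions.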
Quantifying Mixed Uncertainties in Cyber Attacker Payoffs
Energy Technology Data Exchange (ETDEWEB)
Chatterjee, Samrat; Halappanavar, Mahantesh; Tipireddy, Ramakrishna; Oster, Matthew R.; Saha, Sudip
2015-04-15
Representation and propagation of uncertainty in cyber attacker payoffs is a key aspect of security games. Past research has primarily focused on representing the defender’s beliefs about attacker payoffs as point utility estimates. More recently, within the physical security domain, attacker payoff uncertainties have been represented as Uniform and Gaussian probability distributions, and intervals. Within cyber-settings, continuous probability distributions may still be appropriate for addressing statistical (aleatory) uncertainties where the defender may assume that the attacker’s payoffs differ over time. However, systematic (epistemic) uncertainties may exist, where the defender may not have sufficient knowledge or there is insufficient information about the attacker’s payoff generation mechanism. Such epistemic uncertainties are more suitably represented as probability boxes with intervals. In this study, we explore the mathematical treatment of such mixed payoff uncertainties.
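A probability box for an epistemically uncertain payoff can be sketched as the envelope of CDFs over an interval of distribution parameters. The Normal family and the [2, 5] interval for the unknown mean are illustrative assumptions, not the study's actual payoff model:

```python
from math import erf, sqrt

# Epistemic uncertainty: attacker payoff ~ Normal with unknown mean in [2, 5]
# and known sd = 1. The p-box is the envelope of all such CDFs.
lo_mu, hi_mu, sd = 2.0, 5.0, 1.0

def norm_cdf(x, mu, sd):
    return 0.5 * (1 + erf((x - mu) / (sd * sqrt(2))))

def pbox(x):
    # Lower/upper bounds on P(payoff <= x) across the mean interval:
    # the largest mean gives the lowest CDF value, and vice versa.
    return norm_cdf(x, hi_mu, sd), norm_cdf(x, lo_mu, sd)

for x in (1.0, 3.5, 6.0):
    lo, up = pbox(x)
    print(f"P(payoff <= {x}) in [{lo:.3f}, {up:.3f}]")
```

A defender reasoning over the p-box works with these interval-valued probabilities instead of a single CDF, which is the mixed aleatory/epistemic treatment the abstract motivates.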
Quantifying uncertainty in future ocean carbon uptake
Dunne, John P.
2016-10-01
Attributing uncertainty in ocean carbon uptake between societal trajectory (scenarios), Earth System Model construction (structure), and inherent natural variation in climate (internal) is critical to make progress in identifying, understanding, and reducing those uncertainties. In the present issue of Global Biogeochemical Cycles, Lovenduski et al. (2016) disentangle these drivers of uncertainty in ocean carbon uptake over time and space and assess the resulting implications for the emergence timescales of structural and scenario uncertainty over internal variability. Such efforts are critical for establishing realizable and efficient monitoring goals and prioritizing areas of continued model development. Under recently proposed climate stabilization targets, such efforts to partition uncertainty also become increasingly critical to societal decision-making in the context of carbon stabilization.
A New Framework for Quantifying Lidar Uncertainty
Energy Technology Data Exchange (ETDEWEB)
Newman, Jennifer F.; Clifton, Andrew; Bonin, Timothy A.; Churchfield, Matthew J.
2017-03-24
As wind turbine sizes increase and wind energy expands to more complex and remote sites, remote sensing devices such as lidars are expected to play a key role in wind resource assessment and power performance testing. The switch to remote sensing devices represents a paradigm shift in the way the wind industry typically obtains and interprets measurement data for wind energy. For example, the measurement techniques and sources of uncertainty for a remote sensing device are vastly different from those associated with a cup anemometer on a meteorological tower. Current IEC standards discuss uncertainty due to mounting, calibration, and classification of the remote sensing device, among other parameters. Values of the uncertainty are typically given as a function of the mean wind speed measured by a reference device. However, real-world experience has shown that lidar performance is highly dependent on atmospheric conditions, such as wind shear, turbulence, and aerosol content. At present, these conditions are not directly incorporated into the estimated uncertainty of a lidar device. In this presentation, we propose the development of a new lidar uncertainty framework that adapts to current flow conditions and more accurately represents the actual uncertainty inherent in lidar measurements under different conditions. In this new framework, sources of uncertainty are identified for estimation of the line-of-sight wind speed and reconstruction of the three-dimensional wind field. These sources are then related to physical processes caused by the atmosphere and lidar operating conditions. The framework is applied to lidar data from an operational wind farm to assess the ability of the framework to predict errors in lidar-measured wind speed.
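The line-of-sight projection at the heart of lidar wind retrieval, and a Monte Carlo sense of how turbulence scatter maps into line-of-sight uncertainty, can be sketched as follows. The beam geometry and turbulence levels are invented, and this is only the first step (line-of-sight estimation) of the framework described above, not its atmospheric-condition-dependent uncertainty model:

```python
import numpy as np

# Line-of-sight speed seen by a lidar beam with azimuth az and elevation el,
# for horizontal wind components (u, v) and vertical wind w.
def v_los(u, v, w, az, el):
    return (u * np.cos(az) * np.cos(el)
            + v * np.sin(az) * np.cos(el)
            + w * np.sin(el))

# Monte Carlo propagation of turbulence-driven scatter in (u, v, w);
# the means and standard deviations below are hypothetical.
rng = np.random.default_rng(7)
u = rng.normal(8.0, 0.5, 50_000)
v = rng.normal(0.0, 0.5, 50_000)
w = rng.normal(0.0, 0.3, 50_000)
az, el = np.deg2rad(0.0), np.deg2rad(28.0)
los = v_los(u, v, w, az, el)
print(los.mean(), los.std())
```

Because the projection weights u, v, and w differently at each beam angle, the same turbulence produces different line-of-sight scatter in different geometries, which is one reason a fixed uncertainty value per mean wind speed is insufficient.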
Quantifying uncertainty in observational rainfall datasets
Lennard, Chris; Dosio, Alessandro; Nikulin, Grigory; Pinto, Izidine; Seid, Hussen
2015-04-01
The CO-ordinated Regional Downscaling Experiment (CORDEX) has to date seen the publication of at least ten journal papers that examine the African domain during 2012 and 2013. Five of these papers consider Africa generally (Nikulin et al. 2012, Kim et al. 2013, Hernandes-Dias et al. 2013, Laprise et al. 2013, Panitz et al. 2013) and five have regional foci: Tramblay et al. (2013) on Northern Africa, Mariotti et al. (2014) and Gbobaniyi et al. (2013) on West Africa, Endris et al. (2013) on East Africa and Kalagnoumou et al. (2013) on southern Africa. A further three papers known to the authors are under review. These papers all use observed rainfall and/or temperature data to evaluate/validate the regional model output and often proceed to assess projected changes in these variables due to climate change in the context of these observations. The most popular reference rainfall data used are the CRU, GPCP, GPCC, TRMM and UDEL datasets. However, as Kalagnoumou et al. (2013) point out, there are many other rainfall datasets available for consideration, for example, CMORPH, FEWS, TAMSAT & RIANNAA, TAMORA and the WATCH & WATCH-DEI data. They, with others (Nikulin et al. 2012, Sylla et al. 2012), show that the observed datasets can have a very wide spread at a particular space-time coordinate. As more ground-, space- and reanalysis-based rainfall products become available, all of which use different methods to produce precipitation data, the selection of reference data is becoming an important factor in model evaluation. A number of factors can contribute to uncertainty in terms of the reliability and validity of the datasets, such as radiance conversion algorithms, the quantity and quality of available station data, interpolation techniques, and the blending methods used to combine satellite and gauge based products. However, to date no comprehensive study has been performed to evaluate the uncertainty in these observational datasets. We assess 18 gridded
Quantifying uncertainty in proxy-based paleoclimate reconstructions
D'Andrea, William J.; Polissar, Pratigya J.
2017-04-01
There are now numerous geochemical tools that can provide quantitative estimates of past temperatures and hydrologic variables. These proxies for past climate are usually calibrated using empirical relationships with quantifiable uncertainty. Laboratory analysis introduces additional sources of uncertainty that must also be accounted for in order to fully propagate the uncertainty in an estimated climate variable. The aim of this presentation is to review the sources of uncertainty that can be quantified and reported when using different geochemical climate proxies for paleoclimate reconstruction. I will consider a number of widely used climate proxies, discuss the relative importance of various sources of uncertainty for each, and attempt to identify ways in which the scientific community might reduce these uncertainties. For example, compound-specific δD measurements can be used for quantitative estimation of source water δD values, a useful tracer for paleohydrologic changes. Such estimates have quantifiable levels of uncertainty that are often miscalculated, resulting in inaccurate error reporting in the scientific literature that can impact paleohydrologic interpretations. Here, I will summarize the uncertainties inherent to molecular δD measurements and the quantification of source water δD values, and discuss the assumptions involved when omitting various sources of uncertainty. The analytical uncertainty of δD measurements is often improperly estimated and is secondary to the apparent fractionation between the δD values of source water and molecule; normalization of data to the VSMOW scale introduces the largest amount of uncertainty.
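First-order propagation of two of the uncertainty sources for source-water δD, the analytical uncertainty on the measured lipid value and the calibration uncertainty on the apparent fractionation ε, can be sketched with the standard fractionation relation. All numerical values below are hypothetical:

```python
import numpy as np

# Propagating uncertainty from a measured lipid dD value to source-water dD
# through an assumed apparent fractionation eps (permil), to first order.
dD_wax, u_meas = -180.0, 3.0   # measured value and analytical sd (hypothetical)
eps, u_eps = -120.0, 10.0      # apparent fractionation and calibration sd

# Standard relation: dD_water = (dD_wax - eps) / (1 + eps/1000)
f = 1 + eps / 1000.0
dD_water = (dD_wax - eps) / f

# First-order (quadrature) propagation of the two independent uncertainties.
d_dmeas = 1 / f                                      # sensitivity to dD_wax
d_deps = (-f - (dD_wax - eps) / 1000.0) / f**2       # sensitivity to eps
u_water = np.hypot(d_dmeas * u_meas, d_deps * u_eps)
print(dD_water, u_water)
```

With these numbers the calibration term dominates the combined uncertainty, illustrating the abstract's point that analytical uncertainty is secondary to the apparent fractionation.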
A Bayesian approach to simultaneously quantify assignments and linguistic uncertainty
Energy Technology Data Exchange (ETDEWEB)
Chavez, Gregory M [Los Alamos National Laboratory]; Booker, Jane M [Booker Scientific, Fredericksburg]; Ross, Timothy J [UNM]
2010-10-07
Subject matter expert assessments can include both assignment and linguistic uncertainty. This paper examines assessments containing linguistic uncertainty associated with a qualitative description of a specific state of interest and the assignment uncertainty associated with assigning a qualitative value to that state. A Bayesian approach is examined to simultaneously quantify both assignment and linguistic uncertainty in the posterior probability. The approach is applied to a simplified damage assessment model involving both assignment and linguistic uncertainty. The utility of the approach and the conditions under which the approach is feasible are examined and identified.
Quantifying data worth toward reducing predictive uncertainty.
Dausman, Alyssa M; Doherty, John; Langevin, Christian D; Sukop, Michael C
2010-01-01
The present study demonstrates a methodology for optimization of environmental data acquisition. Based on the premise that the worth of data increases in proportion to its ability to reduce the uncertainty of key model predictions, the methodology can be used to compare the worth of different data types, gathered at different locations within study areas of arbitrary complexity. The method is applied to a hypothetical nonlinear, variable density numerical model of salt and heat transport. The relative utilities of temperature and concentration measurements at different locations within the model domain are assessed in terms of their ability to reduce the uncertainty associated with predictions of movement of the salt water interface in response to a decrease in fresh water recharge. In order to test the sensitivity of the method to nonlinear model behavior, analyses were repeated for multiple realizations of system properties. Rankings of observation worth were similar for all realizations, indicating robust performance of the methodology when employed in conjunction with a highly nonlinear model. The analysis showed that while concentration and temperature measurements can both aid in the prediction of interface movement, concentration measurements, especially when taken in proximity to the interface at locations where the interface is expected to move, are of greater worth than temperature measurements. Nevertheless, it was also demonstrated that pairs of temperature measurements, taken in strategic locations with respect to the interface, can also lead to more precise predictions of interface movement.
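The premise that an observation's worth equals the reduction in predictive uncertainty it buys can be sketched with linear (Bayes-linear) algebra on a toy model. The sensitivities and covariances below are random stand-ins for the salt- and heat-transport model's, so only the mechanics, not the hydrologic conclusions, carry over:

```python
import numpy as np

rng = np.random.default_rng(3)

# Linear data-worth analysis: predictive variance before/after observations.
n_par = 4
C_p = np.eye(n_par)              # prior parameter covariance (assumed)
y = rng.normal(size=n_par)       # sensitivity of the prediction to parameters

def predictive_var(X, sd_obs):
    # Posterior parameter covariance after observations with sensitivities X,
    # then propagated to the prediction via y.
    if X.size == 0:
        C_post = C_p
    else:
        C_obs = sd_obs**2 * np.eye(X.shape[0])
        C_post = np.linalg.inv(np.linalg.inv(C_p)
                               + X.T @ np.linalg.inv(C_obs) @ X)
    return y @ C_post @ y

candidates = rng.normal(size=(3, n_par))   # three candidate observation types
base = predictive_var(np.empty((0, n_par)), 1.0)
worth = [base - predictive_var(c[None, :], 1.0) for c in candidates]
print(base, worth)
```

Ranking `worth` across candidate locations and data types is the comparison the study performs for temperature versus concentration measurements.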
Gronewold, A.; Bruxer, J.; Smith, J.; Hunter, T.; Fortin, V.; Clites, A. H.; Durnford, D.; Qian, S.; Seglenieks, F.
2015-12-01
Resolving and projecting the water budget of the North American Great Lakes basin (Earth's largest lake system) requires aggregation of data from a complex array of in situ monitoring and remote sensing products that cross an international border (leading to potential sources of bias and other inconsistencies), and are relatively sparse over the surfaces of the lakes themselves. Data scarcity over the surfaces of the lakes is a particularly significant problem because, unlike Earth's other large freshwater basins, the Great Lakes basin water budget is (on annual scales) comprised of relatively equal contributions from runoff, over-lake precipitation, and over-lake evaporation. Consequently, understanding drivers behind changes in regional water storage and water levels requires a data management framework that can reconcile uncertainties associated with data scarcity and bias, and propagate those uncertainties into regional water budget projections and historical records. Here, we assess the development of a historical hydrometeorological database for the entire Great Lakes basin with records dating back to the late 1800s, and describe improvements that are specifically intended to differentiate hydrological, climatological, and anthropogenic drivers behind recent extreme changes in Great Lakes water levels. Our assessment includes a detailed analysis of the extent to which extreme cold winters in central North America in 2013-2014 (caused by the anomalous meridional upper air flow - commonly referred to in the public media as the "polar vortex" phenomenon) altered the thermal and hydrologic regimes of the Great Lakes and led to a record setting surge in water levels between January 2014 and December 2015.
Quantifying and reducing uncertainties in cancer therapy
Barrett, Harrison H.; Alberts, David S.; Woolfenden, James M.; Liu, Zhonglin; Caucci, Luca; Hoppin, John W.
2015-03-01
There are two basic sources of uncertainty in cancer chemotherapy: how much of the therapeutic agent reaches the cancer cells, and how effective it is in reducing or controlling the tumor when it gets there. There is also a concern about adverse effects of the therapy drug. Similarly in external-beam radiation therapy or radionuclide therapy, there are two sources of uncertainty: delivery and efficacy of the radiation absorbed dose, and again there is a concern about radiation damage to normal tissues. The therapy operating characteristic (TOC) curve, developed in the context of radiation therapy, is a plot of the probability of tumor control vs. the probability of normal-tissue complications as the overall radiation dose level is varied, e.g. by varying the beam current in external-beam radiotherapy or the total injected activity in radionuclide therapy. The TOC can be applied to chemotherapy with the administered drug dosage as the variable. The area under a TOC curve (AUTOC) can be used as a figure of merit for therapeutic efficacy, analogous to the area under an ROC curve (AUROC), which is a figure of merit for diagnostic efficacy. In radiation therapy AUTOC can be computed for a single patient by using image data along with radiobiological models for tumor response and adverse side effects. In this paper we discuss the potential of using mathematical models of drug delivery and tumor response with imaging data to estimate AUTOC for chemotherapy, again for a single patient. This approach provides a basis for truly personalized therapy and for rigorously assessing and optimizing the therapy regimen for the particular patient. A key role is played by Emission Computed Tomography (PET or SPECT) of radiolabeled chemotherapy drugs.
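A TOC curve and its AUTOC figure of merit can be sketched with assumed logistic dose-response curves for tumor control and normal-tissue complications; the d50 and slope values below are illustrative only, not fitted radiobiological models:

```python
import numpy as np

# Therapy operating characteristic (TOC): sweep the overall dose level,
# compute probability of tumor control (PTC) and of normal-tissue
# complications (PNTC) from assumed logistic responses, integrate for AUTOC.
def logistic(d, d50, k):
    return 1 / (1 + np.exp(-k * (d - d50)))

dose = np.linspace(0, 100, 1001)
ptc = logistic(dose, 50, 0.15)     # tumor control (assumed response)
pntc = logistic(dose, 70, 0.10)    # complications (assumed response)

# AUTOC: area under PTC as a function of PNTC (trapezoid rule).
autoc = float(np.sum(0.5 * (ptc[1:] + ptc[:-1]) * np.diff(pntc)))
print(autoc)
```

Because the complication curve sits at higher dose than the control curve here, AUTOC is high; shifting the two curves closer together drives AUTOC down, mirroring how the metric scores a therapy's margin between efficacy and toxicity.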
Quantifying uncertainty in LCA-modelling of waste management systems.
Clavreul, Julie; Guyonnet, Dominique; Christensen, Thomas H
2012-12-01
Uncertainty analysis in LCA studies has been subject to major progress over the last years. In the context of waste management, various methods have been implemented but a systematic method for uncertainty analysis of waste-LCA studies is lacking. The objective of this paper is (1) to present the sources of uncertainty specifically inherent to waste-LCA studies, (2) to select and apply several methods for uncertainty analysis and (3) to develop a general framework for quantitative uncertainty assessment of LCA of waste management systems. The suggested method is a sequence of four steps combining the selected methods: (Step 1) a sensitivity analysis evaluating the sensitivities of the results with respect to the input uncertainties, (Step 2) an uncertainty propagation providing appropriate tools for representing uncertainties and calculating the overall uncertainty of the model results, (Step 3) an uncertainty contribution analysis quantifying the contribution of each parameter uncertainty to the final uncertainty and (Step 4) as a new approach, a combined sensitivity analysis providing a visualisation of the shift in the ranking of different options due to variations of selected key parameters. This tiered approach optimises the resources available to LCA practitioners by only propagating the most influential uncertainties.
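Steps 2 and 3 of the suggested sequence (uncertainty propagation and contribution analysis) can be sketched on a two-parameter toy impact model. The model and parameter distributions are invented; for this linear model with independent inputs the variance contributions are exact:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy waste-LCA model: impact = a*mass + b*energy, uncertain a and b.
def impact(a, b):
    return a * 100.0 + b * 50.0

# Step 2: uncertainty propagation by Monte Carlo sampling of the parameters.
a = rng.normal(2.0, 0.2, 10_000)
b = rng.normal(1.0, 0.4, 10_000)
res = impact(a, b)

# Step 3: contribution analysis via variance decomposition (exact here
# because the model is linear and the inputs independent).
var_a = (100.0 * 0.2) ** 2
var_b = (50.0 * 0.4) ** 2
share_a = var_a / (var_a + var_b)
print(res.mean(), res.std(), share_a)
```

In a real waste-LCA model the contributions must be estimated numerically (e.g. by sampling one parameter at a time), but the interpretation, which parameter's uncertainty dominates the result, is the same.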
Spatial and Temporal Uncertainty of Crop Yield Aggregations
Porwollik, Vera; Mueller, Christoph; Elliott, Joshua; Chryssanthacopoulos, James; Iizumi, Toshichika; Ray, Deepak K.; Ruane, Alex C.; Arneth, Almut; Balkovic, Juraj; Ciais, Philippe; Deryng, Delphine; Folberth, Christian; Izaurralde, Robert C.; Jones, Curtis D.; Khabarov, Nikolay; Lawrence, Peter J.; Liu, Wenfeng; Pugh, Thomas A. M.; Reddy, Ashwan; Sakurai, Gen; Schmid, Erwin; Wang, Xuhui; Wu, Xiuchen; de Wit, Allard
2016-01-01
The aggregation of simulated gridded crop yields to national or regional scale requires information on temporal and spatial patterns of crop-specific harvested areas. This analysis estimates the uncertainty of simulated gridded yield time series related to the aggregation with four different harvested area data sets. We compare aggregated yield time series from the Global Gridded Crop Model Inter-comparison project for four crop types from 14 models at global, national, and regional scale to determine aggregation-driven differences in mean yields and temporal patterns as measures of uncertainty. The quantity and spatial patterns of harvested areas differ for individual crops among the four datasets applied for the aggregation. Simulated spatial yield patterns also differ among the 14 models. These differences in harvested areas and simulated yield patterns lead to differences in aggregated productivity estimates, both in mean yield and in the temporal dynamics. Among the four investigated crops, wheat yield (17% relative difference) is most affected by the uncertainty introduced by the aggregation at the global scale. The correlation of temporal patterns of globally aggregated yield time series can be as low as r = 0.28 (soybean). For the majority of countries, mean relative differences of nationally aggregated yields account for 10% or less. The spatial and temporal differences can be substantially higher for individual countries. Among the top-10 crop producers, aggregated national multi-annual mean relative differences of yields can be up to 67% (maize, South Africa), 43% (wheat, Pakistan), 51% (rice, Japan), and 427% (soybean, Bolivia). Correlations of differently aggregated yield time series can be as low as r = 0.56 (maize, India), r = 0.05 (wheat, Russia), r = 0.13 (rice, Vietnam), and r = -0.01 (soybean, Uruguay). The aggregation to sub-national scale in comparison to country scale shows that spatial uncertainties can cancel out in countries with
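The aggregation-driven uncertainty described above can be reproduced on a toy grid: the same simulated yield field aggregated with two hypothetical harvested-area datasets gives different national yields. All numbers are invented to make the mechanism visible:

```python
import numpy as np

# Gridded yields for one crop (t/ha) on a tiny 2x2 "country".
yields = np.array([[2.0, 4.0],
                   [6.0, 3.0]])

# Two harvested-area datasets (ha) that disagree on where the crop is grown.
area_a = np.array([[100.0, 50.0],
                   [10.0, 40.0]])
area_b = np.array([[20.0, 80.0],
                   [90.0, 10.0]])

def national_yield(y, area):
    # Production-weighted aggregation: total production / total harvested area.
    return (y * area).sum() / area.sum()

ya = national_yield(yields, area_a)
yb = national_yield(yields, area_b)
rel_diff = abs(ya - yb) / ((ya + yb) / 2)
print(ya, yb, rel_diff)
```

Even though the yield field is identical, the two weightings disagree by tens of percent, which is exactly the kind of relative difference the study quantifies per crop and country.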
Quantifying and Reducing Curve-Fitting Uncertainty in Isc: Preprint
Energy Technology Data Exchange (ETDEWEB)
Campanelli, Mark; Duck, Benjamin; Emery, Keith
2015-09-28
Current-voltage (I-V) curve measurements of photovoltaic (PV) devices are used to determine performance parameters and to establish traceable calibration chains. Measurement standards specify localized curve fitting methods, e.g., straight-line interpolation/extrapolation of the I-V curve points near short-circuit current, Isc. By considering such fits as statistical linear regressions, uncertainties in the performance parameters are readily quantified. However, the legitimacy of such a computed uncertainty requires that the model be a valid (local) representation of the I-V curve and that the noise be sufficiently well characterized. Using more data points often has the advantage of lowering the uncertainty. However, more data points can make the uncertainty in the fit arbitrarily small, and this fit uncertainty misses the dominant residual uncertainty due to so-called model discrepancy. Using objective Bayesian linear regression for straight-line fits for Isc, we investigate an evidence-based method to automatically choose data windows of I-V points with reduced model discrepancy. We also investigate noise effects. Uncertainties, aligned with the Guide to the Expression of Uncertainty in Measurement (GUM), are quantified throughout.
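Treating the straight-line extrapolation to Isc as a statistical regression, as the abstract describes, yields the intercept and its standard uncertainty directly from ordinary least squares. The synthetic I-V points below include slight curvature to mimic model discrepancy; this is a plain frequentist sketch, not the paper's objective Bayesian, evidence-based window selection:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic I-V points near short circuit: nearly linear, with noise and a
# small quadratic term standing in for model discrepancy.
V = np.linspace(0.0, 0.1, 10)
I_true = 5.0 - 0.8 * V - 2.0 * V**2
I = I_true + rng.normal(0, 0.002, V.size)

# Straight-line regression I = Isc + m*V; Isc is the intercept at V = 0.
A = np.column_stack([np.ones_like(V), V])
coef, res, *_ = np.linalg.lstsq(A, I, rcond=None)
Isc = coef[0]
dof = V.size - 2
s2 = res[0] / dof                         # residual variance estimate
cov = s2 * np.linalg.inv(A.T @ A)
u_Isc = np.sqrt(cov[0, 0])                # standard uncertainty of Isc
print(Isc, u_Isc)
```

Widening the voltage window shrinks `u_Isc` but inflates the unmodeled curvature bias, which is the trade-off the evidence-based window selection is designed to manage.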
A Novel Method to Quantify Soil Aggregate Stability by Measuring Aggregate Bond Energies
Efrat, Rachel; Rawlins, Barry G.; Quinton, John N.; Watts, Chris W.; Whitmore, Andy P.
2016-04-01
Soil aggregate stability is a key indicator of soil quality because it controls physical, biological and chemical functions important in cultivated soils. Micro-aggregates are responsible for the long-term sequestration of carbon in soil, and therefore determine soil's role in the carbon cycle. It is thus vital that techniques to measure aggregate stability are accurate, consistent and reliable, in order to appropriately manage and monitor soil quality, and to develop our understanding and estimates of soil as a carbon store for appropriate incorporation in carbon cycle models. Practices used to assess the stability of aggregates vary in sample preparation, operational technique and unit of results. They use proxies and lack quantification. Results from projects that lack methodological or resultant comparability are therefore conflicting. Typical modern stability tests suspend aggregates in water and monitor fragmentation upon exposure to an un-quantified amount of ultrasonic energy, utilising a laser granulometer to measure the change in mean weight diameter. In this project a novel approach has been developed, based on that of Zhu et al. (2009), to accurately quantify the stability of aggregates by specifically measuring their bond energies. The bond energies are measured using a combination of calorimetry and a high-powered ultrasonic probe with a computable output function. Temperature change during sonication is monitored by an array of probes, which enables calculation of the energy spent heating the system (Ph). Our novel technique suspends aggregates in the heavy liquid lithium heteropolytungstate, as opposed to water, to avoid exposing aggregates to an immeasurable disruptive energy source due to cavitation, collisions and clay swelling. Mean weight diameter is measured by a laser granulometer to monitor aggregate breakdown after successive periods of calculated ultrasonic energy input (Pi), until complete dispersion is achieved and bond
Quantifying Uncertainty of Wind Power Production Through an Analog Ensemble
Shahriari, M.; Cervone, G.
2016-12-01
The Analog Ensemble (AnEn) method is used to generate probabilistic weather forecasts that quantify the uncertainty in power estimates at hypothetical wind farm locations. The data are from the NREL Eastern Wind Dataset, which includes more than 1,300 modeled wind farms. The AnEn model uses a two-dimensional grid to estimate the probability distribution of wind speed (the predictand) given the values of predictor variables such as temperature, pressure, geopotential height, and the U- and V-components of wind. The meteorological data are taken from the NCEP GFS, which is available on a 0.25 degree grid resolution. The methodology first divides the data into two classes: a training period and a verification period. The AnEn selects a point in the verification period and searches for the best matching estimates (analogs) in the training period. The predictand values at those analogs form the ensemble prediction for the point in the verification period. The model provides a grid of wind speed values and the uncertainty (probability index) associated with each estimate. Each wind farm is associated with a probability index which quantifies the degree of difficulty of estimating wind power. Further, the uncertainty in estimation is related to other factors such as topography, land cover and wind resources. This is achieved by using a GIS system to compute the correlation between the probability index and geographical characteristics. This study has significant applications for investors in the renewable energy sector, especially wind farm developers. A lower level of uncertainty facilitates the process of submitting bids into day-ahead and real-time electricity markets. Thus, building wind farms in regions with lower levels of uncertainty will reduce real-time operational risks and create a hedge against volatile real-time prices. Further, the links between wind estimate uncertainty and factors such as topography and wind resources provide wind farm developers with valuable
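The analog search can be sketched as a nearest-neighbour lookup in predictor space. This uses hypothetical predictors and observations and a plain Euclidean distance; the operational AnEn metric is weighted and windowed in time, so this is a simplification.

```python
import math

def analog_ensemble(train_predictors, train_obs, query, k=3):
    """Return the k observed predictand values whose training-period
    predictors best match the query predictors: the analog ensemble."""
    ranked = sorted(
        zip(train_predictors, train_obs),
        key=lambda po: math.dist(po[0], query),
    )
    return [obs for _, obs in ranked[:k]]

# hypothetical predictors: (temperature degC, pressure anomaly hPa)
train_x = [(10.0, 0.1), (12.0, 0.0), (11.0, 0.2), (25.0, 1.0)]
train_speed = [5.0, 6.0, 5.5, 12.0]            # observed wind speeds (m/s)

ens = analog_ensemble(train_x, train_speed, (11.0, 0.1), k=3)
ens_mean = sum(ens) / len(ens)                 # ensemble mean forecast
ens_spread = max(ens) - min(ens)               # crude uncertainty measure
```

The spread of the ensemble is what feeds the probability index described in the abstract: points whose analogs disagree widely are harder to estimate.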
Quantifying catchment water balances and their uncertainties by expert elicitation
Sebok, Eva; Refsgaard, Jens Christian; Warmink, Jord J.; Stisen, Simon; Høgh Jensen, Karsten
2017-04-01
The increasing demand on water resources necessitates more responsible and sustainable water management, requiring a thorough understanding of hydrological processes both on the small scale and on the catchment scale. On the catchment scale, the characterization of hydrological processes is often carried out by calculating a water balance based on the principle of mass conservation in hydrological fluxes. Assuming perfect water balance closure and estimating one of these fluxes as a residual of the water balance is common practice, although this estimate will contain uncertainties related to uncertainties in the other components. Water balance closure on the catchment scale is also an issue in Denmark; thus, it was one of the research objectives of the HOBE hydrological observatory, which has been collecting data in the Skjern river catchment since 2008. Water balance components in the 1050 km2 Ahlergaarde catchment and the nested 120 km2 Holtum catchment, located in the glacial outwash plain of the Skjern catchment, were estimated using a multitude of methods. As the collected data enable a complex assessment of the uncertainty of both the individual water balance components and catchment-scale water balances, the expert elicitation approach was chosen to integrate the results of the hydrological observatory. This approach relies on the subjective opinion of experts whose available knowledge and experience about the subject allow them to integrate complex information from multiple sources. In this study 35 experts were involved in a multi-step elicitation process with the aim of (1) eliciting average annual values of water balance components for two nested catchments and quantifying the contribution of different sources of uncertainties to the total uncertainty in these average annual estimates; (2) calculating water balances for two catchments by reaching consensus among experts interacting in the form of group discussions. To address the complex problem of water balance closure
Quantifying inflow uncertainties in RANS simulations of urban pollutant dispersion
García-Sánchez, C.; Van Tendeloo, G.; Gorlé, C.
2017-07-01
Numerical simulations of flow and pollutant dispersion in urban environments have the potential to support design and policy decisions that could reduce the population's exposure to air pollution. Reynolds-averaged Navier-Stokes simulations are a common modeling technique for urban flow and dispersion, but several sources of uncertainty in the simulations can affect the accuracy of the results. The present study proposes a method to quantify the uncertainty related to variability in the inflow boundary conditions. The method is applied to predict flow and pollutant dispersion in downtown Oklahoma City and the results are compared to field measurements available from the Joint Urban 2003 measurement campaign. Three uncertain parameters that define the inflow profiles for velocity, turbulence kinetic energy and turbulence dissipation are defined: the velocity magnitude and direction, and the terrain roughness length. The uncertain parameter space is defined based on the available measurement data, and a non-intrusive propagation approach that employs 729 simulations is used to quantify the uncertainty in the simulation output. A variance based sensitivity analysis is performed to identify the most influential uncertain parameters, and it is shown that the predicted tracer concentrations are influenced by all three uncertain variables. Subsequently, we specify different probability distributions for the uncertain inflow variables based on the available measurement data and calculate the corresponding means and 95% confidence intervals for comparison with the field measurements at 35 locations in downtown Oklahoma City.
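The non-intrusive propagation step can be sketched as sampling the three uncertain inflow parameters and collecting the model outputs. This is a Monte Carlo stand-in with a toy concentration function and assumed parameter ranges; the study's 729 runs came from a structured (non-random) sampling of the parameter space, and the real model is a RANS solver, not a closed-form expression.

```python
import random
import statistics

def toy_dispersion_model(speed, direction, z0):
    """Stand-in for the RANS simulation: tracer concentration at one
    sensor as a simple decreasing function of inflow speed and roughness.
    Purely illustrative; not the study's model."""
    return 10.0 / (speed * (1.0 + z0)) * (1.0 + 0.01 * direction)

rng = random.Random(0)
samples = []
for _ in range(729):                      # same budget as the paper's propagation
    speed = rng.uniform(4.0, 8.0)         # inflow speed (m/s), assumed range
    direction = rng.uniform(-10.0, 10.0)  # wind direction offset (deg), assumed
    z0 = rng.uniform(0.05, 0.5)           # roughness length (m), assumed
    samples.append(toy_dispersion_model(speed, direction, z0))

samples.sort()
mean_c = statistics.fmean(samples)
ci95 = (samples[int(0.025 * len(samples))], samples[int(0.975 * len(samples))])
```

The mean and 95% interval computed this way are what get compared against the field measurements at each sensor location.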
Quantifying uncertainty in NIF implosion performance across target scales
Spears, Brian; Baker, K.; Brandon, S.; Buchoff, M.; Callahan, D.; Casey, D.; Field, J.; Gaffney, J.; Hammer, J.; Humbird, K.; Hurricane, O.; Kruse, M.; Munro, D.; Nora, R.; Peterson, L.; Springer, P.; Thomas, C.
2016-10-01
Ignition experiments at NIF are being performed at a variety of target scales. Smaller targets require less energy and can be fielded more frequently. Successful small target designs can be scaled up to take advantage of the full NIF laser energy and power. In this talk, we will consider a rigorous framework for scaling from smaller to larger targets. The framework uses both simulation and experimental results to build a statistical prediction of target performance as scale is increased. Our emphasis is on quantifying uncertainty in scaling predictions with the goal of identifying the dominant contributors to that uncertainty. We take as a particular example the Big Foot platform that produces a round, 0.8 scale implosion with the potential to scale to full NIF size (1.0 scale). This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
Methods for quantifying uncertainty in fast reactor analyses.
Energy Technology Data Exchange (ETDEWEB)
Fanning, T. H.; Fischer, P. F.
2008-04-07
Liquid-metal-cooled fast reactors in the form of sodium-cooled fast reactors have been successfully built and tested in the U.S. and throughout the world. However, no fast reactor has operated in the U.S. for nearly fourteen years. More importantly, the U.S. has not constructed a fast reactor in nearly 30 years. In addition to reestablishing the necessary industrial infrastructure, the development, testing, and licensing of a new, advanced fast reactor concept will likely require a significant base technology program that will rely more heavily on modeling and simulation than has been done in the past. The ability to quantify uncertainty in modeling and simulations will be an important part of any experimental program and can provide added confidence that established design limits and safety margins are appropriate. In addition, there is an increasing demand from the nuclear industry for best-estimate analysis methods to provide confidence bounds along with their results. The ability to quantify uncertainty will be an important component of modeling that is used to support design, testing, and experimental programs. Three avenues of UQ investigation are proposed. Two relatively new approaches are described which can be directly coupled to simulation codes currently being developed under the Advanced Simulation and Modeling program within the Reactor Campaign. A third approach, based on robust Monte Carlo methods, can be used in conjunction with existing reactor analysis codes as a means of verification and validation of the more detailed approaches.
Quantifying Snow Volume Uncertainty from Repeat Terrestrial Laser Scanning Observations
Gadomski, P. J.; Hartzell, P. J.; Finnegan, D. C.; Glennie, C. L.; Deems, J. S.
2014-12-01
Terrestrial laser scanning (TLS) systems are capable of providing rapid, high density, 3D topographic measurements of snow surfaces from increasing standoff distances. By differencing snow surface with snow free measurements within a common scene, snow depths and volumes can be estimated. These data can support operational water management decision-making when combined with measured or modeled snow densities to estimate basin water content, evaluate in-situ data, or drive operational hydrologic models. In addition, change maps from differential TLS scans can also be used to support avalanche control operations to quantify loading patterns for both pre-control planning and post-control assessment. However, while methods for computing volume from TLS point cloud data are well documented, a rigorous quantification of the volumetric uncertainty has yet to be presented. Using repeat TLS data collected at the Arapahoe Basin Ski Area in Summit County, Colorado, we demonstrate the propagation of TLS point measurement and cloud registration uncertainties into 3D covariance matrices at the point level. The point covariances are then propagated through a volume computation to arrive at a single volume uncertainty value. Results from two volume computation methods are compared and the influence of data voids produced by occlusions examined.
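The propagation of per-point uncertainty into a single volume uncertainty can be sketched for the simplest case: independent per-cell depth errors over a fixed cell area. The paper's treatment also carries full 3D point covariances and registration correlations, so this is a deliberately reduced illustration.

```python
import math

def snow_volume_with_uncertainty(depths, depth_sigmas, cell_area=1.0):
    """Volume as the sum of per-cell depth * cell area, with a 1-sigma
    volume uncertainty from first-order propagation under the assumption
    of independent per-cell depth errors (cell_area in m^2)."""
    volume = cell_area * sum(depths)
    sigma_v = cell_area * math.sqrt(sum(s ** 2 for s in depth_sigmas))
    return volume, sigma_v

depths = [0.5, 0.7, 0.6, 0.8]      # differenced snow depths per cell (m), hypothetical
sigmas = [0.02, 0.02, 0.03, 0.02]  # per-cell depth standard deviations (m)
vol, vol_sigma = snow_volume_with_uncertainty(depths, sigmas)
# vol = 2.6 m^3; vol_sigma = sqrt(0.0021) m^3
```

Positive correlation between cells (e.g., a registration bias shifting the whole surface) would inflate `vol_sigma` well beyond this independent-error value, which is why the rigorous covariance propagation matters.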
Quantifying uncertainty, variability and likelihood for ordinary differential equation models
LENUS (Irish Health Repository)
Weisse, Andrea Y
2010-10-28
Background: In many applications, ordinary differential equation (ODE) models are subject to uncertainty or variability in initial conditions and parameters. Both uncertainty and variability can be quantified in terms of a probability density function on the state and parameter space. Results: The partial differential equation that describes the evolution of this probability density function has a form that is particularly amenable to application of the well-known method of characteristics. The value of the density at some point in time is directly accessible by the solution of the original ODE extended by a single extra dimension (for the value of the density). This leads to simple methods for studying uncertainty, variability and likelihood, with significant advantages over more traditional Monte Carlo and related approaches, especially when studying regions with low probability. Conclusions: While such approaches based on the method of characteristics are common practice in other disciplines, their advantages for the study of biological systems have so far remained unrecognized. Several examples illustrate the performance and accuracy of the approach and its limitations.
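The extended-ODE idea can be sketched for a 1-D linear vector field: along a characteristic of the transport (Liouville) equation, the density value obeys d(rho)/dt = -rho * f'(x), so the original ODE is extended by one dimension carrying rho. The field below is hypothetical, and an RK4 integrator is written out so the sketch is self-contained.

```python
import math

A = 0.5  # decay rate of the hypothetical vector field

def f(x):        # example vector field: dx/dt = -A*x
    return -A * x

def dfdx(x):     # its divergence (f'(x)) in 1-D
    return -A

def extended_rhs(state):
    """Original ODE extended by one dimension for the density value:
    dx/dt = f(x),  d(rho)/dt = -rho * f'(x)."""
    x, rho = state
    return (f(x), -rho * dfdx(x))

def rk4_step(state, dt):
    def add(s, k, c):
        return tuple(si + c * ki for si, ki in zip(s, k))
    k1 = extended_rhs(state)
    k2 = extended_rhs(add(state, k1, dt / 2))
    k3 = extended_rhs(add(state, k2, dt / 2))
    k4 = extended_rhs(add(state, k3, dt))
    return tuple(
        s + dt / 6 * (a1 + 2 * a2 + 2 * a3 + a4)
        for s, a1, a2, a3, a4 in zip(state, k1, k2, k3, k4)
    )

state = (1.0, 1.0)        # initial point x0 and initial density value rho(x0)
dt, steps = 0.01, 100     # integrate to t = 1
for _ in range(steps):
    state = rk4_step(state, dt)
# trajectories contract (x -> exp(-A*t)) while the transported density
# grows (rho -> exp(A*t)), conserving probability
```

A single extra equation per trajectory yields exact density values along characteristics, in contrast to Monte Carlo, which needs many samples to resolve low-probability regions.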
Quantifying the uncertainties of chemical evolution studies. II. Stellar yields
Romano, D; Tosi, M; Matteucci, F
2010-01-01
This is the second paper of a series which aims at quantifying the uncertainties in chemical evolution model predictions related to the underlying model assumptions. Specifically, it deals with the uncertainties due to the choice of the stellar yields. We adopt a widely used model for the chemical evolution of the Galaxy and test the effects of changing the stellar nucleosynthesis prescriptions on the predicted evolution of several chemical species. We find that, except for a handful of elements whose nucleosynthesis in stars is well understood by now, large uncertainties still affect the model predictions. This is especially true for the majority of the iron-peak elements, but also for much more abundant species such as carbon and nitrogen. The main causes of the mismatch we find among the outputs of different models assuming different stellar yields and among model predictions and observations are: (i) the adopted location of the mass cut in models of type II supernova explosions; (ii) the adopted strength ...
Quantifying uncertainty in climate change science through empirical information theory.
Majda, Andrew J; Gershgorin, Boris
2010-08-24
Quantifying the uncertainty for the present climate and the predictions of climate change in the suite of imperfect Atmosphere Ocean Science (AOS) computer models is a central issue in climate change science. Here, a systematic approach to these issues with firm mathematical underpinning is developed through empirical information theory. An information metric to quantify AOS model errors in the climate is proposed here which incorporates both coarse-grained mean model errors as well as covariance ratios in a transformation invariant fashion. The subtle behavior of model errors with this information metric is quantified in an instructive statistically exactly solvable test model with direct relevance to climate change science including the prototype behavior of tracer gases such as CO2. Formulas for identifying the most sensitive climate change directions using statistics of the present climate or an AOS model approximation are developed here; these formulas just involve finding the eigenvector associated with the largest eigenvalue of a quadratic form computed through suitable unperturbed climate statistics. These climate change concepts are illustrated on a statistically exactly solvable one-dimensional stochastic model with relevance for low frequency variability of the atmosphere. Viable algorithms for implementation of these concepts are discussed throughout the paper.
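Finding the eigenvector associated with the largest eigenvalue of the quadratic form can be sketched with plain power iteration. The 2x2 matrix below is hypothetical, standing in for the quadratic form computed from unperturbed climate statistics.

```python
def matvec(m, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(mij * vj for mij, vj in zip(row, v)) for row in m]

def power_iteration(m, iters=200):
    """Dominant eigenvalue/eigenvector of a symmetric matrix with a
    well-separated largest eigenvalue, by repeated multiplication
    and normalization."""
    v = [1.0] + [0.0] * (len(m) - 1)
    for _ in range(iters):
        w = matvec(m, v)
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    lam = sum(vi * wi for vi, wi in zip(v, matvec(m, v)))
    return lam, v

# hypothetical 2x2 quadratic form from unperturbed climate statistics
Q = [[2.0, 1.0],
     [1.0, 2.0]]
lam, direction = power_iteration(Q)  # lam -> 3, direction -> (1,1)/sqrt(2)
```

The returned `direction` is the analogue of the "most sensitive climate change direction" in the abstract: the perturbation direction that maximizes the quadratic form.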
Three-dimensional laser scanning technique to quantify aggregate and ballast shape properties
CSIR Research Space (South Africa)
Anochie-Boateng, Joseph
2013-06-01
methods towards more accurate and automated techniques to quantify aggregate shape properties. This paper validates a new flakiness index equation using three-dimensional (3-D) laser scanning data of aggregate and ballast materials obtained from...
Quantifying Uncertainty in Brain Network Measures using Bayesian Connectomics
Directory of Open Access Journals (Sweden)
Ronald Johannes Janssen
2014-10-01
The wiring diagram of the human brain can be described in terms of graph measures that characterize structural regularities. These measures require an estimate of whole-brain structural connectivity, for which one may resort to deterministic or thresholded probabilistic streamlining procedures. While these procedures have provided important insights about the characteristics of human brain networks, they ultimately rely on unwarranted assumptions such as those of noise-free data or the use of an arbitrary threshold. Therefore, the resulting structural connectivity estimates as well as derived graph measures fail to fully take into account the inherent uncertainty in the structural estimate. In this paper, we illustrate an easy way of obtaining posterior distributions over graph metrics using Bayesian inference. It is shown that this posterior distribution can be used to quantify uncertainty about graph-theoretical measures at the single subject level, thereby providing a more nuanced view of the graph-theoretical properties of human brain connectivity. We refer to this model-based approach to connectivity analysis as Bayesian connectomics.
Quantifying uncertainty contributions for fibre optic power meter calibrations
CSIR Research Space (South Africa)
Nel, M
2009-09-01
A fibre optic power meter calibration has several uncertainty contributors associated with it. In order to assign a realistic uncertainty to the calibration result it is necessary to realistically estimate the magnitude of each uncertainty...
Quantifying uncertainties of permafrost carbon–climate feedbacks
Directory of Open Access Journals (Sweden)
E. J. Burke
2017-06-01
The land surface models JULES (Joint UK Land Environment Simulator; two versions) and ORCHIDEE-MICT (Organizing Carbon and Hydrology in Dynamic Ecosystems), each with a revised representation of permafrost carbon, were coupled to the Integrated Model Of Global Effects of climatic aNomalies (IMOGEN) intermediate-complexity climate and ocean carbon uptake model. IMOGEN calculates atmospheric carbon dioxide (CO2) and local monthly surface climate for a given emission scenario, with the land–atmosphere CO2 flux exchange from either JULES or ORCHIDEE-MICT. These simulations include feedbacks associated with permafrost carbon changes in a warming world. Both IMOGEN–JULES and IMOGEN–ORCHIDEE-MICT were forced by historical and three alternative future-CO2-emission scenarios. Those simulations were performed for different climate sensitivities and regional climate change patterns based on 22 different Earth system models (ESMs) used for CMIP3 (phase 3 of the Coupled Model Intercomparison Project), allowing us to explore climate uncertainties in the context of permafrost carbon–climate feedbacks. Three future emission scenarios consistent with three representative concentration pathways were used: RCP2.6, RCP4.5 and RCP8.5. Paired simulations with and without frozen carbon processes were required to quantify the impact of the permafrost carbon feedback on climate change. The additional warming from the permafrost carbon feedback is between 0.2 and 12% of the change in the global mean temperature (ΔT) by the year 2100 and 0.5 and 17% of ΔT by 2300, with these ranges reflecting differences in land surface models, climate models and emissions pathway. As a percentage of ΔT, the permafrost carbon feedback has a greater impact on the low-emissions scenario (RCP2.6) than on the higher-emissions scenarios, suggesting that permafrost carbon should be taken into account when evaluating scenarios of heavy mitigation and stabilization
Quantifying Uncertainties in Land Surface Microwave Emissivity Retrievals
Tian, Yudong; Peters-Lidard, Christa D.; Harrison, Kenneth W.; Prigent, Catherine; Norouzi, Hamidreza; Aires, Filipe; Boukabara, Sid-Ahmed; Furuzawa, Fumie A.; Masunaga, Hirohiko
2012-01-01
Uncertainties in the retrievals of microwave land surface emissivities were quantified over two types of land surfaces: desert and tropical rainforest. Retrievals from satellite-based microwave imagers, including SSM/I, TMI and AMSR-E, were studied. Our results show that there are considerable differences between the retrievals from different sensors and from different groups over these two land surface types. In addition, the mean emissivity values show different spectral behavior across the frequencies. With the true emissivity assumed largely constant over both of the two sites throughout the study period, the differences are largely attributed to the systematic and random errors in the retrievals. Generally these retrievals tend to agree better at lower frequencies than at higher ones, with systematic differences ranging 1%-4% (3-12 K) over desert and 1%-7% (3-20 K) over rainforest. The random errors within each retrieval dataset are in the range of 0.5%-2% (2-6 K). In particular, at 85.5/89.0 GHz, there are very large differences between the different retrieval datasets, and within each retrieval dataset itself. Further investigation reveals that these differences are most likely caused by rain/cloud contamination, which can lead to random errors up to 10-17 K under the most severe conditions.
Quantifying Uncertainties in Land-Surface Microwave Emissivity Retrievals
Tian, Yudong; Peters-Lidard, Christa D.; Harrison, Kenneth W.; Prigent, Catherine; Norouzi, Hamidreza; Aires, Filipe; Boukabara, Sid-Ahmed; Furuzawa, Fumie A.; Masunaga, Hirohiko
2013-01-01
Uncertainties in the retrievals of microwave land-surface emissivities are quantified over two types of land surfaces: desert and tropical rainforest. Retrievals from satellite-based microwave imagers, including the Special Sensor Microwave Imager, the Tropical Rainfall Measuring Mission Microwave Imager, and the Advanced Microwave Scanning Radiometer for Earth Observing System, are studied. Our results show that there are considerable differences between the retrievals from different sensors and from different groups over these two land-surface types. In addition, the mean emissivity values show different spectral behavior across the frequencies. With the true emissivity assumed largely constant over both of the two sites throughout the study period, the differences are largely attributed to the systematic and random errors in the retrievals. Generally, these retrievals tend to agree better at lower frequencies than at higher ones, with systematic differences ranging 1%-4% (3-12 K) over desert and 1%-7% (3-20 K) over rainforest. The random errors within each retrieval dataset are in the range of 0.5%-2% (2-6 K). In particular, at 85.5/89.0 GHz, there are very large differences between the different retrieval datasets, and within each retrieval dataset itself. Further investigation reveals that these differences are most likely caused by rain/cloud contamination, which can lead to random errors up to 10-17 K under the most severe conditions.
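The split into systematic and random components used in these two studies can be sketched as the mean and standard deviation of the differences between two retrieval series at a site whose true emissivity is assumed constant. The emissivity values below are hypothetical, not the study's data.

```python
import math

def systematic_and_random_difference(e1, e2):
    """Mean offset (systematic difference) and sample standard deviation
    of the residual offsets (random component) between two emissivity
    retrieval series for the same site, assuming constant true emissivity."""
    n = len(e1)
    diffs = [a - b for a, b in zip(e1, e2)]
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)
    return mean, math.sqrt(var)

# hypothetical monthly emissivity retrievals from two sensors over desert
sensor_a = [0.92, 0.93, 0.92, 0.94]
sensor_b = [0.90, 0.90, 0.89, 0.91]
bias, random_err = systematic_and_random_difference(sensor_a, sensor_b)
# bias = 0.0275 (systematic), random_err = 0.005 (random)
```

Multiplying these fractional values by a typical surface brightness temperature (~300 K) converts them to the kelvin-equivalent ranges quoted in the abstracts.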
Quantifying uncertainty in LCA-modelling of waste management systems
DEFF Research Database (Denmark)
Clavreul, Julie; Guyonnet, D.; Christensen, Thomas Højlund
2012-01-01
Uncertainty analysis in LCA studies has been subject to major progress over the last years. In the context of waste management, various methods have been implemented but a systematic method for uncertainty analysis of waste-LCA studies is lacking. The objective of this paper is (1) to present...... the sources of uncertainty specifically inherent to waste-LCA studies, (2) to select and apply several methods for uncertainty analysis and (3) to develop a general framework for quantitative uncertainty assessment of LCA of waste management systems. The suggested method is a sequence of four steps combining...
Ways forward in quantifying data uncertainty in geological databases
Kint, Lars; Chademenos, Vasileios; De Mol, Robin; Kapel, Michel; Lagring, Ruth; Stafleu, Jan; van Heteren, Sytze; Van Lancker, Vera
2017-04-01
Issues of compatibility of geological data resulting from the merging of many different data sources and time periods may jeopardize harmonization of data products. Important progress has been made due to increasing data standardization, e.g., at a European scale through the SeaDataNet and Geo-Seas data management infrastructures. Common geological data standards are unambiguously defined, avoiding semantic overlap in geological data and associated metadata. Quality flagging is also applied increasingly, though ways of further propagating this information into data products are still in their infancy. For the Belgian and southern Netherlands part of the North Sea, databases are now rigorously re-analyzed with a view to quantifying quality flags in terms of uncertainty to be propagated through a 3D voxel model of the subsurface (https://odnature.naturalsciences.be/tiles/). An approach is worked out to consistently account for differences in positioning, sampling gear, analysis procedures and vintage. The flag scaling is used in the interpolation process of geological data, but will also be used when visualizing the suitability of geological resources in a decision support system. Expert knowledge is systematically revisited so as to avoid inappropriate use of the flag scaling process. The quality flagging is also important when communicating results to end-users. Therefore, an open data policy in combination with several processing tools will be at the heart of a new Belgian geological data portal as a platform for knowledge building (KB) and knowledge management (KM) serving marine geoscience, the policy community and the public at large.
Quantifying uncertainties in precipitation: a case study from Greece
Directory of Open Access Journals (Sweden)
C. Anagnostopoulou
2008-04-01
The main objective of the present study was the examination and quantification of the uncertainties in the precipitation time series over the Greek area, for a 42-year time period. The uncertainty index applied to the rainfall data is a combination (total) of the departures of the rainfall season length, of the median data of the accumulated percentages and of the total amounts of rainfall. Results of the study indicated that all the stations are characterized, on an average basis, by medium to high uncertainty. The stations that presented an increasing rainfall uncertainty were the ones located mainly in the continental parts of the study region. From the temporal analysis of the uncertainty index, it was demonstrated that the greatest percentage of the years, for all the station time series, was characterized by low to high uncertainty (intermediate categories of the index). Most of the results of the uncertainty index for the Greek region are similar to the corresponding results of various stations all over the European region.
Quantifying uncertainty in NDSHA estimates due to earthquake catalogue
Magrin, Andrea; Peresan, Antonella; Vaccari, Franco; Panza, Giuliano
2014-05-01
The procedure for neo-deterministic seismic zoning, NDSHA, is based on the calculation of synthetic seismograms by the modal summation technique. This approach makes use of information about the space distribution of large magnitude earthquakes, which can be defined based on seismic history and seismotectonics, as well as incorporating information from a wide set of geological and geophysical data (e.g., morphostructural features and ongoing deformation processes identified by earth observations). Hence the method does not make use of attenuation models (GMPEs), which may be unable to account for the complexity of the product of the seismic source tensor and the medium Green's function, and are often poorly constrained by the available observations. NDSHA defines the hazard from the envelope of the values of ground motion parameters determined considering a wide set of scenario earthquakes; accordingly, the simplest outcome of this method is a map where the maximum of a given seismic parameter is associated with each site. In NDSHA, uncertainties are not statistically treated as in PSHA, where aleatory uncertainty is traditionally handled with probability density functions (e.g., for magnitude and distance random variables) and epistemic uncertainty is considered by applying logic trees that allow the use of alternative models and alternative parameter values of each model; instead, the treatment of uncertainties is performed by sensitivity analyses for key modelling parameters. Constraining the uncertainty related to a particular input parameter is an important component of the procedure. The input parameters must account for the uncertainty in the prediction of fault radiation and in the use of Green's functions for a given medium. A key parameter is the magnitude of the sources used in the simulation, which is based on catalogue information, seismogenic zones and seismogenic nodes. Because the largest part of the existing catalogues is based on macroseismic intensity, a rough estimate
Quantifying uncertainty on sediment loads using bootstrap confidence intervals
Slaets, Johanna I. F.; Piepho, Hans-Peter; Schmitter, Petra; Hilger, Thomas; Cadisch, Georg
2017-01-01
Load estimates are more informative than constituent concentrations alone, as they allow quantification of on- and off-site impacts of environmental processes concerning pollutants, nutrients and sediment, such as soil fertility loss, reservoir sedimentation and irrigation channel siltation. While statistical models used to predict constituent concentrations have developed considerably over the last few years, measures of uncertainty on constituent loads are rarely reported. Loads are the product of two predictions, constituent concentration and discharge, integrated over a time period, which makes it far from straightforward to produce a standard error or a confidence interval. In this paper, a linear mixed model is used to estimate sediment concentrations. A bootstrap method is then developed that accounts for the uncertainty in the concentration and discharge predictions, allows for temporal correlation in the constituent data, and can be used when data transformations are required. The method was tested for a small watershed in Northwest Vietnam for the period 2010-2011. The results showed that the confidence intervals were asymmetric, with the highest uncertainty in the upper limit: a load of 6262 Mg year-1 had a 95 % confidence interval of (4331, 12 267) in 2010, and a load of 5543 Mg year-1 had an interval of (3593, 8975) in 2011. Additionally, the approach demonstrated that direct estimates from the data were biased downwards compared to bootstrap median estimates. These results imply that constituent loads predicted from regression-type water quality models could frequently be underestimating sediment yields and their environmental impact.
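The percentile-bootstrap idea behind such intervals can be sketched in a few lines. The function below is a simplified illustration, not the authors' implementation: it resamples paired concentration-discharge predictions with replacement and, unlike the paper's method, ignores temporal correlation; all names and parameter values are hypothetical.

```python
import random

def bootstrap_load_ci(conc, disch, n_boot=2000, alpha=0.05, seed=42):
    """Percentile-bootstrap confidence interval for a load computed as
    the sum over time steps of concentration * discharge.

    conc, disch: paired per-time-step predictions (equal length).
    Time steps are resampled with replacement; this simple sketch
    ignores the temporal correlation that the paper's method retains.
    """
    rng = random.Random(seed)
    n = len(conc)
    est = sum(c * q for c, q in zip(conc, disch))  # point estimate
    loads = sorted(
        sum(conc[i] * disch[i] for i in (rng.randrange(n) for _ in range(n)))
        for _ in range(n_boot)
    )
    lo = loads[int((alpha / 2) * n_boot)]
    hi = loads[int((1 - alpha / 2) * n_boot) - 1]
    return est, (lo, hi)
```

Because a load is a sum of skewed products, the resulting interval is typically asymmetric about the point estimate, consistent with the asymmetry the authors report.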
Quantifying uncertainties in primordial nucleosynthesis without Monte Carlo simulations
Fiorentini, G; Sarkar, S; Villante, F L
1998-01-01
We present a simple method for determining the (correlated) uncertainties of the light element abundances expected from big bang nucleosynthesis, which avoids the need for lengthy Monte Carlo simulations. Our approach helps to clarify the role of the different nuclear reactions contributing to a particular elemental abundance and makes it easy to implement energy-independent changes in the measured reaction rates. As an application, we demonstrate how this method simplifies the statistical estimation of the nucleon-to-photon ratio through comparison of the standard BBN predictions with the observationally inferred abundances.
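The core of such a Monte-Carlo-free approach is first-order (linearized) error propagation of input uncertainties to the output. The sketch below is a generic illustration of that idea, not the authors' code; it assumes independent inputs (the correlations the paper treats are omitted) and uses finite-difference sensitivities.

```python
def propagate(f, x, sigma, eps=1e-6):
    """Linear (first-order) error propagation: approximate the standard
    deviation of f(x) from independent input uncertainties `sigma`,
    using forward finite differences for the sensitivities df/dx_i.
    sigma_y^2 = sum_i (df/dx_i * sigma_i)^2.
    """
    y0 = f(x)
    var = 0.0
    for i in range(len(x)):
        xp = list(x)
        xp[i] += eps
        var += (((f(xp) - y0) / eps) * sigma[i]) ** 2
    return y0, var ** 0.5
```

For a product f = x0*x1 at x = (2, 3), the sensitivities are 3 and 2, so sigma_y = sqrt((3*0.1)^2 + (2*0.2)^2) = 0.5, which the sketch reproduces.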
Quantifying uncertainty and computational complexity for pore-scale simulations
Chen, C.; Yuan, Z.; Wang, P.; Yang, X.; Zhenyan, L.
2016-12-01
Pore-scale simulation is an essential tool for understanding the complex physical processes in many environmental problems, from multi-phase flow in the subsurface to fuel cells. However, in practice, factors such as sample heterogeneity, data sparsity and, in general, our insufficient knowledge of the underlying process render many simulation parameters, and hence the prediction results, uncertain. Meanwhile, most pore-scale simulations (in particular, direct numerical simulation) incur high computational cost due to finely resolved spatio-temporal scales, which further limits data/sample collection. To address these challenges, we propose a novel framework based on generalized polynomial chaos (gPC) and build a surrogate model representing the essential features of the underlying system. Specifically, we apply the framework to analyze the uncertainties of the system behavior based on a series of pore-scale numerical experiments, such as flow and reactive transport in 2D heterogeneous porous media and 3D packed beds. Compared with recent pore-scale uncertainty quantification studies using Monte Carlo techniques, our new framework requires fewer realizations and hence considerably reduces the overall computational cost, while maintaining the desired accuracy.
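A one-dimensional version of the gPC idea can be illustrated with a probabilists'-Hermite expansion in a standard-normal input: fit the surrogate once from a modest number of model runs, then read output statistics directly off the coefficients. This is a minimal 1-D sketch under those assumptions, not the multi-dimensional framework of the study.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander

def pce_surrogate(model, degree=4, n_train=50, seed=0):
    """Fit a 1-D polynomial chaos surrogate in the probabilists' Hermite
    basis for a model of one standard-normal input, then read the output
    mean and variance off the coefficients (orthogonality: E[He_k^2] = k!).
    """
    rng = np.random.default_rng(seed)
    xi = rng.standard_normal(n_train)            # training inputs
    y = np.array([model(x) for x in xi])         # "expensive" model runs
    V = hermevander(xi, degree)                  # columns He_0 .. He_degree
    coef, *_ = np.linalg.lstsq(V, y, rcond=None)
    fact = np.cumprod([1] + list(range(1, degree + 1)))  # k! for k = 0..degree
    mean = float(coef[0])
    var = float(np.sum(coef[1:] ** 2 * fact[1:]))
    surrogate = lambda x: hermevander(np.atleast_1d(x), degree) @ coef
    return surrogate, mean, var
```

Once fitted, statistics come from the coefficients alone, with no further model runs: for y = x^2 with x standard normal, the expansion x^2 = He_2(x) + 1 gives mean 1 and variance 2 exactly.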
Moving Beyond 2% Uncertainty: A New Framework for Quantifying Lidar Uncertainty
Energy Technology Data Exchange (ETDEWEB)
Newman, Jennifer F.; Clifton, Andrew
2017-03-08
Remote sensing of wind using lidar is revolutionizing wind energy. However, current generations of wind lidar are ascribed a climatic value of uncertainty, which is based on a poor description of lidar sensitivity to external conditions. In this presentation, we show why it is important to consider the complete lidar measurement process when defining the measurement uncertainty, which in turn offers the ability to define a much more granular and dynamic measurement uncertainty. This approach is a progression from the 'white box' lidar uncertainty method.
Quantifying uncertainty in the phylogenetics of Australian numeral systems.
Zhou, Kevin; Bowern, Claire
2015-09-22
Researchers have long been interested in the evolution of culture and the ways in which change in cultural systems can be reconstructed and tracked. Within the realm of language, these questions are increasingly investigated with Bayesian phylogenetic methods. However, such work in cultural phylogenetics could be improved by more explicit quantification of reconstruction and transition probabilities. We apply such methods to numerals in the languages of Australia. As a large phylogeny with almost universal 'low-limit' systems, Australian languages are ideal for investigating numeral change over time. We reconstruct the most likely extent of the system at the root and use that information to explore the ways numerals evolve. We show that these systems do not increment serially, but most commonly vary their upper limits between 3 and 5. While there is evidence for rapid system elaboration beyond the lower limits, languages lose numerals as well as gain them. We investigate the ways larger numerals build on smaller bases, and show that there is a general tendency to both gain and replace 4 by combining 2 + 2 (rather than inventing a new unanalysable word 'four'). We develop a series of methods for quantifying and visualizing the results.
Directory of Open Access Journals (Sweden)
P. Pokhrel
2012-10-01
Hydrological post-processors refer here to statistical models that are applied to hydrological model predictions to further reduce prediction errors and to quantify remaining uncertainty. For streamflow predictions, post-processors are generally applied at daily or sub-daily time scales. For many applications, such as seasonal streamflow forecasting and water resources assessment, monthly streamflow volumes are of primary interest. While it is possible to aggregate post-processed daily or sub-daily predictions to monthly time scales, the monthly volumes so produced may not have the least achievable errors and may not have reliable uncertainty distributions. Post-processing directly at the monthly time scale is likely to be more effective. In this study, we investigate the use of a Bayesian joint probability (BJP) modelling approach to directly post-process model predictions of monthly streamflow volumes. We apply the BJP post-processor to 18 catchments located in eastern Australia and demonstrate its effectiveness in reducing prediction errors and quantifying prediction uncertainty.
Wang, Jian-Xun; Xiao, Heng
2015-01-01
Simulations based on Reynolds-Averaged Navier-Stokes (RANS) models have been used to support high-consequence decisions related to turbulent flows. Apart from the deterministic model predictions, decision makers are often equally concerned about the confidence in those predictions. Among the uncertainties in RANS simulations, the model-form uncertainty is an important or even dominant source. Therefore, quantifying and reducing the model-form uncertainties in RANS simulations is of critical importance for making risk-informed decisions. Researchers in the statistics community have made efforts on this issue by treating numerical models as black boxes. However, this physics-neutral approach is not the most efficient use of data and is not practical for most engineering problems. Recently, we proposed an open-box Bayesian framework for quantifying and reducing model-form uncertainties in RANS simulations by incorporating observation data and prior physical knowledge. It can incorporate the information from the vast...
Quantifying Uncertainty from Computational Factors in Simulations of a Model Ballistic System
2017-08-01
ARL-TR-8074, AUG 2017, US Army Research Laboratory. Quantifying Uncertainty from Computational Factors in Simulations of a Model Ballistic System, by Daniel J Hornbaker, Weapons and Materials Research...
A meta-analytic approach to quantifying scientific uncertainty in stock assessments
Ralston, Stephen; Punt, André E; Hamel, Owen S.; DeVore, John D.; Conser, Ramon J.
2011-01-01
Quantifying scientific uncertainty when setting total allowable catch limits for fish stocks is a major challenge, but it has been a requirement in the United States since changes to national fisheries legislation. Multiple sources of error are readily identifiable, including estimation error, model specification error, forecast error, and errors associated with the definition and estimation of reference points. Our focus here, however, is to quantify the influence of estimation error and model ...
Sadef, Yumna; Poulsen, Tjalfe G; Bester, Kai
2014-05-01
Reductions in measurement uncertainty for organic micro-pollutant concentrations in full-scale compost piles, achieved through comprehensive sampling and by allowing equilibration time before sampling, were quantified. Results showed that both the application of a comprehensive sampling procedure (involving sample crushing) and allowing one week of equilibration time before sampling reduce measurement uncertainty by about 50%. Results further showed that for measurements carried out on samples collected using the comprehensive procedure, measurement uncertainty was associated exclusively with the analytical methods applied. Statistical analyses confirmed that these results were significant at the 95% confidence level. The overall implications of these results are (1) that it is possible to eliminate the uncertainty associated with material inhomogeneity, and (2) that in order to reduce uncertainty, the sampling procedure is very important early in the composting process but less so later in the process.
Göhler, Maren; Mai, Juliane; Zacharias, Steffen; Cuntz, Matthias
2015-04-01
Pedotransfer functions are often used to estimate soil water retention, an important physical property of soils, and quantifying their uncertainty is therefore of high interest. Three independent sources of uncertainty in pedotransfer functions are analysed using a probabilistic approach: (1) uncertainty resulting from the limited data base available for pedotransfer function calibration, (2) uncertainty arising from unknown errors in the measurements used to develop the pedotransfer functions, and (3) uncertainty arising from the application of the pedotransfer functions in a modeling procedure using soil maps with textural classifications. The third uncertainty, arising from applying the functions to random textural compositions, appears to be the most influential for water retention estimates, especially for soil classes where only sparse calibration data were available. The bulk density strongly influences the variability in the saturated water content and spatial variations in soil moisture. Furthermore, the propagation of the uncertainty arising from random sampling of the calibration data set has a large effect on soil moisture computed with a mesoscale hydrologic model: evapotranspiration is the most affected model output, whereas discharge shows only minor variation. The analysis of the measurement error remains difficult due to the high correlation between the pedotransfer function coefficients.
Use of Paired Simple and Complex Models to Reduce Predictive Bias and Quantify Uncertainty
DEFF Research Database (Denmark)
Doherty, John; Christensen, Steen
2011-01-01
into the costs of model simplification, and into how some of these costs may be reduced. It then describes a methodology for paired model usage through which predictive bias of a simplified model can be detected and corrected, and postcalibration predictive uncertainty can be quantified. The methodology...
A Random Matrix Approach for Quantifying Model-Form Uncertainties in Turbulence Modeling
Xiao, Heng; Ghanem, Roger G
2016-01-01
With the ever-increasing use of Reynolds-Averaged Navier--Stokes (RANS) simulations in mission-critical applications, the quantification of model-form uncertainty in RANS models has attracted attention in the turbulence modeling community. Recently, a physics-based, nonparametric approach for quantifying model-form uncertainty in RANS simulations has been proposed, where Reynolds stresses are projected to physically meaningful dimensions and perturbations are introduced only in the physically realizable limits. However, a challenge associated with this approach is to assess the amount of information introduced in the prior distribution and to avoid imposing unwarranted constraints. In this work we propose a random matrix approach for quantifying model-form uncertainties in RANS simulations with the realizability of the Reynolds stress guaranteed. Furthermore, the maximum entropy principle is used to identify the probability distribution that satisfies the constraints from available information but without int...
Quantifying statistical uncertainties in ab initio nuclear physics using Lagrange multipliers
Carlsson, B D
2016-01-01
Theoretical predictions need quantified uncertainties for a meaningful comparison to experimental results. This is an idea which presently permeates the field of theoretical nuclear physics. In light of the recent progress in estimating theoretical uncertainties in ab initio nuclear physics, we here present and compare methods for evaluating the statistical part of the uncertainties. A special focus is put on the (for the field) novel method of Lagrange multipliers (LM). Uncertainties from the fit of the nuclear interaction to experimental data are propagated to a few observables in light-mass nuclei to highlight any differences between the presented methods. The main conclusion is that the LM method is more robust, while covariance-based methods are less demanding in their evaluation.
Directory of Open Access Journals (Sweden)
C.-F. Ni
2011-03-01
This study presents a numerical first-order spectral model to quantify flow and remediation zone uncertainties for partially opened wells in heterogeneous aquifers. Taking advantage of spectral theories in resolving unmodeled small-scale variability in hydraulic conductivity (K), the presented nonstationary spectral method (NSM) can efficiently estimate flow uncertainties, including hydraulic heads and Darcy velocities in the r- and z-directions in a cylindrical coordinate system. The velocity uncertainties, combined with a particle backward-tracking algorithm, are then used to estimate stochastic remediation zones for scenarios with partially opened well screens. In this study the flow and remediation zone uncertainties obtained by NSM were first compared with those obtained by Monte Carlo simulations (MCS). A layered aquifer with different geometric means of K and screen locations was then illustrated with the developed NSM. To compare NSM flow and remediation zone uncertainties with those of MCS, three different small-scale K variances and correlation lengths were considered for illustration purposes. The MCS remediation zones for different degrees of heterogeneity were presented with the uncertainty clouds obtained from 200 equally likely MCS realizations. The simulation results reveal that the first-order NSM solutions agree well with those of MCS for partially opened wells. The flow uncertainties obtained by NSM and MCS are nearly identical for aquifers with small ln K variances and correlation lengths. Based on the test examples, the remediation zone uncertainties are not sensitive to changes in the small-scale ln K correlation lengths. However, the remediation zone uncertainties increase significantly with increasing small-scale ln K variances. The largest displacement uncertainties may differ by several meters when the ln K variances increase from 0.1 to 1.0. Such results are
Analytical algorithms to quantify the uncertainty in remaining useful life prediction
Sankararaman, S.; Daigle, M.; Saxena, A.; Goebel, K.
This paper investigates the use of analytical algorithms to quantify the uncertainty in the remaining useful life (RUL) estimate of components used in aerospace applications. The prediction of RUL is affected by several sources of uncertainty, and it is important to systematically quantify their combined effect by computing the uncertainty in the RUL prediction in order to aid risk assessment, risk mitigation, and decision-making. While sampling-based algorithms have conventionally been used for quantifying the uncertainty in RUL, analytical algorithms are computationally cheaper and sometimes better suited for online decision-making. While exact analytical algorithms are available only for certain special cases (e.g., linear models with Gaussian variables), effective approximations can be made using the first-order second moment method (FOSM), the first-order reliability method (FORM), and the inverse first-order reliability method (Inverse FORM). These methods can be used not only to calculate the entire probability distribution of RUL but also to obtain probability bounds on RUL. This paper explains these three methods in detail and illustrates them using the state-space model of a lithium-ion battery.
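For a concrete feel of the FOSM approximation, consider a linear degradation model d(t) = d0 + r*t with failure threshold D, so RUL = (D - d0)/r. The sketch below is a hypothetical example (inputs assumed independent), not the paper's battery model: it propagates input means and standard deviations through a first-order Taylor expansion about the mean point.

```python
import math

def fosm_rul(d0, r, D, sd_d0, sd_r):
    """First-order second-moment (FOSM) estimate of remaining useful
    life for a linear degradation model d(t) = d0 + r*t with failure
    threshold D (so RUL = (D - d0) / r). Inputs are assumed independent;
    mean and standard deviation come from a first-order Taylor expansion
    about the input means. Model and parameters are illustrative only.
    """
    mu = (D - d0) / r
    d_d0 = -1.0 / r                # dRUL/dd0 at the mean
    d_r = -(D - d0) / r ** 2       # dRUL/dr at the mean
    var = (d_d0 * sd_d0) ** 2 + (d_r * sd_r) ** 2
    return mu, math.sqrt(var)
```

With d0 = 0, r = 2, D = 10, the mean RUL is 5 and the variance combines the two sensitivity terms; FORM and Inverse FORM refine this by working with the most probable failure point rather than the mean.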
Aggregate Measures of Watershed Health from Reconstructed Water Quality Data with Uncertainty.
Hoque, Yamen M; Tripathi, Shivam; Hantush, Mohamed M; Govindaraju, Rao S
2016-03-01
Risk-based measures such as reliability, resilience, and vulnerability (R-R-V) have the potential to serve as watershed health assessment tools. Recent research has demonstrated the applicability of such indices for water quality (WQ) constituents such as total suspended solids and nutrients on an individual basis. However, the calculations can become tedious when time-series data for several WQ constituents have to be evaluated individually. Also, comparisons between locations with different sets of constituent data can prove difficult. In this study, data reconstruction using a relevance vector machine algorithm was combined with dimensionality reduction via variational Bayesian noisy principal component analysis to reconstruct and condense sparse multidimensional WQ data sets into a single time series. The methodology allows incorporation of uncertainty in both the reconstruction and dimensionality-reduction steps. The R-R-V values were calculated using the aggregate time series at multiple locations within two Indiana watersheds. Results showed that uncertainty present in the reconstructed WQ data set propagates to the aggregate time series and subsequently to the aggregate R-R-V values as well. This data-driven approach to calculating aggregate R-R-V values was found to be useful for providing a composite picture of watershed health. Aggregate R-R-V values also enabled comparison between locations with different types of WQ data.
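Once an aggregate time series and a threshold are in hand, the R-R-V indices themselves are short calculations. The following is a minimal sketch of Hashimoto-style definitions as commonly used for water quality, not the authors' exact formulation, and the uncertainty propagation step is omitted.

```python
def rrv(series, threshold):
    """Reliability-resilience-vulnerability indices for a water-quality
    time series in which values above `threshold` count as failures.

    reliability: fraction of time steps in compliance.
    resilience: probability of recovering at the next step, given failure.
    vulnerability: mean exceedance magnitude during failure.
    """
    fail = [v > threshold for v in series]
    n = len(series)
    reliability = 1 - sum(fail) / n
    # transitions failure -> compliance, over failures that have a next step
    trans = sum(1 for a, b in zip(fail, fail[1:]) if a and not b)
    n_fail = sum(fail[:-1])
    resilience = trans / n_fail if n_fail else 1.0
    exceed = [v - threshold for v, f in zip(series, fail) if f]
    vulnerability = sum(exceed) / len(exceed) if exceed else 0.0
    return reliability, resilience, vulnerability
```

Applying such a function to each member of an ensemble of reconstructed series, rather than to a single series, is what carries the reconstruction uncertainty through to the aggregate R-R-V values.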
Hosseiny, S. M. H.; Zarzar, C.; Gomez, M.; Siddique, R.; Smith, V.; Mejia, A.; Demir, I.
2016-12-01
The National Water Model (NWM) provides a platform for operationalizing nationwide flood inundation forecasting and mapping. The ability to model flood inundation on a national scale will provide invaluable information to decision makers and local emergency officials. Often, forecast products use deterministic model output to provide a visual representation of a single inundation scenario, which is subject to uncertainty from various sources. While this provides a straightforward representation of the potential inundation, the inherent uncertainty associated with the model output should be considered to optimize this tool for decision-making support. The goal of this study is to produce ensembles of future flood inundation conditions (i.e., extent, depth, and velocity) to spatially quantify and visually assess the uncertainties associated with the predicted flood inundation maps. The setting for this study is a highly urbanized watershed along Darby Creek in Pennsylvania. A forecasting framework coupling the NWM with multiple hydraulic models was developed to produce a suite of ensembles of future flood inundation predictions. Time-lagged ensembles from the NWM short-range forecasts were used to account for uncertainty associated with the hydrologic forecasts. The forecasts from the NWM were input to the iRIC and HEC-RAS two-dimensional software packages, from which water extent, depth, and flow velocity were output. Quantifying the agreement between the output ensembles for each forecast grid provided the uncertainty metrics for predicted flood water inundation extent, depth, and flow velocity. For visualization, a series of flood maps displaying flood extent, water depth, and flow velocity, along with the underlying uncertainty associated with each forecasted variable, were produced. The results from this study demonstrate the potential to incorporate and visualize model uncertainties in flood inundation maps in order to identify high flood risk zones.
Quantifying the interplay between environmental and social effects on aggregated-fish dynamics
Capello, Manuela; Cotel, Pascal; Deneubourg, Jean-Louis; Dagorn, Laurent
2011-01-01
Demonstrating and quantifying the respective roles of social interactions and external stimuli governing fish dynamics is key to understanding fish spatial distribution. While seminal studies have contributed to our understanding of fish spatial organization in schools, little experimental information is available on fish in their natural environment, where aggregations often occur in the presence of spatial heterogeneities. Here, we applied novel modeling approaches coupled to accurate acoustic tracking to study the dynamics of a group of gregarious fish in a heterogeneous environment. To this purpose, we acoustically tracked, with submeter resolution, the positions of twelve small pelagic fish (Selar crumenophthalmus) in the presence of an anchored floating object, which constitutes a point of attraction for several fish species. We constructed a field-based model of aggregated-fish dynamics, deriving effective interactions for both social and external stimuli from experiments. We tuned the model parameters that...
A Bayesian approach to quantifying uncertainty from experimental noise in DEER spectroscopy
Edwards, Thomas H.; Stoll, Stefan
2016-09-01
Double Electron-Electron Resonance (DEER) spectroscopy is a solid-state pulse Electron Paramagnetic Resonance (EPR) experiment that measures distances between unpaired electrons, most commonly between protein-bound spin labels separated by 1.5-8 nm. From the experimental data, a distance distribution P(r) is extracted using Tikhonov regularization. The disadvantage of this method is that it does not directly provide error bars for the resulting P(r), rendering correct interpretation difficult. Here we introduce a Bayesian statistical approach that quantifies uncertainty in P(r) arising from noise and numerical regularization. This method provides credible intervals (error bars) of P(r) at each r. This allows practitioners to answer whether or not small features are significant, whether or not apparent shoulders are significant, and whether or not two distance distributions are significantly different from each other. In addition, the method quantifies uncertainty in the regularization parameter.
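The point of departure, Tikhonov regularization, can be sketched as a regularized least-squares solve. The toy function below uses an identity regularization operator and no non-negativity constraint, both simplifications relative to standard DEER analysis, where a second-derivative operator and a constrained solver are typical.

```python
import numpy as np

def tikhonov(K, s, alpha):
    """Tikhonov-regularized solution of s = K p: minimize
    ||K p - s||^2 + alpha^2 ||p||^2, solved via the regularized
    normal equations (K^T K + alpha^2 I) p = K^T s. A toy version:
    DEER practice regularizes the curvature of p and enforces p >= 0.
    """
    K = np.asarray(K, float)
    n = K.shape[1]
    A = K.T @ K + alpha ** 2 * np.eye(n)
    return np.linalg.solve(A, K.T @ np.asarray(s, float))
```

The Bayesian approach of the paper effectively places a posterior over such solutions (and over alpha itself), which is what yields the pointwise credible intervals on P(r).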
Ehlers, Lennart; Refsgaard, Jens Christian; Sonnenborg, Torben O.; He, Xin; Jensen, Karsten H.
2016-04-01
Precipitation is a key input to hydrological models. Spatially distributed rainfall used in hydrological modelling is commonly based on the interpolation of gauge rainfall using conventional geostatistical techniques such as kriging, e.g. Salamon and Feyen [2009], Stisen et al. [2011]. While being effective point interpolators [Moulin et al., 2009], these techniques are unable to reproduce the spatial variability inherent in the rainfall process at unsampled locations. Stochastic simulation approaches provide the means to better capture this variability and hence to quantify the associated spatial uncertainty [McMillan et al., 2011]. The objective of this study is to quantify uncertainties in interpolated gauge-based rainfall by employing sequential Gaussian simulation (SGS) coupled with ordinary kriging (OK) to generate realizations of daily precipitation on a 2x2 km2 grid. The rainfall gauge data were collected in a 1055 km2 subcatchment within the HOBE catchment (Jutland, Denmark) [Jensen and Illangasekare, 2011]. The following uncertainties are considered: (i) interpolation uncertainty, (ii) uncertainty in the point measurements, and (iii) location uncertainty. Results from using different numbers of SGS realizations and different lengths of the simulated period, as well as different assumptions on the underlying uncertainties, will be presented and discussed with regard to mean annual catchment rainfall. Jensen, K. H., and T. H. Illangasekare (2011), HOBE: A Hydrological Observatory, Vadose Zone J, 10(1), 1-7. McMillan, H., B. Jackson, M. Clark, D. Kavetski, and R. Woods (2011), Rainfall uncertainty in hydrological modelling: An evaluation of multiplicative error models, J Hydrol, 400(1-2), 83-94. Moulin, L., E. Gaume, and C. Obled (2009), Uncertainties on mean areal precipitation: assessment and impact on streamflow simulations, Hydrol Earth Syst Sc, 13(2), 99-114. Salamon, P., and L. Feyen (2009), Assessing parameter, precipitation, and predictive uncertainty in a
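The idea of generating equally likely precipitation fields can be illustrated with unconditional Gaussian simulation via a Cholesky factor of an exponential covariance model. This is a simplified stand-in for SGS: conditioning on gauge values and any back-transformation of rainfall are omitted, and all parameter values are hypothetical.

```python
import numpy as np

def gaussian_field_realizations(coords, sill=1.0, corr_len=10.0,
                                n_real=100, seed=0):
    """Unconditional realizations of a zero-mean Gaussian random field
    with an exponential covariance model C(d) = sill * exp(-d/corr_len),
    generated via a Cholesky factor of the covariance matrix.
    A simplified stand-in for sequential Gaussian simulation.
    """
    rng = np.random.default_rng(seed)
    pts = np.asarray(coords, float)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    C = sill * np.exp(-d / corr_len) + 1e-10 * np.eye(len(pts))
    L = np.linalg.cholesky(C)
    # each row is one equally likely field over the coordinates
    return (L @ rng.standard_normal((len(pts), n_real))).T
```

The spread across such realizations at each grid cell is the spatial-uncertainty measure; conditioned variants reproduce the gauge observations exactly in every realization.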
Aydin, Orhun; Caers, Jef Karel
2017-08-01
Faults are one of the building blocks of subsurface modeling studies. Incomplete observations of subsurface fault networks lead to uncertainty pertaining to the location, geometry and existence of faults. In practice, gaps in incomplete fault network observations are filled based on tectonic knowledge and the interpreter's intuition about fault relationships. Modeling fault network uncertainty with realistic models that represent tectonic knowledge is still a challenge. Although methods exist that address specific sources of fault network uncertainty and specific complexities of fault modeling, a unifying framework is still lacking. In this paper, we propose a rigorous approach to quantify fault network uncertainty. Fault pattern and intensity information are expressed by means of a marked point process, the marked Strauss point process. Fault network information is constrained to fault surface observations (complete or partial) within a Bayesian framework. A structural prior model is defined to quantitatively express fault patterns, geometries and relationships within the Bayesian framework. Structural relationships between faults, in particular fault abutting relations, are represented with a level-set based approach. A Markov chain Monte Carlo sampler is used to sample posterior fault network realizations that reflect tectonic knowledge and honor fault observations. We apply the methodology to a field study from the Nankai Trough & Kumano Basin. The target for uncertainty quantification is a deep site with attenuated seismic data, with only partially visible faults and many faults missing from the survey or interpretation. A structural prior model is built from shallow analog sites that are believed to have undergone similar tectonics compared to the site of study. Fault network uncertainty for the field is quantified with fault network realizations that are conditioned to structural rules, tectonic information and partially observed fault surfaces. We show the proposed
Directory of Open Access Journals (Sweden)
C.-F. Ni
2011-07-01
This study presents a numerical first-order spectral model to quantify transient flow and remediation zone uncertainties for partially opened wells in heterogeneous aquifers. Taking advantage of spectral theories in resolving unmodeled small-scale variability in hydraulic conductivity (K), the presented nonstationary spectral method (NSM) can efficiently estimate flow uncertainties, including hydraulic heads and Darcy velocities in the r- and z-directions in a cylindrical coordinate system. The velocity uncertainties, combined with a particle backward-tracking algorithm, are then used to estimate stochastic remediation zones for scenarios with partially opened well screens. In this study the flow and remediation zone uncertainties obtained by NSM were first compared with those obtained by Monte Carlo simulations (MCS). A layered aquifer with different geometric means of K and screen locations was then illustrated with the developed NSM. To compare NSM flow and remediation zone uncertainties with those of MCS, three different small-scale K variances and correlation lengths were considered for illustration purposes. The MCS remediation zones for different degrees of heterogeneity were presented with the uncertainty clouds obtained from 200 equally likely MCS realizations. The simulation results reveal that the first-order NSM solutions agree well with those of MCS for partially opened wells. The flow uncertainties obtained by NSM and MCS are nearly identical for aquifers with small ln K variances and correlation lengths. Based on the test examples, the remediation zone uncertainties (bandwidths) are not sensitive to changes in the small-scale ln K correlation lengths. However, the remediation zone uncertainties (i.e., the uncertainty bandwidths) increase significantly with increasing small-scale ln K variances. The largest displacement uncertainties may differ by several meters
Quantifying Groundwater Recharge Uncertainty: A Multiple-Model Framework and Case Study
Kikuchi, C.; Ferré, T. P. A.
2014-12-01
In practice, it is difficult to estimate groundwater recharge accurately. Despite this challenge, most recharge investigations produce a single, best estimate of recharge. However, there is growing recognition that quantification of natural recharge uncertainty is critical for groundwater management. We present a multiple-model framework for estimating recharge uncertainty. In addition, we show how direct water flux measurements can be used to reduce the uncertainty of estimates of total basin recharge for an arid, closed hydrologic basin in the Atacama Desert, Chile. We first formulated multiple hydrogeologic conceptual models of the basin based on existing data, and implemented each conceptual model for the purpose of conducting numerical simulations. For each conceptual model, groundwater recharge was inversely estimated; then, Null-Space Monte Carlo techniques were used to quantify the uncertainty on the initial estimate of total basin recharge. Second, natural recharge components - including both deep percolation and streambed infiltration - were estimated from field data. Specifically, vertical temperature profiles were measured in monitoring wells and streambeds, and water fluxes were estimated from thermograph analysis. Third, calculated water fluxes were incorporated as prior information to the model calibration and Null-Space Monte Carlo procedures, yielding revised estimates of both total basin recharge and associated uncertainty. The fourth and final component of this study uses value of information analyses to identify potentially informative locations for additional water flux measurements. The uncertainty quantification framework presented here is broadly transferable; furthermore, this research provides an applied example of the extent to which water flux measurements may serve to reduce groundwater recharge uncertainty at the basin scale.
Quantifying the Aggregation Factor in Carbon Nanotube Dispersions by Absorption Spectroscopy
Directory of Open Access Journals (Sweden)
Hari Pathangi
2014-01-01
Absorption spectroscopy in the ultraviolet-visible-near infrared (UV-Vis-NIR) wavelength region has been used to quantify the aggregation factor of single-walled carbon nanotubes (SWCNTs) in liquid media through a series of controlled experiments. SWCNT bundles are dispersed in selected solvents using a calibrated ultrasonicator, which helps in determining the true amount of energy used in the exfoliation process. We also establish the selectivity of the centrifugation process, under the conditions used, in removing the nanotube aggregates as a function of the sonication time and the dispersion solvent. This study, along with the calibration of the sonication process, is shown to be very important for measuring the true aggregation factor of SWCNTs through a modified approach. We also show that the systematic characterization of SWCNT dispersions by optical spectroscopy significantly contributes to the success of dielectrophoresis (DEP) of nanotubes at predefined on-chip positions. The presence of individually dispersed SWCNTs in the dispersions is substantiated by dielectrophoretic assembly and post-DEP electromechanical measurements.
Zhao, Y.; Nielsen, C. P.; Lei, Y.; McElroy, M. B.; Hao, J.
2011-03-01
The uncertainties of a national, bottom-up inventory of Chinese emissions of anthropogenic SO2, NOx, and particulate matter (PM) of different size classes and carbonaceous species are comprehensively quantified, for the first time, using Monte Carlo simulation. The inventory is structured by seven dominant sectors: coal-fired electric power, cement, iron and steel, other industry (boiler combustion), other industry (non-combustion processes), transportation, and residential. For each parameter related to emission factors or activity-level calculations, the uncertainties, represented as probability distributions, are either statistically fitted using results of domestic field tests or, when these are lacking, estimated based on foreign or other domestic data. The uncertainties (i.e., 95% confidence intervals around the central estimates) of Chinese emissions of SO2, NOx, total PM, PM10, PM2.5, black carbon (BC), and organic carbon (OC) in 2005 are estimated to be -14%~13%, -13%~37%, -11%~38%, -14%~45%, -17%~54%, -25%~136%, and -40%~121%, respectively. Variations at activity levels (e.g., energy consumption or industrial production) are not the main source of emission uncertainties. Due to narrow classification of source types, large sample sizes, and relatively high data quality, the coal-fired power sector is estimated to have the smallest emission uncertainties for all species except BC and OC. Due to poorer source classifications and a wider range of estimated emission factors, considerable uncertainties of NOx and PM emissions from cement production and boiler combustion in other industries are found. The probability distributions of emission factors for biomass burning, the largest source of BC and OC, are fitted based on very limited domestic field measurements, and special caution should thus be taken interpreting these emission uncertainties. Although Monte Carlo simulation yields narrowed estimates of uncertainties compared to previous bottom-up emission
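The Monte Carlo idea above can be sketched with a hypothetical two-sector inventory. The sector names, activity levels, and lognormal emission-factor parameters are illustrative, not the paper's data; the point is that skewed emission-factor distributions yield the asymmetric `-x%~+y%` confidence intervals reported in the abstract.

```python
import random

random.seed(0)

# Illustrative inventory: emissions = sum over sectors of
# activity (Mt fuel) * emission factor (kt pollutant / Mt fuel).
sectors = {
    "power":  {"activity": 1000.0, "ef_median": 8.0, "ef_gsd": 1.2},
    "cement": {"activity": 400.0,  "ef_median": 3.0, "ef_gsd": 1.6},
}

def draw_total(sectors):
    total = 0.0
    for s in sectors.values():
        # Lognormal emission factor: median * gsd**z, with z ~ N(0, 1).
        ef = s["ef_median"] * s["ef_gsd"] ** random.gauss(0.0, 1.0)
        total += s["activity"] * ef
    return total

draws = sorted(draw_total(sectors) for _ in range(20_000))
central = sum(s["activity"] * s["ef_median"] for s in sectors.values())
lo, hi = draws[int(0.025 * len(draws))], draws[int(0.975 * len(draws))]
print(f"central estimate: {central:.0f} kt")
print(f"95% CI: {100*(lo/central - 1):+.0f}% ~ {100*(hi/central - 1):+.0f}%")
```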
Hallett, Paul; Ogden, Mike; Karim, Kamal; Schmidt, Sonja; Yoshida, Shuichiro
2014-05-01
Soil aggregates are a figment of your energy input and initial boundary conditions, so the basic thermodynamics that drive soil structure formation are needed to understand soil structure dynamics. Using approaches from engineering and materials science, it is possible to quantify basic thermodynamic properties, but at present tests are generally limited to highly simplified, often remoulded, soil structures. Although this presents limitations, the understanding of the underlying processes driving soil structure dynamics is poor, which could be argued is due to the enormity of the challenge of such an incredibly complex system. Other areas of soil science, particularly soil water physics, relied on simplified structures to develop theories that can now be applied to more complex pore structures. We argue that a similar approach needs to gain prominence in the study of soil aggregates. An overview will be provided of approaches adapted from other disciplines to quantify particle bonding, fracture resistance, rheology and capillary cohesion of soil that drive its aggregation and structure dynamics. All of the tests are limited as they require simplified soil structures, ranging from repacked soils to flat surfaces coated with mineral particles. A brief summary of the different approaches will demonstrate the benefits of collecting basic physical data relevant to soil structure dynamics, including examples where they are vital components of models. The soil treatments we have tested with these engineering and materials science approaches include field soils from a range of management practices with differing clay and organic matter contents, amendment and incubation of soils with a range of microorganisms and substrates in the laboratory, model clay-sand mixes and planar mineral surfaces with different topologies. In addition to advocating the wider adoption of these approaches, we will discuss limitations and hope to stimulate discussion on how approaches could be improved.
Ahn, Kuk-Hyun; Merwade, Venkatesh; Ojha, C. S. P.; Palmer, Richard N.
2016-11-01
Despite the recent popularity of investigating human-induced climate change in regional areas, the contributors to the relative uncertainties in the process remain poorly understood. To remedy this, this study presents a statistical framework to quantify relative uncertainties in a detection and attribution study. Primary uncertainty contributors are categorized into three types: climate data, hydrologic, and detection uncertainties. While an ensemble of climate models is used to define climate data uncertainty, hydrologic uncertainty is defined using a Bayesian approach. Before relative uncertainties in the detection and attribution study are quantified, an optimal fingerprint-based detection and attribution analysis is employed to investigate changes in winter streamflow in the Connecticut River Basin, which is located in the Eastern United States. Results indicate that winter streamflow over a period of 64 years (1950-2013) lies outside the range expected from natural variability of climate alone with a 90% confidence interval in the climate models. Investigation of relative uncertainties shows that the uncertainty linked to the climate data is greater than the uncertainty induced by hydrologic modeling. Detection uncertainty, defined as the uncertainty related to the time evolution of the anthropogenic climate change signal in the historical data above the natural internal climate variability (noise), shows that uncertainties in natural internal climate variability (piControl) scenarios may be the source of a significant degree of uncertainty in the regional detection and attribution study.
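The detection step can be illustrated with a toy fingerprint calculation: regress synthetic "observations" onto a forced signal pattern and ask whether the scaling factor falls outside a noise-only null distribution (the piControl analog). All series below are made up, and plain least squares is used in place of the study's optimal (covariance-weighted) fingerprinting.

```python
import random

random.seed(5)

n = 64                                                   # years, as in the study period
signal = [0.02 * t for t in range(n)]                    # assumed forced trend pattern
obs = [s + random.gauss(0, 0.3) for s in signal]         # signal + internal variability

def scaling_factor(y, x):
    """Ordinary least squares estimate of beta in y = beta * x + noise."""
    return sum(a * b for a, b in zip(y, x)) / sum(b * b for b in x)

beta = scaling_factor(obs, signal)

# Null distribution of beta from noise-only realizations (piControl analog).
null = sorted(scaling_factor([random.gauss(0, 0.3) for _ in range(n)], signal)
              for _ in range(2000))
lo, hi = null[100], null[1899]                           # central 90% of the null
print(f"beta = {beta:.2f}; 90% null range = [{lo:.2f}, {hi:.2f}]")
detected = not (lo <= beta <= hi)
print("change detected" if detected else "no detection")
```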
Quantifying uncertainties in first-principles alloy thermodynamics using cluster expansions
Aldegunde, Manuel; Zabaras, Nicholas; Kristensen, Jesper
2016-10-01
The cluster expansion is a popular surrogate model for alloy modeling to avoid costly quantum mechanical simulations. As its practical implementations require approximations, its use trades efficiency for accuracy. Furthermore, the coefficients of the model need to be determined from some known data set (training set). These two sources of error, if not quantified, decrease the confidence we can put in the results obtained from the surrogate model. This paper presents a framework for the determination of the cluster expansion coefficients using a Bayesian approach, which allows for the quantification of uncertainties in the predictions. In particular, a relevance vector machine is used to automatically select the most relevant terms of the model while retaining an analytical expression for the predictive distribution. This methodology is applied to two binary alloys, SiGe and MgLi, including the temperature dependence in their effective cluster interactions. The resulting cluster expansions are used to calculate the uncertainty in several thermodynamic quantities: ground state line, including the uncertainty in which structures are thermodynamically stable at 0 K, phase diagrams and phase transitions. The uncertainty in the ground state line is found to be of the order of meV/atom, showing that the cluster expansion is reliable to ab initio level accuracy even with limited data. We found that the uncertainty in the predicted phase transition temperature increases when including the temperature dependence of the effective cluster interactions. Also, the use of the bond stiffness versus bond length approximation to calculate temperature dependent properties from a reduced set of alloy configurations showed similar uncertainty to the approach where all training configurations are considered but at a much reduced computational cost.
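The flavor of a Bayesian fit with an analytical predictive distribution can be shown with a one-parameter toy "cluster expansion". The relevance vector machine in the paper generalizes this to many basis functions with automatic relevance selection; the single correlation function, prior/noise precisions, and training energies below are all invented.

```python
import math

# Toy 1-term cluster expansion: E(x) = w * x + noise, where x stands in
# for a single cluster correlation function.
alpha, beta = 1.0, 25.0              # prior precision on w, noise precision
xs = [0.0, 0.25, 0.5, 0.75, 1.0]     # correlations of training structures
ys = [0.02, 0.11, 0.24, 0.37, 0.52]  # toy "ab initio" energies (eV/atom)

# Conjugate Bayesian update for w (standard Bayesian linear regression).
s_inv = alpha + beta * sum(x * x for x in xs)            # posterior precision
w_mean = beta * sum(x * y for x, y in zip(xs, ys)) / s_inv
w_var = 1.0 / s_inv

def predict(x_star):
    """Predictive mean and standard deviation at a new structure."""
    mean = w_mean * x_star
    var = 1.0 / beta + x_star ** 2 * w_var               # noise + parameter terms
    return mean, math.sqrt(var)

m, s = predict(0.6)
print(f"predicted energy: {m:.3f} +/- {s:.3f} eV/atom")
```

The predictive variance splits into an irreducible noise term and a parameter-uncertainty term that shrinks as training data accumulate, which is what lets the paper report meV/atom-scale uncertainties on the ground state line.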
Towards Quantifying Robust Uncertainty Information for Climate Change Decision-making
Forest, C. E.; Libardoni, A. G.; Tsai, C. Y.; Sokolov, A. P.; Monier, E.; Sriver, R. L.; Keller, K.
2015-12-01
The expected future impacts of climate change can be a manageable problem provided the risks to society can be properly assessed. Given our current understanding of both the climate system and the related decision problems, we strive to develop tools that can assess these risks and provide robust strategies given possible futures. In this talk, we will present two examples from recent work ranging from global to regional scales to highlight these issues. Typically, we begin by assessing the probability of events without specific information on impacts; however, recent developments allow us to address the risk management problem directly. In the first example, we discuss recent advances in quantifying probability distributions for equilibrium climate sensitivity (ECS). A comprehensive examination of all factors contributing to the total uncertainty in ECS can include updates to estimates of observed climate changes (oceanic, atmospheric, and surface records), improved understanding of radiative forcing and internal variability, revised statistical calibration methods, and overall longer records. In a second example, we contrast the assessment of probabilistic information for global scale climate change with that for regional changes. The relative importance of model structural uncertainty, uncertainty in future forcing, and the role of internal variability will be compared within the context of the decision making problem. In both cases, robust estimates of uncertainty are desired and needed… but surprises happen. Incorporating these basic issues into robust decision making frameworks is a long-term research goal with near-term implications.
Lindhiem, Oliver; Kolko, David J; Yu, Lan
2013-06-01
Using traditional Diagnostic and Statistical Manual of Mental Disorders, fourth edition, text revision (American Psychiatric Association, 2000) diagnostic criteria, clinicians are forced to make categorical decisions (diagnosis vs. no diagnosis). This forced choice implies that mental and behavioral health disorders are categorical and does not fully characterize varying degrees of uncertainty associated with a particular diagnosis. Using an item response theory (latent trait model) framework, we describe the development of the Posterior Probability of Diagnosis (PPOD) Index, which answers the question: What is the likelihood that a patient meets or exceeds the latent trait threshold for a diagnosis? The PPOD Index is based on the posterior distribution of θ (latent trait score) for each patient's profile of symptoms. The PPOD Index allows clinicians to quantify and communicate the degree of uncertainty associated with each diagnosis in probabilistic terms. We illustrate the advantages of the PPOD Index in a clinical sample (N = 321) of children and adolescents with oppositional defiant disorder.
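A PPOD-style index reduces to the posterior probability that the latent trait meets or exceeds the diagnostic threshold. The sketch below assumes a Gaussian posterior over theta purely for illustration (the paper derives the posterior from an IRT model of the symptom profile); the threshold and posterior parameters are hypothetical.

```python
import random

random.seed(1)

THRESHOLD = 1.0   # hypothetical latent-trait cutoff for diagnosis

def ppod(posterior_mean, posterior_sd, n=50_000):
    """Monte Carlo estimate of P(theta >= THRESHOLD | symptom profile)."""
    draws = (random.gauss(posterior_mean, posterior_sd) for _ in range(n))
    return sum(t >= THRESHOLD for t in draws) / n

# Two hypothetical patients: both exceed the threshold as a point estimate,
# but with very different diagnostic certainty.
print(f"clear case:      PPOD = {ppod(1.8, 0.3):.2f}")
print(f"borderline case: PPOD = {ppod(1.1, 0.6):.2f}")
```

Both patients would receive the same categorical diagnosis, but the index communicates that one diagnosis is near-certain while the other is close to a coin flip.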
Quantifying the Contribution of Post-Processing in Computed Tomography Measurement Uncertainty
DEFF Research Database (Denmark)
Stolfi, Alessandro; Thompson, Mary Kathryn; Carli, Lorenzo;
2016-01-01
This paper evaluates and quantifies the repeatability of post-processing settings, such as surface determination, data fitting, and the definition of the datum system, on the uncertainties of Computed Tomography (CT) measurements. The influence of post-processing contributions was determined by calculating the standard deviation of 10 repeated measurement evaluations on the same data set. The evaluations were performed on an industrial assembly. Each evaluation includes several dimensional and geometrical measurands that were expected to have different responses to the various post-processing settings. It was found that the definition of the datum system had the largest impact on the uncertainty with a standard deviation of a few microns. The surface determination and data fitting had smaller contributions with sub-micron repeatability.
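The repeatability calculation described above reduces to a standard deviation over repeated evaluations of the same data set. The measurement values below are invented, chosen only to mimic the reported ordering (datum system: a few microns; surface determination: sub-micron).

```python
import statistics

# Ten repeated evaluations (mm) of one measurand on the same CT data set,
# re-running only the named post-processing step each time (values invented).
datum_redef  = [9.9981, 10.0012, 9.9975, 10.0034, 9.9992,
                10.0021, 9.9968, 10.0041, 9.9987, 10.0008]
surface_only = [10.0003, 10.0001, 9.9999, 10.0002, 10.0000,
                10.0004, 9.9998, 10.0001, 10.0003, 10.0000]

# Repeatability of each post-processing contribution = sample std. dev.
for name, vals in [("datum system", datum_redef),
                   ("surface determination", surface_only)]:
    print(f"{name}: {statistics.stdev(vals) * 1000:.2f} um")
```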
PROBABILISTIC NON-RIGID REGISTRATION OF PROSTATE IMAGES: MODELING AND QUANTIFYING UNCERTAINTY
Risholm, Petter; Fedorov, Andriy; Pursley, Jennifer; Tuncali, Kemal; Cormack, Robert; Wells, William M.
2012-01-01
Registration of pre- to intra-procedural prostate images needs to handle the large changes in position and shape of the prostate caused by varying rectal filling and patient positioning. We describe a probabilistic method for non-rigid registration of prostate images which can quantify the most probable deformation as well as the uncertainty of the estimated deformation. The method is based on a biomechanical Finite Element model which treats the prostate as an elastic material. We use a Markov Chain Monte Carlo sampler to draw deformation configurations from the posterior distribution. In practice, we simultaneously estimate the boundary conditions (surface displacements) and the internal deformations of our biomechanical model. The proposed method was validated on a clinical MRI dataset with registration results comparable to previously published methods, but with the added benefit of also providing uncertainty estimates which may be important to take into account during prostate biopsy and brachytherapy procedures. PMID:22288004
Quantifying uncertainty in predictions of groundwater levels using formal likelihood methods
Marchant, Ben; Mackay, Jonathan; Bloomfield, John
2016-09-01
Informal and formal likelihood methods can be used to quantify uncertainty in modelled predictions of groundwater levels (GWLs). Informal methods use a relatively subjective criterion to identify sets of plausible or behavioural parameters of the GWL models. In contrast, formal methods specify a statistical model for the residuals or errors of the GWL model. The formal uncertainty estimates are only reliable when the assumptions of the statistical model are appropriate. We apply the formal approach to historical reconstructions of GWL hydrographs from four UK boreholes. We test whether a model which assumes Gaussian and independent errors is sufficient to represent the residuals or whether a model which includes temporal autocorrelation and a general non-Gaussian distribution is required. Groundwater level hydrographs are often observed at irregular time intervals so we use geostatistical methods to quantify the temporal autocorrelation rather than more standard time series methods such as autoregressive models. According to the Akaike Information Criterion, the more general statistical model better represents the residuals of the GWL model. However, no substantial difference between the accuracy of the GWL predictions and the estimates of their uncertainty is observed when the two statistical models are compared. When the general model is applied, significant temporal correlation over periods ranging from 3 to 20 months is evident for the different boreholes. When the GWL model parameters are sampled using a Markov Chain Monte Carlo approach the distributions based on the general statistical model differ from those of the Gaussian model, particularly for the boreholes with the most autocorrelation. These results suggest that the independent Gaussian model of residuals is sufficient to estimate the uncertainty of a GWL prediction on a single date. However, if realistically autocorrelated simulations of GWL hydrographs for multiple dates are required or if the
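The AIC comparison of residual models can be sketched as follows. For simplicity the two candidate error models here are iid Gaussian and iid Laplace rather than the autocorrelated, general non-Gaussian model used in the paper, and the residuals are synthetic.

```python
import math

# Synthetic GWL model residuals (m), standing in for hydrograph errors.
residuals = [0.12, -0.05, 0.31, -0.22, 0.08, -0.41, 0.02, 0.19, -0.11, 0.6]

def aic_gaussian(r):
    """AIC for iid Gaussian residuals (2 parameters: mean, variance)."""
    n = len(r)
    mu = sum(r) / n
    var = sum((x - mu) ** 2 for x in r) / n
    ll = sum(-0.5 * math.log(2 * math.pi * var) - (x - mu) ** 2 / (2 * var)
             for x in r)
    return 2 * 2 - 2 * ll

def aic_laplace(r):
    """AIC for iid Laplace residuals (2 parameters: location, scale)."""
    n = len(r)
    mu = sorted(r)[n // 2]              # MLE location = median
    b = sum(abs(x - mu) for x in r) / n
    ll = sum(-math.log(2 * b) - abs(x - mu) / b for x in r)
    return 2 * 2 - 2 * ll

print(f"AIC Gaussian: {aic_gaussian(residuals):.2f}")
print(f"AIC Laplace:  {aic_laplace(residuals):.2f}")
# The residual model with the lower AIC is preferred.
```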
P. Pokhrel; Robertson, D E; Q. J. Wang
2013-01-01
Hydrologic model predictions are often biased and subject to heteroscedastic errors originating from various sources including data, model structure and parameter calibration. Statistical post-processors are applied to reduce such errors and quantify uncertainty in the predictions. In this study, we investigate the use of a statistical post-processor based on the Bayesian joint probability (BJP) modelling approach to reduce errors and quantify uncertainty in streamflow predi...
A framework to quantify uncertainty in simulations of oil transport in the ocean
Gonçalves, Rafael C.
2016-03-02
An uncertainty quantification framework is developed for the DeepC Oil Model based on a nonintrusive polynomial chaos method. This allows the model's output to be presented in a probabilistic framework so that the model's predictions reflect the uncertainty in the model's input data. The new capability is illustrated by simulating the far-field dispersal of oil in a Deepwater Horizon blowout scenario. The uncertain input consisted of ocean current and oil droplet size data and the main model output analyzed is the ensuing oil concentration in the Gulf of Mexico. A 1331-member ensemble was used to construct a surrogate for the model which was then mined for statistical information. The mean and standard deviations in the oil concentration were calculated for up to 30 days, and the total contribution of each input parameter to the model's uncertainty was quantified at different depths. Also, probability density functions of oil concentration were constructed by sampling the surrogate and used to elaborate probabilistic hazard maps of oil impact. The performance of the surrogate was constantly monitored in order to demarcate the space-time zones where its estimates are reliable. © 2016. American Geophysical Union.
Energy Technology Data Exchange (ETDEWEB)
Salloum, Maher N.; Gharagozloo, Patricia E.
2013-10-01
Metal particle beds have recently become a major technique for hydrogen storage. In order to extract hydrogen from such beds, it is crucial to understand the decomposition kinetics of the metal hydride. We are interested in obtaining a better understanding of the uranium hydride (UH3) decomposition kinetics. We first developed an empirical model by fitting data compiled from different experimental studies in the literature and quantified the uncertainty resulting from the scattered data. We found that the decomposition time range predicted by the obtained kinetics was in good agreement with published experimental results. Secondly, we developed a physics-based mathematical model to simulate the rate of hydrogen diffusion in a hydride particle during the decomposition. We used this model to simulate the decomposition of the particles for temperatures ranging from 300 K to 1000 K while propagating parametric uncertainty and evaluated the kinetics from the results. We compared the kinetics parameters derived from the empirical and physics-based models and found that the uncertainty in the kinetics predicted by the physics-based model covers the scattered experimental data. Finally, we used the physics-based kinetics parameters to simulate the effects of boundary resistances and powder morphological changes during decomposition in a continuum-level model. We found that the species change within the bed occurring during the decomposition accelerates the hydrogen flow by increasing the bed permeability, while the pressure buildup and the thermal barrier forming at the wall significantly impede the hydrogen extraction.
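The empirical-kinetics step, fitting an Arrhenius law to scattered rate data, can be sketched as follows. All temperatures and rate constants below are illustrative, not UH3 measurements; the fit recovers the activation energy and pre-exponential factor from the slope and intercept of ln k versus 1/T.

```python
R = 8.314  # gas constant, J/(mol K)

# Illustrative scattered rate data: ln k measured at several temperatures (K).
temps = [500.0, 600.0, 700.0, 800.0, 900.0]
ln_k = [-9.1, -6.9, -5.5, -4.2, -3.4]

# Ordinary least squares of ln k on 1/T: ln k = ln A - Ea / (R T).
xs = [1.0 / t for t in temps]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ln_k) / n
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ln_k))
         / sum((x - xbar) ** 2 for x in xs))
ln_A = ybar - slope * xbar
Ea = -slope * R                     # activation energy, J/mol
print(f"Ea ~ {Ea / 1000:.0f} kJ/mol, ln A ~ {ln_A:.1f}")
```

Repeating this fit over resampled or perturbed data sets is one simple way to turn the scatter in compiled literature values into an uncertainty band on the kinetics, in the spirit of the empirical model above.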
Bevilacqua, Andrea; Isaia, Roberto; Neri, Augusto; Vitale, Stefano; Aspinall, Willy P.; Bisson, Marina; Flandoli, Franco; Baxter, Peter J.; Bertagnini, Antonella; Esposti Ongaro, Tomaso; Iannuzzi, Enrico; Pistolesi, Marco; Rosi, Mauro
2015-04-01
Campi Flegrei is an active volcanic area situated in the Campanian Plain (Italy) and dominated by a resurgent caldera. The great majority of past eruptions have been explosive, variable in magnitude, intensity, and in their vent locations. In this hazard assessment study we present a probabilistic analysis using a variety of volcanological data sets to map the background spatial probability of vent opening conditional on the occurrence of an event in the foreseeable future. The analysis focuses on the reconstruction of the location of past eruptive vents in the last 15 ka, including the distribution of faults and surface fractures as being representative of areas of crustal weakness. One of our key objectives was to incorporate some of the main sources of epistemic uncertainty about the volcanic system through a structured expert elicitation, thereby quantifying uncertainties for certain important model parameters and allowing outcomes from different expert weighting models to be evaluated. Results indicate that past vent locations are the most informative factors governing the probabilities of vent opening, followed by the locations of faults and then fractures. Our vent opening probability maps highlight the presence of a sizeable region in the central eastern part of the caldera where the likelihood of new vent opening per kilometer squared is about 6 times higher than the baseline value for the whole caldera. While these probability values have substantial uncertainties associated with them, our findings provide a rational basis for hazard mapping of the next eruption at Campi Flegrei caldera.
Quantifying natural delta variability using a multiple-point geostatistics prior uncertainty model
Scheidt, Céline; Fernandes, Anjali M.; Paola, Chris; Caers, Jef
2016-10-01
We address the question of quantifying uncertainty associated with autogenic pattern variability in a channelized transport system by means of a modern geostatistical method. This question has considerable relevance for practical subsurface applications as well, particularly those related to uncertainty quantification relying on Bayesian approaches. Specifically, we show how the autogenic variability in a laboratory experiment can be represented and reproduced by a multiple-point geostatistical prior uncertainty model. The latter geostatistical method requires selection of a limited set of training images from which a possibly infinite set of geostatistical model realizations, mimicking the training image patterns, can be generated. To that end, we investigate two methods to determine how many and which training images should be provided to reproduce natural autogenic variability. The first method relies on distance-based clustering of overhead snapshots of the experiment; the second method relies on a rate of change quantification by means of a computer vision algorithm termed the demon algorithm. We show quantitatively that with either training image selection method, we can statistically reproduce the natural variability of the delta formed in the experiment. In addition, we study the nature of the patterns represented in the set of training images as a representation of the "eigenpatterns" of the natural system. The eigenpatterns in the training image sets display patterns consistent with previous physical interpretations of the fundamental modes of this type of delta system: a highly channelized, incisional mode; a poorly channelized, depositional mode; and an intermediate mode between the two.
Extending Ripley's K-Function to Quantify Aggregation in 2-D Grayscale Images.
Directory of Open Access Journals (Sweden)
Mohamed Amgad
In this work, we describe the extension of Ripley's K-function to allow for overlapping events at very high event densities. We show that problematic edge effects introduce significant bias to the function at very high densities and small radii, and propose a simple correction method that successfully restores the function's centralization. Using simulations of homogeneous Poisson distributions of events, as well as simulations of event clustering under different conditions, we investigate various aspects of the function, including its shape-dependence and correspondence between true cluster radius and radius at which the K-function is maximized. Furthermore, we validate the utility of the function in quantifying clustering in 2-D grayscale images using three modalities: (i) simulations of particle clustering; (ii) experimental co-expression of soluble and diffuse protein at varying ratios; (iii) quantifying chromatin clustering in the nuclei of wt and crwn1 crwn2 mutant Arabidopsis plant cells, using a previously-published image dataset. Overall, our work shows that Ripley's K-function is a valid abstract statistical measure whose utility extends beyond the quantification of clustering of non-overlapping events. Potential benefits of this work include the quantification of protein and chromatin aggregation in fluorescent microscopic images. Furthermore, this function has the potential to become one of various abstract texture descriptors that are utilized in computer-assisted diagnostics in anatomic pathology and diagnostic radiology.
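A naive estimator of Ripley's K-function (without the edge corrections discussed above) is easy to sketch. Under complete spatial randomness K(r) is close to pi r^2, and clustering inflates it at small radii; the simulated point sets below are illustrative.

```python
import math
import random

random.seed(7)

def ripley_k(points, r, area):
    """Naive Ripley's K estimate (no edge correction) for 2-D points."""
    n = len(points)
    lam = n / area                        # point intensity
    count = 0
    for i, (xi, yi) in enumerate(points):
        for j, (xj, yj) in enumerate(points):
            if i != j and math.hypot(xi - xj, yi - yj) <= r:
                count += 1
    return count / (n * lam)

# Complete spatial randomness on the unit square: K(r) ~ pi * r^2.
csr = [(random.random(), random.random()) for _ in range(400)]
# Same intensity, but half the points in one tight cluster: K(r) inflates.
clustered = csr[:200] + [(0.5 + random.gauss(0, 0.01),
                          0.5 + random.gauss(0, 0.01)) for _ in range(200)]

r = 0.05
print(f"CSR:       K = {ripley_k(csr, r, 1.0):.4f}  (pi r^2 = {math.pi*r*r:.4f})")
print(f"clustered: K = {ripley_k(clustered, r, 1.0):.4f}")
```

Note the uncorrected estimator is biased slightly low near the square's boundary, which is exactly the edge effect the paper's correction addresses.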
Directory of Open Access Journals (Sweden)
Y. Zhao
2010-11-01
The uncertainties of a national, bottom-up inventory of Chinese emissions of anthropogenic SO2, NOx, and particulate matter (PM) of different size classes and carbonaceous species are comprehensively quantified, for the first time, using Monte Carlo simulation. The inventory is structured by seven dominant sectors: coal-fired electric power, cement, iron and steel, other industry (boiler combustion), other industry (non-combustion processes), transportation, and residential. For each parameter related to emission factors or activity-level calculations, the uncertainties, represented as probability distributions, are either statistically fitted using results of domestic field tests or, when these are lacking, estimated based on foreign or other domestic data. The uncertainties (i.e., 95% confidence intervals around the central estimates) of Chinese emissions of SO2, NOx, total PM, PM10, PM2.5, black carbon (BC), and organic carbon (OC) in 2005 are estimated to be −14%~12%, −10%~36%, −10%~36%, −12%~42%, −16%~52%, −23%~130%, and −37%~117%, respectively. Variations at activity levels (e.g., energy consumption or industrial production) are not the main source of emission uncertainties. Due to narrow classification of source types, large sample sizes, and relatively high data quality, the coal-fired power sector is estimated to have the smallest emission uncertainties for all species except BC and OC. Due to poorer source classifications and a wider range of estimated emission factors, considerable uncertainties of NOx and PM emissions from cement production and boiler combustion in other industries are found. The probability distributions of emission factors for biomass burning, the largest source of BC and OC, are fitted based on very limited domestic field measurements, and special caution should thus be taken interpreting these emission uncertainties. Although Monte
Quantifying Uncertainty in Model Predictions for the Pliocene (Plio-QUMP): Initial results
Pope, J.O.; Collins, M.; Haywood, A.M.; Dowsett, H.J.; Hunter, S.J.; Lunt, D.J.; Pickering, S.J.; Pound, M.J.
2011-01-01
Examination of the mid-Pliocene Warm Period (mPWP; ~3.3 to 3.0 Ma BP) provides an excellent opportunity to test the ability of climate models to reproduce warm climate states, thereby assessing our confidence in model predictions. To do this it is necessary to relate the uncertainty in model simulations of mPWP climate to uncertainties in projections of future climate change. The uncertainties introduced by the model can be estimated through the use of a Perturbed Physics Ensemble (PPE). Developing on the UK Met Office Quantifying Uncertainty in Model Predictions (QUMP) Project, this paper presents the results from an initial investigation using the end members of a PPE in a fully coupled atmosphere-ocean model (HadCM3) running with appropriate mPWP boundary conditions. Prior work has shown that the unperturbed version of HadCM3 may underestimate mPWP sea surface temperatures at higher latitudes. Initial results indicate that neither the low sensitivity nor the high sensitivity simulations produce unequivocally improved mPWP climatology relative to the standard. Whilst the high sensitivity simulation was able to reconcile up to 6 °C of the data/model mismatch in sea surface temperatures in the high latitudes of the Northern Hemisphere (relative to the standard simulation), it did not produce a better prediction of global vegetation than the standard simulation. Overall the low sensitivity simulation was degraded compared to the standard and high sensitivity simulations in all aspects of the data/model comparison. The results have shown that a PPE has the potential to explore weaknesses in mPWP modelling simulations which have been identified by geological proxies, but that a 'best fit' simulation will more likely come from a full ensemble in which simulations that contain the strengths of the two end member simulations shown here are combined. © 2011 Elsevier B.V.
Quantifying uncertainty in Gulf of Mexico forecasts stemming from uncertain initial conditions
Iskandarani, Mohamed
2016-06-09
Polynomial Chaos (PC) methods are used to quantify the impacts of initial conditions uncertainties on oceanic forecasts of the Gulf of Mexico circulation. Empirical Orthogonal Functions are used as initial conditions perturbations with their modal amplitudes considered as uniformly distributed uncertain random variables. These perturbations impact primarily the Loop Current system and several frontal eddies located in its vicinity. A small ensemble is used to sample the space of the modal amplitudes and to construct a surrogate for the evolution of the model predictions via a nonintrusive Galerkin projection. The analysis of the surrogate yields verification measures for the surrogate's reliability and statistical information for the model output. A variance analysis indicates that the sea surface height predictability in the vicinity of the Loop Current is limited to about 20 days. © 2016. American Geophysical Union. All Rights Reserved.
Quantifying uncertainty in Gulf of Mexico forecasts stemming from uncertain initial conditions
Iskandarani, Mohamed; Le Hénaff, Matthieu; Thacker, William Carlisle; Srinivasan, Ashwanth; Knio, Omar M.
2016-07-01
Polynomial Chaos (PC) methods are used to quantify the impacts of initial conditions uncertainties on oceanic forecasts of the Gulf of Mexico circulation. Empirical Orthogonal Functions are used as initial conditions perturbations with their modal amplitudes considered as uniformly distributed uncertain random variables. These perturbations impact primarily the Loop Current system and several frontal eddies located in its vicinity. A small ensemble is used to sample the space of the modal amplitudes and to construct a surrogate for the evolution of the model predictions via a nonintrusive Galerkin projection. The analysis of the surrogate yields verification measures for the surrogate's reliability and statistical information for the model output. A variance analysis indicates that the sea surface height predictability in the vicinity of the Loop Current is limited to about 20 days.
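The non-intrusive projection step can be illustrated with a one-dimensional toy: a stand-in "model" with a single uniformly distributed uncertain input (an EOF mode amplitude, say) is projected onto Legendre polynomials by Monte Carlo quadrature, and the surrogate's mean and variance are read directly off the coefficients. The model function and sample sizes below are invented, and the paper's multi-dimensional Galerkin machinery is reduced to its simplest form.

```python
import math
import random

random.seed(3)

def model(xi):
    """Stand-in for one ocean-model forecast as a function of input xi."""
    return 2.0 + 0.8 * xi + 0.5 * xi ** 2

def legendre(k, x):
    """Legendre polynomial P_k(x) via Bonnet's recursion."""
    if k == 0: return 1.0
    if k == 1: return x
    return ((2 * k - 1) * x * legendre(k - 1, x)
            - (k - 1) * legendre(k - 2, x)) / k

# Pseudo-spectral projection: c_k = E[model * P_k] / E[P_k^2], xi ~ U(-1, 1).
samples = [random.uniform(-1, 1) for _ in range(20_000)]
coeffs = []
for k in range(3):
    norm = 1.0 / (2 * k + 1)                 # E[P_k^2] under U(-1, 1)
    c = sum(model(x) * legendre(k, x) for x in samples) / len(samples) / norm
    coeffs.append(c)

mean = coeffs[0]                             # PC mean = zeroth coefficient
var = sum(c * c / (2 * k + 1) for k, c in enumerate(coeffs[1:], start=1))
print(f"surrogate mean: {mean:.3f}, std: {math.sqrt(var):.3f}")
```

Once the coefficients are in hand, the surrogate can be sampled essentially for free, which is what makes variance analyses and predictability estimates like the 20-day horizon above tractable.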
Camunas-Soler, Joan; Bizarro, Cristiano V; de Loreno, Sara; Fuentes-Perez, Maria Eugenia; Ramsch, Roland; Vilchez, Susana; Solans, Conxita; Moreno-Herrero, Fernando; Albericio, Fernando; Eritja, Ramon; Giralt, Ernest; Dev, Sukhendu B; Ritort, Felix
2014-01-01
Knowledge of the mechanisms of interaction between self-aggregating peptides and nucleic acids or other polyanions is key to the understanding of many aggregation processes underlying several human diseases (e.g. Alzheimer's and Parkinson's diseases). Determining the affinity and kinetic steps of such interactions is challenging due to the competition between hydrophobic self-aggregating forces and electrostatic binding forces. Kahalalide F (KF) is an anticancer hydrophobic peptide which contains a single positive charge that confers strong aggregative properties with polyanions. This makes KF an ideal model to elucidate the mechanisms by which self-aggregation competes with binding to a strongly charged polyelectrolyte such as DNA. We use optical tweezers to apply mechanical forces to single DNA molecules and show that KF and DNA interact in a two-step kinetic process promoted by the electrostatic binding of DNA to the aggregate surface followed by the stabilization of the complex due to hydrophobic interact...
Quantifying Model Form Uncertainty in RANS Simulation of Wing-Body Junction Flow
Wu, Jin-Long; Xiao, Heng
2016-01-01
Wing-body junction flows occur when a boundary layer encounters an airfoil mounted on the surface. The corner flow near the trailing edge is challenging for linear eddy viscosity Reynolds Averaged Navier-Stokes (RANS) models, because the interaction of two perpendicular boundary layers leads to highly anisotropic Reynolds stress in the near-wall region. Recently, Xiao et al. proposed a physics-informed Bayesian framework to quantify and reduce the model-form uncertainties in RANS simulations by utilizing sparse observation data. In this work, we extend this framework to incorporate the use of wall functions in RANS simulations, and apply the extended framework to the RANS simulation of wing-body junction flow. Standard RANS simulations are performed on a 3:2 elliptic nose and NACA0020 tail cylinder joined at their maximum thickness location. Current results show that both the posterior mean velocity and the Reynolds stress anisotropy agree better with the experimental data at the corner regio...
Mandal, D.; Bhatia, N.; Srivastav, R. K.
2016-12-01
The Soil and Water Assessment Tool (SWAT) is one of the most comprehensive hydrologic models to simulate streamflow for a watershed. The two major inputs for a SWAT model are: (i) Digital Elevation Models (DEM), and (ii) Land Use and Land Cover (LULC) maps. This study aims to quantify the uncertainty in streamflow predictions using SWAT for the San Bernard River in the Brazos-Colorado coastal watershed, Texas, by incorporating the respective datasets from different sources: (i) DEM data will be obtained from the ASTER GDEM V2, GMTED2010, NHD DEM, and SRTM DEM datasets, with resolutions ranging from 1/3 arc-second to 30 arc-second, and (ii) LULC data will be obtained from the GLCC V2, MRLC NLCD2011, NOAA's C-CAP, USGS GAP, and TCEQ databases. Weather variables (precipitation and max-min temperature at daily scale) will be obtained from the National Climatic Data Center (NCDC), and SWAT's in-built STATSGO tool will be used to obtain the soil maps. The SWAT model will be calibrated using the SWAT-CUP SUFI-2 approach and its performance will be evaluated using the statistical indices of Nash-Sutcliffe efficiency (NSE), the ratio of root-mean-square error to the standard deviation of observed streamflow (RSR), and percent bias error (PBIAS). The study will help understand the performance of the SWAT model with varying data sources and eventually aid the regional state water boards in planning, designing, and managing hydrologic systems.
A review on the CIRCE methodology to quantify the uncertainty of the physical models of a code
Energy Technology Data Exchange (ETDEWEB)
Jeon, Seong Su; Hong, Soon Joon [Seoul National Univ., Seoul (Korea, Republic of); Bang, Young Seok [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)
2012-10-15
In the field of nuclear engineering, recent regulatory audit calculations of large break loss of coolant accidents (LBLOCA) have been performed with best estimate codes such as MARS, RELAP5 and CATHARE. Since a credible regulatory audit calculation is very important in evaluating the safety of a nuclear power plant (NPP), much research has been devoted to developing rules and methodologies for the use of best estimate codes. One of the major points is to develop the best estimate plus uncertainty (BEPU) method for uncertainty analysis. As a representative BEPU method, the NRC proposes the CSAU (Code Scaling, Applicability and Uncertainty) methodology, which clearly identifies the different steps necessary for an uncertainty analysis. The general idea is 1) to determine all the sources of uncertainty in the code, also called basic uncertainties, 2) to quantify them, and 3) to combine them in order to obtain the final uncertainty for the studied application. Using an uncertainty analysis such as the CSAU methodology, an uncertainty band for the code response (calculation result) that is important from the safety point of view is calculated, and the safety margin of the NPP is quantified. An example of such a response is the peak cladding temperature (PCT) for a LBLOCA. However, there is a problem in uncertainty analysis with best estimate codes: it is generally very difficult to determine the uncertainties due to the empiricism of closure laws (also called correlations or constitutive relationships). So far the only proposed approach has been based on expert judgment. In this case, the uncertainty ranges of important parameters can be wide and inaccurate, so that the confidence level of the BEPU calculation results can be decreased. To solve this problem, CEA (France) recently proposed a statistical method of data analysis, called CIRCE. The CIRCE method is intended to quantify the uncertainties of the correlations of a code. It may replace the expert judgment
Mueller, David S.
2017-01-01
This paper presents a method using Monte Carlo simulations for assessing uncertainty of moving-boat acoustic Doppler current profiler (ADCP) discharge measurements using a software tool known as QUant, which was developed for this purpose. Analysis was performed on 10 data sets from four Water Survey of Canada gauging stations in order to evaluate the relative contribution of a range of error sources to the total estimated uncertainty. The factors that differed among data sets included the fraction of unmeasured discharge relative to the total discharge, flow nonuniformity, and operator decisions about instrument programming and measurement cross section. As anticipated, it was found that the estimated uncertainty is dominated by uncertainty of the discharge in the unmeasured areas, highlighting the importance of appropriate selection of the site, the instrument, and the user inputs required to estimate the unmeasured discharge. The main contributor to uncertainty was invalid data, but spatial inhomogeneity in water velocity and bottom-track velocity also contributed, as did variation in the edge velocity, uncertainty in the edge distances, edge coefficients, and the top and bottom extrapolation methods. To a lesser extent, spatial inhomogeneity in the bottom depth also contributed to the total uncertainty, as did uncertainty in the ADCP draft at shallow sites. The estimated uncertainties from QUant can be used to assess the adequacy of standard operating procedures. They also provide quantitative feedback to the ADCP operators about the quality of their measurements, indicating which parameters are contributing most to uncertainty, and perhaps even highlighting ways in which uncertainty can be reduced. Additionally, QUant can be used to account for self-dependent error sources such as heading errors, which are a function of heading. The results demonstrate the importance of a Monte Carlo method tool such as QUant for quantifying random and bias errors when
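The Monte Carlo idea behind a tool like QUant can be sketched as follows: perturb each error source within an assumed distribution and read the combined uncertainty off the spread of the total discharge. The component names, magnitudes, and distributions are illustrative assumptions, not QUant's actual error model.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 20_000  # Monte Carlo realizations

# Discharge components in m^3/s with relative standard uncertainties
# (all values are illustrative assumptions).
measured_q = 100.0    # directly measured portion, ~1% uncertainty
edge_q = 5.0          # edge estimate, ~20% uncertainty
unmeasured_q = 15.0   # top/bottom extrapolation, ~10% uncertainty

q_total = (measured_q * (1 + 0.01 * rng.standard_normal(N))
           + edge_q * (1 + 0.20 * rng.standard_normal(N))
           + unmeasured_q * (1 + 0.10 * rng.standard_normal(N)))

mean_q = q_total.mean()
u_q = q_total.std(ddof=1)   # combined standard uncertainty of the total
u_rel = u_q / mean_q        # relative uncertainty
```

As in the paper's finding, the small unmeasured fractions dominate the combined uncertainty here despite being a minor share of the total discharge, because their relative uncertainties are much larger.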
Directory of Open Access Journals (Sweden)
Csilla Hudek
2017-03-01
One fifth of the world's population lives in mountains or in their surrounding areas. This anthropogenic pressure continues to grow with the increasing number of settlements, especially in areas connected to touristic activities, such as the Italian Alps. The process of soil formation on high mountains is particularly slow, and these soils are particularly vulnerable to degradation. In alpine regions, extreme meteorological events are increasingly frequent due to climate change, speeding up soil degradation and increasing the number of severe erosion processes, shallow landslides and debris flows. Vegetation cover plays a crucial role in the stabilization of mountain soils, thereby reducing the risk of natural hazards affecting downslope areas. Soil aggregate stability is one of the main soil properties that can be linked to soil loss processes. Soils developed on moraines in recently deglaciated areas typically have low levels of soil aggregation and a limited or discontinuous vegetation cover, making them more susceptible to degradation. However, soil structure can be influenced by the root system of the vegetation. Roots are actively involved in the formation of water-stable soil aggregates, increasing the stability of the soil and its nutrient content. In the present study, we aim to quantify the effect of the root system of alpine vegetation on the soil aggregate stability of the forefield of the Lys glacier in the Aosta Valley (NW Italy). This proglacial area provides the opportunity to study how the root systems of ten pioneer alpine species from different successional stages can contribute to soil development and soil stabilization. To quantify the aggregate stability of root-permeated soils, a modified wet sieving method was employed. The root length per soil volume of the different species was also determined and later correlated with the aggregate stability results. The results showed that soil aggregate
Brown, Tristan R.
The revised Renewable Fuel Standard requires the annual blending of 16 billion gallons of cellulosic biofuel by 2022 from zero gallons in 2009. The necessary capacity investments have been underwhelming to date, however, and little is known about the likely composition of the future cellulosic biofuel industry as a result. This dissertation develops a framework for identifying and analyzing the industry's likely future composition while also providing a possible explanation for why investment in cellulosic biofuels capacity has been low to date. The results of this dissertation indicate that few cellulosic biofuel pathways will be economically competitive with petroleum on an unsubsidized basis. Of five cellulosic biofuel pathways considered under 20-year price forecasts with volatility, only two achieve positive mean 20-year net present value (NPV) probabilities. Furthermore, recent exploitation of U.S. shale gas reserves and the subsequent fall in U.S. natural gas prices have negatively impacted the economic competitiveness of all but two of the cellulosic biofuel pathways considered; only two of the five pathways achieve substantially higher 20-year NPVs under a post-shale gas economic scenario relative to a pre-shale gas scenario. The economic competitiveness of cellulosic biofuel pathways with petroleum is reduced further when considered under price uncertainty in combination with realistic financial assumptions. This dissertation calculates pathway-specific costs of capital for five cellulosic biofuel pathway scenarios. The analysis finds that the large majority of the scenarios incur costs of capital that are substantially higher than those commonly assumed in the literature. Employment of these costs of capital in a comparative TEA greatly reduces the mean 20-year NPVs for each pathway while increasing their 10-year probabilities of default to above 80% for all five scenarios. Finally, this dissertation quantifies the economic competitiveness of six
Quantifying Urban Natural Gas Leaks from Street-level Methane Mapping: Measurements and Uncertainty
von Fischer, J. C.; Ham, J. M.; Griebenow, C.; Schumacher, R. S.; Salo, J.
2013-12-01
under different weather conditions during the summer of 2013 in Fort Collins, Colorado. Presented results will include the observed relationship between release rate (i.e., the emulated gas leak) and the observed street-level methane concentrations. Several simple algorithms will be demonstrated for estimating methane emissions using only the concentration map, vehicle speed and direction, and estimates of urban wind speed and direction. Uncertainty in the emission estimates will be estimated and correlated with landscape geometry, vehicle speed, and weather conditions. These results are a crucial first step in evaluating the feasibility of quantifying natural gas leakage from city pipeline systems using street-level methane mapping.
Jena, R; Mee, T; Kirkby, N F; Williams, M V
2015-02-01
The Malthus programme produces a model for the local and national level of radiotherapy demand for use by commissioners and radiotherapy service leads in England. The accuracy of simulation is dependent on the population cancer incidence, stage distribution and clinical decision data used by the model. In order to quantify uncertainty in the model, a global sensitivity analysis of the Malthus model was undertaken. As predicted, key decision points in the model relating to stage distribution and indications for surgical or non-surgical initial management of disease were observed to yield the strongest effect on simulated radiotherapy demand. The proportion of non-small cell lung cancer patients presenting with stage IIIB/IV disease had the largest effect on fraction burden in the four most common cancer types treated with radiotherapy, where a 1% change in stage IIIB/IV disease yielded a 1.3% change in fraction burden for lung cancer patients. A 1% change in mastectomy rate yielded a 0.37% change in fraction burden for breast cancer patients. The model is also highly sensitive to changes in the radiotherapy indications in colon and gastric cancer. Broadly, the findings of the sensitivity analysis mirror those previously published by other groups. Sensitivity analysis of the local-level population and cancer incidence data revealed that the cancer registration rate in the 50-64 year female population had the highest effect on simulation results. The analysis reveals where additional effort should be undertaken to provide accurate estimates of important parameters used in radiotherapy demand models.
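The quoted sensitivity figures (a 1% change in an input yielding, e.g., a 1.3% change in fraction burden) are elasticities, which can be estimated with a simple finite difference. The demand function below is a hypothetical stand-in, not the Malthus model.

```python
def elasticity(f, x0, rel_step=0.01):
    """Percent change in f per 1% change in x, about x0 (finite difference)."""
    y0, y1 = f(x0), f(x0 * (1 + rel_step))
    return ((y1 - y0) / y0) / rel_step

def fraction_burden(late_stage_share):
    # Hypothetical demand curve: fractions rise with the share of patients
    # presenting with late-stage disease (assumption, not the Malthus model).
    return 800 + 2600 * late_stage_share

e = elasticity(fraction_burden, 0.6)  # elasticity at a 60% late-stage share
```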
Quantifying Key Climate Parameter Uncertainties Using an Earth System Model with a Dynamic 3D Ocean
Olson, R.; Sriver, R. L.; Goes, M. P.; Urban, N.; Matthews, D.; Haran, M.; Keller, K.
2011-12-01
Climate projections hinge critically on uncertain climate model parameters such as climate sensitivity, vertical ocean diffusivity and anthropogenic sulfate aerosol forcings. Climate sensitivity is defined as the equilibrium global mean temperature response to a doubling of atmospheric CO2 concentrations. Vertical ocean diffusivity parameterizes sub-grid scale ocean vertical mixing processes. These parameters are typically estimated using Intermediate Complexity Earth System Models (EMICs) that lack a full 3D representation of the oceans, thereby neglecting the effects of mixing on ocean dynamics and meridional overturning. We improve on these studies by employing an EMIC with a dynamic 3D ocean model to estimate these parameters. We carry out historical climate simulations with the University of Victoria Earth System Climate Model (UVic ESCM) varying parameters that affect climate sensitivity, vertical ocean mixing, and effects of anthropogenic sulfate aerosols. We use a Bayesian approach whereby the likelihood of each parameter combination depends on how well the model simulates surface air temperature and upper ocean heat content. We use a Gaussian process emulator to interpolate the model output to an arbitrary parameter setting. We use a Markov chain Monte Carlo method to estimate the posterior probability distribution function (pdf) of these parameters. We explore the sensitivity of the results to prior assumptions about the parameters. In addition, we estimate the relative skill of different observations to constrain the parameters. We quantify the uncertainty in parameter estimates stemming from climate variability, model and observational errors. We explore the sensitivity of key decision-relevant climate projections to these parameters. We find that climate sensitivity and vertical ocean diffusivity estimates are consistent with previously published results. The climate sensitivity pdf is strongly affected by the prior assumptions, and by the scaling
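The Bayesian machinery described (likelihood, prior, MCMC posterior estimation) can be illustrated with a minimal Metropolis sampler on a toy problem. The linear "emulator" and synthetic observations below are assumptions standing in for the UVic ESCM and real data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic observations from a linear "emulator" y = 1.2 * theta with
# observational noise sigma = 0.1 (all assumptions).
theta_true, sigma = 3.0, 0.1
obs = 1.2 * theta_true + sigma * rng.standard_normal(50)

def log_post(theta):
    if not 0.0 < theta < 10.0:        # uniform prior on (0, 10)
        return -np.inf
    resid = obs - 1.2 * theta         # emulator prediction vs. observations
    return -0.5 * np.sum((resid / sigma) ** 2)

# Metropolis MCMC: random-walk proposals, accept/reject by posterior ratio.
samples, theta = [], 5.0
for _ in range(5000):
    prop = theta + 0.1 * rng.standard_normal()
    if np.log(rng.random()) < log_post(prop) - log_post(theta):
        theta = prop
    samples.append(theta)

posterior = np.array(samples[1000:])  # discard burn-in
```

The retained samples approximate the posterior pdf of the parameter; in the actual study a Gaussian process emulator makes each likelihood evaluation cheap enough for such a chain to be feasible.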
Gately, C.; Hutyra, L.; Wofsy, S.; Nehrkorn, T.; Sue Wing, I.
2015-12-01
Current approaches to quantifying surface-atmosphere fluxes of carbon often combine inventories of fossil fuel carbon emissions (ffCO2) and biosphere flux estimates with atmospheric measurements to drive forward and inverse atmospheric modeling at high spatial and temporal resolutions (1km grids and hourly time steps have become common). Given that over 70% of total ffCO2 emissions are attributable to urban areas, accurate estimates of ffCO2 at urban scales are critical to support emissions mitigation policies at state and local levels. A successful regional or national carbon monitoring system requires a careful quantification of the uncertainties associated with estimates of both ffCO2 and biogenic carbon fluxes. Errors in the spatial distribution of ffCO2 priors used to inform atmospheric transport models can bias posterior flux estimates, and potentially provide misleading information to decision makers on the impact of policies. Most current ffCO2 priors are either too coarsely resolved in time and space, or suffer from poorly quantified errors in spatial distributions at local scales. Accurately downscaling aggregate activity data requires a careful understanding of the potentially non-linear relationships between source processes and spatial proxies. We report on ongoing work to develop an integrated, high-resolution carbon monitoring system for the Northeastern U.S., and discuss insights into the impact of spatial scaling on model uncertainty. We use a newly developed dataset of hourly surface carbon fluxes for all human and biogenic sources at 1km grid resolution for the years 2013 and 2014. To attain these spatial and temporal resolutions, ffCO2 flux estimates were subject to varying degrees of aggregation and/or downscaling depending on the native source data for each sector. We will discuss several important examples of how the choice of scaling variables and priors influences the spatial distribution of CO2 and CH4 retrievals.
Wu, Stephen
can capture the uncertainties in EEW information and the decision process is used. This approach is called the Performance-Based Earthquake Early Warning, which is based on the PEER Performance-Based Earthquake Engineering method. Use of surrogate models is suggested to improve computational efficiency. Also, new models are proposed to add the influence of lead time into the cost-benefit analysis. For example, a value of information model is used to quantify the potential value of delaying the activation of a mitigation action for a possible reduction of the uncertainty of EEW information in the next update. Two practical examples, evacuation alert and elevator control, are studied to illustrate the ePAD framework. Potential advanced EEW applications, such as the case of multiple-action decisions and the synergy of EEW and structural health monitoring systems, are also discussed.
Pernot, Pascal
2009-01-01
Bayesian Model Calibration is used to revisit the problem of scaling factor calibration for semi-empirical correction of ab initio calculations. Particular attention is devoted to uncertainty evaluation for scaling factors, and to their effect on the prediction of observables involving scaled properties. We argue that linear models used for calibration of scaling factors are generally not statistically valid, in the sense that they are not able to fit calibration data within their uncertainty limits. Uncertainty evaluation and uncertainty propagation by statistical methods from such invalid models are doomed to failure. To relieve this problem, a stochastic function is included in the model to account for model inadequacy, according to the Bayesian Model Calibration approach. In this framework, we demonstrate that standard calibration summary statistics, such as the optimal scaling factor and the root mean square, can be safely used for uncertainty propagation only when large calibration sets of precise data are used. For s...
Ludwig, Ralf
2014-05-01
According to current climate projections, the Mediterranean area is at high risk for severe changes in the hydrological budget and extremes. With innovative scientific measures, integrated hydrological modeling and novel geophysical field monitoring techniques, the FP7 project CLIMB (Climate Induced Changes on the Hydrology of Mediterranean Basins; GA: 244151) assessed the impacts of climate change on the hydrology in seven basins in the Mediterranean area, in Italy, France, Turkey, Tunisia, Egypt and the Gaza Strip, and quantified uncertainties and risks for the main stakeholders of each test site. Intensive climate model auditing selected four regional climate models, whose data were bias-corrected and downscaled to serve as climate forcing for a set of hydrological models in each site. The results of the multi-model hydro-climatic ensemble and socio-economic factor analysis were applied to develop a risk model building upon spatial vulnerability and risk assessment. Findings generally reveal an increasing risk for water resources management in the test sites, yet at different rates and severity in the investigated sectors, with the highest impacts likely to occur in the transition months. The most important elements of this research include the following aspects: • Climate change contributes, yet in strong regional variation, to water scarcity in the Mediterranean; other factors, e.g. pollution or poor management practices, are regionally still dominant pressures on water resources. • Rain-fed agriculture needs to adapt to seasonal changes; stable or increasing productivity likely depends on additional irrigation. • Tourism could benefit in shoulder seasons, but may expect income losses in the summer peak season due to increasing heat stress. • Local & regional water managers and water users lack, as yet, awareness of climate change induced risks; emerging focus areas are supplies of domestic drinking water, irrigation, hydropower and livestock. • Data
Nonparametric Stochastic Model for Uncertainty Quantification of Short-term Wind Speed Forecasts
AL-Shehhi, A. M.; Chaouch, M.; Ouarda, T.
2014-12-01
nonparametric kernel methods. In addition to the pointwise hourly wind speed forecasts, a confidence interval is also provided, which allows the uncertainty around the forecasts to be quantified.
Aggregate surface areas quantified through laser measurements for South African asphalt mixtures
CSIR Research Space (South Africa)
Anochie-Boateng, Joseph
2012-02-01
thicknesses of five typical South African mixtures were calculated and compared with the film thicknesses calculated from the traditional Hveem method. Based on the laser scanning method, new surface area factors were developed for coarse aggregates used...
Helium Mass Spectrometer Leak Detection: A Method to Quantify Total Measurement Uncertainty
Mather, Janice L.; Taylor, Shawn C.
2015-01-01
In applications where leak rates of components or systems are evaluated against a leak rate requirement, the uncertainty of the measured leak rate must be included in the reported result. However, in the helium mass spectrometer leak detection method, the sensitivity, or resolution, of the instrument is often the only component of the total measurement uncertainty noted when reporting results. To address this shortfall, a measurement uncertainty analysis method was developed that includes the leak detector unit's resolution, repeatability, hysteresis, and drift, along with the uncertainty associated with the calibration standard. In a step-wise process, the method identifies the bias and precision components of the calibration standard, the measurement correction factor (K-factor), and the leak detector unit. Together these individual contributions to error are combined and the total measurement uncertainty is determined using the root-sum-square method. It was found that the precision component contributes more to the total uncertainty than the bias component, but the bias component is not insignificant. For helium mass spectrometer leak rate tests where unit sensitivity alone is not enough, a thorough evaluation of the measurement uncertainty such as the one presented herein should be performed and reported along with the leak rate value.
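The step-wise root-sum-square combination described above can be sketched directly. The component magnitudes below are illustrative assumptions, chosen so that, as in the paper's finding, the precision component exceeds the bias component while the bias remains non-negligible.

```python
from math import sqrt

def rss(values):
    """Root-sum-square combination of independent uncertainty components."""
    return sqrt(sum(v * v for v in values))

# Component standard uncertainties in units of leak rate (illustrative
# assumptions, not values from the paper).
precision = {"resolution": 0.3e-9, "repeatability": 0.9e-9,
             "hysteresis": 0.2e-9, "drift": 0.5e-9}
bias = {"calibration_standard": 0.6e-9, "k_factor": 0.4e-9}

u_precision = rss(precision.values())
u_bias = rss(bias.values())
u_total = rss([u_precision, u_bias])  # combined standard uncertainty
```

Reporting `u_total` alongside the measured leak rate captures far more of the measurement error than quoting instrument resolution alone.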
Quantifying and Reducing Uncertainty in Estimated Microcystin Concentrations from the ELISA Method.
Qian, Song S; Chaffin, Justin D; DuFour, Mark R; Sherman, Jessica J; Golnick, Phoenix C; Collier, Christopher D; Nummer, Stephanie A; Margida, Michaela G
2015-12-15
We discuss the uncertainty associated with a commonly used method for measuring the concentration of microcystin, a group of toxins associated with cyanobacterial blooms. Such uncertainty is rarely reported and accounted for in important drinking water management decisions. Using monitoring data from the Ohio Environmental Protection Agency and the City of Toledo, we document the sources of measurement uncertainty and recommend a Bayesian hierarchical modeling approach for reducing the measurement uncertainty. Our analysis suggests that (1) much of the uncertainty is a result of the highly uncertain "standard curve" developed during each test and (2) the uncertainty can be reduced by pooling raw test data from multiple tests. Based on these results, we suggest that estimation uncertainty can be effectively reduced through the effort of either (1) regional regulatory agencies, by sharing and combining raw test data from regularly scheduled microcystin monitoring programs, or (2) the manufacturer of the testing kit, by conducting additional tests as part of an effort to improve the testing kit.
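The pooling effect described above can be demonstrated on synthetic calibration data: fitting the standard-curve slope to raw points pooled from several test plates typically shrinks its standard error relative to a single-plate fit. The concentrations, true curve, and noise level below are assumptions, not ELISA kit values.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic ELISA-like calibration: response = a + b * concentration with
# plate-to-plate noise (true values and noise level are assumptions).
x = np.array([0.0, 0.5, 1.0, 2.0, 4.0])   # standard concentrations
true_icept, true_slope, noise = 0.1, 0.9, 0.05

def plate():
    return true_icept + true_slope * x + noise * rng.standard_normal(x.size)

def slope_se(x_all, y_all):
    """Standard error of the fitted slope for y = a + b * x."""
    A = np.vstack([np.ones_like(x_all), x_all]).T
    coef, res, *_ = np.linalg.lstsq(A, y_all, rcond=None)
    s2 = res[0] / (len(x_all) - 2)          # residual variance
    cov = s2 * np.linalg.inv(A.T @ A)       # coefficient covariance
    return np.sqrt(cov[1, 1])

se_single = slope_se(x, plate())                       # one plate alone
xs = np.tile(x, 6)                                     # six plates pooled
se_pooled = slope_se(xs, np.concatenate([plate() for _ in range(6)]))
```

In expectation the pooled standard error is smaller by roughly the square root of the number of plates, which is the mechanism behind the recommendation to share raw test data.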
A Probabilistic Framework for Quantifying Mixed Uncertainties in Cyber Attacker Payoffs
Energy Technology Data Exchange (ETDEWEB)
Chatterjee, Samrat; Tipireddy, Ramakrishna; Oster, Matthew R.; Halappanavar, Mahantesh
2015-12-28
Quantification and propagation of uncertainties in cyber attacker payoffs is a key aspect within multiplayer, stochastic security games. These payoffs may represent penalties or rewards associated with player actions and are subject to various sources of uncertainty, including: (1) cyber-system state, (2) attacker type, (3) choice of player actions, and (4) cyber-system state transitions over time. Past research has primarily focused on representing defender beliefs about attacker payoffs as point utility estimates. More recently, within the physical security domain, attacker payoff uncertainties have been represented as Uniform and Gaussian probability distributions, and mathematical intervals. For cyber-systems, probability distributions may help address statistical (aleatory) uncertainties where the defender may assume inherent variability or randomness in the factors contributing to the attacker payoffs. However, systematic (epistemic) uncertainties may exist, where the defender may not have sufficient knowledge or there is insufficient information about the attacker’s payoff generation mechanism. Such epistemic uncertainties are more suitably represented as generalizations of probability boxes. This paper explores the mathematical treatment of such mixed payoff uncertainties. A conditional probabilistic reasoning approach is adopted to organize the dependencies between a cyber-system’s state, attacker type, player actions, and state transitions. This also enables the application of probabilistic theories to propagate various uncertainties in the attacker payoffs. An example implementation of this probabilistic framework and resulting attacker payoff distributions are discussed. A goal of this paper is also to highlight this uncertainty quantification problem space to the cyber security research community and encourage further advancements in this area.
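A probability box of the kind mentioned above can be sketched as a pair of CDF envelopes. Here a payoff known only to be uniformly distributed with interval-uncertain endpoints (the intervals are assumptions, not values from the paper) is bounded above and below.

```python
import numpy as np

def uniform_cdf(x, a, b):
    return np.clip((x - a) / (b - a), 0.0, 1.0)

# Payoff assumed Uniform(a, b) with epistemically uncertain endpoints
# a in [0, 2] and b in [8, 10]. The p-box is the pair of CDF envelopes over
# all endpoint choices: the upper bound comes from the stochastically
# smallest case (a=0, b=8), the lower bound from the largest (a=2, b=10).
xs = np.linspace(-1.0, 11.0, 241)
upper = uniform_cdf(xs, 0.0, 8.0)
lower = uniform_cdf(xs, 2.0, 10.0)
```

Any admissible payoff distribution has a CDF lying between `lower` and `upper`; the gap between the envelopes represents the epistemic (knowledge) uncertainty that a single probability distribution cannot capture.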
Energy Technology Data Exchange (ETDEWEB)
Saleh, Z; Thor, M; Apte, A; Deasy, J [Memorial Sloan Kettering Cancer Center, NY City, NY (United States); Sharp, G [Massachusetts General Hospital, Boston, MA (United States); Muren, L [Aarhus University Hospital, Aarhus (Denmark)
2014-06-01
Purpose: The quantitative evaluation of deformable image registration (DIR) is currently challenging due to the lack of a ground truth. In this study we test a new method proposed for quantifying multiple-image based DIR-related uncertainties, for DIR of pelvic images. Methods: 19 patients who had previously received radiotherapy for prostate cancer were analyzed, each with 6 CT scans. Structures for the rectum and bladder, which served as ground truth, were manually delineated on the planning CT and each subsequent scan. For each patient, voxel-by-voxel DIR-related uncertainties were evaluated, following B-spline based DIR, by applying a previously developed metric, the distance discordance metric (DDM; Saleh et al., PMB (2014) 59:733). The DDM map was superimposed on the first acquired CT scan and DDM statistics were assessed, also relative to two metrics estimating the agreement between the propagated and the manually delineated structures. Results: The highest DDM values, which correspond to the greatest spatial uncertainties, were observed near the body surface and in the bowel due to the presence of gas. The mean rectal and bladder DDM values ranged from 1.1–11.1 mm and 1.5–12.7 mm, respectively. There was a strong correlation in the DDMs between the rectum and bladder (Pearson R = 0.68 for the max DDM). For both structures, DDM was correlated with the ratio between the DIR-propagated and manually delineated volumes (R = 0.74 for the max rectal DDM). The maximum rectal DDM was negatively correlated with the Dice Similarity Coefficient between the propagated and the manually delineated volumes (R = −0.52). Conclusion: The multiple-image based DDM map quantified considerable DIR variability across different structures and among patients. Besides using the DDM for quantifying DIR-related uncertainties, it could potentially be used to adjust for uncertainties in DIR-based accumulated dose distributions.
Constantine, Paul; Larsson, Johan; Iaccarino, Gianluca
2014-01-01
We present a computational analysis of the reactive flow in a hypersonic scramjet engine with emphasis on effects of uncertainties in the operating conditions. We employ a novel methodology based on active subspaces to characterize the effects of the input uncertainty on the scramjet performance. The active subspace re-parameterizes the operating conditions from seven well characterized physical parameters to a single derived active variable. This dimension reduction enables otherwise intractable---given the cost of the simulation---computational studies to quantify uncertainty; bootstrapping provides confidence intervals on the studies' results. In particular we (i) identify the parameters that contribute the most to the variation in the output quantity of interest, (ii) compute a global upper and lower bound on the quantity of interest, and (iii) classify sets of operating conditions as safe or unsafe corresponding to a threshold on the output quantity of interest. We repeat this analysis for two values of ...
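The active subspace construction can be sketched as an eigendecomposition of the averaged outer product of output gradients: directions with large eigenvalues drive the output variation, and a single dominant eigenvector yields the "active variable". The quadratic test function below is an assumption, not the scramjet model.

```python
import numpy as np

rng = np.random.default_rng(3)
w = np.array([2.0, 0.5, 0.1])   # dominant direction of a toy test function

def grad_f(x):
    # f(x) = (w . x)^2 / 2, so grad f = (w . x) * w  (toy model, assumption)
    return (w @ x) * w

# Average outer product of gradients over sampled operating conditions.
X = rng.uniform(-1.0, 1.0, size=(500, 3))
C = sum(np.outer(g, g) for g in map(grad_f, X)) / len(X)

# Eigenvectors with large eigenvalues span the active subspace; here the
# function varies along a single direction, so one eigenvalue dominates.
eigvals, eigvecs = np.linalg.eigh(C)   # eigenvalues in ascending order
active_dir = eigvecs[:, -1]
```

Re-parameterizing the inputs along `active_dir` reduces a multi-dimensional uncertainty study to sweeps of a single derived variable, which is what makes otherwise intractable studies affordable.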
Directory of Open Access Journals (Sweden)
Ahuja Tarushee
2011-04-01
Full Text Available Abstract Arsenic is a toxic element which creates several problems in human beings, especially when inhaled through air. Accurate and precise measurement of arsenic in suspended particulate matter (SPM) is therefore of prime importance, as it gives information about the level of toxicity in the environment, and preventive measures can be taken in the affected areas. Quality assurance is equally important in the measurement of arsenic in SPM samples before making any decision. The quality and reliability of the data for such volatile elements depend upon the measurement of uncertainty at each step involved, from sampling to analysis. Analytical results that quantify uncertainty give a measure of the confidence level of the concerned laboratory. The main objective of this study was therefore to determine the arsenic content in SPM samples with an uncertainty budget and to find out the various potential sources of uncertainty that affect the results. Keeping these facts in mind, we selected seven diverse sites of Delhi (National Capital of India) for quantification of the arsenic content in SPM samples with an uncertainty budget, from sampling by HVS to analysis by Atomic Absorption Spectrometer-Hydride Generator (AAS-HG). Many steps are involved in the measurement of arsenic in SPM samples, from sampling to the final result, and we have considered the various potential sources of uncertainty. The calculation of uncertainty is based on the ISO/IEC 17025:2005 document and the EURACHEM guideline. It has been found that the final results mostly depend on the uncertainty in measurement, mainly due to repeatability, the final volume prepared for analysis, the weighing balance and sampling by HVS. After analysis of the data from the seven diverse sites of Delhi, it has been concluded that during the period from 31st Jan. 2008 to 7th Feb. 2008 the arsenic concentration varied from 1.44 ± 0.25 to 5.58 ± 0.55 ng/m3 at the 95% confidence level (k = 2).
de Hipt, Felix Op; Diekkrüger, Bernd; Steup, Gero; Rode, Michael
2016-04-01
Water-driven soil erosion, transport and deposition take place on different spatial and temporal scales. Therefore, related measurements are complex and require process understanding and a multi-method approach combining different measurement methods with soil erosion modeling. Turbidity as a surrogate measurement for suspended sediment concentration (SSC) in rivers is frequently used to overcome the disadvantages of conventional sediment measurement techniques regarding temporal resolution and continuity. The use of turbidity measurements requires a close correlation between turbidity and SSC. Depending on the number of samples collected, the measured range and the variations in the measurements, SSC-turbidity curves are subject to uncertainty. This uncertainty has to be determined in order to assess the reliability of measurements used to quantify catchment sediment yields and to calibrate soil erosion models. This study presents the calibration results from a sub-humid catchment in south-western Burkina Faso and investigates the related uncertainties. Daily in situ measurements of SSC manually collected at one turbidity station and the corresponding turbidity readings are used to obtain the site-specific calibration curve. The discharge is calculated based on an empirical water level-discharge relationship. The derived regression equations are used to define prediction intervals for SSC and discharge. The uncertainty of the suspended sediment load time series is influenced by the corresponding uncertainties of SSC and discharge. This study shows that the determination of uncertainty is relevant when turbidity-based measurements of suspended sediment loads are used to quantify catchment erosion and to calibrate erosion models.
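The calibration step described above can be illustrated with a minimal sketch: fit a linear SSC-turbidity rating curve by least squares and compute the classical 95% prediction interval for a new turbidity reading. The data, the linear form and the noise level below are invented for illustration and are not the Burkina Faso measurements.

```python
import numpy as np

# Illustrative sketch (not the study's data): fit a site-specific
# SSC-turbidity rating curve and derive a 95% prediction interval
# for the SSC estimated from a new turbidity reading.
rng = np.random.default_rng(1)
turbidity = rng.uniform(10, 500, 60)            # NTU
ssc_true = 1.8 * turbidity + 5.0                # assumed linear relation
ssc = ssc_true + rng.normal(0, 25, 60)          # measurement noise, mg/L

# Ordinary least-squares fit: SSC = a*T + b
A = np.column_stack([turbidity, np.ones_like(turbidity)])
coef, *_ = np.linalg.lstsq(A, ssc, rcond=None)
a, b = coef
n = len(ssc)
resid = ssc - (a * turbidity + b)
s = np.sqrt(resid @ resid / (n - 2))            # residual standard error

# 95% prediction interval at a new turbidity T0 (t-quantile ~ 2 for n = 60)
T0 = 250.0
tbar = turbidity.mean()
se_pred = s * np.sqrt(1 + 1/n + (T0 - tbar)**2 / ((turbidity - tbar)**2).sum())
lo, hi = a*T0 + b - 2*se_pred, a*T0 + b + 2*se_pred
print(f"SSC({T0} NTU) = {a*T0 + b:.1f} mg/L, 95% PI [{lo:.1f}, {hi:.1f}]")
```

The prediction interval, rather than the narrower confidence interval for the fitted line, is the quantity that propagates into the uncertainty of the sediment load time series.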
Campbell, J. L.; Yanai, R. D.; Green, M.; Likens, G. E.; Buso, D. C.; See, C.; Barr, B.
2013-12-01
Small watersheds are hydrologically distinct ecological units that integrate chemical, physical and biological processes. The basic premise of the small watershed approach is that the flux of chemical elements in and out of watersheds can be used to evaluate nutrient gains or losses. In paired watershed studies, following a pre-treatment calibration period, a treated watershed is compared with a reference watershed, enabling evaluation of the treatment effect on nutrient flux and cycling. This approach has provided invaluable insight into how ecosystems function and respond to both natural and human disturbances. Despite the great advances that have been made using this approach, the method is often criticized because the treatments are usually not replicated. The reason for this lack of replication is that it is often difficult to identify suitable replicate watersheds, and replication is expensive due to the large scale of these studies. In cases where replication is not possible, traditional statistical approaches cannot be applied. Uncertainty analysis can help address this issue because it enables reporting of statistical confidence even when replicates are not used. However, estimating uncertainty can be challenging because it is difficult to identify and quantify sources of uncertainty, there are many different possible approaches, and the methods can be computationally challenging. In this study, we used uncertainty analysis to evaluate changes in the net hydrologic flux (inputs in precipitation minus outputs in stream water) of calcium following a whole-tree harvest at the Hubbard Brook Experimental Forest in New Hampshire, USA. In the year following the harvest, there was a large net loss of calcium (20 kg/ha/yr) in the treated watershed compared to the reference (5 kg/ha/yr). Net losses in the treated watershed have declined over the 26 years after the harvest, but still remain elevated compared to the reference. We used uncertainty analysis to evaluate whether these differences between the treated and reference watersheds are statistically significant.
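The core idea above, reporting statistical confidence without replicate watersheds, can be sketched by propagating measurement uncertainty into the net hydrologic flux by Monte Carlo. The flux magnitudes and the assumed 10% relative errors below are invented for illustration and are not the Hubbard Brook values.

```python
import numpy as np

# Sketch: Monte Carlo propagation of measurement uncertainty into the
# net hydrologic flux (precipitation input minus stream output), so a
# confidence interval can be reported without replicate watersheds.
rng = np.random.default_rng(6)
n = 10000

# Assume ~10% relative uncertainty on each annual Ca flux (kg/ha/yr)
input_flux = rng.normal(2.0, 0.2, n)       # precipitation input
output_flux = rng.normal(22.0, 2.2, n)     # stream-water output

net_flux = input_flux - output_flux        # net loss is negative
lo, hi = np.percentile(net_flux, [2.5, 97.5])
print(f"net flux = {net_flux.mean():.1f} kg/ha/yr, 95% CI [{lo:.1f}, {hi:.1f}]")
```

If the resulting interval excludes zero (or excludes the reference watershed's interval), the net loss can be called significant even though the treatment was unreplicated.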
Adamson, M W; Morozov, A Y; Kuzenkov, O A
2016-09-01
Mathematical models in biology are highly simplified representations of a complex underlying reality and there is always a high degree of uncertainty with regard to model function specification. This uncertainty becomes critical for models in which the use of different functions fitting the same dataset can yield substantially different predictions, a property known as structural sensitivity. Thus, even if the model is purely deterministic, the uncertainty in the model functions carries through into uncertainty in model predictions, and new frameworks are required to tackle this fundamental problem. Here, we consider a framework that uses partially specified models in which some functions are not represented by a specific form. The main idea is to project the infinite-dimensional function space into a low-dimensional space taking into account biological constraints. The key question of how to carry out this projection has so far remained a serious mathematical challenge and hindered the use of partially specified models. Here, we propose and demonstrate a potentially powerful technique to perform such a projection by using optimal control theory to construct functions with the specified global properties. This approach opens up the prospect of a flexible and easy-to-use method for carrying out uncertainty analysis of biological models.
A Defence of the AR4’s Bayesian Approach to Quantifying Uncertainty
Vezer, M. A.
2009-12-01
The field of climate change research is a kimberlite pipe filled with philosophic diamonds waiting to be mined and analyzed by philosophers. Within the scientific literature on climate change, there is much philosophical dialogue regarding the methods and implications of climate studies. To this date, however, discourse regarding the philosophy of climate science has been confined predominantly to scientific - rather than philosophical - investigations. In this paper, I hope to bring one such issue to the surface for explicit philosophical analysis: The purpose of this paper is to address a philosophical debate pertaining to the expressions of uncertainty in the Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report (AR4), which, as will be noted, has received significant attention in scientific journals and books, as well as sporadic glances from the popular press. My thesis is that the AR4’s Bayesian method of uncertainty analysis and uncertainty expression is justifiable on pragmatic grounds: it overcomes problems associated with vagueness, thereby facilitating communication between scientists and policy makers such that the latter can formulate decision analyses in response to the views of the former. Further, I argue that the most pronounced criticisms against the AR4’s Bayesian approach, which are outlined below, are misguided. §1 Introduction Central to AR4 is a list of terms related to uncertainty that in colloquial conversations would be considered vague. The IPCC attempts to reduce the vagueness of its expressions of uncertainty by calibrating uncertainty terms with numerical probability values derived from a subjective Bayesian methodology. This style of analysis and expression has stimulated some controversy, as critics reject as inappropriate and even misleading the association of uncertainty terms with Bayesian probabilities. [...] The format of the paper is as follows. The investigation begins (§2) with an explanation of
Quantifying uncertainties in the high-energy neutrino cross-section
Indian Academy of Sciences (India)
A Cooper-Sarkar; P Mertsch; S Sarkar
2012-11-01
The predictions for high-energy neutrino and antineutrino deep inelastic scattering cross-sections are compared within the conventional DGLAP formalism of next-to-leading order QCD, using the latest parton distribution functions (PDF) such as CT10, HERAPDF1.5 and MSTW08 and taking account of PDF uncertainties. From this, a benchmark cross-section and uncertainty are derived which are consistent with the results obtained earlier using the ZEUS-S PDFs. Use of this benchmark is advocated for analysing data from neutrino telescopes, in order to facilitate comparison between their results.
Quantifying uncertainties in N2O emission due to N fertilizer application in cultivated areas.
Directory of Open Access Journals (Sweden)
Aurore Philibert
Full Text Available Nitrous oxide (N2O) is a greenhouse gas with a global warming potential approximately 298 times greater than that of CO2. In 2006, the Intergovernmental Panel on Climate Change (IPCC) estimated N2O emission due to synthetic and organic nitrogen (N) fertilization at 1% of applied N. We investigated the uncertainty on this estimated value, by fitting 13 different models to a published dataset including 985 N2O measurements. These models were characterized by (i) the presence or absence of the explanatory variable "applied N", (ii) the function relating N2O emission to applied N (exponential or linear), (iii) fixed or random background (i.e. N2O emission in the absence of N application) and (iv) fixed or random applied N effect. We calculated ranges of uncertainty on N2O emissions from a subset of these models, and compared them with the uncertainty ranges currently used in the IPCC-Tier 1 method. The exponential models outperformed the linear models, and models including one or two random effects outperformed those including fixed effects only. The use of an exponential function rather than a linear function has an important practical consequence: the emission factor is not constant and increases as a function of applied N. Emission factors estimated using the exponential function were lower than 1% when the amount of N applied was below 160 kg N ha-1. Our uncertainty analysis shows that the uncertainty range currently used by the IPCC-Tier 1 method could be reduced.
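The practical consequence noted above, that an exponential emission model implies an emission factor that grows with applied N, can be sketched as follows. The coefficients below are invented for illustration (chosen so the emission factor crosses 1% near 160 kg N ha-1) and are not the values fitted to the 985-measurement dataset.

```python
import numpy as np

# Sketch: with an exponential model E(N) = exp(b0 + b1*N) for N2O
# emission, the emission factor EF(N) = (E(N) - E(0)) / N is not a
# constant but rises with applied N. Coefficients are illustrative only.
b0, b1 = 0.0, 0.006      # exp(b0) = 1 kg N2O-N/ha background; growth rate

def emission(N):
    return np.exp(b0 + b1 * N)               # kg N2O-N / ha

def emission_factor(N):
    return (emission(N) - emission(0)) / N   # fraction of applied N emitted

for N in (80, 160, 240):
    print(f"N = {N:3d} kg/ha  EF = {100 * emission_factor(N):.2f}%")
```

Under a linear model the same ratio would be a constant, which is exactly the IPCC-Tier 1 assumption the study questions.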
Quantifying Uncertainty in Distributed Flash Flood Forecasting for a Semiarid Region
Samadi, S.; Pourreza Bilondi, M.; Ghahraman, B.; Akhoond-Ali, A. M.
2015-12-01
Reliability of semiarid flood forecasting is affected by several factors, including rainfall forcing, the system input-state-output behavior, initial soil moisture conditions and model parameters and structure. This study employed Bayesian frameworks to enable the explicit description and assessment of parameter and predictive uncertainty for convective rainfall-runoff modeling of a semiarid watershed system in Iran. We examined the performance and uncertainty analysis of a mixed conceptual and physically based rainfall-runoff model (AFFDEF) linked with three Markov chain Monte Carlo (MCMC) samplers: the DiffeRential Evolution Adaptive Metropolis (DREAM), the Shuffled Complex Evolution Metropolis (SCEM-UA), and DREAM-ZS, to forecast four potential semiarid convective events with varying rainfall duration (20 mm). Calibration results demonstrated that model predictive uncertainty was heavily dominated by error and bias in the soil water storage capacity, which reflects inadequate representation of the upper soil zone processes by the hydrological model. Furthermore, parameters associated with infiltration and interception capacity, along with the contributing area threshold for the digital river network, were identified as the key model parameters, being most influential on the modeled flood hydrograph. In addition, parameter inference with DREAM showed behavior consistent with the a priori assumption, closely matching the inferred error distribution to the empirical distribution of the model residual and indicating that the model parameters are well identified. The DREAM results further revealed that the uncertainty associated with rainfall of lower magnitudes was higher than that for rainfall of higher magnitudes. Uncertainty quantification of semiarid convective events provided significant insights into the mathematical relationship and characteristics of short-term forecast error and may be applicable to other semiarid watershed systems with similar rainfall-runoff processes.
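The Bayesian calibration idea behind samplers such as DREAM and SCEM-UA can be sketched with a plain random-walk Metropolis sampler on a toy one-parameter runoff model; the AFFDEF model, the real forcing data and the adaptive samplers themselves are all replaced by invented stand-ins here.

```python
import numpy as np

# Minimal Metropolis sampler illustrating Bayesian rainfall-runoff
# calibration: infer the posterior of a runoff coefficient k from
# synthetic observations with Gaussian errors (sd = 0.5).
rng = np.random.default_rng(2)

def model(k, rain):                      # toy runoff: scaled rainfall
    return k * rain

rain = rng.uniform(0, 20, 30)
obs = model(0.35, rain) + rng.normal(0, 0.5, 30)   # synthetic observations

def log_post(k):                         # flat prior on [0, 1]
    if not 0.0 <= k <= 1.0:
        return -np.inf
    r = obs - model(k, rain)
    return -0.5 * (r @ r) / 0.5**2

chain, k = [], 0.5
lp = log_post(k)
for _ in range(5000):
    k_new = k + rng.normal(0, 0.02)      # random-walk proposal
    lp_new = log_post(k_new)
    if np.log(rng.uniform()) < lp_new - lp:
        k, lp = k_new, lp_new            # accept
    chain.append(k)

post = np.array(chain[1000:])            # discard burn-in
print(post.mean(), np.percentile(post, [2.5, 97.5]))
```

DREAM and SCEM-UA replace the fixed random-walk proposal with population-based adaptive proposals, which matters greatly for the high-dimensional, correlated parameter spaces of real hydrological models.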
Energy Technology Data Exchange (ETDEWEB)
Lei, Huan; Yang, Xiu; Zheng, Bin; Baker, Nathan A.
2015-11-05
Biomolecules exhibit conformational fluctuations near equilibrium states, inducing uncertainty in various biological properties in a dynamic way. We have developed a general method to quantify the uncertainty of target properties induced by conformational fluctuations. Using a generalized polynomial chaos (gPC) expansion, we construct a surrogate model of the target property with respect to varying conformational states. We also propose a method to increase the sparsity of the gPC expansion by defining a set of conformational “active space” random variables. With the increased sparsity, we employ the compressive sensing method to accurately construct the surrogate model. We demonstrate the performance of the surrogate model by evaluating fluctuation-induced uncertainty in solvent-accessible surface area for the bovine trypsin inhibitor protein system and show that the new approach offers more accurate statistical information than standard Monte Carlo approaches. Furthermore, the constructed surrogate model enables us to directly evaluate the target property under various conformational states, yielding a more accurate response surface than standard sparse grid collocation methods. In particular, the new method provides higher accuracy in high-dimensional systems, such as biomolecules, where sparse grid performance is limited by the accuracy of the computed quantity of interest. Our new framework is generalizable and can be used to investigate the uncertainty of a wide variety of target properties in biomolecular systems.
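The surrogate-model ingredient can be sketched in one dimension: expand an invented target property in a probabilists' Hermite (gPC) basis of a standard-normal "conformational" variable, fit the coefficients by plain least squares (the paper's compressive-sensing step in many dimensions is omitted), and read the mean and variance directly off the coefficients via orthogonality.

```python
import numpy as np

# Sketch of a degree-3 Hermite gPC surrogate for a target property.
# The target function is invented (a stand-in for e.g. solvent-
# accessible surface area as a function of conformation).
rng = np.random.default_rng(3)

def target(x):
    return 2.0 + 0.5 * x + 0.3 * (x**2 - 1)

# Probabilists' Hermite polynomials He0..He3, orthogonal under N(0,1)
def hermite_basis(x):
    return np.column_stack([np.ones_like(x), x, x**2 - 1, x**3 - 3*x])

xs = rng.standard_normal(400)                    # conformational samples
ys = target(xs) + rng.normal(0, 0.01, xs.size)   # noisy evaluations
coef, *_ = np.linalg.lstsq(hermite_basis(xs), ys, rcond=None)

# Orthogonality gives the moments for free:
# mean = c0, variance = sum_k c_k^2 * E[He_k^2] with E[He_k^2] = k!
mean = coef[0]
var = coef[1]**2 * 1 + coef[2]**2 * 2 + coef[3]**2 * 6
print(mean, var)
```

The closed-form moments are the reason a gPC surrogate gives more accurate statistics than brute-force Monte Carlo for the same number of model evaluations.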
Quantifying instrumentation background and measurement uncertainty through cross-domain coupling
Olson, D. K.; Larsen, B.; Weaver, B. P.
2016-12-01
Space physics measurement data are inherently riddled with background and noise signals that are difficult to separate from the desired true measurements. These issues become more vital to understand quantitatively as we make use of multiple instrumentation sets and satellite platforms. It is essential to develop a rigorous understanding of uncertainty in space measurements, particularly as spacecraft become more autonomous, in order to identify events of interest and predict the nominal behavior of space plasmas. We use data from the Van Allen Probes to demonstrate statistical techniques that can be applied to measurements of the HOPE and RPS instruments on the Van Allen Probes toward a solid and quantitative understanding of the true uncertainty of its measurements. This work provides a framework that predicts HOPE background measurements from the RPS foreground measurements and a definition of an event of interest from the HOPE instrumentation perspective.
Quantifying uncertainties of a Soil-Foundation Structure-Interaction System under Seismic Excitation
Energy Technology Data Exchange (ETDEWEB)
Tong, C
2008-04-07
We applied a spectrum of uncertainty quantification (UQ) techniques to the study of a two-dimensional soil-foundation-structure-interaction (2DSFSI) system (obtained from Professor Conte at UCSD) subjected to earthquake excitation. In the process we varied 19 uncertain parameters describing material properties of the structure and the soil. We present in detail the results for the different stages of our UQ analyses.
2016-07-01
affect microstructure? What is the uncertainty associated with microstructure statistics (i.e., representative volume size)? How does material... Voronoi vertices to the occurrence and size of these interdendritic features. Interestingly, with respect to the distance from the dendrite core, it...
Directory of Open Access Journals (Sweden)
B. J. Jonkheid
2012-11-01
Full Text Available The uncertainties in the cloud physical properties derived from satellite observations make it difficult to interpret model evaluation studies. In this paper, the uncertainties in the cloud water path (CWP) retrievals derived with the cloud physical properties retrieval algorithm (CPP) of the climate monitoring satellite application facility (CM-SAF) are investigated. To this end, a numerical simulator of MSG-SEVIRI observations has been developed that calculates the reflectances at 0.64 and 1.63 μm for a wide range of cloud parameter values, satellite viewing geometries and surface albedos using a plane-parallel radiative transfer model. The reflectances thus obtained are used as input to CPP, and the retrieved values of CWP are compared to the original input of the simulator. Cloud configurations considered in this paper include e.g. sub-pixel broken clouds and the simultaneous occurrence of ice and liquid water clouds within one pixel. These configurations are not represented in the CPP algorithm, and as such the associated retrieval uncertainties are potentially substantial.
It is shown that the CWP retrievals are very sensitive to the assumptions made in the CPP code. The CWP retrieval errors are generally small for unbroken single-layer clouds with COT > 10, with retrieval errors of ~3% for liquid water clouds to ~10% for ice clouds. In a multi-layer cloud, when both liquid water and ice clouds are present in a pixel, the CWP retrieval errors increase dramatically; depending on the cloud, this can lead to uncertainties of 40–80%. CWP retrievals also become more uncertain when the cloud does not cover the entire pixel, leading to errors of ~50% for cloud fractions of 0.75 and even larger errors for smaller cloud fractions. Thus, the satellite retrieval of cloud physical properties of broken clouds as well as multi-layer clouds is complicated by inherent difficulties, and the proper interpretation of such retrievals requires extra care.
Myers, Casey A; Laz, Peter J; Shelburne, Kevin B; Davidson, Bradley S
2015-05-01
Uncertainty that arises from measurement error and parameter estimation can significantly affect the interpretation of musculoskeletal simulations; however, these effects are rarely addressed. The objective of this study was to develop an open-source probabilistic musculoskeletal modeling framework to assess how measurement error and parameter uncertainty propagate through a gait simulation. A baseline gait simulation was performed for a male subject using OpenSim for three stages: inverse kinematics, inverse dynamics, and muscle force prediction. A series of Monte Carlo simulations were performed that considered intrarater variability in marker placement, movement artifacts in each phase of gait, variability in body segment parameters, and variability in muscle parameters calculated from cadaveric investigations. Propagation of uncertainty was performed by also using the output distributions from one stage as input distributions to subsequent stages. Confidence bounds (5-95%) and sensitivity of outputs to model input parameters were calculated throughout the gait cycle. The combined impact of uncertainty resulted in mean bounds that ranged from 2.7° to 6.4° in joint kinematics, 2.7 to 8.1 N m in joint moments, and 35.8 to 130.8 N in muscle forces. The impact of movement artifact was 1.8 times larger than any other propagated source. Sensitivity to specific body segment parameters and muscle parameters were linked to where in the gait cycle they were calculated. We anticipate that through the increased use of probabilistic tools, researchers will better understand the strengths and limitations of their musculoskeletal simulations and more effectively use simulations to evaluate hypotheses and inform clinical decisions.
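The propagation scheme described above, where the output distribution of one stage becomes the input distribution of the next and 5-95% bounds are read off the final samples, can be sketched with two toy stages standing in for OpenSim's inverse kinematics and inverse dynamics; all distributions and model constants below are invented.

```python
import numpy as np

# Sketch of Monte Carlo uncertainty propagation through chained model
# stages. Stage 1 ("inverse kinematics") maps marker noise to a joint
# angle; stage 2 ("inverse dynamics") maps that angle plus a body
# segment parameter to a joint moment. 0.3 m is an assumed moment arm.
rng = np.random.default_rng(4)
n = 20000

marker_error = rng.normal(0.0, 2.0, n)        # deg, marker-placement noise
segment_mass = rng.normal(10.0, 0.5, n)       # kg, body-segment parameter

joint_angle = 30.0 + marker_error             # stage 1 output distribution
joint_moment = segment_mass * 9.81 * 0.3 * np.sin(np.radians(joint_angle))

lo, hi = np.percentile(joint_moment, [5, 95]) # 5-95% confidence bounds
print(f"joint moment 5-95% bounds: [{lo:.2f}, {hi:.2f}] N m")
```

Repeating this at every time step of the gait cycle yields the kind of confidence bands on kinematics, moments and muscle forces reported in the study.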
Adaptive method for quantifying uncertainty in discharge measurements using velocity-area method.
Despax, Aurélien; Favre, Anne-Catherine; Belleville, Arnaud
2015-04-01
Streamflow information provided by hydrometric services such as EDF-DTG allows real time monitoring of rivers, streamflow forecasting, paramount hydrological studies and engineering design. In open channels, the traditional approach to measure flow uses a rating curve, which is an indirect method to estimate the discharge in rivers based on water level and punctual discharge measurements. A large proportion of these discharge measurements are performed using the velocity-area method; it consists of integrating flow velocities and depths through the cross-section [1]. The velocity field is estimated by choosing a number m of verticals, distributed across the river, where the vertical velocity profile is sampled by a current-meter at ni different depths. Uncertainties coming from several sources are related to the measurement process. To date, the framework for assessing uncertainty in velocity-area discharge measurements is the method presented in the ISO 748 standard [2], which follows the GUM [3] approach. The equation for the combined uncertainty in measured discharge u(Q), at the 68% level of confidence, proposed by the ISO 748 standard is expressed as: u²(Q) = u²m + u²s + [ Σi qi² ( u²(Bi) + u²(Di) + u²p(Vi) + (1/ni) × ( u²c(Vi) + u²exp(Vi) ) ) ] / ( Σi qi )², where qi is the partial discharge in vertical i, and Bi, Di and Vi are its width, depth and mean velocity.
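A minimal numeric sketch of the ISO 748 combination, with invented per-vertical values (not EDF-DTG data):

```python
import numpy as np

# Sketch of the ISO 748 combined relative uncertainty for a
# velocity-area gauging. Relative standard uncertainties (fractions):
# width u(B), depth u(D), velocity-profile sampling u_p(V), and the
# per-point terms u_c(V) (current meter) and u_exp(V) (exposure time)
# averaged over n_i points; u_m and u_s are the number-of-verticals
# and systematic terms. All numbers below are illustrative.
q = np.array([2.0, 5.0, 8.0, 5.0, 2.0])   # partial discharges, m^3/s
n_pts = np.array([2, 3, 3, 3, 2])         # current-meter points per vertical
uB, uD, uP = 0.01, 0.01, 0.05
uC, uE = 0.02, 0.03
um, us = 0.01, 0.005

per_vertical = q**2 * (uB**2 + uD**2 + uP**2 + (uC**2 + uE**2) / n_pts)
uQ = np.sqrt(um**2 + us**2 + per_vertical.sum() / q.sum()**2)
print(f"combined relative uncertainty u(Q) = {100 * uQ:.1f}% (68% level)")
```

Note how the q² weights make the uncertainty of the high-discharge central verticals dominate the combination.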
Directory of Open Access Journals (Sweden)
I.-W. Jung
2011-02-01
Full Text Available How will the combined impacts of land use change, climate change, and hydrologic modeling influence changes in urban flood frequency, and what is the main uncertainty source of the results? Will such changes differ by catchment with different degrees of current and future urban development? We attempt to answer these questions in two catchments with different degrees of urbanization, the Fanno catchment with 84% urban land use and the Johnson catchment with 36% urban land use, both located in the Pacific Northwest of the US. Five uncertainty sources – general circulation model (GCM) structures, future greenhouse gas (GHG) emission scenarios, land use change scenarios, natural variability, and hydrologic model parameters – are considered to compare the relative source of uncertainty in flood frequency projections. Two land use change scenarios, conservation and development, representing possible future land use changes are used for analysis. Results show the highest increase in flood frequency under the combination of medium-high GHG emission (A1B) and development scenarios, and the lowest increase under the combination of low GHG emission (B1) and conservation scenarios. Although the combined impact is more significant to flood frequency change than individual scenarios, it does not linearly increase flood frequency. Changes in flood frequency are more sensitive to climate change than land use change in the two catchments for the 2050s (2040–2069). Shorter term flood frequency change, for 2 and 5 year floods, is highly affected by GCM structure, while longer term flood frequency change above 25 year floods is dominated by natural variability. Projected flood frequency changes more significantly in Johnson creek than Fanno creek. This result indicates that, under expected climate change conditions, adaptive urban planning based on the conservation scenario could be more effective in the less developed Johnson catchment than in the already developed Fanno catchment.
Smeltzer, C. D.; Wang, Y.; Boersma, F.; Celarier, E. A.; Bucsela, E. J.
2013-12-01
We investigate the effects of retrieval radiation schemes and parameters on trend analysis using tropospheric nitrogen dioxide (NO2) vertical column density (VCD) measurements over the United States. Ozone Monitoring Instrument (OMI) observations from 2005 through 2012 are used in this analysis. We investigated two radiation schemes, provided by the National Aeronautics and Space Administration (NASA TOMRAD) and the Koninklijk Nederlands Meteorologisch Instituut (KNMI DAK). In addition, we analyzed trend dependence on radiation parameters, including surface albedo and viewing geometry. The cross-track mean VCD average difference is 10-15% between the two radiation schemes in 2005. As the OMI anomaly developed and progressively worsened, the difference between the two schemes became larger. Furthermore, applying surface albedo measurements from the Moderate Resolution Imaging Spectroradiometer (MODIS) leads to increases of estimated NO2 VCD trends over high-emission regions. We find that the uncertainties of OMI-derived NO2 VCD trends can be reduced by up to a factor of 3 by selecting OMI cross-track rows on the basis of their performance over the ocean [see abstract figure]. Comparison of OMI tropospheric VCD trends to those estimated from the EPA surface NO2 observations indicates that using MODIS surface albedo data and a narrower selection of OMI cross-track rows greatly improves the agreement of estimated trends between satellite and surface data. This figure shows the reduction of uncertainty in the OMI NO2 trend obtained by selecting OMI cross-track rows based on their performance over the ocean. With this technique, uncertainties within the seasonal trend may be reduced by a factor of 3 or more (blue) compared with only removing the anomalous rows: considering OMI cross-track rows 4-24 (red).
Directory of Open Access Journals (Sweden)
B. J. Jonkheid
2012-02-01
Full Text Available The uncertainties in the cloud physical properties derived from satellite observations make it difficult to interpret model evaluation studies. In this paper, the uncertainties in the cloud water path (CWP) retrievals derived with the cloud physical properties retrieval algorithm (CPP) of the climate monitoring satellite application facility (CM-SAF) are investigated. To this end, a numerical simulator of MSG-SEVIRI observations was developed that calculates the reflectances at 0.64 and 1.63 μm for a wide range of cloud parameters, satellite viewing geometries and surface albedos. These reflectances are used as input to CPP, and the retrieved values of CWP are compared to the original input of the simulator.
It is shown that the CWP retrievals are very sensitive to the assumptions made in the CPP code. The CWP retrieval errors are generally small for unbroken single-phase clouds with COT >10, with retrieval errors of ~3% for liquid water clouds to ~10% for ice clouds. When both liquid water and ice clouds are present in a pixel, the CWP retrieval errors increase dramatically; depending on the cloud, this can lead to uncertainties of 40–80%. CWP retrievals also become more uncertain when the cloud does not cover the entire pixel, leading to errors of ~50% for cloud fractions of 0.75 and even larger errors for smaller cloud fractions. Thus, the satellite retrieval of cloud physical properties of broken clouds and multi-phase clouds is complicated by inherent difficulties, and the proper interpretation of such retrievals requires extra care.
Quantifying the Contribution of Post-Processing in Computed Tomography Measurement Uncertainty
DEFF Research Database (Denmark)
Stolfi, Alessandro; Thompson, Mary Kathryn; Carli, Lorenzo
2016-01-01
by calculating the standard deviation of 10 repeated measurement evaluations on the same data set. The evaluations were performed on an industrial assembly. Each evaluation includes several dimensional and geometrical measurands that were expected to have different responses to the various post-processing settings. It was found that the definition of the datum system had the largest impact on the uncertainty, with a standard deviation of a few microns. The surface determination and data fitting had smaller contributions, with sub-micron repeatability.
Lamelas, Cristina; Avaltroni, Fabrice; Benedetti, Marc; Wilkinson, Kevin J; Slaveykova, Vera I
2005-01-01
The Pb and Cd binding capacity of alginates were quantified by the determination of their complex stability constants and the concentration of complexing sites using H+, Pb2+, or Cd2+ selective electrodes in both static and dynamic titrations. Centrifugation filter devices (30 kDa filter cutoff), followed by inductively coupled plasma mass spectrometry (ICP-MS) measurements of lead or cadmium in the filtrates, were used to validate the results. The influence of ionic strength, pH, and the metal-to-alginate ratio was determined for a wide range of metal concentrations. Because of their polyelectrolytic properties, alginates may adopt different conformations depending on the physicochemistry of the medium, including the presence of metals. Therefore, molecular diffusion coefficients of the alginate were determined by fluorescence correlation spectroscopy under the same conditions of pH, ionic strength, and metal-to-alginate ratios that were used for the metal binding studies. The complexation and conformational properties of the alginate were related within the framework of the nonideal competitive adsorption isotherm (NICA) combined with a Donnan approach to account for both intrinsic and electrostatic contributions.
Quantifying the relative contributions of different solute carriers to aggregate substrate transport
Taslimifar, Mehdi; Oparija, Lalita; Verrey, Francois; Kurtcuoglu, Vartan; Olgac, Ufuk; Makrides, Victoria
2017-01-01
Determining the contributions of different transporter species to overall cellular transport is fundamental for understanding the physiological regulation of solutes. We calculated the relative activities of Solute Carrier (SLC) transporters using the Michaelis-Menten equation and global fitting to estimate the normalized maximum transport rate for each transporter (Vmax). Data input were the normalized measured uptake of the essential neutral amino acid (AA) L-leucine (Leu) from concentration-dependence assays performed using Xenopus laevis oocytes. Our methodology was verified by calculating Leu and L-phenylalanine (Phe) data in the presence of competitive substrates and/or inhibitors. Among 9 potentially expressed endogenous X. laevis oocyte Leu transporter species, the activities of only the uniporters SLC43A2/LAT4 (and/or SLC43A1/LAT3) and the sodium symporter SLC6A19/B0AT1 were required to account for total uptake. Furthermore, Leu and Phe uptake by heterologously expressed human SLC6A14/ATB0,+ and SLC43A2/LAT4 was accurately calculated. This versatile systems biology approach is useful for analyses where the kinetics of each active protein species can be represented by the Hill equation. Moreover, it is applicable even in the absence of protein expression data. It could potentially be applied, for example, to quantify drug transporter activities in target cells to improve specificity. PMID:28091567
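The fitting approach can be sketched as follows: with the Km of each carrier held fixed, total uptake is linear in the per-carrier Vmax values, so a global least-squares fit recovers each carrier's relative contribution. The two-carrier setup, the Km values and the data below are invented for illustration, not the oocyte measurements.

```python
import numpy as np

# Sketch: total uptake modeled as a sum of Michaelis-Menten terms, one
# per transporter species; normalized Vmax values recovered by global
# fitting of concentration-dependence data.
rng = np.random.default_rng(5)

Km1, Km2 = 50.0, 600.0            # assumed affinities (uM) of two carriers

def total_uptake(S, v1, v2):
    return v1 * S / (Km1 + S) + v2 * S / (Km2 + S)

S = np.array([10, 30, 100, 300, 1000, 3000], dtype=float)   # uM substrate
data = total_uptake(S, 0.7, 0.3) + rng.normal(0, 0.005, S.size)

# With Km fixed, the model is linear in (v1, v2): plain least squares
A = np.column_stack([S / (Km1 + S), S / (Km2 + S)])
(v1, v2), *_ = np.linalg.lstsq(A, data, rcond=None)
share1 = v1 / (v1 + v2)           # relative contribution of carrier 1
print(v1, v2, share1)
```

When the Km values themselves must also be fitted, the problem becomes nonlinear and a global optimizer replaces the single linear solve, but the decomposition into per-species terms is unchanged.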
Kennedy, M.C.; Voet, van der H.; Roelofs, V.J.; Roelofs, W.; Glass, C.R.; Boer, de W.J.; Kruisselbrink, J.W.; Hart, A.D.M.
2015-01-01
Risk assessments for human exposures to plant protection products (PPPs) have traditionally focussed on single routes of exposure and single compounds. Extensions to estimate aggregate (multi-source) and cumulative (multi-compound) exposure from PPPs present many new challenges and additional uncert
Gopinathan, Devaraj; Roy, Debasish; Rajendran, Kusala; Guillas, Serge; Dias, Frederic
2016-01-01
Usual inversion for earthquake source parameters from tsunami wave data incorporates subjective elements. Noisy and possibly insufficient data also result in instability and non-uniqueness in most deterministic inversions. Here we employ satellite altimetry data for the 2004 Sumatra-Andaman tsunami event to invert the source parameters. Using a finite fault model that represents the extent of rupture and the geometry of the trench, we perform a non-linear joint inversion of the slips, rupture velocities and rise times with minimal a priori constraints. Despite persistently good waveform fits, large variance and skewness in the joint parameter distribution constitute a remarkable feature of the inversion. These uncertainties suggest the need for objective inversion strategies that should incorporate more sophisticated physical models in order to significantly improve the performance of early warning systems.
THE SENSITIVITY ANALYSIS AS A METHOD OF QUANTIFYING THE DEGREE OF UNCERTAINTY
Directory of Open Access Journals (Sweden)
Tatiana MANOLE
2014-08-01
Full Text Available In this article the author discusses the uncertainty inherent in any proposed investment or government policy. Given this uncertainty, it is necessary to analyse the projects proposed for implementation and to choose, from among the alternatives, the most advantageous project. This is a general principle. Financial science provides researchers with a set of tools for identifying the best project. The author examines three projects with the same features, applying to them various methods of financial analysis, such as net present value (NPV), the discount rate (SAR), recovery time (TR), additional income (VS) and return on investment (RR). All these tools of financial analysis belong to cost-benefit analysis (CBA) and aim to make efficient use of the public money invested in order to achieve successful performance.
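Two of the appraisal tools named in the abstract above, net present value and recovery (payback) time, reduce to short calculations. The Python sketch below uses invented cash flows and discount rate purely for illustration.

```python
# Sketch of two project-appraisal tools: net present value (NPV) and simple
# payback time. The cash flows and 8% discount rate are illustrative only.

def npv(rate, cashflows):
    """NPV of cashflows, where cashflows[0] is the initial outlay at t=0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def payback_years(cashflows):
    """First year in which cumulative cash flow turns non-negative (None if never)."""
    total = 0.0
    for t, cf in enumerate(cashflows):
        total += cf
        if total >= 0:
            return t
    return None

# Hypothetical project: 1000 invested now, 400 returned in each of 4 years
project = [-1000.0, 400.0, 400.0, 400.0, 400.0]
value = npv(0.08, project)
```

Between projects with identical features, the one with the higher NPV (and, secondarily, the shorter payback) would be preferred under the cost-benefit framework described.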
The sensitivity analysis as a method of quantifying the degree of uncertainty
Directory of Open Access Journals (Sweden)
Manole Tatiana
2013-01-01
Full Text Available In this article the author discusses the uncertainty inherent in any proposed investment or government policy. Given this uncertainty, it is necessary to analyse the projects proposed for implementation and to choose, from among the alternatives, the most advantageous project. This is a general principle. Financial science provides researchers with a set of tools for identifying the best project. The author examines three projects with the same features, applying to them various methods of financial analysis, such as net present value (NPV), the discount rate (SAR), recovery time (TR), additional income (VS) and return on investment (RR). All these tools of financial analysis belong to cost-benefit analysis (CBA) and aim to make efficient use of the public money invested in order to achieve successful performance.
Neri, Augusto; Bevilacqua, Andrea; Esposti Ongaro, Tomaso; Isaia, Roberto; Aspinall, Willy P.; Bisson, Marina; Flandoli, Franco; Baxter, Peter J.; Bertagnini, Antonella; Iannuzzi, Enrico; Orsucci, Simone; Pistolesi, Marco; Rosi, Mauro; Vitale, Stefano
2015-04-01
Campi Flegrei (CF) is an example of an active caldera containing densely populated settlements at very high risk of pyroclastic density currents (PDCs). We present here an innovative method for assessing background spatial PDC hazard in a caldera setting with probabilistic invasion maps conditional on the occurrence of an explosive event. The method encompasses the probabilistic assessment of potential vent opening positions, derived in the companion paper, combined with inferences about the spatial density distribution of PDC invasion areas from a simplified flow model, informed by reconstruction of deposits from eruptions in the last 15 ka. The flow model describes the PDC kinematics and accounts for main effects of topography on flow propagation. Structured expert elicitation is used to incorporate certain sources of epistemic uncertainty, and a Monte Carlo approach is adopted to produce a set of probabilistic hazard maps for the whole CF area. Our findings show that, in case of eruption, almost the entire caldera is exposed to invasion with a mean probability of at least 5%, with peaks greater than 50% in some central areas. Some areas outside the caldera are also exposed to this danger, with mean probabilities of invasion of the order of 5-10%. Our analysis suggests that these probability estimates have location-specific uncertainties which can be substantial. The results prove to be robust with respect to alternative elicitation models and allow the influence on hazard mapping of different sources of uncertainty, and of theoretical and numerical assumptions, to be quantified.
Applying an animal model to quantify the uncertainties of an image-based 4D-CT algorithm
Pierce, Greg; Wang, Kevin; Battista, Jerry; Lee, Ting-Yim
2012-06-01
The purpose of this paper is to use an animal model to quantify the spatial displacement uncertainties and test the fundamental assumptions of an image-based 4D-CT algorithm in vivo. Six female Landrace cross pigs were ventilated and imaged using a 64-slice CT scanner (GE Healthcare) operating in axial cine mode. The breathing amplitude pattern of the pigs was varied by periodically crimping the ventilator gas return tube during the image acquisition. The image data were used to determine the displacement uncertainties that result from matching CT images at the same respiratory phase using normalized cross correlation (NCC) as the matching criterion. Additionally, the ability to match the respiratory phase of a 4.0 cm subvolume of the thorax to a reference subvolume using only a single overlapping 2D slice from the two subvolumes was tested by varying the location of the overlapping matching image within the subvolume and examining the effect this had on the displacement relative to the reference volume. The displacement uncertainty resulting from matching two respiratory images using NCC ranged from 0.54 ± 0.10 mm per match to 0.32 ± 0.16 mm per match in the lung of the animal. The uncertainty was found to propagate in quadrature, increasing with the number of NCC matches performed. In comparison, the minimum displacement achievable if two respiratory images were matched perfectly in phase ranged from 0.77 ± 0.06 to 0.93 ± 0.06 mm in the lung. The assumption that subvolumes from separate cine scans could be matched by matching a single overlapping 2D image between the two subvolumes was validated. An in vivo animal model was developed to test an image-based 4D-CT algorithm. The uncertainties associated with using NCC to match the respiratory phase of two images were quantified and the assumption that a 4.0 cm 3D subvolume can be matched in respiratory phase by matching a single 2D image from the 3D subvolume was validated. The work in this paper shows the image-based 4D
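The quadrature propagation reported above has a simple form: if each NCC phase match contributes an independent displacement uncertainty sigma, then n chained matches accumulate sigma·sqrt(n). A short sketch, using the paper's lower per-match lung value (0.32 mm) as the input:

```python
# Sketch: propagation of independent per-match uncertainties in quadrature.
# sigma_total = sigma_per_match * sqrt(n_matches), as described in the abstract.
import math

def chained_uncertainty(sigma_per_match, n_matches):
    """Total displacement uncertainty after n independent NCC matches."""
    return sigma_per_match * math.sqrt(n_matches)

# Per-match uncertainty of 0.32 mm (value reported for lung tissue);
# the choice of four chained matches is illustrative.
sigma_4 = chained_uncertainty(0.32, 4)
```

The sqrt(n) growth is why the number of sequential matches needed to assemble a 4D volume matters for the overall spatial accuracy.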
Directory of Open Access Journals (Sweden)
Cristiano Cigagna
2015-12-01
Full Text Available Abstract Aim: This study aimed to map the concentrations of limnological variables in a reservoir employing semivariogram geostatistical techniques and Kriging estimates for unsampled locations, as well as to calculate the uncertainty associated with the estimates. Methods: We established twenty-seven sampling points distributed in a regular mesh. The concentrations of chlorophyll-a, total nitrogen and total phosphorus were then determined. Subsequently, a spatial variability analysis was performed, the semivariogram function was modeled for all variables and the variographic mathematical models were established. The main geostatistical estimation technique was ordinary Kriging. For each variable, a dense grid of points was estimated, forming the basis of the interpolated maps. Results: Through the semivariogram analysis it was possible to identify the random component as not significant for the estimation process of chlorophyll-a, and as significant for total nitrogen and total phosphorus. Geostatistical maps were produced from the Kriging for each variable and the respective standard deviations of the estimates were calculated. These measurements allowed us to map the concentrations of limnological variables throughout the reservoir. The calculation of standard deviations indicated the quality of the estimates and, consequently, the reliability of the final product. Conclusions: The use of the Kriging statistical technique to estimate a dense mesh of points, together with the error dispersion (standard deviation of the estimate), made it possible to produce quality, reliable maps of the estimated variables. Concentrations of limnological variables were in general higher in the lacustrine zone and decreased towards the riverine zone. Chlorophyll-a and total nitrogen were correlated when the grids generated by Kriging were compared. Although the use of Kriging is more laborious compared to other interpolation methods, this
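The first step of the workflow described, the empirical semivariogram, averages half the squared difference between sampled values over point pairs at a given separation. A self-contained Python sketch follows; the coordinates and chlorophyll-a values are made up for illustration.

```python
# Sketch: empirical semivariogram gamma(h), the quantity modeled before Kriging.
# gamma(h) = mean of 0.5*(z_i - z_j)^2 over point pairs separated by ~h.
# Sample coordinates and concentrations below are purely illustrative.
import math

def empirical_semivariogram(points, values, lag, tol):
    """Average semivariance over pairs whose separation is within lag +/- tol."""
    num, count = 0.0, 0
    n = len(points)
    for i in range(n):
        for j in range(i + 1, n):
            h = math.dist(points[i], points[j])
            if abs(h - lag) <= tol:
                num += 0.5 * (values[i] - values[j]) ** 2
                count += 1
    return num / count if count else None

pts = [(0, 0), (1, 0), (2, 0), (3, 0)]   # hypothetical sampling mesh
chl = [4.0, 4.5, 5.5, 6.0]               # hypothetical chlorophyll-a values
gamma1 = empirical_semivariogram(pts, chl, lag=1.0, tol=0.1)
```

Fitting a model (spherical, exponential, ...) to gamma(h) over several lags supplies the weights that ordinary Kriging uses, and the Kriging variance at each grid node provides the standard deviations mapped in the study.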
Röttgers, Rüdiger; Heymann, Kerstin; Krasemann, Hajo
2014-12-01
Measurements of total suspended matter (TSM) concentration and the discrimination of the particulate inorganic (PIM) and organic matter fraction by the loss on ignition methods are susceptible to significant and contradictory bias errors by: (a) retention of sea salt in the filter (despite washing with deionized water), and (b) filter material loss during washing and combustion procedures. Several methodological procedures are described to avoid or correct errors associated with these biases but no analysis of the final uncertainty for the overall mass concentration determination has yet been performed. Typically, the exact values of these errors are unknown and can only be estimated. Measurements were performed in coastal and estuarine waters of the German Bight that allowed the individual error for each sample to be determined with respect to a systematic mass offset. This was achieved by using different volumes of the sample and analyzing the mass over volume relationship by linear regression. The results showed that the variation in the mass offset is much larger than expected (mean mass offset: 0.85 ± 0.84 mg, range: -2.4 to 7.5 mg) and that it often leads to rather large relative errors even when TSM concentrations were high. Similarly large variations were found for the mass offset for PIM measurements. Correction with a mean offset determined with procedural control filters reduced the maximum error in the TSM concentration, and in many cases errors of only a few percent were obtained. The approach proposed here can determine the individual determination error for each sample, is independent of bias errors, can be used for TSM and PIM determination, and allows individual quality control for samples from coastal and estuarine waters. It should be possible to use the approach in oceanic or fresh water environments as well. The possibility of individual quality control will allow mass-specific optical properties to be determined with
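The offset-detection idea above reduces to an ordinary least-squares fit: filter several volumes of the same sample, regress retained mass on filtered volume, and read the systematic mass offset off the intercept while the slope gives the TSM concentration. A minimal Python sketch with invented data:

```python
# Sketch: separating a systematic filter mass offset (intercept) from the
# true TSM concentration (slope) via a mass-vs-volume linear regression.
# The volumes and masses below are synthetic, constructed for illustration.

def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    return slope, my - slope * mx

volumes = [0.2, 0.5, 1.0, 2.0]                 # litres filtered per subsample
masses = [0.85 + 12.0 * v for v in volumes]    # mg: offset 0.85 mg + 12 mg/L TSM
tsm_conc, mass_offset = linear_fit(volumes, masses)
```

With real filter weighings, the scatter of points about the fitted line also yields a per-sample uncertainty for both the offset and the concentration, which is the individual quality control the abstract describes.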
Quantifying how the full local distribution of daily precipitation is changing and its uncertainties
Stainforth, David; Chapman, Sandra; Watkins, Nicholas
2016-04-01
The study of the consequences of global warming would benefit from quantification of geographical patterns of change at specific thresholds or quantiles, and a better understanding of the intrinsic uncertainties in such quantities. For precipitation a range of indices have been developed which focus on high percentiles (e.g. rainfall falling on days above the 99th percentile) and on absolute extremes (e.g. maximum annual one day precipitation) but scientific assessments are best undertaken in the context of changes in the whole climatic distribution. Furthermore, the relevant thresholds for climate-vulnerable policy decisions, adaptation planning and impact assessments vary according to the specific sector and location of interest. We present a methodology which maintains the flexibility to provide information at different thresholds for different downstream users, both scientists and decision makers. We develop a method[1,2] for analysing local climatic timeseries to assess which quantiles of the local climatic distribution show the greatest and most robust changes in daily precipitation data. We extract from the data quantities that characterize the changes in time of the likelihood of daily precipitation above a threshold and of the amount of precipitation on those days. Our method is a simple mathematical deconstruction of how the difference between two observations from two different time periods can be assigned to the combination of natural statistical variability and/or the consequences of secular climate change. This deconstruction facilitates an assessment of how fast different quantiles of precipitation distributions are changing. This involves not only determining which quantiles and geographical locations show the greatest and smallest changes, but also those at which uncertainty undermines the ability to make confident statements about any change there may be. We demonstrate this approach using E-OBS gridded data[3] which are timeseries of local daily
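The core comparison described, the change in a given quantile of the daily precipitation distribution between two periods, can be sketched in a few lines. The data below are synthetic; a real analysis would attach an estimate of sampling variability to each quantile before claiming a robust change.

```python
# Sketch: change in an empirical quantile of daily precipitation between two
# periods. Synthetic data; nearest-rank quantiles for simplicity.

def quantile(data, q):
    """Empirical quantile by nearest rank on sorted data (0 <= q < 1)."""
    s = sorted(data)
    idx = min(len(s) - 1, int(q * len(s)))
    return s[idx]

# Hypothetical daily totals (mm): mostly dry days plus moderate and heavy days
period1 = [0.0] * 60 + [2.0] * 30 + [10.0] * 10
period2 = [0.0] * 55 + [2.0] * 30 + [14.0] * 15

change_p90 = quantile(period2, 0.9) - quantile(period1, 0.9)
```

Repeating this over a grid of quantiles q, rather than a single high percentile, gives the whole-distribution view of change the abstract argues for.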
A hierarchical bayesian model to quantify uncertainty of stream water temperature forecasts.
Directory of Open Access Journals (Sweden)
Guillaume Bal
Full Text Available Providing generic and cost effective modelling approaches to reconstruct and forecast freshwater temperature using predictors such as air temperature and water discharge is a prerequisite to understanding ecological processes underlying the impact of water temperature and of global warming on continental aquatic ecosystems. Using air temperature as a simple linear predictor of water temperature can lead to significant bias in forecasts as it does not disentangle seasonality and long term trends in the signal. Here, we develop an alternative approach based on hierarchical Bayesian statistical time series modelling of water temperature, air temperature and water discharge using seasonal sinusoidal periodic signals and time varying means and amplitudes. Fitting and forecasting performances of this approach are compared with those of simple linear regression between water and air temperatures using (i) an illustrative simulated example, and (ii) application to three French coastal streams with contrasting bio-geographical conditions and sizes. The time series modelling approach fits the data better and does not exhibit forecasting bias in long-term trends, contrary to the linear regression. This new model also allows for more accurate forecasts of water temperature than linear regression together with a fair assessment of the uncertainty around forecasting. Warming of water temperature forecast by our hierarchical Bayesian model was slower and more uncertain than that expected with the classical regression approach. These new forecasts are in a form that is readily usable in further ecological analyses and will allow weighting of outcomes from different scenarios to manage climate change impacts on freshwater wildlife.
Tian, Liang; Wilkinson, Richard; Yang, Zhibing; Power, Henry; Fagerlund, Fritjof; Niemi, Auli
2017-08-01
We explore the use of Gaussian process emulators (GPE) in the numerical simulation of CO2 injection into a deep heterogeneous aquifer. The model domain is a two-dimensional, log-normally distributed stochastic permeability field. We first estimate the cumulative distribution functions (CDFs) of the CO2 breakthrough time and the total CO2 mass using a computationally expensive Monte Carlo (MC) simulation. We then show that we can accurately reproduce these CDF estimates with a GPE, using only a small fraction of the computational cost required by traditional MC simulation. In order to build a GPE that can predict the simulator output from a permeability field consisting of thousands of values, we use a truncated Karhunen-Loève (K-L) expansion of the permeability field, which enables the application of the Bayesian functional regression approach. We perform a cross-validation exercise to gain insight into the optimization of the experimental design for selected scenarios: we find that a training set of a few hundred simulations is sufficient and that as few as 15 K-L components are adequate. Our work demonstrates that GPE with truncated K-L expansion can be effectively applied to uncertainty analysis associated with modelling of multiphase flow and transport processes in heterogeneous media.
Quantifying uncertainty in mean earthquake interevent times for a finite sample
Naylor, M.; Main, I. G.; Touati, S.
2009-01-01
Seismic activity is routinely quantified using means in event rate or interevent time. Standard estimates of the error on such mean values implicitly assume that the events used to calculate the mean are independent. However, earthquakes can be triggered by other events and are thus not necessarily independent. As a result, the errors on mean earthquake interevent times do not exhibit Gaussian convergence with increasing sample size according to the central limit theorem. In this paper we investigate how the errors decay with sample size in real earthquake catalogues and how the nature of this convergence varies with the spatial extent of the region under investigation. We demonstrate that the errors in mean interevent times, as a function of sample size, are well estimated by defining an effective sample size, using the autocorrelation function to estimate the number of pieces of independent data that exist in samples of different length. This allows us to accurately project error estimates from finite natural earthquake catalogues into the future and promotes a definition of stability wherein the autocorrelation function is not varying in time. The technique is easy to apply, and we suggest that it be routinely applied to define errors on mean interevent times as part of seismic hazard assessment studies. This is particularly important for studies that utilize small catalogue subsets (fewer than ~1000 events) in time-dependent or high spatial resolution (e.g., for catastrophe modeling) hazard assessment.
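A common way to turn the autocorrelation function into an effective sample size is n_eff = n / (1 + 2·Σ ρ_k), after which the error on the mean scales as σ/sqrt(n_eff) rather than σ/sqrt(n). The Python sketch below uses that standard form on a toy autocorrelated series; truncating the sum at a fixed lag and clipping negative ρ_k to zero are simplifying assumptions, not necessarily the estimator used in the paper.

```python
# Sketch: effective sample size for a correlated series via the
# autocorrelation function: n_eff = n / (1 + 2 * sum of positive rho_k).
# Truncation at max_lag and clipping of negative rho_k are assumptions here.
import math

def autocorrelation(x, lag):
    """Biased sample autocorrelation at the given lag."""
    n = len(x)
    m = sum(x) / n
    var = sum((v - m) ** 2 for v in x)
    cov = sum((x[i] - m) * (x[i + lag] - m) for i in range(n - lag))
    return cov / var

def effective_sample_size(x, max_lag):
    rho_sum = sum(max(autocorrelation(x, k), 0.0) for k in range(1, max_lag + 1))
    return len(x) / (1 + 2 * rho_sum)

# Strongly autocorrelated toy series: n_eff should come out well below n
series = [i % 10 for i in range(200)]
n_eff = effective_sample_size(series, max_lag=5)
std_err_naive = 1.0 / math.sqrt(len(series))      # assumes independence
std_err_corrected = 1.0 / math.sqrt(n_eff)        # uses n_eff instead
```

The corrected standard error is larger than the naive one whenever positive autocorrelation is present, which is exactly the inflation the paper attributes to earthquake triggering.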
Ludwig, Ralf
2010-05-01
According to future climate projections, Mediterranean countries are at high risk of an even more pronounced susceptibility to changes in the hydrological budget and extremes. These changes are expected to have severe direct impacts on the management of water resources. Threats include severe droughts and extreme flooding, salinization of coastal aquifers, degradation of fertile soils and desertification due to poor and unsustainable water management practices. It can be foreseen that, unless appropriate adaptation measures are undertaken, the changes in the hydrologic cycle will give rise to an increasing potential for tension and conflict among the political and economic actors in this vulnerable region. The presented project initiative CLIMB, funded under EC's 7th Framework Program (FP7-ENV-2009-1), started in January 2010. In its 4-year design, it shall analyze ongoing and future climate induced changes in hydrological budgets and extremes across the Mediterranean and neighboring regions. This is undertaken in study sites located in Sardinia, Northern Italy, Southern France, Tunisia, Egypt and the Palestinian-administered area Gaza. The work plan is targeted to selected river or aquifer catchments, where the consortium will employ a combination of novel field monitoring and remote sensing concepts, data assimilation, integrated hydrologic (and biophysical) modeling and socioeconomic factor analyses to reduce existing uncertainties in climate change impact analysis. Advanced climate scenario analysis will be employed and available ensembles of regional climate model simulations will be downscaled. This process will provide the drivers for an ensemble of hydro(-geo)logical models with different degrees of complexity in terms of process description and level of integration. The results of hydrological modeling and socio-economic factor analysis will enable the development of a GIS-based Vulnerability and Risk Assessment Tool. This tool will serve as a platform
Fenicia, Fabrizio; Reichert, Peter; Kavetski, Dmitri; Albert, Carlo
2016-04-01
The calibration of hydrological models based on signatures (e.g. Flow Duration Curves - FDCs) is often advocated as an alternative to model calibration based on the full time series of system responses (e.g. hydrographs). Signature based calibration is motivated by various arguments. From a conceptual perspective, calibration on signatures is a way to filter out errors that are difficult to represent when calibrating on the full time series. Such errors may for example occur when observed and simulated hydrographs are shifted, either on the "time" axis (i.e. left or right), or on the "streamflow" axis (i.e. above or below). These shifts may be due to errors in the precipitation input (time or amount), and if not properly accounted for in the likelihood function, may cause biased parameter estimates (e.g. estimated model parameters that do not reproduce the recession characteristics of a hydrograph). From a practical perspective, signature based calibration is seen as a possible solution for making predictions in ungauged basins. Where streamflow data are not available, it may in fact be possible to reliably estimate streamflow signatures. Previous research has for example shown how FDCs can be reliably estimated at ungauged locations based on climatic and physiographic influence factors. Typically, the goal of signature based calibration is not the prediction of the signatures themselves, but the prediction of the system responses. Ideally, the prediction of system responses should be accompanied by a reliable quantification of the associated uncertainties. Previous approaches for signature based calibration, however, do not allow reliable estimates of streamflow predictive distributions. Here, we illustrate how the Bayesian approach can be employed to obtain reliable streamflow predictive distributions based on signatures. A case study is presented, where a hydrological model is calibrated on FDCs and additional signatures. We propose an approach where the likelihood
Schaffrath, K. R.; Belmont, P.; Wheaton, J. M.
2013-12-01
High-resolution topography data (lidar) are being collected over increasingly larger geographic areas. These data contain an immense amount of information regarding the topography of bare-earth and vegetated surfaces. Repeat lidar data (collected at multiple times for the same location) enables extraction of an unprecedented level of detailed information about landscape form and function and provides an opportunity to quantify volumetric change and identify hot spots of erosion and deposition. However, significant technological and scientific challenges remain in the analysis of repeat lidar data over enormous areas (>1000 square kilometers), not the least of which involves robust quantification of uncertainty. Excessive sedimentation has been documented in the Minnesota River and many reaches of the mainstem and tributaries are listed as impaired for turbidity and eutrophication under the Clean Water Act of 1972. The Blue Earth River and its tributaries (Greater Blue Earth basin) have been identified as one of the main sources of sediment to the Minnesota River. Much of the Greater Blue Earth basin is located in Blue Earth County (1,982 square kilometers) where airborne lidar data were collected in 2005 and 2012, with average bare-earth point densities of 1 point per square meter and closer to 2 points per square meter, respectively. One of the largest floods on record (100-year recurrence interval) occurred in September 2010. A sediment budget for the Greater Blue Earth basin is being developed to inform strategies to reduce current sediment loads and better predict how the basin may respond to changing climate and management practices. Here we evaluate the geomorphic changes that occurred between 2005 and 2012 to identify hotspots of erosion and deposition, and to quantify some of the terms in the sediment budget. To make meaningful interpretations of the differences between the 2005 and 2012 lidar digital elevation models (DEMs), total uncertainty must be
Kigobe, M.; McIntyre, N.; Wheater, H. S.
2009-04-01
Interest in the application of climate and hydrological models in the Nile basin has risen in the recent past; however, the first drawback for most efforts has been the estimation of historic precipitation patterns. In this study we have applied stochastic models to infill and extend observed data sets to generate inputs for hydrological modelling. Several stochastic climate models within the Generalised Linear Modelling (GLM) framework have been applied to reproduce spatial and temporal patterns of precipitation in the Kyoga basin. A logistic regression model (describing rainfall occurrence) and a gamma distribution (describing rainfall amounts) are used to model rainfall patterns. The parameters of the models are functions of spatial and temporal covariates, and are fitted to the observed rainfall data using log-likelihood methods. Using the fitted model, multi-site rainfall sequences over the Kyoga basin are generated stochastically as a function of the dominant seasonal, climatic and geographic controls. The rainfall sequences generated are then used to drive a semi distributed hydrological model using the Soil Water and Assessment Tool (SWAT). The sensitivity of runoff to uncertainty associated with missing precipitation records is thus tested. In an application to the Lake Kyoga catchment, the performance of the hydrological model depends strongly on the spatial representation of the input precipitation patterns, model parameterisation and the performance of the GLM stochastic models used to generate the input rainfall. The results obtained so far indicate that stochastic models can be developed for several climatic regions within the Kyoga basin; and, given identification of a stochastic rainfall model, input uncertainty due to precipitation can be usefully quantified. The ways forward for rainfall modelling and hydrological simulation in Uganda and the Upper Nile are discussed. Key Words: Precipitation, Generalised Linear Models, Input Uncertainty, Soil Water
Uncertainty of soil erosion modelling using open source high resolution and aggregated DEMs
Directory of Open Access Journals (Sweden)
Arun Mondal
2017-05-01
Full Text Available The Digital Elevation Model (DEM) is one of the important parameters for soil erosion assessment. Notable uncertainties are observed in this study while using three high resolution open source DEMs. The Revised Universal Soil Loss Equation (RUSLE) model has been applied to analyse the uncertainty of soil erosion assessment using open source DEMs (SRTM, ASTER and CARTOSAT) and their increased grid spaces (pixel sizes) relative to the actual. The study area is a part of the Narmada river basin in Madhya Pradesh state, located in the central part of India, and covers 20,558 km2. The actual resolution of the DEMs is 30 m and the increased grid spaces are taken as 90, 150, 210, 270 and 330 m for this study. The vertical accuracy of the DEMs has been assessed using actual heights of sample points taken from a planimetric-survey-based map (toposheet). Elevations of the DEMs were converted to the same vertical datum, from WGS 84 to MSL (Mean Sea Level), before the accuracy assessment and modelling. Results indicate that the accuracy of the SRTM DEM, with RMSEs of 13.31, 14.51, and 18.19 m at 30, 150 and 330 m resolution respectively, is better than that of the ASTER and CARTOSAT DEMs. When the grid space of the DEMs increases, the accuracy of the elevation and of the calculated soil erosion decreases. This study presents the potential uncertainty introduced by open source high resolution DEMs into the accuracy of soil erosion assessment models. The research provides an analysis of errors in selecting DEMs using the original and increased grid spaces for soil erosion modelling.
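The two computations combined in the study, the RUSLE product A = R·K·LS·C·P for a grid cell and the RMSE used to score DEM elevations against surveyed heights, are both short. A Python sketch with illustrative numbers (not values from the paper):

```python
# Sketch: RUSLE soil loss for one grid cell, A = R * K * LS * C * P, and the
# RMSE used to compare DEM elevations with surveyed heights. Values are toy.
import math

def rusle(r, k, ls, c, p):
    """Annual soil loss per unit area from the five RUSLE factors."""
    return r * k * ls * c * p

def rmse(predicted, observed):
    """Root-mean-square error between DEM and surveyed elevations."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(predicted, observed)) / len(observed))

# Hypothetical factor values for a single cell
loss = rusle(r=450.0, k=0.3, ls=1.2, c=0.2, p=1.0)

# Hypothetical DEM vs surveyed heights (m) at three check points
dem_error = rmse([101.0, 205.0, 150.0], [100.0, 204.0, 151.0])
```

Since LS is derived from the DEM, coarsening the grid changes LS and therefore A, which is the propagation pathway the uncertainty analysis traces.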
Directory of Open Access Journals (Sweden)
Younghun Jung
2014-07-01
Full Text Available Generalized likelihood uncertainty estimation (GLUE) is one of the widely-used methods for quantifying uncertainty in flood inundation mapping. However, the subjective nature of its application involving the definition of the likelihood measure and the criteria for defining acceptable versus unacceptable models can lead to different results in quantifying uncertainty bounds. The objective of this paper is to perform a sensitivity analysis of the effect of the choice of likelihood measures and cut-off thresholds used in selecting behavioral and non-behavioral models in the GLUE methodology. By using a dataset for a reach along the White River in Seymour, Indiana, multiple prior distributions, likelihood measures and cut-off thresholds are used to investigate the role of subjective decisions in applying the GLUE methodology for uncertainty quantification related to topography, streamflow and Manning’s n. Results from this study show that a normal pdf produces a narrower uncertainty bound compared to a uniform pdf for an uncertain variable. Similarly, a likelihood measure based on water surface elevations is found to be less affected compared to other likelihood measures that are based on flood inundation area and width. Although the findings from this study are limited due to the use of a single test case, this paper provides a framework that can be utilized to gain a better understanding of the uncertainty while applying the GLUE methodology in flood inundation mapping.
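The GLUE step whose subjectivity the paper probes, keeping parameter sets whose likelihood measure exceeds a cut-off ("behavioral") and renormalizing their likelihoods to weights, can be sketched directly. The likelihood values below are toy numbers standing in for, e.g., Nash-Sutcliffe-type scores.

```python
# Sketch: the behavioral-selection step of GLUE. Parameter sets below the
# cut-off threshold get zero weight; the rest are renormalized to sum to 1.
# The likelihood values and the 0.5 threshold are illustrative choices.

def glue_weights(likelihoods, threshold):
    """Normalized weights for behavioral models; zero for non-behavioral ones."""
    behavioral = [l if l >= threshold else 0.0 for l in likelihoods]
    total = sum(behavioral)
    if total == 0.0:
        raise ValueError("no behavioral models at this threshold")
    return [b / total for b in behavioral]

likelihoods = [0.9, 0.75, 0.4, 0.1]     # one score per candidate parameter set
weights = glue_weights(likelihoods, threshold=0.5)
```

Raising or lowering the threshold reshuffles which models contribute, which is precisely why the uncertainty bounds depend on that subjective choice.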
Energy Technology Data Exchange (ETDEWEB)
Piepel, Gregory F. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Cooley, Scott K. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Kuhn, William L. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Rector, David R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Heredia-Langner, Alejandro [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)
2015-05-01
This report discusses the statistical methods for quantifying uncertainties in 1) test responses and other parameters in the Large Scale Integrated Testing (LSIT), and 2) estimates of coefficients and predictions of mixing performance from models that relate test responses to test parameters. Testing at a larger scale has been committed to by Bechtel National, Inc. and the U.S. Department of Energy (DOE) to “address uncertainties and increase confidence in the projected, full-scale mixing performance and operations” in the Waste Treatment and Immobilization Plant (WTP).
Xiao, H.; Wu, J.-L.; Wang, J.-X.; Sun, R.; Roy, C. J.
2016-11-01
Despite their well-known limitations, Reynolds-Averaged Navier-Stokes (RANS) models are still the workhorse tools for turbulent flow simulations in today's engineering analysis, design and optimization. While the predictive capability of RANS models depends on many factors, for many practical flows the turbulence models are by far the largest source of uncertainty. As RANS models are used in the design and safety evaluation of many mission-critical systems such as airplanes and nuclear power plants, quantifying their model-form uncertainties has significant implications in enabling risk-informed decision-making. In this work we develop a data-driven, physics-informed Bayesian framework for quantifying model-form uncertainties in RANS simulations. Uncertainties are introduced directly to the Reynolds stresses and are represented with compact parameterization accounting for empirical prior knowledge and physical constraints (e.g., realizability, smoothness, and symmetry). An iterative ensemble Kalman method is used to assimilate the prior knowledge and observation data in a Bayesian framework, and to propagate them to posterior distributions of velocities and other Quantities of Interest (QoIs). We use two representative cases, the flow over periodic hills and the flow in a square duct, to evaluate the performance of the proposed framework. Both cases are challenging for standard RANS turbulence models. Simulation results suggest that, even with very sparse observations, the obtained posterior mean velocities and other QoIs have significantly better agreement with the benchmark data compared to the baseline results. At most locations the posterior distribution adequately captures the true model error within the developed model form uncertainty bounds. The framework is a major improvement over existing black-box, physics-neutral methods for model-form uncertainty quantification, where prior knowledge and details of the models are not exploited. This approach has
Xue, Jie; Gui, Dongwei; Lei, Jiaqiang; Feng, Xinlong; Zeng, Fanjiang; Zhou, Jie; Mao, Donglei
2016-10-01
The existence and development of oases in arid plain areas depends mainly on the runoff generated from alpine regions. Quantifying the uncertainties of runoff simulation under climatic change is crucial for better utilization of water resources and management of oases in arid areas. In the present study, based on the ungauged Qira River Basin in Xinjiang, China, a modified version of the Delta statistical downscaling method was applied to reconstruct the monthly mean temperature (MMT), monthly accumulated precipitation (MAP), and monthly accumulated evaporation (MAE) of two target stations. Then, the uncertainty in runoff simulation, implemented using the Three-Layered Feedforward Neural Network model with the Back-Propagation learning algorithm, was quantified. The modified Delta method reproduced the MMT, MAP, and MAE time series of the two target stations very well during the calibration periods, and the uncertainty ranges were small among datasets reconstructed using data from 12 observation stations. The monthly accumulated runoff simulated with the reconstructed MMT, MAP, and MAE as input variables of the model possessed unpredictable uncertainty. Although the use of multi-data ensembles in model inputs is considered an effective way to minimize uncertainties, it could be concluded that, in this case, the efficiency of such an approach was limited because of errors in the meteorological data and deficiencies in the model's structure. The uncertainty range in the runoff peak was unable to capture the actual monthly runoff. Nevertheless, this study represents a significant attempt to reproduce historical meteorological data and to evaluate the uncertainties in runoff simulation through multiple input ensembles in an ungauged basin. It can be used as reference meteorological data for researching long-term climate change and producing runoff forecasts for assessing the risk of droughts and/or floods, as well as the existence and management of plain
Sharma, A.; Woldemeskel, F. M.; Sivakumar, B.; Mehrotra, R.
2014-12-01
We outline a new framework for assessing uncertainties in model simulations, be they hydro-ecological simulations for known scenarios, or climate simulations for assumed scenarios representing the future. This framework is illustrated here using GCM projections of future climates for hydrologically relevant variables (precipitation and temperature), with the uncertainty segregated into three dominant components: model uncertainty, scenario uncertainty (representing greenhouse gas emission scenarios), and ensemble uncertainty (representing uncertain initial conditions and states). A novel uncertainty metric, the Square Root Error Variance (SREV), is used to quantify the uncertainties involved. The SREV requires: (1) interpolating raw and corrected GCM outputs to a common grid; (2) converting these to percentiles; (3) estimating SREV for model, scenario, initial condition and total uncertainty at each percentile; and (4) transforming SREV to a time series. The outcome is a spatially varying series of SREVs associated with each model that can be used to assess how uncertain the system is at each simulated point or time. This framework, while illustrated in a climate change context, is applicable to the assessment of uncertainties in any modelling framework. The proposed method is applied to monthly precipitation and temperature from 6 CMIP3 and 13 CMIP5 GCMs across the world. For CMIP3, the B1, A1B and A2 scenarios are considered, whereas for CMIP5, RCP2.6, RCP4.5 and RCP8.5 represent low, medium and high emissions. For both CMIP3 and CMIP5, model structure is the largest source of uncertainty, which reduces significantly after correcting for biases. Scenario uncertainty increases in future, especially for temperature, due to divergence of the three emission scenarios analysed. While CMIP5 precipitation simulations exhibit a small reduction in total uncertainty over CMIP3, there is almost no reduction observed for temperature projections
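Steps (2)–(4) above can be sketched numerically: after the members are on a common grid and converted to percentiles, the spread at each time step is a root-mean-square deviation from the ensemble mean. The exact SREV formula and the quadrature combination below are simplifying assumptions for illustration, not the published definition.

```python
import numpy as np

def srev(ensemble):
    """Root-mean-square spread along the member axis.
    ensemble: (n_members, n_times), already interpolated to a common grid
    and converted to percentiles. Returns a time series of spread."""
    return np.sqrt(np.mean((ensemble - ensemble.mean(axis=0)) ** 2, axis=0))

rng = np.random.default_rng(0)
t = np.arange(120)                                          # 120 months
signal = np.sin(2.0 * np.pi * t / 12.0)                     # common seasonal cycle
models = signal + rng.normal(0, 0.3, size=(6, t.size))      # 6 GCM-like members
scenarios = signal + rng.normal(0, 0.1, size=(3, t.size))   # 3 scenario members
model_u, scen_u = srev(models), srev(scenarios)
# combine components in quadrature (an assumption, not the paper's formula)
total_u = np.sqrt(model_u ** 2 + scen_u ** 2)
```

With these toy spreads the model component dominates the total, which is the qualitative result the abstract reports for both CMIP3 and CMIP5.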
Xiao, H; Wang, J -X; Sun, R; Roy, C J
2015-01-01
Despite their well-known limitations, Reynolds-Averaged Navier-Stokes (RANS) models are still the workhorse tools for turbulent flow simulations in today's engineering applications. For many practical flows, the turbulence models are by far the most important source of uncertainty. In this work we develop an open-box, physics-informed Bayesian framework for quantifying model-form uncertainties in RANS simulations. Uncertainties are introduced directly to the Reynolds stresses and are represented with compact parameterization accounting for empirical prior knowledge and physical constraints (e.g., realizability, smoothness, and symmetry). An iterative ensemble Kalman method is used to assimilate the prior knowledge and observation data in a Bayesian framework, and to propagate them to posterior distributions of velocities and other Quantities of Interest (QoIs). We use two representative cases, the flow over periodic hills and the flow in a square duct, to evaluate the performance of the proposed framework. Si...
Zhao, Yu; Zhou, Yaduan; Qiu, Liping; Zhang, Jie
2017-09-01
A comprehensive uncertainty analysis was conducted on emission inventories for industrial sources at the national (China), provincial (Jiangsu), and city (Nanjing) scales for 2012. Based on various methods and data sources, Monte-Carlo simulation was applied at the sector level for the national inventory, and at the plant level (whenever possible) for the provincial and city inventories. The uncertainties of the national inventory were estimated at −17% to +37% (expressed as 95% confidence intervals, CIs), −21% to +35%, −19% to +34%, −29% to +40%, −22% to +47%, −21% to +54%, −33% to +84%, and −32% to +92% for SO2, NOX, CO, TSP (total suspended particles), PM10, PM2.5, black carbon (BC), and organic carbon (OC) emissions, respectively, for the whole country. At the provincial and city levels, the uncertainties of the corresponding pollutant emissions were estimated at −15% to +18%, −18% to +33%, −16% to +37%, −20% to +30%, −23% to +45%, −26% to +50%, −33% to +79%, and −33% to +71% for Jiangsu, and −17% to +22%, −10% to +33%, −23% to +75%, −19% to +36%, −23% to +41%, −28% to +48%, −45% to +82%, and −34% to +96% for Nanjing, respectively. Emission factors (or associated parameters) were identified as the biggest contributors to the uncertainties of emissions for most source categories except iron & steel production in the national inventory. Compared to the national one, the uncertainties of total emissions in the provincial and city-scale inventories were not significantly reduced for most species, with the exception of SO2. For power and other industrial boilers, the uncertainties were reduced, and plant-specific parameters played more important roles in the uncertainties. Much larger PM10 and PM2.5 emissions for Jiangsu were estimated in this provincial inventory than in other studies, implying big discrepancies in the data sources of emission factors and activity data between local and national inventories. Although the uncertainty analysis of bottom-up emission inventories at national and local scales partly supported the "top-down" estimates using observations and/or chemistry transport models, detailed investigations and
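The sector-level Monte-Carlo procedure can be illustrated with a toy calculation: emissions are the product of activity data and an emission factor, each sampled from an assumed distribution, and the 95% CI is reported relative to the central estimate. The distributions and parameters below are hypothetical, not those of the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
# hypothetical sector: emissions = activity data x emission factor
activity = rng.normal(1.0, 0.05, n)      # relatively well-known activity data
ef = rng.lognormal(0.0, 0.25, n)         # skewed emission-factor uncertainty
emissions = activity * ef
central = np.median(emissions)
lo, hi = np.percentile(emissions, [2.5, 97.5])
# 95% CI expressed relative to the central estimate, matching the
# "-17% to +37%" style of ranges quoted in the inventory studies
ci = ((lo / central - 1.0) * 100.0, (hi / central - 1.0) * 100.0)
```

The lognormal emission factor makes the interval asymmetric, with the upper bound further from the central estimate than the lower one, which is the pattern seen in the quoted ranges.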
Mascio, Jeana; Mace, Gerald G.
2017-02-01
Interpretations of remote sensing measurements collected in sample volumes containing ice-phase hydrometeors are very sensitive to assumptions regarding the distribution of mass with ice crystal dimension, otherwise known as mass-dimensional or m-D relationships. How these microphysical characteristics vary in nature is highly uncertain, resulting in significant uncertainty in algorithms that attempt to derive bulk microphysical properties from remote sensing measurements. This uncertainty extends to radar reflectivity factors forward calculated from model output, because the statistics of the actual m-D relationships in nature are not known. To investigate the variability in m-D relationships in cirrus clouds, reflectivity factors measured by CloudSat are combined with particle size distributions (PSDs) collected by coincident in situ aircraft, using an optimal estimation-based (OE) retrieval of the m-D power law. The PSDs were collected during 12 flights of the Stratton Park Engineering Company Learjet during the Small Particles in Cirrus campaign. We find that no specific habit emerges as preferred; instead, the microphysical characteristics of ice crystal populations tend to be distributed over a continuum, defying simple categorization. With the uncertainties derived from the OE algorithm, the uncertainties in forward-modeled backscatter cross section and, in turn, radar reflectivity are calculated by using a bootstrapping technique, allowing us to infer the uncertainties in forward-modeled radar reflectivity that would appropriately be applied to remote sensing simulator algorithms.
Farahmand, Touraj; Hamilton, Stuart
2016-04-01
Application of the index velocity method for computing continuous records of discharge has become increasingly common, especially since the introduction of low-cost acoustic Doppler velocity meters (ADVMs). In general, the index velocity method can be used at locations where stage-discharge methods are used, but it is especially appropriate and recommended when more than one specific discharge can be measured for a specific stage, such as in backwater and unsteady flow conditions caused by, but not limited to, the following: stream confluences, streams flowing into lakes or reservoirs, tide-affected streams, regulated streamflows (dams or control structures), or streams affected by meteorological forcing, such as strong prevailing winds. In existing index velocity modeling techniques, two models (ratings) are required: an index velocity model and a stage-area model. The outputs from each of these models, mean channel velocity (Vm) and cross-sectional area (A), are then multiplied together to compute a discharge. Mean channel velocity (Vm) can generally be determined by a multivariate parametric regression model, such as linear regression in the simplest case. The main challenges in the existing index velocity modeling techniques are: (1) preprocessing and QA/QC of continuous index velocity data and synchronizing them with discharge measurements; (2) the nonlinear relationship between mean velocity and index velocity, which is not uncommon at monitoring locations; (3) model exploration and analysis in order to find the optimal regression model predictor(s) and model type (linear vs. nonlinear and, if nonlinear, the number of parameters); (4) model changes caused by dynamical changes in the environment (geomorphic, biological) over time; (5) deployment of the final model into the Data Management Systems (DMS) for real-time discharge calculation; and (6) objective estimation of uncertainty caused by: field measurement errors; structural uncertainty; parameter uncertainty; and continuous sensor data
Kirchner, J. W.
2016-01-01
Environmental heterogeneity is ubiquitous, but environmental systems are often analyzed as if they were homogeneous instead, resulting in aggregation errors that are rarely explored and almost never quantified. Here I use simple benchmark tests to explore this general problem in one specific context: the use of seasonal cycles in chemical or isotopic tracers (such as Cl-, δ18O, or δ2H) to estimate timescales of storage in catchments. Timescales of catchment storage are typically quantified by the mean transit time, meaning the average time that elapses between parcels of water entering as precipitation and leaving again as streamflow. Longer mean transit times imply greater damping of seasonal tracer cycles. Thus, the amplitudes of tracer cycles in precipitation and streamflow are commonly used to calculate catchment mean transit times. Here I show that these calculations will typically be wrong by several hundred percent, when applied to catchments with realistic degrees of spatial heterogeneity. This aggregation bias arises from the strong nonlinearity in the relationship between tracer cycle amplitude and mean travel time. I propose an alternative storage metric, the young water fraction in streamflow, defined as the fraction of runoff with transit times of less than roughly 0.2 years. I show that this young water fraction (not to be confused with event-based "new water" in hydrograph separations) is accurately predicted by seasonal tracer cycles within a precision of a few percent, across the entire range of mean transit times from almost zero to almost infinity. Importantly, this relationship is also virtually free from aggregation error. That is, seasonal tracer cycles also accurately predict the young water fraction in runoff from highly heterogeneous mixtures of subcatchments with strongly contrasting transit-time distributions. Thus, although tracer cycle amplitudes yield biased and unreliable estimates of catchment mean travel times in heterogeneous
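The aggregation bias described above can be demonstrated numerically. For an exponential transit-time distribution with mean τ, a sinusoidal tracer cycle of frequency f is damped by the standard convolution factor 1/√(1+(2πfτ)²); mixing two subcatchments and inverting the mixed amplitude back to an apparent mean transit time exposes the large underestimate. The two-subcatchment setup below is an illustrative assumption.

```python
import numpy as np

def amplitude_ratio(tau, freq=1.0):
    # Damping of a sinusoidal tracer cycle (freq cycles/yr) by an exponential
    # transit-time distribution with mean tau (yr): standard convolution result.
    return 1.0 / np.sqrt(1.0 + (2.0 * np.pi * freq * tau) ** 2)

def apparent_mtt(ratio, freq=1.0):
    # Invert the damping factor back to an apparent ("aggregate") mean transit time.
    return np.sqrt(1.0 / ratio ** 2 - 1.0) / (2.0 * np.pi * freq)

# Two equal-discharge subcatchments with strongly contrasting mean transit times:
tau_fast, tau_slow = 0.5, 10.0              # years (hypothetical)
true_mtt = 0.5 * (tau_fast + tau_slow)      # 5.25 yr, the real catchment average
mixed_ratio = 0.5 * amplitude_ratio(tau_fast) + 0.5 * amplitude_ratio(tau_slow)
mtt_est = apparent_mtt(mixed_ratio)         # far below true_mtt: aggregation bias
```

Because the amplitude-to-transit-time relationship is strongly nonlinear, the fast subcatchment dominates the mixed amplitude and the inferred mean transit time falls several hundred percent short of the true average.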
McCollum, David L.; Jewell, Jessica; Krey, Volker; Bazilian, Morgan; Fay, Marianne; Riahi, Keywan
2016-07-01
Oil prices have fluctuated remarkably in recent years. Previous studies have analysed the impacts of future oil prices on the energy system and greenhouse gas emissions, but none have quantitatively assessed how the broader, energy-system-wide impacts of diverging oil price futures depend on a suite of critical uncertainties. Here we use the MESSAGE integrated assessment model to study several factors potentially influencing this interaction, thereby shedding light on which future unknowns hold the most importance. We find that sustained low or high oil prices could have a major impact on the global energy system over the next several decades; and depending on how the fuel substitution dynamics play out, the carbon dioxide consequences could be significant (for example, between 5 and 20% of the budget for staying below the internationally agreed 2 °C target). Whether or not oil and gas prices decouple going forward is found to be the biggest uncertainty.
Directory of Open Access Journals (Sweden)
Enrico Zio
2008-01-01
In the present work, the uncertainties affecting the safety margins estimated from thermal-hydraulic code calculations are captured quantitatively by resorting to order statistics and the bootstrap technique. The proposed framework of analysis is applied to the estimation of the safety margin, with its confidence interval, of the maximum fuel cladding temperature reached during a complete group distribution blockage scenario in an RBMK-1500 nuclear reactor.
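A minimal sketch of the combination of order statistics and bootstrapping, with hypothetical numbers (the acceptance limit, output distribution, and sample size are assumptions): with 59 independent code runs, Wilks' formula makes the sample maximum a one-sided 95%/95% tolerance bound, and bootstrapping the runs gives a confidence interval on the resulting safety margin.

```python
import numpy as np

rng = np.random.default_rng(1)
limit = 700.0                               # hypothetical acceptance limit (deg C)
# 59 code runs: by Wilks' formula the sample maximum is a one-sided
# 95%/95% tolerance bound, since 0.95**59 < 0.05
peak_temps = rng.normal(600.0, 25.0, 59)    # surrogate code outputs (deg C)
margin = limit - peak_temps.max()           # conservative safety-margin estimate

# bootstrap the sampling distribution of that order-statistics margin
boot = np.array([limit - rng.choice(peak_temps, peak_temps.size).max()
                 for _ in range(2000)])
ci_lo, ci_hi = np.percentile(boot, [2.5, 97.5])
```

Note that resampling with replacement can never exceed the original sample maximum, so the bootstrap distribution of the margin sits at or above the point estimate; it quantifies how much less conservative the bound could have been.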
Ershadi, Ali
2013-05-01
The influence of uncertainty in land surface temperature, air temperature, and wind speed on the estimation of sensible heat flux is analyzed using a Bayesian inference technique applied to the Surface Energy Balance System (SEBS) model. The Bayesian approach allows for an explicit quantification of the uncertainties in input variables: a source of error generally ignored in surface heat flux estimation. An application using field measurements from the Soil Moisture Experiment 2002 is presented. The spatial variability of selected input meteorological variables in a multitower site is used to formulate the prior estimates for the sampling uncertainties, and the likelihood function is formulated assuming Gaussian errors in the SEBS model. Land surface temperature, air temperature, and wind speed were estimated by sampling their posterior distribution using a Markov chain Monte Carlo algorithm. Results verify that Bayesian-inferred air temperature and wind speed were generally consistent with those observed at the towers, suggesting that local observations of these variables were spatially representative. Uncertainties in the land surface temperature appear to have the strongest effect on the estimated sensible heat flux, with Bayesian-inferred values differing by up to ±5°C from the observed data. These differences suggest that the footprint of the in situ measured land surface temperature is not representative of the larger-scale variability. As such, these measurements should be used with caution in the calculation of surface heat fluxes and highlight the importance of capturing the spatial variability in the land surface temperature: particularly, for remote sensing retrieval algorithms that use this variable for flux estimation.
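The Markov chain Monte Carlo step can be sketched with a deliberately simplified stand-in for SEBS: a sensible heat flux proportional to the surface-air temperature difference, a Gaussian likelihood for one flux observation, and a spatial-variability prior on the uncertain land surface temperature. All names, the flux formula, and the parameter values are illustrative assumptions, not the SEBS formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def flux(ts, ta=290.0, c=20.0):
    # toy stand-in for SEBS: sensible heat flux ~ c * (Ts - Ta), W/m2
    return c * (ts - ta)

ts_true = 300.0
obs = flux(ts_true) + rng.normal(0.0, 10.0)        # one noisy flux observation
sigma_obs = 10.0                                   # Gaussian model-error std
prior_mu, prior_sd = 298.0, 5.0                    # spatial variability as prior

def log_post(ts):
    # Gaussian likelihood (as in the SEBS formulation) times Gaussian prior
    return (-0.5 * ((obs - flux(ts)) / sigma_obs) ** 2
            - 0.5 * ((ts - prior_mu) / prior_sd) ** 2)

# random-walk Metropolis over the uncertain land surface temperature
samples, ts = [], prior_mu
for _ in range(20_000):
    prop = ts + rng.normal(0.0, 0.5)
    if np.log(rng.uniform()) < log_post(prop) - log_post(ts):
        ts = prop
    samples.append(ts)
post = np.array(samples[5_000:])                   # discard burn-in
```

The posterior mean and spread of `post` play the role of the Bayesian-inferred input variable; differences between this posterior and the station observation are what the abstract interprets as footprint-representativeness error.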
Quantifying soil carbon loss and uncertainty from a peatland wildfire using multi-temporal LiDAR
Reddy, Ashwan D.; Hawbaker, Todd J.; Wurster, F.; Zhu, Zhiliang; Ward, S.; Newcomb, Doug; Murray, R.
2015-01-01
Peatlands are a major reservoir of global soil carbon, yet account for just 3% of global land cover. Human impacts like draining can hinder the ability of peatlands to sequester carbon and expose their soils to fire under dry conditions. Estimating soil carbon loss from peat fires can be challenging due to uncertainty about pre-fire surface elevations. This study uses multi-temporal LiDAR to obtain pre- and post-fire elevations and estimate soil carbon loss caused by the 2011 Lateral West fire in the Great Dismal Swamp National Wildlife Refuge, VA, USA. We also determine how LiDAR elevation error affects uncertainty in our carbon loss estimate by randomly perturbing the LiDAR point elevations and recalculating elevation change and carbon loss, iterating this process 1000 times. We calculated a total loss using LiDAR of 1.10 Tg C across the 25 km2 burned area. The fire burned an average of 47 cm deep, equivalent to 44 kg C/m2, a value larger than the 1997 Indonesian peat fires (29 kg C/m2). Carbon loss via the First-Order Fire Effects Model (FOFEM) was estimated to be 0.06 Tg C. Propagating the LiDAR elevation error to the carbon loss estimates, we calculated a standard deviation of 0.00009 Tg C, equivalent to 0.008% of total carbon loss. We conclude that LiDAR elevation error is not a significant contributor to uncertainty in soil carbon loss under severe fire conditions with substantial peat consumption. However, uncertainties may be more substantial when soil elevation loss is of a similar or smaller magnitude than the reported LiDAR error.
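The perturbation experiment generalizes readily: add random vertical error to both LiDAR surfaces, recompute elevation change and carbon loss, and repeat many times. The grid size, error magnitude, and conversion factor below are illustrative assumptions (the conversion is back-calculated from the abstract's 44 kg C/m2 at 47 cm of burn).

```python
import numpy as np

rng = np.random.default_rng(7)
n_cells = 10_000                          # hypothetical 1 m x 1 m grid cells
pre = rng.normal(10.0, 0.2, n_cells)      # pre-fire surface elevations (m)
post = pre - 0.47                         # post-fire surface: 47 cm mean burn depth
c_per_m = 44.0 / 0.47                     # kg C per m2 per metre of peat consumed
sigma_z = 0.15                            # assumed per-point LiDAR vertical error (m)

# perturb both surfaces with LiDAR error and recompute mean carbon loss, 1000 times
losses = []
for _ in range(1000):
    d = ((pre + rng.normal(0.0, sigma_z, n_cells))
         - (post + rng.normal(0.0, sigma_z, n_cells)))
    losses.append(d.mean() * c_per_m)     # kg C/m2 averaged over the burn area
losses = np.array(losses)
rel_sd = losses.std() / losses.mean()     # tiny: per-point errors average out
```

Because the random per-point errors average out over thousands of cells, the relative standard deviation of the area-mean loss is a small fraction of a percent, consistent with the abstract's conclusion that LiDAR elevation error contributes little under deep burns.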
Paeth, Heiko; Vogt, Gernot; Paxian, Andreas; Hertig, Elke; Seubert, Stefanie; Jacobeit, Jucundus
2017-04-01
Climate change projections are subject to uncertainty arising from climate model deficiencies, unknown initial conditions and scenario assumptions. In the IPCC reports and many other publications, climate changes and uncertainty ranges are usually displayed in terms of multi-model ensemble means and confidence intervals, respectively. In this study, we present a more quantitative assessment and statistical testing of climate change signals in the light of uncertainty. The approach is based on a two-way analysis of variance, referring to 24 climate models from the CMIP3 multi-model ensemble, and extends over the 21st century. The method also distinguishes between different climate variables, time scales and emission scenarios and is combined with a simple bias correction algorithm. The Mediterranean region has been chosen as a case study because it represents an assumed hot spot of future climate change, where temperature is projected to rise substantially and precipitation may decrease dramatically by the end of the 21st century. It is found that future temperature variations are mainly determined by radiative forcing, accounting for up to 60% of total variability, especially in the western Mediterranean Basin. In contrast, future precipitation variability is almost completely attributable to model uncertainty and model internal variability, both being important in more or less equal shares. This general finding depends slightly on the prescribed emission scenario and is strictly sensitive to the considered time scale. In contrast to precipitation, the temperature signal can be enhanced noticeably when bias-correcting the models' climatology during the 20th century: the greenhouse signal then accounts for up to 75% of total temperature variability in the regional mean.
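The two-way analysis of variance amounts to an orthogonal decomposition of the ensemble's total sum of squares into model, scenario, and residual terms. The toy ensemble below (24 models by 3 scenarios, with made-up effect sizes) illustrates the mechanics only; the numbers are not the study's.

```python
import numpy as np

rng = np.random.default_rng(0)
n_models, n_scen = 24, 3
# toy end-of-century warming signals (K): strong scenario (forcing) effect,
# a modest model effect, and small internal variability
scen_effect = np.array([1.0, 2.0, 3.5])
model_effect = rng.normal(0.0, 0.4, n_models)
dT = (2.0 + model_effect[:, None] + scen_effect[None, :]
      + rng.normal(0.0, 0.2, (n_models, n_scen)))

grand = dT.mean()
ss_model = n_scen * ((dT.mean(axis=1) - grand) ** 2).sum()   # model main effect
ss_scen = n_models * ((dT.mean(axis=0) - grand) ** 2).sum()  # scenario main effect
ss_tot = ((dT - grand) ** 2).sum()
frac_scen = ss_scen / ss_tot      # share of variance from radiative forcing
frac_model = ss_model / ss_tot    # share of variance from model structure
```

With the assumed effect sizes, the scenario (forcing) share dominates, which is the temperature-like regime the abstract describes; shrinking `scen_effect` and enlarging the noise reproduces the precipitation-like regime where model uncertainty and internal variability take over.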
Jackson, C. S.; Tobis, M.
2011-12-01
It is an untested assumption in climate model evaluation that climate model biases affect credibility. Models with smaller biases are often regarded as more plausible than models with larger biases. However, not all biases affect predictions: only those biases that are involved in feedback mechanisms can lead to scatter in predictions of change. To date, no metric of model skill has been defined that can predict a model's sensitivity to greenhouse gas forcing. Being able to do so will be an important step toward using observations to define a model's credibility. We shall present results of a calculation in which we attempt to isolate the contribution of errors in particular regions and fields to uncertainties in CAM3.1 equilibrium sensitivity to a doubling of CO2 forcing. In this calculation, observations, Bayesian inference, and stochastic sampling are used to identify a large ensemble of CAM3.1 configurations that represent uncertainties in selecting 15 model parameters important to clouds, convection, and radiation. A slab ocean configuration of CAM3.1 is then used to estimate the effects of these parametric uncertainties on projections of global warming through its equilibrium response to 2 x CO2 forcing. We then correlate the scatter in the control climate at each grid point and field with the scatter in climate sensitivities. The presentation will focus on the analysis of these results.
Directory of Open Access Journals (Sweden)
Richard M. Palin
2016-07-01
Pseudosection modelling is rapidly becoming an essential part of a petrologist's toolkit and often forms the basis of interpreting the tectonothermal evolution of a rock sample, outcrop, or geological region. Of the several factors that can affect the accuracy and precision of such calculated phase diagrams, "geological" uncertainty related to natural petrographic variation at the hand sample- and/or thin section-scale is rarely considered. Such uncertainty influences the sample's bulk composition, which is the primary control on its equilibrium phase relationships and thus the interpreted pressure–temperature (P–T) conditions of formation. Two case study examples—a garnet–cordierite granofels and a garnet–staurolite–kyanite schist—are used to compare the relative importance that geological uncertainty has on bulk compositions determined via (1) X-ray fluorescence (XRF) or (2) point counting techniques. We show that only minor mineralogical variation at the thin-section scale propagates through the phase equilibria modelling procedure and affects the absolute P–T conditions at which key assemblages are stable. Absolute displacements of equilibria can approach ±1 kbar for only a moderate degree of modal proportion uncertainty, thus being essentially similar to the magnitudes reported for analytical uncertainties in conventional thermobarometry. Bulk compositions determined from multiple thin sections of a heterogeneous garnet–staurolite–kyanite schist show a wide range in major-element oxides, owing to notable variation in mineral proportions. Pseudosections constructed for individual point count-derived bulks accurately reproduce this variability on a case-by-case basis, though averaged proportions do not correlate with those calculated at equivalent peak P–T conditions for a whole-rock XRF-derived bulk composition. The main discrepancies relate to varying proportions of matrix phases (primarily mica relative to
Directory of Open Access Journals (Sweden)
I.-W. Jung
2010-08-01
How will the combined impacts of land use change and climate change influence changes in urban flood frequency, and what is the main uncertainty source in the results? We attempt to answer these questions in two catchments with different degrees of urbanization, the Fanno catchment with 84% urban land use and the Johnson catchment with 36% urban land use, both located in the Pacific Northwest of the US. Five uncertainty sources – general circulation model (GCM) structures, future greenhouse gas (GHG) emission scenarios, land use change scenarios, natural variability, and hydrologic model parameters – are considered to compare the relative sources of uncertainty in flood frequency projections. Two land use change scenarios, conservation and development, representing possible future land use changes, are used for analysis. Results show the highest increase in flood frequency under the combination of the medium-high GHG emission (A1B) and development scenarios, and the lowest increase under the combination of the low GHG emission (B1) and conservation scenarios. Although the combined impact is more significant for flood frequency change than the individual scenarios, it does not linearly increase flood frequency. Changes in flood frequency are more sensitive to climate change than to land use change in the two catchments for the 2050s (2040–2069). Shorter-term flood frequency change, for 2- and 5-year floods, is highly affected by GCM structure, while longer-term flood frequency change, above 25-year floods, is dominated by natural variability. Projected flood frequency changes more significantly in Johnson creek than in Fanno creek. This result indicates that, under expected climate change conditions, adaptive urban planning based on the conservation scenario could be more effective in the less developed Johnson catchment than in the already developed Fanno catchment.
Carling, Paul; Kleinhans, Maarten; Leyland, Julian; Besozzi, Louison; Duranton, Pierre; Trieu, Hai; Teske, Roy
2014-05-01
Understanding of the flow resistance of forested floodplains is essential for floodplain flow routing and floodplain reforestation projects. Although the flow resistance of grass-lined channels is well known, flow retention due to flow-blocking by trees is poorly understood. Flow behaviour through tree-filled channels or over forested floodplain surfaces has largely been addressed using laboratory studies of artificial surfaces and vegetation. Herein we take advantage of a broad, shallow earthen experimental outdoor channel with headwater and tailwater controls. The channel was disused and left undisturbed for more than 20 years. During this time period, small deciduous trees and a soil cover of grass, herbs and leaf-litter established naturally. We measured flow resistance and fluid retention in fifteen controlled water discharge experiments for the following conditions: (a) natural cover of herbs and trees; (b) trees only; and (c) earthen channel only. In the b-experiments the herbaceous groundcover was first removed carefully, and in the c-experiments the trees were first cut flush with the earthen channel floor. Rhodamine-B dye was used to tag the flow, and the resultant fluorescence of water samples was systematically assayed through time at two stations along the length of the channel. Dilution-curve data were analysed within the Aggregated Dead Zone (ADZ) framework to yield bulk flow parameters including dispersion, fluid retention and flow resistance parameters, after the procedure of Richardson & Carling (2006). The primary response of the bulk flow to vegetation removal was an increase in bulk velocity, with depth and wetted width decreasing imperceptibly at the resolution of measurement. An overall reduction in flow resistance and retention occurred as discharge increased in all experiments. Retentiveness was more prominent during low flow and, for all three experimental conditions, tended to converge on a constant low value for high
Energy Technology Data Exchange (ETDEWEB)
Anderson, D.R.; Trauth, K.M. (Sandia National Labs., Albuquerque, NM (United States)); Hora, S.C. (Hawaii Univ., Hilo, HI (United States))
1991-01-01
Iterative, annual performance-assessment calculations are being performed for the Waste Isolation Pilot Plant (WIPP), a planned underground repository in southeastern New Mexico, USA for the disposal of transuranic waste. The performance-assessment calculations estimate the long-term radionuclide releases from the disposal system to the accessible environment. Because direct experimental data in some areas are presently of insufficient quantity to form the basis for the required distributions, expert judgment was used to estimate the concentrations of specific radionuclides in a brine exiting a repository room or drift as it migrates up an intruding borehole, and also the distribution coefficients that describe the retardation of radionuclides in the overlying Culebra Dolomite. The variables representing these concentrations and coefficients have been shown by 1990 sensitivity analyses to be among the set of parameters making the greatest contribution to the uncertainty in WIPP performance-assessment predictions. Utilizing available information, the experts (one expert panel addressed concentrations and a second panel addressed retardation) developed an understanding of the problem and were formally elicited to obtain probability distributions that characterize the uncertainty in fixed, but unknown, quantities. The probability distributions developed by the experts are being incorporated into the 1991 performance-assessment calculations. 16 refs., 4 tabs.
Energy Technology Data Exchange (ETDEWEB)
Nenes, Athanasios
2017-06-23
The goal of this project was to assess the climatic importance and sensitivity of the aerosol indirect effect (AIE) to cloud and aerosol processes and feedbacks, which include organic aerosol hygroscopicity, cloud condensation nuclei (CCN) activation kinetics, giant CCN, cloud-scale entrainment, ice nucleation in mixed-phase and cirrus clouds, and treatment of subgrid variability of vertical velocity. A key objective was to link aerosol, cloud microphysics and dynamics feedbacks in CAM5 with a suite of internally consistent and integrated parameterizations that provide the appropriate degrees of freedom to capture the various aspects of the aerosol indirect effect. The project integrated new parameterization elements into the cloud microphysics, moist turbulence and aerosol modules used by the NCAR Community Atmosphere Model version 5 (CAM5). The CAM5 model was then used to systematically quantify the uncertainties of aerosol indirect effects through a series of sensitivity tests with present-day and preindustrial aerosol emissions. New parameterization elements were developed as a result of these efforts, and new diagnostic tools and methodologies were also developed to quantify the impacts of aerosols on clouds and climate within fully coupled models. Observations were used to constrain key uncertainties in the aerosol-cloud links. Advanced sensitivity tools were developed and implemented to probe the drivers of cloud microphysical variability with unprecedented temporal and spatial scales. All these results have been published in top, high-impact journals (or are in the final stages of publication). This project has also supported a number of outstanding graduate students.
A case study to quantify prediction bounds caused by model-form uncertainty of a portal frame
Van Buren, Kendra L.; Hall, Thomas M.; Gonzales, Lindsey M.; Hemez, François M.; Anton, Steven R.
2015-01-01
Numerical simulations, irrespective of the discipline or application, are often plagued by arbitrary numerical and modeling choices. Arbitrary choices can originate from kinematic assumptions, for example the use of 1D beam, 2D shell, or 3D continuum elements, mesh discretization choices, boundary condition models, and the representation of contact and friction in the simulation. This work takes a step toward understanding the effect of arbitrary choices and model-form assumptions on the accuracy of numerical predictions. The application is the simulation of the first four resonant frequencies of a one-story aluminum portal frame structure under free-free boundary conditions. The main challenge of the portal frame structure resides in modeling the joint connections, for which different modeling assumptions are available. To study this model-form uncertainty, and compare it to other types of uncertainty, two finite element models are developed using solid elements, and with differing representations of the beam-to-column and column-to-base plate connections: (i) contact stiffness coefficients or (ii) tied nodes. Test-analysis correlation is performed to compare the lower and upper bounds of numerical predictions obtained from parametric studies of the joint modeling strategies to the range of experimentally obtained natural frequencies. The approach proposed is, first, to characterize the experimental variability of the joints by varying the bolt torque, method of bolt tightening, and the sequence in which the bolts are tightened. The second step is to convert what is learned from these experimental studies to models that "envelope" the range of observed bolt behavior. We show that this approach, that combines small-scale experiments, sensitivity analysis studies, and bounding-case models, successfully produces lower and upper bounds of resonant frequency predictions that match those measured experimentally on the frame structure. (Approved for unlimited, public
Directory of Open Access Journals (Sweden)
A. Amengual
2008-08-01
In the framework of AMPHORE, an INTERREG III B EU project devoted to the hydrometeorological modeling study of heavy precipitation episodes resulting in flood events and to the improvement of operational hydrometeorological forecasts for the prediction and prevention of flood risks in the Western Mediterranean area, a hydrometeorological model intercomparison has been carried out in order to estimate the uncertainties associated with the discharge predictions. The analysis is performed for an intense precipitation event selected as a case study within the project, which affected northern Italy and caused a flood event in the upper Reno river basin, a medium-size catchment in the Emilia-Romagna Region.
Two different hydrological models have been implemented over the basin: HEC-HMS and TOPKAPI, which are driven in two ways. Firstly, stream-flow simulations obtained by using precipitation observations as input data are evaluated, in order to assess the performance of the two hydrological models. Secondly, the rainfall-runoff models have been forced with rainfall forecast fields provided by mesoscale atmospheric model simulations in order to evaluate the reliability of the discharge forecasts resulting from the one-way coupling. The quantitative precipitation forecasts (QPFs) are provided by the numerical mesoscale models COSMO and MM5.
Furthermore, different configurations of COSMO and MM5 have been adopted, trying to improve the description of the phenomena determining the precipitation amounts. In particular, the impacts of using different initial and boundary conditions, different mesoscale models and of increasing the horizontal model resolutions are investigated. The accuracy of QPFs is assessed in a threefold procedure. First, these are checked against the observed spatial rainfall accumulations over northern Italy. Second, the spatial and temporal simulated distributions are also examined over the catchment of interest
Cook, Bruce D.; Bolstad, Paul V.; Naesset, Erik; Anderson, Ryan S.; Garrigues, Sebastian; Morisette, Jeffrey T.; Nickeson, Jaime; Davis, Kenneth J.
2009-01-01
Spatiotemporal data from satellite remote sensing and surface meteorology networks have made it possible to continuously monitor global plant production, and to identify global trends associated with land cover/use and climate change. Gross primary production (GPP) and net primary production (NPP) are routinely derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) onboard satellites Terra and Aqua, and estimates generally agree with independent measurements at validation sites across the globe. However, the accuracy of GPP and NPP estimates in some regions may be limited by the quality of model input variables and heterogeneity at fine spatial scales. We developed new methods for deriving model inputs (i.e., land cover, leaf area, and photosynthetically active radiation absorbed by plant canopies) from airborne laser altimetry (LiDAR) and Quickbird multispectral data at resolutions ranging from about 30 m to 1 km. In addition, LiDAR-derived biomass was used as a means for computing carbon-use efficiency. Spatial variables were used with temporal data from ground-based monitoring stations to compute a six-year GPP and NPP time series for a 3600 ha study site in the Great Lakes region of North America. Model results compared favorably with independent observations from a 400 m flux tower and a process-based ecosystem model (BIOME-BGC), but only after removing vapor pressure deficit as a constraint on photosynthesis from the MODIS global algorithm. Fine resolution inputs captured more of the spatial variability, but estimates were similar to coarse-resolution data when integrated across the entire study site; vegetation structure, composition, and conversion efficiencies were similar to upland plant communities. Plant productivity estimates were noticeably improved using LiDAR-derived variables, while uncertainties associated with land cover generalizations and wetlands in this largely forested landscape were considered less important.
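The MOD17-style light-use-efficiency logic described above, including the removal of the vapor pressure deficit (VPD) constraint, can be sketched as follows. The ramp bounds and maximum efficiency are illustrative placeholders, not the operational MODIS parameters.

```python
def ramp(x, x_zero, x_one):
    """Linear scalar clamped to [0, 1]: 0 at x_zero, 1 at x_one (works in either direction)."""
    if x_one == x_zero:
        return 1.0
    return min(1.0, max(0.0, (x - x_zero) / (x_one - x_zero)))

def gpp_lue(sw_rad, fpar, tmin, vpd, eps_max=0.001, use_vpd=True):
    """Light-use-efficiency GPP, MOD17-style: eps_max * f(Tmin) * f(VPD) * fPAR * PAR.
    Removing the VPD constraint (use_vpd=False) fixes its scalar at 1."""
    par = 0.45 * sw_rad                                   # PAR as ~45% of shortwave
    f_tmin = ramp(tmin, -8.0, 12.0)                       # temperature scalar, assumed bounds (degC)
    f_vpd = ramp(vpd, 4000.0, 650.0) if use_vpd else 1.0  # VPD scalar (Pa); high VPD -> 0
    return eps_max * f_tmin * f_vpd * fpar * par
```

Under dry conditions the VPD scalar suppresses GPP, so dropping the constraint raises the estimate, which is the direction of the adjustment the authors report for their site.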
Directory of Open Access Journals (Sweden)
J. Timmermans
2013-04-01
Accurate estimation of global evapotranspiration is considered to be of great importance due to its key role in the terrestrial and atmospheric water budget. Global estimation of evapotranspiration on the basis of observational data can only be achieved by using remote sensing. Several algorithms have been developed that are capable of estimating the daily evapotranspiration from remote sensing data. Evaluation of remote sensing algorithms in general is problematic because of differences in spatial and temporal resolutions between remote sensing observations and field measurements. This problem can be solved in part by using soil-vegetation-atmosphere transfer (SVAT) models, because on the one hand these models provide evapotranspiration estimates also under cloudy conditions, and on the other hand they can scale between different temporal resolutions. In this paper, the Soil Canopy Observation, Photochemistry and Energy fluxes (SCOPE) model is used for the evaluation of the Surface Energy Balance System (SEBS) model. The calibrated SCOPE model was employed to simulate remote sensing observations and to act as a validation tool. The advantages of the SCOPE model in this validation are (a) the temporal continuity of the data, and (b) the possibility of comparing different components of the energy balance. The SCOPE model was run using data from a whole growth season of a maize crop. It is shown that the original SEBS algorithm produces large uncertainties in the turbulent flux estimations, caused by the parameterizations of the ground heat flux and sensible heat flux. In the original SEBS formulation the fractional vegetation cover is used to calculate the ground heat flux. As this variable saturates very fast for increasing leaf area index (LAI), the ground heat flux is underestimated. It is shown that a parameterization based on LAI reduces the estimation error over the season from RMSE = 25 W m−2 to RMSE = 18 W m−2. In the original SEBS formulation the
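The role of the saturating fractional-cover term in the ground heat flux can be illustrated with a hedged sketch of a SEBS-style parameterization. The fc(LAI) relation, the extinction coefficient, and the Γ limits below are assumed, literature-style values, not the exact SEBS or proposed formulations.

```python
import math

def ground_heat_flux_fc(rn, lai, gamma_soil=0.315, gamma_canopy=0.05):
    """SEBS-style ground heat flux: G = Rn * (Gc + (1 - fc) * (Gs - Gc)),
    where the fractional cover fc saturates quickly with LAI (assumed form)."""
    fc = 1.0 - math.exp(-0.5 * lai)   # illustrative fc(LAI); approaches 1 fast
    return rn * (gamma_canopy + (1.0 - fc) * (gamma_soil - gamma_canopy))

def ground_heat_flux_lai(rn, lai, gamma_soil=0.315):
    """LAI-based alternative: exponential extinction of the soil contribution
    (illustrative form, not the authors' exact parameterization)."""
    return rn * gamma_soil * math.exp(-0.35 * lai)
```

Plotting both over a growing season would show how quickly the fc-based weighting collapses toward the canopy limit once LAI exceeds a few units, which is the saturation behavior the abstract identifies as the error source.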
Tang, Hui; Weiss, Robert; Xiao, Heng
2016-01-01
Tsunami deposits are recordings of tsunami events that contain information about flow conditions. Deciphering quantitative information from tsunami deposits is especially important for analyzing paleo-tsunami events in which deposits comprise the only leftover physical evidence. The physical meaning of the deciphered quantities depends on the physical assumptions that are applied. The aim of our study is to estimate the characteristics of tsunamis and to quantify the errors and uncertainties inherent in them. To achieve this goal, we apply the TSUFLIND-EnKF inversion model to study the deposition of an idealized deposit created by a single tsunami wave and one real case from the 2004 Indian Ocean tsunami. The TSUFLIND-EnKF model combines TSUFLIND as the deposition module with the Ensemble Kalman Filtering (EnKF) method. In our modeling, we assume that grain-size distribution and thickness from the idealized deposits at different depths can be used as observational variables. Our tentative results indicate tha...
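The EnKF analysis step that TSUFLIND-EnKF builds on can be sketched generically (stochastic, perturbed-observation variant). This is a textbook update, not the TSUFLIND-EnKF implementation itself:

```python
import numpy as np

def enkf_update(ensemble, obs, obs_operator, obs_err_std, rng):
    """One EnKF analysis step with perturbed observations.
    ensemble: (n_state, n_members) array of prior state vectors.
    obs: 1-D array of observations; obs_operator maps the ensemble to obs space."""
    n_state, n_mem = ensemble.shape
    hx = obs_operator(ensemble)                      # (n_obs, n_members)
    X = ensemble - ensemble.mean(axis=1, keepdims=True)
    HX = hx - hx.mean(axis=1, keepdims=True)
    pxy = X @ HX.T / (n_mem - 1)                     # state-obs cross-covariance
    pyy = HX @ HX.T / (n_mem - 1) + np.eye(len(obs)) * obs_err_std**2
    K = pxy @ np.linalg.inv(pyy)                     # Kalman gain
    obs_pert = obs[:, None] + rng.normal(0.0, obs_err_std, (len(obs), n_mem))
    return ensemble + K @ (obs_pert - hx)
```

In the deposit-inversion setting, the state vector would hold flow parameters and the observation operator would be the forward deposition model evaluated per ensemble member.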
Directory of Open Access Journals (Sweden)
H. Xu
2011-01-01
Quantitative evaluations of the impacts of climate change on water resources are primarily constrained by uncertainty in climate projections from GCMs. In this study we assess uncertainty in the impacts of climate change on river discharge in two catchments of the Yangtze and Yellow River Basins that feature contrasting climate regimes (humid and semi-arid). Specifically, we quantify uncertainty associated with GCM structure from a subset of CMIP3 AR4 GCMs (HadCM3, HadGEM1, CCSM3.0, IPSL, ECHAM5, CSIRO, CGCM3.1), SRES emissions scenarios (A1B, A2, B1, B2) and prescribed increases in global mean air temperature (1 °C to 6 °C). Climate projections, applied to semi-distributed hydrological models (SWAT 2005) in both catchments, indicate trends toward warmer and wetter conditions. For prescribed warming scenarios of 1 °C to 6 °C, linear increases in mean annual river discharge, relative to baseline (1961–1990), for the River Xiangxi and River Huangfuchuan are +9% and 11% per +1 °C respectively. Intra-annual changes include increases in flood (Q05) discharges for both rivers as well as a shift in the timing of flood discharges from summer to autumn and a rise (24% to 93%) in dry season (Q95) discharge for the River Xiangxi. Differences in projections of mean annual river discharge between SRES emission scenarios using HadCM3 are comparatively minor for the River Xiangxi (13% to 17% rise from baseline) but substantial (73% to 121%) for the River Huangfuchuan. With one minor exception of a slight (−2%) decrease in river discharge projected using HadGEM1 for the River Xiangxi, mean annual river discharge is projected to increase in both catchments under both the SRES A1B emission scenario and a 2° rise in global mean air temperature using all AR4 GCMs in the CMIP3 subset. For the River Xiangxi, there is substantial uncertainty associated with GCM structure in the magnitude of the rise in flood (Q05) discharges (−1 to 41% under SRES A1B and −3 to 41% under 2
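The prescribed-warming results amount to a simple linear scaling of baseline discharge; the helper below, with illustrative numbers, just restates that arithmetic:

```python
def scaled_discharge(baseline, pct_per_degree, delta_t):
    """Mean annual discharge under prescribed warming, assuming the linear
    per-degree change reported for the prescribed-warming scenarios."""
    return baseline * (1.0 + pct_per_degree / 100.0 * delta_t)

# e.g. a +9% per +1 degC sensitivity applied to a 2 degC warming of a
# hypothetical 100 m3/s baseline discharge
q_future = scaled_discharge(100.0, 9.0, 2.0)
```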
Directory of Open Access Journals (Sweden)
H. Xu
2010-09-01
Quantitative evaluations of the impacts of climate change on water resources are primarily constrained by uncertainty in climate projections from GCMs. In this study we assess uncertainty in the impacts of climate change on river discharge in two catchments of the River Yangtze and Yellow Basins that feature contrasting climate regimes (humid and semi-arid). Specifically, we quantify uncertainty associated with GCM structure from a subset of CMIP3 AR4 GCMs (HadCM3, HadGEM1, CCSM3.0, IPSL, ECHAM5, CSIRO, CGCM3.1), SRES emissions scenarios (A1B, A2, B1, B2) and prescribed increases in global mean air temperature (1 °C to 6 °C). Climate projections, applied to semi-distributed hydrological models (SWAT 2005) in both catchments, indicate trends toward warmer and wetter conditions. For prescribed warming scenarios of 1 °C to 6 °C, linear increases in mean annual river discharge, relative to baseline (1961–1990), for the River Xiangxi and River Huangfuchuan are +9% and 11% per +1 °C, respectively. Intra-annual changes include increases in flood (Q05) discharges for both rivers as well as a shift in the timing of flood discharges from summer to autumn and a rise (24% to 93%) in dry season (Q95) discharge for the River Xiangxi. Differences in projections of mean annual river discharge between SRES emission scenarios using HadCM3 are comparatively minor for the River Xiangxi (13% to 17% rise from baseline) but substantial (73% to 121%) for the River Huangfuchuan. With one minor exception of a slight (−2%) decrease in river discharge projected using HadGEM1 for the River Xiangxi, mean annual river discharge is projected to increase in both catchments under both the SRES A1B emission scenario and a 2° rise in global mean air temperature using all AR4 GCMs in the CMIP3 subset. For the River Xiangxi, there is great uncertainty associated with GCM structure in the magnitude of the rise in flood (Q05) discharges (−1% to 41% under SRES A1B and −3% to 41% under 2
Brasington, J.; Hicks, M.; Wheaton, J. M.; Williams, R. D.; Vericat, D.
2013-12-01
Repeat surveys of channel morphology provide a means to quantify fluvial sediment storage and enable inferences about changes in long-term sediment supply, watershed delivery and bed level adjustment; information vital to support effective river and land management. Over shorter time-scales, direct differencing of fluvial terrain models may also offer a route to predict reach-averaged sediment transport rates and quantify the patterns of channel morphodynamics and the processes that force them. Recent and rapid advances in geomatics have facilitated these goals by enabling the acquisition of topographic data at spatial resolutions and precisions suitable for characterising river morphology at the scale of individual grains over multi-kilometre reaches. Despite improvements in topographic surveying, inverting the terms of the sediment budget to derive estimates of sediment transport and link these to morphodynamic processes is, nonetheless, often confounded by limited knowledge of either the sediment supply or efflux across a boundary of the control volume, or unobserved cut-and-fill taking place between surveys. This latter problem is particularly poorly constrained, as field logistics frequently preclude surveys at a temporal frequency sufficient to capture changes in sediment storage associated with each competent event, let alone changes during individual floods. In this paper, we attempt to quantify the principal sources of uncertainty in morphologically derived bedload transport rates for the large, labile, gravel-bed braided Rees River, which drains the Southern Alps of NZ. During the austral summer of 2009-10, a unique time series of 10 high quality DEMs was derived for a 3 x 0.7 km reach of the Rees, using a combination of mobile terrestrial laser scanning, aDcp soundings and aerial image analysis. Complementary measurements of the forcing flood discharges and estimates of event-based particle step lengths were also acquired during the field campaign.
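Morphological budgeting from repeat DEMs typically thresholds the DEM of Difference (DoD) by a propagated elevation error before integrating cut and fill. The sketch below is a generic formulation, not the authors' workflow, and the confidence multiplier is an assumed choice:

```python
import numpy as np

def dod_volume_change(dem_new, dem_old, cell_size, sigma_new, sigma_old, k=1.96):
    """DEM-of-Difference cut and fill volumes with a minimum level of detection.
    Cells whose elevation change is below the propagated error
    k * sqrt(s_new^2 + s_old^2) are treated as no-change before integration."""
    dod = dem_new - dem_old
    min_lod = k * np.sqrt(sigma_new**2 + sigma_old**2)
    dod = np.where(np.abs(dod) < min_lod, 0.0, dod)
    cell_area = cell_size**2
    fill = dod[dod > 0].sum() * cell_area    # deposition volume
    cut = -dod[dod < 0].sum() * cell_area    # erosion volume
    return cut, fill
```

The difference between cut and fill, divided by the inter-survey interval, then feeds the morphological estimate of bedload transport, subject to the compensation and boundary-flux uncertainties the paper discusses.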
Energy Technology Data Exchange (ETDEWEB)
Awunor, O., E-mail: onuora.awunor@stees.nhs.uk [The Medical Physics Department, The James Cook University Hospital, Marton Road, Middlesbrough TS4 3BW, England (United Kingdom); Berger, D. [Department of Radiotherapy, General Hospital of Vienna, Vienna A-1090 (Austria); Kirisits, C. [Department of Radiotherapy, Comprehensive Cancer Center, Medical University of Vienna, Vienna A-1090 (Austria)
2015-08-15
Purpose: The reconstruction of radiation source position in the treatment planning system is a key part of the applicator reconstruction process in high dose rate (HDR) brachytherapy treatment of cervical carcinomas. The steep dose gradients, of as much as 12%/mm, associated with typical cervix treatments emphasize the importance of accurate and precise determination of source positions. However, a variety of methodologies with a range in associated measurement uncertainties, of up to ±2.5 mm, are currently employed by various centers to do this. In addition, a recent pilot study by Awunor et al. [“Direct reconstruction and associated uncertainties of ¹⁹²Ir source dwell positions in ring applicators using gafchromic film in the treatment planning of HDR brachytherapy cervix patients,” Phys. Med. Biol. 58, 3207–3225 (2013)] reported source positional differences of up to 2.6 mm between ring sets of the same type and geometry. This suggests a need for a comprehensive study to assess and quantify systematic source position variations between commonly used ring applicators and HDR afterloaders across multiple centers. Methods: Eighty-six rings from 20 European brachytherapy centers were audited in the form of a postal audit with each center collecting the data independently. The data were collected by setting up the rings using a bespoke jig and irradiating gafchromic films at predetermined dwell positions using four afterloader types, MicroSelectron, Flexitron, GammaMed, and MultiSource, from three manufacturers, Nucletron, Varian, and Eckert & Ziegler BEBIG. Five different ring types in six sizes (Ø25–Ø35 mm) and two angles (45° and 60°) were used. Coordinates of irradiated positions relative to the ring center were determined and collated, and source position differences quantified by ring type, size, and angle. Results: The mean expanded measurement uncertainty (k = 2) along the direction of source travel was ±1.4 mm. The standard deviation
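The quoted expanded uncertainty follows the usual GUM-style root-sum-square with coverage factor k = 2. A minimal helper, with hypothetical component values rather than the audit's actual budget:

```python
import math

def expanded_uncertainty(components, k=2.0):
    """Expanded measurement uncertainty U = k * sqrt(sum of u_i^2) for
    independent standard-uncertainty components (coverage factor k=2 ~ 95%)."""
    return k * math.sqrt(sum(u * u for u in components))

# Hypothetical standard uncertainties (mm) for film digitisation, ring set-up,
# and dwell-position reproducibility; the values are illustrative only.
U = expanded_uncertainty([0.3, 0.5, 0.4])
```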
Quantifying Uncertainties in Ocean Predictions
2006-03-01
Energy Technology Data Exchange (ETDEWEB)
Park, Sungsu [Univ. Corporation for Atmospheric Research, Boulder, CO (United States)
2014-12-12
The main goal of this project is to systematically quantify the major uncertainties of aerosol indirect effects due to the treatment of moist turbulent processes that drive aerosol activation, cloud macrophysics and microphysics in response to anthropogenic aerosol perturbations using CAM5/CESM1. To achieve this goal, the P.I. hired a postdoctoral research scientist (Dr. Anna Fitch) who started her work on 1 November 2012. The first task that the postdoc and the P.I. undertook was to quantify the role of subgrid vertical velocity variance in the activation and nucleation of cloud liquid droplets and ice crystals and its impact on the aerosol indirect effect in CAM5. First, we analyzed various LES cases (from dry stable to cloud-topped PBL) to check whether the isotropic turbulence assumption used in CAM5 is really valid. It turned out that this assumption is not universally valid. Consequently, from the analysis of the LES, we derived an empirical formulation relaxing the isotropic turbulence assumption used for the CAM5 aerosol activation and ice nucleation, implemented the empirical formulation in CAM5/CESM1, tested it in the single-column and global simulation modes, and examined how it changed aerosol indirect effects in CAM5/CESM1. These results were reported in the poster session of the 18th Annual CESM workshop held in Breckenridge, CO, during 17-20 June 2013. While we derived an empirical formulation from the analysis of the first task's LES cases, its general applicability was questionable, because it was obtained from a limited number of LES simulations. The second task was to derive a more fundamental analytical formulation relating vertical velocity variance to TKE, starting from basic physical principles. This was a somewhat challenging subject, but if this could be done in a successful way, it could be directly
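Under the isotropic assumption mentioned above, the subgrid vertical-velocity standard deviation follows directly from TKE, since each velocity component then carries one third of the kinetic energy. The sketch below shows that relation and a trivially relaxed anisotropic variant; the anisotropy factor is a placeholder, not the project's empirical formulation:

```python
import math

def sigma_w_isotropic(tke):
    """Subgrid vertical-velocity standard deviation under the isotropic
    turbulence assumption: w'^2 = (2/3)*TKE, so sigma_w = sqrt(2*TKE/3)."""
    return math.sqrt(2.0 * tke / 3.0)

def sigma_w_anisotropic(tke, aniso=1.0):
    """Relaxed form with an assumed anisotropy factor scaling the vertical
    share of TKE; aniso=1 recovers the isotropic limit."""
    return math.sqrt(2.0 * tke * aniso / 3.0)
```

In an activation scheme, `sigma_w` sets the width of the subgrid updraft distribution sampled for droplet and ice nucleation, which is why relaxing the isotropy assumption feeds through to the aerosol indirect effect.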
Directory of Open Access Journals (Sweden)
Oscar Salviano Silva Filho
2010-03-01
A problem of aggregate production planning, with uncertainty about the fluctuation in demand, is formulated as a stochastic optimization model with a quadratic criterion and linear constraints. Difficulties in finding a global optimum solution to the problem lead to the proposal of an adaptive approach which is easy to implement computationally and which is based on the formulation of a deterministic equivalent problem, the solution of which is periodically revised through a classical procedure from the literature. An example, where the inventory balance system is subject to weak and strong variability in actual demand, is employed to analyze the behavior of the proposed approach. Finally, the results provided by the proposed approach are compared with another suboptimal approach, the main characteristic of which is not allowing periodic revisions.
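The certainty-equivalent idea, replacing random demand by its mean and re-planning as demand is observed, can be caricatured in a few lines. This naive inventory-tracking rule is purely illustrative and is not the paper's LQ-optimal controller:

```python
def plan_production(inventory, mean_demand, horizon, target_inventory=0.0):
    """Certainty-equivalent plan: substitute random demand by its mean and
    choose production to serve demand while steering inventory to a target."""
    plan = []
    inv = inventory
    for d in mean_demand[:horizon]:
        prod = max(0.0, d + (target_inventory - inv))  # demand plus inventory correction
        inv = inv + prod - d
        plan.append(prod)
    return plan
```

A rolling revision, as in the adaptive approach, would re-run `plan_production` each period with the realized inventory, applying only the first element of each plan.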
Directory of Open Access Journals (Sweden)
A. J. Gomez-Pelaez
2013-03-01
Atmospheric CO in situ measurements are carried out at the Izaña (Tenerife) global GAW (Global Atmosphere Watch Programme of the World Meteorological Organization, WMO) mountain station using a Reduction Gas Analyser (RGA). In situ measurements at Izaña are representative of the subtropical Northeast Atlantic free troposphere, especially during nighttime. We present the measurement system configuration, the response function, the calibration scheme, the data processing, the Izaña 2008–2011 CO nocturnal time series, and the mean diurnal cycle by month. We have developed a rigorous uncertainty analysis for the carbon monoxide measurements carried out at the Izaña station, which could be applied to other GAW stations. We determine the combined standard measurement uncertainty taking into consideration four contributing components: the uncertainty of the WMO standard gases interpolated over the range of measurement, the uncertainty that takes into account the agreement between the standard gases and the response function used, the uncertainty due to the repeatability of the injections, and the propagated uncertainty related to the temporal consistency of the response function parameters (which also takes into account the covariance between the parameters). The mean value of the combined standard uncertainty decreased significantly after March 2009, from 2.37 nmol mol−1 to 1.66 nmol mol−1, due to improvements in the measurement system. A fifth type of uncertainty, which we call representation uncertainty, is considered when some of the data necessary to compute the temporal mean are absent. Any computed mean also has a propagated uncertainty arising from the uncertainties of the data used to compute the mean. The law of propagation depends on the type of uncertainty component (random or systematic). In situ hourly means are compared with simultaneous and collocated NOAA flask samples. The uncertainty of the differences is computed and used to determine
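The statement that the law of propagation depends on the component type can be made concrete: random components of a temporal mean average down as 1/sqrt(n), while fully systematic (correlated) components do not. A generic sketch, not the station's actual budget:

```python
import math

def mean_uncertainty(random_u, systematic_u, n):
    """Propagated standard uncertainty of a temporal mean of n values.
    Random components are reduced by sqrt(n); systematic components,
    being fully correlated across the n values, are not."""
    random_part = math.sqrt(sum(u * u for u in random_u)) / math.sqrt(n)
    systematic_part = math.sqrt(sum(u * u for u in systematic_u))
    return math.sqrt(random_part**2 + systematic_part**2)
```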
Sussmann, Ralf; Reichert, Andreas; Rettinger, Markus
2016-09-01
Quantitative knowledge of water vapor radiative processes in the atmosphere throughout the terrestrial and solar infrared spectrum is still incomplete even though this is crucial input to the radiation codes forming the core of both remote sensing methods and climate simulations. Beside laboratory spectroscopy, ground-based remote sensing field studies in the context of so-called radiative closure experiments are a powerful approach because this is the only way to quantify water absorption under cold atmospheric conditions. For this purpose, we have set up at the Zugspitze (47.42° N, 10.98° E; 2964 m a.s.l.) a long-term radiative closure experiment designed to cover the infrared spectrum between 400 and 7800 cm-1 (1.28-25 µm). As a benefit for such experiments, the atmospheric states at the Zugspitze frequently comprise very low integrated water vapor (IWV; minimum = 0.1 mm, median = 2.3 mm) and very low aerosol optical depth (AOD = 0.0024-0.0032 at 7800 cm-1 at air mass 1). All instruments for radiance measurements and atmospheric-state measurements are described along with their measurement uncertainties. Based on all parameter uncertainties and the corresponding radiance Jacobians, a systematic residual radiance uncertainty budget has been set up to characterize the sensitivity of the radiative closure over the whole infrared spectral range. The dominant uncertainty contribution in the spectral windows used for far-infrared (FIR) continuum quantification is from IWV uncertainties, while T profile uncertainties dominate in the mid-infrared (MIR). Uncertainty contributions to near-infrared (NIR) radiance residuals are dominated by water vapor line parameters in the vicinity of the strong water vapor bands. The window regions in between these bands are dominated by solar Fourier transform infrared (FTIR) calibration uncertainties at low NIR wavenumbers, while uncertainties due to AOD become an increasing and dominant contribution towards higher NIR wavenumbers
Institute of Scientific and Technical Information of China (English)
Anonymous
2006-01-01
There are a number of sources of uncertainty in regional climate change scenarios. When statistical downscaling is used to obtain regional climate change scenarios, the uncertainty may originate from the uncertainties in the global climate models used, the skill of the statistical model, and the forcing scenarios applied to the global climate model. The uncertainty associated with global climate models can be evaluated by examining the differences in the predictors and in the downscaled climate change scenarios based on a set of different global climate models. When standardized global climate model simulations such as the second phase of the Coupled Model Intercomparison Project (CMIP2) are used, the difference in the downscaled variables mainly reflects differences in the climate models and the natural variability in the simulated climates. It is proposed that the spread of the estimates can be taken as a measure of the uncertainty associated with global climate models. The proposed method is applied to the estimation of global-climate-model-related uncertainty in regional precipitation change scenarios in Sweden. Results from statistical downscaling based on 17 global climate models show that there is an overall increase in annual precipitation all over Sweden although a considerable spread of the changes in the precipitation exists. The general increase can be attributed to the increased large-scale precipitation and the enhanced westerly wind. The estimated uncertainty is nearly independent of region. However, there is a seasonal dependence. The estimates for winter show the highest level of confidence, while the estimates for summer show the least.
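Taking the spread of the downscaled estimates as the uncertainty measure amounts to simple ensemble statistics; the numbers below are illustrative, not the study's Swedish precipitation results:

```python
import statistics

def ensemble_spread(changes):
    """Multi-model mean and spread (sample standard deviation) of downscaled
    climate-change estimates; the spread is taken as a measure of the
    GCM-related uncertainty."""
    return statistics.mean(changes), statistics.stdev(changes)

# Hypothetical downscaled winter precipitation changes (%) from several GCMs.
mean_change, spread = ensemble_spread([8.0, 12.0, 5.0, 15.0, 10.0])
```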
Hultman, N. E.
2002-12-01
A common complaint about environmental policy is that regulations inadequately reflect scientific uncertainty and scientific consensus. While the causes of this phenomenon are complex and hard to discern, we know that corporations are the primary implementers of environmental regulations; therefore, focusing on how policy relates scientific knowledge to corporate decisions can provide valuable insights. Within the context of the developing international market for greenhouse gas emissions, I examine how corporations would apply finance theory in their investment decisions for carbon abatement projects. Using remotely sensed ecosystem-scale carbon flux measurements, I show how to determine how much of the financial risk of carbon is diversifiable. I also discuss alternative, scientifically sound methods for hedging the non-diversifiable risks in carbon abatement projects. In providing a quantitative common language for scientific and corporate uncertainties, the concept of carbon financial risk provides an opportunity for expanding communication between these elements essential to successful climate policy.
Energy Technology Data Exchange (ETDEWEB)
Duffet, C.
2004-12-01
Reflection tomography allows the determination of a velocity model that fits the travel time data associated with reflections of seismic waves propagating in the subsurface. A least-squares formulation is used to compare the observed travel times and the travel times computed by the forward operator based on ray tracing. This non-linear optimization problem is solved classically by a Gauss-Newton method based on successive linearizations of the forward operator. The obtained solution is only one among many possible models. Indeed, the uncertainties on the observed travel times (resulting from an interpretative event picking on seismic records) and, more generally, the under-determination of the inverse problem lead to uncertainties on the solution. An a posteriori uncertainty analysis is then crucial to delimit the range of possible solutions that fit, with the expected accuracy, the data and the a priori information. A linearized a posteriori analysis is possible through an analysis of the a posteriori covariance matrix, the inverse of the Gauss-Newton approximation of the Hessian matrix. The computation of this matrix is generally expensive (the matrix is huge for 3D problems) and the physical interpretation of the results is difficult. We therefore propose a formalism which allows one to compute uncertainties on relevant geological quantities at reduced computational cost. Nevertheless, this approach is only valid in the vicinity of the solution model (linearized framework) and complex cases may require a non-linear approach. An experimental approach consists in solving the inverse problem under constraints to test different geological scenarios. (author)
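The linearized machinery described here, Gauss-Newton updates and the a posteriori covariance, can be sketched generically; the toy linear forward operator below stands in for a ray tracer and is purely illustrative:

```python
import numpy as np

def gauss_newton_step(model, forward, jacobian, data):
    """One Gauss-Newton update for a least-squares travel-time misfit:
    m_new = m + (J^T J)^(-1) J^T (d - f(m))."""
    J = jacobian(model)
    r = data - forward(model)
    return model + np.linalg.solve(J.T @ J, J.T @ r)

def posterior_covariance(jacobian_at_solution, data_sigma):
    """Linearized a posteriori covariance: C ~ sigma^2 * (J^T J)^(-1),
    valid only in the vicinity of the solution model."""
    J = jacobian_at_solution
    return data_sigma ** 2 * np.linalg.inv(J.T @ J)
```

Projecting `posterior_covariance` onto a few geological quantities of interest (layer depths, interval velocities) is the kind of reduction the proposed formalism performs without forming the full matrix.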
Soundharajan, B.; Adeloye, A. J.; Remesan, R.
2015-06-01
Climate change is predicted to affect water resources infrastructure due to its effect on rainfall, temperature and evapotranspiration. However, there are huge uncertainties on both the magnitude and direction of these effects. The Pong reservoir on the Beas River in northern India serves irrigation and hydropower needs. The hydrology of the catchment is highly influenced by Himalayan seasonal snow and glaciers, and Monsoon rainfall; the changing pattern of the latter and the predicted disappearance of the former will have profound effects on the performance of the reservoir. This study employed a Monte-Carlo simulation approach to characterise the uncertainties in the future storage requirements and performance of the reservoir. Using a calibrated rainfall-runoff (R-R) model, the baseline runoff scenario was first simulated. The R-R inputs (rainfall and temperature) were then perturbed using plausible delta-changes to produce simulated climate change runoff scenarios. Stochastic models of the runoff were developed and used to generate ensembles of both the current and climate-change perturbed future scenarios. The resulting runoff ensembles were used to simulate the behaviour of the reservoir and determine "populations" of reservoir storage capacity and performance characteristics. Comparing these parameters between the current and the perturbed provided the population of climate change effects which was then analysed to determine the uncertainties. The results show that contrary to the usual practice of using single records, there is wide variability in the assessed impacts. This variability or uncertainty will, no doubt, complicate the development of climate change adaptation measures; however, knowledge of its sheer magnitude as demonstrated in this study will help in the formulation of appropriate policy and technical interventions for sustaining and possibly enhancing water security for irrigation and other uses served by Pong reservoir.
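The Monte-Carlo characterisation of storage requirements can be caricatured with a delta-change perturbation, bootstrap resampling in place of a fitted stochastic runoff model, and the sequent-peak algorithm. All choices below are illustrative simplifications, not the paper's calibrated models:

```python
import random

def storage_capacity(inflows, demand):
    """Sequent-peak estimate of the storage needed to meet a constant demand:
    track the running deficit and return its maximum."""
    deficit, capacity = 0.0, 0.0
    for q in inflows:
        deficit = max(0.0, deficit + demand - q)
        capacity = max(capacity, deficit)
    return capacity

def capacity_population(base_inflows, delta_factor, n_runs, seed=0):
    """Population of storage capacities under a delta-change perturbation,
    using bootstrap resampling as a stand-in for a fitted stochastic model."""
    rng = random.Random(seed)
    perturbed = [q * delta_factor for q in base_inflows]
    demand = 0.8 * sum(perturbed) / len(perturbed)   # assumed draft ratio
    return [storage_capacity(rng.choices(perturbed, k=len(perturbed)), demand)
            for _ in range(n_runs)]
```

Comparing the capacity populations for the baseline and perturbed scenarios yields the distribution of climate-change effects whose variability the study emphasizes.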
Directory of Open Access Journals (Sweden)
J. E. Williams
2012-11-01
Full Text Available The emission of organic compounds from biogenic processes acts as an important source of trace gases in remote regions away from urban conurbations, and is likely to become more important in future decades due to the further mitigation of anthropogenic emissions that affect air quality and climate forcing. In this study we examine the contribution of biogenic volatile organic compounds (BVOCs) towards global tropospheric composition using the global 3-D chemistry transport model TM5 and the recently developed modified CB05 chemical mechanism. By comparing regional BVOC emission estimates we show that biogenic processes act as dominant sources for many regions and exhibit a large variability in the annually and seasonally integrated emission fluxes. By performing sensitivity studies we find that the contribution of BVOC species containing between 1 and 3 carbon atoms has an impact on the resident mixing ratios of tropospheric O3 and CO, accounting for ~3% and ~11% of the simulated global distribution, respectively. This is approximately a third of the cumulative effect introduced by isoprene and the monoterpenes. By examining an ensemble of 3-D global chemistry-transport simulations which adopt different global BVOC emission inventories we determine the associated uncertainty introduced towards simulating the composition of the troposphere for the year 2000. By comparing the model ensemble values against a composite of atmospheric measurements we show that the effects on tropospheric O3 are limited to the lower troposphere (with an uncertainty of between −2% and 10%), whereas that for tropospheric CO extends up to the upper troposphere (with an uncertainty of between 10% and 45%). Comparing the mixing ratios for low molecular weight alkenes in TM5 against surface measurements taken in Europe implies that the cumulative emission estimates are too low, regardless of the chosen BVOC inventory. This variability in the global
North, Matthew; Petropoulos, George
2014-05-01
Soil Vegetation Atmosphere Transfer (SVAT) models are becoming the preferred scientific tool to assess land surface energy fluxes due to their computational efficiency, accuracy and ability to provide results at fine temporal scales. An all-inclusive validation of these models is a fundamental step before they can be confidently used for any practical application or research purpose alike. SimSphere is an example of a SVAT model, simulating a large array of parameters characterising various land surface interactions over a 24 hour cycle at a 1-D vertical profile. Being able to appreciate the uncertainty of SimSphere predictions is of vital importance towards increasing confidence in the model's overall use and its ability to represent land surface interactions accurately. This is particularly important, given that its use either as a stand-alone tool or synergistically with Earth Observation (EO) data is currently expanding worldwide. In the present study, uncertainty in SimSphere's predictions is evaluated at seven European sites, representative of a range of ecosystem conditions and biome types, for which in-situ data from the CarboEurope IP operational network acquired during 2011 were available. Selected sites are characterised by varying topographical characteristics, which further allows developing a comprehensive understanding of how topography can affect the model's ability to reproduce the evaluated variables. Model simulations are compared to in-situ data collected on cloud-free days and on days with a high Energy Balance Ratio. We focused here specifically on evaluating SimSphere's capability in predicting selected variables of the energy balance, namely the Latent Heat (LE), Sensible Heat (H) and Net Radiation (Rn) fluxes. Uncertainty in the model predictions was evaluated on the basis of an extensive statistical analysis carried out by computing a series of relevant statistical measures. Results obtained confirmed the
Lougheed, Bryan; van der Lubbe, Jeroen; Davies, Gareth
2016-04-01
Accurate geochronologies are crucial for reconstructing the sensitivity of brackish and estuarine environments to rapidly changing past external impacts. A common geochronological method used for such studies is radiocarbon (14C) dating, but its application in brackish environments is severely limited by an inability to quantify spatiotemporal variations in 14C reservoir age, or R(t), due to dynamic interplay between river runoff and marine water. Additionally, old carbon effects and species-specific behavioural processes also influence 14C ages. Using the world's largest brackish water body (the estuarine Baltic Sea) as a test-bed, combined with a comprehensive approach that objectively excludes both old carbon and species-specific effects, we demonstrate that it is possible to use 87Sr/86Sr ratios to quantify R(t) in ubiquitous mollusc shell material, leading to an almost one-order-of-magnitude increase in Baltic Sea 14C geochronological precision over the current state-of-the-art. We propose that this novel proxy method can be developed for other brackish water bodies worldwide, thereby improving geochronological control in these climate sensitive, near-coastal environments.
Cook, B.D.; Bolstad, P.V.; Naesset, E.; Anderson, R. Scott; Garrigues, S.; Morisette, J.T.; Nickeson, J.; Davis, K.J.
2009-01-01
Spatiotemporal data from satellite remote sensing and surface meteorology networks have made it possible to continuously monitor global plant production, and to identify global trends associated with land cover/use and climate change. Gross primary production (GPP) and net primary production (NPP) are routinely derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) onboard satellites Terra and Aqua, and estimates generally agree with independent measurements at validation sites across the globe. However, the accuracy of GPP and NPP estimates in some regions may be limited by the quality of model input variables and heterogeneity at fine spatial scales. We developed new methods for deriving model inputs (i.e., land cover, leaf area, and photosynthetically active radiation absorbed by plant canopies) from airborne laser altimetry (LiDAR) and Quickbird multispectral data at resolutions ranging from about 30 m to 1 km. In addition, LiDAR-derived biomass was used as a means for computing carbon-use efficiency. Spatial variables were used with temporal data from ground-based monitoring stations to compute a six-year GPP and NPP time series for a 3600 ha study site in the Great Lakes region of North America. Model results compared favorably with independent observations from a 400 m flux tower and a process-based ecosystem model (BIOME-BGC), but only after removing vapor pressure deficit as a constraint on photosynthesis from the MODIS global algorithm. Fine-resolution inputs captured more of the spatial variability, but estimates were similar to coarse-resolution data when integrated across the entire landscape. Failure to account for wetlands had little impact on landscape-scale estimates, because vegetation structure, composition, and conversion efficiencies were similar to upland plant communities. Plant productivity estimates were noticeably improved using LiDAR-derived variables, while uncertainties associated with land cover generalizations and
Mammarella, Ivan; Peltola, Olli; Nordbo, Annika; Järvi, Leena; Rannik, Üllar
2016-10-01
We have carried out an inter-comparison between EddyUH and EddyPro®, two public software packages for post-field processing of eddy covariance data. Datasets including carbon dioxide, methane and water vapour fluxes measured over 2 months at a wetland in southern Finland and carbon dioxide and water vapour fluxes measured over 3 months at an urban site in Helsinki were processed and analysed. The purpose was to estimate the flux uncertainty due to the use of different software packages and to evaluate the most critical processing steps, determining the largest deviations in the calculated fluxes. Turbulent fluxes calculated with a reference combination of processing steps were in good agreement, the systematic difference between the two software packages being up to 2.0 and 6.7 % for half-hour and cumulative sum values, respectively. The raw data preparation and processing steps were consistent between the software packages, and most of the deviations in the estimated fluxes were due to the flux corrections. Among the different calculation procedures analysed, the spectral correction had the biggest impact for closed-path latent heat fluxes, reaching a nocturnal median value of 15 % at the wetland site. We found a median deviation of up to 43 % (with respect to the run with all corrections included) when the closed-path carbon dioxide flux was calculated without the dilution correction, while the methane fluxes were up to 10 % lower without both dilution and spectroscopic corrections. The Webb-Pearman-Leuning (WPL) and spectroscopic corrections were the most critical steps for open-path systems. However, we also found large spectral correction factors for the open-path methane fluxes, due to the sensor separation effect.
McCarty, J. L.; Krylov, A.; Prishchepov, A. V.; Banach, D. M.; Potapov, P.; Tyukavina, A.; Rukhovitch, D.; Koroleva, P.; Turubanova, S.; Romanenkov, V.
2015-12-01
Cropland and pasture burning are common agricultural management practices that negatively impact air quality at a local and regional scale, including contributing to short-lived climate pollutants (SLCPs). This research focuses on both cropland and pasture burning in European Russia, Lithuania, and Belarus. Burned area and fire detections were derived from 500 m and 1 km Moderate Resolution Imaging Spectroradiometer (MODIS), 30 m Landsat 7 Enhanced Thematic Mapper Plus (ETM+), and Landsat 8 Operational Land Imager (OLI) data. Carbon, particulate matter, volatile organic carbon (VOCs), and harmful air pollutants (HAPs) emissions were then calculated using MODIS and Landsat-based estimates of fire and land-cover and land-use. Agricultural burning in Belarus, Lithuania, and European Russia showed a strong and consistent seasonal geographic pattern from 2002 to 2012, with the majority of fire detections occurring in March - June and a smaller peak in July and August. Over this 11-year period, there was a decrease in both cropland and pasture burning throughout this region. For Smolensk Oblast, a Russian administrative region with comparable agro-environmental conditions to Belarus and Lithuania, a detailed analysis of Landsat-based burned area estimations for croplands and pastures and field data collected in summer 2014 showed that the agricultural burning area can be up to 10 times higher than the 1 km MODIS active fire estimates. In general, European Russia is the main source of agricultural burning emissions compared to Lithuania and Belarus. On average, all cropland burning in European Russia as detected by the MCD45A1 MODIS Burned Area Product emitted 17.66 Gg of PM10 while annual burning of pasture in Smolensk Oblast, Russia as detected by Landsat burn scars emitted 494.85 Gg of PM10, a 96% difference. This highlights that quantifying the contribution of pasture burning and burned area versus cropland burning in agricultural regions is important for accurately
Quantifying Uncertainty in Expert Judgment: Initial Results
2013-03-01
Traffic forecasts under uncertainty and capacity constraints
2009-01-01
Traffic forecasts provide essential input for the appraisal of transport investment projects. However, according to recent empirical evidence, long-term predictions are subject to high levels of uncertainty. This paper quantifies uncertainty in traffic forecasts for the tolled motorway network in Spain. Uncertainty is quantified in the form of a confidence interval for the traffic forecast that includes both model uncertainty and input uncertainty. We apply a stochastic simulation process bas...
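A confidence interval combining model uncertainty and input uncertainty, as the paper describes, can be approximated by stochastic simulation. The elasticity model and every number below are hypothetical, not taken from the Spanish motorway study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative elasticity model: traffic growth = beta * GDP growth + noise.
# Model uncertainty: beta has an estimation standard error; input uncertainty:
# future GDP growth is itself uncertain. All parameter values are hypothetical.
beta_hat, beta_se = 1.2, 0.15        # estimated elasticity and its std. error
gdp_mean, gdp_sd = 0.02, 0.01        # forecast annual GDP growth
resid_sd = 0.015                     # residual (model error) spread
base_traffic, horizon = 100.0, 20    # traffic index today, years ahead

n = 10_000
beta = rng.normal(beta_hat, beta_se, n)            # model uncertainty
gdp = rng.normal(gdp_mean, gdp_sd, (n, horizon))   # input uncertainty
eps = rng.normal(0.0, resid_sd, (n, horizon))      # residual uncertainty
growth = beta[:, None] * gdp + eps
traffic = base_traffic * np.prod(1.0 + growth, axis=1)  # simulated paths

lo, hi = np.percentile(traffic, [2.5, 97.5])
print(f"central forecast {np.median(traffic):.1f}, 95% CI [{lo:.1f}, {hi:.1f}]")
```

The percentile band widens with the horizon, reflecting the high uncertainty of long-term predictions the abstract points to.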
Error Analysis of CM Data Products Sources of Uncertainty
Energy Technology Data Exchange (ETDEWEB)
Hunt, Brian D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Eckert-Gallup, Aubrey Celia [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Cochran, Lainy Dromgoole [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Kraus, Terrence D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Allen, Mark B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Beal, Bill [National Security Technologies, Joint Base Andrews, MD (United States); Okada, Colin [National Security Technologies, LLC. (NSTec), Las Vegas, NV (United States); Simpson, Mathew [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2017-02-01
The goal of this project is to address the current inability to assess the overall error and uncertainty of data products developed and distributed by DOE’s Consequence Management (CM) Program. This is a widely recognized shortfall, the resolution of which would provide a great deal of value and defensibility to the analysis results, data products, and the decision making process that follows this work. A global approach to this problem is necessary because multiple sources of error and uncertainty contribute to the ultimate production of CM data products. Therefore, this project will require collaboration with subject matter experts across a wide range of FRMAC skill sets in order to quantify the types of uncertainty that each area of the CM process might contain and to understand how variations in these uncertainty sources contribute to the aggregated uncertainty present in CM data products. The ultimate goal of this project is to quantify the confidence level of CM products to ensure that appropriate public and worker protection decisions are supported by defensible analysis.
Adjoint-Based Uncertainty Quantification with MCNP
Energy Technology Data Exchange (ETDEWEB)
Seifried, Jeffrey E. [Univ. of California, Berkeley, CA (United States)
2011-09-01
This work serves to quantify the instantaneous uncertainties in neutron transport simulations arising from nuclear data and statistical counting uncertainties. Perturbation and adjoint theories are used to derive implicit sensitivity expressions. These expressions are transformed into forms that are convenient for construction with MCNP6, creating the ability to perform adjoint-based uncertainty quantification with MCNP6. These new tools are exercised on the depleted-uranium hybrid LIFE blanket, quantifying its sensitivities and uncertainties to important figures of merit. Overall, these uncertainty estimates are small (< 2%). Having quantified the sensitivities and uncertainties, physical understanding of the system is gained and some confidence in the simulation is acquired.
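Once sensitivity coefficients are available, propagating a nuclear-data covariance to a response uncertainty follows the first-order "sandwich rule". A minimal sketch with illustrative numbers (not the LIFE blanket values):

```python
import numpy as np

# First-order "sandwich rule": var(R) ~ s^T C s, where s holds the relative
# sensitivities dR/R / dp/p of a response R to nuclear-data parameters p,
# and C is the relative covariance matrix of those parameters.
s = np.array([0.8, -0.3, 0.1])          # relative sensitivity coefficients
C = np.array([[4.0e-4, 1.0e-4, 0.0],    # relative covariance of the data
              [1.0e-4, 9.0e-4, 0.0],
              [0.0,    0.0,    1.0e-4]])

rel_var = s @ C @ s                     # propagated relative variance
rel_unc = np.sqrt(rel_var)
print(f"relative uncertainty on the response: {100 * rel_unc:.2f} %")
```

With these illustrative inputs the propagated uncertainty comes out below 2 %, the same order as the estimates reported in the abstract.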
Development of an aggregation methodology for risk analysis in aerospace conceptual vehicle design
Chytka, Trina Marsh
2003-10-01
The growing complexity of technical systems has emphasized a need to gather as much information as possible regarding specific systems of interest in order to make robust, sound decisions about their design and deployment. Acquiring as much data as possible requires the use of empirical statistics, historical information and expert opinion. In much of the aerospace conceptual design environment, the lack of historical information and infeasibility of gathering empirical data relegates the data collection to expert opinion. The conceptual design of a space vehicle requires input from several disciplines (weights and sizing, operations, trajectory, etc.). In this multidisciplinary environment, the design variables are often not easily quantified and have a high degree of uncertainty associated with their values. Decision-makers must rely on expert assessments of the uncertainty associated with the design variables to evaluate the risk level of a conceptual design. Since multiple experts are often queried for their evaluation of uncertainty, a means to combine/aggregate multiple expert assessments must be developed. Providing decision-makers with a solitary assessment that captures the consensus of the multiple experts would greatly enhance the ability to evaluate risk associated with a conceptual design. The objective of this research has been to develop an aggregation methodology that efficiently combines the uncertainty assessments of multiple experts in multiple disciplines involved in aerospace conceptual design. Bayesian probability augmented by uncertainty modeling and expert calibration was employed in the methodology construction. Appropriate questionnaire techniques were used to acquire expert opinion; the responses served as input distributions to the aggregation algorithm. The derived techniques were applied as part of a larger expert-assessment elicitation and calibration study. Results of this research demonstrate that aggregation of
Bartley, David; Lidén, Göran
2008-08-01
The reporting of measurement uncertainty has recently undergone a major harmonization whereby characteristics of a measurement method obtained during its establishment and application are combined componentwise. For example, the sometimes-pesky systematic error is included. A bias component of uncertainty can often be easily established as the uncertainty in the bias. However, beyond simply arriving at a value for uncertainty, meaning can sometimes be given to this uncertainty in terms of prediction confidence for uncertainty-based intervals covering what is to be measured. To this end, a link between the concepts of accuracy and uncertainty is established through a simple yet accurate approximation to a random variable known as the non-central Student's t-distribution. "Without a measureless and perpetual uncertainty, the drama of human life would be destroyed." Winston Churchill.
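The componentwise combination of a bias term with random error, and the prediction-confidence reading of the result, can be illustrated by simulation. The 3 % bias and 5 % random error below are hypothetical, and the check uses a plain normal model rather than the paper's non-central t approximation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Combine a bias component with random error componentwise (root sum of
# squares), then check by simulation the coverage of the resulting +/- 2u
# interval. A method with 3 % bias and 5 % random error is assumed.
bias, s_random = 0.03, 0.05
u_combined = np.hypot(bias, s_random)   # componentwise combination

true_value = 1.0
n = 100_000
measured = true_value * (1 + bias) + rng.normal(0, s_random * true_value, n)
covered = np.abs(measured - true_value) <= 2 * u_combined * true_value
print(f"combined u = {u_combined:.4f}; empirical coverage {covered.mean():.3f}")
```

Including the bias in the combined uncertainty keeps the coverage of the 2u interval near the nominal 95 %, which is the accuracy-uncertainty link the abstract describes.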
Uncertainty Management and Sensitivity Analysis
DEFF Research Database (Denmark)
Georgiadis, Stylianos; Fantke, Peter
2017-01-01
Uncertainty is always there, and LCA is no exception. The presence of uncertainties of different types and from numerous sources in LCA results is a fact, but managing them allows us to quantify and improve the precision of a study and the robustness of its conclusions. LCA practice sometimes...
Enhanced Named Entity Extraction via Error-Driven Aggregation
Energy Technology Data Exchange (ETDEWEB)
Lemmond, T D; Perry, N C; Guensche, J W; Nitao, J J; Glaser, R E; Kidwell, P; Hanley, W G
2010-02-22
Despite recent advances in named entity extraction technologies, state-of-the-art extraction tools achieve insufficient accuracy rates for practical use in many operational settings. However, they are not generally prone to the same types of error, suggesting that substantial improvements may be achieved via appropriate combinations of existing tools, provided their behavior can be accurately characterized and quantified. In this paper, we present an inference methodology for the aggregation of named entity extraction technologies that is founded upon a black-box analysis of their respective error processes. This method has been shown to produce statistically significant improvements in extraction relative to standard performance metrics and to mitigate the weak performance of entity extractors operating under suboptimal conditions. Moreover, this approach provides a framework for quantifying uncertainty and has demonstrated the ability to reconstruct the truth when majority voting fails.
Betrie, Getnet D; Sadiq, Rehan; Morin, Kevin A; Tesfamariam, Solomon
2014-08-15
Acid rock drainage (ARD) is a major pollution problem globally that has adversely impacted the environment. Identification and quantification of uncertainties are integral parts of ARD assessment and risk mitigation; however, previous studies on predicting ARD drainage chemistry have not fully addressed issues of uncertainties. In this study, artificial neural networks (ANN) and support vector machine (SVM) are used for the prediction of ARD drainage chemistry and their predictive uncertainties are quantified using probability bounds analysis. Furthermore, the predictions of ANN and SVM are integrated using four aggregation methods to improve their individual predictions. The results of this study showed that ANN performed better than SVM in enveloping the observed concentrations. In addition, integrating the predictions of ANN and SVM using the aggregation methods improved the predictions of the individual techniques.
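The idea of aggregating two imperfect predictors can be sketched with synthetic stand-ins for the ANN and SVM models; the paper's four aggregation methods are represented here only by a simple mean, inverse-error weighting and a min/max envelope, all on made-up data:

```python
import numpy as np

rng = np.random.default_rng(7)

# Two imperfect predictors of the same quantity (synthetic stand-ins)
truth = np.linspace(1.0, 10.0, 200)
pred_a = truth + rng.normal(0.0, 0.8, truth.size)        # noisier, unbiased
pred_b = 0.9 * truth + rng.normal(0.0, 0.4, truth.size)  # biased, tighter

def rmse(p):
    return np.sqrt(np.mean((p - truth) ** 2))

# Aggregation methods: simple mean, inverse-variance weighting, envelope
agg_mean = 0.5 * (pred_a + pred_b)
w_a, w_b = 1 / rmse(pred_a) ** 2, 1 / rmse(pred_b) ** 2
agg_wgt = (w_a * pred_a + w_b * pred_b) / (w_a + w_b)
envelope = (np.minimum(pred_a, pred_b), np.maximum(pred_a, pred_b))

print({name: round(rmse(p), 3)
       for name, p in [("A", pred_a), ("B", pred_b),
                       ("mean", agg_mean), ("weighted", agg_wgt)]})
```

Because the two predictors' errors are only weakly related, even the plain mean tends to beat either model alone, which is the motivation for aggregating them.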
Lindley, Dennis V
2013-01-01
Praise for the First Edition: "...a reference for everyone who is interested in knowing and handling uncertainty." - Journal of Applied Statistics. The critically acclaimed First Edition of Understanding Uncertainty provided a study of uncertainty addressed to scholars in all fields, showing that uncertainty could be measured by probability, and that probability obeyed three basic rules that enabled uncertainty to be handled sensibly in everyday life. These ideas were extended to embrace the scientific method and to show how decisions, containing an uncertain element, could be rationally made.
Uncertainty in hydrological signatures
McMillan, Hilary; Westerberg, Ida
2015-04-01
Information that summarises the hydrological behaviour or flow regime of a catchment is essential for comparing responses of different catchments to understand catchment organisation and similarity, and for many other modelling and water-management applications. Such information types derived as an index value from observed data are known as hydrological signatures, and can include descriptors of high flows (e.g. mean annual flood), low flows (e.g. mean annual low flow, recession shape), the flow variability, flow duration curve, and runoff ratio. Because the hydrological signatures are calculated from observed data such as rainfall and flow records, they are affected by uncertainty in those data. Subjective choices in the method used to calculate the signatures create a further source of uncertainty. Uncertainties in the signatures may affect our ability to compare different locations, to detect changes, or to compare future water resource management scenarios. The aim of this study was to contribute to the hydrological community's awareness and knowledge of data uncertainty in hydrological signatures, including typical sources, magnitude and methods for its assessment. We proposed a generally applicable method to calculate these uncertainties based on Monte Carlo sampling and demonstrated it for a variety of commonly used signatures. The study was made for two data rich catchments, the 50 km2 Mahurangi catchment in New Zealand and the 135 km2 Brue catchment in the UK. For rainfall data the uncertainty sources included point measurement uncertainty, the number of gauges used in calculation of the catchment spatial average, and uncertainties relating to lack of quality control. For flow data the uncertainty sources included uncertainties in stage/discharge measurement and in the approximation of the true stage-discharge relation by a rating curve. The resulting uncertainties were compared across the different signatures and catchments, to quantify uncertainty
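The proposed Monte Carlo approach can be sketched for one signature, the runoff ratio, using synthetic rainfall and flow records and hypothetical multiplier uncertainties for the gauge-average and rating-curve errors:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic daily rainfall and flow (mm/day) standing in for observed records
days = 365
rain = rng.gamma(shape=0.6, scale=8.0, size=days)
flow = 0.45 * rain + rng.normal(0.0, 0.3, days).clip(min=0.0)

n = 5000
# Rainfall uncertainty: catchment-average multiplier (gauge density, point error)
rain_mult = rng.normal(1.0, 0.07, n)
# Flow uncertainty: rating-curve multiplier (stage-discharge approximation)
flow_mult = rng.normal(1.0, 0.10, n)

# Signature: runoff ratio = total flow / total rainfall, per Monte Carlo sample
runoff_ratio = (flow.sum() * flow_mult) / (rain.sum() * rain_mult)
lo, med, hi = np.percentile(runoff_ratio, [5, 50, 95])
print(f"runoff ratio {med:.3f} (90% uncertainty bounds {lo:.3f}-{hi:.3f})")
```

The same sampling scheme extends to other signatures (flow duration curve points, mean annual flood, and so on) by recomputing the signature for each perturbed realisation of the data.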
Uncertainty in Air Quality Modeling.
Fox, Douglas G.
1984-01-01
Under the direction of the AMS Steering Committee for the EPA Cooperative Agreement on Air Quality Modeling, a small group of scientists convened to consider the question of uncertainty in air quality modeling. Because the group was particularly concerned with the regulatory use of models, its discussion focused on modeling tall stack, point source emissions. The group agreed that air quality model results should be viewed as containing both reducible error and inherent uncertainty. Reducible error results from improper or inadequate meteorological and air quality data inputs, and from inadequacies in the models. Inherent uncertainty results from the basic stochastic nature of the turbulent atmospheric motions that are responsible for transport and diffusion of released materials. Modelers should acknowledge that all their predictions to date contain some associated uncertainty and strive also to quantify uncertainty. How can the uncertainty be quantified? There was no consensus from the group as to precisely how uncertainty should be calculated. One subgroup, which addressed statistical procedures, suggested that uncertainty information could be obtained from comparisons of observations and predictions. Following recommendations from a previous AMS workshop on performance evaluation (Fox, 1981), the subgroup suggested construction of probability distribution functions from the differences between observations and predictions. Further, they recommended that relatively new computer-intensive statistical procedures be considered to improve the quality of uncertainty estimates for the extreme value statistics of interest in regulatory applications. A second subgroup, which addressed the basic nature of uncertainty in a stochastic system, also recommended that uncertainty be quantified by consideration of the differences between observations and predictions. They suggested that the average of the difference squared was appropriate to isolate the inherent uncertainty that
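The subgroups' shared suggestion, building a probability distribution from the differences between observations and predictions, can be sketched with synthetic paired data (all values hypothetical):

```python
import numpy as np

rng = np.random.default_rng(5)

# Paired observations and model predictions of a pollutant concentration
# (synthetic values; in practice these come from a model evaluation study)
obs = rng.lognormal(mean=2.0, sigma=0.4, size=300)
pred = obs * rng.lognormal(mean=0.05, sigma=0.3, size=300)  # imperfect model

diff = obs - pred
# The empirical distribution of the differences characterises the total
# uncertainty; the mean squared difference mixes reducible error (bias)
# with the inherent scatter
bias = diff.mean()
msd = np.mean(diff ** 2)
quantiles = np.percentile(diff, [5, 25, 50, 75, 95])
print(f"bias {bias:.2f}, RMS difference {np.sqrt(msd):.2f}")
print("difference quantiles:", np.round(quantiles, 2))
```

Subtracting the squared bias from the mean squared difference is one simple way to separate a reducible component from the inherent scatter, in the spirit of the second subgroup's recommendation.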
Seehars, Sebastian; Amara, Adam; Refregier, Alexandre
2015-01-01
Quantifying the concordance between different cosmological experiments is important for testing the validity of theoretical models and systematics in the observations. In earlier work, we thus proposed the Surprise, a concordance measure derived from the relative entropy between posterior distributions. We revisit the properties of the Surprise and describe how it provides a general, versatile, and robust measure for the agreement between datasets. We also compare it to other measures of concordance that have been proposed for cosmology. As an application, we extend our earlier analysis and use the Surprise to quantify the agreement between WMAP 9, Planck 13 and Planck 15 constraints on the $\Lambda$CDM model. Using a principal component analysis in parameter space, we find that the large Surprise between WMAP 9 and Planck 13 (S = 17.6 bits, implying a deviation from consistency at 99.8% confidence) is due to a shift along a direction that is dominated by the amplitude of the power spectrum. The Surprise disa...
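For Gaussian posteriors the relative entropy underlying the Surprise has a closed form. A sketch with two hypothetical 2-parameter posteriors (not the actual WMAP/Planck constraints):

```python
import numpy as np

def gaussian_relative_entropy_bits(mu1, cov1, mu2, cov2):
    """D_KL(N(mu1, cov1) || N(mu2, cov2)) in bits, for multivariate normals."""
    k = len(mu1)
    inv2 = np.linalg.inv(cov2)
    dmu = mu2 - mu1
    nats = 0.5 * (np.trace(inv2 @ cov1) + dmu @ inv2 @ dmu - k
                  + np.log(np.linalg.det(cov2) / np.linalg.det(cov1)))
    return nats / np.log(2.0)

# Two hypothetical 2-parameter posteriors (e.g. an amplitude and a tilt);
# a shift along the first, amplitude-like direction dominates the entropy
mu_a, cov_a = np.array([0.80, 0.96]), np.diag([0.02, 0.01]) ** 2
mu_b, cov_b = np.array([0.86, 0.965]), np.diag([0.015, 0.008]) ** 2

bits = gaussian_relative_entropy_bits(mu_a, cov_a, mu_b, cov_b)
print(f"relative entropy: {bits:.2f} bits")
```

Note that relative entropy is asymmetric in its arguments; the Surprise itself is built from the relative entropy by comparing the observed value against its expectation, which this sketch does not include.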
Directory of Open Access Journals (Sweden)
Oscar S. Silva Filho
1995-04-01
Full Text Available This paper deals with the determination of an optimal decision policy for a production planning problem with inventory and production constraints. The planning horizon is finite, approximately 1 to 2 years at monthly periods, the problem data are fully aggregated, and the demand fluctuation over the periods of the horizon is random, with a probability distribution assumed to be Gaussian. The problem studied is thus one of stochastic planning with a probabilistic constraint on the inventory variable. It is shown that, through appropriate transformations, an equivalent deterministic formulation can be obtained, for which an open-loop solution (an approximate solution to the original problem) can be generated. It is also shown that the uncertainties related to future demand fluctuations are made explicit in the deterministic formulation through a constraint function on the minimum inventory level. This function is essentially concave and increasing, and depends on the variance of the inventory variable and on a probability measure fixed a priori by the user. To illustrate the theoretical developments, a simple example of a single-product production system is proposed and solved by deterministic dynamic programming. The open-loop solution (i.e. the approximate solution generated by the equivalent problem) is then compared with the true solution of the stochastic problem, obtained via a stochastic programming algorithm.
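The deterministic-equivalent transformation of the chance constraint can be sketched as follows. The demand standard deviation and service level are hypothetical, and the sqrt(t) growth assumes i.i.d. Gaussian demand per period:

```python
import math
from statistics import NormalDist

# Deterministic equivalent of the chance constraint P(I_t >= 0) >= p:
# the expected inventory must stay above z_p * sigma_I(t). With i.i.d.
# Gaussian demand, sigma_I(t) grows like sqrt(t), so the bound is a
# concave, increasing function of t, as the paper describes.
def min_inventory_bound(t, demand_sd, p):
    z_p = NormalDist().inv_cdf(p)        # quantile for service level p
    sigma_I = demand_sd * math.sqrt(t)   # accumulated demand uncertainty
    return z_p * sigma_I

bounds = [min_inventory_bound(t, demand_sd=20.0, p=0.95) for t in range(1, 13)]
print([round(b, 1) for b in bounds])
```

The resulting lower-bound curve is exactly the kind of minimum-inventory constraint function that replaces the probabilistic constraint in the equivalent deterministic problem.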
An uncertainty inventory demonstration - a primary step in uncertainty quantification
Energy Technology Data Exchange (ETDEWEB)
Langenbrunner, James R. [Los Alamos National Laboratory]; Booker, Jane M. [Los Alamos National Laboratory]; Hemez, Francois M. [Los Alamos National Laboratory]; Salazar, Issac F. [Los Alamos National Laboratory]; Ross, Timothy J. [UNM]
2009-01-01
Tools, methods, and theories for assessing and quantifying uncertainties vary by application. Uncertainty quantification tasks have unique desiderata and circumstances. To realistically assess uncertainty requires the engineer/scientist to specify mathematical models, the physical phenomena of interest, and the theory or framework for assessments. For example, Probabilistic Risk Assessment (PRA) specifically identifies uncertainties using probability theory, and therefore, PRAs lack formal procedures for quantifying uncertainties that are not probabilistic. The Phenomena Identification and Ranking Technique (PIRT) proceeds by ranking phenomena using scoring criteria that result in linguistic descriptors, such as importance ranked with words, 'High/Medium/Low.' The use of words allows PIRT to be flexible, but the analysis may then be difficult to combine with other uncertainty theories. We propose that a necessary step for the development of a procedure or protocol for uncertainty quantification (UQ) is the application of an Uncertainty Inventory. An Uncertainty Inventory should be considered and performed in the earliest stages of UQ.
Nathenson, Manuel
1978-01-01
In order to quantify the uncertainty of estimates of the geothermal resource base in identified hydrothermal convection systems, a methodology is presented for combining estimates with uncertainties for temperature, area, and thickness of a geothermal reservoir into an estimate of the stored energy with uncertainty. Probability density functions for temperature, area, and thickness are assumed to be triangular in form. In order to calculate the probability distribution function for the stored energy in a single system or in many systems, a computer program for aggregating the input distribution functions using the Monte-Carlo method has been developed. To calculate the probability distribution of stored energy in a single system, an analytical expression is also obtained that is useful for calibrating the Monte Carlo approximation. For the probability distributions of stored energy in a single and in many systems, the central limit approximation is shown to give results ranging from good to poor.
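The Monte Carlo aggregation of triangular input distributions can be sketched for a single system. The reservoir parameters and the volumetric heat capacity below are illustrative, not the paper's assessed values:

```python
import numpy as np

rng = np.random.default_rng(11)

# Stored energy of one hydrothermal system: E = rho_c * (T - T_ref) * A * h,
# with triangular distributions on temperature, area and thickness, and a
# fixed volumetric heat capacity rho_c for the rock-water mixture.
n = 200_000
T = rng.triangular(150.0, 200.0, 250.0, n)    # reservoir temperature, degC
A = rng.triangular(5.0, 10.0, 20.0, n)        # reservoir area, km^2
h = rng.triangular(1.0, 2.0, 3.0, n)          # reservoir thickness, km
rho_c = 2.7e15                                # J per km^3 per K (illustrative)
T_ref = 15.0                                  # reference (mean annual) temp

E = rho_c * (T - T_ref) * A * h               # stored energy, joules
lo, med, hi = np.percentile(E, [5, 50, 95])
print(f"stored energy ~{med:.2e} J (90% range {lo:.2e} to {hi:.2e} J)")
```

Summing independent samples of E across many systems gives the aggregated distribution, which, as the abstract notes, approaches a normal shape by the central limit theorem when the number of systems is large.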
Nelson, T.I.; Bolen, W.P.
2007-01-01
Construction aggregates, primarily stone, sand and gravel, are recovered from widespread naturally occurring mineral deposits and processed for use primarily in the construction industry. They are mined, crushed, sorted by size and sold loose or combined with portland cement or asphaltic cement to make concrete products to build roads, houses, buildings, and other structures. Much smaller quantities are used in agriculture, cement manufacture, chemical and metallurgical processes, glass production and many other products.
Liu, Baoding
2015-01-01
When no samples are available to estimate a probability distribution, we have to invite some domain experts to evaluate the belief degree that each event will happen. Perhaps some people think that the belief degree should be modeled by subjective probability or fuzzy set theory. However, it is usually inappropriate because both of them may lead to counterintuitive results in this case. In order to rationally deal with belief degrees, uncertainty theory was founded in 2007 and subsequently studied by many researchers. Nowadays, uncertainty theory has become a branch of axiomatic mathematics for modeling belief degrees. This is an introductory textbook on uncertainty theory, uncertain programming, uncertain statistics, uncertain risk analysis, uncertain reliability analysis, uncertain set, uncertain logic, uncertain inference, uncertain process, uncertain calculus, and uncertain differential equation. This textbook also shows applications of uncertainty theory to scheduling, logistics, networks, data mining, c...
Interpreting uncertainty terms.
Holtgraves, Thomas
2014-08-01
Uncertainty terms (e.g., some, possible, good, etc.) are words that do not have a fixed referent and hence are relatively ambiguous. A model is proposed that specifies how, from the hearer's perspective, recognition of facework as a potential motive for the use of an uncertainty term results in a calibration of the intended meaning of that term. Four experiments are reported that examine the impact of face threat, and the variables that affect it (e.g., power), on the manner in which a variety of uncertainty terms (probability terms, quantifiers, frequency terms, etc.) are interpreted. Overall, the results demonstrate that increased face threat in a situation will result in a more negative interpretation of an utterance containing an uncertainty term. That the interpretation of so many different types of uncertainty terms is affected in the same way suggests the operation of a fundamental principle of language use, one with important implications for the communication of risk, subjective experience, and so on.
Bayesian Uncertainty Analyses Via Deterministic Model
Krzysztofowicz, R.
2001-05-01
Rational decision-making requires that the total uncertainty about a variate of interest (a predictand) be quantified in terms of a probability distribution, conditional on all available information and knowledge. Suppose the state-of-knowledge is embodied in a deterministic model, which is imperfect and outputs only an estimate of the predictand. Fundamentals are presented of three Bayesian approaches to producing a probability distribution of the predictand via any deterministic model. The Bayesian Processor of Output (BPO) quantifies the total uncertainty in terms of a posterior distribution, conditional on model output. The Bayesian Processor of Ensemble (BPE) quantifies the total uncertainty in terms of a posterior distribution, conditional on an ensemble of model output. The Bayesian Forecasting System (BFS) decomposes the total uncertainty into input uncertainty and model uncertainty, which are characterized independently and then integrated into a predictive distribution.
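The BPO idea can be illustrated in the simplest linear-Gaussian case: a Gaussian prior on the predictand, a linear-Gaussian model for the deterministic output given the predictand, and a conjugate posterior. The numbers and the linear form are illustrative assumptions; the actual BPO/BPE/BFS formulations are more general.

```python
import numpy as np

mu0, s0 = 10.0, 4.0        # prior: W ~ N(mu0, s0^2)
a, b, s = 1.0, 0.9, 2.0    # likelihood: X | W=w ~ N(a + b*w, s^2)

def bpo_posterior(x):
    """Posterior mean/std of W given model output x (conjugate normal update)."""
    prec = 1.0 / s0**2 + b**2 / s**2
    var = 1.0 / prec
    mean = var * (mu0 / s0**2 + b * (x - a) / s**2)
    return mean, np.sqrt(var)

mean, std = bpo_posterior(x=12.0)
print(f"posterior: N({mean:.2f}, {std:.2f}^2)")  # sharper than the prior
```

The posterior standard deviation is always below the prior's, quantifying how much the (imperfect) deterministic output reduces total uncertainty.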
Impact of discharge data uncertainty on nutrient load uncertainty
Westerberg, Ida; Gustavsson, Hanna; Sonesten, Lars
2016-04-01
Uncertainty in the rating-curve model of the stage-discharge relationship leads to uncertainty in discharge time series. These uncertainties in turn affect many other analyses based on discharge data, such as nutrient load estimations. It is important to understand how large the impact of discharge data uncertainty is on such analyses, since they are often used as the basis for important environmental management decisions. In the Baltic Sea basin, nutrient load estimates from river mouths are a central information basis for managing and reducing eutrophication in the Baltic Sea. In this study we investigated rating curve uncertainty and its propagation to discharge data uncertainty and thereafter to uncertainty in the load of phosphorus and nitrogen for twelve Swedish river mouths. We estimated rating curve uncertainty using the Voting Point method, which accounts for random and epistemic errors in the stage-discharge relation and allows drawing multiple rating-curve realisations consistent with the total uncertainty. We sampled 40,000 rating curves, and for each sampled curve we calculated a discharge time series from 15-minute water level data for the period 2005-2014. Each discharge time series was then aggregated to daily scale and used to calculate the load of phosphorus and nitrogen from linearly interpolated monthly water samples, following the currently used methodology for load estimation. Finally, the yearly load estimates were calculated and we thus obtained distributions with 40,000 load realisations per year - one for each rating curve. We analysed how the rating curve uncertainty propagated to the discharge time series at different temporal resolutions, and its impact on the yearly load estimates. Two shorter periods of daily water quality sampling around the spring flood peak allowed a comparison of load uncertainty magnitudes resulting from discharge data with those resulting from the monthly water quality sampling.
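The propagation chain above (sampled rating curves, then discharge series, then annual loads) can be sketched with a toy power-law rating curve. This is not the Voting Point method and uses no Swedish data; the curve form Q = a(h - h0)^b and all numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

n_curves = 1000
a = rng.normal(5.0, 0.5, n_curves)          # sampled rating-curve parameters
b = rng.normal(1.8, 0.1, n_curves)
h0 = 0.2                                    # assumed cease-to-flow stage (m)

h = rng.uniform(0.5, 2.0, 365)              # one year of daily stage (m)
conc = rng.uniform(0.02, 0.08, 365)         # daily nutrient concentration (g/m^3)

# For each sampled rating curve: daily discharge -> annual load.
Q = a[:, None] * (h[None, :] - h0) ** b[:, None]   # m^3/s, shape (curves, days)
daily_load = Q * conc[None, :] * 86_400 * 1e-3     # kg/day (g/s * s/day / 1000)
annual_load = daily_load.sum(axis=1)               # kg/yr, one value per curve

lo, hi = np.percentile(annual_load, [5, 95])
print(f"annual load 90% interval: [{lo:.0f}, {hi:.0f}] kg")
```

Each sampled curve yields one load realisation, so the spread of `annual_load` is exactly the load uncertainty induced by the rating-curve uncertainty.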
Critical loads - assessment of uncertainty
Energy Technology Data Exchange (ETDEWEB)
Barkman, A.
1998-10-01
The effects of data uncertainty in applications of the critical loads concept were investigated on different spatial resolutions in Sweden and northern Czech Republic. Critical loads of acidity (CL) were calculated for Sweden using the biogeochemical model PROFILE. Three methods with different structural complexity were used to estimate the adverse effects of SO₂ concentrations in northern Czech Republic. Data uncertainties in the calculated critical loads/levels and exceedances (EX) were assessed using Monte Carlo simulations. Uncertainties within cumulative distribution functions (CDF) were aggregated by accounting for the overlap between site specific confidence intervals. Aggregation of data uncertainties within CDFs resulted in lower CL and higher EX best estimates in comparison with percentiles represented by individual sites. Data uncertainties were consequently found to advocate larger deposition reductions to achieve non-exceedance based on low critical loads estimates on 150 × 150 km resolution. Input data were found to impair the level of differentiation between geographical units at all investigated resolutions. Aggregation of data uncertainty within CDFs involved more constrained confidence intervals for a given percentile. Differentiation as well as identification of grid cells on 150 × 150 km resolution subjected to EX was generally improved. Calculation of the probability of EX was shown to preserve the possibility to differentiate between geographical units. Re-aggregation of the 95th-percentile EX on 50 × 50 km resolution generally increased the confidence interval for each percentile. Significant relationships were found between forest decline and the three methods addressing risks induced by SO₂ concentrations. Modifying SO₂ concentrations by accounting for the length of the vegetation period was found to constitute the most useful trade-off between structural complexity, data availability and effects of data uncertainty. Data
DEFF Research Database (Denmark)
Nguyen, Daniel Xuyen
This paper presents a model of trade that explains why firms wait to export and why many exporters fail. Firms face uncertain demands that are only realized after the firm enters the destination. The model retools the timing of uncertainty resolution found in productivity heterogeneity models...... in untested destinations. The option to forecast demands causes firms to delay exporting in order to gather more information about foreign demand. Third, since uncertainty is resolved after entry, many firms enter a destination and then exit after learning that they cannot profit. This prediction reconciles...
DEFF Research Database (Denmark)
Heydorn, Kaj; Anglov, Thomas
2002-01-01
Methods recommended by the International Standardization Organisation and Eurachem are not satisfactory for the correct estimation of calibration uncertainty. A novel approach is introduced and tested on actual calibration data for the determination of Pb by ICP-AES. The improved calibration unce...
Blokpoel, S.B.; Reymen, Isabelle; Dewulf, Geert P.M.R.; Sariyildiz, S.; Tuncer, B.
2005-01-01
Real estate development is all about assessing and controlling risks and uncertainties. Risk management implies making decisions based on quantified risks to execute risk-response measures. Uncertainties, on the other hand, cannot be quantified and are therefore unpredictable. In literature, much
Climate model uncertainty vs. conceptual geological uncertainty in hydrological modeling
Directory of Open Access Journals (Sweden)
T. O. Sonnenborg
2015-04-01
Projections of climate change impact are associated with a cascade of uncertainties including CO2 emission scenario, climate model, downscaling and impact model. The relative importance of the individual uncertainty sources is expected to depend on several factors, including the quantity that is projected. In the present study the impacts of climate model uncertainty and geological model uncertainty on hydraulic head, stream flow, travel time and capture zones are evaluated. Six versions of a physically based and distributed hydrological model, each containing a unique interpretation of the geological structure of the model area, are forced by 11 climate model projections. Each projection of future climate is a result of a GCM-RCM model combination (from the ENSEMBLES project) forced by the same CO2 scenario (A1B). The changes from the reference period (1991–2010) to the future period (2081–2100) in projected hydrological variables are evaluated and the effects of geological model and climate model uncertainties are quantified. The results show that uncertainty propagation is context dependent. While the geological conceptualization is the dominating uncertainty source for projection of travel time and capture zones, the uncertainty on the climate models is more important for groundwater hydraulic heads and stream flow.
Mapping Uncertainty Due to Missing Data in the Global Ocean Health Index.
Frazier, Melanie; Longo, Catherine; Halpern, Benjamin S
2016-01-01
Indicators are increasingly used to measure environmental systems; however, they are often criticized for failing to measure and describe uncertainty. Uncertainty is particularly difficult to evaluate and communicate in the case of composite indicators which aggregate many indicators of ecosystem condition. One of the ongoing goals of the Ocean Health Index (OHI) has been to improve our approach to dealing with missing data, which is a major source of uncertainty. Here we: (1) quantify the potential influence of gapfilled data on index scores from the 2015 global OHI assessment; (2) develop effective methods of tracking, quantifying, and communicating this information; and (3) provide general guidance for implementing gapfilling procedures for existing and emerging indicators, including regional OHI assessments. For the overall OHI global index score, the percent contribution of gapfilled data was relatively small (18.5%); however, it varied substantially among regions and goals. In general, smaller territorial jurisdictions and the food provision and tourism and recreation goals required the most gapfilling. We found the best approach for managing gapfilled data was to mirror the general framework used to organize, calculate, and communicate the Index data and scores. Quantifying gapfilling provides a measure of the reliability of the scores for different regions and components of an indicator. Importantly, this information highlights the importance of the underlying datasets used to calculate composite indicators and can inform and incentivize future data collection.
Uncertainty quantification theory, implementation, and applications
Smith, Ralph C
2014-01-01
The field of uncertainty quantification is evolving rapidly because of increasing emphasis on models that require quantified uncertainties for large-scale applications, novel algorithm development, and new computational architectures that facilitate implementation of these algorithms. Uncertainty Quantification: Theory, Implementation, and Applications provides readers with the basic concepts, theory, and algorithms necessary to quantify input and response uncertainties for simulation models arising in a broad range of disciplines. The book begins with a detailed discussion of applications where uncertainty quantification is critical for both scientific understanding and policy. It then covers concepts from probability and statistics, parameter selection techniques, frequentist and Bayesian model calibration, propagation of uncertainties, quantification of model discrepancy, surrogate model construction, and local and global sensitivity analysis. The author maintains a complementary web page where readers ca...
Uncertainty in tsunami sediment transport modeling
Jaffe, Bruce E.; Goto, Kazuhisa; Sugawara, Daisuke; Gelfenbaum, Guy R.; La Selle, SeanPaul M.
2016-01-01
Erosion and deposition from tsunamis record information about tsunami hydrodynamics and size that can be interpreted to improve tsunami hazard assessment. We explore sources and methods for quantifying uncertainty in tsunami sediment transport modeling. Uncertainty varies with tsunami, study site, available input data, sediment grain size, and model. Although uncertainty has the potential to be large, published case studies indicate that both forward and inverse tsunami sediment transport models perform well enough to be useful for deciphering tsunami characteristics, including size, from deposits. New techniques for quantifying uncertainty, such as Ensemble Kalman Filtering inversion, and more rigorous reporting of uncertainties will advance the science of tsunami sediment transport modeling. Uncertainty may be decreased with additional laboratory studies that increase our understanding of the semi-empirical parameters and physics of tsunami sediment transport, standardized benchmark tests to assess model performance, and development of hybrid modeling approaches to exploit the strengths of forward and inverse models.
Stereo-particle image velocimetry uncertainty quantification
Bhattacharya, Sayantan; Charonko, John J.; Vlachos, Pavlos P.
2017-01-01
Particle image velocimetry (PIV) measurements are subject to multiple elemental error sources and thus estimating overall measurement uncertainty is challenging. Recent advances have led to a posteriori uncertainty estimation methods for planar two-component PIV. However, no complete methodology exists for uncertainty quantification in stereo PIV. In the current work, a comprehensive framework is presented to quantify the uncertainty stemming from stereo registration error and combine it with the underlying planar velocity uncertainties. The disparity in particle locations of the dewarped images is used to estimate the positional uncertainty of the world coordinate system, which is then propagated to the uncertainty in the calibration mapping function coefficients. Next, the calibration uncertainty is combined with the planar uncertainty fields of the individual cameras through an uncertainty propagation equation and uncertainty estimates are obtained for all three velocity components. The methodology was tested with synthetic stereo PIV data for different light sheet thicknesses, with and without registration error, and also validated with an experimental vortex ring case from the 2014 PIV challenge. A thorough sensitivity analysis was performed to assess the relative impact of the various parameters on the overall uncertainty. The results suggest that in the absence of any disparity, the stereo PIV uncertainty prediction method is more sensitive to the planar uncertainty estimates than to the angle uncertainty, although the latter is not negligible for non-zero disparity. Overall the presented uncertainty quantification framework showed excellent agreement between the error and uncertainty RMS values for both the synthetic and the experimental data and demonstrated reliable uncertainty prediction coverage. This stereo PIV uncertainty quantification framework provides the first comprehensive treatment on the subject and potentially lays foundations applicable to volumetric
Return Predictability, Model Uncertainty, and Robust Investment
DEFF Research Database (Denmark)
Lukas, Manuel
Stock return predictability is subject to great uncertainty. In this paper we use the model confidence set approach to quantify uncertainty about expected utility from investment, accounting for potential return predictability. For monthly US data and six representative return prediction models, we...
Climate Projections and Uncertainty Communication.
Joslyn, Susan L; LeClerc, Jared E
2016-01-01
Lingering skepticism about climate change might be due in part to the way climate projections are perceived by members of the public. Variability between scientists' estimates might give the impression that scientists disagree about the fact of climate change rather than about details concerning the extent or timing. Providing uncertainty estimates might clarify that the variability is due in part to quantifiable uncertainty inherent in the prediction process, thereby increasing people's trust in climate projections. This hypothesis was tested in two experiments. Results suggest that including uncertainty estimates along with climate projections leads to an increase in participants' trust in the information. Analyses explored the roles of time, place, demographic differences (e.g., age, gender, education level, political party affiliation), and initial belief in climate change. Implications are discussed in terms of the potential benefit of adding uncertainty estimates to public climate projections.
DEFF Research Database (Denmark)
Nguyen, Daniel Xuyen
This paper presents a model of trade that explains why firms wait to export and why many exporters fail. Firms face uncertain demands that are only realized after the firm enters the destination. The model retools the timing of uncertainty resolution found in productivity heterogeneity models...... the high rate of exit seen in the first years of exporting. Finally, when faced with multiple countries in which to export, some firms will choose to sequentially export in order to slowly learn more about its chances for success in untested markets....
Quantifying the adaptive cycle
Angeler, David G.; Allen, Craig R.; Garmestani, Ahjond S.; Gunderson, Lance H.; Hjerne, Olle; Winder, Monika
2015-01-01
The adaptive cycle was proposed as a conceptual model to portray patterns of change in complex systems. Despite the model having potential for elucidating change across systems, it has been used mainly as a metaphor, describing system dynamics qualitatively. We use a quantitative approach for testing premises (reorganisation, conservatism, adaptation) in the adaptive cycle, using Baltic Sea phytoplankton communities as an example of such complex system dynamics. Phytoplankton organizes in recurring spring and summer blooms, a well-established paradigm in planktology and succession theory, with characteristic temporal trajectories during blooms that may be consistent with adaptive cycle phases. We used long-term (1994–2011) data and multivariate analysis of community structure to assess key components of the adaptive cycle. Specifically, we tested predictions about: reorganisation: spring and summer blooms comprise distinct community states; conservatism: community trajectories during individual adaptive cycles are conservative; and adaptation: phytoplankton species during blooms change in the long term. All predictions were supported by our analyses. Results suggest that traditional ecological paradigms such as phytoplankton successional models have potential for moving the adaptive cycle from a metaphor to a framework that can improve our understanding of how complex systems organize and reorganize following collapse. Quantifying reorganization, conservatism and adaptation provides opportunities to cope with the intricacies and uncertainties associated with fast ecological change, driven by shifting system controls. Ultimately, combining traditional ecological paradigms with heuristics of complex system dynamics using quantitative approaches may help refine ecological theory and improve our understanding of the resilience of ecosystems.
Energy Technology Data Exchange (ETDEWEB)
Thomas, R.E.
1982-03-01
An evaluation is made of the suitability of analytical and statistical sampling methods for making uncertainty analyses. The adjoint method is found to be well-suited for obtaining sensitivity coefficients for computer programs involving large numbers of equations and input parameters. For this purpose the Latin Hypercube Sampling method is found to be inferior to conventional experimental designs. The Latin hypercube method can be used to estimate output probability density functions, but requires supplementary rank transformations followed by stepwise regression to obtain uncertainty information on individual input parameters. A simple Cork and Bottle problem is used to illustrate the efficiency of the adjoint method relative to certain statistical sampling methods. For linear models of the form Ax=b it is shown that a complete adjoint sensitivity analysis can be made without formulating and solving the adjoint problem. This can be done either by using a special type of statistical sampling or by reformulating the primal problem and using suitable linear programming software.
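The adjoint efficiency claimed above for linear models Ax = b can be shown concretely: for a scalar output y = cᵀx, a single adjoint solve Aᵀλ = c yields all sensitivities dy/db_i at once, instead of one perturbed forward solve per parameter. The small system below is an illustrative example, not one from the report.

```python
import numpy as np

A = np.array([[4.0, 1.0], [2.0, 3.0]])
b = np.array([1.0, 2.0])
c = np.array([1.0, 1.0])

x = np.linalg.solve(A, b)          # forward solve
lam = np.linalg.solve(A.T, c)      # one adjoint solve: dy/db_i = lam_i

# Check against brute-force finite differences (one forward solve per input).
eps = 1e-6
fd = np.array([
    (c @ np.linalg.solve(A, b + eps * np.eye(2)[i]) - c @ x) / eps
    for i in range(2)
])
print("adjoint:", lam, " finite-diff:", fd)
```

For n input parameters the adjoint route costs one extra linear solve, while the sampling or finite-difference route costs n, which is the efficiency advantage the report attributes to the adjoint method.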
Directory of Open Access Journals (Sweden)
Oscar S. Silva Filho
2000-12-01
Within a hierarchical decision chain, a large share of the problems depend on the time component and are strongly sensitive to both endogenous and exogenous disturbances. These problems belong to an important class of stochastic optimal control problems. The difficulty of providing a closed-loop optimal policy for them leads to the search for sub-optimal alternatives. In this paper, four such sub-optimal procedures are analysed with respect to their structural properties. A case study, focused on an aggregate production planning problem, is considered with the purpose of comparing the optimal solutions provided by the procedures. The best-performing solution is used for scenario analyses, whose objective is to help management gain a long-term view of the use of the firm's material resources.
Uncertainty propagation within the UNEDF models
Haverinen, T
2016-01-01
The parameters of the nuclear energy density functional have to be adjusted to experimental data. As a result they carry a certain uncertainty which then propagates to calculated values of observables. In the present work we quantify the statistical uncertainties on binding energies for three UNEDF Skyrme energy density functionals by taking advantage of the knowledge of the model parameter uncertainties. We find that the uncertainty of UNEDF models increases rapidly when going towards proton- or neutron-rich nuclei. We also investigate the impact of each model parameter on the total error budget.
Uncertainty propagation within the UNEDF models
Haverinen, T.; Kortelainen, M.
2017-04-01
The parameters of the nuclear energy density functional have to be adjusted to experimental data. As a result they carry a certain uncertainty which then propagates to calculated values of observables. In the present work we quantify the statistical uncertainties of binding energies, proton quadrupole moments and the proton matter radius for three UNEDF Skyrme energy density functionals by taking advantage of the knowledge of the model parameter uncertainties. We find that the uncertainty of UNEDF models increases rapidly when going towards proton- or neutron-rich nuclei. We also investigate the impact of each model parameter on the total error budget.
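The kind of propagation described in the two UNEDF abstracts is, at first order, a linear transformation of the parameter covariance: σ²_O = J C Jᵀ, where J holds the partial derivatives of the observable with respect to the model parameters and C is the parameter covariance matrix from the fit. The numbers below are illustrative assumptions, not UNEDF values.

```python
import numpy as np

# J: sensitivities dO/dp_i of an observable at the fitted parameter values.
J = np.array([0.8, -1.5, 0.3])

# C: parameter covariance matrix obtained from the least-squares fit.
C = np.array([[0.04,  0.01,  0.00],
              [0.01,  0.09, -0.02],
              [0.00, -0.02,  0.01]])

# First-order propagated variance of the observable: sigma_O^2 = J C J^T.
var_O = J @ C @ J
print(f"propagated uncertainty on the observable: {np.sqrt(var_O):.3f}")
```

The per-parameter contributions to the error budget (the last point in the abstract) can be read off as the terms J_i (C J)_i of this quadratic form.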
A Bayesian framework for uncertainty formulation of engineering design process
Rajabalinejad, M.; Spitas, C.; Kahraman, Cengiz; Kerre, Etienne; Bozbura, Faik Tunc
2012-01-01
Uncertainties in the design process are investigated in this paper. A formal Bayesian method is presented for designers to quantify uncertainties in the design process. The uncertainties are implemented in a decision support system that plays a key role in the design of complex projects where a large and mu
DEFF Research Database (Denmark)
Puig, Daniel
This report outlines approaches to quantify the uncertainty associated with national greenhouse-gas emission scenario projections. It does so by describing practical applications of those approaches in two countries – Mexico and South Africa. The goal of the report is to promote uncertainty...... quantification, because quantifying uncertainty has the potential to foster more robust climate-change mitigation plans. To this end the report also summarises the rationale for quantifying uncertainty in greenhouse-gas emission scenario projections....
Quantifying uncertainty and sensitivity in sea ice models
Energy Technology Data Exchange (ETDEWEB)
Urrego Blanco, Jorge Rolando [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Hunke, Elizabeth Clare [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Urban, Nathan Mark [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-07-15
The Los Alamos Sea Ice model has a number of input parameters for which accurate values are not always well established. We conduct a variance-based sensitivity analysis of hemispheric sea ice properties to 39 input parameters. The method accounts for non-linear and non-additive effects in the model.
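A variance-based sensitivity analysis of the kind mentioned above can be sketched on a toy model (not the Los Alamos Sea Ice model): first-order Sobol indices estimated with a pick-freeze estimator, S_i = Cov(f(A), f(AB_i)) / Var(f). The toy model and sample sizes are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 200_000

def f(x):                      # toy model with deliberately unequal sensitivities
    return 2.0 * x[:, 0] + 0.5 * x[:, 1] ** 2

A = rng.uniform(-1, 1, (N, 2))     # two independent sample matrices
B = rng.uniform(-1, 1, (N, 2))
fA = f(A)

S = []
for i in range(2):
    ABi = B.copy()
    ABi[:, i] = A[:, i]            # "freeze" input i at the A-sample values
    # Pick-freeze estimate of the first-order index of input i.
    S.append(np.mean(fA * (f(ABi) - f(B))) / np.var(fA))

print("first-order Sobol indices:", S)   # input 0 dominates
```

For this toy model the analytic indices are S_0 = 60/61 and S_1 = 1/61, so the estimator should recover a strongly dominant first input; the same machinery scales to the 39-parameter setting described in the abstract.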
Quantifying Uncertainty in Early Lifecycle Cost Estimation (QUELCE)
2011-12-01
Quantifying Monte Carlo uncertainty in ensemble Kalman filter
Energy Technology Data Exchange (ETDEWEB)
Thulin, Kristian; Naevdal, Geir; Skaug, Hans Julius; Aanonsen, Sigurd Ivar
2009-01-15
This report presents results obtained during Kristian Thulin's PhD study, and is a slightly modified form of a paper submitted to SPE Journal. Kristian Thulin did most of his portion of the work while a PhD student at CIPR, University of Bergen. The ensemble Kalman filter (EnKF) is currently considered one of the most promising methods for conditioning reservoir simulation models to production data. The EnKF is a sequential Monte Carlo method based on a low-rank approximation of the system covariance matrix. The posterior probability distribution of model variables may be estimated from the updated ensemble, but because of the low-rank covariance approximation, the updated ensemble members become correlated samples from the posterior distribution. We suggest using multiple EnKF runs, each with smaller ensemble size, to obtain truly independent samples from the posterior distribution. This allows a point-wise confidence interval for the posterior cumulative distribution function (CDF) to be constructed. We present a methodology for finding an optimal combination of ensemble batch size (n) and number of EnKF runs (m) while keeping the total number of ensemble members (m x n) constant. The optimal combination of n and m is found by minimizing the integrated mean square error (MSE) of the CDFs, and we choose to define an EnKF run with 10,000 ensemble members as having zero Monte Carlo error. The methodology is tested on a simplistic, synthetic 2D model, but should be applicable also to larger, more realistic models. (author). 12 refs., figs., tabs.
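The batching idea in the abstract can be sketched without a reservoir model: split a fixed budget of m × n samples into m independent runs of size n, estimate the CDF in each run, and form a pointwise confidence interval from the spread across runs. Here independent Gaussian draws stand in for the m independent EnKF runs; all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

total, m = 1000, 10
n = total // m                       # batch size per run, m * n held constant
x_grid = np.linspace(-3, 3, 61)

cdfs = np.empty((m, x_grid.size))
for k in range(m):
    sample = rng.normal(size=n)      # stand-in for one independent EnKF run
    cdfs[k] = (sample[:, None] <= x_grid).mean(axis=0)   # empirical CDF

mean_cdf = cdfs.mean(axis=0)
se = cdfs.std(axis=0, ddof=1) / np.sqrt(m)   # pointwise standard error
print(f"CDF(0) = {mean_cdf[30]:.3f} +/- {1.96 * se[30]:.3f}")
```

The trade-off studied in the report is visible here: larger m gives a better-calibrated interval, but smaller n makes each run's CDF (and, in the real EnKF, each covariance estimate) noisier.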
Quantifying extinction probabilities from sighting records: inference and uncertainties.
Directory of Open Access Journals (Sweden)
Peter Caley
Methods are needed to estimate the probability that a population is extinct, whether to underpin decisions regarding the continuation of an invasive species eradication program, or to decide whether further searches for a rare and endangered species could be warranted. Current models for inferring extinction probability based on sighting data typically assume a constant or declining sighting rate. We develop methods to analyse these models in a Bayesian framework to estimate detection and survival probabilities of a population conditional on sighting data. We note, however, that the assumption of a constant or declining sighting rate may be hard to justify, especially for incursions of invasive species with potentially positive population growth rates. We therefore explored introducing additional process complexity via density-dependent survival and detection probabilities, with population density no longer constrained to be constant or decreasing. These models were applied to sparse carcass discoveries associated with the recent incursion of the European red fox (Vulpes vulpes into Tasmania, Australia. While a simple model provided apparently precise estimates of parameters and extinction probability, estimates arising from the more complex model were much more uncertain, with the sparse data unable to clearly resolve the underlying population processes. The outcome of this analysis was a much higher possibility of population persistence. We conclude that if it is safe to assume detection and survival parameters are constant, then existing models can be readily applied to sighting data to estimate extinction probability. If not, methods reliant on these simple assumptions are likely overstating their accuracy, and their use to underpin decision-making is potentially fraught. Instead, researchers will need to more carefully specify priors about possible population processes.
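The simple constant-rate model class discussed above can be sketched as follows (this is the simple variant, not the density-dependent one): each year an extant population survives with probability s and is sighted with probability p; given sightings in years 1..t_last and none afterwards up to year T, a posterior over the discrete extinction year yields P(extinct by year T). All parameter values are illustrative.

```python
import numpy as np

s, p = 0.9, 0.4        # annual survival and sighting probabilities (assumed)
t_last, T = 5, 15      # last sighting year, and end of the record

ks = np.arange(t_last + 1, T + 2)   # first extinct year; k = T+1 means extant
log_w = []
for k in ks:
    extant_years = min(k - 1, T)
    lw = (k - 1) * np.log(s)                       # survives years 1..k-1
    if k <= T:
        lw += np.log(1 - s)                        # goes extinct in year k
    lw += t_last * np.log(p)                       # sighted in years 1..t_last
    lw += (extant_years - t_last) * np.log(1 - p)  # extant but unsighted after
    log_w.append(lw)

log_w = np.array(log_w)
post = np.exp(log_w - log_w.max())   # normalise in a numerically safe way
post /= post.sum()
p_extinct = post[:-1].sum()          # mass on extinction at or before year T
print(f"P(extinct by year {T}) = {p_extinct:.3f}")
```

A long unsighted gap with a high assumed detection probability pushes `p_extinct` towards 1, which is exactly why the paper's caution matters: if p is in fact declining or density-dependent, this simple model overstates the evidence for extinction.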
Quantifying Uncertainty for Early Life Cycle Cost Estimates
2013-04-01
The ROMS IAS Data Assimilation And Prediction System: Quantifying Uncertainty
2009-09-30
experience in applying sophisticated 4D-Var methods in mesoscale coastal circulation environments (Di Lorenzo et al., 2007; Haidvogel et al., 2008... Di Lorenzo, E., A.M. Moore, H.G. Arango, B.D. Cornuelle, A.J. Miller, B. Powell, B.S. Chua and A.F. Bennett, 2007: Weak and strong constraint data... adjoint system. Tellus, 56A, 189-201. Moore, A.M., H.G. Arango, G. Broquet, B.S. Powell, J. Zavala-Garay and A.T. Weaver, 2009a: The Regional Ocean
Event-scale power law recession analysis: quantifying methodological uncertainty
Dralle, David N.; Karst, Nathaniel J.; Charalampous, Kyriakos; Veenstra, Andrew; Thompson, Sally E.
2017-01-01
The study of single streamflow recession events is receiving increasing attention following the presentation of novel theoretical explanations for the emergence of power law forms of the recession relationship, and drivers of its variability. Individually characterizing streamflow recessions often involves describing the similarities and differences between model parameters fitted to each recession time series. Significant methodological sensitivity has been identified in the fitting and parameterization of models that describe populations of many recessions, but the dependence of estimated model parameters on methodological choices has not been evaluated for event-by-event forms of analysis. Here, we use daily streamflow data from 16 catchments in northern California and southern Oregon to investigate how combinations of commonly used streamflow recession definitions and fitting techniques impact parameter estimates of a widely used power law recession model. Results are relevant to watersheds that are relatively steep, forested, and rain-dominated. The highly seasonal mediterranean climate of northern California and southern Oregon ensures study catchments explore a wide range of recession behaviors and wetness states, ideal for a sensitivity analysis. 
In such catchments, we show the following: (i) methodological decisions, including ones that have received little attention in the literature, can impact parameter value estimates and model goodness of fit; (ii) the central tendencies of event-scale recession parameter probability distributions are largely robust to methodological choices, in the sense that differing methods rank catchments similarly according to the medians of these distributions; (iii) recession parameter distributions are method-dependent, but roughly catchment-independent, such that changing the choices made about a particular method affects a given parameter in similar ways across most catchments; and (iv) the observed correlative relationship between the power-law recession scale parameter and catchment antecedent wetness varies depending on recession definition and fitting choices. Considering study results, we recommend a combination of four key methodological decisions to maximize the quality of fitted recession curves, and to minimize bias in the related populations of fitted recession parameters.
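The event-scale model referred to above is the power law recession relationship -dQ/dt = a Q^b. The sketch below, with assumed parameter values and synthetic Euler-generated data, illustrates the paper's central point: the fitted parameters depend on the methodological choice of which discharge value is paired with each dQ/dt estimate:

```python
import numpy as np

# Synthetic recession from -dQ/dt = a * Q^b (parameters assumed for illustration),
# discretized with a daily explicit Euler step:
a_true, b_true, dt = 0.05, 1.5, 1.0
Q = [100.0]
for _ in range(60):
    Q.append(Q[-1] - dt * a_true * Q[-1] ** b_true)
Q = np.array(Q)
dQdt = -np.diff(Q) / dt

def fit_powerlaw(q_ref):
    """Regress log(-dQ/dt) on log(Q); return (a, b)."""
    b, log_a = np.polyfit(np.log(q_ref), np.log(dQdt), 1)
    return np.exp(log_a), b

# Two common methodological choices for pairing each dQ/dt with a discharge value:
a_left, b_left = fit_powerlaw(Q[:-1])                 # left-endpoint discharge
a_mid, b_mid = fit_powerlaw(0.5 * (Q[:-1] + Q[1:]))   # midpoint discharge

print(f"left endpoint: a={a_left:.4f}, b={b_left:.3f}")
print(f"midpoint:      a={a_mid:.4f}, b={b_mid:.3f}")
```

Because the synthetic data were generated with a left-endpoint Euler step, the left-endpoint fit recovers the true parameters essentially exactly, while the midpoint choice shifts the estimates: a small-scale version of the method dependence quantified in the study.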
Marine particle aggregate breakup in turbulent flows
Rau, Matthew; Ackleson, Steven; Smith, Geoffrey
2016-11-01
The dynamics of marine particle aggregate formation and breakup due to turbulence is studied experimentally. Aggregates of clay particles, initially in a quiescent aggregation tank, are subjected to fully developed turbulent pipe flow at Reynolds numbers of up to 25,000. This flow arrangement simulates the exposure of marine aggregates in coastal waters to a sudden turbulent event. Particle size distributions are measured by in-situ sampling of the small-angle forward volume scattering function and the volume concentration of the suspended particulate matter is quantified through light attenuation measurements. Results are compared to measurements conducted under laminar and turbulent flow conditions. At low shear rates, larger sized particles indicate that aggregation initially governs the particle dynamics. Breakup is observed when large aggregates are exposed to the highest levels of shear in the experiment. Models describing the aggregation and breakup rates of marine particles due to turbulence are evaluated with the population balance equation and results from the simulation and experiment are compared. Additional model development will more accurately describe aggregation dynamics for remote sensing applications in turbulent marine environments.
Lithological Uncertainty Expressed by Normalized Compression Distance
Jatnieks, J.; Saks, T.; Delina, A.; Popovs, K.
2012-04-01
Lithological composition and structure of the Quaternary deposits is highly complex and heterogeneous in nature, especially as described in borehole log data. This work aims to develop a universal solution for quantifying uncertainty based on mutual information shared between the borehole logs. This approach presents tangible information directly useful for generalizing the geometry and lithology of the Quaternary sediments for use in regional groundwater flow models, since a qualitative estimate of lithological uncertainty involving thousands of borehole logs would be humanly impossible given the amount of raw data involved. Our aim is to improve parametrization of recharge in the Quaternary strata. This research, however, holds appeal for other areas of reservoir modelling, as demonstrated in the 2011 paper by Wellmann & Regenauer-Lieb. For our experiments we used extracts of the Quaternary strata from the general-purpose geological borehole log database maintained by the Latvian Environment, Geology and Meteorology Centre, spanning the territory of Latvia. Lithological codes were generalised into 2 aggregation levels consisting of 5 and 20 rock types respectively. Our calculation of borehole log similarity relies on the concept of information distance proposed by Bennett et al. in 1998, which was developed into a practical data mining application by Cilibrasi in his 2007 dissertation. The resulting implementation, the CompLearn utilities, provides a calculation of the Normalized Compression Distance (NCD) metric. It relies on universal data compression algorithms to estimate mutual information content in the data. This approach has proven universally successful for parameter-free data mining in disciplines from molecular biology to network intrusion monitoring. To adapt this approach for use in geology it is beneficial to apply several transformations as pre-processing steps to the borehole log data. Efficiency of text stream compressors, such as
Biological framework for soil aggregation: Implications for ecological functions.
Ghezzehei, Teamrat; Or, Dani
2016-04-01
Soil aggregation is heuristically understood as agglomeration of primary particles bound together by biotic and abiotic cementing agents. The organization of aggregates is believed to be hierarchical in nature; whereby primary particles bond together to form secondary particles and subsequently merge to form larger aggregates. Soil aggregates are not permanent structures, they continuously change in response to internal and external forces and other drivers, including moisture, capillary pressure, temperature, biological activity, and human disturbances. Soil aggregation processes and the resulting functionality span multiple spatial and temporal scales. The intertwined biological and physical nature of soil aggregation, and the time scales involved precluded a universally applicable and quantifiable framework for characterizing the nature and function of soil aggregation. We introduce a biophysical framework of soil aggregation that considers the various modes and factors of the genesis, maturation and degradation of soil aggregates including wetting/drying cycles, soil mechanical processes, biological activity and the nature of primary soil particles. The framework attempts to disentangle mechanical (compaction and soil fragmentation) from in-situ biophysical aggregation and provides a consistent description of aggregate size, hierarchical organization, and life time. It also enables quantitative description of biotic and abiotic functions of soil aggregates including diffusion and storage of mass and energy as well as role of aggregates as hot spots of nutrient accumulation, biodiversity, and biogeochemical cycles.
Targeting patterns: A path forward for uncertainty quantification in carbon cycle science? (Invited)
Michalak, A. M.; Fang, Y.; Miller, S. M.; Ray, J.; Shiga, Y. P.; Yadav, V.; Zscheischler, J.
2013-12-01
The central challenge in carbon cycle science is to understand where, why, and how the terrestrial biosphere and oceans are taking up approximately half of the carbon being emitted through human activity. Such understanding would make it possible to decrease the uncertainty associated with predictions of carbon-climate feedbacks, and therefore reduce one of the key uncertainties in atmospheric carbon abundance and climate predictions. A second emerging challenge is that of quantifying the anthropogenic emissions themselves, and their changes over time, in support of efforts aimed at limiting emissions. Much work has focused on quantifying carbon exchange (a.k.a. fluxes) on scales ranging from local to continental using observed variability in atmospheric concentrations of carbon gases as a constraint. Uncertainties, however, have not decreased substantially over time. The difficulty associated with constraining the carbon budget can be attributed in part to the fact that (1) net fluxes are a small residual of large gross variability in emissions and uptake, especially for carbon dioxide, (2) the intermediate (i.e. local to continental) scales of interest to carbon cycle studies are not well constrained by existing observing systems, and (3) the budgets inferred through inverse modeling studies are very sensitive to the atmospheric boundary conditions of the examined regions. While these modeling challenges can be addressed over time through improvements in observational and modeling approaches, the question that emerges is: What can be done to address some of the core questions given the state of current resources? One potential approach is, rather than focus on spatially and temporally aggregated quantities (i.e. magnitude of net fluxes over given regions during given time periods), to focus instead on the ability to identify the spatiotemporal patterns of net fluxes, and linking these to the underlying driving processes. In other words, one might shift the primary
A web-application for visualizing uncertainty in numerical ensemble models
Alberti, Koko; Hiemstra, Paul; de Jong, Kor; Karssenberg, Derek
2013-04-01
Numerical ensemble models are used in the analysis and forecasting of a wide range of environmental processes. Common use cases include assessing the consequences of nuclear accidents, pollution releases into the ocean or atmosphere, forest fires, volcanic eruptions, or identifying areas at risk from such hazards. In addition to the increased use of scenario analyses and model forecasts, the availability of supplementary data describing errors and model uncertainties is increasingly commonplace. Unfortunately most current visualization routines are not capable of properly representing uncertain information. As a result, uncertainty information is not provided at all, not readily accessible, or it is not communicated effectively to model users such as domain experts, decision makers, policy makers, or even novice users. In an attempt to address these issues a lightweight and interactive web-application has been developed. It makes clear and concise uncertainty visualizations available in a web-based mapping and visualization environment, incorporating aggregation (upscaling) techniques to adjust uncertainty information to the zooming level. The application has been built on a web mapping stack of open source software, and can quantify and visualize uncertainties in numerical ensemble models in such a way that both expert and novice users can investigate uncertainties present in a simple ensemble dataset. As a test case, a dataset was used which forecasts the spread of an airborne tracer across Western Europe. Extrinsic uncertainty representations are used in which dynamic circular glyphs are overlaid on model attribute maps to convey various uncertainty concepts. It supports both basic uncertainty metrics such as standard deviation, standard error, width of the 95% confidence interval and interquartile range, as well as more experimental ones aimed at novice users. Ranges of attribute values can be specified, and the circular glyphs dynamically change size to
Sensitivity-Uncertainty Techniques for Nuclear Criticality Safety
Energy Technology Data Exchange (ETDEWEB)
Brown, Forrest B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Rising, Michael Evan [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Alwin, Jennifer Louise [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-08-07
The sensitivity and uncertainty analysis course will introduce students to k_{eff} sensitivity data, cross-section uncertainty data, how k_{eff} sensitivity data and k_{eff} uncertainty data are generated and how they can be used. Discussion will include how sensitivity/uncertainty data can be used to select applicable critical experiments, to quantify a defensible margin to cover validation gaps and weaknesses, and in development of upper subcritical limits.
Suspensions of colloidal particles and aggregates
Babick, Frank
2016-01-01
This book addresses the properties of particles in colloidal suspensions. It has a focus on particle aggregates and the dependency of their physical behaviour on morphological parameters. For this purpose, relevant theories and methodological tools are reviewed and applied to selected examples. The book is divided into four main chapters. The first of them introduces important measurement techniques for the determination of particle size and interfacial properties in colloidal suspensions. A further chapter is devoted to the physico-chemical properties of colloidal particles—highlighting the interfacial phenomena and the corresponding interactions between particles. The book’s central chapter examines the structure-property relations of colloidal aggregates. This comprises concepts to quantify size and structure of aggregates, models and numerical tools for calculating the (light) scattering and hydrodynamic properties of aggregates, and a discussion on van-der-Waals and double layer interactions between ...
Quantifying interspecific coagulation efficiency of phytoplankton
DEFF Research Database (Denmark)
Hansen, J.L.S.; Kiørboe, Thomas
1997-01-01
Non-sticky latex beads and sticky diatoms were used as models to describe mutual coagulation between sticky and non-sticky particles. In mixed suspensions of beads and Thalassiosira nordenskjoeldii, both types of particles coagulated into mixed aggregates at specific rates, from which the interspecific coagulation efficiency ... of T. nordenskjoeldii. Mutual coagulation between Skeletonema costatum and the non-sticky cells of Ditylum brightwellii also proceeded with half the efficiency of S. costatum alone. The latex beads were suitable to be used as 'standard particles' to quantify the ability of phytoplankton to prime aggregation...
The Role of Uncertainty in Climate Science
Oreskes, N.
2012-12-01
Scientific discussions of climate change place considerable weight on uncertainty. The research frontier, by definition, rests at the interface between the known and the unknown and our scientific investigations necessarily track this interface. Yet, other areas of active scientific research are not necessarily characterized by a similar focus on uncertainty; previous assessments of science for policy, for example, do not reveal such extensive efforts at uncertainty quantification. Why has uncertainty loomed so large in climate science? This paper argues that the extensive discussions of uncertainty surrounding climate change are at least in part a response to the social and political context of climate change. Skeptics and contrarians focus on uncertainty as a political strategy, emphasizing or exaggerating uncertainties as a means to undermine public concern about climate change and delay policy action. The strategy works in part because it appeals to a certain logic: if our knowledge is uncertain, then it makes sense to do more research. Change, as the tobacco industry famously realized, requires justification; doubt favors the status quo. However, the strategy also works by pulling scientists into an "uncertainty framework," inspiring them to respond to the challenge by addressing and quantifying the uncertainties. The problem is that all science is uncertain—nothing in science is ever proven absolutely, positively—so as soon as one uncertainty is addressed, another can be raised, which is precisely what contrarians have done over the past twenty years.
Distinguishing aggregate formation and aggregate clearance using cell based assays
Eenjes, E.; Dragich, J.M.; Kampinga, H.; Yamamoto, A.
2016-01-01
The accumulation of ubiquitinated proteinaceous inclusions represents a complex process, reflecting the disequilibrium between aggregate formation and aggregate clearance. Although decreasing aggregate formation or augmenting aggregate clearance will ultimately lead to diminished aggrega...
Yuen, W.; Ma, Q.; Du, K.; Koloutsou-Vakakis, S.; Rood, M. J.
2015-12-01
Measurements of particulate matter (PM) emissions generated from fugitive sources are of interest in air pollution studies, since such emissions vary widely both spatially and temporally. This research focuses on determining the uncertainties in quantifying fugitive PM emission factors (EFs) generated from mobile vehicles using a vertical scanning micro-pulse lidar (MPL). The goal of this research is to identify the greatest sources of uncertainty of the applied lidar technique in determining fugitive PM EFs, and to recommend methods to reduce the uncertainties in this measurement. The MPL detects the PM plume generated by mobile fugitive sources that is carried downwind to the MPL's vertical scanning plane. Range-resolved MPL signals are measured, corrected, and converted to light extinction coefficients through inversion of the lidar equation and calculation of the lidar ratio. In this research, both the near-end and far-end lidar equation inversion methods are considered. Range-resolved PM mass concentrations are then determined from the extinction coefficient measurements using the measured mass extinction efficiency (MEE) value, which is an intensive PM property. MEE is determined by collocated PM mass concentration and light extinction measurements, provided respectively by a DustTrak and an open-path laser transmissometer. These PM mass concentrations are then integrated with wind information, duration of the plume event, and vehicle distance travelled to obtain fugitive PM EFs. To obtain the uncertainty of PM EFs, uncertainties in MPL signals, the lidar ratio, MEE, and wind variation are considered. The error propagation method is applied to each of the above intermediate steps to aggregate the uncertainty sources. Results include determination of uncertainties in each intermediate step, and comparison of uncertainties between the use of near-end and far-end lidar equation inversion methods.
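For a multiplicative EF retrieval with independent error sources, the error propagation step amounts to adding relative variances in quadrature. The uncertainty values below are illustrative, not those measured in the study:

```python
import math

# Relative (1-sigma) uncertainties of each multiplicative term in the EF retrieval
# (illustrative values, not from the study):
rel_unc = {
    "MPL signal / inversion": 0.10,
    "lidar ratio": 0.15,
    "mass extinction efficiency": 0.20,
    "wind speed": 0.12,
}

# For EF = k * x1 * x2 * ... with independent errors, relative variances add:
total_rel = math.sqrt(sum(r ** 2 for r in rel_unc.values()))
print(f"combined relative uncertainty: {total_rel:.1%}")

# The dominant term is the best target for measurement improvement:
dominant = max(rel_unc, key=rel_unc.get)
print(f"largest single contributor: {dominant}")
```

Ranking the contributors this way is what lets the analysis recommend where (e.g. the MEE measurement, in this invented example) effort to reduce uncertainty pays off most.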
Quantifiers, Anaphora and Intensionality
Dalrymple, M; Pereira, F C N; Saraswat, V; Dalrymple, Mary; Lamping, John; Pereira, Fernando; Saraswat, Vijay
1995-01-01
The relationship between Lexical-Functional Grammar (LFG) functional structures (f-structures) for sentences and their semantic interpretations can be expressed directly in a fragment of linear logic in a way that correctly explains the constrained interactions between quantifier scope ambiguity, bound anaphora and intensionality. This deductive approach to semantic interpretation obviates the need for additional mechanisms, such as Cooper storage, to represent the possible scopes of a quantified NP, and explains the interactions between quantified NPs, anaphora and intensional verbs such as `seek'. A single specification in linear logic of the argument requirements of intensional verbs is sufficient to derive the correct reading predictions for intensional-verb clauses both with nonquantified and with quantified direct objects. In particular, both de dicto and de re readings are derived for quantified objects. The effects of type-raising or quantifying-in rules in other frameworks here just follow as li...
Macro Expectations, Aggregate Uncertainty, and Expected Term Premia
DEFF Research Database (Denmark)
Dick, Christian D.; Schmeling, Maik; Schrimpf, Andreas
2013-01-01
Based on individual expectations from the Survey of Professional Forecasters, we construct a realtime proxy for expected term premium changes on long-term bonds. We empirically investigate the relation of these bond term premium expectations with expectations about key macroeconomic variables as ...
Gosink, Luke; Bensema, Kevin; Pulsipher, Trenton; Obermaier, Harald; Henry, Michael; Childs, Hank; Joy, Kenneth I
2013-12-01
Numerical ensemble forecasting is a powerful tool that drives many risk analysis efforts and decision making tasks. These ensembles are composed of individual simulations that each uniquely model a possible outcome for a common event of interest: e.g., the direction and force of a hurricane, or the path of travel and mortality rate of a pandemic. This paper presents a new visual strategy to help quantify and characterize a numerical ensemble's predictive uncertainty: i.e., the ability for ensemble constituents to accurately and consistently predict an event of interest based on ground truth observations. Our strategy employs a Bayesian framework to first construct a statistical aggregate from the ensemble. We extend the information obtained from the aggregate with a visualization strategy that characterizes predictive uncertainty at two levels: at a global level, which assesses the ensemble as a whole, as well as a local level, which examines each of the ensemble's constituents. Through this approach, modelers are able to better assess the predictive strengths and weaknesses of the ensemble as a whole, as well as individual models. We apply our method to two datasets to demonstrate its broad applicability.
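A minimal version of the Bayesian aggregation step can be sketched as follows: each ensemble member is weighted by the likelihood of a ground-truth observation under that member (with a uniform prior over members), yielding a statistical aggregate whose spread summarizes predictive uncertainty. The forecasts and observation error below are invented for illustration:

```python
import numpy as np

# Ensemble of member forecasts for a scalar quantity, plus a ground-truth observation.
members = np.array([2.1, 2.4, 3.0, 1.2, 2.2])   # illustrative forecasts
obs, obs_sigma = 2.3, 0.3                        # observation and its error (assumed)

# Posterior weight of each member ~ likelihood of the observation given that member
# (uniform prior over members), i.e. a simple Bayesian model average:
log_like = -0.5 * ((members - obs) / obs_sigma) ** 2
w = np.exp(log_like - log_like.max())
w /= w.sum()

agg_mean = np.sum(w * members)
agg_var = np.sum(w * (members - agg_mean) ** 2)   # spread of the weighted aggregate
print(f"weights: {np.round(w, 3)}")
print(f"aggregate: {agg_mean:.3f} +/- {np.sqrt(agg_var):.3f}")
```

The per-member weights correspond to the "local" view in the paper (how well each constituent predicts the event), while the aggregate mean and spread correspond to the "global" assessment of the ensemble as a whole.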
Practical problems in aggregating expert opinions
Energy Technology Data Exchange (ETDEWEB)
Booker, J.M.; Picard, R.R.; Meyer, M.A.
1993-11-01
Expert opinion is data given by a qualified person in response to a technical question. In these analyses, expert opinion provides information where other data are either sparse or non-existent. Improvements in forecasting result from the advantageous addition of expert opinion to observed data in many areas, such as meteorology and econometrics. More generally, analyses of large, complex systems often involve experts on various components of the system supplying input to a decision process; applications include such wide-ranging areas as nuclear reactor safety, management science, and seismology. For large or complex applications, no single expert may be knowledgeable enough about the entire application. In other problems, decision makers may find it comforting that a consensus or aggregation of opinions is usually better than a single opinion. Many risk and reliability studies require a single estimate for modeling, analysis, reporting, and decision making purposes. For problems with large uncertainties, the strategy of combining as diverse a set of experts as possible hedges against underestimation of that uncertainty. Decision makers are frequently faced with the task of selecting the experts and combining their opinions. However, the aggregation is often the responsibility of an analyst. Whether the decision maker or the analyst does the aggregation, the input for it, such as providing weights for experts or estimating other parameters, is imperfect owing to a lack of omniscience. Aggregation methods for expert opinions have existed for over thirty years; yet many of the difficulties with their use remain unresolved. The bulk of these problem areas are summarized in the sections that follow: sensitivities of results to assumptions, weights for experts, correlation of experts, and handling uncertainties. The purpose of this paper is to discuss the sources of these problems and describe their effects on aggregation.
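The most common aggregation rule in this literature, the weighted linear opinion pool, is easy to state concretely. The expert distributions and weights below are illustrative, and the equal-weight recomputation shows the sensitivity-to-weights problem the paper raises:

```python
import numpy as np

# Each expert gives a probability distribution over the same discrete outcome set
# (illustrative elicitation results):
experts = np.array([
    [0.70, 0.20, 0.10],
    [0.50, 0.30, 0.20],
    [0.10, 0.30, 0.60],   # a dissenting expert widens the aggregate
])
weights = np.array([0.5, 0.3, 0.2])   # analyst-assigned; itself an imperfect input

# Linear opinion pool: weighted average of the experts' distributions.
pool = weights @ experts
print(np.round(pool, 3))

# Equal weights as a sensitivity check: how much does the choice of weights matter?
equal = experts.mean(axis=0)
print(np.round(equal, 3))
```

Comparing `pool` and `equal` makes the paper's point tangible: the aggregate shifts noticeably with the weighting scheme, and correlation between experts (not modeled here) would complicate it further.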
Managing project risks and uncertainties
Directory of Open Access Journals (Sweden)
Mike Mentis
2015-01-01
This article considers threats to a project slipping on budget, schedule and fit-for-purpose. Threat is used here as the collective term for risks (quantifiable bad things that can happen) and uncertainties (poorly or not quantifiable bad possible events). Based on experience with projects in developing countries, this review considers that (a) project slippage is due to uncertainties rather than risks, (b) while eventuation of some bad things is beyond control, managed execution and oversight are still the primary means of keeping within budget, on time and fit-for-purpose, (c) improving project delivery is less about bigger and more complex and more about coordinated focus, effectiveness and developing thought-out heuristics, and (d) projects take longer and cost more partly because threat identification is inaccurate, the scope of identified threats is too narrow, and the threat assessment product is not integrated into overall project decision-making and execution. Almost by definition, what is poorly known is likely to cause problems. Yet it is not just the unquantifiability and intangibility of uncertainties that causes project slippage, but that they are insufficiently taken into account in project planning and execution, causing budget and time overruns. Improving project performance requires purpose-driven and managed deployment of scarce seasoned professionals. This can be aided by independent oversight from deeply experienced panelists who contribute technical insights and can potentially show that diligence is seen to be done.
Relating confidence to measured information uncertainty in qualitative reasoning
Energy Technology Data Exchange (ETDEWEB)
Chavez, Gregory M [Los Alamos National Laboratory; Zerkle, David K [Los Alamos National Laboratory; Key, Brian P [Los Alamos National Laboratory; Shevitz, Daniel W [Los Alamos National Laboratory
2010-10-07
Qualitative reasoning makes use of qualitative assessments provided by subject matter experts to model factors such as security risk. Confidence in a result is important and useful when comparing competing results. Quantifying the confidence in an evidential reasoning result must be consistent and based on the available information. A novel method is proposed to relate confidence to the available information uncertainty in the result using fuzzy sets. Information uncertainty can be quantified through measures of non-specificity and conflict. Fuzzy values for confidence are established from information uncertainty values that lie between the measured minimum and maximum information uncertainty values.
Verification of uncertainty budgets
DEFF Research Database (Denmark)
Heydorn, Kaj; Madsen, B.S.
2005-01-01
The quality of analytical results is expressed by their uncertainty, as it is estimated on the basis of an uncertainty budget; little effort is, however, often spent on ascertaining the quality of the uncertainty budget. The uncertainty budget is based on circumstantial or historical data, and th...
Parameter and Uncertainty Estimation in Groundwater Modelling
DEFF Research Database (Denmark)
Jensen, Jacob Birk
The data basis on which groundwater models are constructed is in general very incomplete, and this leads to uncertainty in model outcome. Groundwater models form the basis for many, often costly decisions and if these are to be made on solid grounds, the uncertainty attached to model results must be quantified. This study was motivated by the need to estimate the uncertainty involved in groundwater models. Chapter 2 presents an integrated surface/subsurface unstructured finite difference model that was developed and applied to a synthetic case study. The following two chapters concern calibration and uncertainty estimation. Essential issues relating to calibration are discussed. The classical regression methods are described; however, the main focus is on the Generalized Likelihood Uncertainty Estimation (GLUE) methodology. The next two chapters describe case studies in which the GLUE methodology...
Uncertainty Quantification in Hybrid Dynamical Systems
Sahai, Tuhin
2011-01-01
Uncertainty quantification (UQ) techniques are frequently used to ascertain output variability in systems with parametric uncertainty. Traditional algorithms for UQ are either system-agnostic and slow (such as Monte Carlo) or fast with stringent assumptions on smoothness (such as polynomial chaos and Quasi-Monte Carlo). In this work, we develop a fast UQ approach for hybrid dynamical systems by extending the polynomial chaos methodology to these systems. To capture discontinuities, we use a wavelet-based Wiener-Haar expansion. We develop a boundary layer approach to propagate uncertainty through separable reset conditions. We also introduce a transport theory based approach for propagating uncertainty through hybrid dynamical systems. Here the expansion yields a set of hyperbolic equations that are solved by integrating along characteristics. The solution of the partial differential equation along the characteristics allows one to quantify uncertainty in hybrid or switching dynamical systems. The above method...
Uncertainty quantification in hybrid dynamical systems
Sahai, Tuhin; Pasini, José Miguel
2013-03-01
Uncertainty quantification (UQ) techniques are frequently used to ascertain output variability in systems with parametric uncertainty. Traditional algorithms for UQ are either system-agnostic and slow (such as Monte Carlo) or fast with stringent assumptions on smoothness (such as polynomial chaos and Quasi-Monte Carlo). In this work, we develop a fast UQ approach for hybrid dynamical systems by extending the polynomial chaos methodology to these systems. To capture discontinuities, we use a wavelet-based Wiener-Haar expansion. We develop a boundary layer approach to propagate uncertainty through separable reset conditions. We also introduce a transport theory based approach for propagating uncertainty through hybrid dynamical systems. Here the expansion yields a set of hyperbolic equations that are solved by integrating along characteristics. The solution of the partial differential equation along the characteristics allows one to quantify uncertainty in hybrid or switching dynamical systems. The above methods are demonstrated on example problems.
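The key property of the Wiener-Haar expansion, capturing a discontinuous response without the oscillations of a global polynomial basis, can be illustrated in one dimension. The step response below is an invented stand-in for a hybrid system's switching output; by orthonormality of the Haar basis, the surrogate's mean and variance fall directly out of the expansion coefficients:

```python
import numpy as np

# Response with a discontinuity in the random input x ~ U(0,1), a stand-in for the
# switching behaviour of a hybrid system:
def f(x):
    return np.where(x < 0.3, -1.0, 1.0)

def haar(j, k, x):
    """Orthonormal Haar wavelet psi_{j,k} on [0, 1]."""
    t = (2.0 ** j) * x - k
    up = ((t >= 0.0) & (t < 0.5)).astype(float)
    down = ((t >= 0.5) & (t < 1.0)).astype(float)
    return (2.0 ** (j / 2)) * (up - down)

# Midpoint quadrature on a fine grid (uniform input density):
n = 1 << 16
x = (np.arange(n) + 0.5) / n
fx = f(x)

levels = 8
c0 = fx.mean()                                # scaling coefficient = mean of f
coeffs = np.array([(fx * haar(j, k, x)).mean()
                   for j in range(levels) for k in range(2 ** j)])

# Orthonormality gives the surrogate's moments directly from the coefficients:
mean, var = c0, np.sum(coeffs ** 2)
print(f"Wiener-Haar mean     ~ {mean:.3f} (exact 0.400)")
print(f"Wiener-Haar variance ~ {var:.3f} (exact 0.840)")
```

The small variance deficit relative to the exact value is the energy of the discontinuity below the finest resolved dyadic scale; it halves with each added level, whereas a truncated global polynomial basis converges far more slowly at a jump.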
The topology of geology 2: Topological uncertainty
Thiele, Samuel T.; Jessell, Mark W.; Lindsay, Mark; Wellmann, J. Florian; Pakyuz-Charrier, Evren
2016-10-01
Uncertainty is ubiquitous in geology, and efforts to characterise and communicate it are becoming increasingly important. Recent studies have quantified differences between perturbed geological models to gain insight into uncertainty. We build on this approach by quantifying differences in topology, a property that describes geological relationships in a model, introducing the concept of topological uncertainty. Data defining implicit geological models were perturbed to simulate data uncertainties, and the amount of topological variation in the resulting model suite measured to provide probabilistic assessments of specific topological hypotheses, sources of topological uncertainty and the classification of possible model realisations based on their topology. Overall, topology was found to be highly sensitive to small variations in model construction parameters in realistic models, with almost all of the several thousand realisations defining distinct topologies. In particular, uncertainty related to faults and unconformities was found to have profound topological implications. Finally, possible uses of topology as a geodiversity metric and validation filter are discussed, and methods of incorporating topological uncertainty into physical models are suggested.
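The core procedure, perturbing the model inputs, extracting each realisation's topology as a set of adjacent-unit pairs, and tabulating distinct topologies, can be sketched for a toy faulted-layer model. Geometry, uncertainty magnitudes, and grid resolution are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

def topology(z1, z2, throw, nx=60, nz=60):
    """Unit-adjacency set of a 3-layer model cut by a vertical fault at x = nx//2."""
    grid = np.zeros((nz, nx), dtype=int)
    for ix in range(nx):
        offset = throw if ix >= nx // 2 else 0.0   # fault offsets right-hand block
        col = np.zeros(nz, dtype=int)
        col[np.arange(nz) >= z1 + offset] = 1
        col[np.arange(nz) >= z2 + offset] = 2
        grid[:, ix] = col
    pairs = set()
    for dz, dx in ((0, 1), (1, 0)):                # horizontal and vertical neighbours
        a = grid[: nz - dz, : nx - dx].ravel()
        b = grid[dz:, dx:].ravel()
        mask = a != b
        for ua, ub in zip(a[mask], b[mask]):
            pairs.add(tuple(sorted((ua, ub))))
    return frozenset(pairs)

# Perturb boundary depths and fault throw (assumed uncertainty magnitudes):
realisations = [topology(z1=20 + rng.normal(0, 2),
                         z2=35 + rng.normal(0, 2),
                         throw=rng.normal(10, 6))
                for _ in range(200)]

counts = {}
for t in realisations:
    counts[t] = counts.get(t, 0) + 1
print(f"{len(counts)} distinct topologies in {len(realisations)} realisations")
for t, c in sorted(counts.items(), key=lambda kv: -kv[1]):
    print(sorted(t), f"p ~ {c / len(realisations):.2f}")
```

Here the topological hypothesis being tested probabilistically is whether the fault throw exceeds the middle layer's thickness, which juxtaposes the top and bottom units; realistic implicit models, as the abstract notes, yield far more distinct topologies than this toy case.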
Fuzzy Uncertainty Evaluation for Fault Tree Analysis
Energy Technology Data Exchange (ETDEWEB)
Kim, Ki Beom; Shim, Hyung Jin [Seoul National University, Seoul (Korea, Republic of); Jae, Moo Sung [Hanyang University, Seoul (Korea, Republic of)
2015-05-15
This traditional probabilistic approach can produce relatively accurate results. However, it is time-consuming because of the repetitive computation the MC method requires. In addition, when data for statistical analysis are insufficient, or when some events are mainly caused by human error, the probabilistic approach may not be feasible because the uncertainties of these events are difficult to express as probability distributions. To reduce the computation time and to quantify the uncertainty of top events whose basic events cannot easily be assigned probability distributions, fuzzy uncertainty propagation based on fuzzy set theory can be applied. In this paper, we develop a fuzzy uncertainty propagation code and apply it to the fault tree of the core damage accident following the large loss of coolant accident (LLOCA). The code is implemented and tested on the fault tree of the radiation release accident. We then apply it to the fault tree of the core damage accident after the LLOCA in three cases and compare the results with those computed by probabilistic uncertainty propagation using the MC method. The fuzzy uncertainty propagation produces its results in a relatively short time, and those results cover the results obtained by the probabilistic uncertainty propagation.
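Fuzzy propagation of this kind typically works on alpha-cuts: each fuzzy basic-event probability is sliced into intervals, which propagate through the monotone AND/OR gate formulas endpoint by endpoint. A minimal sketch with invented triangular membership numbers (not the paper's fault tree):

```python
def alpha_cut(tri, a):
    """Alpha-cut of a triangular fuzzy number (lo, mode, hi): an interval."""
    lo, mode, hi = tri
    return (lo + a * (mode - lo), hi - a * (hi - mode))

def and_gate(i1, i2):
    # P = p1 * p2 is monotone increasing, so endpoints map to endpoints
    return (i1[0] * i2[0], i1[1] * i2[1])

def or_gate(i1, i2):
    # P = 1 - (1 - p1)(1 - p2), also monotone increasing
    return (1 - (1 - i1[0]) * (1 - i2[0]), 1 - (1 - i1[1]) * (1 - i2[1]))

# Hypothetical fuzzy basic-event probabilities; top event = (A AND B) OR C
basic = {"A": (0.01, 0.02, 0.04), "B": (0.001, 0.005, 0.01), "C": (0.05, 0.1, 0.2)}
for a in (0.0, 0.5, 1.0):
    top = or_gate(and_gate(alpha_cut(basic["A"], a), alpha_cut(basic["B"], a)),
                  alpha_cut(basic["C"], a))
    print(f"alpha={a}: top event in [{top[0]:.5f}, {top[1]:.5f}]")
```

At alpha = 1 the interval collapses to the most credible value; stacking the cuts reconstructs the fuzzy membership of the top event without any sampling, which is why the method is fast.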
Opportunities for Price Manipulation by Aggregators in Electricity Markets
Ruhi, Navid Azizan; Chen, Niangjun; Wierman, Adam
2016-01-01
Aggregators are playing an increasingly crucial role in the integration of renewable generation in power systems. However, the intermittent nature of renewable generation makes market interactions of aggregators difficult to monitor and regulate, raising concerns about potential market manipulation by aggregators. In this paper, we study this issue by quantifying the profit an aggregator can obtain through strategic curtailment of generation in an electricity market. We show that, while the problem of maximizing the benefit from curtailment is hard in general, efficient algorithms exist when the topology of the network is radial (acyclic). Further, we highlight that significant increases in profit are possible via strategic curtailment in practical settings.
Pia, Maria Grazia; Begalli, Marcia; Lechner, Anton; Quintieri, Lina; Saracco, Paolo
2010-01-01
The issue of how epistemic uncertainties affect the outcome of Monte Carlo simulation is discussed by means of a concrete use case: the simulation of the longitudinal energy deposition profile of low energy protons. A variety of electromagnetic and hadronic physics models is investigated, and their effects are analyzed. Possible systematic effects are highlighted. The results identify requirements for experimental measurements capable of reducing epistemic uncertainties in the simulation.
Decomposing generalized quantifiers
Westerståhl, D.
2008-01-01
This note explains the circumstances under which a type <1> quantifier can be decomposed into a type <1, 1> quantifier and a set, by fixing the first argument of the former to the latter. The motivation comes from the semantics of Noun Phrases (also called Determiner Phrases) in natural languages, ...
Understanding quantifiers in language
Szymanik, J.; Zajenkowski, M.; Taatgen, N.; van Rijn, H.
2009-01-01
We compare the time needed for understanding different types of quantifiers. We show that the computational distinction between quantifiers recognized by finite automata and push-down automata is psychologically relevant. Our research improves upon the hypotheses and explanatory power of recent neuroimaging
National Aeronautics and Space Administration — This paper presents a novel set of uncertainty measures to quantify the impact of input uncertainty on nonlinear prognosis systems. A Particle Filtering-based method...
Lucieer, A.; Veen, L.E.
2009-01-01
Uncertainty and vagueness are important concepts when dealing with transition zones between vegetation communities or land-cover classes. In this study, classification uncertainty is quantified by applying a supervised fuzzy classification algorithm. New visualization techniques are proposed and pre
Forecasting Uncertainty in Electricity Smart Meter Data by Boosting Additive Quantile Regression
Taieb, Souhaib Ben
2016-03-02
Smart electricity meters are currently deployed in millions of households to collect detailed individual electricity consumption data. Compared with traditional electricity data based on aggregated consumption, smart meter data are much more volatile and less predictable. There is a need within the energy industry for probabilistic forecasts of household electricity consumption to quantify the uncertainty of future electricity demand in order to undertake appropriate planning of generation and distribution. We propose to estimate an additive quantile regression model for a set of quantiles of the future distribution using a boosting procedure. By doing so, we can benefit from flexible and interpretable models, which include an automatic variable selection. We compare our approach with three benchmark methods on both aggregated and disaggregated scales using a smart meter data set collected from 3639 households in Ireland at 30-min intervals over a period of 1.5 years. The empirical results demonstrate that our approach based on quantile regression provides better forecast accuracy for disaggregated demand, while the traditional approach based on a normality assumption (possibly after an appropriate Box-Cox transformation) is a better approximation for aggregated demand. These results are particularly useful since more energy data will become available at the disaggregated level in the future.
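The building block of such quantile forecasts is the pinball (quantile) loss, whose minimiser is the target quantile of the data. A minimal illustration with invented demand values; the additive model and boosting machinery themselves are omitted.

```python
def pinball_loss(y, q_hat, tau):
    """Pinball loss: under-predictions of the tau-quantile cost tau per unit,
    over-predictions cost (1 - tau) per unit."""
    return sum(tau * (yi - q_hat) if yi >= q_hat else (1 - tau) * (q_hat - yi)
               for yi in y)

# Hypothetical volatile household demand values (kWh); minimising the
# tau = 0.85 pinball loss over candidates picks an upper quantile, not the mean
y = [1, 2, 3, 4, 5, 6, 7, 8, 9, 20]
best = min(y, key=lambda c: pinball_loss(y, c, tau=0.85))
print(best)  # 9
```

Boosting an additive quantile regression model amounts to greedily fitting base learners that reduce exactly this loss for each chosen tau, yielding one fitted curve per quantile.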
Responding to the Challenge of True Uncertainty
DEFF Research Database (Denmark)
Hallin, Carina Antonia; Andersen, Torben Juul
We construe a conceptual framework for responding effectively to true uncertainty in the business environment. We drill down to the essential micro-foundational capabilities - sensing and seizing of dynamic capabilities - and link them to classical strategic issue management theory with suggestions ... on aggregation of stakeholder sensing and predictions of emergent strategic issues can positively influence the two capabilities and help the firm adapt in the face of uncertainty and unpredictability. Robust measures predicating performance based on information from key stakeholders involved in the firm’s core...
Uncertainty vs. Information (Invited)
Nearing, Grey
2017-04-01
Information theory is the branch of logic that describes how rational epistemic states evolve in the presence of empirical data (Knuth, 2005), and any logic of science is incomplete without such a theory. Developing a formal philosophy of science that recognizes this fact results in essentially trivial solutions to several longstanding problems that are generally considered intractable, including: • Alleviating the need for any likelihood function or error model. • Derivation of purely logical falsification criteria for hypothesis testing. • Specification of a general quantitative method for process-level model diagnostics. More generally, I make the following arguments: 1. Model evaluation should not proceed by quantifying and/or reducing error or uncertainty, and instead should be approached as a problem of ensuring that our models contain as much information as our experimental data. I propose that the latter is the only question a scientist actually has the ability to ask. 2. Instead of building geophysical models as solutions to differential equations that represent conservation laws, we should build models as maximum entropy distributions constrained by conservation symmetries. This will allow us to derive predictive probabilities directly from first principles. Knuth, K. H. (2005) 'Lattice duality: The origin of probability and entropy', Neurocomputing, 67, pp. 245-274.
Uncertainty and sampling issues in tank characterization
Energy Technology Data Exchange (ETDEWEB)
Liebetrau, A.M.; Pulsipher, B.A.; Kashporenko, D.M. [and others]
1997-06-01
A defensible characterization strategy must recognize that uncertainties are inherent in any measurement or estimate of interest and must employ statistical methods for quantifying and managing those uncertainties. Estimates of risk and therefore key decisions must incorporate knowledge about uncertainty. This report focuses on statistical methods that should be employed to ensure confident decision making and appropriate management of uncertainty. Sampling is a major source of uncertainty that deserves special consideration in the tank characterization strategy. The question of whether sampling will ever provide the reliable information needed to resolve safety issues is explored. The issue of sample representativeness must be resolved before sample information is reliable. Representativeness is a relative term but can be defined in terms of bias and precision. Currently, precision can be quantified and managed through an effective sampling and statistical analysis program. Quantifying bias is more difficult and is not being addressed under the current sampling strategies. Bias could be bounded by (1) employing new sampling methods that can obtain samples from other areas in the tanks, (2) putting in new risers on some worst case tanks and comparing the results from existing risers with new risers, or (3) sampling tanks through risers under which no disturbance or activity has previously occurred. With some bound on bias and estimates of precision, various sampling strategies could be determined and shown to be either cost-effective or infeasible.
Quantification of Uncertainty in Thermal Building Simulation
DEFF Research Database (Denmark)
Brohus, Henrik; Haghighat, F.; Frier, Christian
In order to quantify uncertainty in thermal building simulation stochastic modelling is applied on a building model. An application of stochastic differential equations is presented in Part 1 comprising a general heat balance for an arbitrary number of loads and zones in a building to determine...
Connected Car: Quantified Self becomes Quantified Car
Directory of Open Access Journals (Sweden)
Melanie Swan
2015-02-01
Full Text Available The automotive industry could be facing a situation of profound change and opportunity in the coming decades. There are a number of influencing factors such as increasing urban and aging populations, self-driving cars, 3D parts printing, energy innovation, and new models of transportation service delivery (Zipcar, Uber). The connected car means that vehicles are now part of the connected world, continuously Internet-connected, generating and transmitting data, which on the one hand can be helpfully integrated into applications, like real-time traffic alerts broadcast to smartwatches, but also raises security and privacy concerns. This paper explores the automotive connected world, and describes five killer QS (Quantified Self)-auto sensor applications that link quantified-self sensors (sensors that measure the personal biometrics of individuals, like heart rate) and automotive sensors (sensors that measure driver and passenger biometrics or quantitative automotive performance metrics, like speed and braking activity). The applications are fatigue detection, real-time assistance for parking and accidents, anger management and stress reduction, keyless authentication and digital identity verification, and DIY diagnostics. These kinds of applications help to demonstrate the benefit of connected world data streams in the automotive industry and beyond where, more fundamentally for human progress, the automation of both physical and now cognitive tasks is underway.
Liffen, C. L.; Hunter, M.
1980-01-01
Described is a school project to investigate aggregations in flatworms which may be influenced by light intensity, temperature, and some form of chemical stimulus released by already aggregating flatworms. Such investigations could be adopted to suit many educational levels of science laboratory activities. (DS)
Platelet activation and aggregation
DEFF Research Database (Denmark)
Jensen, Maria Sander; Larsen, O H; Christiansen, Kirsten
2013-01-01
This study introduces a new laboratory model of whole blood platelet aggregation stimulated by endogenously generated thrombin, and explores this aspect in haemophilia A in which impaired thrombin generation is a major hallmark. The method was established to measure platelet aggregation initiated...
Methodology for characterizing modeling and discretization uncertainties in computational simulation
Energy Technology Data Exchange (ETDEWEB)
ALVIN,KENNETH F.; OBERKAMPF,WILLIAM L.; RUTHERFORD,BRIAN M.; DIEGERT,KATHLEEN V.
2000-03-01
This research effort focuses on methodology for quantifying the effects of model uncertainty and discretization error on computational modeling and simulation. The work is directed towards developing methodologies which treat model form assumptions within an overall framework for uncertainty quantification, for the purpose of developing estimates of total prediction uncertainty. The present effort consists of work in three areas: framework development for sources of uncertainty and error in the modeling and simulation process which impact model structure; model uncertainty assessment and propagation through Bayesian inference methods; and discretization error estimation within the context of non-deterministic analysis.
Aggregates from mineral wastes
Directory of Open Access Journals (Sweden)
Baic Ireneusz
2016-01-01
Full Text Available The growing demand for natural aggregates and the need to limit costs, including the cost of transportation from remote deposits, have increased interest in aggregates from mineral wastes as well as in technologies for their production and recovery. The paper addresses the group of aggregates other than natural aggregates. A common name is proposed for such material: “alternative aggregates”. The name seems fully justified by the origin and role of these raw materials, in comparison to natural aggregates based on gravel and sand as well as crushed stones. The paper presents the characteristics of the market and the basic applications of aggregates produced from mineral wastes generated in the mining, power and metallurgical industries, as well as material from demolished objects.
Marine Synechococcus Aggregation
Neuer, S.; Deng, W.; Cruz, B. N.; Monks, L.
2016-02-01
Cyanobacteria are considered to play an important role in the oceanic biological carbon pump, especially in oligotrophic regions. But as single cells are too small to sink, their carbon export has to be mediated by aggregate formation and possible consumption by zooplankton producing sinking fecal pellets. Here we report results on the aggregation of the ubiquitous marine pico-cyanobacterium Synechococcus as a model organism. We first investigated the mechanism behind such aggregation by studying the potential role of transparent exopolymeric particles (TEP) and the effects of nutrient (nitrogen or phosphorus) limitation on the TEP production and aggregate formation of these pico-cyanobacteria. We further studied the aggregation and subsequent settling in roller tanks and investigated the effects of the clays kaolinite and bentonite in a series of concentrations. Our results show that, despite their lowered growth rates, Synechococcus in nutrient-limited cultures had larger cell-normalized TEP production, formed a greater volume of aggregates, and reached higher settling velocities than replete cultures. In addition, we found that despite their small size and lack of natural ballasting minerals, Synechococcus cells could still form aggregates and sink at measurable velocities in seawater. Clay minerals increased the number and reduced the size of aggregates, and their ballasting effects increased the sinking velocity and carbon export potential of aggregates. In comparison with Synechococcus, we will also present results on the aggregation of the pico-cyanobacterium Prochlorococcus in roller tanks. These results contribute to our understanding of the physiology of marine Synechococcus as well as their role in the ecology and biogeochemistry of oligotrophic oceans.
An Ontology for Uncertainty in Climate Change Projections
King, A. W.
2011-12-01
Paraphrasing Albert Einstein's aphorism about scientific quantification: not all uncertainty that counts can be counted, and not all uncertainty that can be counted counts. The meaning of the term "uncertainty" in climate change science and assessment is itself uncertain. Different disciplines and perspectives bring different nuances if not meanings of the term to the conversation. For many scientists, uncertainty is somehow associated with statistical dispersion and standard error. For many users of climate change information, uncertainty is more related to their confidence, or lack thereof, in climate models. These "uncertainties" may be related, but they are not identical, and there is considerable room for confusion and misunderstanding. A knowledge framework, a system of concepts and vocabulary, for communicating uncertainty can add structure to the characterization and quantification of uncertainty and aid communication among scientists and users. I have developed an ontology for uncertainty in climate change projections derived largely from the report of the W3C Uncertainty Reasoning for the World Wide Web Incubator Group (URW3-XG) dealing with the problem of uncertainty representation and reasoning on the World Wide Web. I have adapted this ontology for uncertainty about information to uncertainty about climate change. Elements of the ontology apply with little or no translation to the information of climate change projections, with climate change almost a use case. Other elements can be translated into language used in climate-change discussions; translating aleatory uncertainty in the UncertaintyNature class as irreducible uncertainty is an example. I have added classes for source of uncertainty (UncertaintySource) (different model physics, for example) and metrics of uncertainty (UncertaintyMetric), at least, in the case of the latter, for those instances of uncertainty that can be quantified (i.e., counted). The statistical standard deviation is a member
Assessing uncertainties in land cover projections.
Alexander, Peter; Prestele, Reinhard; Verburg, Peter H; Arneth, Almut; Baranzelli, Claudia; Batista E Silva, Filipe; Brown, Calum; Butler, Adam; Calvin, Katherine; Dendoncker, Nicolas; Doelman, Jonathan C; Dunford, Robert; Engström, Kerstin; Eitelberg, David; Fujimori, Shinichiro; Harrison, Paula A; Hasegawa, Tomoko; Havlik, Petr; Holzhauer, Sascha; Humpenöder, Florian; Jacobs-Crisioni, Chris; Jain, Atul K; Krisztin, Tamás; Kyle, Page; Lavalle, Carlo; Lenton, Tim; Liu, Jiayi; Meiyappan, Prasanth; Popp, Alexander; Powell, Tom; Sands, Ronald D; Schaldach, Rüdiger; Stehfest, Elke; Steinbuks, Jevgenijs; Tabeau, Andrzej; van Meijl, Hans; Wise, Marshall A; Rounsevell, Mark D A
2017-02-01
Understanding uncertainties in land cover projections is critical to investigating land-based climate mitigation policies, assessing the potential of climate adaptation strategies and quantifying the impacts of land cover change on the climate system. Here, we identify and quantify uncertainties in global and European land cover projections over a diverse range of model types and scenarios, extending the analysis beyond the agro-economic models included in previous comparisons. The results from 75 simulations over 18 models are analysed and show a large range in land cover area projections, with the highest variability occurring in future cropland areas. We demonstrate systematic differences in land cover areas associated with the characteristics of the modelling approach, which is at least as great as the differences attributed to the scenario variations. The results lead us to conclude that a higher degree of uncertainty exists in land use projections than currently included in climate or earth system projections. To account for land use uncertainty, it is recommended to use a diverse set of models and approaches when assessing the potential impacts of land cover change on future climate. Additionally, further work is needed to better understand the assumptions driving land use model results and reveal the causes of uncertainty in more depth, to help reduce model uncertainty and improve the projections of land cover. © 2016 John Wiley & Sons Ltd.
Nanoparticles: Uncertainty Risk Analysis
DEFF Research Database (Denmark)
Grieger, Khara Deanne; Hansen, Steffen Foss; Baun, Anders
2012-01-01
Scientific uncertainty plays a major role in assessing the potential environmental risks of nanoparticles. Moreover, there is uncertainty within fundamental data and information regarding the potential environmental and health risks of nanoparticles, hampering risk assessments based on standard a...
Directory of Open Access Journals (Sweden)
Sharma M
2005-01-01
Full Text Available Protein aggregate myopathies (PAM) are an emerging group of muscle diseases characterized by structural abnormalities. Protein aggregate myopathies are marked by the aggregation of intrinsic proteins within muscle fibers and fall into four major groups or conditions: (1) desmin-related myopathies (DRM), which include desminopathies, α-B crystallinopathies and selenoproteinopathies caused by mutations in the desmin, α-B crystallin and selenoprotein N1 genes; (2) hereditary inclusion body myopathies, several of which have been linked to different chromosomal gene loci, but with as yet unidentified protein products; (3) actinopathies, marked by mutations in the sarcomeric ACTA1 gene; and (4) myosinopathy, marked by a mutation in the MYH-7 gene. While PAM forms 1 and 2 are probably based on impaired extralysosomal protein degradation, resulting in the accumulation of numerous and diverse proteins (in familial types in addition to the respective mutant proteins), PAM forms 3 and 4 may represent anabolic or developmental defects, because sarcomeres outside of the actin and myosin aggregates are preserved and other proteins are scarce or absent in these actin or myosin aggregates, respectively. The pathogenetic principles governing protein aggregation within muscle fibers and subsequent structural abnormalities of sarcomeres are still largely unknown in both the putative catabolic and anabolic forms of PAM. The presence of inclusions and their protein composition in other congenital myopathies, such as reducing bodies, cylindrical spirals, tubular aggregates and others, await clarification. The hitherto described PAMs were first identified by immunohistochemistry of proteins and subsequently by molecular analysis of their genes.
Charged Dust Aggregate Interactions
Matthews, Lorin; Hyde, Truell
2015-11-01
A proper understanding of the behavior of dust particle aggregates immersed in a complex plasma first requires a knowledge of the basic properties of the system. Among the most important of these are the net electrostatic charge and higher multipole moments on the dust aggregate as well as the manner in which the aggregate interacts with the local electrostatic fields. The formation of elongated, fractal-like aggregates levitating in the sheath electric field of a weakly ionized RF generated plasma discharge has recently been observed experimentally. The resulting data has shown that as aggregates approach one another, they can both accelerate and rotate. At equilibrium, aggregates are observed to levitate with regular spacing, rotating about their long axis aligned parallel to the sheath electric field. Since gas drag tends to slow any such rotation, energy must be constantly fed into the system in order to sustain it. A numerical model designed to analyze this motion provides both the electrostatic charge and higher multipole moments of the aggregate while including the forces due to thermophoresis, neutral gas drag, and the ion wakefield. This model will be used to investigate the ambient conditions leading to the observed interactions. This research is funded by NSF Grant 1414523.
Uncertainty in simulating wheat yields under climate change : Letter
Asseng, S.; Ewert, F.; Rosenzweig, C.; Jones, J.W.; Supit, I.
2013-01-01
Projections of climate change impacts on crop yields are inherently uncertain [1]. Uncertainty is often quantified when projecting future greenhouse gas emissions and their influence on climate [2]. However, multi-model uncertainty analysis of crop responses to climate change is rare because systematic
Uncertainty-guided sampling to improve digital soil maps
Stumpf, Felix; Schmidt, Karsten; Goebes, Philipp; Behrens, Thorsten; Schönbrodt-stitt, Sarah; Wadoux, Alexandre; Xiang, Wei; Scholten, Thomas
2017-01-01
Digital soil mapping (DSM) products represent estimates of spatially distributed soil properties. These estimations comprise an element of uncertainty that is not evenly distributed over the area covered by DSM. If we quantify the uncertainty spatially explicit, this information can be used to impro
Uncertainties in projecting climate-change impacts in marine ecosystems
DEFF Research Database (Denmark)
Payne, Mark; Barange, Manuel; Cheung, William W. L.
2016-01-01
Projections of the impacts of climate change on marine ecosystems are a key prerequisite for the planning of adaptation strategies, yet they are inevitably associated with uncertainty. Identifying, quantifying, and communicating this uncertainty is key to both evaluating the risk associated with ...
Aggregated Computational Toxicology Online Resource
U.S. Environmental Protection Agency — Aggregated Computational Toxicology Online Resource (ACToR) is EPA's online aggregator of all the public sources of chemical toxicity data. ACToR aggregates data...
Parton Distribution Function Uncertainties
Giele, Walter T.; Kosower, David A.; Giele, Walter T.; Keller, Stephane A.; Kosower, David A.
2001-01-01
We present parton distribution functions which include a quantitative estimate of their uncertainties. The parton distribution functions are optimized with respect to deep inelastic proton data, expressing the uncertainties as a density measure over the functional space of parton distribution functions. This leads to a convenient method of propagating the parton distribution function uncertainties to new observables, now expressing the uncertainty as a density in the prediction of the observable. New measurements can easily be included in the optimized sets as added weight functions to the density measure. With this optimized method, no compromises have to be made anywhere in the analysis with regard to the treatment of the uncertainties.
Aggregation of Environmental Model Data for Decision Support
Alpert, J. C.
2013-12-01
Weather forecasts and warnings must be prepared and then delivered so as to reach their intended audience in good time to enable effective decision-making. An effort to mitigate these difficulties was studied at a workshop, 'Sustaining National Meteorological Services - Strengthening WMO Regional and Global Centers', convened in June 2013 by the World Bank, WMO and the US National Weather Service (NWS). The skill and accuracy of atmospheric forecasts from deterministic models have increased, and there are now ensembles of such models that improve decisions to protect life, property and commerce. NWS production of numerical weather prediction products results in model output from global and high-resolution regional ensemble forecasts. Ensembles are constructed by changing the initial conditions to make a 'cloud' of forecasts that attempt to span the space of possible atmospheric realizations, which can quantify not only the most likely forecast but also the uncertainty. This has led to an unprecedented increase in data production and information content from higher resolution, multi-model output and secondary calculations. One difficulty is to obtain the needed subset of data required to estimate the probability of events and report the information. The calibration required to reliably estimate the probability of events, and the honing of threshold adjustments to reduce false alarms for decision makers, is also needed. To meet the future needs of the ever-broadening user community and address these issues on a national and international basis, the weather service implemented the NOAA Operational Model Archive and Distribution System (NOMADS). NOMADS provides real-time and retrospective format-independent access to climate, ocean and weather model data and delivers high-availability content services as part of NOAA's official real-time data dissemination at its new NCWCP web operations center. An important aspect of the server's abilities is to aggregate the matrix of
Institute of Scientific and Technical Information of China (English)
LI Dian-qing; ZHANG Sheng-kun
2004-01-01
The classical probability theory cannot effectively quantify the parameter uncertainty in probability of detection. Furthermore, the conventional data analytic method and expert judgment method fail to handle the problem of model uncertainty updating with the information from nondestructive inspection. To overcome these disadvantages, a Bayesian approach was proposed to quantify the parameter uncertainty in probability of detection. Furthermore, the formulae of the multiplication factors to measure the statistical uncertainties in the probability of detection following the Weibull distribution were derived. A Bayesian updating method was applied to compute the posterior probabilities of model weights and the posterior probability density functions of distribution parameters of probability of detection. A total probability model method was proposed to analyze the problem of multi-layered model uncertainty updating. This method was then applied to the problem of multi-layered corrosion model uncertainty updating for ship structures. The results indicate that the proposed method is very effective in analyzing the problem of multi-layered model uncertainty updating.
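The model-weight updating step at the heart of this kind of approach can be shown in miniature: given candidate probability-of-detection (POD) models and inspection data, Bayes' rule reweights the models. The detection probabilities, priors, and trial counts below are invented for illustration; the paper's Weibull-based formulation is not reproduced.

```python
from math import comb

# Two hypothetical candidate POD models with equal prior weights
models = {"M1": 0.7, "M2": 0.9}   # assumed per-trial detection probabilities
prior = {"M1": 0.5, "M2": 0.5}

# Inspection data: 18 detections in 20 trials (binomial likelihood)
n, k = 20, 18
likelihood = {m: comb(n, k) * p**k * (1 - p)**(n - k) for m, p in models.items()}
evidence = sum(likelihood[m] * prior[m] for m in models)
posterior = {m: likelihood[m] * prior[m] / evidence for m in models}
print(posterior)
```

The data strongly favour the higher-POD model, so its posterior weight dominates; a total probability model then averages predictions over these updated weights.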
Energy Technology Data Exchange (ETDEWEB)
Andres, T.H
2002-05-01
This guide applies to the estimation of uncertainty in quantities calculated by scientific, analysis and design computer programs that fall within the scope of AECL's software quality assurance (SQA) manual. The guide weaves together rational approaches from the SQA manual and three other diverse sources: (a) the CSAU (Code Scaling, Applicability, and Uncertainty) evaluation methodology; (b) the ISO Guide for the Expression of Uncertainty in Measurement; and (c) the SVA (Systems Variability Analysis) method of risk analysis. This report describes the manner by which random and systematic uncertainties in calculated quantities can be estimated and expressed. Random uncertainty in model output can be attributed to uncertainties of inputs. The propagation of these uncertainties through a computer model can be represented in a variety of ways, including exact calculations, series approximations and Monte Carlo methods. Systematic uncertainties emerge from the development of the computer model itself, through simplifications and conservatisms, for example. These must be estimated and combined with random uncertainties to determine the combined uncertainty in a model output. This report also addresses the method by which uncertainties should be employed in code validation, in order to determine whether experiments and simulations agree, and whether or not a code satisfies the required tolerance for its application. (author)
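The contrast drawn above between series approximations and Monte Carlo methods can be sketched on a toy model (the model function and input distributions are invented): a first-order Taylor expansion propagates input standard deviations analytically, and plain Monte Carlo sampling checks it.

```python
import math
import random

def model(x, y):
    """Toy computer-model output with a mild nonlinearity."""
    return x * math.exp(0.1 * y)

# Independent Gaussian inputs (means and standard deviations are illustrative)
mx, sx, my, sy = 2.0, 0.1, 1.0, 0.2

# Monte Carlo propagation
random.seed(1)
mc = [model(random.gauss(mx, sx), random.gauss(my, sy)) for _ in range(20000)]
mc_mean = sum(mc) / len(mc)
mc_std = (sum((v - mc_mean) ** 2 for v in mc) / len(mc)) ** 0.5

# First-order series approximation: var(f) ~= (df/dx * sx)^2 + (df/dy * sy)^2
dfdx = math.exp(0.1 * my)
dfdy = 0.1 * mx * math.exp(0.1 * my)
series_std = math.hypot(dfdx * sx, dfdy * sy)
print(f"MC std {mc_std:.4f} vs first-order series std {series_std:.4f}")
```

For a nearly linear model the two agree closely; strong nonlinearity or non-Gaussian inputs would pull the series estimate away from the sampled truth.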
Uncertainty and Cognitive Control
Directory of Open Access Journals (Sweden)
Faisal eMushtaq
2011-10-01
A growing body of neuroimaging, behavioural and computational research has investigated the topic of outcome uncertainty in decision-making. Although evidence to date indicates that humans are very effective in learning to adapt to uncertain situations, the nature of the specific cognitive processes involved in the adaptation to uncertainty is still a matter of debate. In this article, we review evidence suggesting that cognitive control processes are at the heart of uncertainty in decision-making contexts. Available evidence suggests that: (1) there is a strong conceptual overlap between the constructs of uncertainty and cognitive control; (2) there is a remarkable overlap between the neural networks associated with uncertainty and the brain networks subserving cognitive control; (3) the perception and estimation of uncertainty might play a key role in monitoring processes and the evaluation of the need for control; (4) potential interactions between uncertainty and cognitive control might play a significant role in several affective disorders.
Recycled aggregates concrete: aggregate and mix properties
Directory of Open Access Journals (Sweden)
González-Fonteboa, B.
2005-09-01
This study of structural concrete made with recycled concrete aggregate focuses on two issues: 1. The characterization of such aggregate on the Spanish market. This involved conducting standard tests to determine density, water absorption, grading, shape, flakiness and hardness. The results obtained show that, despite the considerable differences with respect to density and water absorption between these and natural aggregates, on the whole recycled aggregate is apt for use in concrete production. 2. Testing to determine the values of basic concrete properties: mix design parameters were established for structural concrete in non-aggressive environments. These parameters were used to produce conventional concrete, and then adjusted to manufacture recycled concrete aggregate (RCA) concrete, in which 50% of the coarse aggregate was replaced by the recycled material. Tests were conducted to determine the physical (density of the fresh and hardened material, water absorption) and mechanical (compressive strength, splitting tensile strength and modulus of elasticity) properties. The results showed that, from the standpoint of its physical and mechanical properties, concrete in which RCA accounted for 50% of the coarse aggregate compared favourably to conventional concrete.
This paper addresses the study of structural concretes made with recycled aggregates obtained from concrete, focusing on two aspects: 1. Characterization of such aggregates, sourced from the Spanish market. To this end, tests of density, absorption, grading, shape coefficient, flakiness index and hardness were carried out. The results obtained show that, although there are notable differences with respect to natural aggregates (above all in density and absorption), the characteristics of the aggregates make concrete production feasible. 2. Tests on basic properties of the concretes: mix design parameters were established...
Blade tip timing (BTT) uncertainties
Russhard, Pete
2016-06-01
Blade Tip Timing (BTT) is an alternative technique for characterising blade vibration in which non-contact timing probes (e.g. capacitance or optical probes), typically mounted on the engine casing (figure 1), are used to measure the time at which a blade passes each probe. This time is compared with the time at which the blade would have passed the probe if it had been undergoing no vibration. For a number of years the aerospace industry has been sponsoring research into Blade Tip Timing technologies that have been developed as tools to obtain rotor blade tip deflections. These have been successful in demonstrating the potential of the technology, but have rarely produced quantitative data along with a demonstration of a traceable value for measurement uncertainty. BTT technologies have been developed under a cloak of secrecy by the gas turbine OEMs because of the competitive advantage offered if the technique could be shown to work. BTT measurements are sensitive to many variables, and there is a need to quantify the measurement uncertainty of the complete technology and to define a set of guidelines as to how BTT should be applied to different vehicles. The data shown in figure 2 came from a US government-sponsored program that brought together four different tip timing systems in a gas turbine engine test. Comparisons showed that they were just capable of obtaining measurements within a +/-25% uncertainty band when compared to strain gauges, even when using the same input data sets.
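The core of the BTT measurement is converting a time-of-arrival difference into a tip displacement via the tangential tip speed. A minimal sketch, with invented rotor geometry and timing values:

```python
import math

def tip_deflection(t_measured, t_expected, rpm, blade_radius):
    """Convert a time-of-arrival difference into a tip displacement (m).

    The tip travels at the tangential speed omega * r, so a blade
    arriving early or late by dt has been displaced by v_tip * dt.
    """
    omega = rpm * 2.0 * math.pi / 60.0   # angular speed, rad/s
    v_tip = omega * blade_radius         # tangential tip speed, m/s
    return v_tip * (t_measured - t_expected)

# Hypothetical example: 3000 rpm rotor, 0.5 m tip radius, blade
# arrives 2 microseconds late at the casing probe.
d = tip_deflection(1.002e-3, 1.000e-3, rpm=3000, blade_radius=0.5)
print(f"tip deflection ~ {d * 1e3:.3f} mm")
```

Sub-millimetre deflections thus hinge on microsecond-level timing, which is one reason probe and trigger uncertainties dominate the error budget the abstract describes.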
Protein Colloidal Aggregation Project
Oliva-Buisson, Yvette J. (Compiler)
2014-01-01
To investigate the pathways and kinetics of protein aggregation to allow accurate predictive modeling of the process and evaluation of potential inhibitors to prevalent diseases including cataract formation, chronic traumatic encephalopathy, Alzheimer's Disease, Parkinson's Disease and others.
Propagation of Tau aggregates.
Goedert, Michel; Spillantini, Maria Grazia
2017-05-30
Since 2009, evidence has accumulated to suggest that Tau aggregates form first in a small number of brain cells, from where they propagate to other regions, resulting in neurodegeneration and disease. Propagation of Tau aggregates is often called prion-like, which refers to the capacity of an assembled protein to induce the same abnormal conformation in a protein of the same kind, initiating a self-amplifying cascade. In addition, prion-like encompasses the release of protein aggregates from brain cells and their uptake by neighbouring cells. In mice, the intracerebral injection of Tau inclusions induced the ordered assembly of monomeric Tau, followed by its spreading to distant brain regions. Short fibrils constituted the major species of seed-competent Tau. The existence of several human Tauopathies with distinct fibril morphologies has led to the suggestion that different molecular conformers (or strains) of aggregated Tau exist.
Siegel, Irving H.
The arithmetic processes of aggregation and averaging are basic to quantitative investigations of employment, unemployment, and related concepts. In explaining these concepts, this report stresses need for accuracy and consistency in measurements, and describes tools for analyzing alternative measures. (BH)
Uncertainty quantification for environmental models
Hill, Mary C.; Lu, Dan; Kavetski, Dmitri; Clark, Martyn P.; Ye, Ming
2012-01-01
Environmental models are used to evaluate the fate of fertilizers in agricultural settings (including soil denitrification), the degradation of hydrocarbons at spill sites, and water supply for people and ecosystems in small to large basins and cities—to mention but a few applications of these models. They also play a role in understanding and diagnosing potential environmental impacts of global climate change. The models are typically mildly to extremely nonlinear. The persistent demand for enhanced dynamics and resolution to improve model realism [17] means that lengthy individual model execution times will remain common, notwithstanding continued enhancements in computer power. In addition, high-dimensional parameter spaces are often defined, which increases the number of model runs required to quantify uncertainty [2]. Some environmental modeling projects have access to extensive funding and computational resources; many do not. The many recent studies of uncertainty quantification in environmental model predictions have focused on uncertainties related to data error and sparsity of data, expert judgment expressed mathematically through prior information, poorly known parameter values, and model structure (see, for example, [1,7,9,10,13,18]). Approaches for quantifying uncertainty include frequentist (potentially with prior information [7,9]), Bayesian [13,18,19], and likelihood-based. A few of the numerous methods, including some sensitivity and inverse methods with consequences for understanding and quantifying uncertainty, are as follows: Bayesian hierarchical modeling and Bayesian model averaging; single-objective optimization with error-based weighting [7] and multi-objective optimization [3]; methods based on local derivatives [2,7,10]; screening methods like OAT (one at a time) and the method of Morris [14]; FAST (Fourier amplitude sensitivity testing) [14]; the Sobol' method [14]; randomized maximum likelihood [10]; Markov chain Monte Carlo (MCMC) [10]...
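Of the screening methods listed, OAT (one at a time) perturbation is the simplest to sketch: each parameter is nudged in turn while the others are held at their base values. The two-parameter model below is hypothetical:

```python
def oat_sensitivity(model, base, step=0.01):
    """One-at-a-time screening: perturb each parameter by a relative
    step and return the approximate local derivative of the output."""
    y0 = model(base)
    effects = {}
    for name, value in base.items():
        perturbed = dict(base)
        perturbed[name] = value * (1.0 + step)
        effects[name] = (model(perturbed) - y0) / (value * step)
    return effects

# Hypothetical two-parameter model (say, a degradation rate times
# a squared contact area).
def model(p):
    return p["rate"] * p["area"] ** 2

effects = oat_sensitivity(model, {"rate": 2.0, "area": 3.0})
print(effects)
```

OAT costs only one extra model run per parameter, which is why it suits the long-execution-time models the abstract describes, at the price of missing parameter interactions that Morris or Sobol' methods would reveal.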
Quantifying linguistic coordination
DEFF Research Database (Denmark)
Fusaroli, Riccardo; Tylén, Kristian
Language has been defined as a social coordination device (Clark 1996) enabling innovative modalities of joint action. However, the exact coordinative dynamics over time and their effects are still insufficiently investigated and quantified. Relying on the data produced in a collective decision-making task, we employ nominal recurrence analysis (Orsucci et al 2005, Dale et al 2011) on the decision-making conversations between the participants. We report strong correlations between various indexes of recurrence and collective performance. We argue this method allows us to quantify the qualities...
Quantifying synergistic mutual information
Griffith, Virgil
2012-01-01
Quantifying cooperation among random variables in predicting a single target random variable is an important problem in many biological systems with 10s to 1000s of co-dependent variables. We review the prior literature of information theoretical measures of synergy and introduce a novel synergy measure, entitled *synergistic mutual information* and compare it against the three existing measures of cooperation. We apply all four measures against a suite of binary circuits to demonstrate our measure alone quantifies the intuitive concept of synergy across all examples.
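The notion that variables can carry information synergistically, so that neither input alone predicts the target but both together determine it, is classically illustrated by XOR. The sketch below uses plain mutual information to exhibit the effect; it is not the paper's synergy measure itself:

```python
import math
from collections import Counter
from itertools import product

def mutual_information(pairs):
    """I(A;B) in bits from a list of equally likely (a, b) outcomes."""
    n = len(pairs)
    joint = Counter(pairs)
    pa = Counter(a for a, _ in pairs)
    pb = Counter(b for _, b in pairs)
    return sum((c / n) * math.log2((c / n) / ((pa[a] / n) * (pb[b] / n)))
               for (a, b), c in joint.items())

# Y = XOR of two fair bits: each input alone tells nothing about Y,
# but the pair determines it completely.
outcomes = [(x1, x2, x1 ^ x2) for x1, x2 in product([0, 1], repeat=2)]
i_x1 = mutual_information([(x1, y) for x1, _, y in outcomes])
i_x2 = mutual_information([(x2, y) for _, x2, y in outcomes])
i_joint = mutual_information([((x1, x2), y) for x1, x2, y in outcomes])
print(i_x1, i_x2, i_joint)  # individual MIs are 0; joint MI is 1 bit
```

The whole bit of information about Y here is synergistic: it appears only when the inputs are considered jointly, which is the intuition any synergy measure must capture.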
Is Time Predictability Quantifiable?
DEFF Research Database (Denmark)
Schoeberl, Martin
2012-01-01
Computer architects and researchers in the real-time domain start to investigate processors and architectures optimized for real-time systems. Optimized for real-time systems means time predictable, i.e., architectures where it is possible to statically derive a tight bound of the worst-case execution time. To compare different approaches we would like to quantify time predictability. That means we need to measure time predictability. In this paper we discuss the different approaches for these measurements and conclude that time predictability is practically not quantifiable. We can only compare the worst-case execution time bounds of different architectures.
Cell aggregation and sedimentation.
Davis, R H
1995-01-01
The aggregation of cells into clumps or flocs has been exploited for decades in such applications as biological wastewater treatment, beer brewing, antibiotic fermentation, and enhanced sedimentation to aid in cell recovery or retention. More recent research has included the use of cell aggregation and sedimentation to selectively separate subpopulations of cells. Potential biotechnological applications include overcoming contamination, maintaining plasmid-bearing cells in continuous fermentors, and selectively removing nonviable hybridoma cells from perfusion cultures.
Uncertainty evaluation of the thermal expansion of simulated fuel
Energy Technology Data Exchange (ETDEWEB)
Park, Chang Je; Kang, Kweon Ho; Song, Kee Chan [Korea Atomic Energy Research Institute, 150 Dukjin-dong, Yuseung-gu, Daejon 305-353 (Korea, Republic of)
2006-08-15
Thermal expansions of simulated fuel (SS1) were measured using a dilatometer (DIL402C) from room temperature to 1900 K. The main procedure of the uncertainty evaluation followed the strategy used for UO{sub 2} fuel. There exist uncertainties in the measurement, which should be quantified based on statistics. Referring to the ISO (International Organization for Standardization) guide, the uncertainties of the thermal expansion are quantified in three parts: the initial length, the length variation, and the system calibration factor. Each part is divided into two types. The type A uncertainty is derived from the statistics of repeated measurements, and the type B uncertainty comes from non-statistical sources, including calibration and test reports. For the uncertainty evaluation, the digital calipers had been calibrated by KOLAS (Korea Laboratory Accreditation Scheme) to obtain not only the calibration values but also the type B uncertainty. The whole system, the dilatometer (DIL402C), is composed of many complex sub-systems, and in practice it is difficult to consider all the uncertainties of the sub-systems. Thus, a calibration of the system was performed with a standard material (Al{sub 2}O{sub 3}), which is provided by NETZSCH. From the above standard uncertainties, the combined standard uncertainty was calculated using the law of propagation of uncertainty. Finally, the expanded uncertainty was calculated using the effective degrees of freedom and the t-distribution for a given confidence level. The uncertainty of the thermal expansion of the simulated fuel was also compared with that of UO{sub 2} fuel. (author)
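The final steps described, root-sum-of-squares combination via the law of propagation of uncertainty followed by expansion with a t-distribution coverage factor, can be sketched as follows. The component values, unit sensitivity coefficients, and effective degrees of freedom below are invented for illustration; a real evaluation would follow the ISO GUM in full:

```python
import math

# Hypothetical standard uncertainty components for one length quantity
# (type A from repeated measurements, type B from calibration reports).
u_initial_length = 0.8e-6   # m
u_length_change  = 1.5e-6   # m
u_calibration    = 1.2e-6   # m (system calibration against an Al2O3 standard)

# Law of propagation of uncertainty for uncorrelated inputs with unit
# sensitivity coefficients: root-sum-of-squares combination.
u_combined = math.sqrt(u_initial_length**2 + u_length_change**2
                       + u_calibration**2)

# Expanded uncertainty: multiply by a coverage factor taken from the
# t-distribution at the effective degrees of freedom. For roughly 10
# effective degrees of freedom and 95 % confidence, t is about 2.23.
k = 2.23
U = k * u_combined
print(f"combined u = {u_combined*1e6:.2f} um, expanded U (95 %) = {U*1e6:.2f} um")
```

With many effective degrees of freedom the coverage factor approaches the normal-distribution value of about 1.96, which is why small-sample type A components tend to inflate the expanded uncertainty.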
Uncertainties in life cycle assessment of waste management systems
DEFF Research Database (Denmark)
Clavreul, Julie; Christensen, Thomas Højlund
2011-01-01
Life cycle assessment has been used to assess environmental performances of waste management systems in many studies. The uncertainties inherent to its results are often pointed out but not always quantified, which should be the case to ensure a good decision-making process. This paper proposes a method to assess all parameter uncertainties and quantify the overall uncertainty of the assessment. The method is exemplified in a case study, where the goal is to determine if anaerobic digestion of organic waste is more beneficial than incineration in Denmark, considering only the impact on global...
Validation, Uncertainty, and Quantitative Reliability at Confidence (QRC)
Energy Technology Data Exchange (ETDEWEB)
Logan, R W; Nitta, C K
2002-12-06
This paper represents a summary of our methodology for Verification and Validation and Uncertainty Quantification. A graded scale methodology is presented and related to other concepts in the literature. We describe the critical nature of quantified Verification and Validation with Uncertainty Quantification at specified Confidence levels in evaluating system certification status. Only after Verification and Validation has contributed to Uncertainty Quantification at specified confidence can rational tradeoffs of various scenarios be made. Verification and Validation methods for various scenarios and issues are applied in assessments of Quantified Reliability at Confidence and we summarize briefly how this can lead to a Value Engineering methodology for investment strategy.
A new approach to quantification of mAb aggregates using peptide affinity probes.
Cheung, Crystal S F; Anderson, Kyle W; Patel, Pooja M; Cade, Keale L; Phinney, Karen W; Turko, Illarion V
2017-02-10
Using mAbs as therapeutic molecules is complicated by the propensity of mAbs to aggregate at elevated concentrations, which can lead to a variety of adverse events in treatment. Here, we describe a proof of concept for a new methodology to detect and quantify mAb aggregation. Assay development included using an aggregated mAb as bait for screening a phage display peptide library and identifying random-sequence peptides that recognize mAb aggregates. Once identified, the selected peptides can be used to develop quantitative methods for assessing mAb aggregation. Results indicate that a peptide binding method coupled with mass spectrometric detection of the bound peptide can quantify mAb aggregation and may be useful for monitoring the aggregation propensity of therapeutic protein candidates.
DEFF Research Database (Denmark)
Vlaeminck, S.; Terada, Akihiko; Smets, Barth F.
2010-01-01
In partial nitritation/anammox systems, aerobic and anoxic ammonium-oxidizing bacteria (AerAOB and AnAOB) remove ammonium from wastewater. In this process, large granular microbial aggregates enhance the performance, but little is known about this type of granulation so far. In this study, aggregates from three reactors (A, B, C) with different inoculation and operation were studied. The test objectives were to quantify the AerAOB and AnAOB abundance and the activity balance for the different aggregate sizes, and to relate aggregate morphology, size distribution, and architecture putatively to the inoculation and operation of the reactors. Fluorescent in-situ hybridization (FISH) was applied on aggregate sections to quantify AerAOB and AnAOB, as well as to visualize the aggregate architecture. The activity balance of the aggregates was calculated as the nitrite accumulation rate ratio (NARR)...
Estimating discharge measurement uncertainty using the interpolated variance estimator
Cohn, T.; Kiang, J.; Mason, R.
2012-01-01
Methods for quantifying the uncertainty in discharge measurements typically identify various sources of uncertainty and then estimate the uncertainty from each of these sources by applying the results of empirical or laboratory studies. If actual measurement conditions are not consistent with those encountered in the empirical or laboratory studies, these methods may give poor estimates of discharge uncertainty. This paper presents an alternative method for estimating discharge measurement uncertainty that uses statistical techniques and at-site observations. This Interpolated Variance Estimator (IVE) estimates uncertainty based on the data collected during the streamflow measurement and therefore reflects the conditions encountered at the site. The IVE has the additional advantage of capturing all sources of random uncertainty in the velocity and depth measurements. It can be applied to velocity-area discharge measurements that use a velocity meter to measure point velocities at multiple vertical sections in a channel cross section.
Uncertainties in projecting climate-change impacts in marine ecosystems
DEFF Research Database (Denmark)
Payne, Mark; Barange, Manuel; Cheung, William W. L.;
2016-01-01
Projections of the impacts of climate change on marine ecosystems are a key prerequisite for the planning of adaptation strategies, yet they are inevitably associated with uncertainty. Identifying, quantifying, and communicating this uncertainty is key to both evaluating the risk associated with a projection and building confidence in its robustness. We review how uncertainties in such projections are handled in marine science. We employ an approach developed in climate modelling by breaking uncertainty down into (i) structural (model) uncertainty, (ii) initialization and internal variability uncertainty... uncertainty is rarely treated explicitly, and reducing this type of uncertainty may deliver gains on the seasonal-to-decadal time-scale. We conclude that all parts of marine science could benefit from a greater exchange of ideas, particularly concerning such a universal problem as the treatment...
Uncertainties in fission-product decay-heat calculations
Energy Technology Data Exchange (ETDEWEB)
Oyamatsu, K.; Ohta, H.; Miyazono, T.; Tasaka, K. [Nagoya Univ. (Japan)
1997-03-01
The present precision of the aggregate decay heat calculations is studied quantitatively for 50 fissioning systems. In this evaluation, nuclear data and their uncertainty data are taken from ENDF/B-VI nuclear data library and those which are not available in this library are supplemented by a theoretical consideration. An approximate method is proposed to simplify the evaluation of the uncertainties in the aggregate decay heat calculations so that we can point out easily nuclei which cause large uncertainties in the calculated decay heat values. In this paper, we attempt to clarify the justification of the approximation which was not very clear at the early stage of the study. We find that the aggregate decay heat uncertainties for minor actinides such as Am and Cm isotopes are 3-5 times as large as those for {sup 235}U and {sup 239}Pu. The recommended values by Atomic Energy Society of Japan (AESJ) were given for 3 major fissioning systems, {sup 235}U(t), {sup 239}Pu(t) and {sup 238}U(f). The present results are consistent with the AESJ values for these systems although the two evaluations used different nuclear data libraries and approximations. Therefore, the present results can also be considered to supplement the uncertainty values for the remaining 17 fissioning systems in JNDC2, which were not treated in the AESJ evaluation. Furthermore, we attempt to list nuclear data which cause large uncertainties in decay heat calculations for the future revision of decay and yield data libraries. (author)
Heisenberg's uncertainty principle
Busch, Paul; Heinonen, Teiko; Lahti, Pekka
2007-01-01
Heisenberg's uncertainty principle is usually taken to express a limitation of operational possibilities imposed by quantum mechanics. Here we demonstrate that the full content of this principle also includes its positive role as a condition ensuring that mutually exclusive experimental options can be reconciled if an appropriate trade-off is accepted. The uncertainty principle is shown to appear in three manifestations, in the form of uncertainty relations: for the widths of the position and...
Optical dynamics of molecular aggregates
de Boer, Steven
2006-01-01
The subject of this thesis is the spectroscopy and dynamics of molecular aggregates in amorphous matrices. Aggregates of three different molecules were studied. The molecules are depicted in Fig. (1.1). Supersaturated solutions of these molecules show aggregate formation. Aggregation is a process si
Observing Convective Aggregation
Holloway, Christopher E.; Wing, Allison A.; Bony, Sandrine; Muller, Caroline; Masunaga, Hirohiko; L'Ecuyer, Tristan S.; Turner, David D.; Zuidema, Paquita
2017-06-01
Convective self-aggregation, the spontaneous organization of initially scattered convection into isolated convective clusters despite spatially homogeneous boundary conditions and forcing, was first recognized and studied in idealized numerical simulations. While there is a rich history of observational work on convective clustering and organization, there have been only a few studies that have analyzed observations to look specifically for processes related to self-aggregation in models. Here we review observational work in both of these categories and motivate the need for more of this work. We acknowledge that self-aggregation may appear to be far removed from observed convective organization in terms of time scales, initial conditions, initiation processes, and mean state extremes, but we argue that these differences vary greatly across the diverse range of model simulations in the literature and that these comparisons are already offering important insights into real tropical phenomena. Some preliminary new findings are presented, including results showing that a self-aggregation simulation with square geometry has too broad a distribution of humidity and is too dry in the driest regions when compared with radiosonde records from Nauru, while an elongated channel simulation has realistic representations of atmospheric humidity and its variability. We discuss recent work increasing our understanding of how organized convection and climate change may interact, and how model discrepancies related to this question are prompting interest in observational comparisons. We also propose possible future directions for observational work related to convective aggregation, including novel satellite approaches and a ground-based observational network.
Blokpoel, S.B.; Reymen, I.M.M.J.; Dewulf, G.P.M.R.
2005-01-01
Real estate development is all about assessing and controlling risks and uncertainties. Risk management implies making decisions based on quantified risks to execute risk-response measures. Uncertainties, on the other hand, cannot be quantified and are therefore unpredictable. In the literature, much att...
Uncertainty in artificial intelligence
Kanal, LN
1986-01-01
How to deal with uncertainty is a subject of much controversy in Artificial Intelligence. This volume brings together a wide range of perspectives on uncertainty, many of the contributors being the principal proponents in the controversy. Some of the notable issues which emerge from these papers revolve around an interval-based calculus of uncertainty, the Dempster-Shafer Theory, and probability as the best numeric model for uncertainty. There remain strong dissenting opinions not only about probability but even about the utility of any numeric method in this context.
[Ethics, empiricism and uncertainty].
Porz, R; Zimmermann, H; Exadaktylos, A K
2011-01-01
Accidents can lead to difficult boundary situations. Such situations often take place in the emergency units. The medical team thus often and inevitably faces professional uncertainty in their decision-making. It is essential to communicate these uncertainties within the medical team, instead of downplaying or overriding existential hurdles in decision-making. Acknowledging uncertainties might lead to alert and prudent decisions. Thus uncertainty can have ethical value in treatment or withdrawal of treatment. It does not need to be covered in evidence-based arguments, especially as some singular situations of individual tragedies cannot be grasped in terms of evidence-based medicine. © Georg Thieme Verlag KG Stuttgart · New York.
Uncertainty in simulating wheat yields under climate change
DEFF Research Database (Denmark)
Asseng, A; Ewert, F; Rosenzweig, C
2013-01-01
Projections of climate change impacts on crop yields are inherently uncertain [1]. Uncertainty is often quantified when projecting future greenhouse gas emissions and their influence on climate [2]. However, multi-model uncertainty analysis of crop responses to climate change is rare because systematic... of environments, particularly if the input information is sufficient. However, simulated climate change impacts vary across models owing to differences in model structures and parameter values. A greater proportion of the uncertainty in climate change impact projections was due to variations among crop models than to variations among downscaled general circulation models. Uncertainties in simulated impacts increased with CO2 concentrations and associated warming. These impact uncertainties can be reduced by improving temperature and CO2 relationships in models and better quantified through use of multi...
SOIL AGGREGATE AND ITS RESPONSE TO LAND MANAGEMENT PRACTICES
Institute of Scientific and Technical Information of China (English)
Chaofu Wei; Ming Gao; Jingan Shao; Deti Xie; Genxing Pan
2006-01-01
This paper provides a broad review of existing studies on soil aggregates and their responses to land management practices. A soil aggregate is a structural unit: a group of primary soil particles that cohere to each other more strongly than to surrounding particles. The mechanism of soil particle aggregation may be expressed by a hierarchical model, based on the hypothesis that macroaggregates (>250 μm) are collections of smaller microaggregates (<250 μm) held together by organic binding agents. Primary particles form microaggregates and then macroaggregates. Carbon (C)-rich young plant residues form and stabilize macroaggregates, whereas old organic C is occluded in the microaggregates. The interaction of aggregate dynamics with soil organic carbon (SOC) is complex and embraces a range of spatial and temporal processes within macroaggregates and microaggregates. The nature and properties of aggregates are determined by the quantity and quality of coarse residues and humic compounds and by the degree of their interaction with soil particles. The mechanisms resulting in the binding of primary soil particles into stable aggregates vary with soil parent material, climate, vegetation, and land management practices. Land management practices, including tillage methods, residue management, amendments, and soil fertility management, enhance soil aggregation. However, there is still much uncertainty in the dynamics of organic matter in macroaggregation and microaggregation, and research is still needed to further understand the mechanisms of aggregate formation and its responses to human activities.
Broadband Clutter due to Aggregations of Fish
2015-01-15
Richard H. Love, BayouAcoustics, Abita Springs, LA 70420. Long-term goals: develop an understanding of the physical parameters of aggregations of fish that control... "Acoustic Uncertainty due to Marine Mammals and Fish," which was informally known as...
Institute of Scientific and Technical Information of China (English)
Zhinhong Li; Dong Wu; Yuhan Sun; Jun Wang; Yi Liu; Baozhong Dong
2001-01-01
Silica aggregates were prepared by base-catalyzed hydrolysis and condensation of alkoxides in alcohol. Polyethylene glycol (PEG) was used as an organic modifier. The sols were characterized using Small Angle X-ray Scattering (SAXS) with synchrotron radiation as the X-ray source. The structure evolution during the sol-gel process was determined and described in terms of fractal geometry. As-produced silica aggregates were found to be mass fractals. The fractal dimensions spanned the regime 2.1-2.6, corresponding to more branched and compact structures. Both RLCA and Eden models dominated the kinetic growth under base-catalyzed conditions.
Uncertainty Quantification in Fatigue Crack Growth Prognosis
Directory of Open Access Journals (Sweden)
Shankar Sankararaman
2011-01-01
This paper presents a methodology to quantify the uncertainty in fatigue crack growth prognosis, applied to structures with complicated geometry and subjected to variable amplitude multi-axial loading. Finite element analysis is used to address the complicated geometry and calculate the stress intensity factors. Multi-modal stress intensity factors due to multi-axial loading are combined to calculate an equivalent stress intensity factor using a characteristic plane approach. Crack growth under variable amplitude loading is modeled using a modified Paris law that includes retardation effects. During cycle-by-cycle integration of the crack growth law, a Gaussian process surrogate model is used to replace the expensive finite element analysis. The effect of different types of uncertainty – physical variability, data uncertainty and modeling errors – on crack growth prediction is investigated. The various sources of uncertainty include, but are not limited to, variability in loading conditions, material parameters, experimental data, and model uncertainty. Three different types of modeling errors – crack growth model error, discretization error and surrogate model error – are included in the analysis. The different types of uncertainty are incorporated into the crack growth prediction methodology to predict the probability distribution of crack size as a function of the number of load cycles. The proposed method is illustrated using an application problem, surface cracking in a cylindrical structure.
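The cycle-by-cycle integration at the heart of such a prognosis can be sketched with the plain Paris law; the paper itself uses a modified, retardation-aware law with finite-element stress intensity factors, and the geometry factor and material constants below are purely illustrative:

```python
import math

def paris_growth(a0, delta_sigma, C, m, Y=1.0, cycles=10_000):
    """Integrate the Paris law da/dN = C * (dK)^m cycle by cycle.

    a0 is the initial crack length in metres, delta_sigma the stress
    range in MPa; dK is then in MPa*sqrt(m).
    """
    a = a0
    for _ in range(cycles):
        dK = Y * delta_sigma * math.sqrt(math.pi * a)  # stress intensity range
        a += C * dK ** m                               # crack increment per cycle
    return a

# Illustrative inputs (not from the paper): 1 mm initial crack,
# 100 MPa stress range, steel-like Paris exponent m = 3.
a_final = paris_growth(a0=1e-3, delta_sigma=100.0, C=1e-11, m=3.0)
print(f"crack length after 10,000 cycles: {a_final*1e3:.3f} mm")
```

Because dK is re-evaluated at every cycle, each uncertain input (a0, delta_sigma, C, m) propagates nonlinearly into the final crack size, which is what makes surrogate-assisted uncertainty quantification attractive here.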
DEFF Research Database (Denmark)
Ambühl, Simon; Kofoed, Jens Peter; Sørensen, John Dalsgaard
2015-01-01
Wave models used for site assessments are subject to model uncertainties, which need to be quantified when using wave model results for probabilistic reliability assessments. This paper focuses on the determination of wave model uncertainties. Four different wave models are considered, and validation... uncertainties can be implemented in probabilistic reliability assessments.
Impact of Noise Power Uncertainty on the Performance of Wideband Spectrum Segmentation
Directory of Open Access Journals (Sweden)
S. Tascioglu
2010-12-01
The objective of this work is to investigate the impact of noise uncertainty on the performance of a wideband spectrum segmentation technique. We define metrics to quantify the degradation due to noise uncertainty and evaluate the performance using simulations. Our simulation results show that the noise uncertainty has detrimental effects especially for low SNR users.
Uncertainty Estimation in SiGe HBT Small-Signal Modeling
DEFF Research Database (Denmark)
Masood, Syed M.; Johansen, Tom Keinicke; Vidkjær, Jens;
2005-01-01
An uncertainty estimation and sensitivity analysis is performed on multi-step de-embedding for SiGe HBT small-signal modeling. The uncertainty estimation in combination with uncertainty model for deviation in measured S-parameters, quantifies the possible error value in de-embedded two...
Van Nooyen, R.R.P.; Hrachowitz, M.; Kolechkina, A.G.
2014-01-01
Even without uncertainty about the model structure or parameters, the output of a hydrological model run still contains several sources of uncertainty. These are: measurement errors affecting the input, the transition from continuous time and space to discrete time and space, which causes loss of information
Capel, H.W.; Cramer, J.S.; Estevez-Uscanga, O.
1995-01-01
'Uncertainty and chance' is a subject with a broad span, in that there is no academic discipline or walk of life that is not beset by uncertainty and chance. In this book a range of approaches is represented by authors from varied disciplines: natural sciences, mathematics, social sciences and medicine
Guide for Uncertainty Communication
Wardekker, J.A.|info:eu-repo/dai/nl/306644398; Kloprogge, P.|info:eu-repo/dai/nl/306644312; Petersen, A.C.; Janssen, P.H.M.; van der Sluijs, J.P.|info:eu-repo/dai/nl/073427489
2013-01-01
Dealing with uncertainty, in terms of analysis and communication, is an important and distinct topic for PBL Netherlands Environmental Assessment Agency. Without paying adequate attention to the role and implications of uncertainty, research and assessment results may be of limited value and could
Computing with Epistemic Uncertainty
2015-01-01
modified the input uncertainties in any way. And by avoiding the need for simulation, various assumptions and selection of specific sampling...strategies that may affect results are also avoided. In accordance with the Principle of Maximum Uncertainty, epistemic intervals represent the highest input...
Estimating uncertainty of inference for validation
Energy Technology Data Exchange (ETDEWEB)
Booker, Jane M [Los Alamos National Laboratory; Langenbrunner, James R [Los Alamos National Laboratory; Hemez, Francois M [Los Alamos National Laboratory; Ross, Timothy J [UNM
2010-09-30
We present a validation process based upon the concept that validation is an inference-making activity. This has always been true, but the association has not been as important before as it is now. Previously, theory had been confirmed by more data, and predictions were possible based on data. The process today is to infer from theory to code and from code to prediction, making the role of prediction somewhat automatic, and a machine function. Validation is defined as determining the degree to which a model and code is an accurate representation of experimental test data. Imbedded in validation is the intention to use the computer code to predict. To predict is to accept the conclusion that an observable final state will manifest; therefore, prediction is an inference whose goodness relies on the validity of the code. Quantifying the uncertainty of a prediction amounts to quantifying the uncertainty of validation, and this involves the characterization of uncertainties inherent in theory/models/codes and the corresponding data. An introduction to inference making and its associated uncertainty is provided as a foundation for the validation problem. A mathematical construction for estimating the uncertainty in the validation inference is then presented, including a possibility distribution constructed to represent the inference uncertainty for validation under uncertainty. The estimation of inference uncertainty for validation is illustrated using data and calculations from Inertial Confinement Fusion (ICF). The ICF measurements of neutron yield and ion temperature were obtained for direct-drive inertial fusion capsules at the Omega laser facility. The glass capsules, containing the fusion gas, were systematically selected with the intent of establishing a reproducible baseline of high-yield 10^13-10^14 neutron output. The deuterium-tritium ratio in these experiments was varied to study its influence upon yield. This paper on validation inference is the
Energy Technology Data Exchange (ETDEWEB)
Liu Baoding [Tsinghua Univ., Beijing (China). Uncertainty Theory Lab.
2007-07-01
Uncertainty theory is a branch of mathematics based on normality, monotonicity, self-duality, and countable subadditivity axioms. The goal of uncertainty theory is to study the behavior of uncertain phenomena such as fuzziness and randomness. The main topics include probability theory, credibility theory, and chance theory. For this new edition the entire text has been totally rewritten. More importantly, the chapters on chance theory and uncertainty theory are completely new. This book provides a self-contained, comprehensive and up-to-date presentation of uncertainty theory. The purpose is to equip the readers with an axiomatic approach to deal with uncertainty. Mathematicians, researchers, engineers, designers, and students in the field of mathematics, information science, operations research, industrial engineering, computer science, artificial intelligence, and management science will find this work a stimulating and useful reference. (orig.)
Economic uncertainty and econophysics
Schinckus, Christophe
2009-10-01
The objective of this paper is to provide a methodological link between econophysics and economics. I will study a key notion of both fields: uncertainty and the ways of thinking about it developed by the two disciplines. After having presented the main economic theories of uncertainty (provided by Knight, Keynes and Hayek), I show how this notion is paradoxically excluded from the economic field. In economics, uncertainty is totally reduced by an a priori Gaussian framework-in contrast to econophysics, which does not use a priori models because it works directly on data. Uncertainty is then not shaped by a specific model, and is partially and temporally reduced as models improve. This way of thinking about uncertainty has echoes in the economic literature. By presenting econophysics as a Knightian method, and a complementary approach to a Hayekian framework, this paper shows that econophysics can be methodologically justified from an economic point of view.
Physical Uncertainty Bounds (PUB)
Energy Technology Data Exchange (ETDEWEB)
Vaughan, Diane Elizabeth [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Preston, Dean L. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2015-03-19
This paper introduces and motivates the need for a new methodology for determining upper bounds on the uncertainties in simulations of engineered systems due to limited fidelity in the composite continuum-level physics models needed to simulate the systems. We show that traditional uncertainty quantification methods provide, at best, a lower bound on this uncertainty. We propose to obtain bounds on the simulation uncertainties by first determining bounds on the physical quantities or processes relevant to system performance. By bounding these physics processes, as opposed to carrying out statistical analyses of the parameter sets of specific physics models or simply switching out the available physics models, one can obtain upper bounds on the uncertainties in simulated quantities of interest.
Measurement uncertainty and probability
Willink, Robin
2013-01-01
A measurement result is incomplete without a statement of its 'uncertainty' or 'margin of error'. But what does this statement actually tell us? By examining the practical meaning of probability, this book discusses what is meant by a '95 percent interval of measurement uncertainty', and how such an interval can be calculated. The book argues that the concept of an unknown 'target value' is essential if probability is to be used as a tool for evaluating measurement uncertainty. It uses statistical concepts, such as a conditional confidence interval, to present 'extended' classical methods for evaluating measurement uncertainty. The use of the Monte Carlo principle for the simulation of experiments is described. Useful for researchers and graduate students, the book also discusses other philosophies relating to the evaluation of measurement uncertainty. It employs clear notation and language to avoid the confusion that exists in this controversial field of science.
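The '95 percent interval of measurement uncertainty' discussed in the book can be made concrete for the simplest case of repeated readings (a minimal sketch using a Type A evaluation and a t-based coverage factor; the readings are invented illustrative data, not the book's worked example):

```python
import math
import statistics

# Repeated readings of a measurand (illustrative data)
readings = [10.02, 9.98, 10.05, 10.01, 9.97, 10.03, 9.99, 10.00]

mean = statistics.mean(readings)
# Standard uncertainty of the mean (Type A evaluation)
u = statistics.stdev(readings) / math.sqrt(len(readings))

# Coverage factor for ~95% with 7 degrees of freedom (Student's t)
k = 2.36
interval = (mean - k * u, mean + k * u)
print(interval[0] < 10.0 < interval[1])  # → True
```

The interval is read as: given the dispersion of the readings, the value of the measurand is believed to lie within the stated bounds with roughly 95% coverage, which is exactly the kind of statement whose meaning the book interrogates.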
Sciacchitano, Andrea; Wieneke, Bernhard
2016-08-01
This paper discusses the propagation of the instantaneous uncertainty of PIV measurements to statistical and instantaneous quantities of interest derived from the velocity field. The expression of the uncertainty of vorticity, velocity divergence, mean value and Reynolds stresses is derived. It is shown that the uncertainty of vorticity and velocity divergence requires the knowledge of the spatial correlation between the error of the x and y particle image displacement, which depends upon the measurement spatial resolution. The uncertainty of statistical quantities is often dominated by the random uncertainty due to the finite sample size and decreases with the square root of the effective number of independent samples. Monte Carlo simulations are conducted to assess the accuracy of the uncertainty propagation formulae. Furthermore, three experimental assessments are carried out. In the first experiment, a turntable is used to simulate a rigid rotation flow field. The estimated uncertainty of the vorticity is compared with the actual vorticity error root-mean-square, with differences between the two quantities within 5-10% for different interrogation window sizes and overlap factors. A turbulent jet flow is investigated in the second experimental assessment. The reference velocity, which is used to compute the reference value of the instantaneous flow properties of interest, is obtained with an auxiliary PIV system, which features a higher dynamic range than the measurement system. Finally, the uncertainty quantification of statistical quantities is assessed via PIV measurements in a cavity flow. The comparison between estimated uncertainty and actual error demonstrates the accuracy of the proposed uncertainty propagation methodology.
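The claim that the uncertainty of statistical quantities decreases with the square root of the effective number of independent samples can be sketched directly (a stand-in using a synthetic AR(1) velocity signal rather than actual PIV data; the correlation coefficient is an assumed value):

```python
import math
import random

random.seed(2)
# Correlated velocity samples (AR(1)) mimic finite temporal resolution
rho = 0.6
u = [random.gauss(0, 1)]
for _ in range(4999):
    u.append(rho * u[-1] + math.sqrt(1 - rho ** 2) * random.gauss(0, 1))

n = len(u)
mean_u = sum(u) / n
var_u = sum((x - mean_u) ** 2 for x in u) / (n - 1)

# Effective number of independent samples for an AR(1) process
n_eff = n * (1 - rho) / (1 + rho)

u_mean_uncertainty = math.sqrt(var_u / n_eff)   # sigma / sqrt(N_eff)
naive = math.sqrt(var_u / n)                    # ignores correlation
print(u_mean_uncertainty > naive)  # → True: correlation inflates the uncertainty
```

Treating correlated samples as independent (the naive estimate) understates the random uncertainty of the mean, which is why the effective sample size matters for converged PIV statistics.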
Energy Technology Data Exchange (ETDEWEB)
Vinai, P
2007-10-15
) plant transient in which the void feedback mechanism plays an important role. In all three cases, it has been shown that a more detailed, realistic and accurate representation of output uncertainty can be achieved with the proposed methodology than is possible based on an 'expert-opinion' approach. Moreover, the importance of state space partitioning has been clearly brought out, by comparing results with those obtained assuming a single pdf for the entire database. The analysis of the Omega integral test has demonstrated that the drift-flux model's uncertainty remains important even while introducing other representative uncertainties. The developed methodology well retains its advantageous features during consideration of different uncertainty sources. The Peach Bottom turbine trip study represents a valuable demonstration of the applicability of the developed methodology to NPP transient analysis. In this application, the novel density estimator was also employed for estimating the pdf that underlies the uncertainty of the maximum power during the transient. The results obtained have been found to provide more detailed insights than can be gained from the 'classical' approach. Another feature of the turbine trip analysis has been a qualitative study of the impact of possible neutronics cross-section uncertainties on the power calculation. Besides the important influence of the uncertainty in void fraction predictions on the accuracy of the coupled transient's simulation, the uncertainties in neutronics parameters and models can be crucial as well. This points at the need for quantifying uncertainties in neutronics calculations and aggregating them with those assessed for the thermal-hydraulic phenomena for the simulation of such multi-physics transients.
Supporting chemical process design under uncertainty
Wechsung, A.; Oldenburg, J.; Yu, J.; Polt, A.
2010-01-01
A major challenge in chemical process design is to make design decisions based on partly incomplete or imperfect design input data. Still, process engineers are expected to design safe, dependable and cost-efficient processes under these conditions. The complexity of typical process models limits intuitive engineering estimates to judge the impact of uncertain parameters on the proposed design. In this work, an approach to quantify the effect of uncertainty on a process design in order to enh...
Geoinformation Generalization by Aggregation
Directory of Open Access Journals (Sweden)
Tomislav Jogun
2016-12-01
Geoinformation generalization can be divided into model generalization and cartographic generalization. Model generalization is the supervised reduction of data in a model, while cartographic generalization is the reduction of the complexity of map content adapted to the map scale, and/or use by various generalization operators (procedures). The topic of this paper is the aggregation of geoinformation. Generally, aggregation is the joining of nearby, similar objects when the distance between them is smaller than the minimum sizes. Most researchers in geoinformation generalization have focused on line features. However, the appearance of web maps with point features and choropleth maps has led to the development of concepts and algorithms for the generalization of point and polygonal features. This paper considers some previous theoretical premises and actual examples of aggregation for point, line and polygonal features. The algorithms for aggregation implemented in commercial and free GIS software were tested. In the conclusion, unresolved challenges that occur in dynamic cartographic visualizations and cases of unusual geometrical features are highlighted.
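The aggregation operator described above (joining nearby objects when their separation falls below a minimum distance) can be sketched for point features with a greedy single-pass centroid merge; real GIS implementations are considerably more sophisticated, so treat this purely as an illustration of the concept:

```python
import math

def aggregate_points(points, min_dist):
    """Greedy aggregation: a point closer than min_dist to an existing
    cluster centroid is merged into that cluster; otherwise it starts
    a new cluster. Returns the cluster centroids."""
    clusters = []  # each cluster is [sum_x, sum_y, count]
    for x, y in points:
        for c in clusters:
            cx, cy = c[0] / c[2], c[1] / c[2]
            if math.hypot(x - cx, y - cy) < min_dist:
                c[0] += x; c[1] += y; c[2] += 1
                break
        else:
            clusters.append([x, y, 1])
    return [(c[0] / c[2], c[1] / c[2]) for c in clusters]

pts = [(0, 0), (0.1, 0), (5, 5), (5.05, 5.1), (10, 0)]
merged = aggregate_points(pts, min_dist=1.0)
print(len(merged))  # → 3
```

The two pairs of near-coincident points collapse into single symbols while the isolated point survives, which is the behavior a map reader expects when the scale no longer allows the pairs to be distinguished.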
Seizinger, Alexander; Kley, Wilhelm
2013-01-01
Aims: The aim of this work is to gain a deeper insight into how much different aggregate types are affected by erosion. In particular, it is important to study the influence of the velocity of the impacting projectiles. We also want to provide models for dust growth in protoplanetary disks with simple recipes to account for erosion effects. Methods: To study the erosion of dust aggregates we employed a molecular dynamics approach that features a detailed micro-physical model of the interaction of spherical grains. For the first time, the model has been extended by introducing a new visco-elastic damping force which requires a proper calibration. Afterwards, different sample generation methods were used to cover a wide range of aggregate types. Results: The visco-elastic damping force introduced in this work turns out to be crucial to reproduce results obtained from laboratory experiments. After proper calibration, we find that erosion occurs for impact velocities of 5 m/s and above. Though fractal aggregates as ...
Rappoldt, C.
1992-01-01
The structure of an aggregated soil is characterized by the distribution of the distance from an arbitrary point in the soil to the nearest macropore or crack. From this distribution an equivalent model system is derived to which a diffusion model can be more easily applied. The model system consists of
Development of a Dynamic Lidar Uncertainty Framework
Energy Technology Data Exchange (ETDEWEB)
Newman, Jennifer [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Clifton, Andrew [WindForS; Bonin, Timothy [CIRES/NOAA ESRL; Choukulkar, Aditya [CIRES/NOAA ESRL; Brewer, W. Alan [NOAA ESRL; Delgado, Ruben [University of Maryland Baltimore County
2017-08-07
As wind turbine sizes increase and wind energy expands to more complex and remote sites, remote-sensing devices such as lidars are expected to play a key role in wind resource assessment and power performance testing. The switch to remote-sensing devices represents a paradigm shift in the way the wind industry typically obtains and interprets measurement data for wind energy. For example, the measurement techniques and sources of uncertainty for a remote-sensing device are vastly different from those associated with a cup anemometer on a meteorological tower. Current IEC standards for quantifying remote sensing device uncertainty for power performance testing consider uncertainty due to mounting, calibration, and classification of the remote sensing device, among other parameters. Values of the uncertainty are typically given as a function of the mean wind speed measured by a reference device and are generally fixed, leading to climatic uncertainty values that apply to the entire measurement campaign. However, real-world experience and a consideration of the fundamentals of the measurement process have shown that lidar performance is highly dependent on atmospheric conditions, such as wind shear, turbulence, and aerosol content. At present, these conditions are not directly incorporated into the estimated uncertainty of a lidar device. In this presentation, we describe the development of a new dynamic lidar uncertainty framework that adapts to current flow conditions and more accurately represents the actual uncertainty inherent in lidar measurements under different conditions. In this new framework, sources of uncertainty are identified for estimation of the line-of-sight wind speed and reconstruction of the three-dimensional wind field. These sources are then related to physical processes caused by the atmosphere and lidar operating conditions. The framework is applied to lidar data from a field measurement site to assess the ability of the framework to predict
Uncertainty information in climate data records from Earth observation
Merchant, Christopher J.; Paul, Frank; Popp, Thomas; Ablain, Michael; Bontemps, Sophie; Defourny, Pierre; Hollmann, Rainer; Lavergne, Thomas; Laeng, Alexandra; de Leeuw, Gerrit; Mittaz, Jonathan; Poulsen, Caroline; Povey, Adam C.; Reuter, Max; Sathyendranath, Shubha; Sandven, Stein; Sofieva, Viktoria F.; Wagner, Wolfgang
2017-07-01
The question of how to derive and present uncertainty information in climate data records (CDRs) has received sustained attention within the European Space Agency Climate Change Initiative (CCI), a programme to generate CDRs addressing a range of essential climate variables (ECVs) from satellite data. Here, we review the nature, mathematics, practicalities, and communication of uncertainty information in CDRs from Earth observations. This review paper argues that CDRs derived from satellite-based Earth observation (EO) should include rigorous uncertainty information to support the application of the data in contexts such as policy, climate modelling, and numerical weather prediction reanalysis. Uncertainty, error, and quality are distinct concepts, and the case is made that CDR products should follow international metrological norms for presenting quantified uncertainty. As a baseline for good practice, total standard uncertainty should be quantified per datum in a CDR, meaning that uncertainty estimates should clearly discriminate more and less certain data. In this case, flags for data quality should not duplicate uncertainty information, but instead describe complementary information (such as the confidence in the uncertainty estimate provided or indicators of conditions violating the retrieval assumptions). The paper discusses the many sources of error in CDRs, noting that different errors may be correlated across a wide range of timescales and space scales. Error effects that contribute negligibly to the total uncertainty in a single-satellite measurement can be the dominant sources of uncertainty in a CDR on the large space scales and long timescales that are highly relevant for some climate applications. For this reason, identifying and characterizing the relevant sources of uncertainty for CDRs is particularly challenging. The characterization of uncertainty caused by a given error effect involves assessing the magnitude of the effect, the shape of the
Topological data analysis of biological aggregation models.
Topaz, Chad M; Ziegelmeier, Lori; Halverson, Tom
2015-01-01
We apply tools from topological data analysis to two mathematical models inspired by biological aggregations such as bird flocks, fish schools, and insect swarms. Our data consists of numerical simulation output from the models of Vicsek and D'Orsogna. These models are dynamical systems describing the movement of agents who interact via alignment, attraction, and/or repulsion. Each simulation time frame is a point cloud in position-velocity space. We analyze the topological structure of these point clouds, interpreting the persistent homology by calculating the first few Betti numbers. These Betti numbers count connected components, topological circles, and trapped volumes present in the data. To interpret our results, we introduce a visualization that displays Betti numbers over simulation time and topological persistence scale. We compare our topological results to order parameters typically used to quantify the global behavior of aggregations, such as polarization and angular momentum. The topological calculations reveal events and structure not captured by the order parameters.
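The zeroth Betti number used above (the count of connected components) can be computed for a point cloud without specialized persistent-homology software, using a union-find over an eps-neighborhood graph. This is a simplified stand-in for the persistence computation in the paper, not the authors' pipeline:

```python
def betti0(points, eps):
    """Number of connected components (Betti-0) of the graph that links
    points closer than eps, via union-find with path halving."""
    parent = list(range(len(points)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i, (xi, yi) in enumerate(points):
        for j in range(i + 1, len(points)):
            xj, yj = points[j]
            if (xi - xj) ** 2 + (yi - yj) ** 2 < eps ** 2:
                parent[find(i)] = find(j)
    return len({find(i) for i in range(len(points))})

# Two well-separated "flocks" of agents
cloud = [(0, 0), (0.2, 0.1), (0.1, 0.3), (5, 5), (5.1, 4.9)]
print(betti0(cloud, eps=1.0))   # → 2
print(betti0(cloud, eps=10.0))  # → 1
```

Sweeping eps and recording where the component count changes is precisely the persistence information the abstract visualizes over simulation time and scale.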
A Unified Approach for Reporting ARM Measurement Uncertainties Technical Report
Energy Technology Data Exchange (ETDEWEB)
Campos, E [Argonne National Lab. (ANL), Argonne, IL (United States); Sisterson, Douglas [Argonne National Lab. (ANL), Argonne, IL (United States)
2016-12-01
The U.S. Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Climate Research Facility is observationally based, and quantifying the uncertainty of its measurements is critically important. With over 300 widely differing instruments providing over 2,500 datastreams, concise expression of measurement uncertainty is quite challenging. The ARM Facility currently provides data and supporting metadata (information about the data or data quality) to its users through a number of sources. Because the continued success of the ARM Facility depends on the known quality of its measurements, the Facility relies on instrument mentors and the ARM Data Quality Office (DQO) to ensure, assess, and report measurement quality. Therefore, an easily accessible, well-articulated estimate of ARM measurement uncertainty is needed. Note that some of the instrument observations require mathematical algorithms (retrievals) to convert a measured engineering variable into a useful geophysical measurement. While those types of retrieval measurements are identified, this study does not address particular methods for retrieval uncertainty. As well, the ARM Facility also provides engineered data products, or value-added products (VAPs), based on multiple instrument measurements. This study does not include uncertainty estimates for those data products. We propose here that a total measurement uncertainty should be calculated as a function of the instrument uncertainty (calibration factors), the field uncertainty (environmental factors), and the retrieval uncertainty (algorithm factors). The study will not expand on methods for computing these uncertainties. Instead, it will focus on the practical identification, characterization, and inventory of the measurement uncertainties already available in the ARM community through the ARM instrument mentors and their ARM instrument handbooks. As a result, this study will address the first steps towards reporting ARM measurement uncertainty
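The proposed total measurement uncertainty, composed of instrument, field, and retrieval terms, can be sketched as a quadrature sum (assuming the three components are independent standard uncertainties; the combination rule and numbers here are an illustrative convention, not ARM's adopted procedure):

```python
import math

def total_uncertainty(u_instrument, u_field, u_retrieval):
    """Combine independent standard uncertainties in quadrature,
    following the instrument/field/retrieval decomposition."""
    return math.sqrt(u_instrument ** 2 + u_field ** 2 + u_retrieval ** 2)

# Hypothetical component uncertainties for a single measurement
u = total_uncertainty(0.3, 0.4, 0.0)
print(round(u, 6))  # → 0.5
```

Quadrature addition is the standard rule when the error sources are uncorrelated; correlated components would require covariance terms.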
Optimal Universal Uncertainty Relations
Li, Tao; Xiao, Yunlong; Ma, Teng; Fei, Shao-Ming; Jing, Naihuan; Li-Jost, Xianqing; Wang, Zhi-Xi
2016-01-01
We study universal uncertainty relations and present a method called joint probability distribution diagram to improve the majorization bounds constructed independently in [Phys. Rev. Lett. 111, 230401 (2013)] and [J. Phys. A. 46, 272002 (2013)]. The results give rise to state independent uncertainty relations satisfied by any nonnegative Schur-concave functions. On the other hand, a remarkable recent result of entropic uncertainty relation is the direct-sum majorization relation. In this paper, we illustrate our bounds by showing how they provide a complement to that in [Phys. Rev. A. 89, 052115 (2014)]. PMID:27775010
Uncertainty Quantification in Climate Modeling and Projection
Energy Technology Data Exchange (ETDEWEB)
Qian, Yun; Jackson, Charles; Giorgi, Filippo; Booth, Ben; Duan, Qingyun; Forest, Chris; Higdon, Dave; Hou, Z. Jason; Huerta, Gabriel
2016-05-01
The projection of future climate is one of the most complex problems undertaken by the scientific community. Although scientists have been striving to better understand the physical basis of the climate system and to improve climate models, the overall uncertainty in projections of future climate has not been significantly reduced (e.g., from the IPCC AR4 to AR5). With the rapid increase of complexity in Earth system models, reducing uncertainties in climate projections becomes extremely challenging. Since uncertainties always exist in climate models, interpreting the strengths and limitations of future climate projections is key to evaluating risks, and climate change information for use in Vulnerability, Impact, and Adaptation (VIA) studies should be provided with both well-characterized and well-quantified uncertainty. The workshop aimed at providing participants, many of them from developing countries, information on strategies to quantify the uncertainty in climate model projections and assess the reliability of climate change information for decision-making. The program included a mixture of lectures on fundamental concepts in Bayesian inference and sampling, applications, and hands-on computer laboratory exercises employing software packages for Bayesian inference, Markov Chain Monte Carlo methods, and global sensitivity analyses. The lectures covered a range of scientific issues underlying the evaluation of uncertainties in climate projections, such as the effects of uncertain initial and boundary conditions, uncertain physics, and limitations of observational records. Progress in quantitatively estimating uncertainties in hydrologic, land surface, and atmospheric models at both regional and global scales was also reviewed. The application of Uncertainty Quantification (UQ) concepts to coupled climate system models is still in its infancy. The Coupled Model Intercomparison Project (CMIP) multi-model ensemble currently represents the primary data for
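The Bayesian inference and Markov Chain Monte Carlo methods covered in the workshop lectures can be illustrated with a minimal random-walk Metropolis sampler (a toy scalar-parameter problem with synthetic data, not a climate model; all numbers are illustrative):

```python
import math
import random

random.seed(3)
# Synthetic "observations" of a scalar with known noise level
true_mu, sigma = 2.0, 0.5
data = [random.gauss(true_mu, sigma) for _ in range(50)]

def log_post(mu):
    # Flat prior; Gaussian likelihood with known sigma
    return -sum((d - mu) ** 2 for d in data) / (2 * sigma ** 2)

# Random-walk Metropolis sampling of the posterior
mu, lp, chain = 0.0, None, []
lp = log_post(mu)
for _ in range(5000):
    prop = mu + random.gauss(0, 0.2)
    lp_prop = log_post(prop)
    delta = lp_prop - lp
    if delta >= 0 or random.random() < math.exp(delta):
        mu, lp = prop, lp_prop
    chain.append(mu)

posterior = chain[1000:]  # discard burn-in
est = sum(posterior) / len(posterior)
print(est)  # posterior mean; should sit near true_mu = 2.0
```

The retained chain is a sample from the posterior, so its spread (not just its mean) quantifies the parameter uncertainty, which is the quantity the workshop's hands-on exercises propagate into projections.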
On Quantifying Semantic Information
Directory of Open Access Journals (Sweden)
Simon D’Alfonso
2011-01-01
Full Text Available The purpose of this paper is to look at some existing methods of semantic information quantification and suggest some alternatives. It begins with an outline of Bar-Hillel and Carnap’s theory of semantic information before going on to look at Floridi’s theory of strongly semantic information. The latter then serves to initiate an in-depth investigation into the idea of utilising the notion of truthlikeness to quantify semantic information. Firstly, a couple of approaches to measure truthlikeness are drawn from the literature and explored, with a focus on their applicability to semantic information quantification. Secondly, a similar but new approach to measure truthlikeness/information is presented and some supplementary points are made.
Uncertainty Quantification of Calculated Temperatures for the AGR-1 Experiment
Energy Technology Data Exchange (ETDEWEB)
Binh T. Pham; Jeffrey J. Einerson; Grant L. Hawkes
2012-04-01
This report documents an effort to quantify the uncertainty of the calculated temperature data for the first Advanced Gas Reactor (AGR-1) fuel irradiation experiment conducted in the INL's Advanced Test Reactor (ATR) in support of the Next Generation Nuclear Plant (NGNP) R&D program. Recognizing uncertainties inherent in physics and thermal simulations of the AGR-1 test, the results of the numerical simulations can be used in combination with the statistical analysis methods to improve qualification of measured data. Additionally, the temperature simulation data for AGR tests can be used for validation of the fuel transport and fuel performance simulation models. The crucial roles of the calculated fuel temperatures in ensuring achievement of the AGR experimental program objectives require accurate determination of the model temperature uncertainties. The report is organized into three chapters. Chapter 1 introduces the AGR Fuel Development and Qualification program and provides overviews of AGR-1 measured data, AGR-1 test configuration and test procedure, and thermal simulation. Chapter 2 describes the uncertainty quantification procedure for temperature simulation data of the AGR-1 experiment, namely, (i) identify and quantify uncertainty sources; (ii) perform sensitivity analysis for several thermal test conditions; (iii) use uncertainty propagation to quantify overall response temperature uncertainty. A set of issues associated with modeling uncertainties resulting from the expert assessments are identified. This also includes the experimental design to estimate the main effects and interactions of the important thermal model parameters. Chapter 3 presents the overall uncertainty results for the six AGR-1 capsules. This includes uncertainties for the daily volume-average and peak fuel temperatures, daily average temperatures at TC locations, and time-average volume-average and time-average peak fuel temperatures.
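The three-step procedure described in the report (identify and quantify sources, run sensitivities, propagate to the response) can be sketched with first-order uncertainty propagation through finite-difference sensitivities. The thermal model, parameter names, and uncertainty values below are invented placeholders, not the AGR-1 models:

```python
import math

def fuel_temperature(power, conductivity, gap):
    """Toy thermal response standing in for the full simulation (illustrative)."""
    return 600 + power * (1.0 / conductivity + gap)

# Step (i): nominal inputs and their standard uncertainties (hypothetical)
nominal = {"power": 3.0, "conductivity": 0.04, "gap": 2.0}
u = {"power": 0.1, "conductivity": 0.004, "gap": 0.2}

# Steps (ii)-(iii): central-difference sensitivities, then first-order
# propagation u_T^2 = sum_i (dT/dx_i)^2 * u_i^2
var_T = 0.0
for name in nominal:
    hi = dict(nominal); hi[name] += 1e-6
    lo = dict(nominal); lo[name] -= 1e-6
    dTdx = (fuel_temperature(**hi) - fuel_temperature(**lo)) / 2e-6
    var_T += (dTdx * u[name]) ** 2

print(math.sqrt(var_T))  # overall response temperature uncertainty
```

The per-parameter terms in the sum also serve as a sensitivity ranking, showing which input uncertainty dominates the response uncertainty.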
Uncertainty Quantification in Climate Modeling
Sargsyan, K.; Safta, C.; Berry, R.; Debusschere, B.; Najm, H.
2011-12-01
We address challenges that sensitivity analysis and uncertainty quantification methods face when dealing with complex computational models. In particular, climate models are computationally expensive and typically depend on a large number of input parameters. We consider the Community Land Model (CLM), which consists of a nested computational grid hierarchy designed to represent the spatial heterogeneity of the land surface. Each computational cell can be composed of multiple land types, and each land type can incorporate one or more sub-models describing the spatial and depth variability. Even for simulations at a regional scale, the computational cost of a single run is quite high and the number of parameters that control the model behavior is very large. Therefore, the parameter sensitivity analysis and uncertainty propagation face significant difficulties for climate models. This work employs several algorithmic avenues to address some of the challenges encountered by classical uncertainty quantification methodologies when dealing with expensive computational models, specifically focusing on the CLM as a primary application. First of all, since the available climate model predictions are extremely sparse due to the high computational cost of model runs, we adopt a Bayesian framework that effectively incorporates this lack-of-knowledge as a source of uncertainty, and produces robust predictions with quantified uncertainty even if the model runs are extremely sparse. In particular, we infer Polynomial Chaos spectral expansions that effectively encode the uncertain input-output relationship and allow efficient propagation of all sources of input uncertainties to outputs of interest. Secondly, the predictability analysis of climate models strongly suffers from the curse of dimensionality, i.e. the large number of input parameters. While single-parameter perturbation studies can be efficiently performed in a parallel fashion, the multivariate uncertainty analysis
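As a toy illustration of the Polynomial Chaos idea described above, the sketch below fits a 1-D surrogate for a hypothetical expensive model with a standard-normal input using probabilists' Hermite polynomials; the model function, sample count, and truncation degree are illustrative assumptions, not the CLM setup.

```python
import math
import numpy as np

rng = np.random.default_rng(0)

def expensive_model(x):
    # stand-in for a costly simulation run (hypothetical)
    return np.exp(0.3 * x)

# a sparse set of "model runs" at samples of the standard-normal input
xs = rng.standard_normal(20)
ys = expensive_model(xs)

# regression on probabilists' Hermite polynomials He_0..He_3
degree = 3
H = np.column_stack([
    np.polynomial.hermite_e.hermeval(xs, np.eye(degree + 1)[k])
    for k in range(degree + 1)
])
coef, *_ = np.linalg.lstsq(H, ys, rcond=None)

# orthogonality of He_k under N(0,1) yields output moments directly:
# mean = c_0, variance = sum over k>0 of c_k^2 * k!
pce_mean = coef[0]
pce_var = sum(coef[k] ** 2 * math.factorial(k) for k in range(1, degree + 1))
```

For exp(0.3 X) the exact output mean is exp(0.045) ≈ 1.046; the truncated expansion recovers it closely even from 20 runs, which is the point of using a spectral surrogate when runs are sparse.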
Uncertainty, rationality, and agency
Hoek, Wiebe van der
2006-01-01
Goes across 'classical' borderlines of disciplinesUnifies logic, game theory, and epistemics and studies them in an agent-settingCombines classical and novel approaches to uncertainty, rationality, and agency
Introduction to uncertainty quantification
Sullivan, T J
2015-01-01
Uncertainty quantification is a topic of increasing practical importance at the intersection of applied mathematics, statistics, computation, and numerous application areas in science and engineering. This text provides a framework in which the main objectives of the field of uncertainty quantification are defined, and an overview of the range of mathematical methods by which they can be achieved. Complete with exercises throughout, the book will equip readers with both theoretical understanding and practical experience of the key mathematical and algorithmic tools underlying the treatment of uncertainty in modern applied mathematics. Students and readers alike are encouraged to apply the mathematical methods discussed in this book to their own favourite problems to understand their strengths and weaknesses, making the text suitable for self-study. This text is designed as an introduction to uncertainty quantification for senior undergraduate and graduate students with a mathematical or statistical back...
Menger, Fredric M
2010-09-01
It might come as a disappointment to some chemists, but just as there are uncertainties in physics and mathematics, there are some chemistry questions we may never know the answer to either, suggests Fredric M. Menger.
Interval-Valued Model Level Fuzzy Aggregation-Based Background Subtraction.
Chiranjeevi, Pojala; Sengupta, Somnath
2016-07-29
In a recent work, the effectiveness of neighborhood-supported, model-level fuzzy aggregation was shown under dynamic background conditions. The multi-feature fuzzy aggregation used in that approach uses real fuzzy similarity values and is robust for low- and medium-scale dynamic background conditions such as swaying vegetation, sprinkling water, etc. The technique, however, exhibited some limitations under heavily dynamic background conditions, as features have high uncertainty under such noisy conditions and these uncertainties were not captured by real fuzzy similarity values. Our proposed algorithm focuses on improving detection under heavy dynamic background conditions by modeling uncertainties in the data with interval-valued fuzzy sets. In this paper, real-valued fuzzy aggregation is extended to interval-valued fuzzy aggregation by considering uncertainties over real similarity values. We build up a procedure to calculate the uncertainty, which varies for each feature, at each pixel, and at each time instant. We adaptively determine membership values at each pixel by a Gaussian of the uncertainty value instead of the fixed membership values used in recent fuzzy approaches, thereby giving importance to a feature based on its uncertainty. An interval-valued Choquet integral is evaluated using the interval similarity values and the membership values in order to calculate the interval-valued fuzzy similarity between the background model and the current frame. Adequate qualitative and quantitative studies are carried out to illustrate the effectiveness of the proposed method in mitigating heavily dynamic background situations as compared to the state-of-the-art.
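The interval-valued Choquet integral mentioned above extends the standard discrete Choquet integral over real similarity values. A minimal real-valued sketch, with hypothetical feature names, similarity scores, and fuzzy measure:

```python
# Discrete Choquet integral of feature scores w.r.t. a fuzzy measure mu.
# mu maps frozensets of features to [0,1]; it is monotone with mu(all)=1.
def choquet(values, mu):
    items = sorted(values.items(), key=lambda kv: kv[1])  # ascending scores
    names = [k for k, _ in items]
    total, prev = 0.0, 0.0
    for i, (_, v) in enumerate(items):
        # weight each increment by the measure of the coalition still "active"
        total += (v - prev) * mu[frozenset(names[i:])]
        prev = v
    return total

# hypothetical similarities of two features between model and current frame
sims = {"color": 0.2, "texture": 0.6}
mu = {frozenset({"color", "texture"}): 1.0, frozenset({"texture"}): 0.5}
score = choquet(sims, mu)  # 0.2*1.0 + (0.6 - 0.2)*0.5 = 0.4
```

With an additive measure the integral reduces to a weighted mean; non-additive measures let interacting features reinforce or suppress each other, which is what the interval-valued extension exploits per-pixel.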
Lemaire, Maurice
2014-01-01
Science is a quest for certainty, but lack of certainty is the driving force behind all of its endeavors. This book, specifically, examines the uncertainty of technological and industrial science. Uncertainty and Mechanics studies the concepts of mechanical design in an uncertain setting and explains engineering techniques for inventing cost-effective products. Though it references practical applications, this is a book about ideas and potential advances in mechanical science.
Generalized uncertainty principles
Machluf, Ronny
2008-01-01
The phenomenon at the heart of classical uncertainty principles has been well known since the 1930s. We introduce a new phenomenon that underlies a new notion we call "Generalized Uncertainty Principles". We show the relation between classical uncertainty principles and generalized uncertainty principles, and we generalize the "Landau-Pollak-Slepian" uncertainty principle. Our generalization relates the following two quantities and two scaling parameters: 1) the weighted time spreading $\int_{-\infty}^\infty |f(x)|^2 w_1(x)dx$, where $w_1(x)$ is a non-negative function; 2) the weighted frequency spreading $\int_{-\infty}^\infty |\hat{f}(\omega)|^2 w_2(\omega)d\omega$; 3) the time weight scale $a$, ${w_1}_a(x)=w_1(xa^{-1})$; and 4) the frequency weight scale $b$, ${w_2}_b(\omega)=w_2(\omega b^{-1})$. A "Generalized Uncertainty Principle" is an inequality that summarizes the constraints on the relations between the two spreading quantities and the two scaling parameters. For any two reason...
Quantification and Propagation of Nuclear Data Uncertainties
Rising, Michael E.
The use of several uncertainty quantification and propagation methodologies is investigated in the context of the prompt fission neutron spectrum (PFNS) uncertainties and its impact on critical reactor assemblies. First, the first-order, linear Kalman filter is used as a nuclear data evaluation and uncertainty quantification tool combining available PFNS experimental data and a modified version of the Los Alamos (LA) model. The experimental covariance matrices, not generally given in the EXFOR database, are computed using the GMA methodology used by the IAEA to establish more appropriate correlations within each experiment. Then, using systematics relating the LA model parameters across a suite of isotopes, the PFNS for both the uranium and plutonium actinides are evaluated leading to a new evaluation including cross-isotope correlations. Next, an alternative evaluation approach, the unified Monte Carlo (UMC) method, is studied for the evaluation of the PFNS for the n(0.5 MeV)+Pu-239 fission reaction and compared to the Kalman filter. The UMC approach to nuclear data evaluation is implemented in a variety of ways to test convergence toward the Kalman filter results and to determine the nonlinearities present in the LA model. Ultimately, the UMC approach is shown to be comparable to the Kalman filter for a realistic data evaluation of the PFNS and is capable of capturing the nonlinearities present in the LA model. Next, the impact that the PFNS uncertainties have on important critical assemblies is investigated. Using the PFNS covariance matrices in the ENDF/B-VII.1 nuclear data library, the uncertainties of the effective multiplication factor, leakage, and spectral indices of the Lady Godiva and Jezebel critical assemblies are quantified. Using principal component analysis on the PFNS covariance matrices results in needing only 2-3 principal components to retain the PFNS uncertainties. Then, using the polynomial chaos expansion (PCE) on the uncertain output
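The principal-component truncation applied above to the PFNS covariance can be sketched as follows; the low-rank-plus-noise covariance below is a stand-in constructed so that a few components dominate, not an ENDF/B-VII.1 matrix.

```python
import numpy as np

# stand-in covariance: low-rank structure plus small noise so that a
# handful of principal components dominate (hypothetical, not ENDF data)
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 3))
cov = A @ A.T + 0.01 * np.eye(30)

# eigendecomposition, sorted by descending eigenvalue
w, V = np.linalg.eigh(cov)
w, V = w[::-1], V[:, ::-1]

# number of components needed to retain 99% of total variance
frac = np.cumsum(w) / np.sum(w)
n_keep = int(np.searchsorted(frac, 0.99)) + 1

# reduced-rank reconstruction of the covariance
cov_r = (V[:, :n_keep] * w[:n_keep]) @ V[:, :n_keep].T
```

Here three components retain over 99% of the variance, mirroring the report that only 2-3 principal components suffice to retain the PFNS uncertainties.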
Uncertainties in the estimation of Mmax
Indian Academy of Sciences (India)
Girish C Joshi; Mukat Lal Sharma
2008-11-01
In the present paper, the parameters affecting the uncertainties in the estimation of Mmax have been investigated by exploring different methodologies used in the analysis of the seismicity catalogue and the estimation of seismicity parameters. A critical issue to be addressed before any scientific analysis is to assess the quality, consistency, and homogeneity of the data. Empirical relationships between different magnitude scales have been used for conversions to homogenize the seismicity catalogues used for further seismic hazard assessment studies. An endeavour has been made to quantify the uncertainties due to magnitude conversions, and the seismic hazard parameters are then estimated using different methods to capture the epistemic uncertainty in the process. The study area chosen is around Delhi. The b value and the magnitude of completeness for the four seismogenic sources considered around Delhi varied by more than 40% across the three catalogues compiled with different magnitude conversion relationships. The effect of the uncertainties has then been shown on the estimation of Mmax and the probabilities of occurrence of different magnitudes. The need to quantify and account for these uncertainties in seismic hazard assessment, and in turn in seismic microzonation, is emphasized.
Photovoltaic System Modeling. Uncertainty and Sensitivity Analyses
Energy Technology Data Exchange (ETDEWEB)
Hansen, Clifford W. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Martin, Curtis E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2015-08-01
We report an uncertainty and sensitivity analysis for modeling AC energy from photovoltaic systems. Output from a PV system is predicted by a sequence of models. We quantify uncertainty in the output of each model using empirical distributions of each model's residuals. We propagate uncertainty through the sequence of models by sampling these distributions to obtain an empirical distribution of a PV system's output. We consider models that: (1) translate measured global horizontal, direct and global diffuse irradiance to plane-of-array irradiance; (2) estimate effective irradiance; (3) predict cell temperature; (4) estimate DC voltage, current and power; (5) reduce DC power for losses due to inefficient maximum power point tracking or mismatch among modules; and (6) convert DC to AC power. Our analysis considers a notional PV system comprising an array of FirstSolar FS-387 modules and a 250 kW AC inverter; we use measured irradiance and weather at Albuquerque, NM. We found the uncertainty in PV system output to be relatively small, on the order of 1% for daily energy. We found uncertainty in the models for POA irradiance and effective irradiance to be the dominant contributors to uncertainty in predicted daily energy. Our analysis indicates that efforts to reduce the uncertainty in PV system output predictions may yield the greatest improvements by focusing on the POA and effective irradiance models.
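The propagation scheme described above can be sketched with a drastically simplified model chain; the linear coefficients and residual spreads below are invented for illustration, whereas the report samples each model's empirical residual distribution.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10_000   # Monte Carlo sample size
ghi = 800.0  # one measured global-horizontal irradiance value, W/m^2

# each stage applies a simplified transfer model plus a sampled residual;
# the coefficients and residual spreads are illustrative assumptions
poa = 1.10 * ghi + rng.normal(0.0, 20.0, n)   # plane-of-array irradiance
eff = 0.97 * poa + rng.normal(0.0, 10.0, n)   # effective irradiance
dc = 0.18 * eff + rng.normal(0.0, 5.0, n)     # DC power (arbitrary scale)
ac = 0.96 * dc + rng.normal(0.0, 2.0, n)      # AC power after inversion

# the empirical output distribution quantifies the propagated uncertainty
mean_ac = ac.mean()
rel_unc = ac.std() / mean_ac
```

Replacing the normal draws with resampled model residuals gives the empirical-distribution propagation the analysis describes; the spread of `ac` is the quantity of interest.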
Proteins aggregation and human diseases
Hu, Chin-Kun
2015-04-01
Many human diseases and the death of most supercentenarians are related to protein aggregation. Neurodegenerative diseases include Alzheimer's disease (AD), Huntington's disease (HD), Parkinson's disease (PD), frontotemporal lobar degeneration, etc. Such diseases are due to progressive loss of structure or function of neurons caused by protein aggregation. For example, AD is considered to be related to aggregation of Aβ40 (a peptide with 40 amino acids) and Aβ42 (a peptide with 42 amino acids), and HD is considered to be related to aggregation of polyQ (polyglutamine) peptides. In this paper, we briefly review our recent discovery of key factors for protein aggregation. We used a lattice model to study the aggregation rates of proteins and found that the probability for a protein sequence to appear in the conformation of the aggregated state can be used to determine the temperature at which proteins aggregate most quickly. We used molecular dynamics and simple models of polymer chains to study relaxation and aggregation of proteins under various conditions and found that when the bending-angle-dependent and torsion-angle-dependent interactions are zero or very small, protein chains tend to aggregate at lower temperatures. All-atom models were used to identify a key peptide chain for the aggregation of insulin chains and to find that two polyQ chains prefer an anti-parallel conformation. It is pointed out that in many cases, protein aggregation does not result from protein misfolding. A potential drug from Chinese medicine was found for Alzheimer's disease.
Quantifying economic fluctuations
Stanley, H. Eugene; Nunes Amaral, Luis A.; Gabaix, Xavier; Gopikrishnan, Parameswaran; Plerou, Vasiliki
2001-12-01
This manuscript is a brief summary of a talk designed to address the question of whether two of the pillars of the field of phase transitions and critical phenomena, scale invariance and universality, can be useful in guiding research on interpreting empirical data on economic fluctuations. Using this conceptual framework as a guide, we empirically quantify the relation between trading activity, measured by the number of transactions N, and the price change G(t) for a given stock over a time interval [t, t+Δt]. We relate the time-dependent standard deviation of price changes (volatility) to two microscopic quantities: the number of transactions N(t) in Δt and the variance W²(t) of the price changes for all transactions in Δt. We find that the long-ranged volatility correlations are largely due to those of N. We then argue that the tail exponent of the distribution of N is insufficient to account for the tail exponent of P{G > x}. Since N and W display only weak inter-dependency, our results show that the fat tails of the distribution P{G > x} arise from W. Finally, we review recent work on quantifying collective behavior among stocks by applying the conceptual framework of random matrix theory (RMT). RMT makes predictions for “universal” properties that do not depend on the interactions between the elements comprising the system, and deviations from RMT provide clues regarding system-specific properties. We compare the statistics of the cross-correlation matrix C, whose elements Cij are the correlation coefficients of price fluctuations of stocks i and j, against a random matrix having the same symmetry properties. It is found that RMT methods can distinguish random and non-random parts of C. The non-random part of C, which deviates from RMT results, provides information regarding genuine collective behavior among stocks. We also discuss results that are reminiscent of phase transitions in spin systems, where the divergent behavior of the response function at
Cartograms tool to represent spatial uncertainty in species distribution
Directory of Open Access Journals (Sweden)
Duccio Rocchini
2017-02-01
Species distribution models have become an important tool for biodiversity monitoring. Like all statistical modelling techniques developed from field data, they are prone to uncertainty due to bias in the sampling (e.g. identification, effort, detectability). In this study, we explicitly quantify and map the uncertainty derived from sampling effort bias. To that aim, we extracted data from the widely used GBIF dataset and mapped this sampling bias using cartograms.
Pratt, Gregory C; Parson, Kris; Shinoda, Naomi; Lindgren, Paula; Dunlap, Sara; Yawn, Barbara; Wollan, Peter; Johnson, Jean
2014-01-01
Living near traffic adversely affects health outcomes. Traffic exposure metrics include distance to high-traffic roads, traffic volume on nearby roads, traffic within buffer distances, measured pollutant concentrations, land-use regression estimates of pollution concentrations, and others. We used Geographic Information System software to explore a new approach using traffic count data and a kernel density calculation to generate a traffic density surface with a resolution of 50 m. The density value in each cell reflects all the traffic on all the roads within the distance specified in the kernel density algorithm. The effect of a given roadway on the raster cell value depends on the amount of traffic on the road segment, its distance from the raster cell, and the form of the algorithm. We used a Gaussian algorithm in which traffic influence became insignificant beyond 300 m. This metric integrates the deleterious effects of traffic rather than focusing on one pollutant. The density surface can be used to impute exposure at any point, and it can be used to quantify integrated exposure along a global positioning system route. The traffic density calculation compares favorably with other metrics for assessing traffic exposure and can be used in a variety of applications.
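A minimal sketch of the kernel-density traffic surface follows; the 50 m cell size and a Gaussian width making influence negligible beyond roughly 300 m follow the description above, while the road-segment locations and traffic counts are hypothetical.

```python
import numpy as np

cell = 50.0            # raster resolution, m
nx = ny = 40           # 2 km x 2 km grid
xs = np.arange(nx) * cell
ys = np.arange(ny) * cell
X, Y = np.meshgrid(xs, ys)

# hypothetical road segments: (x, y, annual average daily traffic)
segments = [(500.0, 500.0, 20000.0),
            (1200.0, 800.0, 5000.0)]

sigma = 100.0  # Gaussian width: weight < 1% beyond ~3 sigma = 300 m
density = np.zeros_like(X)
for sx, sy, aadt in segments:
    # each cell accumulates traffic from all segments, distance-weighted
    d2 = (X - sx) ** 2 + (Y - sy) ** 2
    density += aadt * np.exp(-d2 / (2.0 * sigma ** 2))

def exposure(x, y):
    # impute exposure at any point from the nearest raster cell
    return density[int(round(y / cell)), int(round(x / cell))]
```

Sampling `exposure` along a GPS track and summing the values gives the integrated route exposure described in the abstract.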
Quantifying loopy network architectures.
Directory of Open Access Journals (Sweden)
Eleni Katifori
Biology presents many examples of planar distribution and structural networks having dense sets of closed loops. An archetype of this form of network organization is the vasculature of dicotyledonous leaves, which showcases a hierarchically-nested architecture containing closed loops at many different levels. Although a number of approaches have been proposed to measure aspects of the structure of such networks, a robust metric to quantify their hierarchical organization is still lacking. We present an algorithmic framework, the hierarchical loop decomposition, that allows mapping loopy networks to binary trees, preserving in the connectivity of the trees the architecture of the original graph. We apply this framework to investigate computer generated graphs, such as artificial models and optimal distribution networks, as well as natural graphs extracted from digitized images of dicotyledonous leaves and of the vasculature of rat cerebral neocortex. We calculate various metrics based on the asymmetry, the cumulative size distribution and the Strahler bifurcation ratios of the corresponding trees, and discuss the relationship of these quantities to the architectural organization of the original graphs. This algorithmic framework decouples the geometric information (exact location of edges and nodes) from the metric topology (connectivity and edge weight), and it ultimately allows us to perform a quantitative statistical comparison between predictions of theoretical models and naturally occurring loopy graphs.
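One of the tree metrics named above, the Strahler bifurcation ratio, rests on Strahler ordering of the decomposition trees. A minimal sketch on a toy binary tree (the dict encoding of the tree is an assumption):

```python
# Strahler order of a rooted tree: leaves have order 1; a node whose two
# highest-order children tie at k gets order k+1, otherwise it keeps the max.
def strahler(node, children):
    kids = children.get(node, [])
    if not kids:
        return 1
    orders = sorted((strahler(k, children) for k in kids), reverse=True)
    if len(orders) > 1 and orders[0] == orders[1]:
        return orders[0] + 1
    return orders[0]

# toy binary tree standing in for a hierarchical loop decomposition
tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
order = strahler("root", tree)  # two order-2 children tie, giving order 3
```

Counting the nodes of each order and taking ratios of successive counts yields the bifurcation ratios the framework compares across graphs.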
Quantifying innovation in surgery.
Hughes-Hallett, Archie; Mayer, Erik K; Marcus, Hani J; Cundy, Thomas P; Pratt, Philip J; Parston, Greg; Vale, Justin A; Darzi, Ara W
2014-08-01
The objectives of this study were to assess the applicability of patents and publications as metrics of surgical technology and innovation; evaluate the historical relationship between patents and publications; and develop a methodology that can be used to determine the rate of innovation growth in any given health care technology. The study of health care innovation represents an emerging academic field, yet it is limited by a lack of valid scientific methods for quantitative analysis. This article explores and cross-validates 2 innovation metrics using surgical technology as an exemplar. Electronic patenting databases and the MEDLINE database were searched between 1980 and 2010 for "surgeon" OR "surgical" OR "surgery." Resulting patent codes were grouped into technology clusters. Growth curves were plotted for these technology clusters to establish the rate and characteristics of growth. The initial search retrieved 52,046 patents and 1,801,075 publications. The top-performing technology cluster of the last 30 years was minimally invasive surgery. Robotic surgery, surgical staplers, and image guidance were the most emergent technology clusters. When examining the growth curves for these clusters, they were found to follow an S-shaped pattern of growth, with the emergent technologies lying on the exponential phases of their respective growth curves. In addition, publication and patent counts were closely correlated in areas of technology expansion. This article demonstrates the utility of publicly available patent and publication data to quantify innovations within surgical technology and proposes a novel methodology for assessing and forecasting areas of technological innovation.
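The S-shaped growth described above is commonly modeled with a logistic curve; the sketch below fits one to synthetic cumulative counts via logit linearization (the capacity, rate, inflection year, and noise level are invented, not values from the study).

```python
import numpy as np

def logistic(t, K, r, t0):
    # cumulative S-curve: capacity K, growth rate r, inflection year t0
    return K / (1.0 + np.exp(-r * (t - t0)))

# synthetic cumulative patent counts for a hypothetical technology cluster
years = np.arange(1980, 2011, dtype=float)
rng = np.random.default_rng(3)
counts = logistic(years, 5000.0, 0.4, 1998.0) + rng.normal(0.0, 2.0, years.size)

# logit linearization: log(K/y - 1) = -r*(t - t0) is linear in t
K = counts.max() * 1.02                  # rough capacity estimate
y = np.clip(counts, 1.0, K - 1.0)
z = np.log(K / y - 1.0)
mask = (y > 0.05 * K) & (y < 0.95 * K)   # keep the informative middle phase
slope, intercept = np.polyfit(years[mask], z[mask], 1)
r_hat = -slope
t0_hat = intercept / r_hat
```

A cluster whose fitted inflection year lies in the future is on the exponential phase of its curve, which is how "emergent" technologies can be flagged.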
Energy Technology Data Exchange (ETDEWEB)
Brown, J. [National Radiological Protection Board (United Kingdom)]; Goossens, L.H.J.; Kraan, B.C.P. [Delft Univ. of Technology (Netherlands)] [and others]
1997-06-01
This volume is the first of a two-volume document that summarizes a joint project conducted by the US Nuclear Regulatory Commission and the European Commission to assess uncertainties in the MACCS and COSYMA probabilistic accident consequence codes. These codes were developed primarily for estimating the risks presented by nuclear reactors based on postulated frequencies and magnitudes of potential accidents. This document reports on an ongoing project to assess uncertainty in the MACCS and COSYMA calculations for the offsite consequences of radionuclide releases by hypothetical nuclear power plant accidents. A panel of sixteen experts was formed to compile credible and traceable uncertainty distributions for food chain variables that affect calculations of offsite consequences. The expert judgment elicitation procedure and its outcomes are described in these volumes. Other panels were formed to consider uncertainty in other aspects of the codes. Their results are described in companion reports. Volume 1 contains background information and a complete description of the joint consequence uncertainty study. Volume 2 contains appendices that include (1) a summary of the MACCS and COSYMA consequence codes, (2) the elicitation questionnaires and case structures for both panels, (3) the rationales and results for the panels on soil and plant transfer and animal transfer, (4) short biographies of the experts, and (5) the aggregated results of their responses.
Aggregate stability in citrus plantations. The impact of drip irrigation
Cerdà, A.; Mataix-Solera, J.; Arcenegui, V.
2012-04-01
Soil aggregate stability is a key property for soil and water conservation and a synthetic parameter for quantifying soil degradation. Aggregation is relevant in soils where vegetation cover is scarce (Cerdà, 1996). Most of the research on soil aggregate stability has been carried out in forest soils (Mataix-Solera et al., 2011), and little has been done on farmland (Cerdà, 2000). Research has shown the effect of vegetation cover on soil aggregate stability (Cerdà, 1998), but little is known for soils where vegetation is scarce, rare or absent, as is typical of agricultural soils. Aggregation is then the main factor controlling soil losses and improving water availability. Moreover, agricultural management can improve soil aggregate characteristics, and the first step in this direction should be to quantify aggregate stability. There is no information about the aggregate stability of soils under citrus production, although research has shown that soil losses on farms with citrus plantations are very high (Cerdà et al., 2009) and that aggregation should play a key role because the soils are bare due to the widespread use of herbicides. From 2009 to 2011, samples were collected in summer and winter on a chemically managed farm in Montesa, Eastern Iberian Peninsula. Ten irrigated patches and ten non-irrigated patches were selected to compare the effect of drip irrigation on soil aggregate stability. The Ten Drop Impacts (TDI) and Counting the Number of Drops (CND) tests were applied to 200 aggregates (10 samples x 10 aggregates x 2 sites) in winter and summer in 2009, 2010 and 2011. The results show that the irrigated patches had TDI values ranging from 43 to 56% and the non-irrigated patches reached values of 41 to 54%. The CND values ranged from 29 to 38 drop-impacts in the non-irrigated patches and from 32 to 42 drop-impacts in the irrigated patches. No trends were found from winter to summer during the three-year period.
Integrating uncertainties for climate change mitigation
Rogelj, Joeri; McCollum, David; Reisinger, Andy; Meinshausen, Malte; Riahi, Keywan
2013-04-01
The target of keeping global average temperature increase below 2°C emerged in the international climate debate more than a decade ago. In response, the scientific community has tried to estimate the costs of reaching such a target through modelling and scenario analysis. Producing such estimates remains a challenge, particularly because of relatively well-known but ill-quantified uncertainties, and owing to limited integration of scientific knowledge across disciplines. The integrated assessment community, on one side, has extensively assessed the influence of technological and socio-economic uncertainties on low-carbon scenarios and associated costs. The climate modelling community, on the other side, has worked on achieving an increasingly better understanding of the geophysical response of the Earth system to emissions of greenhouse gases (GHG). This geophysical response remains a key uncertainty for the cost of mitigation scenarios but has only been integrated with assessments of other uncertainties in a rudimentary manner, i.e., for equilibrium conditions. To bridge this gap between the two research communities, we generate distributions of the costs associated with limiting transient global temperature increase to below specific temperature limits, taking into account uncertainties in multiple dimensions: geophysical, technological, social and political. That is, we account for uncertainties resulting from our incomplete knowledge about how precisely the climate system reacts to GHG emissions (geophysical uncertainties), how society will develop (social uncertainties and choices), which technologies will be available (technological uncertainty and choices), when we choose to start acting globally on climate change (political choices), and how much money we are or are not willing to spend to achieve climate change mitigation. We find that political choices that delay mitigation have the largest effect on the cost-risk distribution, followed by
Hydrophobic aggregation of ultrafine kaolinite
Institute of Scientific and Technical Information of China (English)
ZHANG Xiao-ping; HU Yue-hua; LIU Run-Qing
2008-01-01
The hydrophobic aggregation of ultrafine kaolinite in cationic surfactant suspension was investigated by sedimentation tests, zeta potential measurements and SEM observation. SEM images reveal that kaolinite particles show edge-face self-aggregation in acidic media, edge-face and edge-edge aggregation in neutral media, and dispersion in alkaline media due to electrostatic repulsion. In the presence of the cationic surfactant dodecylammonium acetate, in neutral and alkaline suspension, hydrophobic face-face aggregation is demonstrated. The zeta potential of kaolinite increases with increasing concentration of the cationic surfactant. Small, loose aggregates form at a low concentration, but large, tight aggregates form at a high concentration. At pH=7, the alkyl quaternary amine salt CTAB produces the strongest hydrophobic aggregation among the three cationic surfactants tested, namely dodecylammonium acetate and the alkyl quaternary amine salts 1227 and CTAB.
Absorption Spectra of Astaxanthin Aggregates
Olsina, Jan; Minofar, Babak; Polivka, Tomas; Mancal, Tomas
2012-01-01
Carotenoids in hydrated polar solvents form aggregates characterized by dramatic changes in their absorption spectra with respect to monomers. Here we analyze absorption spectra of aggregates of the carotenoid astaxanthin in hydrated dimethylsulfoxide. Depending on water content, two types of aggregates were produced: H-aggregates with absorption maximum around 390 nm, and J-aggregates with red-shifted absorption band peaking at wavelengths >550 nm. The large shifts with respect to absorption maximum of monomeric astaxanthin (470-495 nm depending on solvent) are caused by excitonic interaction between aggregated molecules. We applied molecular dynamics simulations to elucidate structure of astaxanthin dimer in water, and the resulting structure was used as a basis for calculations of absorption spectra. Absorption spectra of astaxanthin aggregates in hydrated dimethylsulfoxide were calculated using molecular exciton model with the resonance interaction energy between astaxanthin monomers constrained by semi-e...
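The excitonic shifts described above can be sketched with the simplest molecular exciton model, a two-site dimer Hamiltonian; the site energy and coupling below are illustrative numbers, not fitted astaxanthin parameters.

```python
import numpy as np

E = 2.55  # monomer transition energy in eV (~486 nm), illustrative
J = 0.35  # excitonic coupling in eV, illustrative magnitude

# two-site molecular exciton Hamiltonian for a dimer
H = np.array([[E, J],
              [J, E]])
levels = np.linalg.eigvalsh(H)  # ascending order: [E - J, E + J]

def ev_to_nm(e):
    # photon energy (eV) to wavelength (nm)
    return 1239.84 / e

red_shifted = ev_to_nm(levels[0])   # lower exciton level, J-aggregate-like
blue_shifted = ev_to_nm(levels[1])  # upper exciton level, H-aggregate-like
```

Which band carries the oscillator strength depends on the aggregate geometry: head-to-tail (J-type) arrangements brighten the lower, red-shifted level, while stacked (H-type) arrangements brighten the upper, blue-shifted one, consistent with the two aggregate types described above.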
Non-Arrhenius protein aggregation.
Wang, Wei; Roberts, Christopher J
2013-07-01
Protein aggregation presents one of the key challenges in the development of protein biotherapeutics. It affects not only product quality but also potentially impacts safety, as protein aggregates have been shown to be linked with cytotoxicity and patient immunogenicity. Therefore, investigations of protein aggregation remain a major focus in pharmaceutical companies and academic institutions. Due to the complexity of the aggregation process and temperature-dependent conformational stability, temperature-induced protein aggregation is often non-Arrhenius over even relatively small temperature windows relevant for product development, and this makes low-temperature extrapolation difficult based simply on accelerated stability studies at high temperatures. This review discusses the non-Arrhenius nature of the temperature dependence of protein aggregation, explores possible causes, and considers inherent hurdles for accurately extrapolating aggregation rates from conventional industrial approaches for selecting accelerated conditions and from conventional or more advanced methods of analyzing the resulting rate data.
Network planning under uncertainties
Ho, Kwok Shing; Cheung, Kwok Wai
2008-11-01
One of the main focuses of network planning is the optimization of the network resources required to build a network under a certain traffic demand projection. Traditionally, the inputs to this type of network planning problem are treated as deterministic. In reality, varying traffic requirements and fluctuations in network resources can cause uncertainties in the decision models. The failure to include these uncertainties in the network design process can severely affect the feasibility and economics of the network. Therefore, it is essential to find a solution that is insensitive to the uncertain conditions during the network planning process. As early as the 1960s, a network planning problem with traffic requirements varying over time was studied. This kind of network planning problem is still being actively researched, especially for VPN network design. Another kind of network planning problem under uncertainty that has been studied actively in the past decade addresses fluctuations in network resources. One such hotly pursued research topic is survivable network planning. It considers the design of a network, under uncertainties brought by fluctuations in topology, to meet the requirement that the network remain intact up to a certain number of faults occurring anywhere in the network. Recently, the authors proposed a new planning methodology called the Generalized Survivable Network that tackles the network design problem under both varying traffic requirements and fluctuations of topology. Although all the above network planning problems handle various kinds of uncertainties, it is hard to find a generic framework under more general uncertainty conditions that allows a more systematic way to solve the problems. With a unified framework, the seemingly diverse models and algorithms can be intimately related, and possibly more insights and improvements can be brought out for solving the problem. This motivates us to seek a
Market uncertainty; Markedsusikkerhet
Energy Technology Data Exchange (ETDEWEB)
Doorman, Gerard; Holtan, Jon Anders; Mo, Birger; Groenli, Helle; Haaland, Magnar; Grinden, Bjoern
1997-04-10
In Norway, the project "Market uncertainty" has been in progress for over two years and has resulted in increased skill in the use of the Grid System Operation Model. This report classifies some of the factors which lead to uncertainties in the electric power market. It has been examined whether these factors should be, or can be, modelled in the available simulation models. Some of the factors have been further considered and methods of modelling the associated uncertainties have been examined. It is concluded that (1) There is a need for automatic simulation of several scenarios in the model, and these scenarios should incorporate probability parameters, (2) At first it is most important that one can handle uncertainties in fuel prices and demand, (3) Market uncertainty which is due to irrational behaviour should be dealt with in a separate model. The difference between real and simulated prices should be analysed and modelled with a time series model, (4) Risk should be included in the Vansimtap model by way of feedback from simulations, (5) The marginal values of stored water as calculated by means of the various methods in use should be compared systematically. 9 refs., 16 figs., 5 tabs.
Measurement uncertainty relations
Energy Technology Data Exchange (ETDEWEB)
Busch, Paul, E-mail: paul.busch@york.ac.uk [Department of Mathematics, University of York, York (United Kingdom); Lahti, Pekka, E-mail: pekka.lahti@utu.fi [Turku Centre for Quantum Physics, Department of Physics and Astronomy, University of Turku, FI-20014 Turku (Finland); Werner, Reinhard F., E-mail: reinhard.werner@itp.uni-hannover.de [Institut für Theoretische Physik, Leibniz Universität, Hannover (Germany)
2014-04-15
Measurement uncertainty relations are quantitative bounds on the errors in an approximate joint measurement of two observables. They can be seen as a generalization of the error/disturbance tradeoff first discussed heuristically by Heisenberg. Here we prove such relations for the case of two canonically conjugate observables like position and momentum, and establish a close connection with the more familiar preparation uncertainty relations constraining the sharpness of the distributions of the two observables in the same state. Both sets of relations are generalized to means of order α rather than the usual quadratic means, and we show that the optimal constants are the same for preparation and for measurement uncertainty. The constants are determined numerically and compared with some bounds in the literature. In both cases, the near-saturation of the inequalities entails that the state (resp. observable) is uniformly close to a minimizing one.
Sustainability and uncertainty
DEFF Research Database (Denmark)
Jensen, Karsten Klint
2007-01-01
and infers prescriptions from this requirement. These two approaches may conflict, and in this conflict the top-down approach has the upper hand, ethically speaking. However, the implicit goal in the top-down approach of justice between generations needs to be refined in several dimensions. But even given a clarified ethical goal, disagreements can arise. At present we do not know what substitutions will be possible in the future. This uncertainty clearly affects the prescriptions that follow from the measure of sustainability. Consequently, decisions about how to make future agriculture sustainable are decisions under uncertainty. There might be different judgments on likelihoods; but even given some set of probabilities, there might be disagreement on the right level of precaution in face of the uncertainty.
SAGD optimization under uncertainty
Energy Technology Data Exchange (ETDEWEB)
Gossuin, J.; Naccache, P. [Schlumberger SIS, Abingdon (United Kingdom); Bailley, W.; Couet, B. [Schlumberger-Doll Research, Cambridge, MA, (United States)
2011-07-01
In the heavy oil industry, the steam assisted gravity drainage process is often used to enhance oil recovery but this is a costly method and ways to make it more efficient are needed. Multiple methods have been developed to optimize the SAGD process but none of them explicitly considered uncertainty. This paper presents an optimization method in the presence of reservoir uncertainty. This process was tested on an SAGD model where three equi-probable geological models are possible. Preparatory steps were first performed to identify key variables and the optimization model was then proposed. The method was shown to be successful in handling a significant number of uncertainties, optimizing the SAGD process and preventing premature steam channels that can choke production. The optimization method presented herein was successfully applied to an SAGD process and was shown to provide better strategies than sensitivity analysis while handling more complex problems.
Hamm, Nicholas A. S.; Soares Magalhães, Ricardo J.; Stein, Alfred
2016-01-01
Background Spatial modelling of STH and schistosomiasis epidemiology is now commonplace. Spatial epidemiological studies help inform decisions regarding the number of people at risk as well as the geographic areas that need to be targeted with mass drug administration; however, limited attention has been given to propagated uncertainties, their interpretation, and consequences for the mapped values. Using currently published literature on the spatial epidemiology of helminth infections we identified: (1) the main uncertainty sources, their definition and quantification and (2) how uncertainty is informative for STH programme managers and scientists working in this domain. Methodology/Principal Findings We performed a systematic literature search using the Preferred Reporting Items for Systematic reviews and Meta-Analysis (PRISMA) protocol. We searched Web of Knowledge and PubMed using a combination of uncertainty, geographic and disease terms. A total of 73 papers fulfilled the inclusion criteria for the systematic review. Only 9% of the studies did not address any element of uncertainty, while 91% of studies quantified uncertainty in the predicted morbidity indicators and 23% of studies mapped it. In addition, 57% of the studies quantified uncertainty in the regression coefficients but only 7% incorporated it in the regression response variable (morbidity indicator). Fifty percent of the studies discussed uncertainty in the covariates but did not quantify it. Uncertainty was mostly defined as precision, and quantified using credible intervals by means of Bayesian approaches. Conclusion/Significance None of the studies considered adequately all sources of uncertainties. We highlighted the need for uncertainty in the morbidity indicator and predictor variable to be incorporated into the modelling framework. Study design and spatial support require further attention and uncertainty associated with Earth observation data should be quantified. Finally, more attention
Directory of Open Access Journals (Sweden)
Andrea L Araujo Navas
2016-12-01
Full Text Available Spatial modelling of STH and schistosomiasis epidemiology is now commonplace. Spatial epidemiological studies help inform decisions regarding the number of people at risk as well as the geographic areas that need to be targeted with mass drug administration; however, limited attention has been given to propagated uncertainties, their interpretation, and consequences for the mapped values. Using currently published literature on the spatial epidemiology of helminth infections we identified: (1) the main uncertainty sources, their definition and quantification and (2) how uncertainty is informative for STH programme managers and scientists working in this domain. We performed a systematic literature search using the Preferred Reporting Items for Systematic reviews and Meta-Analysis (PRISMA) protocol. We searched Web of Knowledge and PubMed using a combination of uncertainty, geographic and disease terms. A total of 73 papers fulfilled the inclusion criteria for the systematic review. Only 9% of the studies did not address any element of uncertainty, while 91% of studies quantified uncertainty in the predicted morbidity indicators and 23% of studies mapped it. In addition, 57% of the studies quantified uncertainty in the regression coefficients but only 7% incorporated it in the regression response variable (morbidity indicator). Fifty percent of the studies discussed uncertainty in the covariates but did not quantify it. Uncertainty was mostly defined as precision, and quantified using credible intervals by means of Bayesian approaches. None of the studies considered adequately all sources of uncertainties. We highlighted the need for uncertainty in the morbidity indicator and predictor variable to be incorporated into the modelling framework. Study design and spatial support require further attention and uncertainty associated with Earth observation data should be quantified. Finally, more attention should be given to mapping and interpreting
The Effects of Uncertainty in Speed-Flow Curve Parameters on a Large-Scale Model
DEFF Research Database (Denmark)
Manzo, Stefano; Nielsen, Otto Anker; Prato, Carlo Giacomo
2014-01-01
Uncertainty is inherent in transport models and prevents the use of a deterministic approach when traffic is modeled. Quantifying uncertainty thus becomes an indispensable step to produce a more informative and reliable output of transport models. In traffic assignment models, volume-delay function … uncertainty. This aspect is evident particularly for stretches of the network with a high number of competing routes. Model sensitivity was also tested for BPR parameter uncertainty combined with link capacity uncertainty. The resultant increase in model sensitivity demonstrates even further the importance…
Research on Judgment Aggregation Based on Logic
Directory of Open Access Journals (Sweden)
Li Dai
2014-05-01
Full Text Available Preference aggregation and judgment aggregation are two basic research models of group decision making. Preference aggregation has been studied in depth in social choice theory; more recently, however, research in social choice theory has increasingly focused on the newer problem of judgment aggregation. Judgment aggregation addresses, from the perspective of logic, how to aggregate many individually consistent sets of logical formulas into one. We start with a judgment aggregation model based on logic and then explore different solutions to the judgment aggregation problem.
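The core difficulty behind judgment aggregation can be shown in a few lines: proposition-wise majority voting over logically connected judgments (the "discursive dilemma") can turn individually consistent judgment sets into an inconsistent collective one. A minimal sketch with a hypothetical three-judge panel:

```python
# Discursive dilemma: three judges vote on premises p, q and the conclusion
# r, where logical consistency requires r == (p and q). Each individual
# judgment set is consistent, yet proposition-wise majority voting is not.
judges = [
    {"p": True,  "q": True,  "r": True},   # accepts both premises and conclusion
    {"p": True,  "q": False, "r": False},
    {"p": False, "q": True,  "r": False},
]

def majority(prop):
    votes = sum(j[prop] for j in judges)
    return votes > len(judges) / 2

collective = {prop: majority(prop) for prop in ("p", "q", "r")}

def consistent(judgment):
    # a judgment set is consistent iff r equals (p and q)
    return judgment["r"] == (judgment["p"] and judgment["q"])

all_individuals_consistent = all(consistent(j) for j in judges)
collective_consistent = consistent(collective)
print(collective)                  # {'p': True, 'q': True, 'r': False}
print(all_individuals_consistent)  # True
print(collective_consistent)       # False
```

The collective accepts both premises but rejects the conclusion, which is exactly the kind of inconsistency that motivates the alternative aggregation rules the paper explores.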
Mapping the uncertainty in global CCN using emulation
Directory of Open Access Journals (Sweden)
L. A. Lee
2012-10-01
Full Text Available In the last two IPCC assessments aerosol radiative forcings have been given the largest uncertainty range of all forcing agents assessed. This forcing range is really a diversity of simulated forcings in different models. An essential step towards reducing model uncertainty is to quantify and attribute the sources of uncertainty at the process level. Here, we use statistical emulation techniques to quantify uncertainty in simulated concentrations of July-mean cloud condensation nuclei (CCN from a complex global aerosol microphysics model. CCN was chosen because it is the aerosol property that controls cloud drop concentrations, and therefore the aerosol indirect radiative forcing effect. We use Gaussian process emulation to perform a full variance-based sensitivity analysis and quantify, for each model grid box, the uncertainty in simulated CCN that results from 8 uncertain model parameters. We produce global maps of absolute and relative CCN sensitivities to the 8 model parameter ranges and derive probability density functions for simulated CCN. The approach also allows us to include the uncertainty from interactions between these parameters, which cannot be quantified in traditional one-at-a-time sensitivity tests. The key findings from our analysis are that model CCN in polluted regions and the Southern Ocean are mostly only sensitive to uncertainties in emissions parameters but in all other regions CCN uncertainty is driven almost exclusively by uncertainties in parameters associated with model processes. For example, in marine regions between 30° S and 30° N model CCN uncertainty is driven mainly by parameters associated with cloud-processing of Aitken-sized particles whereas in polar regions uncertainties in scavenging parameters dominate. In these two regions a single parameter dominates but in other regions up to 50% of the variance can be due to interaction effects between different parameters. Our analysis provides direct
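The variance-based sensitivity analysis described above can be illustrated on a toy function standing in for the expensive aerosol model; the two-parameter model, the pick-freeze (Sobol') estimator and all numbers below are illustrative assumptions, not values from the study. The residual "interaction" share corresponds to the interaction effects the abstract says one-at-a-time tests cannot capture:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Toy stand-in for an expensive simulator: two uncertain parameters on
# [-1, 1] plus an interaction term (illustrative coefficients).
def model(x1, x2):
    return 4.0 * x1 + 1.0 * x2 + 3.0 * x1 * x2

# Pick-freeze (Sobol') estimator of first-order sensitivity indices.
a = rng.uniform(-1, 1, (n, 2))
b = rng.uniform(-1, 1, (n, 2))
y_a = model(a[:, 0], a[:, 1])
var_y = y_a.var()

s1 = []
for i in range(2):
    mixed = b.copy()
    mixed[:, i] = a[:, i]              # freeze parameter i, resample the rest
    y_mixed = model(mixed[:, 0], mixed[:, 1])
    s1.append(np.cov(y_a, y_mixed)[0, 1] / var_y)

interaction = 1.0 - sum(s1)            # variance share due to interactions
```

For this model the analytic values are S1 = 0.8, S2 = 0.05 and a 0.15 interaction share; the study replaces the direct Monte Carlo evaluations with a Gaussian-process emulator because the real model is far too costly to sample this way.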
Sensitivity and uncertainty analysis
Cacuci, Dan G; Navon, Ionel Michael
2005-01-01
As computer-assisted modeling and analysis of physical processes have continued to grow and diversify, sensitivity and uncertainty analyses have become indispensable scientific tools. Sensitivity and Uncertainty Analysis. Volume I: Theory focused on the mathematical underpinnings of two important methods for such analyses: the Adjoint Sensitivity Analysis Procedure and the Global Adjoint Sensitivity Analysis Procedure. This volume concentrates on the practical aspects of performing these analyses for large-scale systems. The applications addressed include two-phase flow problems, a radiative c
Uncertainty in artificial intelligence
Levitt, TS; Lemmer, JF; Shachter, RD
1990-01-01
Clearly illustrated in this volume is the current relationship between Uncertainty and AI. It has been said that research in AI revolves around five basic questions asked relative to some particular domain: What knowledge is required? How can this knowledge be acquired? How can it be represented in a system? How should this knowledge be manipulated in order to provide intelligent behavior? How can the behavior be explained? In this volume, all of these questions are addressed. From the perspective of the relationship of uncertainty to the basic questions of AI, the book divides naturally i
Orbital State Uncertainty Realism
Horwood, J.; Poore, A. B.
2012-09-01
Fundamental to the success of the space situational awareness (SSA) mission is the rigorous inclusion of uncertainty in the space surveillance network. The proper characterization of uncertainty in the orbital state of a space object is a common requirement of many SSA functions including tracking and data association, resolution of uncorrelated tracks (UCTs), conjunction analysis and probability of collision, sensor resource management, and anomaly detection. While tracking environments, such as air and missile defense, make extensive use of Gaussian and local linearity assumptions within algorithms for uncertainty management, space surveillance is inherently different due to long time gaps between updates, high misdetection rates, nonlinear and non-conservative dynamics, and non-Gaussian phenomena. The latter implies that "covariance realism" is not always sufficient. SSA also requires "uncertainty realism": the proper characterization of both the state and covariance and all non-zero higher-order cumulants. In other words, a proper characterization of a space object's full state probability density function (PDF) is required. In order to provide a more statistically rigorous treatment of uncertainty in the space surveillance tracking environment and to better support the aforementioned SSA functions, a new class of multivariate PDFs is formulated which more accurately characterizes the uncertainty of a space object's state or orbit. The new distribution contains a parameter set controlling the higher-order cumulants which gives the level sets a distinctive "banana" or "boomerang" shape and degenerates to a Gaussian in a suitable limit. Using the new class of PDFs within the general Bayesian nonlinear filter, the resulting filter prediction step (i.e., uncertainty propagation) is shown to have the same computational cost as the traditional unscented Kalman filter, with the former able to maintain a proper characterization of the uncertainty for up to ten
Commonplaces and social uncertainty
DEFF Research Database (Denmark)
Lassen, Inger
2008-01-01
an example of risk discourse in which the use of commonplaces seems to be a central feature (Myers 2004: 81). My analyses support earlier findings that commonplaces serve important interactional purposes (Barton 1999) and that they are used for mitigating disagreement, for closing topics and for facilitating risk discourse (Myers 2005; 2007). In addition, however, I argue that commonplaces are used to mitigate feelings of insecurity caused by uncertainty and to negotiate new codes of moral conduct. Keywords: uncertainty, commonplaces, risk discourse, focus groups, appraisal
Estimating the measurement uncertainty in forensic blood alcohol analysis.
Gullberg, Rod G
2012-04-01
For many reasons, forensic toxicologists are being asked to determine and report their measurement uncertainty in blood alcohol analysis. While understood conceptually, the elements and computations involved in determining measurement uncertainty are generally foreign to most forensic toxicologists. Several established and well-documented methods are available to determine and report the uncertainty in blood alcohol measurement. A straightforward bottom-up approach is presented that includes: (1) specifying the measurand, (2) identifying the major components of uncertainty, (3) quantifying the components, (4) statistically combining the components and (5) reporting the results. A hypothetical example is presented that employs reasonable estimates for forensic blood alcohol analysis assuming headspace gas chromatography. These computations are easily employed in spreadsheet programs as well. Determining and reporting measurement uncertainty is an important element in establishing fitness-for-purpose. Indeed, the demand for such computations and information from the forensic toxicologist will continue to increase.
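The five-step bottom-up budget lends itself to a short calculation of the kind the abstract says is "easily employed in spreadsheet programs". All component values below are illustrative placeholders, not forensic reference figures:

```python
import math

# Bottom-up uncertainty budget sketch for a blood alcohol measurement.
# Mean and component values are hypothetical, for illustration only.
mean_bac = 0.0840  # g/100 mL, mean of replicate headspace GC measurements

# Steps 2-3: major components as relative standard uncertainties.
components = {
    "calibration": 0.012,
    "replicate analysis": 0.009,
    "reference material": 0.005,
    "sampling/dilution": 0.004,
}

# Step 4: combine independent components by root-sum-of-squares.
u_rel = math.sqrt(sum(u ** 2 for u in components.values()))
u_abs = u_rel * mean_bac

# Step 5: report the expanded uncertainty with coverage factor k = 2 (~95 %).
k = 2
U = k * u_abs
print(f"BAC = {mean_bac:.4f} ± {U:.4f} g/100 mL (k = {k})")
# → BAC = 0.0840 ± 0.0027 g/100 mL (k = 2)
```

Root-sum-of-squares combination assumes the components are independent; correlated components would require covariance terms in step 4.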
Avoiding climate change uncertainties in Strategic Environmental Assessment
DEFF Research Database (Denmark)
Larsen, Sanne Vammen; Kørnøv, Lone; Driscoll, Patrick Arthur
2013-01-01
This article is concerned with how Strategic Environmental Assessment (SEA) practice handles climate change uncertainties within the Danish planning system. First, a hypothetical model is set up for how uncertainty is handled and not handled in decision-making. The model incorporates the strategies ‘reduction’ and ‘resilience’, ‘denying’, ‘ignoring’ and ‘postponing’. Second, 151 Danish SEAs are analysed with a focus on the extent to which climate change uncertainties are acknowledged and presented, and the empirical findings are discussed in relation to the model. The findings indicate that despite incentives to do so, climate change uncertainties were systematically avoided or downplayed in all but 5 of the 151 SEAs that were reviewed. Finally, two possible explanatory mechanisms are proposed to explain this: conflict avoidance and a need to quantify uncertainty.
Facing uncertainty in ecosystem services-based resource management.
Grêt-Regamey, Adrienne; Brunner, Sibyl H; Altwegg, Jürg; Bebi, Peter
2013-09-01
The concept of ecosystem services is increasingly used as a support for natural resource management decisions. While the science for assessing ecosystem services is improving, appropriate methods to address uncertainties in a quantitative manner are missing. Ignoring parameter uncertainties, modeling uncertainties and uncertainties related to human-environment interactions can modify decisions and lead to overlooking important management possibilities. In this contribution, we present a new approach for mapping the uncertainties in the assessment of multiple ecosystem services. The spatially explicit risk approach links Bayesian networks to a Geographic Information System for forecasting the value of a bundle of ecosystem services and quantifies the uncertainties related to the outcomes in a spatially explicit manner. We demonstrate that mapping uncertainties in ecosystem services assessments provides key information for decision-makers seeking critical areas in the delivery of ecosystem services in a case study in the Swiss Alps. The results suggest that not only the total value of the bundle of ecosystem services is highly dependent on uncertainties, but the spatial pattern of the ecosystem services values changes substantially when considering uncertainties. This is particularly important for the long-term management of mountain forest ecosystems, which have long rotation stands and are highly sensitive to pressing climate and socio-economic changes. Copyright © 2012 Elsevier Ltd. All rights reserved.
Objectified quantification of uncertainties in Bayesian atmospheric inversions
Directory of Open Access Journals (Sweden)
A. Berchet
2014-07-01
Full Text Available Classical Bayesian atmospheric inversions process atmospheric observations and prior emissions, the two being connected by an observation operator picturing mainly the atmospheric transport. These inversions rely on prescribed errors in the observations, the prior emissions and the observation operator. At the meso-scale, inversion results are very sensitive to the prescribed error distributions, which are not accurately known. The classical Bayesian framework experiences difficulties in quantifying the impact of mis-specified error distributions on the optimized fluxes. In order to cope with this issue, we rely on recent research results and enhance the classical Bayesian inversion framework through a marginalization over all the plausible errors that can be prescribed in the system. The marginalization consists in computing inversions for all possible error distributions, weighted by the probability of occurrence of the error distributions. The posterior distribution of the fluxes calculated by the marginalization is complicated and not explicitly describable. We therefore carry out Monte-Carlo sampling relying on an approximation of the probability of occurrence of the error distributions. This approximation is deduced from the well-tested algorithm of the Maximum of Likelihood. Thus, the marginalized inversion relies on an automatic objectified diagnosis of the error statistics, without any prior knowledge about the matrices. It robustly includes the uncertainties on the error distributions, contrary to what is classically done with frozen expert-knowledge error statistics. Some expert knowledge is still used in the method for the choice of emission aggregation pattern and sampling protocol in order to reduce the computation costs of the method. The relevance and the robustness of the method are tested on a case study: the inversion of methane surface fluxes at the meso-scale with real observation sites in Eurasia. Observing System Simulation
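The marginalization idea can be sketched numerically under strong simplifying assumptions: a tiny linear observation operator, diagonal covariances, and uniform weights over the plausible error variances (the paper instead weights by a maximum-likelihood score). Everything here is a toy illustration, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy linear inversion: y = H x + noise, with x two surface fluxes
# and y three atmospheric observations (all values illustrative).
H = np.array([[1.0, 0.3],
              [0.4, 1.0],
              [0.7, 0.7]])
x_true = np.array([2.0, -1.0])
y = H @ x_true + rng.normal(0.0, 0.2, 3)

x_prior = np.zeros(2)
B = np.eye(2) * 4.0                    # prior flux-error covariance

def analysis(r_var):
    """Classical Bayesian (best linear unbiased) update for a fixed R."""
    R = np.eye(3) * r_var
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
    return x_prior + K @ (y - H @ x_prior)

# Classical inversion: one frozen, expert-chosen observation-error variance.
x_classical = analysis(0.04)

# Marginalized inversion: average the analyses over many plausible error
# variances (uniform weights here; the paper uses likelihood-based weights).
r_samples = rng.uniform(0.01, 0.25, 200)
x_marginal = np.mean([analysis(r) for r in r_samples], axis=0)
```

In this linear-Gaussian toy both estimates land near the true fluxes; the point of the marginalization is that its posterior no longer hinges on a single prescribed error variance.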
Directory of Open Access Journals (Sweden)
Barbara Mickowska
2013-02-01
Full Text Available The aim of this study was to assess the importance of validation and uncertainty estimation related to the results of amino acid analysis using ion-exchange chromatography with post-column derivatization. The method was validated and the components of standard uncertainty were identified and quantified to recognize the major contributions to the uncertainty of the analysis. The estimated relative extended uncertainty (k=2, P=95%) varied in the range from 9.03% to 12.68%. Quantification of the uncertainty components indicates that the contribution of the calibration concentration uncertainty is the largest and plays the most important role in the overall uncertainty of amino acid analysis. It is followed by the uncertainty of the chromatographic peak areas and the sample weighing procedure. The uncertainty of sample volume and calibration peak area may be negligible. The comparison of CV% with the estimated relative uncertainty indicates that interpretation of research results can be misleading without uncertainty estimation.
Characterization Techniques for Aggregated Nanomaterials in Biological and Environmental Systems
Jeon, Seongho
colloidal systems. The aggregation mechanism and behavior of nanoparticles in their surroundings were examined as a function of their quantified aggregate morphologies. The first three studies (Chapters 2, 3, and 4) introduced a new gas-phase particle size measurement system, a liquid nebulization-ion mobility spectrometry (LN-IMS) technique, to characterize nanomaterials (down to 5 nm in characteristic size) and nanoparticle-protein conjugates. In the other two studies (Chapters 5 and 6), the three-dimensional structures of homo-aggregates were quantified with the fractal aggregate model, and the resulting fractal structures were correlated with the aggregates' transport properties in their surroundings.
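The fractal aggregate model mentioned above relates monomer number to aggregate size via N = k_f (R_g / a)^D_f. A short sketch with illustrative parameter values (not ones from the thesis) shows how the fractal dimension D_f is recovered from sized aggregates by a log-log fit:

```python
import numpy as np

# Fractal aggregate scaling law: N = k_f * (R_g / a)**D_f, relating monomer
# count N to radius of gyration R_g, monomer radius a, prefactor k_f and
# fractal dimension D_f. All parameter values are illustrative.
a = 5.0                                   # monomer radius, nm
k_f, D_f = 1.3, 1.8                       # typical diffusion-limited values

# Synthetic aggregates obeying the scaling law exactly:
N = np.array([10, 30, 100, 300, 1000], dtype=float)
R_g = a * (N / k_f) ** (1.0 / D_f)

# Recover D_f (slope) and k_f (intercept) by linear regression in log space.
slope, intercept = np.polyfit(np.log(R_g / a), np.log(N), 1)
D_f_fit, k_f_fit = slope, np.exp(intercept)
```

With real sizing data the points scatter around the line, and the fitted slope then quantifies how compact or open the aggregate morphology is.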
Extended Forward Sensitivity Analysis for Uncertainty Quantification
Energy Technology Data Exchange (ETDEWEB)
Haihua Zhao; Vincent A. Mousseau
2011-09-01
Verification and validation (V&V) are playing more important roles in quantifying uncertainties and realizing high-fidelity simulations in engineering system analyses, such as transients in a complex nuclear reactor system. Traditional V&V in reactor system analysis focused more on the validation part or did not differentiate verification and validation. The traditional approach to uncertainty quantification is based on a 'black box' approach: the simulation tool is treated as an unknown signal generator, a distribution of inputs according to assumed probability density functions is sent in, and the distribution of the outputs is measured and correlated back to the original input distribution. The 'black box' method mixes numerical errors with all other uncertainties. It is also not efficient for performing sensitivity analysis. Contrary to the 'black box' method, a more efficient sensitivity approach can take advantage of intimate knowledge of the simulation code. In these types of approaches, equations for the propagation of uncertainty are constructed and the sensitivities are directly solved for as variables in the simulation. This paper presents forward sensitivity analysis as a method to help uncertainty quantification. By including the time step, and potentially the spatial step, as special sensitivity parameters, the forward sensitivity method is extended into a method for quantifying numerical errors. Note that by integrating local truncation errors over the whole system through the forward sensitivity analysis process, the generated time step and spatial step sensitivity information reflects global numerical errors. The discretization errors can be systematically compared against uncertainties due to other physical parameters. This extension makes the forward sensitivity method a much more powerful tool for uncertainty quantification. By knowing the relative sensitivity of time and space steps with other interested physical
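The contrast with the 'black box' approach can be sketched on a one-parameter ODE: the forward sensitivity dy/dk is obtained by integrating its own governing equation alongside the state, rather than by perturbing inputs and rerunning. A minimal illustration (not the paper's reactor code):

```python
import numpy as np

# State equation: dy/dt = -k * y, y(0) = 1.
# Forward sensitivity s = dy/dk obeys ds/dt = d/dk(-k*y) = -y - k*s,
# solved alongside the state with the same integrator (forward Euler here).
k, y0, t_end, dt = 0.5, 1.0, 2.0, 1e-4
y, s = y0, 0.0
for _ in range(int(round(t_end / dt))):
    dy = -k * y
    ds = -y - k * s
    y += dt * dy
    s += dt * ds

# Analytic check: y = exp(-k t), so s = dy/dk = -t * exp(-k t).
y_exact = np.exp(-k * t_end)
s_exact = -t_end * np.exp(-k * t_end)
```

Because the sensitivity is a solution variable, its discretization error shrinks with dt just like the state's, which is what lets the extended method fold time-step error into the same framework as parameter sensitivities.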
Development of a Prototype Model-Form Uncertainty Knowledge Base
Green, Lawrence L.
2016-01-01
Uncertainties are generally classified as either aleatory or epistemic. Aleatory uncertainties are those attributed to random variation, either naturally or through manufacturing processes. Epistemic uncertainties are generally attributed to a lack of knowledge. One type of epistemic uncertainty is called model-form uncertainty. The term model-form means that among the choices to be made during a design process within an analysis, there are different forms of the analysis process, which each give different results for the same configuration at the same flight conditions. Examples of model-form uncertainties include the grid density, grid type, and solver type used within a computational fluid dynamics code, or the choice of the number and type of model elements within a structures analysis. The objectives of this work are to identify and quantify a representative set of model-form uncertainties and to make this information available to designers through an interactive knowledge base (KB). The KB can then be used during probabilistic design sessions, so as to enable the possible reduction of uncertainties in the design process through resource investment. An extensive literature search has been conducted to identify and quantify typical model-form uncertainties present within aerospace design. An initial attempt has been made to assemble the results of this literature search into a searchable KB, usable in real time during probabilistic design sessions. A concept of operations and the basic structure of a model-form uncertainty KB are described. Key operations within the KB are illustrated. Current limitations in the KB, and possible workarounds are explained.
Model and parameter uncertainty in IDF relationships under climate change
Chandra, Rupa; Saha, Ujjwal; Mujumdar, P. P.
2015-05-01
Quantifying the distributional behavior of extreme events is crucial in hydrologic design. Intensity Duration Frequency (IDF) relationships are used extensively in engineering, especially in urban hydrology, to obtain the return level of an extreme rainfall event for a specified return period and duration. Major sources of uncertainty in IDF relationships are insufficient quantity and quality of data, leading to parameter uncertainty in the distribution fitted to the data, and uncertainty resulting from the use of multiple GCMs. It is important to study these uncertainties and propagate them forward for accurate assessment of future return levels. The objective of this study is to quantify, using a Bayesian approach, the uncertainties arising from the parameters of the distribution fitted to data and from the multiple GCM models. The posterior distribution of the parameters is obtained from Bayes' rule, and the parameters are transformed to obtain return levels for a specified return period. A Markov Chain Monte Carlo (MCMC) method using the Metropolis-Hastings algorithm is used to obtain the posterior distribution of the parameters. Twenty-six CMIP5 GCMs along with four RCP scenarios are considered for studying the effects of climate change and to obtain projected IDF relationships for the case study of Bangalore city in India. GCM uncertainty due to the use of multiple GCMs is treated using the Reliability Ensemble Averaging (REA) technique along with the parameter uncertainty. Scale invariance theory is employed for obtaining short-duration return levels from daily data. It is observed that the uncertainty in short-duration rainfall return levels is high compared to that for longer durations. Further, it is observed that parameter uncertainty is large compared to the model uncertainty.
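A compressed sketch of the parameter-uncertainty part of such an analysis, assuming a Gumbel fit to synthetic annual maxima (the study's actual data, fitted distributions and GCM treatment differ):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "annual maximum rainfall" sample (Gumbel; illustrative values).
mu_true, beta_true = 50.0, 10.0
data = mu_true - beta_true * np.log(-np.log(rng.uniform(size=60)))

def log_post(mu, beta):
    # Gumbel log-likelihood with flat priors on (mu, beta > 0).
    if beta <= 0:
        return -np.inf
    z = (data - mu) / beta
    return np.sum(-z - np.exp(-z)) - data.size * np.log(beta)

# Random-walk Metropolis-Hastings over (mu, beta).
chain, current = [], np.array([40.0, 5.0])
lp = log_post(*current)
for _ in range(20_000):
    proposal = current + rng.normal(0, [1.0, 0.5])
    lp_prop = log_post(*proposal)
    if np.log(rng.uniform()) < lp_prop - lp:
        current, lp = proposal, lp_prop
    chain.append(current)
chain = np.array(chain[5_000:])        # discard burn-in

# Transform parameter draws to the T-year return level:
# mu - beta * log(-log(1 - 1/T)), giving a posterior over return levels.
T = 100
levels = chain[:, 0] - chain[:, 1] * np.log(-np.log(1 - 1 / T))
lo, hi = np.percentile(levels, [2.5, 97.5])
```

The width of the (lo, hi) interval is the parameter-uncertainty band around the return level; in the study this band is then combined with GCM uncertainty via REA weighting.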
Delayed neutron spectra and their uncertainties in fission product summation calculations
Energy Technology Data Exchange (ETDEWEB)
Miyazono, T.; Sagisaka, M.; Ohta, H.; Oyamatsu, K.; Tamaki, M. [Nagoya Univ. (Japan)
1997-03-01
Uncertainties in delayed neutron summation calculations are evaluated with ENDF/B-VI for 50 fissioning systems. As the first step, uncertainty calculations are performed for the aggregate delayed neutron activity with the same approximate method as proposed previously for the decay heat uncertainty analyses. Typical uncertainty values are about 6-14% for ²³⁸U(F) and about 13-23% for ²⁴³Am(F) at cooling times of 0.1-100 s. These values are typically 2-3 times larger than those in decay heat at the same cooling times. For aggregate delayed neutron spectra, the uncertainties would be larger than those for the delayed neutron activity because much more information about the nuclear structure is still necessary. (author)
Coulson-Thomas, Colin
2015-01-01
Examines risk management and contemporary issues concerning risk governance from a board perspective, including risk tolerance, innovation, insurance, balancing risks and other factors, risk and strategies of diversification or focus, increasing flexibility to cope with uncertainty, periodic planning versus intelligent steering, and limiting downside risks and adverse consequences.
Uncertainties in repository modeling
Energy Technology Data Exchange (ETDEWEB)
Wilson, J.R.
1996-12-31
The distant future is very difficult to predict. Unfortunately, our regulators are being encouraged to extend the regulatory period from the standard 10,000 years to 1 million years. Such overconfidence is not justified, given the uncertainties in dating, calibration, and modeling.
Vehicle Routing under Uncertainty
Máhr, T.
2011-01-01
In this thesis, the main focus is on the study of a real-world transportation problem with uncertainties, and on the comparison of a centralized and a distributed solution approach in the context of this problem. We formalize the real-world problem, and provide a general framework to extend it with
DEFF Research Database (Denmark)
Greasley, David; Madsen, Jakob B.
2006-01-01
A severe collapse of fixed capital formation distinguished the onset of the Great Depression from other investment downturns between the world wars. Using a model estimated for the years 1890-2000, we show that the expected profitability of capital measured by Tobin's q, and the uncertainty surro...... of the depression: rather, its slump helped to propel the wider collapse...
Cettolin, E.; Riedl, A.M.
2013-01-01
An important element for the public support of policies is their perceived justice. At the same time most policy choices have uncertain outcomes. We report the results of a first experiment investigating just allocations of resources when some recipients are exposed to uncertainty. Although, under c
Institute of Scientific and Technical Information of China (English)
范梦璇
2015-01-01
Employee change-related uncertainty is a condition in which, under the current continually changing business environment, organizations also have to change; the changes include strategic direction, structure and staffing levels to help the company stay competitive (Armenakis & Bedeian, 1999). However, these
Quantifying the risk of extreme aviation accidents
Das, Kumer Pial; Dey, Asim Kumer
2016-12-01
Air travel is considered a safe means of transportation. But when aviation accidents do occur, they often result in fatalities. Fortunately, the most extreme accidents occur rarely. However, 2014 was the deadliest year in the past decade, with 111 plane crashes; the worst four crashes caused 298, 239, 162 and 116 deaths. In this study, we assess the risk of catastrophic aviation accidents by studying historical aviation accidents. Applying a generalized Pareto model, we predict the maximum fatalities from a future aviation accident. The fitted model is compared with some of its competitor models. The uncertainty in the inferences is quantified using simulated aviation accident series generated by bootstrap resampling and Monte Carlo simulations.
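A peaks-over-threshold analysis of this kind can be sketched as follows. This is an illustrative reconstruction with synthetic fatality counts, not the authors' data or code; the 50-death threshold and the 0.999 quantile level are arbitrary choices.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical excesses of per-accident fatality counts over a 50-death threshold
threshold = 50
exceedances = rng.pareto(2.5, size=200) * 40.0

def extreme_quantile(sample, p=0.999):
    """p-quantile of a generalized Pareto fit to the excesses (location fixed at 0)."""
    c, _, s = stats.genpareto.fit(sample, floc=0)
    return threshold + stats.genpareto.ppf(p, c, loc=0, scale=s)

point_estimate = extreme_quantile(exceedances)

# Bootstrap the quantile to quantify the inferential uncertainty
boot = np.array([extreme_quantile(rng.choice(exceedances, size=exceedances.size))
                 for _ in range(200)])
ci = np.percentile(boot, [2.5, 97.5])
print(f"0.999 quantile: {point_estimate:.0f} deaths, 95% CI [{ci[0]:.0f}, {ci[1]:.0f}]")
```

Resampling the excesses and refitting, as above, is the bootstrap half of the abstract's uncertainty quantification; Monte Carlo simulation from the fitted model would replace `rng.choice` with draws from `stats.genpareto.rvs`.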
Uncertainty Quantification for Optical Model Parameters
Lovell, A E; Sarich, J; Wild, S M
2016-01-01
Although uncertainty quantification has been making its way into nuclear theory, these methods have yet to be explored in the context of reaction theory. For example, it is well known that different parameterizations of the optical potential can result in different cross sections, but these differences have not been systematically studied and quantified. The purpose of this work is to investigate the uncertainties in nuclear reactions that result from fitting a given model to elastic-scattering data, as well as to study how these uncertainties propagate to the inelastic and transfer channels. We use statistical methods to determine a best fit and create corresponding 95% confidence bands. A simple model of the process is fit to elastic-scattering data and used to predict either inelastic or transfer cross sections. In this initial work, we assume that our model is correct, and the only uncertainties come from the variation of the fit parameters. We study a number of reactions involving neutron and deuteron p...
Uncertainty Quantification for Cargo Hold Fires
DeGennaro, Anthony M; Martinelli, Luigi; Rowley, Clarence W
2015-01-01
The purpose of this study is twofold -- first, to introduce the application of high-order discontinuous Galerkin methods to buoyancy-driven cargo hold fire simulations, second, to explore statistical variation in the fluid dynamics of a cargo hold fire given parameterized uncertainty in the fire source location and temperature. Cargo hold fires represent a class of problems that require highly-accurate computational methods to simulate faithfully. Hence, we use an in-house discontinuous Galerkin code to treat these flows. Cargo hold fires also exhibit a large amount of uncertainty with respect to the boundary conditions. Thus, the second aim of this paper is to quantify the resulting uncertainty in the flow, using tools from the uncertainty quantification community to ensure that our efforts require a minimal number of simulations. We expect that the results of this study will provide statistical insight into the effects of fire location and temperature on cargo fires, and also assist in the optimization of f...
Idiosyncratic Uncertainty, Capacity Utilization and the Business Cycle
DEFF Research Database (Denmark)
Fragnart, Jean-Francois; Licandro, Omar; Portier, Franck
In a stochastic dynamic general equilibrium framework, we introduce the concept of variable capacity utilization (as opposed to the concept of capital utilization). We consider an economy where imperfectly competitive firms use a putty-clay technology and decide on their productive capacity level...... under uncertainty. An idiosyncratic uncertainty about the exact position of the demand curve faced by each firm explains why some productive capacities may remain idle in the sequel and why individual capacity utilization rates differ across firms. The capacity under-utilization at the aggregate level...... rate displays positive serial correlation)....
Uncertainty and validation. Effect of user interpretation on uncertainty estimates
Energy Technology Data Exchange (ETDEWEB)
Kirchner, G. [Univ. of Bremen (Germany); Peterson, R. [AECL, Chalk River, ON (Canada)] [and others
1996-11-01
Uncertainty in predictions of environmental transfer models arises from, among other sources, the adequacy of the conceptual model, the approximations made in coding the conceptual model, the quality of the input data, the uncertainty in parameter values, and the assumptions made by the user. In recent years, efforts to quantify the confidence that can be placed in predictions have been increasing, but have concentrated on a statistical propagation of the influence of parameter uncertainties on the calculated results. The primary objective of this Working Group of BIOMOVS II was to test users' influence on model predictions on a more systematic basis than has been done before. The main goals were as follows: To compare differences between predictions from different people all using the same model and the same scenario description with the statistical uncertainties calculated by the model. To investigate the main reasons for different interpretations by users. To create a better awareness of the potential influence of the user on the modeling results. Terrestrial food chain models driven by deposition of radionuclides from the atmosphere were used. Three codes were obtained and run with three scenarios by a maximum of 10 users. A number of conclusions can be drawn, some of which are general and independent of the type of models and processes studied, while others are restricted to the few processes that were addressed directly: For any set of predictions, the variation in best estimates was greater than one order of magnitude. Often the range increased from deposition to pasture to milk, probably due to additional transfer processes. The 95% confidence intervals about the predictions calculated from the parameter distributions prepared by the participants did not always overlap the observations; similarly, sometimes the confidence intervals on the predictions did not overlap. Often the 95% confidence intervals of individual predictions were smaller than the
Charm quark mass with calibrated uncertainty
Erler, Jens; Masjuan, Pere; Spiesberger, Hubert
2017-02-01
We determine the charm quark mass m̂_c from QCD sum rules of the moments of the vector current correlator calculated in perturbative QCD at O(α̂_s³). Only experimental data for the charm resonances below the continuum threshold are needed in our approach, while the continuum contribution is determined by requiring self-consistency between various sum rules, including the one for the zeroth moment. Existing data from the continuum region can then be used to bound the theoretical uncertainty. Our result is m̂_c(m̂_c) = 1272 ± 8 MeV for α̂_s(M_Z) = 0.1182, where the central value is in very good agreement with other recent determinations based on the relativistic sum rule approach. On the other hand, there is considerably less agreement regarding the theory-dominated uncertainty, and we pay special attention to the question of how to quantify and justify it.
Uncertainty and Sensitivity in Surface Dynamics Modeling
Kettner, Albert J.; Syvitski, James P. M.
2016-05-01
This special issue on 'Uncertainty and Sensitivity in Surface Dynamics Modeling' grew out of papers submitted after the 2014 annual meeting of the Community Surface Dynamics Modeling System, or CSDMS. CSDMS facilitates a diverse community of experts (now in 68 countries) that collectively investigates the Earth's surface, the dynamic interface between lithosphere, hydrosphere, cryosphere, and atmosphere, by promoting, developing, supporting and disseminating integrated open-source software modules. By organizing more than 1500 researchers, CSDMS has the privilege of identifying community strengths and weaknesses in the practice of software development. We recognize, for example, that progress has been slow on identifying and quantifying uncertainty and sensitivity in numerical modeling of the Earth's surface dynamics. This special issue is meant to raise awareness of these important subjects and highlight state-of-the-art progress.
Estimates of bias and uncertainty in recorded external dose
Energy Technology Data Exchange (ETDEWEB)
Fix, J.J.; Gilbert, E.S.; Baumgartner, W.V.
1994-10-01
A study is underway to develop an approach to quantify bias and uncertainty in recorded dose estimates for workers at the Hanford Site based on personnel dosimeter results. This paper focuses on selected experimental studies conducted to better define response characteristics of Hanford dosimeters. The study is more extensive than the experimental studies presented in this paper and includes detailed consideration and evaluation of other sources of bias and uncertainty. Hanford worker dose estimates are used in epidemiologic studies of nuclear workers. A major objective of these studies is to provide a direct assessment of the carcinogenic risk of exposure to ionizing radiation at low doses and dose rates. Considerations of bias and uncertainty in the recorded dose estimates are important in the conduct of this work. The method developed for use with Hanford workers can be considered an elaboration of the approach used to quantify bias and uncertainty in estimated doses for personnel exposed to radiation as a result of atmospheric testing of nuclear weapons between 1945 and 1962. This approach was first developed by a National Research Council (NRC) committee examining uncertainty in recorded film badge doses during atmospheric tests (NRC 1989). It involved quantifying both bias and uncertainty from three sources (i.e., laboratory, radiological, and environmental) and then combining them to obtain an overall assessment. Sources of uncertainty have been evaluated for each of three specific Hanford dosimetry systems (i.e., the Hanford two-element film dosimeter, 1944-1956; the Hanford multi-element film dosimeter, 1957-1971; and the Hanford multi-element TLD, 1972-1993) used to estimate personnel dose throughout the history of Hanford operations. Laboratory, radiological, and environmental sources of bias and uncertainty have been estimated based on historical documentation and, for angular response, on selected laboratory measurements.
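The NRC-style combination of bias and uncertainty from laboratory, radiological, and environmental sources can be illustrated with a toy calculation. The factor values below are invented for illustration, not taken from the Hanford study; a common convention, assumed here, treats biases as multiplicative factors and combines lognormal uncertainties by adding log-variances in quadrature.

```python
import math

# Hypothetical bias factors (recorded/true) and geometric standard deviations
# for the three NRC-style source categories; values are illustrative only.
sources = {
    "laboratory":    {"bias": 1.10, "gsd": 1.15},
    "radiological":  {"bias": 0.95, "gsd": 1.30},
    "environmental": {"bias": 1.05, "gsd": 1.10},
}

# Overall bias: product of the per-source factors (multiplicative model)
overall_bias = math.prod(s["bias"] for s in sources.values())

# Overall GSD: combine log-standard-deviations in quadrature
log_var = sum(math.log(s["gsd"]) ** 2 for s in sources.values())
overall_gsd = math.exp(math.sqrt(log_var))

recorded_dose = 12.0  # mSv, a made-up recorded value
corrected = recorded_dose / overall_bias
print(f"bias={overall_bias:.3f}, GSD={overall_gsd:.3f}, "
      f"corrected dose={corrected:.2f} mSv")
```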
Novel aspects of platelet aggregation
Directory of Open Access Journals (Sweden)
Roka-Moya Y. M.
2014-01-01
Full Text Available The platelet aggregation is an important process, which is critical for the hemostatic plug formation and thrombosis. Recent studies have shown that the platelet aggregation is more complex and dynamic than it was previously thought. There are several mechanisms that can initiate the platelet aggregation and each of them operates under specific conditions in vivo. At the same time, the influence of certain plasma proteins on this process should be considered. This review intends to summarize the recent data concerning the adhesive molecules and their receptors, which provide the platelet aggregation under different conditions.
Fractal Aggregation Under Rotation
Institute of Scientific and Technical Information of China (English)
WU Feng-Min; WU Li-Li; LU Hang-Jun; LI Qiao-Wen; YE Gao-Xiang
2004-01-01
By means of the Monte Carlo simulation, a fractal growth model is introduced to describe diffusion-limited aggregation (DLA) under rotation. Patterns which are different from the classical DLA model are observed and the fractal dimension of such clusters is calculated. It is found that the pattern of the clusters and their fractal dimension depend strongly on the rotation velocity of the diffusing particle. Our results indicate the transition from fractal to non-fractal behavior of growing cluster with increasing rotation velocity, i.e. for small enough angular velocity ω the fractal dimension decreases with increasing ω, but then, with increasing rotation velocity, the fractal dimension increases and the cluster becomes compact and tends to non-fractal.
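The rotating-DLA growth rule can be sketched with a small off-lattice Monte Carlo simulation. This is a schematic reconstruction, not the authors' model: cluster rotation at angular velocity ω is folded into the walker as an equivalent angular drift, and the launch radius, escape radius and sticking distance are ad hoc choices.

```python
import numpy as np

rng = np.random.default_rng(2)

def dla_with_rotation(n_particles=60, omega=0.2, step=1.0):
    """Off-lattice DLA where the cluster rotates at angular velocity omega.

    Rotating the cluster is equivalent to giving each walker an angular
    drift of -omega per time step, which is how it is implemented here.
    """
    cluster = np.zeros((n_particles, 2))  # seed particle at the origin
    n, r_max = 1, 0.0
    while n < n_particles:
        # launch the walker on a circle just outside the current cluster
        ang = rng.uniform(0.0, 2.0 * np.pi)
        pos = (r_max + 5.0) * np.array([np.cos(ang), np.sin(ang)])
        while True:
            pos += rng.normal(scale=step, size=2)       # diffusion step
            r = np.hypot(pos[0], pos[1])
            theta = np.arctan2(pos[1], pos[0]) - omega  # rotation drift
            pos = r * np.array([np.cos(theta), np.sin(theta)])
            if r > 3.0 * (r_max + 5.0):                 # escaped: relaunch
                ang = rng.uniform(0.0, 2.0 * np.pi)
                pos = (r_max + 5.0) * np.array([np.cos(ang), np.sin(ang)])
                continue
            # stick on contact with any particle already in the cluster
            if np.min(np.linalg.norm(cluster[:n] - pos, axis=1)) < step:
                cluster[n] = pos
                r_max = max(r_max, r)
                n += 1
                break
    return cluster

cluster = dla_with_rotation()
print(len(cluster), float(np.linalg.norm(cluster, axis=1).max()))
```

Sweeping `omega` from 0 upward and estimating the fractal dimension of the resulting clusters (e.g. by box counting) reproduces the kind of fractal-to-compact transition the abstract describes.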
Platelet aggregation following trauma
DEFF Research Database (Denmark)
Windeløv, Nis A; Sørensen, Anne M; Perner, Anders
2014-01-01
We aimed to elucidate platelet function in trauma patients, as it is pivotal for hemostasis yet remains scarcely investigated in this population. We conducted a prospective observational study of platelet aggregation capacity in 213 adult trauma patients on admission to an emergency department (ED......). Inclusion criteria were trauma team activation and arterial cannula insertion on arrival. Blood samples were analyzed by multiple electrode aggregometry initiated by thrombin receptor agonist peptide 6 (TRAP) or collagen using a Multiplate device. Blood was sampled median 65 min after injury; median injury...... severity score (ISS) was 17; 14 (7%) patients received 10 or more units of red blood cells in the ED (massive transfusion); 24 (11%) patients died within 28 days of trauma: 17 due to cerebral injuries, four due to exsanguination, and three from other causes. No significant association was found between...
Quantifying the value of redundant measurements at GRUAN sites
Directory of Open Access Journals (Sweden)
F. Madonna
2014-06-01
Full Text Available The potential for measurement redundancy to reduce uncertainty in atmospheric variables has not been investigated comprehensively for climate observations. We evaluated the usefulness of entropy and mutual correlation concepts, as defined in information theory, for quantifying random uncertainty and redundancy in time series of atmospheric water vapor provided by five highly instrumented GRUAN (GCOS [Global Climate Observing System] Reference Upper-Air Network) stations in 2010-2012. Results show that the random uncertainties for radiosonde, frost-point hygrometer, Global Positioning System, microwave and infrared radiometers, and Raman lidar measurements differed by less than 8%. Comparisons of time series of the Integrated Water Vapor (IWV) content from ground-based remote sensing instruments with in situ soundings showed that microwave radiometers have the highest redundancy and therefore the highest potential to reduce random uncertainty of IWV time series estimated by radiosondes. Moreover, the random uncertainty of a time series from one instrument can be reduced by ~60% by constraining the measurements with those from another instrument. The best reduction of random uncertainty resulted from conditioning Raman lidar measurements on microwave radiometer measurements. Specific instruments are recommended for atmospheric water vapor measurements at GRUAN sites. This approach can be applied to the study of redundant measurements for other climate variables.
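The entropy-based redundancy idea can be illustrated with a histogram estimate of mutual information between two synthetic, co-located water-vapor series. The instrument labels and noise levels below are placeholders, not GRUAN values.

```python
import numpy as np

rng = np.random.default_rng(3)

def mutual_information(x, y, bins=20):
    """Histogram (plug-in) estimate of mutual information in bits."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

# Two synthetic co-located water-vapor series sharing a common signal,
# plus an unrelated series for comparison (noise levels are made up)
common = rng.normal(size=5000)
radiosonde = common + 0.3 * rng.normal(size=5000)
radiometer = common + 0.2 * rng.normal(size=5000)
independent = rng.normal(size=5000)

mi_redundant = mutual_information(radiosonde, radiometer)
mi_unrelated = mutual_information(radiosonde, independent)
print(f"redundant: {mi_redundant:.2f} bits, unrelated: {mi_unrelated:.2f} bits")
```

A high mutual information between two instruments is exactly the redundancy the abstract exploits: the second series carries information that can constrain, and thereby reduce, the random uncertainty of the first.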
Kinetics of Monoclonal Antibody Aggregation from Dilute toward Concentrated Conditions.
Nicoud, Lucrèce; Jagielski, Jakub; Pfister, David; Lazzari, Stefano; Massant, Jan; Lattuada, Marco; Morbidelli, Massimo
2016-04-07
Gaining understanding on the aggregation behavior of proteins under concentrated conditions is of both fundamental and industrial relevance. Here, we study the aggregation kinetics of a model monoclonal antibody (mAb) under thermal stress over a wide range of protein concentrations in various buffer solutions. We follow experimentally the monomer depletion and the aggregate growth by size exclusion chromatography with inline light scattering. We describe the experimental results in the frame of a kinetic model based on population balance equations, which allows one to discriminate the contributions of the conformational and of the colloidal stabilities to the global aggregation rate. Finally, we propose an expression for the aggregation rate constant, which accounts for solution viscosity, protein-protein interactions, as well as aggregate compactness. All these effects can be quantified by light scattering techniques. It is found that the model describes well the experimental data under dilute conditions. Under concentrated conditions, good model predictions are obtained when the solution pH is far below the isoelectric point (pI) of the mAb. However, peculiar effects arise when the solution pH is increased toward the mAb pI, and possible explanations are discussed.
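A minimal population-balance model of the kind used in such studies can be written down with a constant aggregation kernel. This sketch ignores conformational changes, fragmentation and viscosity effects, and the rate constant and initial concentration are arbitrary.

```python
import numpy as np
from scipy.integrate import solve_ivp

K_AGG = 1e-3   # aggregation rate constant, arbitrary units (assumed constant)
N_MAX = 50     # truncate the population balance at 50-mers

def smoluchowski(t, n):
    """Population balance equations with a size-independent kernel.

    n[k] is the concentration of (k+1)-mers.
    """
    dn = np.zeros_like(n)
    total = n.sum()
    for k in range(N_MAX):
        # formation of a (k+1)-mer from two smaller aggregates,
        # minus its loss by aggregation with anything else
        birth = 0.5 * sum(n[i] * n[k - 1 - i] for i in range(k))
        dn[k] = K_AGG * (birth - n[k] * total)
    return dn

n0 = np.zeros(N_MAX)
n0[0] = 100.0  # monomers only at t = 0
sol = solve_ivp(smoluchowski, (0.0, 50.0), n0)

monomer = sol.y[0]
print(f"monomer depletion: {monomer[0]:.1f} -> {monomer[-1]:.2f}")
```

Fitting the simulated monomer trace to size-exclusion chromatography data, as in the paper, would turn `K_AGG` into the measured global aggregation rate constant.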
Characterization of Nanoparticle Aggregation in Biologically Relevant Fluids
McEnnis, Kathleen; Lahann, Joerg
Nanoparticles (NPs) are often studied as drug delivery vehicles, but little is known about their behavior in blood once injected into animal models. If the NPs aggregate in blood, they will be shunted to the liver or spleen instead of reaching the intended target. The use of animals for these experiments is costly and raises ethical questions. Typically dynamic light scattering (DLS) is used to analyze aggregation behavior, but DLS cannot be used because the components of blood also scatter light. As an alternative, a method of analyzing NPs in biologically relevant fluids such as blood plasma has been developed using nanoparticle tracking analysis (NTA) with fluorescent filters. In this work, NTA was used to analyze the aggregation behavior of fluorescent polystyrene NPs with different surface modifications in blood plasma. It was expected that different surface chemistries on the particles will change the aggregation behavior. The effect of the surface modifications was investigated by quantifying the percentage of NPs in aggregates after addition to blood plasma. The use of this characterization method will allow for better understanding of particle behavior in the body, and potential problems, specifically aggregation, can be addressed before investing in in vivo studies.
Quantification of Aggregate Topology, the Minimum Dimension and Connectivity
Rai, Durgesh; Beaucage, Gregory; Ilavsky, Jan; Kammler, Hendrik
2010-03-01
The properties (electrical conductivity, diffusion coefficient, spring constant) of nanostructured ceramic aggregates can be determined only if details of the structural topology are known. For example, the mechanical strength of an aggregate depends only on the shortest average path through the aggregate, called the minimum path. Most characterization methods fail to quantify the topology. Values of the minimum dimension, associated with the minimum path, and the spectral dimension, associated with energy distribution in an aggregate, have been considered only in simulations and models. Recently we have developed a method using small-angle neutron and x-ray scattering for the quantification of the details of topology in aggregated materials (Beaucage 2004, Ramachandran 2008, 2009). In situ SAXS studies of flame aerosols containing nanostructured aggregates will be presented. Their topology as a function of growth time on the millisecond time scale will be described. Beaucage G, Phys. Rev. E 70, 031401 (2004); Ramachandran R, et al., Macromolecules 41, 9802-9806 (2008); Ramachandran R, et al., Macromolecules 42, 4746-4750 (2009).
A multiscale view of therapeutic protein aggregation: a colloid science perspective.
Nicoud, Lucrèce; Owczarz, Marta; Arosio, Paolo; Morbidelli, Massimo
2015-03-01
The formation of aggregates in protein-based pharmaceuticals is a major issue that can compromise drug safety and drug efficacy. With a view to improving protein stability, considerable effort is put forth to unravel the fundamental mechanisms underlying the aggregation process. However, therapeutic protein aggregation is a complex multistep phenomenon that involves time and length scales spanning several orders of magnitude, and strategies addressing protein aggregation inhibition are currently still largely empirical in practice. Here, we review how key concepts developed in the frame of colloid science can be applied to gain knowledge on the kinetics and thermodynamics of therapeutic protein aggregation across different length scales. In particular, we discuss the use of coarse-grained molecular interaction potentials to quantify protein colloidal stability. We then show how population balance equations simulations can provide insights into the mechanisms of aggregate formation at the mesoscale, and we highlight the strength of the concept of fractal scaling to quantify irregular aggregate morphologies. Finally, we correlate the macroscopic rheological properties of protein solutions with the occupied volume fraction and the aggregate structure. Overall, this work illustrates the power and limitations of colloidal approaches in the multiscale description of the aggregation of therapeutic proteins.
A Multi-objective Model for Transmission Planning Under Uncertainties
DEFF Research Database (Denmark)
Zhang, Chunyu; Wang, Qi; Ding, Yi;
2014-01-01
The significant growth of distributed energy resources (DERs) associated with smart grid technologies has introduced substantial uncertainties into the transmission system. The most representative is the novel notion of the commercial aggregator, which has opened a promising way for DERs to participate in power...... trading and regulation at the transmission level. In this paper, the aggregator-caused uncertainty is analyzed first, considering DERs' correlation. For the transmission planning, a scenario-based multi-objective transmission planning (MOTP) framework is proposed to simultaneously optimize two objectives, i.e. the cost of power purchase and network expansion, and the revenue of power delivery. A two-phase multi-objective PSO (MOPSO) algorithm is employed as the solver. The feasibility of the proposed multi-objective planning approach has been verified on the 77-bus system linked with a 38-bus distribution...
Assessment of errors and uncertainty patterns in GIA modeling
DEFF Research Database (Denmark)
Barletta, Valentina Roberta; Spada, G.
, such as time-evolving shorelines and paleo-coastlines. In this study we quantify these uncertainties and their propagation in GIA response using a Monte Carlo approach to obtain spatio-temporal patterns of GIA errors. A direct application is the error estimates in ice mass balance in Antarctica and Greenland...
Assessment of errors and uncertainty patterns in GIA modeling
DEFF Research Database (Denmark)
Barletta, Valentina Roberta; Spada, G.
2012-01-01
, such as time-evolving shorelines and paleo coastlines. In this study we quantify these uncertainties and their propagation in GIA response using a Monte Carlo approach to obtain spatio-temporal patterns of GIA errors. A direct application is the error estimates in ice mass balance in Antarctica and Greenland...
Uncertainty quantification in Rothermel's Model using an efficient sampling method
Edwin Jimenez; M. Yousuff Hussaini; Scott L. Goodrick
2007-01-01
The purpose of the present work is to quantify parametric uncertainty in Rothermel's wildland fire spread model (implemented in software such as BehavePlus3 and FARSITE), which is undoubtedly among the most widely used fire spread models in the United States. This model consists of a nonlinear system of equations that relates environmental variables (input parameter...
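Parametric uncertainty propagation of this sort can be sketched with Latin hypercube sampling over a toy spread-rate surrogate. The function and input ranges below are invented stand-ins, not Rothermel's equations or the paper's sampling scheme.

```python
import numpy as np

rng = np.random.default_rng(4)

def spread_rate(wind, moisture, load):
    """Toy surrogate: rate rises with wind speed and fuel load, falls with
    fuel moisture. Coefficients are illustrative only."""
    return 0.5 * load * (1.0 + 0.3 * wind**1.4) * np.exp(-4.0 * moisture)

n = 10000
# Latin hypercube sampling: stratify each input into n bins, then shuffle
u = (np.arange(n) + rng.uniform(size=(3, n))) / n
for row in u:
    rng.shuffle(row)
wind = 2.0 + 8.0 * u[0]          # m/s
moisture = 0.05 + 0.20 * u[1]    # fraction
load = 0.3 + 1.2 * u[2]          # kg/m^2

rates = spread_rate(wind, moisture, load)
print(f"mean={rates.mean():.3f}, std={rates.std():.3f}, "
      f"95% interval=({np.percentile(rates, 2.5):.3f}, "
      f"{np.percentile(rates, 97.5):.3f})")
```

Stratifying each input, as above, covers the parameter space more evenly than plain Monte Carlo, which is the spirit of the "efficient sampling" the title refers to.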
Uncertainty quantification of soil property maps with statistical expert elicitation
Truong, N.P.; Heuvelink, G.B.M.
2013-01-01
Accuracy assessment and uncertainty analyses are key to the quality of data and data analysis in a wide array of scientific disciplines. For soil science, it is important to quantify the accuracy of soil maps that are used in environmental and agro-ecological studies and decision making. Many soil m
Quantifying the efficiency of river regulation
Directory of Open Access Journals (Sweden)
R. Rödel
2005-01-01
Full Text Available Dam-affected hydrologic time series give rise to uncertainties when they are used for calibrating large-scale hydrologic models or for analysing runoff records. It is therefore necessary to identify and to quantify the impact of impoundments on runoff time series. Two different approaches were employed. The first, classic approach compares the volume of the dams that are located upstream from a station with the annual discharge. The catchment areas of the stations are calculated and then related to geo-referenced dam attributes. The paper introduces a data set of geo-referenced dams linked with 677 gauging stations in Europe. Second, the intensity of the impoundment impact on runoff time series can be quantified more exactly and directly when long-term runoff records are available. Dams cause a change in the variability of flow regimes. This effect can be measured using a linear single-storage model. The dam-caused storage change ΔS can be assessed through the volume of the emptying process between two flow regimes. As an example, the storage change ΔS is calculated for regulated long-term series of the Luleälven in northern Sweden.
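The first, volume-based approach can be illustrated with a back-of-the-envelope calculation. The reservoir volumes and discharge below are invented, and "degree of regulation" (total upstream storage divided by annual runoff volume) is one common convention, assumed here rather than quoted from the paper.

```python
# Hypothetical upstream reservoirs and a gauging-station discharge record
reservoirs_km3 = [1.2, 0.4, 2.1]   # storage capacity of upstream dams, km^3
mean_discharge_m3s = 350.0         # long-term mean discharge at the station

seconds_per_year = 365.25 * 24 * 3600
annual_runoff_km3 = mean_discharge_m3s * seconds_per_year / 1e9

# Classic "degree of regulation": total upstream storage / annual runoff
degree_of_regulation = sum(reservoirs_km3) / annual_runoff_km3
print(f"annual runoff = {annual_runoff_km3:.2f} km^3, "
      f"degree of regulation = {degree_of_regulation:.2f}")
```

A value near or above 1 would mean the upstream reservoirs can hold a full year of runoff, flagging the station's record as strongly dam-affected.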
Traceability and Measurement Uncertainty
DEFF Research Database (Denmark)
Tosello, Guido; De Chiffre, Leonardo
2004-01-01
respects necessary scientific precision and problem-solving approach of the field of engineering studies. Competences should be presented in a way that is methodologically and didactically optimised for employees with a mostly work-based vocational qualification and should at the same time be appealing...... and motivating to this important group. The developed e-learning system consists of 12 different chapters dealing with the following topics: 1. Basics 2. Traceability and measurement uncertainty 3. Coordinate metrology 4. Form measurement 5. Surface testing 6. Optical measurement and testing 7. Measuring rooms 8.... Machine tool testing 9. The role of manufacturing metrology for QM 10. Inspection planning 11. Quality management of measurements incl. Documentation 12. Advanced manufacturing measurement technology The present report (which represents the section 2 - Traceability and Measurement Uncertainty – of the e
Vámos, Tibor
The gist of the paper is the fundamentally uncertain nature of all kinds of uncertainty and, consequently, a critical epistemic review of historical and recent approaches, computational methods, and algorithms. The review follows the development of the notion from the beginnings of thinking, via the Aristotelian and Skeptic views, medieval nominalism and the influential pioneering metaphors of ancient India and Persia, to the birth of modern mathematical disciplinary reasoning. Discussing the models of uncertainty, e.g. the statistical and other physical and psychological backgrounds, we reach a pragmatic, model-related estimation perspective: a balanced application orientation for different problem areas. Data mining, game theories and recent advances in approximation algorithms are discussed in this spirit of modest reasoning.
Coalition Formation under Uncertainty
2010-03-01
Unfortunately, many current approaches to coalition formation lack provisions for uncertainty. This prevents application of coalition formation techniques ... should also include mechanisms and processing techniques that provide stability, scalability, and, at a minimum, optimality relative to agent beliefs ... relocate a piano. For the sake of simplicity, assume payment is divided evenly among the participants in the move (i.e., each mover has the same utility or
Optimizing production under uncertainty
DEFF Research Database (Denmark)
Rasmussen, Svend
This Working Paper derives criteria for optimal production under uncertainty based on the state-contingent approach (Chambers and Quiggin, 2000), and discusses potential problems involved in applying the state-contingent approach in a normative context. The analytical approach uses the concept o...... the relative benefits and of using the state-contingent approach in a normative context, compared to the EV model.
Uncertainty in artificial intelligence
Shachter, RD; Henrion, M; Lemmer, JF
1990-01-01
This volume, like its predecessors, reflects the cutting edge of research on the automation of reasoning under uncertainty. A more pragmatic emphasis is evident, for although some papers address fundamental issues, the majority address practical issues. Topics include the relations between alternative formalisms (including possibilistic reasoning), Dempster-Shafer belief functions, non-monotonic reasoning, Bayesian and decision theoretic schemes, and new inference techniques for belief nets. New techniques are applied to important problems in medicine, vision, robotics, and natural language und
1981-05-15
Variants of Uncertainty. Daniel Kahneman, University of British Columbia; Amos Tversky, Stanford University. May 15, 1981. ... (Dennett, 1979) in which different parts have access to different data, assign them different weights and hold different views of the situation ... The Probable and the Provable. Oxford: Clarendon Press, 1977. Dennett, D.C. Brainstorms. Hassocks: Harvester, 1979. Donchin, E., Ritter, W. & McCallum, W.C
Uncertainties in climate change projections for viticulture in Portugal
Fraga, Helder; Malheiro, Aureliano C.; Moutinho-Pereira, José; Pinto, Joaquim G.; Santos, João A.
2013-04-01
The assessment of climate change impacts on viticulture is often carried out using regional climate model (RCM) outputs. These studies rely on either multi-model ensembles or on single-model approaches. The RCM-ensembles account for uncertainties inherent to the different models. In this study, using a 16-RCM ensemble under the IPCC A1B scenario, the climate change signal (future minus recent-past, 2041-2070 - 1961-2000) of 4 bioclimatic indices (Huglin Index - HI, Dryness Index - DI, Hydrothermal Index - HyI and CompI - Composite Index) over mainland Portugal is analysed. A normalized interquartile range (NIQR) of the 16-member ensemble for each bioclimatic index is assessed in order to quantify the ensemble uncertainty. The results show significant increases in the HI index over most of Portugal, with higher values in Alentejo, Trás-os-Montes and Douro/Porto wine regions, also depicting very low uncertainty. Conversely, the decreases in the DI pattern throughout the country show large uncertainties, except in Minho (northwestern Portugal), where precipitation reaches the highest amounts in Portugal. The HyI shows significant decreases in northwestern Portugal, with relatively low uncertainty all across the country. The CompI depicts significant decreases over Alentejo and increases over Minho, though decreases over Alentejo reveal high uncertainty, while increases over Minho show low uncertainty. The assessment of the uncertainty in climate change projections is of great relevance for the wine industry. Quantifying this uncertainty is crucial, since different models may lead to quite different outcomes and may thereby be as crucial as climate change itself to the winemaking sector. This work is supported by European Union Funds (FEDER/COMPETE - Operational Competitiveness Programme) and by national funds (FCT - Portuguese Foundation for Science and Technology) under the project FCOMP-01-0124-FEDER-022692.
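The ensemble-uncertainty measure described above can be sketched numerically. The exact normalization used in the study is not specified here, so the sketch below assumes the interquartile range is normalized by the magnitude of the ensemble median; the `normalized_iqr` helper, grid size, and data are illustrative, not the paper's:

```python
import numpy as np

def normalized_iqr(ensemble):
    """Normalized interquartile range (NIQR) across ensemble members.

    ensemble: array of shape (n_members, *grid_dims).
    Returns the IQR divided by |ensemble median|, so larger values
    indicate greater inter-model disagreement (uncertainty).
    """
    q75, q25 = np.percentile(ensemble, [75, 25], axis=0)
    median = np.median(ensemble, axis=0)
    return (q75 - q25) / np.abs(median)

# 16-member ensemble of a bioclimatic index on a 3x3 grid (synthetic data)
rng = np.random.default_rng(0)
index_signal = rng.normal(loc=100.0, scale=5.0, size=(16, 3, 3))
niqr = normalized_iqr(index_signal)
```

Cells with small NIQR correspond to regions where the 16 RCMs agree on the climate change signal, such as the low-uncertainty HI increases reported for Alentejo and Douro/Porto.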
CSIR Research Space (South Africa)
Du Plessis, L
2007-10-01
Full Text Available. It is well known that the performance of plain jointed concrete pavements depends on aggregate interlock to transfer load from one slab to the next. In order to quantify the relative contribution of crack width and the strength of the aggregate to the long-term performance of a plain jointed pavement, experimental sections of road...
Uncertainty Quantification in Aeroelasticity
Beran, Philip; Stanford, Bret; Schrock, Christopher
2017-01-01
Physical interactions between a fluid and structure, potentially manifested as self-sustained or divergent oscillations, can be sensitive to many parameters whose values are uncertain. Of interest here are aircraft aeroelastic interactions, which must be accounted for in aircraft certification and design. Deterministic prediction of these aeroelastic behaviors can be difficult owing to physical and computational complexity. New challenges are introduced when physical parameters and elements of the modeling process are uncertain. By viewing aeroelasticity through a nondeterministic prism, where key quantities are assumed stochastic, one may gain insights into how to reduce system uncertainty, increase system robustness, and maintain aeroelastic safety. This article reviews uncertainty quantification in aeroelasticity using traditional analytical techniques not reliant on computational fluid dynamics; compares and contrasts this work with emerging methods based on computational fluid dynamics, which target richer physics; and reviews the state of the art in aeroelastic optimization under uncertainty. Barriers to continued progress, for example, the so-called curse of dimensionality, are discussed.
Calibration Under Uncertainty.
Energy Technology Data Exchange (ETDEWEB)
Swiler, Laura Painton; Trucano, Timothy Guy
2005-03-01
This report is a white paper summarizing the literature and different approaches to the problem of calibrating computer model parameters in the face of model uncertainty. Model calibration is often formulated as finding the parameters that minimize the squared difference between the model-computed data (the predicted data) and the actual experimental data. This approach does not allow for explicit treatment of uncertainty or error in the model itself: the model is considered the "true" deterministic representation of reality. While this approach does have utility, it is far from an accurate mathematical treatment of the true model calibration problem in which both the computed data and experimental data have error bars. This year, we examined methods to perform calibration accounting for the error in both the computer model and the data, as well as improving our understanding of its meaning for model predictability. We call this approach Calibration under Uncertainty (CUU). This talk presents our current thinking on CUU. We outline some current approaches in the literature, and discuss the Bayesian approach to CUU in detail.
Participation under Uncertainty
Energy Technology Data Exchange (ETDEWEB)
Boudourides, Moses A. [Univ. of Patras, Rio-Patras (Greece). Dept. of Mathematics
2003-10-01
This essay reviews a number of theoretical perspectives about uncertainty and participation in the present-day knowledge-based society. After discussing the on-going reconfigurations of science, technology and society, we examine how appropriate for policy studies are various theories of social complexity. Post-normal science is such an example of a complexity-motivated approach, which justifies civic participation as a policy response to an increasing uncertainty. But there are different categories and models of uncertainties implying a variety of configurations of policy processes. A particular role in all of them is played by expertise whose democratization is an often-claimed imperative nowadays. Moreover, we discuss how different participatory arrangements are shaped into instruments of policy-making and framing regulatory processes. As participation necessitates and triggers deliberation, we proceed to examine the role and the barriers of deliberativeness. Finally, we conclude by referring to some critical views about the ultimate assumptions of recent European policy frameworks and the conceptions of civic participation and politicization that they invoke.
Exciton dynamics in molecular aggregates
Augulis, R.; Pugžlys, A.; Loosdrecht, P.H.M. van
2006-01-01
The fundamental aspects of exciton dynamics in double-wall cylindrical aggregates of cyanine dyes are studied by means of frequency resolved femtosecond pump-probe spectroscopy. The collective excitations of the aggregates, resulting from intermolecular dipole-dipole interactions have the characteri
Aggregate resources in the Netherlands
Meulen, M.J. van der; Gessel, S.F. van; Veldkamp, J.G.
2005-01-01
We have built a 3D lithological model of the Netherlands, for the purpose of mapping on-land aggregate resources down to 50 m below the surface. The model consists of voxel cells (1000 · 1000 · 1 m), with lithological composition and aggregate content estimates as primary attributes. These attribute
Uncertainty of measurement or of mean value for the reliable classification of contaminated land.
Boon, Katy A; Ramsey, Michael H
2010-12-15
Classification of contaminated land is important for risk assessment and so it is vital to understand and quantify all of the uncertainties that are involved in the assessment of contaminated land. This paper uses a case study to compare two methods for assessing the uncertainty in site investigations (uncertainty of individual measurements, including that from sampling, and uncertainty of the mean value of all measurements within an area) and how the different methods affect the decisions made about a site. Using the 'uncertainty of the mean value' there is shown to be no significant possibility of 'significant harm' under UK guidance at one particular test site, but if you consider the 'uncertainty of the measurements' a significant proportion (50%) of the site is shown to be possibly contaminated. This raises doubts as to whether the current method using 'uncertainty of the mean' is sufficiently robust, and suggests that 'uncertainty of measurement' information may be preferable, or at least beneficial when used in conjunction.
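The two decision rules compared in the case study can be sketched as follows; the threshold, measurement uncertainty, and data are synthetic illustrations, and the coverage factor of 2 (~95% expanded uncertainty) is an assumption:

```python
import numpy as np

def classify_by_mean(x, threshold):
    """'Possibly contaminated' if the upper confidence bound of the site
    mean exceeds the threshold (the 'uncertainty of the mean value' rule)."""
    u_mean = x.std(ddof=1) / np.sqrt(x.size)   # standard uncertainty of the mean
    return x.mean() + 2 * u_mean > threshold   # coverage factor k = 2

def fraction_flagged(x, u_meas, threshold):
    """Fraction of individual locations whose measurement, expanded by its
    own uncertainty (including sampling), could exceed the threshold
    (the 'uncertainty of measurement' rule)."""
    return np.mean(x + 2 * u_meas > threshold)

x = np.array([40., 55., 48., 62., 45., 58., 50., 44.])  # mg/kg, synthetic
threshold = 60.0
site_flagged = classify_by_mean(x, threshold)            # False here
frac = fraction_flagged(x, 10.0, threshold)              # large fraction flagged
```

As in the paper's case study, the mean-based rule can clear a site even when a substantial fraction of individual measurements, once their (sampling-inclusive) uncertainty is considered, could exceed the guideline value.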
How incorporating more data reduces uncertainty in recovery predictions
Energy Technology Data Exchange (ETDEWEB)
Campozana, F.P.; Lake, L.W.; Sepehrnoori, K. [Univ. of Texas, Austin, TX (United States)
1997-08-01
From the discovery to the abandonment of a petroleum reservoir, there are many decisions that involve economic risks because of uncertainty in the production forecast. This uncertainty may be quantified by performing stochastic reservoir modeling (SRM); however, it is not practical to apply SRM every time the model is updated to account for new data. This paper suggests a novel procedure to estimate reservoir uncertainty (and its reduction) as a function of the amount and type of data used in the reservoir modeling. Two types of data are analyzed: conditioning data and well-test data. However, the same procedure can be applied to any other data type. Three performance parameters are suggested to quantify uncertainty. SRM is performed for the following typical stages: discovery, primary production, secondary production, and infill drilling. From those results, a set of curves is generated that can be used to estimate (1) the uncertainty for any other situation and (2) the uncertainty reduction caused by the introduction of new wells (with and without well-test data) into the description.
Structural uncertainty in watershed phosphorus modeling: Toward a stochastic framework
Chen, Lei; Gong, Yongwei; Shen, Zhenyao
2016-06-01
Structural uncertainty is an important source of model predictive errors, but few studies have been conducted on the error-transitivity from model structure to nonpoint source (NPS) prediction. In this study, we focused on the structural uncertainty caused by the algorithms and equations that are used to describe the phosphorus (P) cycle at the watershed scale. The sensitivity of simulated P to each algorithm/equation was quantified using the Soil and Water Assessment Tool (SWAT) in the Three Gorges Reservoir Area, China. The results indicated that the ratios of C:N and P:N for humic materials, as well as the algorithm of fertilization and P leaching contributed the largest output uncertainties. In comparison, the initiation of inorganic P in the soil layer and the transformation algorithm between P pools are less sensitive for the NPS-P predictions. In addition, the coefficient of variation values were quantified as 0.028-0.086, indicating that the structure-induced uncertainty is minor compared to NPS-P prediction uncertainty caused by the model input and parameters. Using the stochastic framework, the cumulative probability of simulated NPS-P data provided a trade-off between expenditure burden and desired risk. In this sense, this paper provides valuable information for the control of model structural uncertainty, and can be extrapolated to other model-based studies.
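The coefficient-of-variation metric used above to express structure-induced spread can be sketched in a few lines; the load values are synthetic and the helper name is hypothetical:

```python
import numpy as np

def coefficient_of_variation(samples):
    """CV = sample std / mean; here a measure of the spread in simulated
    NPS-P loads across alternative P-cycle algorithms, relative to the mean."""
    samples = np.asarray(samples, dtype=float)
    return float(samples.std(ddof=1) / samples.mean())

# NPS-P loads simulated under alternative P-cycle structures (synthetic, t/yr)
loads = [1.02, 0.97, 1.05, 0.99, 1.01]
cv = coefficient_of_variation(loads)   # small CV => minor structural uncertainty
```

CV values in the reported 0.028-0.086 range indicate that swapping P-cycle algorithms perturbs the simulated loads by only a few percent, consistent with structural uncertainty being minor relative to input and parameter uncertainty.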
A comparison of approximate reasoning results using information uncertainty
Energy Technology Data Exchange (ETDEWEB)
Chavez, Gregory [Los Alamos National Laboratory; Key, Brian [Los Alamos National Laboratory; Zerkle, David [Los Alamos National Laboratory; Shevitz, Daniel [Los Alamos National Laboratory
2009-01-01
An Approximate Reasoning (AR) model is a useful alternative to a probabilistic model when there is a need to draw conclusions from information that is qualitative. For certain systems, much of the information available is elicited from subject matter experts (SME). One such example is the risk of attack on a particular facility by a pernicious adversary. In this example there are several avenues of attack, i.e. scenarios, and AR can be used to model the risk of attack associated with each scenario. The qualitative information available and provided by the SME is comprised of linguistic values which are well suited for an AR model but meager for other modeling approaches. AR models can produce many competing results. Associated with each competing AR result is a vector of linguistic values and a respective degree of membership in each value. A suitable means to compare and segregate AR results would be an invaluable tool to analysts and decision makers. A viable method would be to quantify the information uncertainty present in each AR result, then use the measured quantity comparatively. One issue of concern for measuring the information uncertainty involved with fuzzy uncertainty is that previously proposed approaches focus on the information uncertainty involved within the entire fuzzy set. This paper proposes extending measures of information uncertainty to AR results, which involve only one degree of membership for each fuzzy set included in the AR result. An approach to quantify the information uncertainty in the AR result is presented.
Uncertainty Estimation Improves Energy Measurement and Verification Procedures
Energy Technology Data Exchange (ETDEWEB)
Walter, Travis; Price, Phillip N.; Sohn, Michael D.
2014-05-14
Implementing energy conservation measures in buildings can reduce energy costs and environmental impacts, but such measures cost money to implement so intelligent investment strategies require the ability to quantify the energy savings by comparing actual energy used to how much energy would have been used in absence of the conservation measures (known as the baseline energy use). Methods exist for predicting baseline energy use, but a limitation of most statistical methods reported in the literature is inadequate quantification of the uncertainty in baseline energy use predictions. However, estimation of uncertainty is essential for weighing the risks of investing in retrofits. Most commercial buildings have, or soon will have, electricity meters capable of providing data at short time intervals. These data provide new opportunities to quantify uncertainty in baseline predictions, and to do so after shorter measurement durations than are traditionally used. In this paper, we show that uncertainty estimation provides greater measurement and verification (M&V) information and helps to overcome some of the difficulties with deciding how much data is needed to develop baseline models and to confirm energy savings. We also show that cross-validation is an effective method for computing uncertainty. In so doing, we extend a simple regression-based method of predicting energy use using short-interval meter data. We demonstrate the methods by predicting energy use in 17 real commercial buildings. We discuss the benefits of uncertainty estimates which can provide actionable decision making information for investing in energy conservation measures.
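The cross-validation approach to quantifying baseline-model uncertainty can be sketched as below. The linear temperature-driven baseline, fold count, and data are illustrative assumptions; the paper's regression model for short-interval meter data is richer than this:

```python
import numpy as np

def cv_uncertainty(X, y, fit, predict, k=5):
    """k-fold cross-validation estimate of baseline prediction error.

    Returns the RMSE of out-of-fold predictions, a proxy for the
    uncertainty of the baseline energy-use model."""
    idx = np.arange(len(y))
    errs = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        params = fit(X[train], y[train])
        errs.extend(predict(X[fold], params) - y[fold])
    return float(np.sqrt(np.mean(np.square(errs))))

# Toy baseline model: daily energy use linear in outdoor temperature.
fit = lambda X, y: np.polyfit(X, y, 1)
predict = lambda X, p: np.polyval(p, X)

rng = np.random.default_rng(2)
temp = rng.uniform(5, 30, 120)                       # daily mean temperature, C
energy = 200 + 4.0 * temp + rng.normal(0, 5, 120)    # metered use, kWh
rmse = cv_uncertainty(temp, energy, fit, predict)
```

Because every prediction is made on data the model never saw, the resulting RMSE reflects genuine predictive uncertainty rather than in-sample fit, which is what makes it usable for weighing retrofit investment risk.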
Molecular aggregation of humic substances
Wershaw, R. L.
1999-01-01
Humic substances (HS) form molecular aggregates in solution and on mineral surfaces. Elucidation of the mechanism of formation of these aggregates is important for an understanding of the interactions of HS in soils and natural waters. The HS are formed mainly by enzymatic depolymerization and oxidation of plant biopolymers. These reactions transform the aromatic and lipid plant components into amphiphilic molecules, that is, molecules that consist of separate hydrophobic (nonpolar) and hydrophilic (polar) parts. The nonpolar parts of the molecules are composed of relatively unaltered segments of plant polymers and the polar parts of carboxylic acid groups. These amphiphiles form membrane-like aggregates on mineral surfaces and micelle-like aggregates in solution. The exterior surfaces of these aggregates are hydrophilic, and the interiors constitute separate hydrophobic liquid-like phases.
Optical monitoring of particle aggregates
Institute of Scientific and Technical Information of China (English)
John Gregory
2009-01-01
Methods for monitoring particle aggregation are briefly reviewed. Most of these techniques are based on some form of light scattering and may be greatly dependent on the optical properties of aggregates, which are not generally known. As fractal aggregates grow larger their density can become very low, and this has important practical consequences for light scattering. For instance, the scattering coefficient may be much less than for solid objects, which means that the aggregates can appear much smaller than their actual size by a light transmission method. Also, for low-density objects, a high proportion of the scattered light energy is within a small angle of the incident beam, which may also be relevant for measurements with aggregates. Using the 'turbidity fluctuation' technique as an example, it is shown how the apparent size of hydroxide flocs depends mainly on the included impurity particles, rather than the hydroxide precipitate itself. Results using clay suspensions with hydrolyzing coagulants are discussed.
Institute of Scientific and Technical Information of China (English)
Thomas Paul; Sarfraz Hussain
2004-01-01
Dye aggregation has long been recognised as a key factor in performance, and this is no less so in ink jet applications. The aggregation state has been shown to be important in many different areas, ranging from the use of dyes in photodynamic therapies to colorants for the dyeing of fabrics. Different methods have therefore been developed to investigate dye association qualitatively and quantitatively, and a simple procedure to study aggregation could be a useful tool to characterise dyes for ink jet printing. The methods used to study dye aggregation are critically reviewed, and some of the main conclusions discussed, illustrated by examples of ink jet dye aggregation and its study in aqueous and ink systems. The results are used to correlate the solution behaviour of dyes with their print performance.
Drag on submicron nanoparticle aggregates
Institute of Scientific and Technical Information of China (English)
Kruis, F. Einar
2005-01-01
A new procedure was developed for estimating the effective collision diameter of an aggregate composed of primary particles of any size. The coagulation coefficient of two oppositely charged particles was measured experimentally and compared with classic Fuchs theory, including a new method to account for particle non-sphericity. A second set of experiments was performed on well-defined nanoparticle aggregates at different stages of sintering, i.e. from the aggregate to the fully sintered stage. Here, electrical mobility was used to characterize the particle drag. The aggregates were built from two different size-fractionated nanoparticle aerosols; the non-aggregated particles were discarded by an electrofilter, and the aggregates were then passed through a furnace at concentrations low enough not to induce coagulation.
Hail formation triggers rapid ash aggregation in volcanic plumes.
Van Eaton, Alexa R; Mastin, Larry G; Herzog, Michael; Schwaiger, Hans F; Schneider, David J; Wallace, Kristi L; Clarke, Amanda B
2015-08-03
During explosive eruptions, airborne particles collide and stick together, accelerating the fallout of volcanic ash and climate-forcing aerosols. This aggregation process remains a major source of uncertainty both in ash dispersal forecasting and interpretation of eruptions from the geological record. Here we illuminate the mechanisms and timescales of particle aggregation from a well-characterized 'wet' eruption. The 2009 eruption of Redoubt Volcano, Alaska, incorporated water from the surface (in this case, a glacier), which is a common occurrence during explosive volcanism worldwide. Observations from C-band weather radar, fall deposits and numerical modelling demonstrate that hail-forming processes in the eruption plume triggered aggregation of ∼95% of the fine ash and stripped much of the erupted mass out of the atmosphere within 30 min. Based on these findings, we propose a mechanism of hail-like ash aggregation that contributes to the anomalously rapid fallout of fine ash and occurrence of concentrically layered aggregates in volcanic deposits.
Uncertainty in magnetic activity indices
Institute of Scientific and Technical Information of China (English)
XU WenYao
2008-01-01
Magnetic activity indices are widely used in theoretical studies of solar-terrestrial coupling and space weather prediction. However, the indices suffer from various uncertainties, which limit their application and can even lead to incorrect conclusions. In this paper we analyze the three most popular indices: Kp, AE and Dst. Three categories of uncertainties in magnetic indices are discussed: "data uncertainty" originating from inadequate data processing, "station uncertainty" caused by incomplete station coverage, and "physical uncertainty" stemming from unclear physical mechanisms. A comparison between magnetic disturbances and the related indices indicates that the residual Sq will cause an uncertainty of 1-2 in the K measurement, the uncertainty in saturated AE is as much as 50%, and the uncertainty in the Dst index caused by the partial ring currents is about half of the partial ring current.
Perspectives on Preference Aggregation.
Regenwetter, Michel
2009-07-01
For centuries, the mathematical aggregation of preferences by groups, organizations, or society itself has received keen interdisciplinary attention. Extensive theoretical work in economics and political science throughout the second half of the 20th century has highlighted the idea that competing notions of rational social choice intrinsically contradict each other. This has led some researchers to consider coherent democratic decision making to be a mathematical impossibility. Recent empirical work in psychology qualifies that view. This nontechnical review sketches a quantitative research paradigm for the behavioral investigation of mathematical social choice rules on real ballots, experimental choices, or attitudinal survey data. The article poses a series of open questions. Some classical work sometimes makes assumptions about voter preferences that are descriptively invalid. Do such technical assumptions lead the theory astray? How can empirical work inform the formulation of meaningful theoretical primitives? Classical "impossibility results" leverage the fact that certain desirable mathematical properties logically cannot hold in all conceivable electorates. Do these properties nonetheless hold true in empirical distributions of preferences? Will future behavioral analyses continue to contradict the expectations of established theory? Under what conditions do competing consensus methods yield identical outcomes and why do they do so?
Orthogonal flexible Rydberg aggregates
Leonhardt, K.; Wüster, S.; Rost, J. M.
2016-02-01
We study the link between atomic motion and exciton transport in flexible Rydberg aggregates, assemblies of highly excited light alkali-metal atoms, for which motion due to dipole-dipole interaction becomes relevant. In two one-dimensional atom chains crossing at a right angle adiabatic exciton transport is affected by a conical intersection of excitonic energy surfaces, which induces controllable nonadiabatic effects. A joint exciton-motion pulse that is initially governed by a single energy surface is coherently split into two modes after crossing the intersection. The modes induce strongly different atomic motion, leading to clear signatures of nonadiabatic effects in atomic density profiles. We have shown how this scenario can be exploited as an exciton switch, controlling direction and coherence properties of the joint pulse on the second of the chains [K. Leonhardt et al., Phys. Rev. Lett. 113, 223001 (2014), 10.1103/PhysRevLett.113.223001]. In this article we discuss the underlying complex dynamics in detail, characterize the switch, and derive our isotropic interaction model from a realistic anisotropic one with the addition of a magnetic bias field.
S-parameter uncertainty computations
DEFF Research Database (Denmark)
Vidkjær, Jens
1993-01-01
A method for computing uncertainties of measured s-parameters is presented. Unlike the specification software provided with network analyzers, the new method is capable of calculating the uncertainties of arbitrary s-parameter sets and instrument settings.
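The paper's method is analytical, but the underlying idea of propagating a complex measurement uncertainty to a derived quantity can be illustrated with a generic Monte Carlo sketch (not the paper's technique); the nominal S21 value and noise model are assumptions:

```python
import numpy as np

def s21_db_uncertainty(s21, sigma, n=20000, seed=0):
    """Monte Carlo propagation of complex measurement noise on S21
    to the derived quantity |S21| in dB.

    s21: nominal complex transmission coefficient.
    sigma: std of independent Gaussian noise on real and imaginary parts.
    Returns (mean_dB, std_dB) of the derived magnitude in dB."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(0, sigma, n) + 1j * rng.normal(0, sigma, n)
    db = 20 * np.log10(np.abs(s21 + noise))
    return float(db.mean()), float(db.std())

mean_db, std_db = s21_db_uncertainty(0.5 + 0.1j, 0.005)
```

Because the dB transform is nonlinear in the complex measurement, such sampling-based propagation handles arbitrary derived quantities where simple linear error formulas do not.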
Pauli effects in uncertainty relations
Toranzo, I V; Esquivel, R O; Dehesa, J S
2014-01-01
In this letter we analyze the effect of the spin dimensionality of a physical system in two mathematical formulations of the uncertainty principle: a generalized Heisenberg uncertainty relation valid for all antisymmetric N-fermion wavefunctions, and the Fisher-information-based uncertainty relation valid for all antisymmetric N-fermion wavefunctions of central potentials. The accuracy of these spin-modified uncertainty relations is examined for all atoms from Hydrogen to Lawrencium in a self-consistent framework.
Capturing the complexity of uncertainty language to maximise its use.
Juanchich, Marie; Sirota, Miroslav
2016-04-01
Uncertainty is often communicated verbally, using uncertainty phrases such as 'there is a small risk of earthquake', 'flooding is possible' or 'it is very likely the sea level will rise'. Prior research has only examined a limited number of properties of uncertainty phrases: mainly the probability conveyed (e.g., 'a small chance' conveys a small probability whereas 'it is likely' conveys a high probability). We propose a new analytical framework that captures more of the complexity of uncertainty phrases by studying their semantic, pragmatic and syntactic properties. Further, we argue that the complexity of uncertainty phrases is functional and can be leveraged to best describe uncertain outcomes and achieve the goals of speakers. We will present findings from a corpus study and an experiment where we assessed the following properties of uncertainty phrases: probability conveyed, subjectivity, valence, nature of the subject, grammatical category of the uncertainty quantifier and whether the quantifier elicits a positive or a negative framing. Natural language processing techniques applied to corpus data showed that people use a very large variety of uncertainty phrases representing different configurations of the properties of uncertainty phrases (e.g., phrases that convey different levels of subjectivity, phrases with different grammatical constructions). In addition, the corpus analysis uncovered that the uncertainty phrases commonly studied in psychology are not the most commonly used in real life. In the experiment we manipulated the amount of evidence indicating that a fact was true and whether the participant was required to prove the fact was true or that it was false. Participants produced a phrase to communicate the likelihood that the fact was true (e.g., 'it is not sure…', 'I am convinced that…'). The analyses of the uncertainty phrases produced showed that participants leveraged the properties of uncertainty phrases to reflect the strength of evidence but
Mechanisms of Soil Aggregation: a biophysical modeling framework
Ghezzehei, T. A.; Or, D.
2016-12-01
Soil aggregation is one of the main crosscutting concepts in all sub-disciplines and applications of soil science, from agriculture to climate regulation. The concept generally refers to adhesion of primary soil particles into distinct units that remain stable when subjected to disruptive forces. It is one of the most sensitive soil qualities that readily respond to disturbances such as cultivation, fire, drought, flooding, and changes in vegetation. These changes are commonly quantified and incorporated in soil models indirectly as alterations in carbon content and type, bulk density, aeration, permeability, as well as water retention characteristics. Soil aggregation that is primarily controlled by organic matter generally exhibits hierarchical organization of soil constituents into stable units that range in size from a few microns to centimeters. However, this conceptual model of soil aggregation as the key unifying mechanism remains poorly quantified and is rarely included in predictive soil models. Here we provide a biophysical framework for quantitative and predictive modeling of soil aggregation and its attendant soil characteristics. The framework treats aggregates as hotspots of biological, chemical and physical processes centered around roots and root residue. We keep track of the life cycle of an individual aggregate from its genesis in the rhizosphere, fueled by rhizodeposition and mediated by vigorous microbial activity, until its disappearance when the root-derived resources are depleted. The framework synthesizes current understanding of microbial life in porous media; water holding and soil binding capacity of biopolymers; and environmental controls on soil organic matter dynamics. The framework paves the way for integration of processes that are presently modeled as disparate or poorly coupled processes, including storage and protection of carbon, microbial activity, greenhouse gas fluxes, movement and storage of water, resistance of soils against
DEFF Research Database (Denmark)
Maiorano, Andrea; Martre, Pierre; Asseng, Senthold
2017-01-01
To improve climate change impact estimates and to quantify their uncertainty, multi-model ensembles (MMEs) have been suggested. Model improvements can improve the accuracy of simulations and reduce the uncertainty of climate change impact assessments. Furthermore, they can reduce the number of mo...
DEFF Research Database (Denmark)
Price, Jason Anthony; Nordblad, Mathias; Woodley, John
2014-01-01
This paper demonstrates the added benefits of using uncertainty and sensitivity analysis in the kinetics of enzymatic biodiesel production. For this study, a kinetic model by Fedosov and co-workers is used. For the uncertainty analysis the Monte Carlo procedure was used to statistically quantify...
DEFF Research Database (Denmark)
Diniz-Filho, José Alexandre F.; Bini, Luis Mauricio; Rangel, Thiago Fernando
2009-01-01
of uncertainty in ensembles of forecasts is presented. We model the distributions of 3837 New World birds and project them into 2080. We then quantify and map the relative contribution of different sources of uncertainty from alternative methods for niche modeling, general circulation models (AOGCM...
Effects of uncertainty in major input variables on simulated functional soil behaviour
Finke, P.A.; Wösten, J.H.M.; Jansen, M.J.W.
1997-01-01
Uncertainties in major input variables in water and solute models were quantified and their effects on simulated functional aspects of soil behaviour were studied using basic soil properties from an important soil map unit in the Netherlands. Two sources of uncertainty were studied: spatial
Wansik Yu; Eiichi Nakakita; Sunmin Kim; Kosei Yamaguchi
2016-01-01
The common approach to quantifying the precipitation forecast uncertainty is ensemble simulations where a numerical weather prediction (NWP) model is run for a number of cases with slightly different initial conditions. In practice, the spread of ensemble members in terms of flood discharge is used as a measure of forecast uncertainty due to uncertain precipitation forecasts. This study presents the uncertainty propagation of rainfall forecast into hydrological response with catchment scale t...
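The ensemble-spread measure described in this abstract can be sketched numerically. The values below are hypothetical peak-discharge forecasts, not data from the study; the spread (standard deviation across members) and a central 90% range serve as the uncertainty measure.

```python
import numpy as np

# Hypothetical ensemble of peak-discharge forecasts (m^3/s), one value per
# NWP-driven member run from a slightly perturbed initial condition.
rng = np.random.default_rng(0)
ensemble_peaks = rng.normal(loc=450.0, scale=60.0, size=50)

# Ensemble spread (standard deviation across members) as the forecast
# uncertainty measure, plus a central 90% range for communication.
spread = ensemble_peaks.std(ddof=1)
lo, hi = np.percentile(ensemble_peaks, [5, 95])

print(f"mean peak: {ensemble_peaks.mean():.1f} m^3/s")
print(f"ensemble spread (std): {spread:.1f} m^3/s")
print(f"90% range: [{lo:.1f}, {hi:.1f}] m^3/s")
```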
Redshift uncertainties and baryonic acoustic oscillations
Chaves-Montero, Jonás; Hernández-Monteagudo, Carlos
2016-01-01
In the upcoming era of high-precision galaxy surveys, it becomes necessary to understand the impact of uncertain redshift estimators on cosmological observables. In this paper we present a detailed exploration of the galaxy clustering and baryonic acoustic oscillation (BAO) signal in the presence of redshift errors. We provide analytic expressions for how the monopole and the quadrupole of the redshift-space power spectrum (together with their covariances) are affected. Additionally, we discuss the modifications in the shape, signal to noise, and cosmological constraining power of the BAO signature. We show how and why the BAO contrast is $\\mathit{enhanced}$ with small redshift uncertainties, and explore in detail how the cosmological information is modulated by the interplay of redshift-space distortions, redshift errors, and the number density of the sample. We validate our results by comparing them with measurements from an ensemble of $N$-body simulations with $8100h^{-3}\\text{Gpc}^3$ aggregated volume....
Uncertainty visualization in HARDI based on ensembles of ODFs
Jiao, Fangxiang
2012-02-01
In this paper, we propose a new and accurate technique for uncertainty analysis and uncertainty visualization based on fiber orientation distribution function (ODF) glyphs, associated with high angular resolution diffusion imaging (HARDI). Our visualization applies volume rendering techniques to an ensemble of 3D ODF glyphs, which we call SIP functions of diffusion shapes, to capture their variability due to underlying uncertainty. This rendering elucidates the complex heteroscedastic structural variation in these shapes. Furthermore, we quantify the extent of this variation by measuring the fraction of the volume of these shapes, which is consistent across all noise levels, the certain volume ratio. Our uncertainty analysis and visualization framework is then applied to synthetic data, as well as to HARDI human-brain data, to study the impact of various image acquisition parameters and background noise levels on the diffusion shapes. © 2012 IEEE.
Incorporating Uncertainty into Backward Erosion Piping Risk Assessments
Directory of Open Access Journals (Sweden)
Robbins Bryant A.
2016-01-01
Full Text Available Backward erosion piping (BEP is a type of internal erosion that typically involves the erosion of foundation materials beneath an embankment. BEP has been shown, historically, to be the cause of approximately one third of all internal erosion related failures. As such, the probability of BEP is commonly evaluated as part of routine risk assessments for dams and levees in the United States. Currently, average gradient methods are predominantly used to perform these assessments, supported by mean trends of critical gradient observed in laboratory flume tests. Significant uncertainty exists surrounding the mean trends of critical gradient used in practice. To quantify this uncertainty, over 100 laboratory-piping tests were compiled and analysed to assess the variability of laboratory measurements of horizontal critical gradient. Results of these analyses indicate a large amount of uncertainty surrounding critical gradient measurements for all soils, with increasing uncertainty as soils become less uniform.
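The variability assessment described above amounts to summary statistics over compiled test measurements. A minimal sketch, using hypothetical critical-gradient values rather than the study's compiled data set:

```python
import numpy as np

# Hypothetical horizontal critical-gradient measurements from flume tests
# (dimensionless); illustrative only, not the paper's 100+ compiled tests.
critical_gradients = np.array([0.24, 0.31, 0.19, 0.28, 0.35, 0.22,
                               0.41, 0.27, 0.30, 0.18, 0.33, 0.26])

mean = critical_gradients.mean()
std = critical_gradients.std(ddof=1)
cv = std / mean  # coefficient of variation: relative uncertainty

print(f"mean critical gradient: {mean:.3f}")
print(f"coefficient of variation: {cv:.2f}")
```

A large coefficient of variation is what motivates replacing a single mean trend with a probability distribution in the risk assessment.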
Analysis and Reduction of Complex Networks Under Uncertainty
Energy Technology Data Exchange (ETDEWEB)
Knio, Omar M
2014-04-09
This is a collaborative proposal that aims at developing new methods for the analysis and reduction of complex multiscale networks under uncertainty. The approach is based on combining methods of computational singular perturbation (CSP) and probabilistic uncertainty quantification. In deterministic settings, CSP yields asymptotic approximations of reduced-dimensionality “slow manifolds” on which a multiscale dynamical system evolves. Introducing uncertainty raises fundamentally new issues, particularly concerning its impact on the topology of slow manifolds, and means to represent and quantify associated variability. To address these challenges, this project uses polynomial chaos (PC) methods to reformulate uncertain network models, and to analyze them using CSP in probabilistic terms. Specific objectives include (1) developing effective algorithms that can be used to illuminate fundamental and unexplored connections among model reduction, multiscale behavior, and uncertainty, and (2) demonstrating the performance of these algorithms through applications to model problems.
Pragmatic aspects of uncertainty propagation: A conceptual review
Thacker, W. Carlisle; Iskandarani, Mohamed; Gonçalves, Rafael C.; Srinivasan, Ashwanth; Knio, Omar M.
2015-11-01
When quantifying the uncertainty of the response of a computationally costly oceanographic or meteorological model stemming from the uncertainty of its inputs, practicality demands getting the most information using the fewest simulations. It is widely recognized that, by interpolating the results of a small number of simulations, results of additional simulations can be inexpensively approximated to provide a useful estimate of the variability of the response. Even so, as computing the simulations to be interpolated remains the biggest expense, the choice of these simulations deserves attention. When making this choice, two requirements should be considered: (i) the nature of the interpolation and (ii) the available information about input uncertainty. Examples comparing polynomial interpolation and Gaussian process interpolation are presented for three different views of input uncertainty.
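The two interpolation choices compared in this review can be illustrated on a toy problem. The "costly model" below is a stand-in function (an assumption for illustration); both surrogates are built from the same five simulations and checked against extra runs that, in practice, would be unaffordable.

```python
import numpy as np
from numpy.polynomial import Polynomial

# Toy stand-in for a costly model: response as a function of one input.
def model(x):
    return np.sin(3 * x) + 0.5 * x

# A handful of "simulations" at chosen input values.
x_train = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
y_train = model(x_train)
x_test = np.linspace(0, 1, 101)

# Surrogate 1: polynomial interpolation through the simulation results.
poly = Polynomial.fit(x_train, y_train, deg=4)
y_poly = poly(x_test)

# Surrogate 2: Gaussian process interpolation (RBF kernel, noise-free).
def rbf(a, b, ell=0.3):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

K = rbf(x_train, x_train) + 1e-10 * np.eye(len(x_train))
weights = np.linalg.solve(K, y_train)
y_gp = rbf(x_test, x_train) @ weights

# Compare surrogate error against the (notionally unaffordable) truth.
err_poly = np.max(np.abs(y_poly - model(x_test)))
err_gp = np.max(np.abs(y_gp - model(x_test)))
print(f"max error, polynomial: {err_poly:.4f}")
print(f"max error, GP:         {err_gp:.4f}")
```

Either surrogate can then be sampled cheaply under any assumed input distribution to estimate the response variability.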
A Stochastic Nonlinear Water Wave Model for Efficient Uncertainty Quantification
Bigoni, Daniele; Eskilsson, Claes
2014-01-01
A major challenge in next-generation industrial applications is to improve numerical analysis by quantifying uncertainties in predictions. In this work we present a stochastic formulation of a fully nonlinear and dispersive potential flow water wave model for the probabilistic description of the evolution of waves. This model is discretized using the Stochastic Collocation Method (SCM), which provides an approximate surrogate of the model. This can be used to accurately and efficiently estimate the probability distribution of the unknown time dependent stochastic solution after the forward propagation of uncertainties. We revisit experimental benchmarks often used for validation of deterministic water wave models. We do this using a fully nonlinear and dispersive model and show how uncertainty in the model input can influence the model output. Based on numerical experiments and assumed uncertainties in boundary data, our analysis reveals that some of the known discrepancies from deterministic simulation in compa...
Uncertainty in prediction and simulation of flow in sewer systems
DEFF Research Database (Denmark)
Breinholt, Anders
was obtained with the stochastic method as the preferred. The thesis has demonstrated that the statistical requirements to the formal stochastic approach are very hard to fulfill in practice when prediction steps beyond the one-step is considered. Thus the underlying assumption of the GLUE methodology...... of describing features such as flow constraints, basins and pumps were tested for their ability to describe the output with a time resolution of 15 minutes. Two approaches to uncertainty quantification were distinguished and adopted, the stochastic and the epistemic method. Stochastic uncertainty refers...... to the randomness observed in nature, which is normally irreducible due to the inherent variation of physical systems. Epistemic uncertainty on the contrary arises from incomplete knowledge about a physical system. For quantifying stochastic uncertainties a frequentist approach was applied whereas the generalised...
Analysis of automated highway system risks and uncertainties. Volume 5
Energy Technology Data Exchange (ETDEWEB)
Sicherman, A.
1994-10-01
This volume describes a risk analysis performed to help identify important Automated Highway System (AHS) deployment uncertainties and quantify their effect on costs and benefits for a range of AHS deployment scenarios. The analysis identified a suite of key factors affecting vehicle and roadway costs, capacities and market penetrations for alternative AHS deployment scenarios. A systematic protocol was utilized for obtaining expert judgments of key factor uncertainties in the form of subjective probability percentile assessments. Based on these assessments, probability distributions on vehicle and roadway costs, capacity and market penetration were developed for the different scenarios. The cost/benefit risk methodology and analysis provide insights by showing how uncertainties in key factors translate into uncertainties in summary cost/benefit indices.
Risk, Uncertainty and Entrepreneurship
DEFF Research Database (Denmark)
Koudstaal, Martin; Sloof, Randolph; Van Praag, Mirjam
Theory predicts that entrepreneurs have distinct attitudes towards risk and uncertainty, but empirical evidence is mixed. To better understand the unique behavioral characteristics of entrepreneurs and the causes of these mixed results, we perform a large ‘lab-in-the-field’ experiment comparing...... entrepreneurs to managers – a suitable comparison group – and employees (n = 2288). The results indicate that entrepreneurs perceive themselves as less risk averse than managers and employees, in line with common wisdom. However, when using experimental incentivized measures, the differences are subtler...
Mathematical Analysis of Uncertainty
Directory of Open Access Journals (Sweden)
Angel GARRIDO
2016-01-01
Classical Logic showed its insufficiency for solving AI problems early on. The introduction of Fuzzy Logic aims at this problem. There has been research in the conventional Rough direction alone, in the Fuzzy direction alone, and more recently, attempts to combine both into Fuzzy Rough Sets or Rough Fuzzy Sets. We analyse some new and powerful tools in the study of Uncertainty, such as Probabilistic Graphical Models, Chain Graphs, Bayesian Networks, and Markov Networks, integrating our knowledge of graphs and probability.
Optimization under Uncertainty
Lopez, Rafael H.
2016-01-06
The goal of this poster is to present the main approaches to optimization of engineering systems in the presence of uncertainties. We begin by giving an insight into robust optimization. Next, we detail how to deal with probabilistic constraints in optimization, the so-called reliability-based design. Subsequently, we present the risk optimization approach, which includes the expected costs of failure in the objective function. After the basic description of each approach is given, the projects developed by CORE are presented. Finally, the main current topic of research of CORE is described.
Collective Uncertainty Entanglement Test
Rudnicki, Łukasz; Życzkowski, Karol
2011-01-01
For a given pure state of a composite quantum system we analyze the product of its projections onto a set of locally orthogonal separable pure states. We derive a bound for this product analogous to the entropic uncertainty relations. For bipartite systems the bound is saturated for maximally entangled states and it allows us to construct a family of entanglement measures, which we shall call collectibility. As these quantities are experimentally accessible, the approach advocated contributes to the task of experimental quantification of quantum entanglement, while for a three-qubit system it is capable of identifying the genuine three-party entanglement.
Variance-based uncertainty relations
Huang, Yichen
2010-01-01
It is hard to overestimate the fundamental importance of uncertainty relations in quantum mechanics. In this work, I propose state-independent variance-based uncertainty relations for arbitrary observables in both finite and infinite dimensional spaces. We recover the Heisenberg uncertainty principle as a special case. By studying examples, we find that the lower bounds provided by our new uncertainty relations are optimal or near-optimal. I illustrate the uses of our new uncertainty relations by showing that they eliminate one common obstacle in a sequence of well-known works in entanglement detection, and thus make these works much easier to access in applications.
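The textbook relation this work generalizes, the Robertson variance bound Var(A)·Var(B) ≥ |⟨[A,B]⟩|²/4, can be checked numerically. The sketch below verifies it for Pauli observables on an arbitrary qubit state (the Bloch angles are arbitrary choices, not from the paper, whose new state-independent relations are not reproduced here).

```python
import numpy as np

# Pauli observables and a generic pure qubit state.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
theta, phi = 0.7, 1.2  # arbitrary Bloch angles
psi = np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])

def expval(op):
    return np.real(np.conj(psi) @ op @ psi)

def variance(op):
    return expval(op @ op) - expval(op) ** 2

# Robertson relation: Var(A) Var(B) >= |<[A, B]>|^2 / 4.
lhs = variance(sx) * variance(sy)
comm = sx @ sy - sy @ sx
rhs = abs(np.conj(psi) @ comm @ psi) ** 2 / 4

print(f"Var(X) Var(Y) = {lhs:.4f} >= {rhs:.4f}")
```

For the Pauli pair the right-hand side reduces to ⟨Z⟩², so the bound tightens as the state approaches a pole of the Bloch sphere.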
Directory of Open Access Journals (Sweden)
Wei Zhou
2014-01-01
Due to its convenience and power in dealing with the vagueness and uncertainty of real situations, the hesitant fuzzy set has received more and more attention and has recently been a hot research topic. To process and aggregate hesitant fuzzy information effectively and capture its interrelationships, in this paper we propose the hesitant fuzzy reducible weighted Bonferroni mean (HFRWBM) and present its four prominent characteristics, namely, reducibility, monotonicity, boundedness, and idempotency. Then, we further investigate its generalized form, that is, the generalized hesitant fuzzy reducible weighted Bonferroni mean (GHFRWBM). Based on the discussion of model parameters, some special cases of the HFRWBM and GHFRWBM are studied in detail. In addition, to deal with the situation that multicriteria have connections in hesitant fuzzy information aggregation, a three-step aggregation approach has been proposed on the basis of the HFRWBM and GHFRWBM. In the end, we apply the proposed aggregation operators to multicriteria aggregation and give an example to illustrate our results.
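The operators above extend the classical Bonferroni mean, which captures pairwise interrelationships among the aggregated arguments. As a point of reference (this is the classical mean, not the paper's hesitant fuzzy extension), a minimal sketch:

```python
import numpy as np

def bonferroni_mean(x, p=1.0, q=1.0):
    """Classical Bonferroni mean B^{p,q}(x) =
    ( (1 / (n(n-1))) * sum_{i != j} x_i^p x_j^q )^{1/(p+q)}.
    The cross terms x_i^p x_j^q capture pairwise interrelationship."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    total = sum(x[i] ** p * x[j] ** q
                for i in range(n) for j in range(n) if i != j)
    return (total / (n * (n - 1))) ** (1.0 / (p + q))

vals = [0.3, 0.5, 0.7, 0.9]
print(f"B^(1,1) = {bonferroni_mean(vals):.4f}")
```

The idempotency and boundedness properties listed in the abstract hold already at this classical level: equal inputs return themselves, and the result stays between the minimum and maximum input.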
MacIntosh, C. R.; Merchant, C. J.; von Schuckmann, K.
2016-10-01
This article presents a review of current practice in estimating steric sea level change, focussed on the treatment of uncertainty. Steric sea level change is the contribution to the change in sea level arising from the dependence of density on temperature and salinity. It is a significant component of sea level rise and a reflection of changing ocean heat content. However, tracking these steric changes still remains a significant challenge for the scientific community. We review the importance of understanding the uncertainty in estimates of steric sea level change. Relevant concepts of uncertainty are discussed and illustrated with the example of observational uncertainty propagation from a single profile of temperature and salinity measurements to steric height. We summarise and discuss the recent literature on methodologies and techniques used to estimate steric sea level in the context of the treatment of uncertainty. Our conclusions are that progress in quantifying steric sea level uncertainty will benefit from: greater clarity and transparency in published discussions of uncertainty, including exploitation of international standards for quantifying and expressing uncertainty in measurement; and the development of community "recipes" for quantifying the error covariances in observations and from sparse sampling and for estimating and propagating uncertainty across spatio-temporal scales.
Uncertainty in mapping urban air quality using crowdsourcing techniques
Schneider, Philipp; Castell, Nuria; Lahoz, William; Bartonova, Alena
2016-04-01
Small and low-cost sensors measuring various air pollutants have become available in recent years owing to advances in sensor technology. Such sensors have significant potential for improving high-resolution mapping of air quality in the urban environment as they can be deployed in comparatively large numbers and therefore are able to provide information at unprecedented spatial detail. However, such sensor devices are subject to significant and currently little understood uncertainties that affect their usability. Not only do these devices exhibit random errors and biases of occasionally substantial magnitudes, but these errors may also shift over time. In addition, there often tends to be significant inter-sensor variability even when supposedly identical sensors from the same manufacturer are used. We need to quantify accurately these uncertainties to make proper use of the information they provide. Furthermore, when making use of the data and producing derived products such as maps, the measurement uncertainties that propagate throughout the analysis need to be clearly communicated to the scientific and non-scientific users of the map products. Based on recent experiences within the EU-funded projects CITI-SENSE and hackAIR we discuss the uncertainties along the entire processing chain when using crowdsourcing techniques for mapping urban air quality. Starting with the uncertainties exhibited by the sensors themselves, we present ways of quantifying the error characteristics of a network of low-cost microsensors and show suitable statistical metrics for summarizing them. Subsequently, we briefly present a data-fusion-based method for mapping air quality in the urban environment and illustrate how we propagate the uncertainties of the individual sensors throughout the mapping system, resulting in detailed maps that document the pixel-level uncertainty for each concentration field. Finally, we present methods for communicating the resulting spatial uncertainty
Integrating Out Astrophysical Uncertainties
Fox, Patrick J; Weiner, Neal
2010-01-01
Underground searches for dark matter involve a complicated interplay of particle physics, nuclear physics, atomic physics and astrophysics. We attempt to remove the uncertainties associated with astrophysics by developing the means to map the observed signal in one experiment directly into a predicted rate at another. We argue that it is possible to make experimental comparisons that are completely free of astrophysical uncertainties by focusing on {\\em integral} quantities, such as $g(v_{min})=\\int_{v_{min}} dv\, f(v)/v $ and $\\int_{v_{thresh}} dv\, v g(v)$. Direct comparisons are possible when the $v_{min}$ space probed by different experiments overlap. As examples, we consider the possible dark matter signals at CoGeNT, DAMA and CRESST-Oxygen. We find that the expected rate from CoGeNT in the XENON10 experiment is higher than observed, unless scintillation light output is low. Moreover, we determine that S2-only analyses are constraining, unless the charge yield $Q_y< 2.4 {\, \rm electrons/keV}$. For DAMA t...
The critical role of uncertainty in projections of hydrological extremes
Meresa, Hadush K.; Romanowicz, Renata J.
2017-08-01
This paper aims to quantify the uncertainty in projections of future hydrological extremes in the Biala Tarnowska River at Koszyce gauging station, south Poland. The approach followed is based on several climate projections obtained from the EURO-CORDEX initiative, raw and bias-corrected realizations of catchment precipitation, and flow simulations derived using multiple hydrological model parameter sets. The projections cover the 21st century. Three sources of uncertainty are considered: one related to climate projection ensemble spread, the second related to the uncertainty in hydrological model parameters and the third related to the error in fitting theoretical distribution models to annual extreme flow series. The uncertainty of projected extreme indices related to hydrological model parameters was conditioned on flow observations from the reference period using the generalized likelihood uncertainty estimation (GLUE) approach, with separate criteria for high- and low-flow extremes. Extreme (low and high) flow quantiles were estimated using the generalized extreme value (GEV) distribution at different return periods and were based on two different lengths of the flow time series. A sensitivity analysis based on the analysis of variance (ANOVA) shows that the uncertainty introduced by the hydrological model parameters can be larger than the climate model variability and the distribution fit uncertainty for the low-flow extremes whilst for the high-flow extremes higher uncertainty is observed from climate models than from hydrological parameter and distribution fit uncertainties. This implies that ignoring one of the three uncertainty sources may cause great risk to future hydrological extreme adaptations and water resource planning and management.
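The extreme-flow quantiles at given return periods described above come from the GEV inverse CDF. A minimal sketch of the return-level formula, with illustrative parameter values that are assumptions, not the fitted values from the Biala Tarnowska study:

```python
import numpy as np

# Illustrative GEV parameters for an annual-maximum flow series:
# location mu, scale sigma, shape xi (hypothetical values).
mu, sigma, xi = 120.0, 35.0, 0.12

def gev_return_level(T, mu, sigma, xi):
    """Flow exceeded on average once every T years (formula for xi != 0):
    z_T = mu + (sigma / xi) * ((-ln(1 - 1/T))^(-xi) - 1)."""
    y = -np.log(1.0 - 1.0 / T)  # Gumbel reduced variate
    return mu + sigma / xi * (y ** (-xi) - 1.0)

for T in (10, 50, 100):
    print(f"{T:>3}-year return level: "
          f"{gev_return_level(T, mu, sigma, xi):.1f} m^3/s")
```

Repeating this calculation over the GLUE-conditioned parameter sets and the climate ensemble members is what produces the uncertainty ranges the paper decomposes by ANOVA.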
Understanding and reducing statistical uncertainties in nebular abundance determinations
Wesson, R.; Stock, D. J.; Scicluna, P.
2012-06-01
Whenever observations are compared to theories, an estimate of the uncertainties associated with the observations is vital if the comparison is to be meaningful. However, many or even most determinations of temperatures, densities and abundances in photoionized nebulae do not quote the associated uncertainty. Those that do typically propagate the uncertainties using analytical techniques which rely on assumptions that generally do not hold. Motivated by this issue, we have developed Nebular Empirical Analysis Tool (NEAT), a new code for calculating chemical abundances in photoionized nebulae. The code carries out a standard analysis of lists of emission lines using long-established techniques to estimate the amount of interstellar extinction, calculate representative temperatures and densities, compute ionic abundances from both collisionally excited lines and recombination lines, and finally to estimate total elemental abundances using an ionization correction scheme. NEAT uses a Monte Carlo technique to robustly propagate uncertainties from line flux measurements through to the derived abundances. We show that, for typical observational data, this approach is superior to analytic estimates of uncertainties. NEAT also accounts for the effect of upward biasing on measurements of lines with low signal-to-noise ratio, allowing us to accurately quantify the effect of this bias on abundance determinations. We find not only that the effect can result in significant overestimates of heavy element abundances derived from weak lines, but also that taking it into account reduces the uncertainty of these abundance determinations. Finally, we investigate the effect of possible uncertainties in R, the ratio of selective-to-total extinction, on abundance determinations. We find that the uncertainty due to this parameter is negligible compared to the statistical uncertainties due to typical line flux measurement uncertainties.
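The Monte Carlo propagation idea is simple to sketch: resample each measured line flux from its error distribution and recompute the derived quantity each time. The fluxes and the flux-ratio target below are hypothetical illustrations, not NEAT's actual abundance calculation:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Hypothetical measured line fluxes with Gaussian uncertainties,
# in arbitrary units (flux, sigma).
flux_a = rng.normal(150.0, 10.0, n)   # e.g. a collisionally excited line
flux_b = rng.normal(300.0, 12.0, n)   # e.g. a reference line

# Derived quantity: a flux ratio of the kind entering an abundance
# calculation. The MC samples give its full distribution directly.
ratio = flux_a / flux_b

median = np.median(ratio)
lo, hi = np.percentile(ratio, [16, 84])  # ~1-sigma credible interval
print(f"ratio = {median:.3f} (+{hi - median:.3f} / -{median - lo:.3f})")
```

Unlike analytic first-order propagation, the sampled distribution correctly captures the asymmetry that a ratio of Gaussian variables develops, which is the advantage the abstract points to.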
Uncertainty contributions to low-flow projections in Austria
Parajka, Juraj; Blaschke, Alfred Paul; Blöschl, Günter; Haslinger, Klaus; Hepp, Gerold; Laaha, Gregor; Schöner, Wolfgang; Trautvetter, Helene; Viglione, Alberto; Zessner, Matthias
2016-05-01
The main objective of the paper is to understand the contributions to the uncertainty in low-flow projections resulting from hydrological model uncertainty and climate projection uncertainty. Model uncertainty is quantified by different parameterisations of a conceptual semi-distributed hydrologic model (TUWmodel) using 11 objective functions in three different decades (1976-1986, 1987-1997, 1998-2008), which allows for disentangling the effect of the objective function-related uncertainty and temporal stability of model parameters. Climate projection uncertainty is quantified by four future climate scenarios (ECHAM5-A1B, A2, B1 and HADCM3-A1B) using a delta change approach. The approach is tested for 262 basins in Austria. The results indicate that the seasonality of the low-flow regime is an important factor affecting the performance of model calibration in the reference period and the uncertainty of Q95 low-flow projections in the future period. In Austria, the range of simulated Q95 in the reference period is larger in basins with a summer low-flow regime than in basins with a winter low-flow regime. Simulated Q95 may vary over a range of up to 60 %, depending on the decade used for calibration. The low-flow projections of Q95 show an increase of low flows in the Alps, typically in the range of 10-30 % and a decrease in the south-eastern part of Austria mostly in the range -5 to -20 % for the climate change projected for the future period 2021-2050, relative to the reference period 1978-2007. The change in seasonality varies between scenarios, but there is a tendency for earlier low flows in the northern Alps and later low flows in eastern Austria. The total uncertainty of Q95 projections is the largest in basins with a winter low-flow regime and, in some basins the range of Q95 projections exceeds 60 %. In basins with summer low flows, the total uncertainty is mostly less than 20 %. The ANOVA assessment of the relative contribution of the three
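The delta change approach mentioned above scales an observed reference series by scenario-derived change factors. A minimal sketch with invented monthly deltas and a synthetic reference series (both are assumptions for illustration):

```python
import numpy as np

# Synthetic observed monthly flows for a 30-year reference period.
rng = np.random.default_rng(1)
obs_monthly_flow = rng.gamma(shape=4.0, scale=10.0, size=(30, 12))

# Hypothetical multiplicative deltas (scenario / control), one per month:
# e.g. wetter winters, drier summers.
delta = np.array([1.10, 1.12, 1.05, 1.00, 0.95, 0.85,
                  0.80, 0.82, 0.90, 0.98, 1.05, 1.08])

future_flow = obs_monthly_flow * delta  # broadcast deltas over years

# Q95 low-flow index (flow exceeded 95% of the time) before and after.
q95_ref = np.percentile(obs_monthly_flow, 5)
q95_fut = np.percentile(future_flow, 5)
print(f"Q95 reference: {q95_ref:.1f}, Q95 scenario: {q95_fut:.1f}")
```

Repeating this with the deltas from each of the four climate scenarios spans the climate-projection part of the uncertainty; the hydrological-parameter part comes from rerunning the model with alternative calibrated parameter sets.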
Schelling's Segregation Model: Parameters, scaling, and aggregation
Directory of Open Access Journals (Sweden)
Abhinav Singh
2009-09-01
Thomas Schelling proposed a simple spatial model to illustrate how, even with relatively mild assumptions on each individual's nearest-neighbor preferences, an integrated city would likely unravel to a segregated city, even if all individuals prefer integration. This agent-based lattice model has become quite influential amongst social scientists, demographers, and economists. Aggregation relates to individuals coming together to form groups, and Schelling equated global aggregation with segregation. Many authors assumed that the segregation which Schelling observed in simulations on very small cities persists for larger, realistic-size cities. We describe how different measures could be used to quantify the segregation and unlock its dependence on city size, disparate neighbor comfortability threshold, and population density. We identify distinct scales of global aggregation, and show that the striking global aggregation Schelling observed is strictly a small-city phenomenon. We also discover several scaling laws for the aggregation measures. Along the way we prove that as the Schelling model evolves, the total perimeter of the interface between the different agents decreases, which provides a useful analytical tool to study the evolution.
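A minimal Schelling lattice model is easy to reproduce; the sketch below uses a small torus, a relocation rule for unsatisfied agents, and a mean like-neighbour fraction as a simple aggregation measure. Grid size, threshold, and step count are illustrative choices, not the parameterisation studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def schelling(size=30, threshold=0.3, empty_frac=0.1, steps=20_000):
    """Minimal Schelling lattice: agents of two types (1, 2) move to a
    random empty cell (0) when the fraction of like occupied neighbours
    falls below `threshold`. Torus (wrap-around) boundaries."""
    grid = rng.choice([0, 1, 2], size=(size, size),
                      p=[empty_frac, (1 - empty_frac) / 2,
                         (1 - empty_frac) / 2])
    for _ in range(steps):
        i, j = rng.integers(size, size=2)
        agent = grid[i, j]
        if agent == 0:
            continue
        nbrs = [grid[(i + di) % size, (j + dj) % size]
                for di in (-1, 0, 1) for dj in (-1, 0, 1)
                if (di, dj) != (0, 0)]
        occ = [n for n in nbrs if n != 0]
        if occ and sum(n == agent for n in occ) / len(occ) < threshold:
            empties = np.argwhere(grid == 0)
            if len(empties) == 0:
                continue
            k, l = empties[rng.integers(len(empties))]
            grid[k, l], grid[i, j] = agent, 0
    return grid

def like_neighbour_fraction(grid):
    """Aggregation measure: mean fraction of like occupied neighbours."""
    size = grid.shape[0]
    fracs = []
    for i in range(size):
        for j in range(size):
            if grid[i, j] == 0:
                continue
            occ = [grid[(i + di) % size, (j + dj) % size]
                   for di in (-1, 0, 1) for dj in (-1, 0, 1)
                   if (di, dj) != (0, 0)]
            occ = [n for n in occ if n != 0]
            if occ:
                fracs.append(sum(n == grid[i, j] for n in occ) / len(occ))
    return float(np.mean(fracs))

grid = schelling()
print(f"mean like-neighbour fraction: {like_neighbour_fraction(grid):.2f}")
```

A well-mixed random placement yields a like-neighbour fraction near 0.5; the relaxed configuration typically sits well above that, which is the mild-preference-to-segregation effect the abstract discusses, here on exactly the kind of small city where it is strongest.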
Imprecise probabilistic estimation of design floods with epistemic uncertainties
Qi, Wei; Zhang, Chi; Fu, Guangtao; Zhou, Huicheng
2016-06-01
An imprecise probabilistic framework for design flood estimation is proposed on the basis of the Dempster-Shafer theory to handle different epistemic uncertainties from data, probability distribution functions, and probability distribution parameters. These uncertainties are incorporated in cost-benefit analysis to generate the lower and upper bounds of the total cost for flood control, thus presenting improved information for decision making on design floods. Within the total cost bounds, a new robustness criterion is proposed to select a design flood that can tolerate higher levels of uncertainty. A variance decomposition approach is used to quantify individual and interactive impacts of the uncertainty sources on total cost. Results from three case studies, with 127, 104, and 54 year flood data sets, respectively, show that the imprecise probabilistic approach effectively combines aleatory and epistemic uncertainties from the various sources and provides upper and lower bounds of the total cost. Between the total cost and the robustness of design floods, a clear trade-off which is beyond the information that can be provided by the conventional minimum cost criterion is identified. The interactions among data, distributions, and parameters have a much higher contribution than parameters to the estimate of the total cost. It is found that the contributions of the various uncertainty sources and their interactions vary with different flood magnitude, but remain roughly the same with different return periods. This study demonstrates that the proposed methodology can effectively incorporate epistemic uncertainties in cost-benefit analysis of design floods.
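The variance decomposition used above to attribute total-cost uncertainty to individual and interactive sources can be sketched with a simple ANOVA-style share calculation. The factor structure and magnitudes below are invented for illustration, not the paper's flood data:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical total-cost estimates from a full factorial over two
# uncertainty sources: distribution choice (3 options) and parameter
# sample (200 draws). Cost units are arbitrary.
n_dist, n_par = 3, 200
dist_effect = np.array([0.0, 4.0, -2.0])           # distribution-driven shift
par_effect = rng.normal(0.0, 1.0, n_par)           # parameter-driven scatter
interaction = rng.normal(0.0, 2.0, (n_dist, n_par))
cost = 100.0 + dist_effect[:, None] + par_effect[None, :] + interaction

# Main-effect variance shares; the remainder is interaction/residual.
total_var = cost.var()
share_dist = cost.mean(axis=1).var() / total_var
share_par = cost.mean(axis=0).var() / total_var
share_inter = 1.0 - share_dist - share_par

print(f"distribution: {share_dist:.2f}, parameters: {share_par:.2f}, "
      f"interaction/residual: {share_inter:.2f}")
```

A substantial interaction share, as in this toy setup, mirrors the paper's finding that interactions among data, distributions, and parameters can contribute more to the total-cost estimate than some individual sources.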
Transport Behavior in Fractured Rock under Conceptual and Parametric Uncertainty
Pham, H. V.; Parashar, R.; Sund, N. L.; Pohlmann, K.
2016-12-01
Lack of hydrogeological data and knowledge leads to uncertainty in numerical modeling, and many conceptualizations are often proposed to represent uncertain model components derived from the same data. This study investigates how conceptual and parametric uncertainty influence transport behavior in three-dimensional discrete fracture networks (DFN). dfnWorks, a parallelized computational suite developed at the Los Alamos National Laboratory, is used to simulate flow and transport in simple 3D percolating DFNs. Model averaging techniques in a Monte Carlo framework are adopted to effectively predict contaminant plumes and to quantify prediction uncertainty arising from conceptual and parametric uncertainties. The method is applied to stochastic fracture networks with orthogonal sets of background fractures and domain-spanning faults. The sources of uncertainty are the boundary conditions and the fault characteristics. Spatial and temporal analyses of the contaminant plumes are conducted to compute the influence of the uncertainty sources on transport behavior. The flow and transport characteristics of 3D stochastic DFNs under uncertainty help lay the groundwork for model development and analysis of field-scale fractured rock systems.
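Model averaging of the kind described combines predictions from alternative conceptualizations into one distribution. A minimal sketch, assuming each conceptual model supplies a predicted plume statistic as a (mean, variance) pair plus a model weight (e.g. a posterior model probability; the weighting scheme here is an assumption, not the study's specific choice), follows the law of total variance: total spread = within-model variance + between-model spread, so conceptual uncertainty adds to parametric uncertainty.

```python
def model_average(means, variances, weights):
    """Combine per-model predictions into an averaged mean and a total
    variance via the law of total variance:

        mean = sum_k w_k * m_k
        var  = sum_k w_k * (v_k + (m_k - mean)**2)

    The second term in `var` is the between-model (conceptual)
    contribution; the first is the within-model (parametric) one.
    """
    total_w = sum(weights)
    w = [x / total_w for x in weights]          # normalize weights
    mean = sum(wk * mk for wk, mk in zip(w, means))
    var = sum(wk * (vk + (mk - mean) ** 2)
              for wk, mk, vk in zip(w, means, variances))
    return mean, var
```

Two equally weighted models that disagree strongly about the mean therefore yield a larger total variance than either model alone, which is exactly the signature of conceptual uncertainty dominating parametric uncertainty.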
Uncertainty and error in complex plasma chemistry models
Turner, Miles M.
2015-06-01
Chemistry models that include dozens of species and hundreds to thousands of reactions are common in low-temperature plasma physics. The rate constants used in such models are uncertain, because they are obtained from some combination of experiments and approximate theories. Since the predictions of these models are a function of the rate constants, these predictions must also be uncertain. However, systematic investigations of the influence of uncertain rate constants on model predictions are rare to non-existent. In this work we examine a particular chemistry model, for helium-oxygen plasmas. This chemistry is of topical interest because of its relevance to biomedical applications of atmospheric pressure plasmas. We trace the primary sources for every rate constant in the model, and hence associate an error bar (or equivalently, an uncertainty) with each. We then use a Monte Carlo procedure to quantify the uncertainty in predicted plasma species densities caused by the uncertainty in the rate constants. Under the conditions investigated, the range of uncertainty in most species densities is a factor of two to five. However, the uncertainty can vary strongly for different species, over time, and with other plasma conditions. There are extreme (pathological) cases where the uncertainty is more than a factor of ten. One should therefore be cautious in drawing any conclusion from plasma chemistry modelling, without first ensuring that the conclusion in question survives an examination of the related uncertainty.
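The Monte Carlo procedure described can be sketched generically. In the example below each rate constant is perturbed log-uniformly within [k/f, k*f], one common way to encode a multiplicative error bar; the paper does not specify its sampling distributions, so this choice, the function names, and the two-reaction toy model (ionization source balanced by a loss process) are all illustrative assumptions, not the author's actual chemistry.

```python
import math
import random

def propagate(k_nominal, k_factor, model, n_samples=5000, seed=0):
    """Monte Carlo propagation of rate-constant uncertainty.

    `k_nominal` maps reaction names to nominal rate constants and
    `k_factor` to their multiplicative error bars f (uncertainty
    "factor of f" up or down).  Each sample perturbs every constant
    log-uniformly in [k/f, k*f] and evaluates `model`; the central 95%
    interval of the predictions is returned.
    """
    rng = random.Random(seed)
    outs = []
    for _ in range(n_samples):
        ks = {name: k * math.exp(rng.uniform(-math.log(k_factor[name]),
                                             math.log(k_factor[name])))
              for name, k in k_nominal.items()}
        outs.append(model(ks))
    outs.sort()
    return outs[int(0.025 * n_samples)], outs[int(0.975 * n_samples)]

def toy_model(k):
    """Hypothetical steady-state density: production by 'ion' balanced
    by first-order 'loss', with fixed background densities."""
    n_e, n_g = 1e16, 1e24   # illustrative electron / neutral densities
    return k["ion"] * n_e * n_g / k["loss"]
```

Even with only two constants uncertain by a factor of two each, the resulting 95% interval on the predicted density spans roughly an order of magnitude, which makes the abstract's caution about drawing conclusions without an uncertainty analysis concrete.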
Uncertainty analysis of fluvial outcrop data for stochastic reservoir modelling
Energy Technology Data Exchange (ETDEWEB)
Martinius, A.W. [Statoil Research Centre, Trondheim (Norway); Naess, A. [Statoil Exploration and Production, Stjoerdal (Norway)
2005-07-01
Uncertainty analysis and reduction is a crucial part of stochastic reservoir modelling and fluid flow simulation studies. Outcrop analogue studies are often employed to define reservoir model parameters, but the analysis of uncertainties associated with sedimentological information is often neglected. In order to define uncertainty inherent in outcrop data more accurately, this paper presents geometrical and dimensional data from individual point bars and braid bars, from part of the low net:gross outcropping Tortola fluvial system (Spain), that have been subjected to a quantitative and qualitative assessment. Four types of primary outcrop uncertainties are discussed: (1) the definition of the conceptual depositional model; (2) the number of observations on sandstone body dimensions; (3) the accuracy and representativeness of observed three-dimensional (3D) sandstone body size data; and (4) sandstone body orientation. Uncertainties related to the depositional model are the most difficult to quantify but can be appreciated qualitatively if processes of deposition related to scales of time and the general lack of information are considered. Application of the N
Aggregating and Disaggregating Flexibility Objects
DEFF Research Database (Denmark)
Siksnys, Laurynas; Valsomatzis, Emmanouil; Hose, Katja
2015-01-01
In many scientific and commercial domains we encounter flexibility objects, i.e., objects with explicit flexibilities in a time and an amount dimension (e.g., energy or product amount). Applications of flexibility objects require novel and efficient techniques capable of handling large amounts...... energy data management and discuss strategies for aggregation and disaggregation of flex-objects while retaining flexibility. This paper further extends these approaches beyond flex-objects originating from energy consumption by additionally considering flex-objects originating from energy production...... and aiming at energy balancing during aggregation. In more detail, this paper considers the complete life cycle of flex-objects: aggregation, disaggregation, associated requirements, efficient incremental computation, and balance aggregation techniques. Extensive experiments based on real-world data from...
Dependability in Aggregation by Averaging
Jesus, Paulo; Almeida, Paulo Sérgio
2010-01-01
Aggregation is an important building block of modern distributed applications, allowing the determination of meaningful properties (e.g. network size, total storage capacity, average load, majorities, etc.) that are used to direct the execution of the system. However, the majority of the existing aggregation algorithms exhibit relevant dependability issues, when prospecting their use in real application environments. In this paper, we reveal some dependability issues of aggregation algorithms based on iterative averaging techniques, giving some directions to solve them. This class of algorithms is considered robust (when compared to common tree-based approaches), being independent from the used routing topology and providing an aggregation result at all nodes. However, their robustness is strongly challenged and their correctness often compromised, when changing the assumptions of their working environment to more realistic ones. The correctness of this class of algorithms relies on the maintenance of a funda...
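The class of algorithms discussed, iterative averaging, is easy to exhibit concretely. The sketch below is a minimal push-sum gossip averager on a fully connected overlay; it is an illustrative example of the technique, not the specific algorithms analyzed in the paper. The `drop_prob` parameter (an assumption added here) demonstrates the dependability issue the abstract raises: a lost message destroys "mass conservation" and silently biases every node's estimate.

```python
import random

def push_sum(values, rounds=200, drop_prob=0.0, seed=0):
    """Push-sum gossip averaging.

    Every node keeps a (sum, weight) pair starting at (value, 1.0).
    Each round, every node sends half of both quantities to a uniformly
    random node; each node's sum/weight ratio converges to the global
    average as long as the totals of sum and weight are conserved.
    With drop_prob > 0, messages are occasionally lost, mass
    conservation breaks, and the estimates drift from the true average.
    """
    rng = random.Random(seed)
    n = len(values)
    s, w = list(map(float, values)), [1.0] * n
    for _ in range(rounds):
        for i in range(n):
            half_s, half_w = s[i] / 2.0, w[i] / 2.0
            s[i] -= half_s
            w[i] -= half_w
            j = rng.randrange(n)
            if rng.random() >= drop_prob:   # message delivered
                s[j] += half_s
                w[j] += half_w
    return [si / wi for si, wi in zip(s, w)]
```

Note that the result is available at every node, the robustness property the abstract mentions, and that no node ever needs a tree or fixed routing topology; the fragility only appears once message loss or churn violates the conservation assumption.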
Dynamic co-movements of stock market returns, implied volatility and policy uncertainty
Antonakakis, N.; Chatziantoniou, I.; Filis, George
2013-01-01
We examine time-varying correlations among stock market returns, implied volatility and policy uncertainty. Our findings suggest that correlations are indeed time-varying and sensitive to oil demand shocks and US recessions. Highlights: We examine dynamic correlations of stock market returns, implied volatility and policy uncertainty. Dynamic correlations reveal heterogeneous patterns during US recessions. Aggregate demand oil price shocks and US recessions affect dynamic correlations. A rise...
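A time-varying correlation of the kind examined here is usually estimated with a dynamic conditional correlation (DCC-GARCH type) model; the sketch below is a deliberately simpler stand-in, a rolling-window Pearson correlation, that produces the same qualitative object: a correlation allowed to change over time. The function name and window length are illustrative assumptions.

```python
import math

def rolling_corr(x, y, window):
    """Rolling-window Pearson correlation between two series.

    Returns one correlation per window position (len(x) - window + 1
    values), a crude but transparent way to see time variation in
    co-movements between, e.g., returns and policy uncertainty.
    """
    out = []
    for t in range(window, len(x) + 1):
        xs, ys = x[t - window:t], y[t - window:t]
        mx, my = sum(xs) / window, sum(ys) / window
        cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
        vx = sum((a - mx) ** 2 for a in xs)
        vy = sum((b - my) ** 2 for b in ys)
        out.append(cov / math.sqrt(vx * vy))
    return out
```

Plotting the output against time and shading recession periods is the standard way such heterogeneous correlation patterns are inspected, though a proper DCC estimator would additionally model the conditional volatilities.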
Model for amorphous aggregation processes
Stranks, Samuel D.; Ecroyd, Heath; van Sluyter, Steven; Waters, Elizabeth J.; Carver, John A.; von Smekal, Lorenz
2009-11-01
The amorphous aggregation of proteins is associated with many phenomena, ranging from the formation of protein wine haze to the development of cataract in the eye lens and the precipitation of recombinant proteins during their expression and purification. While much literature exists describing models for linear protein aggregation, such as amyloid fibril formation, there are few reports of models which address amorphous aggregation. Here, we propose a model to describe the amorphous aggregation of proteins which is also more widely applicable to other situations where a similar process occurs, such as in the formation of colloids and nanoclusters. As first applications of the model, we have tested it against experimental turbidimetry data of three proteins relevant to the wine industry and biochemistry, namely, thaumatin, a thaumatin-like protein, and α-lactalbumin. The model is very robust and describes amorphous experimental data to a high degree of accuracy. Details about the aggregation process, such as shape parameters of the aggregates and rate constants, can also be extracted.
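The kind of turbidity-versus-time data such a model is fitted to can be generated from a minimal kinetic sketch. The scheme below, second-order monomer loss dA/dt = -k*A^2 with turbidity taken proportional to aggregated mass a0 - A(t), is NOT the paper's actual amorphous-aggregation model; it is an illustrative placeholder showing how a rate constant k governs the shape of the curve, integrated with explicit Euler.

```python
def turbidity_curve(k, a0, t_end, dt=0.01):
    """Explicit-Euler integration of a generic pairwise-collision
    aggregation scheme, dA/dt = -k*A**2, where A is the remaining
    monomer concentration.  Turbidity is taken proportional to the
    aggregated mass a0 - A(t).  Returns a list of (t, turbidity).
    """
    a = float(a0)
    curve = []
    for step in range(int(round(t_end / dt)) + 1):
        curve.append((step * dt, a0 - a))
        a -= dt * k * a * a     # Euler update of monomer concentration
    return curve
```

Because dA/dt = -k*A^2 has the closed form A(t) = a0 / (1 + k*a0*t), the numerical curve can be checked against the exact solution, and fitting k to measured turbidity is a one-parameter least-squares problem, analogous in spirit to extracting rate constants from turbidimetry data as the abstract describes.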