Spatial Uncertainty Model for Visual Features Using a Kinect™ Sensor
Directory of Open Access Journals (Sweden)
Park, Jae-Han; Shin, Yong-Deuk; Bae, Ji-Hun; Baeg, Moon-Hong
2012-06-01
Full Text Available This study proposes a mathematical uncertainty model for the spatial measurement of visual features using Kinect™ sensors. This model can provide qualitative and quantitative analysis for the utilization of Kinect™ sensors as 3D perception sensors. In order to achieve this objective, we derived the propagation relationship of the uncertainties between the disparity image space and the real Cartesian space with the mapping function between the two spaces. Using this propagation relationship, we obtained the mathematical model for the covariance matrix of the measurement error, which represents the uncertainty in the spatial position of visual features from Kinect™ sensors. In order to derive the quantitative model of spatial uncertainty for visual features, we estimated the covariance matrix in the disparity image space using collected visual feature data. Further, we computed the spatial uncertainty information by applying the covariance matrix in the disparity image space and the calibrated sensor parameters to the proposed mathematical model. This spatial uncertainty model was verified by comparing the uncertainty ellipsoids for spatial covariance matrices and the distribution of scattered matching visual features. We expect that this spatial uncertainty model and its analyses will be useful in various Kinect™ sensor applications.
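The propagation step described above (a disparity-space covariance mapped into Cartesian space through the Jacobian of the mapping function) can be sketched as follows. This is a minimal illustration assuming a standard pinhole/disparity model Z = fb/d; the focal length, baseline, principal point, and pixel/disparity covariance values are placeholders, not calibrated Kinect parameters.

```python
import numpy as np

def propagate_covariance(u, v, d, cov_uvd, f=580.0, b=0.075, cx=320.0, cy=240.0):
    """First-order propagation Sigma_xyz = J Sigma_uvd J^T, where J is the
    Jacobian of the (u, v, d) -> (x, y, z) mapping for a pinhole/disparity
    model: z = f*b/d, x = (u-cx)*z/f, y = (v-cy)*z/f.

    f: focal length in pixels, b: baseline in metres, (cx, cy): principal
    point -- illustrative values, not calibrated sensor parameters.
    """
    z = f * b / d
    # Partial derivatives of (x, y, z) with respect to (u, v, d)
    J = np.array([
        [z / f, 0.0,   -(u - cx) * z / (f * d)],
        [0.0,   z / f, -(v - cy) * z / (f * d)],
        [0.0,   0.0,   -z / d],
    ])
    return J @ cov_uvd @ J.T

# Pixel/disparity covariance as might be estimated from matched-feature scatter
cov_uvd = np.diag([0.5**2, 0.5**2, 0.25**2])
cov_xyz = propagate_covariance(400.0, 260.0, 40.0, cov_uvd)
```

The eigenvectors and eigenvalues of `cov_xyz` give the axes of the uncertainty ellipsoid that the abstract compares against the scatter of matched features.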
Contextual Multi-armed Bandits under Feature Uncertainty
Energy Technology Data Exchange (ETDEWEB)
Yun, Seyoung [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Nam, Jun Hyun [Korea Advanced Inst. Science and Technology (KAIST), Daejeon (Korea, Republic of); Mo, Sangwoo [Korea Advanced Inst. Science and Technology (KAIST), Daejeon (Korea, Republic of); Shin, Jinwoo [Korea Advanced Inst. Science and Technology (KAIST), Daejeon (Korea, Republic of)
2017-03-03
We study contextual multi-armed bandit problems under linear realizability on rewards and uncertainty (or noise) on features. For the case of identical noise on features across actions, we propose an algorithm, coined NLinRel, having an O(T⁷/₈(log(dT)+K√d)) regret bound for T rounds, K actions, and d-dimensional feature vectors. Next, for the case of non-identical noise, we observe that popular linear hypotheses, including that of NLinRel, cannot achieve such a sub-linear regret. Instead, under the assumption of Gaussian feature vectors, we prove that a greedy algorithm has an O(T²/₃√log d) regret bound with respect to the optimal linear hypothesis. Utilizing our theoretical understanding of the Gaussian case, we also design a practical variant of NLinRel, coined Universal-NLinRel, for arbitrary feature distributions. It first runs NLinRel to find the ‘true’ coefficient vector using feature uncertainties and then adjusts it to minimize its regret using the statistical feature information. We justify the performance of Universal-NLinRel on both synthetic and real-world datasets.
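The setting can be made concrete with a minimal simulation of a greedy linear bandit under feature noise. This is not NLinRel or Universal-NLinRel; it is an illustrative ridge-regression greedy policy with Gaussian noise added to the observed features, and all problem sizes and noise levels are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
d, K, T = 5, 4, 2000
theta = rng.normal(size=d) / np.sqrt(d)   # unknown reward coefficients
true_feats = rng.normal(size=(K, d))      # true (unobserved) arm features
noise_sd = 0.3                            # feature observation noise level

A = np.eye(d)        # ridge-regularized Gram matrix
b_vec = np.zeros(d)  # accumulated reward-weighted features
regret = 0.0
for t in range(T):
    # The learner only sees noisy versions of the arm features
    obs = true_feats + noise_sd * rng.normal(size=(K, d))
    theta_hat = np.linalg.solve(A, b_vec)   # ridge estimate of theta
    arm = int(np.argmax(obs @ theta_hat))   # greedy arm choice
    reward = true_feats[arm] @ theta + 0.1 * rng.normal()
    A += np.outer(obs[arm], obs[arm])
    b_vec += reward * obs[arm]
    regret += np.max(true_feats @ theta) - true_feats[arm] @ theta
```

Note the errors-in-variables effect: because `obs` rather than `true_feats` enters the regression, the estimate is biased, which is exactly the difficulty the paper's algorithms are designed to handle.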
Using spatial uncertainty to manipulate the size of the attention focus.
Huang, Dan; Xue, Linyan; Wang, Xin; Chen, Yao
2016-09-01
Preferentially processing behaviorally relevant information is vital for primate survival. In visuospatial attention studies, manipulating the spatial extent of attention focus is an important question. Although many studies have claimed to successfully adjust attention field size by either varying the uncertainty about the target location (spatial uncertainty) or adjusting the size of the cue orienting the attention focus, no systematic studies have assessed and compared the effectiveness of these methods. We used a multiple cue paradigm with 2.5° and 7.5° rings centered around a target position to measure the cue size effect, while the spatial uncertainty levels were manipulated by changing the number of cueing positions. We found that spatial uncertainty had a significant impact on reaction time during target detection, while the cue size effect was less robust. We also carefully varied the spatial scope of potential target locations within a small or large region and found that this amount of variation in spatial uncertainty can also significantly influence target detection speed. Our results indicate that adjusting spatial uncertainty is more effective than varying cue size when manipulating attention field size.
Optimum sizing of wind-battery systems incorporating resource uncertainty
International Nuclear Information System (INIS)
Roy, Anindita; Kedare, Shireesh B.; Bandyopadhyay, Santanu
2010-01-01
The inherent uncertainty of the wind is a major impediment for successful implementation of wind based power generation technology. A methodology has been proposed in this paper to incorporate wind speed uncertainty in sizing wind-battery system for isolated applications. The uncertainty associated with the wind speed is incorporated using chance constraint programming approach. For a pre-specified reliability requirement, a deterministic equivalent energy balance equation may be derived from the chance constraint that allows time series simulation of the entire system. This results in a generation of the entire set of feasible design options, satisfying different system level constraints, on a battery capacity vs. generator rating diagram, also known as the design space. The proposed methodology highlights the trade-offs between the wind turbine rating, rotor diameter and the battery size for a given reliability of power supply. The optimum configuration is chosen on the basis of the minimum cost of energy (US$/kWh). It is shown with the help of illustrative examples that the proposed methodology is generic and flexible to incorporate alternate sub-component models. (author)
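The deterministic-equivalent idea (replacing the uncertain wind speed by a quantile consistent with the reliability target, then simulating the energy balance to map out the design space) can be sketched roughly as follows. The cubic turbine curve, Gaussian wind model, 24-hour horizon, and all numerical values are illustrative assumptions, not the paper's models.

```python
from statistics import NormalDist
import numpy as np

def wind_power(v, rating_kw, v_in=3.0, v_rated=12.0):
    """Illustrative cubic turbine curve between cut-in and rated speed."""
    frac = np.clip((v - v_in) / (v_rated - v_in), 0.0, 1.0)
    return rating_kw * frac ** 3

def is_feasible(rating_kw, batt_kwh, v_mean, v_sd, demand_kw, reliability=0.9):
    """Chance constraint P(supply >= demand) >= reliability, made
    deterministic by evaluating wind at its (1 - reliability) quantile,
    then simulating the hourly battery energy balance."""
    v_q = NormalDist(v_mean, v_sd).inv_cdf(1.0 - reliability)
    soc = batt_kwh  # state of charge, start fully charged
    for _ in range(24):
        soc += wind_power(v_q, rating_kw) - demand_kw  # hourly balance
        if soc < 0.0:
            return False
        soc = min(soc, batt_kwh)  # battery cannot overcharge
    return True

# Design space: (rating, battery) pairs that meet the reliability target
grid = [(r, b) for r in (5, 10, 20) for b in (10, 50, 100)
        if is_feasible(r, b, v_mean=8.0, v_sd=2.0, demand_kw=4.0)]
```

The paper then picks, from the feasible set, the configuration minimizing cost of energy (US$/kWh); that costing step is omitted here.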
Uncertainties in effective dose estimates of adult CT head scans: The effect of head size
International Nuclear Information System (INIS)
Gregory, Kent J.; Bibbo, Giovanni; Pattison, John E.
2009-01-01
Purpose: This study is an extension of a previous study where the uncertainties in effective dose estimates from adult CT head scans were calculated using four CT effective dose estimation methods, three of which were computer programs (CT-EXPO, CTDOSIMETRY, and IMPACTDOSE) and one that involved the dose length product (DLP). However, that study did not include the uncertainty contribution due to variations in head sizes. Methods: The uncertainties due to head size variations were estimated by first using the computer program data to calculate doses to small and large heads. These doses were then compared with doses calculated for the phantom heads used by the computer programs. An uncertainty was then assigned based on the difference between the small and large head doses and the doses of the phantom heads. Results: The uncertainties due to head size variations alone were found to be between 4% and 26% depending on the method used and the patient gender. When these uncertainties were included with the results of the previous study, the overall uncertainties in effective dose estimates (stated at the 95% confidence interval) were 20%-31% (CT-EXPO), 15%-30% (CTDOSIMETRY), 20%-36% (IMPACTDOSE), and 31%-40% (DLP). Conclusions: For the computer programs, the lower overall uncertainties were still achieved when measured values of CT dose index were used rather than tabulated values. For DLP dose estimates, head size variations made the largest (for males) and second largest (for females) contributions to effective dose uncertainty. An improvement in the uncertainty of the DLP method dose estimates will be achieved if head size variation can be taken into account.
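Combining the head-size contribution with the previously quantified components follows the usual treatment of independent error sources: addition in quadrature. A small sketch with illustrative percentages (not the paper's component values):

```python
import math

def combined_uncertainty(components):
    """Combine independent relative uncertainty components (in %) in
    quadrature, the standard rule for uncorrelated error sources stated
    at the same confidence level."""
    return math.sqrt(sum(c * c for c in components))

# Hypothetical components: the previous study's combined uncertainty
# plus the new head-size term (values invented for illustration).
previous, head_size = 18.0, 10.0
overall = combined_uncertainty([previous, head_size])
```

This is why a 4%-26% head-size term can move the overall uncertainty of some methods by only a few percentage points: in quadrature, the largest component dominates.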
A comparative analysis of DNA barcode microarray feature size
Directory of Open Access Journals (Sweden)
Smith Andrew M
2009-10-01
Full Text Available Abstract Background Microarrays are an invaluable tool in many modern genomic studies. It is generally perceived that decreasing the size of microarray features leads to arrays with higher resolution (due to greater feature density), but this increase in resolution can compromise sensitivity. Results We demonstrate that barcode microarrays with smaller features are equally capable of detecting variation in DNA barcode intensity when compared to larger feature sizes within a specific microarray platform. The barcodes used in this study are the well-characterized set derived from the Yeast KnockOut (YKO) collection used for screens of pooled yeast (Saccharomyces cerevisiae) deletion mutants. We treated these pools with the glycosylation inhibitor tunicamycin as a test compound. Three generations of barcode microarrays at 30, 8 and 5 μm feature sizes independently identified the primary target of tunicamycin to be ALG7. Conclusion We show that the data obtained with the 5 μm feature size is of comparable quality to the 30 μm size and propose that further shrinking of features could yield barcode microarrays with equal or greater resolving power and, more importantly, higher density.
Uncertainty budget in internal monostandard NAA for small and large size samples analysis
International Nuclear Information System (INIS)
Dasari, K.B.; Acharya, R.
2014-01-01
Total uncertainty budget evaluation on the determined concentration value is important under a quality assurance programme. Concentration calculation in NAA is carried out by relative NAA or by the k0-based internal monostandard NAA (IM-NAA) method. The IM-NAA method has been used for small and large sample analysis of clay potteries. An attempt was made to identify the uncertainty components in IM-NAA, and the uncertainty budget for La in both small and large size samples has been evaluated and compared. (author)
Design Features and Technology Uncertainties for the Next Generation Nuclear Plant
Energy Technology Data Exchange (ETDEWEB)
John M. Ryskamp; Phil Hildebrandt; Osamu Baba; Ron Ballinger; Robert Brodsky; Hans-Wolfgang Chi; Dennis Crutchfield; Herb Estrada; Jeane-Claude Garnier; Gerald Gordon; Richard Hobbins; Dan Keuter; Marilyn Kray; Philippe Martin; Steve Melancon; Christian Simon; Henry Stone; Robert Varrin; Werner von Lensa
2004-06-01
This report presents the conclusions, observations, and recommendations of the Independent Technology Review Group (ITRG) regarding design features and important technology uncertainties associated with very-high-temperature nuclear system concepts for the Next Generation Nuclear Plant (NGNP). The ITRG performed its reviews during the period November 2003 through April 2004.
Mama Software Features: Uncertainty Testing
Energy Technology Data Exchange (ETDEWEB)
Ruggiero, Christy E. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Porter, Reid B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2014-05-30
This document reviews how the uncertainty in the calculations is being determined with test image data. The results of this testing give an ‘initial uncertainty’ number that can be used to estimate the ‘back end’ uncertainty in digital image quantification. Statisticians are refining these numbers as part of a UQ effort.
Uncertainties of size measurements in electron microscopy characterization of nanomaterials in foods
DEFF Research Database (Denmark)
Dudkiewicz, Agnieszka; Boxall, Alistair B. A.; Chaudhry, Qasim
2015-01-01
Electron microscopy is a recognized standard tool for nanomaterial characterization, and recommended by the European Food Safety Authority for the size measurement of nanomaterials in food. Despite this, little data have been published assessing the reliability of the method, especially for size...... measurement of nanomaterials characterized by a broad size distribution and/or added to food matrices. This study is a thorough investigation of the measurement uncertainty when applying electron microscopy for size measurement of engineered nanomaterials in foods. Our results show that the number of measured...
Fridlind, A. M.; Atlas, R.; van Diedenhoven, B.; Ackerman, A. S.; Rind, D. H.; Harrington, J. Y.; McFarquhar, G. M.; Um, J.; Jackson, R.; Lawson, P.
2017-12-01
It has recently been suggested that seeding synoptic cirrus could have desirable characteristics as a geoengineering approach, but surprisingly large uncertainties remain in the fundamental parameters that govern cirrus properties, such as mass accommodation coefficient, ice crystal physical properties, aggregation efficiency, and ice nucleation rate from typical upper tropospheric aerosol. Only one synoptic cirrus model intercomparison study has been published to date, and studies that compare the shapes of observed and simulated ice size distributions remain sparse. Here we amend a recent model intercomparison setup using observations during two 2010 SPARTICUS campaign flights. We take a quasi-Lagrangian column approach and introduce an ensemble of gravity wave scenarios derived from collocated Doppler cloud radar retrievals of vertical wind speed. We use ice crystal properties derived from in situ cloud particle images, for the first time allowing smoothly varying and internally consistent treatments of nonspherical ice capacitance, fall speed, gravitational collection, and optical properties over all particle sizes in our model. We test two new parameterizations for mass accommodation coefficient as a function of size, temperature and water vapor supersaturation, and several ice nucleation scenarios. Comparison of results with in situ ice particle size distribution data, corrected using state-of-the-art algorithms to remove shattering artifacts, indicate that poorly constrained uncertainties in the number concentration of crystals smaller than 100 µm in maximum dimension still prohibit distinguishing which parameter combinations are more realistic. When projected area is concentrated at such sizes, the only parameter combination that reproduces observed size distribution properties uses a fixed mass accommodation coefficient of 0.01, on the low end of recently reported values. No simulations reproduce the observed abundance of such small crystals when the
Chromospheric rotation. II. Dependence on the size of chromospheric features
Energy Technology Data Exchange (ETDEWEB)
Azzarelli, L; Casalini, P; Cerri, S; Denoth, F [Consiglio Nazionale delle Ricerche, Pisa (Italy). Ist. di Elaborazione della Informazione
1979-08-01
The dependence of solar rotation on the size of the chromospheric tracers is considered. On the basis of an analysis of Ca II K₃ daily filtergrams taken in the period 8 May-14 August, 1972, chromospheric features can be divided into two classes according to their size. Features with sizes falling into the range 24 000-110 000 km can be identified with network elements, while those falling into the range 120 000-300 000 km correspond to active regions, or brightness features of comparable size present at high latitudes. The rotation rate is determined separately for the two families of chromospheric features by means of a cross-correlation technique that directly yields the average daily displacement of tracers due to rotation. Before computing the cross-correlation functions, the chromospheric brightness data were filtered with appropriate bandpass and highpass filters to separate spatial periodicities whose wavelengths fall into the two ranges of size characteristic of the network pattern and of the activity centers. A difference of less than 1% between the rotation rates of the two families of chromospheric features has been found. This indicates a substantial corotation at chromospheric levels of different short-lived features, both related to solar activity and controlled by the convective supergranular motions.
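The core of the cross-correlation technique (finding the lag that best aligns brightness profiles taken a day apart, which gives the average daily displacement due to rotation) can be sketched as below, using synthetic data rather than Ca II K filtergrams.

```python
import numpy as np

def daily_shift(profile_day1, profile_day2):
    """Lag (in samples) that maximizes the cross-correlation of two
    mean-subtracted brightness profiles taken one day apart."""
    a = profile_day1 - profile_day1.mean()
    b = profile_day2 - profile_day2.mean()
    corr = np.correlate(b, a, mode="full")
    # index (len(a)-1) corresponds to zero lag in 'full' mode
    return int(np.argmax(corr)) - (len(a) - 1)

# Synthetic check: the same feature pattern shifted by 13 samples
rng = np.random.default_rng(1)
x = rng.normal(size=500)
assert daily_shift(x, np.roll(x, 13)) == 13
```

In the paper, this displacement is computed after bandpass/highpass filtering so that network elements and activity centers are tracked separately.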
Speeded induction under uncertainty: the influence of multiple categories and feature conjunctions.
Newell, Ben R; Paton, Helen; Hayes, Brett K; Griffiths, Oren
2010-12-01
When people are uncertain about the category membership of an item (e.g., Is it a dog or a dingo?), research shows that they tend to rely only on the dominant or most likely category when making inductions (e.g., How likely is it to befriend me?). An exception has been reported using speeded induction judgments where participants appeared to use information from multiple categories to make inductions (Verde, Murphy, & Ross, 2005). In two speeded induction studies, we found that participants tended to rely on the frequency with which features co-occurred when making feature predictions, independently of category membership. This pattern held whether categories were considered implicitly (Experiment 1) or explicitly (Experiment 2) prior to feature induction. The results converge with other recent work suggesting that people often rely on feature conjunction information, rather than category boundaries, when making inductions under uncertainty.
International Nuclear Information System (INIS)
Ngamroo, Issarachai
2010-01-01
It is well known that superconducting magnetic energy storage (SMES) is able to quickly exchange active and reactive power with the power system. The SMES is expected to be a smart storage device for power system stabilization. Although the stabilizing effect of SMES is significant, the SMES is quite costly. In particular, the superconducting magnetic coil size, which is the heart of the SMES, must be carefully selected. On the other hand, various generation and load changes, unpredictable network structure, etc., cause system uncertainties. A power controller of the SMES that is designed without considering such uncertainties may not tolerate them and may lose its stabilizing effect. To overcome these problems, this paper proposes a new design of a robust SMES controller that takes coil size and system uncertainties into account. The structure of the active and reactive power controllers is a 1st-order lead-lag compensator. Without the need for an exact mathematical representation, system uncertainties are modeled by an inverse input multiplicative perturbation. The optimization problem of the control parameters is formulated so as to avoid the difficult trade-off between damping performance and robustness. Particle swarm optimization is used to solve for the optimal parameters at each coil size automatically. Based on a normalized integral square error index and the consideration of the coil current constraint, the robust SMES with the smallest coil size that still provides a satisfactory stabilizing effect can be achieved. Simulation studies in a two-area four-machine interconnected power system show the superior robustness of the proposed robust SMES with the smallest coil size under various operating conditions over the non-robust SMES with large coil size.
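The particle swarm step (searching the lead-lag compensator parameters for the minimum of a simulation-based index) can be sketched with a minimal PSO on a stand-in objective. The quadratic surrogate, swarm size, and coefficients below are illustrative; the paper's index is computed from full power system simulations.

```python
import numpy as np

rng = np.random.default_rng(3)

def ise(params):
    """Surrogate integral-square-error surface (a stand-in for the paper's
    simulation-based index); its minimum is at (1.0, -0.5)."""
    k, t = params[..., 0], params[..., 1]
    return (k - 1.0) ** 2 + 2.0 * (t + 0.5) ** 2

# Minimal particle swarm: 20 particles, 60 iterations, standard coefficients
n, iters, w, c1, c2 = 20, 60, 0.7, 1.5, 1.5
pos = rng.uniform(-3.0, 3.0, size=(n, 2))
vel = np.zeros((n, 2))
pbest, pbest_val = pos.copy(), ise(pos)
gbest = pbest[np.argmin(pbest_val)].copy()
for _ in range(iters):
    r1, r2 = rng.random((n, 2)), rng.random((n, 2))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    val = ise(pos)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()
```

In the paper this inner search is repeated for each candidate coil size, and the smallest coil that still satisfies the performance index and current constraint is selected.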
Uncertainty Analysis via Failure Domain Characterization: Polynomial Requirement Functions
Crespo, Luis G.; Munoz, Cesar A.; Narkawicz, Anthony J.; Kenny, Sean P.; Giesy, Daniel P.
2011-01-01
This paper proposes an uncertainty analysis framework based on the characterization of the uncertain parameter space. This characterization enables the identification of worst-case uncertainty combinations and the approximation of the failure and safe domains with a high level of accuracy. Because these approximations are composed of subsets of readily computable probability, they enable the calculation of arbitrarily tight upper and lower bounds to the failure probability. A Bernstein expansion approach is used to size hyper-rectangular subsets, while a sum-of-squares programming approach is used to size quasi-ellipsoidal subsets. These methods are applicable to requirement functions whose functional dependency on the uncertainty is a known polynomial. Some of the most prominent features of the methodology are the substantial desensitization of the calculations from the uncertainty model assumed (i.e., the probability distribution describing the uncertainty) as well as the accommodation of changes in such a model with a practically insignificant amount of computational effort.
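The Bernstein expansion idea, in one dimension: the Bernstein coefficients of a polynomial on the unit interval bound its range, so a box can be certified as entirely safe or entirely failed, or left for subdivision. A sketch, assuming a scalar requirement p(x) > 0 on [0, 1] (the paper treats multivariate polynomials and hyper-rectangles):

```python
from math import comb

def bernstein_coeffs(a):
    """Bernstein coefficients of p(x) = sum_i a[i] * x**i on [0, 1].
    The range of p on [0, 1] lies within [min(c), max(c)]."""
    n = len(a) - 1
    return [sum(comb(j, i) / comb(n, i) * a[i] for i in range(j + 1))
            for j in range(n + 1)]

def box_classification(a):
    """Classify the unit box for the requirement p(x) > 0: 'safe' if
    provably positive, 'failure' if provably non-positive, otherwise
    'undecided' (such a box would be subdivided and re-tested)."""
    c = bernstein_coeffs(a)
    if min(c) > 0:
        return "safe"
    if max(c) <= 0:
        return "failure"
    return "undecided"

# p(x) = 1 + x - x^2 is positive everywhere on [0, 1]
print(box_classification([1.0, 1.0, -1.0]))  # -> safe
```

Summing the (readily computable) probabilities of certified boxes gives the lower and upper failure-probability bounds the abstract describes.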
Severtson, Dolores J
2015-02-01
Barriers to communicating the uncertainty of environmental health risks include preferences for certain information and low numeracy. Map features designed to communicate the magnitude and uncertainty of estimated cancer risk from air pollution were tested among 826 participants to assess how map features influenced judgments of adequacy and the intended communication goals. An uncertain versus certain visual feature was judged as less adequate but met both communication goals and addressed numeracy barriers. Expressing relative risk using words communicated uncertainty and addressed numeracy barriers but was judged as highly inadequate. Risk communication and visual cognition concepts were applied to explain findings.
Pupil size reflects the focus of feature-based attention.
Binda, Paola; Pereverzeva, Maria; Murray, Scott O
2014-12-15
We measured pupil size in adult human subjects while they selectively attended to one of two surfaces, bright and dark, defined by coherently moving dots. The two surfaces were presented at the same location; therefore, subjects could select the cued surface only on the basis of its features. With no luminance change in the stimulus, we find that pupil size was smaller when the bright surface was attended and larger when the dark surface was attended: an effect of feature-based (or surface-based) attention. With the same surfaces at nonoverlapping locations, we find a similar effect of spatial attention. The pupil size modulation cannot be accounted for by differences in eye position and by other variables known to affect pupil size such as task difficulty, accommodation, or the mere anticipation (imagery) of bright/dark stimuli. We conclude that pupil size reflects not just luminance or cognitive state, but the interaction between the two: it reflects which luminance level in the visual scene is relevant for the task at hand. Copyright © 2014 the American Physiological Society.
Directory of Open Access Journals (Sweden)
Ryusuke Konishi
2018-01-01
Full Text Available In deregulated electricity markets, minimizing the procurement costs of electricity is a critical problem for procurement agencies (PAs. However, uncertainty is inevitable for PAs and includes multiple factors such as market prices, photovoltaic system (PV output and demand. This study focuses on settlements in multi-period markets (a day-ahead market and a real-time market and the installation of energy storage systems (ESSs. ESSs can be utilized for time arbitrage in the day-ahead market and to reduce the purchasing/selling of electricity in the real-time market. However, the high costs of an ESS mean the size of the system needs to be minimized. In addition, when determining the size of an ESS, it is important to identify the size appropriate for each role. Therefore, we employ the concept of a “slow” and a “fast” ESS to quantify the size of a system’s role, based on the values associated with the various uncertainties. Because the problem includes nonlinearity and non-convexity, we solve it within a realistic computational burden by reformulating the problem using reasonable assumptions. Therefore, this study identifies the optimal sizes of ESSs and procurement, taking into account the uncertainties of prices in multi-period markets, PV output and demand.
Nanopatterned surface with adjustable area coverage and feature size fabricated by photocatalysis
Energy Technology Data Exchange (ETDEWEB)
Bai Yang; Zhang Yan; Li Wei; Zhou Xuefeng; Wang Changsong; Feng Xin [State Key Laboratory of Materials-oriented Chemical Engineering, Nanjing University of Technology, Nanjing, Jiangsu 210009 (China); Zhang Luzheng [Petroleum Research Recovery Center, New Mexico Institute of Mining and Technology, Socorro, NM 87801 (United States); Lu Xiaohua, E-mail: xhlu@njut.edu.cn [State Key Laboratory of Materials-oriented Chemical Engineering, Nanjing University of Technology, Nanjing, Jiangsu 210009 (China)
2009-08-30
We report an effective approach to fabricate nanopatterns of alkylsilane self-assembly monolayers (SAMs) with desirable coverage and feature size by gradient photocatalysis in TiO₂ aqueous suspension. Growth and photocatalytic degradation of octadecyltrichlorosilane (OTS) were combined to fabricate adjustable monolayered nanopatterns on mica sheet in this work. Systematic atomic force microscopy (AFM) analysis showed that OTS-SAMs that have similar area coverage with different feature sizes and similar feature size with different area coverages can be fabricated by this approach. Contact angle measurement was applied to confirm the gradually varied nanopatterns contributed to the gradient of UV light illumination. Since this approach is feasible for various organic SAMs and substrates, a versatile method was presented to prepare tunable nanopatterns with desirable area coverage and feature size in many applications, such as molecular and biomolecular recognition, sensor and electrode modification.
International Nuclear Information System (INIS)
Blackwood, Larry G.; Harker, Yale D.
2002-01-01
Assessment of active-mode measurement uncertainty in passive-active neutron radioassay systems used to measure Pu content in nuclear waste is severely hampered by lack of knowledge of the waste Pu particle size distribution, which is a major factor in determining bias in active-mode measurements. The sensitivity of active-mode measurements to particle size precludes using simulations or surrogate waste forms to estimate their uncertainty when the particle size distribution is not precisely known or is inadequately reproduced. An alternative approach is based on a statistical comparison of active- and passive-mode results in the mass range for which both active- and passive-mode analyses produce usable measurements. Because passive-mode measurements are not particularly sensitive to particle size effects, their uncertainty can be more easily assessed. Once bias-corrected, passive-mode measurements can serve as confirmatory measurements for the estimation of active-mode bias. Further statistical analysis of the measurement errors leads to precision estimates for the active mode
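A minimal numerical sketch of this confirmatory-measurement idea, using invented masses, bias factors, and noise levels rather than values from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical paired assays (grams Pu) in the mass range where both modes work:
true_mass = rng.uniform(5.0, 50.0, size=40)
passive = true_mass * 1.05 + rng.normal(0.0, 0.5, size=40)  # known calibration bias
active = true_mass * 0.80 + rng.normal(0.0, 1.5, size=40)   # particle-size-driven bias

# Step 1: bias-correct the passive results (correction factor assumed known from
# calibration), so they can serve as confirmatory measurements.
passive_corrected = passive / 1.05

# Step 2: estimate the active-mode bias as the mean active/passive ratio, with its
# standard error serving as a precision estimate for the active mode.
ratio = active / passive_corrected
bias_hat = ratio.mean()
bias_se = ratio.std(ddof=1) / np.sqrt(ratio.size)
```

With enough paired measurements, `bias_hat` recovers the simulated 0.80 active-mode bias, mirroring the role the passive mode plays in the abstract.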
Uncertainty in social dilemmas
Kwaadsteniet, Erik Willem de
2007-01-01
This dissertation focuses on social dilemmas, and more specifically, on environmental uncertainty in these dilemmas. Real-life social dilemma situations are often characterized by uncertainty. For example, fishermen mostly do not know the exact size of the fish population (i.e., resource size uncertainty). Several researchers have therefore asked themselves the question as to how such uncertainty influences people’s choice behavior. These researchers have repeatedly concluded that uncertainty...
Directory of Open Access Journals (Sweden)
Qing Fang
2015-01-01
Full Text Available In this paper, the flow mechanism of the injected fluid was studied through constant-pressure core displacement experiments. A constant pressure gradient in the deep formation was assumed, based on the characteristic pressure-gradient distribution between the injection and production wells and on the mobility of different polymer systems in deep reservoirs. Moreover, the steady-state flow rate was quantitatively analyzed, and the critical flow pressure gradient was measured for polymer solutions with different injection parameters in cores of different permeability. The results showed that the polymer hydrodynamic feature size increases with molecular weight. When the polymer concentration exceeds the critical overlap concentration, molecular chain entanglement occurs and further enlarges the hydrodynamic feature size. The polymer hydrodynamic feature size decreased as the salinity of the dilution water increased. When the median radius of the core pores and throats was 5–10 times the hydrodynamic feature size of the polymer system, the polymer solution had better compatibility with the microscopic pore structure of the reservoir. Estimates of polymer-solution mobility in porous media can be used to guide the polymer displacement plan and to select optimal injection parameters.
Energy Technology Data Exchange (ETDEWEB)
Darcel, C. (Itasca Consultants SAS (France)); Davy, P.; Le Goc, R.; Dreuzy, J.R. de; Bour, O. (Geosciences Rennes, UMR 6118 CNRS, Univ. de Rennes, Rennes (France))
2009-11-15
Investigations conducted over several years at Laxemar and Forsmark reveal the large heterogeneity of geological formations and the associated fracturing. This project aims at reinforcing the statistical DFN modeling framework adapted to a site scale, which therefore requires developing quantitative characterization methods adapted to the nature of the fracturing and to data availability. We start with the hypothesis that the maximum likelihood DFN model is a power-law model with a density term depending on orientations. This is supported both by the literature and, specifically here, by former analyses of the SKB data. This assumption is nevertheless thoroughly tested by analyzing the fracture trace and lineament maps. Fracture traces range roughly between 0.5 m and 10 m, i.e. the usual extent of the sampled outcrops. Between the raw data and the final data used to compute the fracture size distribution, from which the size distribution model will arise, several steps are necessary in order to correct the data for finite-size, topographical and sampling effects. More precisely, particular attention is paid to fracture segmentation status and to fracture linkage consistent with the expected DFN model. The fracture scaling trend observed over both sites finally displays a shape parameter k_t close to 1.2 with a density term (α_2d) between 1.4 and 1.8. Only two outcrops clearly display a different trend, with k_t close to 3 and a density term (α_2d) between 2 and 3.5. The fracture lineaments span the range between 100 meters and a few kilometers. Compared with the fracture trace maps, these datasets are already interpreted, and the linkage process developed previously does not need to be repeated. Except for the subregional lineament map from Forsmark, lineaments display a clear power-law trend with a shape parameter k_t equal to 3 and a density term between 2 and 4.5. The apparent variation in scaling exponent, from the outcrop scale (k_t = 1.2) on one side, to
Transportation and Production Lot-size for Sugarcane under Uncertainty of Machine Capacity
Directory of Open Access Journals (Sweden)
Sudtachat Kanchala
2018-01-01
Full Text Available The integrated transportation and production lot-size problem has an important effect on the total operating cost of sugar factories. In this research, we formulate a mathematical model that combines these two problems as a two-stage stochastic programming model. In the first stage, we determine the lot size of the transportation problem and allocate a fixed number of vehicles to transport sugarcane to the mill factory. Moreover, we consider uncertainty in the machine (mill) capacities. After the machine (mill) capacities are realized, in the second stage we determine the production lot size and decide how many units of sugarcane to hold in front of the mills, based on discrete random variables for the machine (mill) capacities. We investigate the model using a small-sized problem. The results show that the optimal solutions favor the closest fields and the lowest per-unit holding cost (at the fields) when transporting sugarcane to the mill factory. We compare our model with the worst-case model (full capacity); the results show that our model provides better efficiency.
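The two-stage structure can be illustrated with a toy instance (the fields, costs, and capacity scenarios below are invented for illustration): the first stage fixes shipments from each field, and the second stage mills up to the realized capacity and pays holding cost on the remainder.

```python
# Minimal two-stage stochastic lot-size sketch (hypothetical data, not the paper's).
TRANSPORT = {"near": 2.0, "far": 5.0}            # transport cost per unit shipped
HOLDING = 0.5                                    # holding cost per unit left at the mill
SCENARIOS = [(80, 0.3), (100, 0.5), (120, 0.2)]  # (mill capacity, probability)

def expected_cost(ship):
    """ship: dict field -> units shipped (the first-stage decision)."""
    total = sum(ship.values())
    first_stage = sum(q * TRANSPORT[f] for f, q in ship.items())
    # Second stage: mill min(total, capacity); hold whatever exceeds capacity.
    second_stage = sum(p * HOLDING * max(total - cap, 0) for cap, p in SCENARIOS)
    return first_stage + second_stage

# Enumerate a few candidate shipment plans and keep the cheapest in expectation.
plans = [{"near": 100, "far": 0}, {"near": 60, "far": 40}, {"near": 0, "far": 100}]
best = min(plans, key=expected_cost)
```

The cheapest plan ships everything from the closest field, consistent with the abstract's observation that optimal solutions favor nearby fields.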
No Effect of Featural Attention on Body Size Aftereffects
Directory of Open Access Journals (Sweden)
Ian David Stephen
2016-08-01
Full Text Available Prolonged exposure to images of narrow bodies has been shown to induce a perceptual aftereffect, such that observers’ point of subjective normality (PSN) for bodies shifts towards narrower bodies. The converse effect is shown for adaptation to wide bodies. In low-level stimuli, object attention (attention directed to the object) and spatial attention (attention directed to the location of the object) have been shown to increase the magnitude of visual aftereffects, while object-based attention enhances the adaptation effect in faces. It is not known whether featural attention (attention directed to a specific aspect of the object) affects the magnitude of adaptation effects in body stimuli. Here, we manipulate the attention of Caucasian observers to different featural information in body images, by asking them to rate the fatness or sex typicality of male and female bodies manipulated to appear fatter or thinner than average. PSNs for body fatness were taken at baseline and after adaptation, and a change in PSN (ΔPSN) was calculated. A body size adaptation effect was found, with observers who viewed fat bodies showing an increased PSN, and those exposed to thin bodies showing a reduced PSN. However, manipulations of featural attention to body fatness or sex typicality produced equivalent results, suggesting that featural attention may not affect the strength of the body size aftereffect.
No Effect of Featural Attention on Body Size Aftereffects.
Stephen, Ian D; Bickersteth, Chloe; Mond, Jonathan; Stevenson, Richard J; Brooks, Kevin R
2016-01-01
Prolonged exposure to images of narrow bodies has been shown to induce a perceptual aftereffect, such that observers' point of subjective normality (PSN) for bodies shifts toward narrower bodies. The converse effect is shown for adaptation to wide bodies. In low-level stimuli, object attention (attention directed to the object) and spatial attention (attention directed to the location of the object) have been shown to increase the magnitude of visual aftereffects, while object-based attention enhances the adaptation effect in faces. It is not known whether featural attention (attention directed to a specific aspect of the object) affects the magnitude of adaptation effects in body stimuli. Here, we manipulate the attention of Caucasian observers to different featural information in body images, by asking them to rate the fatness or sex typicality of male and female bodies manipulated to appear fatter or thinner than average. PSNs for body fatness were taken at baseline and after adaptation, and a change in PSN (ΔPSN) was calculated. A body size adaptation effect was found, with observers who viewed fat bodies showing an increased PSN, and those exposed to thin bodies showing a reduced PSN. However, manipulations of featural attention to body fatness or sex typicality produced equivalent results, suggesting that featural attention may not affect the strength of the body size aftereffect.
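The PSN and ΔPSN computation described above can be sketched numerically; the response proportions below are hypothetical, and the 50% crossing is found by simple linear interpolation rather than a full psychometric-function fit:

```python
import numpy as np

# Hypothetical proportions of "too fat" responses at each body-width level (% change
# from average width), before and after adaptation to fat bodies:
levels = np.array([-30, -20, -10, 0, 10, 20, 30])
p_base = np.array([0.02, 0.08, 0.25, 0.50, 0.75, 0.92, 0.98])   # baseline
p_adapt = np.array([0.01, 0.03, 0.10, 0.28, 0.55, 0.85, 0.96])  # post-adaptation

def psn(levels, p):
    """Point of subjective normality: level where the response rate crosses 0.5."""
    return float(np.interp(0.5, p, levels))

# Adaptation to fat bodies shifts the PSN towards fatter bodies (positive ΔPSN).
delta_psn = psn(levels, p_adapt) - psn(levels, p_base)
```

Here the baseline PSN sits at 0 (average width) and the post-adaptation PSN shifts to about +8, matching the direction of the aftereffect reported in the abstract.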
Uncertainty in social dilemmas
Kwaadsteniet, Erik Willem de
2007-01-01
This dissertation focuses on social dilemmas, and more specifically, on environmental uncertainty in these dilemmas. Real-life social dilemma situations are often characterized by uncertainty. For example, fishermen mostly do not know the exact size of the fish population (i.e., resource size
International Nuclear Information System (INIS)
Darcel, C.; Davy, P.; Le Goc, R.; Dreuzy, J.R. de; Bour, O.
2009-11-01
Investigations conducted over several years at Laxemar and Forsmark reveal the large heterogeneity of geological formations and the associated fracturing. This project aims at reinforcing the statistical DFN modeling framework adapted to a site scale, which therefore requires developing quantitative characterization methods adapted to the nature of the fracturing and to data availability. We start with the hypothesis that the maximum likelihood DFN model is a power-law model with a density term depending on orientations. This is supported both by the literature and, specifically here, by former analyses of the SKB data. This assumption is nevertheless thoroughly tested by analyzing the fracture trace and lineament maps. Fracture traces range roughly between 0.5 m and 10 m, i.e. the usual extent of the sampled outcrops. Between the raw data and the final data used to compute the fracture size distribution, from which the size distribution model will arise, several steps are necessary in order to correct the data for finite-size, topographical and sampling effects. More precisely, particular attention is paid to fracture segmentation status and to fracture linkage consistent with the expected DFN model. The fracture scaling trend observed over both sites finally displays a shape parameter k_t close to 1.2 with a density term (α_2d) between 1.4 and 1.8. Only two outcrops clearly display a different trend, with k_t close to 3 and a density term (α_2d) between 2 and 3.5. The fracture lineaments span the range between 100 meters and a few kilometers. Compared with the fracture trace maps, these datasets are already interpreted, and the linkage process developed previously does not need to be repeated. Except for the subregional lineament map from Forsmark, lineaments display a clear power-law trend with a shape parameter k_t equal to 3 and a density term between 2 and 4.5. The apparent variation in scaling exponent, from the outcrop scale (k_t = 1.2) on one side, to the lineament scale (k_t = 2) on
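The power-law size-distribution fitting step can be sketched with a maximum-likelihood (Hill-type) estimator for the shape parameter k_t, here run on synthetic trace lengths rather than the SKB data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic trace lengths from a power law with shape k_t = 1.2 above l_min = 0.5 m,
# generated by inverse-transform sampling: L = l_min * U**(-1/k).
k_true, l_min = 1.2, 0.5
lengths = l_min * rng.uniform(size=5000) ** (-1.0 / k_true)

# Maximum-likelihood estimate of the shape parameter for a Pareto tail:
#   k_hat = n / sum(ln(l_i / l_min))
k_hat = lengths.size / np.log(lengths / l_min).sum()
```

In practice the raw trace data would first need the finite-size, topographical, and sampling corrections the abstract describes before such a fit is meaningful.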
New features and improved uncertainty analysis in the NEA nuclear data sensitivity tool (NDaST)
Dyrda, J.; Soppera, N.; Hill, I.; Bossant, M.; Gulliford, J.
2017-09-01
Following the release and initial testing period of the NEA's Nuclear Data Sensitivity Tool [1], new features have been designed and implemented in order to expand its uncertainty analysis capabilities. The aim is to provide a free online tool for integral benchmark testing that is both efficient and comprehensive, meeting the needs of the nuclear data and benchmark testing communities. New features include access to P1 sensitivities for the neutron scattering angular distribution [2] and constrained Chi sensitivities for the prompt fission neutron energy sampling. Both of these are compatible with covariance data accessed via the JANIS nuclear data software, enabling propagation of the resultant uncertainties in k_eff to a large series of integral experiment benchmarks. These capabilities are available using a number of different covariance libraries, e.g., ENDF/B, JEFF, JENDL and TENDL, allowing comparison of the broad range of results it is possible to obtain. The IRPhE database of reactor physics measurements is now also accessible within the tool, in addition to the criticality benchmarks from ICSBEP. Other improvements include the ability to determine and visualise the energy dependence of a given calculated result in order to better identify specific regions of importance or high uncertainty contribution. Sorting and statistical analysis of the selected benchmark suite is now also provided. Examples of the plots generated by the software are included to illustrate such capabilities. Finally, a number of analytical expressions, for example Maxwellian and Watt fission spectra, will be included. This will allow the analyst to determine the impact of varying such distributions within the data evaluation, either through adjustment of parameters within the expressions, or by comparison to a more general probability distribution fitted to measured data. The impact of such changes is verified through calculations which are compared to a `direct' measurement found by
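Propagating nuclear-data covariances through sensitivities to an uncertainty in k_eff follows the standard "sandwich rule", u² = SᵀCS. A minimal sketch with an invented three-group sensitivity vector and relative covariance matrix (not values from any evaluated library):

```python
import numpy as np

# Hypothetical sensitivity vector S: relative change in k_eff per relative change
# in a cross section, for three energy groups.
S = np.array([0.15, 0.30, 0.05])

# Hypothetical relative covariance matrix C of the nuclear data (fractional units):
C = np.array([[4.0e-4, 1.0e-4, 0.0],
              [1.0e-4, 9.0e-4, 2.0e-4],
              [0.0,    2.0e-4, 1.0e-3]])

# Sandwich rule: propagated relative variance and standard uncertainty of k_eff.
var_k = S @ C @ S
unc_k = np.sqrt(var_k)
```

NDaST performs this same contraction per nuclide and reaction over the benchmark suite; the sketch shows only the core linear-algebra step.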
Coquelin, L.; Le Brusquet, L.; Fischer, N.; Gensdarmes, F.; Motzkus, C.; Mace, T.; Fleury, G.
2018-05-01
A scanning mobility particle sizer (SMPS) is a high resolution nanoparticle sizing system that is widely used as the standard method to measure airborne particle size distributions (PSD) in the size range 1 nm–1 μm. This paper addresses the problem of assessing the uncertainty associated with the PSD when a differential mobility analyzer (DMA) operates in scanning mode. The sources of uncertainty are described and then modeled, either through experiments or through knowledge extracted from the literature. Special care is taken to model the physics and to account for competing theories. Indeed, it appears that the modeling errors resulting from approximations of the physics can largely affect the final estimate of this indirect measurement, especially for quantities that are not measured during day-to-day experiments. The Monte Carlo method is used to compute the uncertainty associated with the PSD. The method is tested against real data sets of monosize polystyrene latex (PSL) spheres with nominal diameters of 100 nm, 200 nm and 450 nm. The median diameters and associated standard uncertainties of the aerosol particles are estimated as 101.22 nm ± 0.18 nm, 204.39 nm ± 1.71 nm and 443.87 nm ± 1.52 nm with the new approach. Other statistical parameters, such as the mean diameter, the mode and the geometric mean and their associated standard uncertainties, are also computed. These results are then compared with the results obtained by the SMPS embedded software.
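The Monte Carlo step can be sketched as follows; the measurement model and the uncertainty magnitudes are simplified placeholders, not the paper's full SMPS inversion:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical mobility-diameter sample for a nominal 100 nm PSL aerosol (nm):
sample = rng.normal(100.0, 3.0, size=2000)

# Two illustrative uncertainty sources, modeled as multiplicative perturbations
# of the inverted diameters (placeholder magnitudes):
n_mc = 5000
flow = rng.normal(1.0, 0.002, size=n_mc)    # DMA sheath-flow calibration
charge = rng.normal(1.0, 0.001, size=n_mc)  # charge-fraction model error

# Monte Carlo: re-evaluate the statistic of interest under each perturbation draw.
medians = np.array([np.median(sample * f * c) for f, c in zip(flow, charge)])
d_median = medians.mean()               # median diameter estimate
u_median = medians.std(ddof=1)          # its standard uncertainty
```

The paper's approach is far richer (transfer functions, diffusion losses, competing physical theories), but the pattern is the same: propagate every uncertain input through the inversion and read the spread off the output statistic.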
Size-effect features on the magnetothermopower of bismuth nanowires
International Nuclear Information System (INIS)
Condrea, E.; Nicorici, A.
2011-01-01
Full text: In this work we studied the magnetic-field dependence of the thermopower (TEP) and resistance of glass-coated Bi wires with diameters (d) from 100 nm to 1.5 μm at temperatures below 80 K. The nanowires have anomalously large thermopower values (+100 μV K⁻¹) and relatively high effective resistivities, but their SdH oscillation frequencies remain those of bulk Bi. The TEP stays positive in longitudinal magnetic fields up to 15 T, where the surface scattering of charge carriers is negligible. Our analysis shows that the anomalous thermopower has a diffusion origin and is a consequence of the microstructure rather than the result of strong scattering of electrons by the wire walls. The field intensities at which the size-effect features appear on the magnetothermopower curves correspond to the value at which the diameter of the hole cyclotron orbit equals d. Size-effect features were observed only for the set of nanowires with d = 100-350 nm, where the diffusion TEP is dominant. The contribution of the phonon-drag effect was observed in a wire with a diameter larger than 400 nm and becomes dominant at a diameter of 1 μm. (authors)
Energy Technology Data Exchange (ETDEWEB)
Tobias, Benjamin John [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Palaniyappan, Sasikumar [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Gautier, Donald Cort [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Mendez, Jacob [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Burris-Mog, Trevor John [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Huang, Chengkun K. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Favalli, Andrea [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Hunter, James F. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Espy, Michelle E. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Schmidt, Derek William [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Nelson, Ronald Owen [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Sefkow, Adam [Univ. of Rochester, NY (United States); Shimada, Tsutomu [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Johnson, Randall Philip [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Fernandez, Juan Carlos [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-10-24
Images of the R2DTO resolution target were obtained during laser-driven-radiography experiments performed at the TRIDENT laser facility, and analysis of these images using the Bayesian Inference Engine (BIE) determines a most probable full-width half maximum (FWHM) spot size of 78 μm. However, significant uncertainty prevails due to variation in the measured detector blur. Propagating this uncertainty in detector blur through the forward model results in an interval of probabilistic ambiguity spanning approximately 35-195 μm when the laser energy impinges on a thick (1 mm) tantalum target. In other phases of the experiment, laser energy is deposited on a thin (~100 nm) aluminum target placed 250 μm ahead of the tantalum converter. When the energetic electron beam is generated in this manner, upstream from the bremsstrahlung converter, the inferred spot size shifts to a range of much larger values, approximately 270-600 μm FWHM. This report discusses methods applied to obtain these intervals as well as concepts necessary for interpreting the result within a context of probabilistic quantitative inference.
Uncertainty evaluation of reliability of shutdown system of a medium size fast breeder reactor
Energy Technology Data Exchange (ETDEWEB)
Zeliang, Chireuding; Singh, Om Pal, E-mail: singhop@iitk.ac.in; Munshi, Prabhat
2016-11-15
Highlights: • Uncertainty analysis of the reliability of the Shutdown System is carried out. • The Monte Carlo method of sampling is used. • The effects of various reliability improvement measures of the SDS are accounted for. - Abstract: In this paper, results are presented on the uncertainty evaluation of the reliability of the Shutdown System (SDS) of a Medium Size Fast Breeder Reactor (MSFBR). The reliability analysis builds on the results of Kumar et al. (2005). The failure rates of the SDS components are taken from the international literature, and it is assumed that they follow a log-normal distribution. The fault tree method is employed to propagate the uncertainty in failure rate from the component level to the shutdown-system level. The beta-factor model is used to account for different extents of diversity. The Monte Carlo sampling technique is used for the analysis. The results of the uncertainty analysis are presented in terms of the probability density function, cumulative distribution function, mean, variance, percentile values, confidence intervals, etc. It is observed that the spread in the probability distribution of the SDS failure rate is less than that of the SDS component failure rates, and ninety percent of the SDS failure-rate values fall below the target value. As generic values of the failure rates are used, a sensitivity analysis is performed with respect to the failure rate of the control and safety rods and to the beta factor. It is found that a large increase in the failure rate of the SDS rods does not propagate proportionately to the SDS system failure rate. The failure rate of the SDS is very sensitive to the beta factor for common cause failure between the two systems of the SDS. The results of the study provide insight into the propagation of uncertainty from the failure rates of SDS components to the failure rate of the shutdown system.
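A minimal sketch of Monte Carlo sampling of log-normal component reliabilities through a beta-factor common-cause model. The medians, error factors, and beta value are illustrative only, and failure-on-demand probabilities stand in for the failure rates used in the paper:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20_000

def lognormal(median, ef, size):
    """Sample a log-normal from its median and error factor (95th/50th percentile)."""
    sigma = np.log(ef) / 1.645
    return rng.lognormal(np.log(median), sigma, size)

# Hypothetical failure-on-demand probabilities for the two diverse shutdown systems:
q1 = lognormal(1e-3, 3.0, n)
q2 = lognormal(1e-3, 3.0, n)
beta = 0.05  # assumed common-cause fraction between the two systems

# Beta-factor model: common-cause failure of both systems, plus independent
# coincident failure of the two (1 - beta) independent parts.
q_sds = beta * np.minimum(q1, q2) + (1 - beta) ** 2 * q1 * q2

mean_q = q_sds.mean()
p90 = np.quantile(q_sds, 0.90)  # cf. the paper's percentile-based reporting
```

The sampled distribution can then be summarized exactly as the abstract describes: density, percentiles, and sensitivity to `beta`, which dominates the result because the common-cause term swamps the product of two small independent probabilities.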
Intrinsic position uncertainty impairs overt search performance.
Semizer, Yelda; Michel, Melchi M
2017-08-01
Uncertainty regarding the position of the search target is a fundamental component of visual search. However, due to perceptual limitations of the human visual system, this uncertainty can arise from intrinsic, as well as extrinsic, sources. The current study sought to characterize the role of intrinsic position uncertainty (IPU) in overt visual search and to determine whether it significantly limits human search performance. After completing a preliminary detection experiment to characterize sensitivity as a function of visual field position, observers completed a search task that required localizing a Gabor target within a field of synthetic luminance noise. The search experiment included two clutter conditions designed to modulate the effect of IPU across search displays of varying set size. In the Cluttered condition, the display was tiled uniformly with feature clutter to maximize the effects of IPU. In the Uncluttered condition, the clutter at irrelevant locations was removed to attenuate the effects of IPU. Finally, we derived an IPU-constrained ideal searcher model, limited by the IPU measured in human observers. Ideal searchers were simulated based on the detection sensitivity and fixation sequences measured for individual human observers. The IPU-constrained ideal searcher predicted performance trends similar to those exhibited by the human observers. In the Uncluttered condition, performance decreased steeply as a function of increasing set size. However, in the Cluttered condition, the effect of IPU dominated and performance was approximately constant as a function of set size. Our findings suggest that IPU substantially limits overt search performance, especially in crowded displays.
Haptic exploration of fingertip-sized geometric features using a multimodal tactile sensor
Ponce Wong, Ruben D.; Hellman, Randall B.; Santos, Veronica J.
2014-06-01
Haptic perception remains a grand challenge for artificial hands. Dexterous manipulators could be enhanced by "haptic intelligence" that enables identification of objects and their features via touch alone. Haptic perception of local shape would be useful when vision is obstructed or when proprioceptive feedback is inadequate, as observed in this study. In this work, a robot hand outfitted with a deformable, bladder-type, multimodal tactile sensor was used to replay four human-inspired haptic "exploratory procedures" on fingertip-sized geometric features. The geometric features varied by type (bump, pit), curvature (planar, conical, spherical), and footprint dimension (1.25 - 20 mm). Tactile signals generated by active fingertip motions were used to extract key parameters for use as inputs to supervised learning models. A support vector classifier estimated order of curvature while support vector regression models estimated footprint dimension once curvature had been estimated. A distal-proximal stroke (along the long axis of the finger) enabled estimation of order of curvature with an accuracy of 97%. Best-performing, curvature-specific, support vector regression models yielded R2 values of at least 0.95. While a radial-ulnar stroke (along the short axis of the finger) was most helpful for estimating feature type and size for planar features, a rolling motion was most helpful for conical and spherical features. The ability to haptically perceive local shape could be used to advance robot autonomy and provide haptic feedback to human teleoperators of devices ranging from bomb defusal robots to neuroprostheses.
Directory of Open Access Journals (Sweden)
Huixia Liu
2017-07-01
Full Text Available Multilayer metal composite sheets possess properties superior to those of monolithic metal sheets, and their formability also differs. In this research, the effect of feature size on the formability of multilayer metal composite sheets under microscale laser flexible forming was studied experimentally. Two-layer copper/nickel composite sheets were selected as the experimental materials. Five types of micro molds with different diameters were utilized. The formability of the materials was evaluated by forming depth, thickness thinning, surface quality, and micro-hardness distribution. The results showed that the formability of the two-layer copper/nickel composite sheets was strongly influenced by feature size. With increasing feature size, the effect of the layer stacking sequence on forming depth, thickness thinning ratio, and surface roughness became progressively larger. However, the normalized forming depth, thickness thinning ratio, surface roughness, and micro-hardness of the formed components under the same layer stacking sequence first increased and then decreased with increasing feature size. The deformation behavior of the copper/nickel composite sheets was determined by the external layer; the deformation extent was larger when the copper layer was set as the external layer.
Maes, J.H.R.; Fontanari, L.; Regolin, L.
2009-01-01
Rats were used in a spatial reorientation task to assess their ability to use geometric and non-geometric, featural, information. Experimental conditions differed in the size of the arena (small, medium, or large) and whether the food-baited corner was near or far from a visual feature. The main
Uncertainties in sealing a nuclear waste repository in partially saturated tuff
International Nuclear Information System (INIS)
Tillerson, J.R.; Fernandez, J.A.; Hinkebein, T.E.
1989-01-01
Sealing a nuclear waste repository in partially saturated tuff presents unique challenges to assuring performance of sealing components. Design and performance of components for sealing shafts, ramps, drifts, and exploratory boreholes depend on specific features of both the repository design and the site; of particular importance is the hydrologic environment in the unsaturated zone, including the role of fracture flow. Repository design features important to sealing of a repository include the size and location of shaft and ramp accesses, excavation methods, and the underground layout features such as grade (drainage direction) and location relative to geologic structure. Uncertainties about seal components relate to the postclosure environment for the seals, the emplacement methods, the material properties, and the potential performance of the components. An approach has been developed to reduce uncertainties and to increase confidence in seal performance; it includes gathering extensive site characterization data, establishing conservative design requirements, testing seal components in laboratory and field environments, and refining designs of both the seals and the repository before seals are installed. 9 refs., 5 figs., 2 tabs
Design features to achieve defence-in-depth in small and medium sized reactors
International Nuclear Information System (INIS)
Kuznetsov, Vladimir
2009-01-01
Broader incorporation of inherent and passive safety design features has become a 'trademark' of many advanced reactor concepts, including several evolutionary designs and nearly all innovative small and medium sized design concepts. Ensuring adequate defence-in-depth is important for reactors of smaller output because many of them are being designed to allow more proximity to the user, specifically, when non-electrical energy products are targeted. Based on the activities recently performed by the International Atomic Energy Agency, the paper provides a summary description of the design features used to achieve defence in depth in the eleven representative concepts of small and medium sized reactors. (author)
Jennings, Simon; Collingridge, Kate
2015-01-01
Existing estimates of fish and consumer biomass in the world’s oceans are disparate. This creates uncertainty about the roles of fish and other consumers in biogeochemical cycles and ecosystem processes, the extent of human and environmental impacts and fishery potential. We develop and use a size-based macroecological model to assess the effects of parameter uncertainty on predicted consumer biomass, production and distribution. Resulting uncertainty is large (e.g. median global biomass 4.9 billion tonnes for consumers weighing 1 g to 1000 kg; 50% uncertainty intervals of 2 to 10.4 billion tonnes; 90% uncertainty intervals of 0.3 to 26.1 billion tonnes) and driven primarily by uncertainty in trophic transfer efficiency and its relationship with predator-prey body mass ratios. Even the upper uncertainty intervals for global predictions of consumer biomass demonstrate the remarkable scarcity of marine consumers, with less than one part in 30 million by volume of the global oceans comprising tissue of macroscopic animals. Thus the apparently high densities of marine life seen in surface and coastal waters and frequently visited abundance hotspots will likely give many in society a false impression of the abundance of marine animals. Unexploited baseline biomass predictions from the simple macroecological model were used to calibrate a more complex size- and trait-based model to estimate fisheries yield and impacts. Yields are highly dependent on baseline biomass and fisheries selectivity. Predicted global sustainable fisheries yield increases ≈4 fold when smaller individuals (production estimates, which have yet to be achieved with complex models, and will therefore help to highlight priorities for future research and data collection. However, the focus on simple model structures and global processes means that non-phytoplankton primary production and several groups, structures and processes of ecological and conservation interest are not represented
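A toy Monte Carlo version of the parameter-uncertainty argument (not the paper's calibrated model): biomass reaching large consumers is primary production attenuated by the trophic transfer efficiency (TE) over the number of predator-prey steps, so uncertainty in TE and the predator-prey mass ratio (PPMR) dominates the spread. All input ranges below are placeholders.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 10_000

# Illustrative inputs (placeholder magnitudes, not the paper's values):
pp = 5.0e10                               # production entering the spectrum, t/yr
te = rng.uniform(0.05, 0.15, n)           # trophic transfer efficiency
ppmr = 10.0 ** rng.uniform(2.0, 4.0, n)   # predator-prey body mass ratio

# Trophic steps needed to span 1 g to 1000 kg (six orders of magnitude in mass):
steps = 6.0 / np.log10(ppmr)

# Standing-stock proxy: production attenuated by TE at each step (turnover ignored).
biomass = pp * te ** steps
lo_q, mid_q, hi_q = np.quantile(biomass, [0.05, 0.50, 0.95])
```

Even this stripped-down sketch reproduces the qualitative finding: because TE enters as an exponentiated factor whose exponent itself depends on PPMR, the uncertainty interval on predicted consumer biomass spans more than an order of magnitude.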
West, A. C.; Novakowski, K. S.
2005-12-01
beyond a threshold concentration within the specified time. Aquifers are simulated by drawing the random spacings and apertures from specified distributions. Predictions are made of capture zone size assuming various degrees of knowledge of these distributions, with the parameters of the horizontal fractures being estimated using simulated hydraulic tests and a maximum likelihood estimator. The uncertainty is evaluated by calculating the variance in the capture zone size estimated in multiple realizations. The results show that despite good strategies to estimate the parameters of the horizontal fractures the uncertainty in capture zone size is enormous, mostly due to the lack of available information on vertical fractures. Also, at realistic distances (less than ten kilometers) and using realistic transmissivity distributions for the horizontal fractures the uptake of solute from fractures into matrix cannot be relied upon to protect the production well from contamination.
Communication target object recognition for D2D connection with feature size limit
Ok, Jiheon; Kim, Soochang; Kim, Young-hoon; Lee, Chulhee
2015-03-01
Recently, a new concept of device-to-device (D2D) communication, called "point-and-link communication," has attracted great attention due to its intuitive and simple operation. This approach enables users to communicate with target devices without any pre-identification information, such as SSIDs or MAC addresses, by selecting the target image displayed on the user's own device. In this paper, we present an efficient object-matching algorithm that can be applied to look(point)-and-link communications for mobile services. Due to the limited channel bandwidth and low computational power of mobile terminals, the matching algorithm should satisfy low-complexity, low-memory, and real-time requirements. To meet these requirements, we propose fast and robust feature extraction that takes the descriptor size and processing time into account. The proposed algorithm utilizes an HSV color histogram, SIFT (Scale-Invariant Feature Transform) features, and object aspect ratios. To keep the descriptor size under 300 bytes, a limited number of SIFT key points were chosen as feature points and histograms were binarized while maintaining the required performance. Experimental results show the robustness and efficiency of the proposed algorithm.
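The descriptor budget described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the 32-bin per-channel histograms, the median-threshold binarization, and the precomputed keypoint descriptors are all assumptions.

```python
import numpy as np

def compact_descriptor(hsv_pixels, aspect_ratio, keypoint_descs, max_bytes=300):
    """Build a small binary descriptor: binarized HSV histogram + quantized
    aspect ratio + as many keypoint-descriptor bytes as the budget allows."""
    # 32-bin histogram per H, S, V channel -> 96 bins total
    hist = np.concatenate(
        [np.histogram(hsv_pixels[:, c], bins=32, range=(0, 256))[0]
         for c in range(3)])
    bits = (hist > np.median(hist)).astype(np.uint8)       # binarize: 96 bits
    packed = np.packbits(bits)                             # -> 12 bytes
    ar_byte = np.uint8(np.clip(aspect_ratio * 32, 0, 255)) # 1-byte aspect ratio
    budget = max_bytes - packed.size - 1                   # bytes left for keypoints
    kp = np.asarray(keypoint_descs, dtype=np.uint8).ravel()[:budget]
    return np.concatenate([packed, np.array([ar_byte]), kp]).astype(np.uint8)
```

Truncating the keypoint bytes to the remaining budget is what enforces the hard 300-byte limit regardless of how many SIFT-like keypoints were detected.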
National Oceanic and Atmospheric Administration, Department of Commerce — This dataset represents sediment size prediction uncertainty from a sediment spatial model developed for the New York offshore spatial planning area. The model also...
Fabrication of Pt nanowires with a diffraction-unlimited feature size by high-threshold lithography
International Nuclear Information System (INIS)
Li, Li; Zhang, Ziang; Yu, Miao; Song, Zhengxun; Weng, Zhankun; Wang, Zuobin; Li, Wenjun; Wang, Dapeng; Zhao, Le; Peng, Kuiqing
2015-01-01
Although the nanoscale world can already be observed at a diffraction-unlimited resolution using far-field optical microscopy, making the step from microscopy to lithography still requires a suitable photoresist material system. In this letter, we consider the threshold to be a region with a width characterized by the extreme feature size obtained using a Gaussian beam spot. By narrowing such a region through improvement of the threshold sensitization to intensity in a high-threshold material system, the minimal feature size becomes smaller. By using platinum as the negative photoresist, we demonstrate that high-threshold lithography can be used to fabricate nanowire arrays with a scalable resolution along the axial direction of the linewidth from the micro- to the nanoscale using a nanosecond-pulsed laser source with a wavelength λ0 = 1064 nm. The minimal feature size is only several nanometers (below λ0/100). Compared with conventional polymer resist lithography, the advantages of high-threshold lithography are sharper pinpoints of laser intensity triggering the threshold response and also higher robustness, allowing for large-area exposure by a less-expensive nanosecond-pulsed laser.
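The scaling behind a diffraction-unlimited feature size follows from intersecting a Gaussian intensity profile with a sharp material threshold: solving I0·exp(-2r²/w0²) = Ith for r gives a linewidth that shrinks without bound as the peak intensity approaches the threshold. A sketch under that standard beam model (the function name and numbers are illustrative):

```python
import math

def threshold_linewidth(peak_intensity, threshold, waist):
    """Width of the region where a Gaussian beam I(r) = I0*exp(-2 r^2 / w0^2)
    exceeds the material threshold Ith; zero at or below threshold."""
    if peak_intensity <= threshold:
        return 0.0
    # 2 * r_th, with r_th = w0 * sqrt(ln(I0/Ith) / 2)
    return waist * math.sqrt(2.0 * math.log(peak_intensity / threshold))
```

Because the linewidth depends only on the ratio I0/Ith, tuning the peak intensity just above threshold yields features far smaller than the beam waist, which is the mechanism the letter exploits.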
Mid-level perceptual features distinguish objects of different real-world sizes.
Long, Bria; Konkle, Talia; Cohen, Michael A; Alvarez, George A
2016-01-01
Understanding how perceptual and conceptual representations are connected is a fundamental goal of cognitive science. Here, we focus on a broad conceptual distinction that constrains how we interact with objects: real-world size. Although there appear to be clear perceptual correlates for basic-level categories (apples look like other apples, oranges look like other oranges), the perceptual correlates of broader categorical distinctions are largely unexplored, i.e., do small objects look like other small objects? Because there are many kinds of small objects (e.g., cups, keys), there may be no reliable perceptual features that distinguish them from big objects (e.g., cars, tables). Contrary to this intuition, we demonstrated that big and small objects have reliable perceptual differences that can be extracted by early stages of visual processing. In a series of visual search studies, participants found target objects faster when the distractor objects differed in real-world size. These results held when we broadly sampled big and small objects, when we controlled for low-level features and image statistics, and when we reduced objects to texforms--unrecognizable textures that loosely preserve an object's form. However, this effect was absent when we used more basic textures. These results demonstrate that big and small objects have reliably different mid-level perceptual features, and suggest that early perceptual information about broad-category membership may influence downstream object perception, recognition, and categorization processes. (c) 2015 APA, all rights reserved.
Background and Qualification of Uncertainty Methods
International Nuclear Information System (INIS)
D'Auria, F.; Petruzzi, A.
2008-01-01
The evaluation of uncertainty constitutes the necessary supplement to Best Estimate calculations performed to understand accident scenarios in water-cooled nuclear reactors. The need comes from the imperfection of computational tools on the one side and from the interest in using such tools to obtain more precise evaluations of safety margins on the other. The paper reviews the salient features of two independent approaches for estimating uncertainties associated with predictions of complex system codes. Namely, the propagation of code input error and the propagation of the calculation output error constitute the keywords for identifying the methods of current interest for industrial applications. With the developed methods, uncertainty bands can be derived (both upper and lower) for any desired quantity of the transient of interest. In the second case, the uncertainty method is coupled with the thermal-hydraulic code to obtain a code with the capability of internal assessment of uncertainty, whose features are discussed in more detail.
Visualizing Uncertainty of Point Phenomena by Redesigned Error Ellipses
Murphy, Christian E.
2018-05-01
Visualizing uncertainty remains one of the great challenges in modern cartography. There is no overarching strategy to display the nature of uncertainty, as an effective and efficient visualization depends, besides on the spatial data feature type, heavily on the type of uncertainty. This work presents a design strategy to visualize uncertainty connected to point features. The error ellipse, well known from mathematical statistics, is adapted to display the uncertainty of point information originating from spatial generalization. Modified designs of the error ellipse show the potential of quantitative and qualitative symbolization and simultaneous point-based uncertainty symbolization. The user can intuitively depict the centers of gravity, the major orientation of the point arrays as well as estimate the extents and possible spatial distributions of multiple point phenomena. The error ellipse represents uncertainty in an intuitive way, particularly suitable for laymen. Furthermore, it is shown how applicable an adapted design of the error ellipse is to display the uncertainty of point features originating from incomplete data. The suitability of the error ellipse to display the uncertainty of point information is demonstrated within two showcases: (1) the analysis of formations of association football players, and (2) uncertain positioning of events on maps for the media.
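The geometry of an error ellipse follows directly from the eigendecomposition of a 2x2 covariance matrix: the eigenvectors give the axis directions and the square roots of the eigenvalues give the semi-axis lengths. A minimal sketch (the names and the scale convention are assumptions, not the paper's code):

```python
import numpy as np

def error_ellipse(cov, scale=1.0):
    """Semi-axes (major first) and orientation, in radians, of the error
    ellipse of a 2x2 covariance matrix; `scale` sets the confidence level."""
    vals, vecs = np.linalg.eigh(np.asarray(cov, dtype=float))
    order = np.argsort(vals)[::-1]              # sort eigenvalues descending
    vals, vecs = vals[order], vecs[:, order]
    semi_axes = scale * np.sqrt(vals)           # ellipse semi-axis lengths
    angle = float(np.arctan2(vecs[1, 0], vecs[0, 0]))  # major-axis orientation
    return semi_axes, angle
```

With scale = 1 the ellipse is the 1-sigma contour; for a chosen confidence level one would scale by the corresponding chi-square quantile.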
International Nuclear Information System (INIS)
Fujimoto, Ikumatsu; Nishimura, Kunitoshi; Takatuji, Toshiyuki; Pyun, Young-Sik
2011-01-01
An autonomous method for calibrating the zero difference for the three-point method of surface straightness measurement is presented and discussed with respect to the relationship between the measurement uncertainty and the size of the disk gauge used for calibration. In this method, the disk gauge is used in two steps. In the first step, the disk gauge rotates a few revolutions and moves parallel to three displacement sensors built into a holder. In the second step, the geometrical parameters between the sensors and the disk gauge are acquired, and the zero differences are computed by our recently proposed algorithm. Finally, the uncertainty of the zero differences is analyzed and simulated numerically, and the relationship between the disk gauge radius and the measurement uncertainty is calculated. The use of a disk gauge of larger radius results in smaller uncertainty of straightness measurement
Feature-size dependent selective edge enhancement of x-ray images
International Nuclear Information System (INIS)
Herman, S.
1988-01-01
Morphological filters are nonlinear signal transformations that operate on a picture directly in the space domain. Such filters are based on the theory of mathematical morphology formulated previously. The filter presented here features a "mask" operator (called a "structuring element" in some of the literature) which is a function of the two spatial coordinates x and y. The two basic mathematical operations are called "masked erosion" and "masked dilation". In the case of masked erosion, the mask is passed over the input image in a raster pattern. At each position of the mask, the pixel values under the mask are multiplied by the mask pixel values. Then the output pixel value, located at the center position of the mask, is set equal to the minimum of the products of the mask and input values. Similarly, for masked dilation, the output pixel value is the maximum of the products of the input and the mask pixel values. The two basic processes of dilation and erosion can be used to construct the next level of operations: the "positive sieve" (also called "opening") and the "negative sieve" ("closing"). The positive sieve modifies the peaks in the image, whereas the negative sieve works on image valleys. The positive sieve is implemented by passing the output of the masked erosion step through the masked dilation function. The negative sieve reverses this procedure, using a dilation followed by an erosion. Each such sifting operator is characterized by a "hole size". It will be shown that the choice of hole size selects the range of pixel detail sizes which are to be enhanced. The shape of the mask governs the shape of the enhancement. Finally, positive sifting is used to enhance positive-going (peak) features, whereas negative sifting enhances the negative-going (valley) landmarks.
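The masked erosion/dilation and the positive sieve described above can be sketched directly. This is a naive, unoptimized illustration that assumes a multiplicative mask of odd dimensions and edge padding, neither of which the abstract specifies:

```python
import numpy as np

def masked_filter(image, mask, op):
    """Masked erosion (op=np.min) or dilation (op=np.max): at each raster
    position, the output pixel is the min/max of the products of the mask
    values and the pixel values under the mask."""
    mh, mw = mask.shape
    ph, pw = mh // 2, mw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)), mode="edge")
    out = np.empty_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            window = padded[i:i + mh, j:j + mw]
            out[i, j] = op(window * mask)
    return out

def positive_sieve(image, mask):
    """'Opening': masked erosion followed by masked dilation; flattens peaks
    narrower than the mask's 'hole size'."""
    return masked_filter(masked_filter(image, mask, np.min), mask, np.max)
```

Subtracting the sieved image from the original would then isolate the peak (or valley) detail in the selected size range for enhancement.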
Experimental uncertainty estimation and statistics for data having interval uncertainty.
Energy Technology Data Exchange (ETDEWEB)
Kreinovich, Vladik (Applied Biomathematics, Setauket, New York); Oberkampf, William Louis (Applied Biomathematics, Setauket, New York); Ginzburg, Lev (Applied Biomathematics, Setauket, New York); Ferson, Scott (Applied Biomathematics, Setauket, New York); Hajagos, Janos (Applied Biomathematics, Setauket, New York)
2007-05-01
This report addresses the characterization of measurements that include epistemic uncertainties in the form of intervals. It reviews the application of basic descriptive statistics to data sets which contain intervals rather than exclusively point estimates. It describes algorithms to compute various means, the median and other percentiles, variance, interquartile range, moments, confidence limits, and other important statistics and summarizes the computability of these statistics as a function of sample size and characteristics of the intervals in the data (degree of overlap, size and regularity of widths, etc.). It also reviews the prospects for analyzing such data sets with the methods of inferential statistics such as outlier detection and regressions. The report explores the tradeoff between measurement precision and sample size in statistical results that are sensitive to both. It also argues that an approach based on interval statistics could be a reasonable alternative to current standard methods for evaluating, expressing and propagating measurement uncertainties.
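For some of the statistics the report lists, interval data propagate straightforwardly: monotone statistics such as the mean and the median simply map endpoint-wise, as sketched below. (Variance and other statistics are harder and, as the report discusses, can be computationally expensive for overlapping intervals; this sketch covers only the easy cases.)

```python
from statistics import median

def interval_mean(intervals):
    """Mean of interval-valued data: if each datum is known only to lie in
    [lo, hi], the sample mean is known only to lie in [mean(lo), mean(hi)]."""
    n = len(intervals)
    los = [lo for lo, hi in intervals]
    his = [hi for lo, hi in intervals]
    return (sum(los) / n, sum(his) / n)

def interval_median(intervals):
    """The median, being monotone in each datum, also maps endpoint-wise."""
    return (median(lo for lo, hi in intervals),
            median(hi for lo, hi in intervals))
```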
Effect of uncertainties on probabilistic-based design capacity of hydrosystems
Tung, Yeou-Koung
2018-02-01
Hydrosystems engineering designs involve analysis of hydrometric data (e.g., rainfall, floods) and use of hydrologic/hydraulic models, all of which contribute various degrees of uncertainty to the design process. Uncertainties in hydrosystem designs can be generally categorized into aleatory and epistemic types. The former arises from the natural randomness of hydrologic processes whereas the latter are due to knowledge deficiency in model formulation and model parameter specification. This study shows that the presence of epistemic uncertainties induces uncertainty in determining the design capacity. Hence, the designer needs to quantify the uncertainty features of design capacity to determine the capacity with a stipulated performance reliability under the design condition. Using detention basin design as an example, the study illustrates a methodological framework by considering aleatory uncertainty from rainfall and epistemic uncertainties from the runoff coefficient, curve number, and sampling error in design rainfall magnitude. The effects of including different items of uncertainty and performance reliability on the design detention capacity are examined. A numerical example shows that the mean value of the design capacity of the detention basin increases with the design return period and this relation is found to be practically the same regardless of the uncertainty types considered. The standard deviation associated with the design capacity, when subject to epistemic uncertainty, increases with both design frequency and items of epistemic uncertainty involved. It is found that the epistemic uncertainty due to sampling error in rainfall quantiles should not be ignored. Even with a sample size of 80 (relatively large for a hydrologic application) the inclusion of sampling error in rainfall quantiles resulted in a standard deviation about 2.5 times higher than that considering only the uncertainty of the runoff coefficient and curve number. Furthermore, the
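The framework can be illustrated with a toy Monte Carlo: sample the aleatory and epistemic quantities jointly, propagate them through the capacity model, and read the design capacity off as the quantile matching the stipulated performance reliability. All distributions and numbers below are illustrative assumptions, not the paper's case study:

```python
import numpy as np

rng = np.random.default_rng(42)

def required_capacity(rainfall, runoff_coeff, area=10.0):
    """Toy rational-method-style storage demand (illustrative only)."""
    return runoff_coeff * rainfall * area

n = 20000
# Aleatory + sampling error: design rainfall quantile with estimation noise.
rain = rng.normal(100.0, 8.0, n)
# Epistemic: uncertain runoff coefficient.
runoff = rng.normal(0.60, 0.05, n)

capacity = required_capacity(rain, runoff)
mean_cap = capacity.mean()
design_cap = np.quantile(capacity, 0.90)   # 90% performance reliability
```

Adding further epistemic items (e.g., curve number uncertainty) widens the capacity distribution, which is exactly the effect the abstract reports: the mean is stable while the standard deviation, and hence the reliable design capacity, grows.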
Uncertainty and global climate change research
Energy Technology Data Exchange (ETDEWEB)
Tonn, B.E. [Oak Ridge National Lab., TN (United States); Weiher, R. [National Oceanic and Atmospheric Administration, Boulder, CO (United States)
1994-06-01
The Workshop on Uncertainty and Global Climate Change Research was held March 22-23, 1994, in Knoxville, Tennessee. This report summarizes the results and recommendations of the workshop. The purpose of the workshop was to examine in depth the concept of uncertainty. From an analytical point of view, uncertainty is a central feature of global climate science, economics and decision making. The magnitude and complexity of the uncertainty surrounding global climate change have made it quite difficult to answer even the most simple and important of questions: whether potentially costly action is required now to ameliorate adverse consequences of global climate change, or whether delay is warranted to gain better information to reduce uncertainties. A major conclusion of the workshop is that multidisciplinary integrated assessments using decision analytic techniques as a foundation are key to addressing global change policy concerns. First, uncertainty must be dealt with explicitly and rigorously since it is and will continue to be a key feature of analysis and recommendations on policy questions for years to come. Second, key policy questions and variables need to be explicitly identified and prioritized, and their uncertainty characterized to guide the entire scientific, modeling, and policy analysis process. Multidisciplinary integrated assessment techniques and value-of-information methodologies are best suited for this task. In terms of timeliness and relevance of developing and applying decision analytic techniques, the global change research and policy communities are moving rapidly toward integrated approaches to research design and policy analysis.
Parametric uncertainty in optical image modeling
Potzick, James; Marx, Egon; Davidson, Mark
2006-10-01
Optical photomask feature metrology and wafer exposure process simulation both rely on optical image modeling for accurate results. While it is fair to question the accuracies of the available models, model results also depend on several input parameters describing the object and imaging system. Errors in these parameter values can lead to significant errors in the modeled image. These parameters include wavelength, illumination and objective NAs, magnification, focus, etc. for the optical system, and topography, complex index of refraction n and k, etc. for the object. In this paper each input parameter is varied over a range about its nominal value and the corresponding images are simulated. Second-order parameter interactions are not explored. Using the scenario of the optical measurement of photomask features, these parametric sensitivities are quantified by calculating the apparent change of the measured linewidth for a small change in the relevant parameter. Then, using reasonable values for the estimated uncertainties of these parameters, the parametric linewidth uncertainties can be calculated and combined to give a lower limit to the linewidth measurement uncertainty for those parameter uncertainties.
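The combination step described above is first-order uncertainty propagation: each parametric linewidth uncertainty is the sensitivity |dL/dp_i| times the parameter uncertainty u(p_i), and the contributions are combined in quadrature. A sketch using central finite differences for the sensitivities (the function and parameter names are illustrative, not an optical image model):

```python
import math

def combined_uncertainty(model, params, uncertainties, step=1e-6):
    """First-order propagation: u_c^2 = sum_i (dL/dp_i * u_i)^2, with each
    sensitivity dL/dp_i estimated by a central finite difference."""
    total = 0.0
    for name, u in uncertainties.items():
        hi = dict(params); hi[name] += step
        lo = dict(params); lo[name] -= step
        sensitivity = (model(**hi) - model(**lo)) / (2.0 * step)
        total += (sensitivity * u) ** 2
    return math.sqrt(total)
```

This quadrature sum is exactly the "combined to give a lower limit" step: it excludes model error and second-order interactions, so it bounds the measurement uncertainty from below.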
Applied research in uncertainty modeling and analysis
Ayyub, Bilal
2005-01-01
Uncertainty has been a concern to engineers, managers, and scientists for many years. For a long time uncertainty has been considered synonymous with random, stochastic, statistic, or probabilistic. Since the early sixties views on uncertainty have become more heterogeneous. In the past forty years numerous tools that model uncertainty, above and beyond statistics, have been proposed by several engineers and scientists. The tool/method to model uncertainty in a specific context should really be chosen by considering the features of the phenomenon under consideration, not independent of what is known about the system and what causes uncertainty. In this fascinating overview of the field, the authors provide broad coverage of uncertainty analysis/modeling and its application. Applied Research in Uncertainty Modeling and Analysis presents the perspectives of various researchers and practitioners on uncertainty analysis and modeling outside their own fields and domain expertise. Rather than focusing explicitly on...
Spatial Autocorrelation and Uncertainty Associated with Remotely-Sensed Data
Directory of Open Access Journals (Sweden)
Daniel A. Griffith
2016-06-01
Full Text Available Virtually all remotely sensed data contain spatial autocorrelation, which impacts upon their statistical features of uncertainty through variance inflation and the compounding of duplicate information. Estimating the nature and degree of this spatial autocorrelation, which is usually positive and very strong, has been hindered by the computational intensity associated with the massive number of pixels in realistically-sized remotely-sensed images, a situation that more recently has changed. Recent advances in spatial statistical estimation theory support the extraction of information and the distilling of knowledge from remotely-sensed images in a way that accounts for latent spatial autocorrelation. This paper summarizes an effective methodological approach to achieve this end, illustrating results with a 2002 remotely-sensed image of the Florida Everglades and with simulation experiments. Specifically, uncertainty of the spatial autocorrelation parameter in a spatial autoregressive model is modeled with a beta-beta mixture approach and is further investigated with three different sampling strategies: coterminous sampling, random sub-region sampling, and increasing domain sub-regions. The results suggest that uncertainty associated with remotely-sensed data should be cast in consideration of spatial autocorrelation. It emphasizes that one remaining challenge is to better quantify the spatial variability of spatial autocorrelation estimates across geographic landscapes.
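As a concrete reference point for what is being estimated, Moran's I is the standard index of spatial autocorrelation. A minimal rook-adjacency implementation for a raster is sketched below; this is an illustration, not the paper's beta-beta mixture estimator:

```python
import numpy as np

def morans_i(grid):
    """Moran's I for a 2-D raster with rook (edge-sharing) neighbour weights:
    I = (n / W) * sum_ij w_ij z_i z_j / sum_i z_i^2, with z = x - mean(x)."""
    x = np.asarray(grid, dtype=float)
    z = x - x.mean()
    num = 0.0    # sum of z_i * z_j over neighbouring pairs (directed)
    w_sum = 0.0  # total weight W
    rows, cols = x.shape
    for i in range(rows):
        for j in range(cols):
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < rows and 0 <= nj < cols:
                    num += z[i, j] * z[ni, nj]
                    w_sum += 1.0
    return (x.size / w_sum) * num / (z ** 2).sum()
```

Smooth rasters (neighbouring pixels alike, as in most remotely-sensed scenes) give strongly positive I; an alternating checkerboard gives negative I.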
Sources of uncertainty in future changes in local precipitation
Energy Technology Data Exchange (ETDEWEB)
Rowell, David P. [Met Office Hadley Centre, Exeter (United Kingdom)
2012-10-15
This study considers the large uncertainty in projected changes in local precipitation. It aims to map, and begin to understand, the relative roles of uncertain modelling and natural variability, using 20-year mean data from four perturbed-physics or multi-model ensembles. The largest (280-member) ensemble illustrates a rich pattern in the varying contribution of modelling uncertainty, with similar features found using a CMIP3 ensemble (despite its limited sample size, which restricts its value in this context). The contribution of modelling uncertainty to the total uncertainty in local precipitation change is found to be highest in the deep tropics, particularly over South America, Africa, the east and central Pacific, and the Atlantic. In the moist maritime tropics, the highly uncertain modelling of sea-surface temperature changes is transmitted into a large modelling uncertainty in local rainfall changes. Over tropical land and summer mid-latitude continents (and to a lesser extent, the tropical oceans), uncertain modelling of atmospheric processes, land surface processes and the terrestrial carbon cycle all appear to play an additional substantial role in driving the uncertainty of local rainfall changes. In polar regions, inter-model variability of anomalous sea ice drives an uncertain precipitation response, particularly in winter. In all these regions, there is therefore the potential to reduce the uncertainty of local precipitation changes through targeted model improvements and observational constraints. In contrast, over much of the arid subtropical and mid-latitude oceans, over Australia, and over the Sahara in winter, internal atmospheric variability dominates the uncertainty in projected precipitation changes. Here, model improvements and observational constraints will have little impact on the uncertainty of time means shorter than at least 20 years. Last, a supplementary application of the metric developed here is that it can be interpreted as a measure
Energy Technology Data Exchange (ETDEWEB)
Garcia Alonso, S.; Perez Pastor, R. M.; Escolano Segoviano, O.; Garcia Frutos, F. J.
2007-07-20
An evaluation of the uncertainty associated with PAH determination in a contaminated soil is presented. The work focused on measuring the influence of grain size on concentration deviations and on providing a measure of confidence in the PAH results for the gasworks-contaminated soil. This study was performed within the framework of the project 'Assessment of natural remediation technologies for PAHs in contaminated soils' (Spanish Plan Nacional I+D+i, CTM 2004-05832-CO2-01). This paper is organized as follows: a brief introduction describes the main uncertainty contributions associated with chromatographic analysis. Afterwards, a statistical calculation was performed to measure each uncertainty component. Finally, a global uncertainty was calculated, and the influence of grain size and the distribution of compounds according to volatility were evaluated. (Author) 10 refs.
Uncertainty, joint uncertainty, and the quantum uncertainty principle
International Nuclear Information System (INIS)
Narasimhachar, Varun; Poostindouz, Alireza; Gour, Gilad
2016-01-01
Historically, the element of uncertainty in quantum mechanics has been expressed through mathematical identities called uncertainty relations, a great many of which continue to be discovered. These relations use diverse measures to quantify uncertainty (and joint uncertainty). In this paper we use operational information-theoretic principles to identify the common essence of all such measures, thereby defining measure-independent notions of uncertainty and joint uncertainty. We find that most existing entropic uncertainty relations use measures of joint uncertainty that yield themselves to a small class of operational interpretations. Our notion relaxes this restriction, revealing previously unexplored joint uncertainty measures. To illustrate the utility of our formalism, we derive an uncertainty relation based on one such new measure. We also use our formalism to gain insight into the conditions under which measure-independent uncertainty relations can be found. (paper)
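A concrete instance of the entropic uncertainty relations discussed above is the Maassen-Uffink bound, H(X) + H(Z) ≥ -log2 max_ij |⟨x_i|z_j⟩|². A quick numerical check for a qubit measured in the Z and X bases (an illustration of a standard relation, not the measure-independent notion introduced in the paper):

```python
import numpy as np

def shannon_bits(p):
    """Shannon entropy of a probability vector, in bits."""
    p = np.asarray(p, dtype=float)
    p = p[p > 1e-12]
    return float(-(p * np.log2(p)).sum())

state = np.array([1.0, 0.0])                        # qubit |0>
z_basis = np.eye(2)                                 # computational basis
x_basis = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)  # Hadamard basis

p_z = np.abs(z_basis.conj().T @ state) ** 2         # outcome distribution for Z
p_x = np.abs(x_basis.conj().T @ state) ** 2         # outcome distribution for X

# Maassen-Uffink: H(Z) + H(X) >= -log2 max_ij |<z_i|x_j>|^2
c = np.max(np.abs(z_basis.conj().T @ x_basis) ** 2)
bound = -np.log2(c)
```

For mutually unbiased qubit bases the bound is exactly 1 bit: a state with zero Z uncertainty, like |0⟩, must carry a full bit of X uncertainty.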
On Bayesian treatment of systematic uncertainties in confidence interval calculation
Tegenfeldt, Fredrik
2005-01-01
In high energy physics, a widely used method to treat systematic uncertainties in confidence interval calculations is based on combining a frequentist construction of confidence belts with a Bayesian treatment of systematic uncertainties. In this note we present a study of the coverage of this method for the standard Likelihood Ratio (aka Feldman & Cousins) construction for a Poisson process with known background and Gaussian or log-normal distributed uncertainties in the background or signal efficiency. For uncertainties in the signal efficiency of up to 40% we find over-coverage on the level of 2 to 4%, depending on the size of the uncertainties and the region in signal space. Uncertainties in the background generally have a smaller effect on the coverage. A considerable smoothing of the coverage curves is observed. A software package is presented which allows fast calculation of the confidence intervals for a variety of assumptions on the shape and size of systematic uncertainties for different nuisance paramete...
Dealing with exploration uncertainties
International Nuclear Information System (INIS)
Capen, E.
1992-01-01
Exploration for oil and gas should fulfill the most adventurous in their quest for excitement and surprise. This paper tries to cover that tall order. The authors will touch on the magnitude of the uncertainty (which is far greater than in most other businesses), the effects of not knowing target sizes very well, how to build uncertainty into analyses naturally, how to tie reserves and chance estimates to economics, and how to look at the portfolio effect of an exploration program. With no apologies, the authors will be using a different language for some readers - the language of uncertainty, which means probability and statistics. These tools allow one to combine largely subjective exploration information with the more analytical data from the engineering and economic side
Uncertainty in Population Estimates for Endangered Animals and Improving the Recovery Process
Directory of Open Access Journals (Sweden)
Janet L. Rachlow
2013-08-01
Full Text Available United States recovery plans contain biological information for a species listed under the Endangered Species Act and specify recovery criteria to provide a basis for species recovery. The objective of our study was to evaluate whether recovery plans provide uncertainty (e.g., variance) with estimates of population size. We reviewed all finalized recovery plans for listed terrestrial vertebrate species to record the following data: (1) whether a current population size was given, (2) whether a measure of uncertainty or variance was associated with current estimates of population size, and (3) whether population size was stipulated for recovery. We found that 59% of completed recovery plans specified a current population size, 14.5% specified a variance for the current population size estimate, and 43% specified population size as a recovery criterion. More recent recovery plans reported more estimates of current population size, uncertainty, and population size as a recovery criterion. Also, bird and mammal recovery plans reported more estimates of population size and uncertainty compared to reptiles and amphibians. We suggest calculating minimum detectable differences to improve confidence when delisting endangered animals, and we identify incentives for individuals to get involved in recovery planning to improve access to quantitative data.
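The suggested minimum-detectable-difference calculation can be sketched with the textbook two-sample normal approximation; the formula and the default significance/power values below are standard assumptions, not taken from the reviewed plans:

```python
from statistics import NormalDist

def minimum_detectable_difference(sigma, n, alpha=0.05, power=0.80):
    """Smallest change in mean population size detectable between two surveys
    of n units each (two-sided test, known standard deviation sigma):
    MDD = (z_{1-alpha/2} + z_{power}) * sigma * sqrt(2/n)."""
    z_alpha = NormalDist().inv_cdf(1.0 - alpha / 2.0)
    z_beta = NormalDist().inv_cdf(power)
    return (z_alpha + z_beta) * sigma * (2.0 / n) ** 0.5
```

Reporting the MDD alongside a population estimate makes explicit how large a decline (or recovery) the monitoring program could actually detect before a delisting decision.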
Dong, YiJie; Mao, MinJing; Zhan, WeiWei; Zhou, JianQiao; Zhou, Wei; Yao, JieJie; Hu, YunYun; Wang, Yan; Ye, TingJun
2017-11-09
Our goal was to assess the diagnostic efficacy of ultrasound (US)-guided fine-needle aspiration (FNA) of thyroid nodules according to size and US features. A retrospective correlation was made with 1745 whole thyroidectomy and hemithyroidectomy specimens with preoperative US-guided FNA results. All cases were divided into 5 groups according to nodule size (≤5, 5.1-10, 10.1-15, 15.1-20, and >20 mm). For target nodules, static images and cine clips of conventional US and color Doppler were obtained. Ultrasound images were reviewed and evaluated by two radiologists with at least 5 years of US working experience without knowledge of the pathology results, and agreement was then reached. The Bethesda category I rate was higher in nodules larger than 15 mm. Large nodules (>20 mm) with several US features tended to yield false-negative FNA results. © 2017 by the American Institute of Ultrasound in Medicine.
An algorithm to improve sampling efficiency for uncertainty propagation using sampling based method
International Nuclear Information System (INIS)
Campolina, Daniel; Lima, Paulo Rubens I.; Pereira, Claubia; Veloso, Maria Auxiliadora F.
2015-01-01
Sample size and computational uncertainty were varied in order to investigate the sampling efficiency and convergence of the sampling-based method for uncertainty propagation. The transport code MCNPX was used to simulate a LWR model and allow the mapping from uncertain inputs of the benchmark experiment to uncertain outputs. Random sampling efficiency was improved through the use of an algorithm for selecting distributions. Mean range, standard deviation range and skewness were verified in order to obtain a better representation of the uncertainty figures. A standard deviation of 5 pcm in the propagated uncertainties over 10 replicates of n samples was adopted as the convergence criterion for the method. An estimate of 75 pcm uncertainty on the reactor k_eff was accomplished by using a sample of size 93 and a computational uncertainty of 28 pcm to propagate the 1σ uncertainty of the burnable poison radius. For a fixed computational time, in order to reduce the variance of the propagated uncertainty, it was found, for the example under investigation, that it is preferable to double the sample size rather than to double the number of particles followed by the Monte Carlo process in the MCNPX code. (author)
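The sampling-based propagation itself is simple to sketch: draw the uncertain input from its distribution, evaluate the model once per sample, and take the spread of the outputs as the propagated uncertainty. The toy response below stands in for an MCNPX run; its coefficients are invented, and only the sample size of 93 echoes the abstract:

```python
import numpy as np

rng = np.random.default_rng(7)

def toy_keff(radius):
    """Stand-in for a transport-code run (the real mapping would be MCNPX):
    an assumed linear k_eff response to the burnable poison radius."""
    return 1.0 + 0.02 * (radius - 0.5)

n_samples = 93                                  # sample size, as in the paper
radius = rng.normal(0.5, 0.01, n_samples)       # 1-sigma input uncertainty
keff = toy_keff(radius)
propagated_sd = keff.std(ddof=1)                # sampled output uncertainty
propagated_pcm = propagated_sd * 1e5            # express in pcm (1e-5 of k_eff)
```

In the real setting each sample is itself a Monte Carlo run with its own statistical (computational) uncertainty, which is why the paper trades off sample size against particles per run for a fixed budget.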
The neurobiology of uncertainty: implications for statistical learning.
Hasson, Uri
2017-01-05
The capacity for assessing the degree of uncertainty in the environment relies on estimating statistics of temporally unfolding inputs. This, in turn, allows calibration of predictive and bottom-up processing, and signalling changes in temporally unfolding environmental features. In the last decade, several studies have examined how the brain codes for and responds to input uncertainty. Initial neurobiological experiments implicated frontoparietal and hippocampal systems, based largely on paradigms that manipulated distributional features of visual stimuli. However, later work in the auditory domain pointed to different systems, whose activation profiles have interesting implications for computational and neurobiological models of statistical learning (SL). This review begins by briefly recapping the historical development of ideas pertaining to the sensitivity to uncertainty in temporally unfolding inputs. It then discusses several issues at the interface of studies of uncertainty and SL. Next, it presents several current treatments of the neurobiology of uncertainty and reviews recent findings that point to principles that serve as important constraints on future neurobiological theories of uncertainty and, relatedly, SL. This review suggests it may be useful to establish closer links between neurobiological research on uncertainty and SL, considering particularly mechanisms sensitive to local and global structure in inputs, the degree of input uncertainty, the complexity of the system generating the input, learning mechanisms that operate on different temporal scales and the use of learnt information for online prediction. This article is part of the themed issue 'New frontiers for statistical learning in the cognitive sciences'. © 2016 The Author(s).
Visualizing Summary Statistics and Uncertainty
Potter, K.; Kniss, J.; Riesenfeld, R.; Johnson, C.R.
2010-01-01
The graphical depiction of uncertainty information is emerging as a problem of great importance. Scientific data sets are not considered complete without indications of error, accuracy, or levels of confidence. The visual portrayal of this information is a challenging task. This work takes inspiration from graphical data analysis to create visual representations that show not only the data value, but also important characteristics of the data including uncertainty. The canonical box plot is reexamined and a new hybrid summary plot is presented that incorporates a collection of descriptive statistics to highlight salient features of the data. Additionally, we present an extension of the summary plot to two dimensional distributions. Finally, a use-case of these new plots is presented, demonstrating their ability to present high-level overviews as well as detailed insight into the salient features of the underlying data distribution. © 2010 The Eurographics Association and Blackwell Publishing Ltd.
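The descriptive statistics that such a hybrid summary plot overlays on the canonical box plot can be computed directly; the function below is a minimal sketch (quartiles, Tukey whiskers, moment-based shape measures) and is not the authors' implementation.

```python
import numpy as np

def summary_stats(x):
    """Descriptive statistics of the kind a hybrid summary plot overlays on
    the canonical box plot: quartiles plus moment-based shape measures."""
    x = np.asarray(x, dtype=float)
    q1, med, q3 = np.percentile(x, [25, 50, 75])
    mean, std = x.mean(), x.std(ddof=1)
    skew = np.mean(((x - mean) / std) ** 3)            # sample skewness
    iqr = q3 - q1
    whisker_lo = x[x >= q1 - 1.5 * iqr].min()          # Tukey whiskers
    whisker_hi = x[x <= q3 + 1.5 * iqr].max()
    return {"q1": q1, "median": med, "q3": q3, "mean": mean,
            "std": std, "skewness": skew,
            "whiskers": (whisker_lo, whisker_hi)}

rng = np.random.default_rng(0)
stats = summary_stats(rng.lognormal(0.0, 0.5, 1000))   # right-skewed sample
print({k: np.round(v, 3) if np.isscalar(v) else v for k, v in stats.items()})
```

For a right-skewed sample the mean sits above the median and the skewness is positive, which is exactly the kind of distributional feature a plain box plot hides and the hybrid plot exposes.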
Uncertainties in Coastal Ocean Color Products: Impacts of Spatial Sampling
Pahlevan, Nima; Sarkar, Sudipta; Franz, Bryan A.
2016-01-01
With increasing demands for ocean color (OC) products with improved accuracy and well characterized, per-retrieval uncertainty budgets, it is vital to decompose overall estimated errors into their primary components. Amongst various contributing elements (e.g., instrument calibration, atmospheric correction, inversion algorithms) in the uncertainty of an OC observation, less attention has been paid to uncertainties associated with spatial sampling. In this paper, we simulate MODIS (aboard both Aqua and Terra) and VIIRS OC products using 30 m resolution OC products derived from the Operational Land Imager (OLI) aboard Landsat-8, to examine impacts of spatial sampling on both cross-sensor product intercomparisons and in-situ validations of R(sub rs) products in coastal waters. Various OLI OC products representing different productivity levels and in-water spatial features were scanned for one full orbital-repeat cycle of each ocean color satellite. While some view-angle dependent differences in simulated Aqua-MODIS and VIIRS were observed, the average uncertainties (absolute) in product intercomparisons (due to differences in spatial sampling) at regional scales are found to be 1.8%, 1.9%, 2.4%, 4.3%, 2.7%, 1.8%, and 4% for the R(sub rs)(443), R(sub rs)(482), R(sub rs)(561), R(sub rs)(655), Chla, K(sub d)(482), and b(sub bp)(655) products, respectively. It is also found that, depending on in-water spatial variability and the sensor's footprint size, the errors for an in-situ validation station in coastal areas can reach as high as +/- 18%. We conclude that a) expected biases induced by the spatial sampling in product intercomparisons are mitigated when products are averaged over at least 7 km × 7 km areas, b) VIIRS observations, with improved consistency in cross-track spatial sampling, yield more precise calibration/validation statistics than that of MODIS, and c) use of a single pixel centered on in-situ coastal stations provides an optimal sampling size for
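The mitigation of spatial-sampling differences by areal averaging can be illustrated with a toy block-averaging experiment; the simulated field, footprint sizes and grid offset below are illustrative assumptions, not the OLI/MODIS/VIIRS processing chain.

```python
import numpy as np

rng = np.random.default_rng(1)

def block_mean(field, k):
    """Average a 2-D field over k-by-k pixel blocks (footprint aggregation)."""
    h, w = (s - s % k for s in field.shape)
    f = field[:h, :w]
    return f.reshape(h // k, k, w // k, k).mean(axis=(1, 3))

# Hypothetical fine-resolution reflectance field with in-water variability.
fine = rng.gamma(shape=2.0, scale=0.005, size=(512, 512))

# Two sensors sampling the same scene, emulated here by block averages
# computed on slightly offset grids (a crude model of sampling differences).
sensor_a = block_mean(fine, 8)
sensor_b = block_mean(np.roll(fine, 3, axis=0), 8)

def pct_diff(a, b):
    """Mean absolute difference as a percentage of the mean signal."""
    return 100.0 * np.abs(a - b).mean() / a.mean()

raw = pct_diff(sensor_a, sensor_b)
agg = pct_diff(block_mean(sensor_a, 16), block_mean(sensor_b, 16))
print(f"pixel-scale difference: {raw:.2f}%   after aggregation: {agg:.2f}%")
```

Aggregating both products over larger areas before intercomparison shrinks the sampling-induced difference, mirroring the paper's recommendation to average over at least 7 km × 7 km regions.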
Optical Model and Cross Section Uncertainties
Energy Technology Data Exchange (ETDEWEB)
Herman,M.W.; Pigni, M.T.; Dietrich, F.S.; Oblozinsky, P.
2009-10-05
Distinct minima and maxima in the neutron total cross section uncertainties were observed in model calculations using a spherical optical potential. We found this oscillating structure to be a general feature of quantum mechanical wave scattering. Specifically, we analyzed neutron interaction with 56Fe from 1 keV up to 65 MeV, and investigated the physical origin of the minima. We discuss their potential importance for practical applications as well as the implications for the uncertainties in total and absorption cross sections.
International Nuclear Information System (INIS)
Hernandez-Solis, Augusto
2010-04-01
This work has two main objectives. The first is to enhance the validation process of the thermal-hydraulic features of the Westinghouse code POLCA-T. This is achieved by computing a quantitative validation limit based on statistical uncertainty analysis. This validation theory is applied to some of the benchmark cases of the following macroscopic BFBT exercises: 1) single- and two-phase bundle pressure drops, 2) steady-state cross-sectional averaged void fraction, 3) transient cross-sectional averaged void fraction and 4) steady-state critical power tests. Sensitivity analysis is also performed to identify the most important uncertain parameters for each exercise. The second objective is to show the clear advantages of using the quasi-random Latin Hypercube Sampling (LHS) strategy over simple random sampling (SRS). LHS allows a much better coverage of the input uncertainties than SRS because it densely stratifies across the range of each input probability distribution. The aim here is to compare both uncertainty analyses on the BWR assembly void axial profile prediction in steady state, and on the transient void fraction prediction at a certain axial level during a simulated re-circulation pump trip scenario. It is shown that the replicated void fraction mean (either in steady-state or transient conditions) has less variability when using LHS than SRS for the same number of calculations (i.e. the same input space sample size), even if the resulting void fraction axial profiles are non-monotonic. It is also shown that the void fraction uncertainty limits achieved with SRS by running 458 calculations (the sample size required to cover 95% of 8 uncertain input parameters with 95% confidence) are matched by LHS with only 100 calculations. These are thus clear indications of the advantages of using LHS. Finally, the present study contributes to a realistic analysis of nuclear reactors, in the sense that the uncertainties of
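The variance advantage of LHS over SRS for a replicated mean can be demonstrated with a small sketch; the hand-rolled Latin hypercube and the smooth stand-in response below are illustrative assumptions, not the POLCA-T model.

```python
import numpy as np

rng = np.random.default_rng(7)

def srs(n, d, rng):
    """Simple random sampling on the d-dimensional unit cube."""
    return rng.random((n, d))

def lhs(n, d, rng):
    """Latin hypercube: one sample in each of n equal-probability strata
    per dimension, with the strata independently permuted per dimension."""
    strata = rng.permuted(np.tile(np.arange(n), (d, 1)), axis=1).T
    return (strata + rng.random((n, d))) / n

def model(u):
    """Stand-in code response (e.g., a void fraction) as a smooth function
    of 8 uncertain inputs mapped to [0, 1]."""
    return u.sum(axis=1) + 0.5 * np.sin(2 * np.pi * u[:, 0])

def mean_spread(sampler, n=100, d=8, reps=200):
    """Spread of the replicated mean response across repeated samplings."""
    means = [model(sampler(n, d, rng)).mean() for _ in range(reps)]
    return np.std(means, ddof=1)

srs_spread = mean_spread(srs)
lhs_spread = mean_spread(lhs)
print(f"SRS spread of replicated mean: {srs_spread:.4f}")
print(f"LHS spread of replicated mean: {lhs_spread:.4f}")
```

Because LHS stratifies every input marginal, the replicated mean varies much less than under SRS at the same sample size, which is the effect the abstract reports for the replicated void fraction mean.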
Potential effects of organizational uncertainty on safety
Energy Technology Data Exchange (ETDEWEB)
Durbin, N.E. [MPD Consulting Group, Kirkland, WA (United States); Lekberg, A. [Swedish Nuclear Power Inspectorate, Stockholm (Sweden); Melber, B.D. [Melber Consulting, Seattle WA (United States)
2001-12-01
When organizations face significant change - reorganization, mergers, acquisitions, down sizing, plant closures or decommissioning - both the organizations and the workers in those organizations experience significant uncertainty about the future. This uncertainty affects the organization and the people working in the organization - adversely affecting morale, reducing concentration on safe operations, and resulting in the loss of key staff. Hence, organizations, particularly those using high risk technologies, which are facing significant change need to consider and plan for the effects of organizational uncertainty on safety - as well as planning for other consequences of change - technical, economic, emotional, and productivity related. This paper reviews some of what is known about the effects of uncertainty on organizations and individuals, discusses the potential consequences of uncertainty on organizational and individual behavior, and presents some of the implications for safety professionals.
Potential effects of organizational uncertainty on safety
International Nuclear Information System (INIS)
Durbin, N.E.; Lekberg, A.; Melber, B.D.
2001-12-01
When organizations face significant change - reorganization, mergers, acquisitions, down sizing, plant closures or decommissioning - both the organizations and the workers in those organizations experience significant uncertainty about the future. This uncertainty affects the organization and the people working in the organization - adversely affecting morale, reducing concentration on safe operations, and resulting in the loss of key staff. Hence, organizations, particularly those using high risk technologies, which are facing significant change need to consider and plan for the effects of organizational uncertainty on safety - as well as planning for other consequences of change - technical, economic, emotional, and productivity related. This paper reviews some of what is known about the effects of uncertainty on organizations and individuals, discusses the potential consequences of uncertainty on organizational and individual behavior, and presents some of the implications for safety professionals.
Huang, Hening
2018-01-01
This paper is the second (Part II) in a series of two papers (Part I and Part II). Part I has quantitatively discussed the fundamental limitations of the t-interval method for uncertainty estimation with a small number of measurements. This paper (Part II) reveals that the t-interval is an ‘exact’ answer to a wrong question; it is actually misused in uncertainty estimation. This paper proposes a redefinition of uncertainty, based on the classical theory of errors and the theory of point estimation, and a modification of the conventional approach to estimating measurement uncertainty. It also presents an asymptotic procedure for estimating the z-interval. The proposed modification is to replace the t-based uncertainty with an uncertainty estimator (mean- or median-unbiased). The uncertainty estimator method is an approximate answer to the right question to uncertainty estimation. The modified approach provides realistic estimates of uncertainty, regardless of whether the population standard deviation is known or unknown, or if the sample size is small or large. As an application example of the modified approach, this paper presents a resolution to the Du-Yang paradox (i.e. Paradox 2), one of the three paradoxes caused by the misuse of the t-interval in uncertainty estimation.
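The mean-unbiased uncertainty estimator mentioned above relies on the classical c4 bias-correction factor for the sample standard deviation; the sketch below demonstrates the correction by simulation and is only an illustration of that one ingredient, not the paper's full procedure.

```python
import math
import numpy as np

def c4(n):
    """Bias-correction factor for normal samples: E[s] = c4(n) * sigma."""
    return math.sqrt(2.0 / (n - 1)) * math.gamma(n / 2) / math.gamma((n - 1) / 2)

def mean_unbiased_sigma(x):
    """Mean-unbiased estimator of sigma from a small sample."""
    x = np.asarray(x, dtype=float)
    return x.std(ddof=1) / c4(len(x))

# Small-sample demonstration: c4 removes the downward bias of s.
rng = np.random.default_rng(3)
n, sigma = 5, 2.0
s_raw = np.mean([rng.normal(0, sigma, n).std(ddof=1) for _ in range(20000)])
s_cor = s_raw / c4(n)
print(f"E[s] ~ {s_raw:.3f} (biased low), corrected ~ {s_cor:.3f} (sigma = {sigma})")
```

For n = 5 the uncorrected s underestimates sigma by about 6% on average; dividing by c4(5) ≈ 0.940 recovers an unbiased estimate, which is the sense in which the proposed estimator replaces the t-based uncertainty.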
Bloo, M.; Haitjema, H.; Pril, W.O.
1999-01-01
An experimental study was carried out, in order to investigate the deformation and wear taking place on pyramidal silicon-nitride AFM tips. The study focuses on the contact mode scanning of silicon features of micrometre-size. First the deformation and the mechanisms of wear of the tip during
Prostate size and adverse pathologic features in men undergoing radical prostatectomy.
Hong, Sung Kyu; Poon, Bing Ying; Sjoberg, Daniel D; Scardino, Peter T; Eastham, James A
2014-07-01
To investigate the relationship between prostate volume measured from preoperative imaging and adverse pathologic features at the time of radical prostatectomy, and to evaluate the potential effect of clinical stage on such relationship. In 1756 men who underwent preoperative magnetic resonance imaging and radical prostatectomy from 2000 to 2010, we examined associations of magnetic resonance imaging-measured prostate volume with pathologic outcomes using univariate logistic regression and with postoperative biochemical recurrence using Cox proportional hazards models. We also analyzed the effects of clinical stage on the relationship between prostate volume and adverse pathologic features via interaction analyses. In univariate analyses, smaller prostate volume was significantly associated with high pathologic Gleason score (P < .05). The association between prostate volume and recurrence was significant in a multivariable analysis adjusting for postoperative variables (P=.031) but missed statistical significance in the preoperative model (P=.053). Addition of prostate volume did not change the C-indices (0.78 and 0.83) of either model. Although prostate size did not enhance the prediction of recurrence, it is associated with aggressiveness of prostate cancer. There is no evidence that this association differs depending on clinical stage. Prospective studies are warranted assessing the effect of the initial method of detection on the relationship between volume and outcome. Copyright © 2014 Elsevier Inc. All rights reserved.
International Nuclear Information System (INIS)
Lujano-Rojas, Juan M.; Dufo-López, Rodolfo; Bernal-Agustín, José L.
2012-01-01
Highlights: ► We propose a mathematical model for optimal sizing of small wind energy systems. ► No other previous work has considered all the aspects included in this paper. ► The model considers several parameters about batteries. ► Wind speed variability is considered by means of ARMA model. ► The results show how to minimize the expected energy that is not supplied. - Abstract: In this paper, a mathematical model for stochastic simulation and optimization of small wind energy systems is presented. This model is able to consider the operation of the charge controller, the coulombic efficiency during charge and discharge processes, the influence of temperature on the battery bank capacity, the wind speed variability, and load uncertainty. The joint effect of charge controller operation, ambient temperature, and coulombic efficiency is analyzed in a system installed in Zaragoza (Spain), concluding that if the analysis without considering these factors is carried out, the reliability level of the physical system could be lower than expected, and an increment of 25% in the battery bank capacity would be required to reach a reliability level of 90% in the analyzed case. Also, the effect of the wind speed variability and load uncertainty in the system reliability is analyzed. Finally, the uncertainty in the battery bank lifetime and its effect on the net present cost are discussed. The results showed that, considering uncertainty of 17.5% in the battery bank lifetime calculated using the Ah throughput model, about 12% of uncertainty in the net present cost is expected. The model presented in this research could be a useful stochastic simulation and optimization tool that allows the consideration of important uncertainty factors in techno-economic analysis.
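The role of wind-speed autocorrelation and battery capacity in the unserved energy can be sketched with a toy simulation; the AR(1) series (a simple special case of an ARMA model), turbine curve, coulombic efficiency and load below are all illustrative assumptions, not the Zaragoza system's parameters.

```python
import numpy as np

rng = np.random.default_rng(11)

def ar1_wind(hours, mean=6.0, phi=0.8, sigma=1.5, rng=rng):
    """AR(1) wind-speed series (m/s), a simple special case of the ARMA
    models used to capture wind-speed autocorrelation."""
    v = np.empty(hours)
    v[0] = mean
    for t in range(1, hours):
        v[t] = mean + phi * (v[t - 1] - mean) + rng.normal(0, sigma)
    return np.clip(v, 0, None)

def unmet_energy(v, load=0.4, capacity=3.0, eta=0.85):
    """Hourly battery balance: charge with coulombic efficiency eta and
    accumulate energy not supplied (kWh) when the bank runs empty."""
    power = np.clip(0.02 * v**3, 0, 1.0)      # crude turbine curve, kW
    soc, ens = capacity / 2, 0.0
    for p in power:
        net = p - load
        if net >= 0:
            soc = min(capacity, soc + eta * net)
        else:
            draw = min(soc, -net)
            soc -= draw
            ens += (-net) - draw
    return ens

v = ar1_wind(24 * 365)
results = {cap: unmet_energy(v, capacity=cap) for cap in (2.0, 3.0, 4.0)}
for cap, ens in results.items():
    print(f"capacity {cap:.0f} kWh -> unmet energy {ens:.1f} kWh/yr")
```

Persistent low-wind runs produced by the autocorrelated series are what deplete the bank, so ignoring wind-speed variability (or the efficiency and temperature factors the paper models) understates the capacity needed for a target reliability.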
Ma, Meng; He, Zhoukun; Li, Yuhan; Chen, Feng; Wang, Ke; Zhang, Qing; Deng, Hua; Fu, Qiang
2012-12-01
Thin films of polystyrene (PS)/poly(ε-caprolactone) (PCL) blends were prepared by spin-coating and characterized by tapping-mode atomic force microscopy (AFM). Effects of the relative concentration of PS in the polymer solution on the surface phase separation and dewetting feature size of the blend films were systematically studied. Due to the coupling of phase separation, dewetting, and crystallization of the blend films with the evaporation of solvent during spin-coating, different sizes of PS islands decorated with various PCL crystal structures, including spherulite-like, flat-on individual lamellae, and flat-on dendritic crystals, were obtained in the blend films by changing the film composition. The average distance between PS islands was shown to increase with the relative concentration of PS in the casting solution. For a given ratio of PS/PCL, the feature size of PS appeared to increase linearly with the square of the PS concentration, while the PCL concentration only determined the crystal morphology of the blend films with no influence on the upper PS domain features. This is explained in terms of vertical phase separation and spinodal dewetting of the PS-rich layer from the underlying PCL-rich layer, leading the upper PS dewetting process and the underlying PCL crystallization process to be mutually independent. Copyright © 2012 Elsevier Inc. All rights reserved.
Uncertainties in segmentation and their visualisation
Lucieer, Arko
2004-01-01
This thesis focuses on uncertainties in remotely sensed image segmentation and their visualisation. The first part describes a visualisation tool, allowing interaction with the parameters of a fuzzy classification algorithm by visually adjusting fuzzy membership functions of classes in a 3D feature
Structural and parametric uncertainty quantification in cloud microphysics parameterization schemes
van Lier-Walqui, M.; Morrison, H.; Kumjian, M. R.; Prat, O. P.; Martinkus, C.
2017-12-01
Atmospheric model parameterization schemes employ approximations to represent the effects of unresolved processes. These approximations are a source of error in forecasts, caused in part by considerable uncertainty about the optimal value of parameters within each scheme -- parametric uncertainty. Furthermore, there is uncertainty regarding the best choice of the overarching structure of the parameterization scheme -- structural uncertainty. Parameter estimation can constrain the first, but may struggle with the second because structural choices are typically discrete. We address this problem in the context of cloud microphysics parameterization schemes by creating a flexible framework wherein structural and parametric uncertainties can be simultaneously constrained. Our scheme makes no assumptions about drop size distribution shape or the functional form of parametrized process rate terms. Instead, these uncertainties are constrained by observations using a Markov Chain Monte Carlo sampler within a Bayesian inference framework. Our scheme, the Bayesian Observationally-constrained Statistical-physical Scheme (BOSS), has flexibility to predict various sets of prognostic drop size distribution moments as well as varying complexity of process rate formulations. We compare idealized probabilistic forecasts from versions of BOSS with varying levels of structural complexity. This work has applications in ensemble forecasts with model physics uncertainty, data assimilation, and cloud microphysics process studies.
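The Bayesian constraint of a parametrized process rate by MCMC can be sketched with a random-walk Metropolis sampler; the power-law rate, synthetic observations and flat priors below are illustrative assumptions, far simpler than BOSS.

```python
import numpy as np

rng = np.random.default_rng(5)

def rate(M, a, b):
    """Hypothetical parametrized process rate ~ a * M**b, where M is a
    drop-size-distribution moment and (a, b) are uncertain parameters."""
    return a * M**b

# Synthetic "observations" of the rate, used to constrain (a, b).
M_obs = np.linspace(0.5, 2.0, 25)
y_obs = rate(M_obs, 1.3, 1.8) + rng.normal(0, 0.05, M_obs.size)

def log_post(theta, sigma=0.05):
    """Gaussian likelihood with flat priors on a bounded box."""
    a, b = theta
    if a <= 0 or not (0 < b < 4):
        return -np.inf
    r = y_obs - rate(M_obs, a, b)
    return -0.5 * np.sum((r / sigma) ** 2)

# Random-walk Metropolis sampler.
theta = np.array([1.0, 1.0])
lp = log_post(theta)
chain = []
for _ in range(20000):
    prop = theta + rng.normal(0, 0.02, 2)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta)
chain = np.array(chain)[5000:]            # discard burn-in
print("posterior mean a, b:", chain.mean(axis=0).round(3))
```

The posterior chain concentrates near the generating parameters, illustrating how observations can constrain process-rate parameters without fixing the functional complexity in advance.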
International Nuclear Information System (INIS)
Ying, Michael; Ahuja, Anil; Brook, Fiona; Metreweli, Constantine
2001-01-01
AIM: This study was undertaken to investigate variations in the vascularity and grey-scale sonographic features of cervical lymph nodes with their size. MATERIALS AND METHODS: High resolution grey-scale sonography and power Doppler sonography were performed on 1133 cervical nodes in 109 volunteers who had a sonographic examination of the neck. Standardized parameters were used in power Doppler sonography. RESULTS: About 90% of lymph nodes with a maximum transverse diameter greater than 5 mm showed vascularity and an echogenic hilus. Smaller nodes were less likely to show vascularity and an echogenic hilus. As the size of the lymph nodes increased, the intranodal blood flow velocity increased significantly (P < 0.05). CONCLUSIONS: The findings provide a baseline for grey-scale and power Doppler sonography of normal cervical lymph nodes. Sonologists will find varying vascularity and grey-scale appearances when encountering nodes of different sizes. Ying, M. et al. (2001)
Uncertainty in visual processes predicts geometrical optical illusions.
Fermüller, Cornelia; Malm, Henrik
2004-03-01
It is proposed in this paper that many geometrical optical illusions, as well as illusory patterns due to motion signals in line drawings, are due to the statistics of visual computations. The interpretation of image patterns is preceded by a step where image features such as lines, intersections of lines, or local image movement must be derived. However, there are many sources of noise or uncertainty in the formation and processing of images, and they cause problems in the estimation of these features; in particular, they cause bias. As a result, the locations of features are perceived erroneously and the appearance of the patterns is altered. The bias occurs with any visual processing of line features; under average conditions it is not large enough to be noticeable, but illusory patterns are such that the bias is highly pronounced. Thus, the broader message of this paper is that there is a general uncertainty principle which governs the workings of vision systems, and optical illusions are an artifact of this principle.
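The claim that noise in feature formation biases estimated feature locations can be made concrete with the classic errors-in-variables effect: noise in the measured coordinates systematically attenuates a fitted line's slope. The setup below is an illustrative toy, not the authors' model of visual computation.

```python
import numpy as np

rng = np.random.default_rng(9)

def fitted_slope(noise_x, n=500):
    """Fit y = m*x + c by ordinary least squares to points from the true
    line y = x, with measurement noise in BOTH coordinates; noise in x
    biases the slope estimate low (attenuation)."""
    x_true = rng.uniform(-1, 1, n)
    x = x_true + rng.normal(0, noise_x, n)     # noise in the x coordinate
    y = x_true + rng.normal(0, 0.05, n)        # noise in the y coordinate
    return np.polyfit(x, y, 1)[0]

noise_levels = (0.0, 0.2, 0.4)
slopes = [np.mean([fitted_slope(nx) for _ in range(200)]) for nx in noise_levels]
for nx, s in zip(noise_levels, slopes):
    print(f"x-noise sigma {nx}: mean fitted slope {s:.3f}")
```

The bias grows with the noise level and does not average away over repeated trials, mirroring the paper's point that under ordinary conditions the bias is small but systematic, and that suitably constructed patterns can make it perceptually pronounced.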
Application of uncertainty analysis in conceptual fusion reactor design
International Nuclear Information System (INIS)
Wu, T.; Maynard, C.W.
1979-01-01
The theories of sensitivity and uncertainty analysis are described and applied to a new conceptual tokamak fusion reactor design--NUWMAK. The responses investigated in this study include the tritium breeding ratio, first wall Ti dpa and gas productions, nuclear heating in the blanket, energy leakage to the magnet, and the dpa rate in the superconducting magnet aluminum stabilizer. The sensitivities and uncertainties of these responses are calculated. The cost/benefit feature of proposed integral measurements is also studied through the uncertainty reductions of these responses.
Managing demand uncertainty: probabilistic selling versus inventory substitution
Zhang, Y.; Hua, Guowei; Wang, Shouyang; Zhang, Juliang; Fernández Alarcón, Vicenç
2018-01-01
Demand variability is prevailing in the current rapidly changing business environment, which makes it difficult for a retailer that sells multiple substitutable products to determine the optimal inventory. To combat demand uncertainty, both strategies of inventory substitution and probabilistic selling can be used. Although the two strategies differ in operation, we believe that they share a common feature in combating demand uncertainty by encouraging some customers to give up some specific ...
Evaluation of advanced coal gasification combined-cycle systems under uncertainty
International Nuclear Information System (INIS)
Frey, H.C.; Rubin, E.S.
1992-01-01
Advanced integrated gasification combined cycle (IGCC) systems have not been commercially demonstrated, and uncertainties remain regarding their commercial-scale performance and cost. Therefore, a probabilistic evaluation method has been developed and applied to explicitly consider these uncertainties. The insights afforded by this method are illustrated for an IGCC design featuring a fixed-bed gasifier and a hot gas cleanup system. Detailed case studies are conducted to characterize uncertainties in key measures of process performance and cost, evaluate design trade-offs under uncertainty, identify research priorities, evaluate the potential benefits of additional research, compare results for different uncertainty assumptions, and compare the advanced IGCC system to a conventional system under uncertainty. The implications of probabilistic results for research planning and technology selection are discussed in this paper
Uncertainty and complementarity in axiomatic quantum mechanics
International Nuclear Information System (INIS)
Lahti, P.J.
1980-01-01
An investigation of the uncertainty principle and the complementarity principle is carried out. The physical content of these principles and their representation in the conventional Hilbert space formulation of quantum mechanics form a natural starting point. Thereafter, a more general axiomatic framework for quantum mechanics is presented, namely, a probability function formulation of the theory. Two extra axioms are stated, reflecting the ideas of the uncertainty principle and the complementarity principle, respectively. The quantal features of these axioms are explicated. (author)
Position-momentum uncertainty relations in the presence of quantum memory
DEFF Research Database (Denmark)
Furrer, Fabian; Berta, Mario; Tomamichel, Marco
2014-01-01
A prominent formulation of the uncertainty principle identifies the fundamental quantum feature that no particle may be prepared with certain outcomes for both position and momentum measurements. Often the statistical uncertainties are thereby measured in terms of entropies providing a clear oper....... As an illustration, we evaluate the uncertainty relations for position and momentum measurements, which is operationally significant in that it implies security of a quantum key distribution scheme based on homodyne detection of squeezed Gaussian states....
Design features of HTMR-hybrid toroidal magnet tokamak reactor
International Nuclear Information System (INIS)
Rosatelli, F.; Avanzini, P.G.; Derchi, D.; Magnasco, M.; Grattarola, M.; Peluffo, M.; Raia, G.; Brunelli, B.; Zampaglione, V.
1984-01-01
The HTMR (Hybrid Toroidal Magnet Tokamak Reactor) conceptual design is aimed at demonstrating the feasibility of a Tokamak reactor which could fulfil the scientific and technological objectives expected from next generation devices with size and costs as small as possible. A hybrid toroidal field magnet, made up of copper and superconducting coils, seems to be a promising solution, allowing considerable flexibility in machine performance, so as to gain useful margins against the uncertainties in confinement time scaling laws and in beta and plasma density limits. The optimization procedure for the hybrid magnet configuration, the main design features of HTMR and the preliminary mechanical calculations of the superconducting toroidal coils are described. (author)
Perseveration induces dissociative uncertainty in obsessive-compulsive disorder.
Giele, Catharina L; van den Hout, Marcel A; Engelhard, Iris M; Dek, Eliane C P; Toffolo, Marieke B J; Cath, Danielle C
2016-09-01
Obsessive compulsive (OC)-like perseveration paradoxically increases feelings of uncertainty. We studied whether the underlying mechanism between perseveration and uncertainty is a reduced accessibility of meaning ('semantic satiation'). OCD patients (n = 24) and matched non-clinical controls (n = 24) repeated words 2 (non-perseveration) or 20 times (perseveration). They decided whether this word was related to another target word. Speed of relatedness judgments and feelings of dissociative uncertainty were measured. The effects of real-life perseveration on dissociative uncertainty were tested in a smaller subsample of the OCD group (n = 9). Speed of relatedness judgments was not affected by perseveration. However, both groups reported more dissociative uncertainty after perseveration compared to non-perseveration, and this effect was stronger in OCD patients. Patients also reported more dissociative uncertainty after 'clinical' perseveration compared to non-perseveration. Both parts of this study are limited by some methodological issues and a small sample size. Although the mechanism behind 'perseveration → uncertainty' is still unclear, the results suggest that the effects of perseveration are counterproductive. Copyright © 2016 Elsevier Ltd. All rights reserved.
Uncertainty and inference in the world of paleoecological data
McLachlan, J. S.; Dawson, A.; Dietze, M.; Finley, M.; Hooten, M.; Itter, M.; Jackson, S. T.; Marlon, J. R.; Raiho, A.; Tipton, J.; Williams, J.
2017-12-01
Proxy data in paleoecology and paleoclimatology share a common set of biases and uncertainties: spatiotemporal error associated with the taphonomic processes of deposition, preservation, and dating; calibration error between proxy data and the ecosystem states of interest; and error in the interpolation of calibrated estimates across space and time. Researchers often account for this daunting suite of challenges by applying qualitative expert judgment: inferring the past states of ecosystems and assessing the level of uncertainty in those states subjectively. The effectiveness of this approach can be seen by the extent to which future observations confirm previous assertions. Hierarchical Bayesian (HB) statistical approaches allow an alternative approach to accounting for multiple uncertainties in paleo data. HB estimates of ecosystem state formally account for each of the common uncertainties listed above. HB approaches can readily incorporate additional data, and data of different types, into estimates of ecosystem state. And HB estimates of ecosystem state, with associated uncertainty, can be used to constrain forecasts of ecosystem dynamics based on mechanistic ecosystem models using data assimilation. Decisions about how to structure an HB model are also subjective, which creates a parallel framework for deciding how to interpret data from the deep past. Our group, the Paleoecological Observatory Network (PalEON), has applied hierarchical Bayesian statistics to formally account for uncertainties in proxy based estimates of past climate, fire, primary productivity, biomass, and vegetation composition. Our estimates often reveal new patterns of past ecosystem change, which is an unambiguously good thing, but we also often estimate a level of uncertainty that is uncomfortably high for many researchers. High levels of uncertainty are due to several features of the HB approach: spatiotemporal smoothing, the formal aggregation of multiple types of uncertainty, and a
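The formal aggregation of per-site proxy error with between-site spread can be sketched with the simplest hierarchical model, a normal-normal partial-pooling update; the numbers below are illustrative, not PalEON's actual model.

```python
import numpy as np

rng = np.random.default_rng(17)

# Hypothetical proxy-based biomass estimates y_i at several sites, each with
# its own calibration/dating error sd_i; a normal-normal hierarchical model
# partially pools them toward a regional mean.
regional_mean, tau = 100.0, 15.0               # between-site spread
sd = np.array([5.0, 20.0, 10.0, 30.0, 8.0])    # per-site proxy error
truth = rng.normal(regional_mean, tau, sd.size)
y = truth + rng.normal(0, sd)

# Empirical-Bayes estimate of the regional mean (precision-weighted),
w = 1.0 / (sd**2 + tau**2)
mu_hat = np.sum(w * y) / np.sum(w)

# then shrink each site toward it; shrinkage grows with proxy error.
b = sd**2 / (sd**2 + tau**2)                   # shrinkage factor
post_mean = (1 - b) * y + b * mu_hat
post_sd = np.sqrt((1 - b) * sd**2)             # formal posterior sd

for yi, s, pm, ps in zip(y, sd, post_mean, post_sd):
    print(f"obs {yi:6.1f} (sd {s:4.1f}) -> posterior {pm:6.1f} +/- {ps:4.1f}")
```

Noisier sites are pulled harder toward the regional mean and every posterior interval formally carries both uncertainty sources, which is why HB estimates can look both smoother and more uncertain than subjective expert reconstructions.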
DEFF Research Database (Denmark)
Callesen, Ingeborg; Keck, Hannes; Andersen, Thorbjørn Joest
2018-01-01
Purpose: Methods for particle size distribution (PSD) determination by laser diffraction are not standardized and differ between disciplines and sectors. The effect of H2O2 pretreatment before a sonication treatment in laser diffraction analysis of soils and marine sediments was examined on soils with less than 1% C and some marine sediments. Materials and methods: The method uncertainty for particle size analysis by the laser diffraction method, using or not using H2O2 pretreatment followed by 2 min ultrasound and 1-mm sieving, was determined for two soil samples and two aquatic sediments... The effect of H2O2 pretreatment on the PSD was small and not significant. The standard deviation (std) in particle size fractions increased with particle size. PSDs and std for some samples were presented for future reference. Similar to other studies, the content of clay and silt (by sieving/hydrometer, SHM) was lower...
Improved sample size determination for attributes and variables sampling
International Nuclear Information System (INIS)
Stirpe, D.; Picard, R.R.
1985-01-01
Earlier INMM papers have addressed the attributes/variables problem and, under conservative/limiting approximations, have reported analytical solutions for the attributes and variables sample sizes. Through computer simulation of this problem, we have calculated attributes and variables sample sizes as a function of falsification, measurement uncertainties, and required detection probability without using approximations. Using realistic assumptions for the uncertainty parameters of measurement, the simulation results support two conclusions: (1) the previously used conservative approximations can be expensive because they lead to larger sample sizes than needed; and (2) the optimal verification strategy, as well as the falsification strategy, is highly dependent on the underlying uncertainty parameters of the measurement instruments. 1 ref., 3 figs
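The simulation-based calculation of detection probability versus sample size can be sketched as follows; the population size, falsification bias, measurement noise and alarm threshold are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(13)

def detection_probability(n_sampled, n_items=500, n_falsified=20,
                          sigma_meas=0.5, threshold=1.5, reps=5000):
    """Attributes-style verification by simulation: sample n items without
    replacement, flag any measurement beyond `threshold`, and estimate the
    probability of raising at least one alarm when 20 items carry a
    falsification bias of 2.0."""
    bias = np.zeros(n_items)
    bias[:n_falsified] = 2.0                   # falsified items
    detected = 0
    for _ in range(reps):
        idx = rng.choice(n_items, n_sampled, replace=False)
        meas = bias[idx] + rng.normal(0, sigma_meas, n_sampled)
        if np.any(np.abs(meas) > threshold):
            detected += 1
    return detected / reps

probs = {n: detection_probability(n) for n in (10, 30, 60)}
for n, p in probs.items():
    print(f"sample size {n:3d}: detection probability {p:.3f}")
```

Inverting this curve (find the smallest n reaching a required detection probability) gives the simulation-based sample size, and changing `sigma_meas` shows how strongly the answer depends on the instrument uncertainty parameters, as the abstract concludes.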
Binder, Andrew R; Hillback, Elliott D; Brossard, Dominique
2016-04-01
Research indicates that uncertainty in science news stories affects public assessment of risk and uncertainty. However, the form in which uncertainty is presented may also affect people's risk and uncertainty assessments. For example, a news story that features an expert discussing both what is known and what is unknown about a topic may convey a different form of scientific uncertainty than a story that features two experts who hold conflicting opinions about the status of scientific knowledge of the topic, even when both stories contain the same information about knowledge and its boundaries. This study focuses on audience uncertainty and risk perceptions regarding the emerging science of nanotechnology by manipulating whether uncertainty in a news story about potential risks is attributed to expert sources in the form of caveats (individual uncertainty) or conflicting viewpoints (collective uncertainty). Results suggest that the type of uncertainty portrayed does not impact audience feelings of uncertainty or risk perceptions directly. Rather, the presentation of the story influences risk perceptions only among those who are highly deferent to scientific authority. Implications for risk communication theory and practice are discussed. © 2015 Society for Risk Analysis.
International Nuclear Information System (INIS)
1978-01-01
Uncertainties with regard to many facets of repository site characterization have not yet been quantified. This report summarizes the state of knowledge of uncertainties in the measurement of porosity, hydraulic conductivity, and hydraulic gradient; uncertainties associated with various geophysical field techniques; and uncertainties associated with the effects of exploration and exploitation activities in bedded salt basins. The potential for seepage through a depository in bedded salt or shale is reviewed and, based upon the available data, generic values for the hydraulic conductivity and porosity of bedded salt and shale are proposed.
Supporting Qualified Database for Uncertainty Evaluation
International Nuclear Information System (INIS)
Petruzzi, A.; Fiori, F.; Kovtonyuk, A.; Lisovyy, O.; D'Auria, F.
2013-01-01
Uncertainty evaluation constitutes a key feature of the BEPU (Best Estimate Plus Uncertainty) process. The uncertainty can be the result of a Monte Carlo type analysis involving input uncertainty parameters or the outcome of a process involving the use of experimental data and connected code calculations. Those uncertainty methods are discussed in several papers and guidelines (IAEA-SRS-52, OECD/NEA BEMUSE reports). The present paper aims at discussing the role and the depth of the analysis required for merging suitable experimental data on one side with qualified code calculation results on the other. This aspect is mostly connected with the second approach for uncertainty mentioned above, but it can be used also in the framework of the first approach. Namely, the paper discusses the features and structure of the database, which includes the following kinds of documents: 1. The 'RDS-facility' (Reference Data Set for the selected facility): this includes the description of the facility, the geometrical characterization of any component of the facility, the instrumentation, the data acquisition system, the evaluation of pressure losses, the physical properties of the material and the characterization of pumps, valves and heat losses; 2. The 'RDS-test' (Reference Data Set for the selected test of the facility): this includes the description of the main phenomena investigated during the test, the configuration of the facility for the selected test (possible new evaluation of pressure and heat losses if needed) and the specific boundary and initial conditions; 3. The 'QR' (Qualification Report) of the code calculation results: this includes the description of the nodalization developed following a set of homogeneous techniques, the achievement of the steady state conditions and the qualitative and quantitative analysis of the transient with the characterization of the Relevant Thermal-Hydraulics Aspects (RTA); 4. The 'EH' (Engineering Handbook) of the input nodalization.
Supporting qualified database for uncertainty evaluation
Energy Technology Data Exchange (ETDEWEB)
Petruzzi, A.; Fiori, F.; Kovtonyuk, A.; D'Auria, F. [Nuclear Research Group of San Piero A Grado, Univ. of Pisa, Via Livornese 1291, 56122 Pisa (Italy)
2012-07-01
Uncertainty evaluation constitutes a key feature of the BEPU (Best Estimate Plus Uncertainty) process. The uncertainty can be the result of a Monte Carlo type analysis involving input uncertainty parameters or the outcome of a process involving the use of experimental data and connected code calculations. Those uncertainty methods are discussed in several papers and guidelines (IAEA-SRS-52, OECD/NEA BEMUSE reports). The present paper aims at discussing the role and the depth of the analysis required for merging suitable experimental data on one side with qualified code calculation results on the other. This aspect is mostly connected with the second approach for uncertainty mentioned above, but it can be used also in the framework of the first approach. Namely, the paper discusses the features and structure of the database that includes the following kinds of documents: 1. The 'RDS-facility' (Reference Data Set for the selected facility): this includes the description of the facility, the geometrical characterization of any component of the facility, the instrumentation, the data acquisition system, the evaluation of pressure losses, the physical properties of the material and the characterization of pumps, valves and heat losses; 2. The 'RDS-test' (Reference Data Set for the selected test of the facility): this includes the description of the main phenomena investigated during the test, the configuration of the facility for the selected test (possible new evaluation of pressure and heat losses if needed) and the specific boundary and initial conditions; 3. The 'QR' (Qualification Report) of the code calculation results: this includes the description of the nodalization developed following a set of homogeneous techniques, the achievement of the steady state conditions and the qualitative and quantitative analysis of the transient with the characterization of the Relevant Thermal-Hydraulics Aspects (RTA); 4. The 'EH' (Engineering Handbook) of the input nodalization.
Digging deeper into platform game level design: session size and sequential features
DEFF Research Database (Denmark)
Shaker, Noor; Yannakakis, Georgios N.; Togelius, Julian
2012-01-01
A recent trend within computational intelligence and games research is to investigate how to affect video game players' in-game experience by designing and/or modifying aspects of game content. Analysing the relationship between game content, player behaviour and self-reported affective states constitutes an important step towards understanding game experience and constructing effective game adaptation mechanisms. This paper reports on further refinement of a method to understand this relationship by analysing data collected from players, building models that predict player experience, and analysing what features of game and player data predict player affect best. We analyse data from players playing 780 pairs of short game sessions of the platform game Super Mario Bros, and investigate the impact of the session size and which part of the level has the major effect on player experience.
Uncertainties in predicting solar panel power output
Anspaugh, B.
1974-01-01
The problem of calculating solar panel power output at launch and during a space mission is considered. The major sources of uncertainty and error in predicting the post-launch electrical performance of the panel are considered. A general discussion of error analysis is given. Examples of uncertainty calculations are included. A general method of calculating the effect on the panel of various degrading environments is presented, with references supplied for specific methods. A technique for sizing a solar panel for a required mission power profile is developed.
Uncertainty Analysis via Failure Domain Characterization: Unrestricted Requirement Functions
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2011-01-01
This paper proposes an uncertainty analysis framework based on the characterization of the uncertain parameter space. This characterization enables the identification of worst-case uncertainty combinations and the approximation of the failure and safe domains with a high level of accuracy. Because these approximations are composed of subsets of readily computable probability, they enable the calculation of arbitrarily tight upper and lower bounds to the failure probability. The methods developed herein, which are based on nonlinear constrained optimization, are applicable to requirement functions whose functional dependency on the uncertainty is arbitrary and whose explicit form may even be unknown. Some of the most prominent features of the methodology are the substantial desensitization of the calculations from the assumed uncertainty model (i.e., the probability distribution describing the uncertainty) as well as the accommodation for changes in such a model with a practically insignificant amount of computational effort.
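The failure-probability bounding idea can be illustrated with a simple Monte Carlo sketch. The requirement function g, the standard-normal uncertainty model, and all numbers below are hypothetical; the paper itself uses constrained optimization over the parameter space rather than sampling.

```python
import math
import random

def g(p):
    """Hypothetical requirement function: g < 0 denotes failure."""
    return 1.5 - p[0] ** 2 - 0.5 * p[1] ** 2

def failure_probability(n=20000, seed=2):
    """Monte Carlo estimate of P[g(p) < 0] under a standard-normal
    uncertainty model, with a 95 % normal-approximation interval
    standing in for the paper's rigorous upper and lower bounds."""
    rng = random.Random(seed)
    fails = sum(1 for _ in range(n)
                if g((rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0))) < 0.0)
    p_hat = fails / n
    half = 1.96 * math.sqrt(p_hat * (1.0 - p_hat) / n)
    return p_hat - half, p_hat, p_hat + half

lo_b, p_hat, hi_b = failure_probability()
```

Swapping in a different distribution for `rng.gauss` shows how strongly the estimate depends on the assumed uncertainty model, which is exactly the sensitivity the paper's method is designed to reduce.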
Uncertainty Regarding Waste Handling in Everyday Life
Directory of Open Access Journals (Sweden)
Susanne Ewert
2010-09-01
Full Text Available According to our study, based on interviews with households in a residential area in Sweden, uncertainty is a cultural barrier to improved recycling. Four causes of uncertainty are identified. Firstly, professional categories not matching cultural categories: people easily discriminate between certain categories (e.g., materials such as plastic and paper) but not between others (e.g., packaging and "non-packaging"). Thus a frequent cause of uncertainty is that the basic categories of the waste recycling system do not coincide with the basic categories used in everyday life. Secondly, challenged habits: source separation in everyday life is habitual, but when a habit is challenged by a particular element or feature of the waste system, uncertainty can arise. Thirdly, lacking fractions: some kinds of items cannot be left for recycling, which makes waste collection incomplete from the user's point of view and in turn lowers the credibility of the system. Fourthly, missing or contradictory rules of thumb: the above causes seem to be particularly relevant if no motivating principle or rule of thumb (within the context of use) is successfully conveyed to the user. This paper discusses how reducing uncertainty can improve recycling.
The uncertainties in estimating measurement uncertainties
International Nuclear Information System (INIS)
Clark, J.P.; Shull, A.H.
1994-01-01
All measurements include some error. Whether measurements are used for accountability, environmental programs or process support, they are of little value unless accompanied by an estimate of the measurement's uncertainty. This fact is often overlooked by the individuals who need measurements to make decisions. This paper discusses the concepts of measurement, measurement errors (accuracy or bias, and precision or random error), physical and error models, measurement control programs, examples of measurement uncertainty, and uncertainty as related to measurement quality. Measurements are comparisons of unknowns to knowns, estimates of some true value plus uncertainty, and are no better than the standards to which they are compared. Direct comparisons of unknowns that match the composition of known standards will normally have small uncertainties. In the real world, measurements usually involve indirect comparisons of significantly different materials (e.g., measuring a physical property of a chemical element in a sample whose matrix is significantly different from that of the calibration standards). Consequently, there are many sources of error involved in measurement processes that can affect the quality of a measurement and its associated uncertainty. How the uncertainty estimates are determined and what they mean is as important as the measurement itself. The process of calculating the uncertainty of a measurement itself has uncertainties that must be handled correctly. Examples of chemistry laboratory measurements are reviewed in this report and recommendations are made for improving measurement uncertainties.
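The combination of random and systematic error components described above can be sketched GUM-style: a Type A component from repeated readings and a Type B bias component added in quadrature. The readings and the bias uncertainty are hypothetical values for illustration.

```python
import math
import statistics

def combined_uncertainty(readings, u_bias):
    """GUM-style sketch: Type A uncertainty from repeated readings
    (standard deviation of the mean) combined in quadrature with a
    Type B bias uncertainty; the expanded uncertainty uses k = 2."""
    mean = statistics.mean(readings)
    u_a = statistics.stdev(readings) / math.sqrt(len(readings))
    u_c = math.sqrt(u_a ** 2 + u_bias ** 2)
    return mean, u_c, 2.0 * u_c  # value, combined, expanded (k = 2)

# Five hypothetical repeat readings of a sample mass, with an assumed
# calibration-bias uncertainty of 0.02 in the same units.
value, u_c, U = combined_uncertainty([10.12, 10.15, 10.11, 10.14, 10.13],
                                     u_bias=0.02)
```

Note how the bias term dominates here: repeating the measurement shrinks only u_a, which is the point the abstract makes about comparisons against mismatched standards.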
Design features of HTMR-Hybrid Toroidal Magnet Tokamak Reactor
International Nuclear Information System (INIS)
Rosatelli, F.; Avanzini, P.G.; Brunelli, B.; Derchi, D.; Magnasco, M.; Grattarola, M.; Peluffo, M.; Raia, G.; Zampaglione, V.
1985-01-01
The HTMR (Hybrid Toroidal Magnet Tokamak Reactor) conceptual design is aimed at demonstrating the feasibility of a tokamak reactor which could fulfill the scientific and technological objectives expected from next generation devices (e.g. INTOR-NET) with size and costs as small as possible. A hybrid toroidal field magnet, made up of copper and superconducting coils, seems to be a promising solution, allowing considerable flexibility in machine performance, so as to gain useful margins against the uncertainties in confinement-time scaling laws and in beta and plasma density limits. In this paper the authors describe the optimization procedure for the hybrid magnet configuration, the main design features of HTMR and the preliminary mechanical calculations of the superconducting toroidal coils.
Fine-tuning the feature size of nanoporous silver
Detsi, Eric; Vukovic, Zorica; Punzhin, Sergey; Bronsveld, Paul M.; Onck, Patrick R.; De Hosson, Jeff Th M.
2012-01-01
We show that the characteristic ligament size of nanoporous Ag synthesized by chemical dissolution of Al from Ag-Al alloys can be tuned from the current submicrometer size (∼100-500 nm) down to a much smaller length scale (∼30-60 nm). This is achieved by suppressing the formation
Adult head CT scans: the uncertainties of effective dose estimates
International Nuclear Information System (INIS)
Gregory, Kent J.; Bibbo, Giovanni; Pattison, John E.
2008-01-01
sizes and positions within patients, and advances in CT scanner design that have not been taken into account by the effective dose estimation methods. The analysis excludes uncertainties due to variation in patient head size and the size of the model heads. For each of the four dose estimation methods analysed, the smallest and largest uncertainties (stated at the 95% confidence interval) were: 20-31% (Nagel), 14-28% (ImpaCT), 20-36% (Wellhoefer) and 21-32% (DLP). In each case, the smallest dose estimate uncertainties apply when the CT Dose Index for the scanner has been measured. In general, each of the four methods provides reasonable estimates of effective dose from head CT scans, with the ImpaCT method having marginally smaller uncertainties. This uncertainty analysis method may be applied to other types of CT scans, such as chest, abdomen and pelvis studies, and may reveal where improvements can be made to reduce the uncertainty of those effective dose estimates. As identified in the BEIR VII report (2006), improvement in the uncertainty of effective dose estimates for individuals is expected to lead to a greater understanding of the hazards posed by diagnostic radiation exposures. (author)
Position-momentum uncertainty relations in the presence of quantum memory
Energy Technology Data Exchange (ETDEWEB)
Furrer, Fabian, E-mail: furrer@eve.phys.s.u-tokyo.ac.jp [Department of Physics, Graduate School of Science, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033 (Japan); Berta, Mario [Institute for Quantum Information and Matter, Caltech, Pasadena, California 91125 (United States); Institute for Theoretical Physics, ETH Zurich, Wolfgang-Pauli-Str. 27, 8093 Zürich (Switzerland); Tomamichel, Marco [School of Physics, The University of Sydney, Sydney 2006 (Australia); Centre for Quantum Technologies, National University of Singapore, Singapore 117543 (Singapore); Scholz, Volkher B. [Institute for Theoretical Physics, ETH Zurich, Wolfgang-Pauli-Str. 27, 8093 Zürich (Switzerland); Christandl, Matthias [Institute for Theoretical Physics, ETH Zurich, Wolfgang-Pauli-Str. 27, 8093 Zürich (Switzerland); Department of Mathematical Sciences, University of Copenhagen, Universitetsparken 5, 2100 Copenhagen (Denmark)
2014-12-15
A prominent formulation of the uncertainty principle identifies the fundamental quantum feature that no particle may be prepared with certain outcomes for both position and momentum measurements. Often the statistical uncertainties are thereby measured in terms of entropies, which provide a clear operational interpretation in information theory and cryptography. Recently, entropic uncertainty relations have been used to show that the uncertainty can be reduced in the presence of entanglement and to prove security of quantum cryptographic tasks. However, much of this recent progress has been focused on observables with only a finite number of outcomes, not including Heisenberg's original setting of position and momentum observables. Here, we show entropic uncertainty relations for general observables with discrete but infinite or continuous spectrum that take into account the power of an entangled observer. As an illustration, we evaluate the uncertainty relations for position and momentum measurements, which is operationally significant in that it implies security of a quantum key distribution scheme based on homodyne detection of squeezed Gaussian states.
Use of DEMs Derived from TLS and HRSI Data for Landslide Feature Recognition
Directory of Open Access Journals (Sweden)
Maurizio Barbarella
2018-04-01
Full Text Available This paper addresses the problems arising from the use of data acquired with two different remote sensing techniques—high-resolution satellite imagery (HRSI and terrestrial laser scanning (TLS—for the extraction of digital elevation models (DEMs used in the geomorphological analysis and recognition of landslides, taking into account the uncertainties associated with DEM production. In order to obtain a georeferenced and edited point cloud, the two data sets require quite different processes, which are more complex for satellite images than for TLS data. The differences between the two processes are highlighted. The point clouds are interpolated on a DEM with a 1 m grid size using kriging. Starting from these DEMs, a number of contour, slope, and aspect maps are extracted, together with their associated uncertainty maps. Comparative analysis of selected landslide features drawn from the two data sources allows recognition and classification of hierarchical and multiscale landslide components. Taking into account the uncertainty related to the map enables areas to be located for which one data source was able to give more reliable results than another. Our case study is located in Southern Italy, in an area known for active landslides.
Two-point method uncertainty during control and measurement of cylindrical element diameters
Glukhov, V. I.; Shalay, V. V.; Radev, H.
2018-04-01
The article is devoted to the urgent problem of the reliability of measurements of the geometric specifications of technical products. The purpose of the article is to improve the quality of the control of parts' linear sizes by the two-point measurement method. The task of the article is to investigate the methodical extended uncertainties in measuring the linear sizes of cylindrical elements. The investigation method is geometric modeling of the shape and location deviations of the element surfaces in a rectangular coordinate system. The studies were carried out for elements of various service uses, taking into account their informativeness, corresponding to the kinematic pair classes in theoretical mechanics and the number of constrained degrees of freedom in the datum element function. Cylindrical elements with informativeness of 4, 2, 1 and 0 (zero) were investigated. The uncertainties in two-point measurements were estimated by comparing the results of linear dimension measurements with the maximum and minimum functional diameters of the element material. Methodical uncertainty is formed when cylindrical elements with maximum informativeness have shape deviations of the cut and curvature types. Methodical uncertainty is also formed when measuring the element's average size for all types of shape deviations. The two-point measurement method cannot take into account the location deviations of a dimensional element, so its use for elements with informativeness less than the maximum creates unacceptable methodical uncertainties in measurements of the maximum, minimum and medium linear dimensions. Similar methodical uncertainties also exist in the arbitration control of the linear dimensions of cylindrical elements by limiting two-point gauges.
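The methodical limitation described above, that a two-point (caliper-style) measurement can be blind to certain form deviations, can be demonstrated with a small sketch. The tri-lobed profile r(θ) = R + a·cos(3θ) and its dimensions are hypothetical.

```python
import math

def two_point_size(theta, R=10.0, a=0.1):
    """Two-point reading across a tri-lobed profile r(θ) = R + a·cos(3θ):
    the sum of the two opposite radii along the measurement direction."""
    def r(t):
        return R + a * math.cos(3.0 * t)
    return r(theta) + r(theta + math.pi)

# Sweep the measurement direction: for an odd-lobed profile the two
# opposite deviations cancel, so every reading is exactly 2R and the
# ±a form deviation is invisible to the two-point method.
readings = [two_point_size(k * math.pi / 36.0) for k in range(36)]
spread = max(readings) - min(readings)
```

A three-point or circumscribed-circle measurement would expose the lobing; this is the gap between the two-point size and the functional (material) diameters that the abstract calls methodical uncertainty.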
Bayesian uncertainty quantification in linear models for diffusion MRI.
Sjölund, Jens; Eklund, Anders; Özarslan, Evren; Herberthson, Magnus; Bånkestad, Maria; Knutsson, Hans
2018-03-29
Diffusion MRI (dMRI) is a valuable tool in the assessment of tissue microstructure. By fitting a model to the dMRI signal it is possible to derive various quantitative features. Several of the most popular dMRI signal models are expansions in an appropriately chosen basis, where the coefficients are determined using some variation of least-squares. However, such approaches lack any notion of uncertainty, which could be valuable in e.g. group analyses. In this work, we use a probabilistic interpretation of linear least-squares methods to recast popular dMRI models as Bayesian ones. This makes it possible to quantify the uncertainty of any derived quantity. In particular, for quantities that are affine functions of the coefficients, the posterior distribution can be expressed in closed-form. We simulated measurements from single- and double-tensor models where the correct values of several quantities are known, to validate that the theoretically derived quantiles agree with those observed empirically. We included results from residual bootstrap for comparison and found good agreement. The validation employed several different models: Diffusion Tensor Imaging (DTI), Mean Apparent Propagator MRI (MAP-MRI) and Constrained Spherical Deconvolution (CSD). We also used in vivo data to visualize maps of quantitative features and corresponding uncertainties, and to show how our approach can be used in a group analysis to downweight subjects with high uncertainty. In summary, we convert successful linear models for dMRI signal estimation to probabilistic models, capable of accurate uncertainty quantification. Copyright © 2018 Elsevier Inc. All rights reserved.
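The closed-form posterior idea can be sketched in the simplest case: a single-coefficient linear model with Gaussian noise and a Gaussian prior. This scalar toy is a stand-in for the multivariate dMRI models in the abstract; σ, τ, and the data below are hypothetical.

```python
import math

def posterior_weight(xs, ys, sigma=0.1, tau=10.0):
    """Closed-form posterior for the single-coefficient linear model
    y = w·x + ε, ε ~ N(0, σ²), with prior w ~ N(0, τ²)."""
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    var = 1.0 / (1.0 / tau ** 2 + sxx / sigma ** 2)
    mean = var * sxy / sigma ** 2
    return mean, var

# Noise-free data generated with w = 2: the posterior concentrates near 2.
# Any affine quantity q = c·w + d then has posterior N(c·mean + d, c²·var)
# in closed form, which is the property the abstract exploits.
xs = [0.1 * i for i in range(1, 11)]
ys = [2.0 * x for x in xs]
m, v = posterior_weight(xs, ys)
```

The posterior variance v is what a plain least-squares fit lacks, and is exactly the uncertainty that can be carried into group analyses to downweight unreliable subjects.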
Ye, Jongpil
2015-05-08
Templated solid-state dewetting of single-crystal films has been shown to be used to produce regular patterns of various shapes. However, the materials for which this patterning method is applicable, and the size range of the patterns produced are still limited. Here, it is shown that ordered arrays of micro- and nanoscale features can be produced with control over their shape and size via solid-state dewetting of patches patterned from single-crystal palladium and nickel films of different thicknesses and orientations. The shape and size characteristics of the patterns are found to be widely controllable with varying the shape, width, thickness, and orientation of the initial patches. The morphological evolution of the patches is also dependent on the film material, with different dewetting behaviors observed in palladium and nickel films. The mechanisms underlying the pattern formation are explained in terms of the influence on Rayleigh-like instability of the patch geometry and the surface energy anisotropy of the film material. This mechanistic understanding of pattern formation can be used to design patches for the precise fabrication of micro- and nanoscale structures with the desired shapes and feature sizes.
Approaches to Evaluating Probability of Collision Uncertainty
Hejduk, Matthew D.; Johnson, Lauren C.
2016-01-01
While the two-dimensional probability of collision (Pc) calculation has served as the main input to conjunction analysis risk assessment for over a decade, it has done so mostly as a point estimate, with relatively little effort made to produce confidence intervals on the Pc value based on the uncertainties in the inputs. The present effort seeks to carry these uncertainties through the calculation in order to generate a probability density of Pc results rather than a single average value. Methods for assessing uncertainty in the primary and secondary objects' physical sizes and state estimate covariances, as well as a resampling approach to reveal the natural variability in the calculation, are presented; and an initial proposal for operationally useful display and interpretation of these data for a particular conjunction is given.
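The resampling approach described above can be sketched with a toy one-dimensional Pc. The Gaussian miss-distance model and all input values are hypothetical stand-ins for the real two-dimensional calculation.

```python
import math
import random

def pc(miss, sigma, radius):
    """Toy 1-D stand-in for the 2-D collision probability: the chance that
    a normally distributed miss distance falls within the combined radius."""
    phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return phi((radius - miss) / sigma) - phi((-radius - miss) / sigma)

def pc_distribution(miss=120.0, sigma=80.0, radius=10.0,
                    sigma_rel_unc=0.2, radius_rel_unc=0.3,
                    n=2000, seed=3):
    """Resample the uncertain inputs (covariance scale and object size)
    to turn the point-estimate Pc into an empirical distribution."""
    rng = random.Random(seed)
    draws = []
    for _ in range(n):
        s = sigma * max(1e-6, rng.gauss(1.0, sigma_rel_unc))
        r = radius * max(1e-6, rng.gauss(1.0, radius_rel_unc))
        draws.append(pc(miss, s, r))
    draws.sort()
    return draws[len(draws) // 2], draws  # median and full sample

median_pc, draws = pc_distribution()
```

The sorted draws give percentiles directly, which is the kind of density-of-Pc summary the abstract proposes in place of a single point value.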
Online feature selection with streaming features.
Wu, Xindong; Yu, Kui; Ding, Wei; Wang, Hao; Zhu, Xingquan
2013-05-01
We propose a new online feature selection framework for applications with streaming features where the knowledge of the full feature space is unknown in advance. We define streaming features as features that flow in one by one over time, whereas the number of training examples remains fixed. This is in contrast with traditional online learning methods that only deal with sequentially added observations, with little attention being paid to streaming features. The critical challenges for Online Streaming Feature Selection (OSFS) include 1) the continuous growth of feature volumes over time, 2) a large feature space, possibly of unknown or infinite size, and 3) the unavailability of the entire feature set before learning starts. In this paper, we present a novel Online Streaming Feature Selection method to select strongly relevant and nonredundant features on the fly. An efficient Fast-OSFS algorithm is proposed to improve feature selection performance. The proposed algorithms are evaluated extensively on high-dimensional datasets and also with a real-world case study on impact crater detection. Experimental results demonstrate that the algorithms achieve better compactness and higher prediction accuracy than existing streaming feature selection algorithms.
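A minimal sketch of the streaming setting: features arrive one at a time and must be kept or discarded on arrival using relevance and redundancy tests. This correlation-based filter illustrates the setting only, not the OSFS/Fast-OSFS algorithms themselves; the thresholds and data are hypothetical.

```python
import math

def pearson(a, b):
    """Plain Pearson correlation between two equal-length lists."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

def stream_select(features, target, relevance=0.3, redundancy=0.95):
    """Process features one at a time as they 'arrive': keep a feature if
    it correlates with the target and is not redundant with a kept one."""
    kept = []
    for name, column in features:
        if abs(pearson(column, target)) < relevance:
            continue  # weakly relevant: discard on arrival
        if any(abs(pearson(column, k)) > redundancy for _, k in kept):
            continue  # near-duplicate of a feature already kept
        kept.append((name, column))
    return [name for name, _ in kept]

target = [1.0, 2.0, 3.0, 4.0, 5.0]
stream = [
    ("useful", [1.1, 2.0, 2.9, 4.2, 5.0]),
    ("noise", [0.1, -0.1, 0.0, -0.1, 0.1]),
    ("duplicate", [1.1, 2.0, 2.9, 4.2, 5.0]),
]
selected = stream_select(stream, target)
```

The key constraint is visible in the loop: each decision uses only the features seen so far, never the full feature space.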
Recent activity of international comparison for nanoparticle size measurement
Takahashi, Kayori; Takahata, Keiji; Misumi, Ichiko; Sugawara, Kentaro; Gonda, Satoshi; Ehara, Kensei
2014-08-01
Nanoparticle sizing is the most fundamental measurement for producing nanomaterials, evaluating nanostructure, and assessing the risk of nanomaterials to human health. Dynamic light scattering (DLS) is widely used as a convenient technique for determining nanoparticle size in liquid; however, the precision of this technique has been unclear. Some international comparisons are now in progress to verify the measurement accuracy of nanoparticle sizing, a typical example being an Asia Pacific Metrology Programme Supplementary Comparison. In this study, we evaluated the precision of the DLS technique for nanoparticle sizing and estimated the uncertainty of the DLS data for polystyrene latex suspensions. The extrapolation of apparent diffusion coefficients to infinite dilution and to lower angles yielded more precise values than those obtained at a single angle and concentration. The extrapolated particle size measured by DLS was compared to the size determined by differential mobility analyzer (DMA), atomic force microscopy (AFM), and scanning electron microscopy (SEM). Before the comparison, the intensity-averaged size measured by DLS was recalculated to the number-averaged size, and the thickness of the water layer attached to the surface of the particles was added to the uncertainty of particle sizing by DLS. After the recalculation, consistent values of the mean particle diameter were obtained between DLS and DMA, AFM, and SEM within the estimated uncertainties.
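The extrapolation-then-Stokes-Einstein workflow can be sketched as follows. The concentration series, viscosity, and temperature are hypothetical values, and the real procedure also extrapolates over scattering angle.

```python
import math

def intercept(xs, ys):
    """Least-squares intercept: extrapolation of apparent diffusion
    coefficients to infinite dilution (c → 0)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx

def stokes_einstein_diameter(D0, T=298.15, eta=0.89e-3):
    """Hydrodynamic diameter from the Stokes-Einstein relation
    d = kB·T / (3π·η·D0); water viscosity at 25 °C assumed."""
    kB = 1.380649e-23  # Boltzmann constant, J/K
    return kB * T / (3.0 * math.pi * eta * D0)

# Hypothetical apparent diffusion coefficients (m²/s) at four
# mass concentrations (g/L):
conc = [0.5, 1.0, 1.5, 2.0]
d_app = [4.44e-12, 4.48e-12, 4.52e-12, 4.56e-12]
D0 = intercept(conc, d_app)       # extrapolated to infinite dilution
d = stokes_einstein_diameter(D0)  # hydrodynamic diameter in metres
```

With these made-up inputs the diameter lands near 110 nm, a plausible polystyrene-latex size; propagating the uncertainties of the fit and of η and T through the same formula gives the sizing uncertainty the abstract discusses.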
Uncertainty Assessment of Hydrological Frequency Analysis Using Bootstrap Method
Directory of Open Access Journals (Sweden)
Yi-Ming Hu
2013-01-01
Full Text Available The hydrological frequency analysis (HFA) is the foundation for hydraulic engineering design and water resources management. Hydrological extreme observations or samples are the basis for HFA; the representativeness of a sample series with respect to the population distribution is extremely important for the reliability of the estimated hydrological design value or quantile. However, for most hydrological extreme data obtained in practical applications, the sample size is usually small, for example, about 40-50 years in China. Generally, samples of small size cannot completely display the statistical properties of the population distribution, thus leading to uncertainties in the estimation of hydrological design values. In this paper, a new method based on the bootstrap is put forward to analyze the impact of sampling uncertainty on the design value. Using the bootstrap resampling technique, a large number of bootstrap samples are constructed from the original flood extreme observations; the corresponding design value or quantile is estimated for each bootstrap sample, so that the sampling distribution of the design value is constructed; based on the sampling distribution, the uncertainty of the quantile estimation can be quantified. Compared with the conventional approach, this method provides not only a point estimate of a design value but also a quantitative evaluation of the uncertainties of the estimation.
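The bootstrap procedure described above can be sketched end to end, here with a method-of-moments Gumbel fit standing in for whatever distribution a practitioner would actually use; the 40-year flood record is synthetic.

```python
import math
import random
import statistics

def gumbel_quantile(sample, T=100):
    """Method-of-moments Gumbel fit and the T-year design value."""
    beta = statistics.stdev(sample) * math.sqrt(6.0) / math.pi
    mu = statistics.mean(sample) - 0.5772 * beta
    return mu - beta * math.log(-math.log(1.0 - 1.0 / T))

def bootstrap_quantile(sample, T=100, B=2000, seed=4):
    """Resample the short record B times to build the sampling
    distribution of the design value; return its 90 % interval."""
    rng = random.Random(seed)
    estimates = sorted(
        gumbel_quantile(rng.choices(sample, k=len(sample)), T)
        for _ in range(B)
    )
    return estimates[int(0.05 * B)], estimates[int(0.95 * B)]

# A synthetic 40-year record of annual flood peaks (m³/s):
rng = random.Random(0)
record = [800.0 + 300.0 * rng.gammavariate(2.0, 1.0) for _ in range(40)]
x100 = gumbel_quantile(record)   # point estimate of the 100-year flood
lo, hi = bootstrap_quantile(record)  # sampling-uncertainty interval
```

The width of (lo, hi) is the quantitative evaluation the abstract promises: it shrinks as the record lengthens and widens for rarer return periods.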
Assessing uncertainty and risk in exploited marine populations
International Nuclear Information System (INIS)
Fogarty, M.J.; Mayo, R.K.; O'Brien, L.; Serchuk, F.M.; Rosenberg, A.A.
1996-01-01
The assessment and management of exploited fish and invertebrate populations is subject to several types of uncertainty. This uncertainty translates into risk to the population in the development and implementation of fishery management advice. Here, we define risk as the probability that exploitation rates will exceed a threshold level where long-term sustainability of the stock is threatened. We distinguish among several sources of error or uncertainty due to (a) stochasticity in demographic rates and processes, particularly in survival rates during the early life stages; (b) measurement error resulting from sampling variation in the determination of population parameters or in model estimation; and (c) the lack of complete information on population and ecosystem dynamics. The first represents a form of aleatory uncertainty while the latter two factors represent forms of epistemic uncertainty. To illustrate these points, we evaluate the recent status of the Georges Bank cod stock in a risk assessment framework. Short-term stochastic projections are made accounting for uncertainty in population size and for random variability in the number of young surviving to enter the fishery. We show that recent declines in this cod stock can be attributed to exploitation rates that have substantially exceeded sustainable levels.
Contribution to uncertainties computing: application to aerosol nanoparticles metrology
International Nuclear Information System (INIS)
Coquelin, Loic
2013-01-01
This thesis aims to provide SMPS users with a methodology to compute the uncertainties associated with the estimation of aerosol size distributions. An SMPS selects and detects airborne particles with a Differential Mobility Analyser (DMA) and a Condensation Particle Counter (CPC), respectively. The on-line measurement provides particle counts over a large measuring range. Recovering the aerosol size distribution from CPC measurements then leads to an inverse problem under uncertainty. A review of models representing CPC measurements as a function of the aerosol size distribution is presented in the first chapter, showing that competing theories exist to model the physics involved in the measurement; it shows at the same time the necessity of modelling parameters and other functions as uncertain. The physical model we established was designed first to represent the physics accurately and second to have a low computational cost. The first requirement is obvious, as it characterizes the performance of the model. The time constraint, on the other hand, is common to all large-scale problems for which an evaluation of the uncertainty is sought. To estimate the size distribution, a new criterion that couples regularization techniques and decomposition on a wavelet basis is described. Regularization is widely used to solve ill-posed problems. The regularized solution is computed as a trade-off between fidelity to the data and a prior on the solution to be rebuilt, the trade-off being controlled by a scalar known as the regularization parameter. Nevertheless, when dealing with size distributions showing both broad and sharp profiles, a homogeneous prior is no longer suitable. The main improvement brought by this work concerns such situations. The multi-scale approach we propose for the definition of the new prior is an alternative that enables the weights of the regularization to be adjusted on each scale of the signal. The method is tested against common regularization
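The fidelity-versus-prior trade-off controlled by the regularization parameter can be seen in the simplest possible Tikhonov problem, with a scalar unknown; the multi-scale wavelet prior of the thesis is beyond this sketch, and the data below are hypothetical.

```python
def tikhonov_scalar(a, b, lam):
    """Regularized solution of the scalar least-squares problem
    min_x  Σ(a_i·x − b_i)² + λ·x²,
    whose closed form is x = Σ(a·b) / (Σa² + λ)."""
    num = sum(ai * bi for ai, bi in zip(a, b))
    den = sum(ai * ai for ai in a) + lam
    return num / den

# Noisy observations of b ≈ 3·a: as λ grows, the solution is pulled
# from the data-fit answer (≈ 3) toward the zero-mean prior.
a = [1.0, 2.0, 3.0, 4.0]
b = [3.1, 5.9, 9.2, 11.8]
solutions = {lam: tikhonov_scalar(a, b, lam)
             for lam in (0.0, 1.0, 10.0, 100.0)}
```

Choosing λ per scale of a wavelet decomposition, rather than one global value, is the idea that lets a non-homogeneous prior handle broad and sharp profiles at once.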
Uncertainty and Complementarity in Axiomatic Quantum Mechanics
Lahti, Pekka J.
1980-11-01
In this work an investigation of the uncertainty principle and the complementarity principle is carried through. A study of the physical content of these principles and their representation in the conventional Hilbert space formulation of quantum mechanics forms a natural starting point for this analysis. Thereafter, a more general axiomatic framework for quantum mechanics is presented, namely, a probability function formulation of the theory. In this general framework two extra axioms are stated, reflecting the ideas of the uncertainty principle and the complementarity principle, respectively. The quantal features of these axioms are explicated. The sufficiency of the state system guarantees that the observables satisfying the uncertainty principle are unbounded and noncompatible. The complementarity principle implies a non-Boolean proposition structure for the theory. Moreover, nonconstant complementary observables are always noncompatible. The uncertainty principle and the complementarity principle, as formulated in this work, are mutually independent. Some order is thus brought into the confused discussion about the interrelations of these two important principles. A comparison of the present formulations of the uncertainty principle and the complementarity principle with the Jauch formulation of the superposition principle is also given. The mutual independence of the three fundamental principles of quantum theory is thereby revealed.
Directory of Open Access Journals (Sweden)
Koen Degeling
2017-12-01
Full Text Available Abstract Background Parametric distributions based on individual patient data can be used to represent both stochastic and parameter uncertainty. Although general guidance is available on how parameter uncertainty should be accounted for in probabilistic sensitivity analysis, there is no comprehensive guidance on reflecting parameter uncertainty in the (correlated) parameters of distributions used to represent stochastic uncertainty in patient-level models. This study aims to provide this guidance by proposing appropriate methods and illustrating the impact of this uncertainty on modeling outcomes. Methods Two approaches, 1) using non-parametric bootstrapping and 2) using multivariate Normal distributions, were applied in a simulation and case study. The approaches were compared based on point-estimates and distributions of time-to-event and health economic outcomes. To assess sample size impact on the uncertainty in these outcomes, sample size was varied in the simulation study and subgroup analyses were performed for the case study. Results Accounting for parameter uncertainty in distributions that reflect stochastic uncertainty substantially increased the uncertainty surrounding health economic outcomes, illustrated by larger confidence ellipses surrounding the cost-effectiveness point-estimates and different cost-effectiveness acceptability curves. Although both approaches performed similarly for larger sample sizes (i.e., n = 500), the second approach was more sensitive to extreme values for small sample sizes (i.e., n = 25), yielding infeasible modeling outcomes. Conclusions Modelers should be aware that parameter uncertainty in distributions used to describe stochastic uncertainty needs to be reflected in probabilistic sensitivity analysis, as it could substantially impact the total amount of uncertainty surrounding health economic outcomes. If feasible, the bootstrap approach is recommended to account for this uncertainty.
Degeling, Koen; IJzerman, Maarten J; Koopman, Miriam; Koffijberg, Hendrik
2017-12-15
Parametric distributions based on individual patient data can be used to represent both stochastic and parameter uncertainty. Although general guidance is available on how parameter uncertainty should be accounted for in probabilistic sensitivity analysis, there is no comprehensive guidance on reflecting parameter uncertainty in the (correlated) parameters of distributions used to represent stochastic uncertainty in patient-level models. This study aims to provide this guidance by proposing appropriate methods and illustrating the impact of this uncertainty on modeling outcomes. Two approaches, 1) using non-parametric bootstrapping and 2) using multivariate Normal distributions, were applied in a simulation and case study. The approaches were compared based on point-estimates and distributions of time-to-event and health economic outcomes. To assess sample size impact on the uncertainty in these outcomes, sample size was varied in the simulation study and subgroup analyses were performed for the case study. Accounting for parameter uncertainty in distributions that reflect stochastic uncertainty substantially increased the uncertainty surrounding health economic outcomes, illustrated by larger confidence ellipses surrounding the cost-effectiveness point-estimates and different cost-effectiveness acceptability curves. Although both approaches performed similarly for larger sample sizes (i.e., n = 500), the second approach was more sensitive to extreme values for small sample sizes (i.e., n = 25), yielding infeasible modeling outcomes. Modelers should be aware that parameter uncertainty in distributions used to describe stochastic uncertainty needs to be reflected in probabilistic sensitivity analysis, as it could substantially impact the total amount of uncertainty surrounding health economic outcomes. If feasible, the bootstrap approach is recommended to account for this uncertainty.
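Approach 1) above, refitting the parametric distribution on non-parametric bootstrap resamples so that each probabilistic-sensitivity-analysis iteration uses its own parameter set, can be sketched as follows. Everything here is illustrative: the patient event times are simulated, and an exponential fit stands in for the time-to-event distribution.

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated patient-level time-to-event data (months); illustrative only
patients = rng.exponential(scale=12.0, size=100)

# Refit the distribution on each bootstrap resample; the spread of the
# refitted parameters reflects parameter uncertainty
boot_rates = []
for _ in range(1000):
    resample = rng.choice(patients, size=len(patients), replace=True)
    boot_rates.append(1.0 / resample.mean())  # MLE of the exponential rate

lo, hi = np.quantile(boot_rates, [0.025, 0.975])  # 95% parameter interval
```

In a patient-level model, each simulation run would draw one `boot_rates` entry and sample event times from the distribution it defines, so stochastic and parameter uncertainty propagate together.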
Commonplaces and social uncertainty
DEFF Research Database (Denmark)
Lassen, Inger
2008-01-01
This article explores the concept of uncertainty in four focus group discussions about genetically modified food. In the discussions, members of the general public interact with food biotechnology scientists while negotiating their attitudes towards genetic engineering. Their discussions offer...... an example of risk discourse in which the use of commonplaces seems to be a central feature (Myers 2004: 81). My analyses support earlier findings that commonplaces serve important interactional purposes (Barton 1999) and that they are used for mitigating disagreement, for closing topics and for facilitating...
Majorization uncertainty relations for mixed quantum states
Puchała, Zbigniew; Rudnicki, Łukasz; Krawiec, Aleksandra; Życzkowski, Karol
2018-04-01
Majorization uncertainty relations are generalized for an arbitrary mixed quantum state ρ of a finite size N. In particular, a lower bound for the sum of two entropies characterizing the probability distributions corresponding to measurements with respect to two arbitrary orthogonal bases is derived in terms of the spectrum of ρ and the entries of a unitary matrix U relating both bases. The results obtained can also be formulated for two measurements performed on a single subsystem of a bipartite system described by a pure state, and consequently expressed as an uncertainty relation for the sum of conditional entropies.
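As a point of comparison for the entropy-sum bounds discussed above, the classic Maassen-Uffink relation (a standard special case, not the paper's majorization result) can be checked numerically for a pure qubit state and two bases related by a unitary U:

```python
import numpy as np

def shannon(p):
    """Shannon entropy in bits, ignoring zero-probability outcomes."""
    p = p[p > 1e-12]
    return float(-(p * np.log2(p)).sum())

psi = np.array([0.6, 0.8])                    # pure qubit state in basis A
U = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # unitary relating bases A and B

p = np.abs(psi) ** 2       # outcome probabilities, measurement in basis A
q = np.abs(U @ psi) ** 2   # outcome probabilities, measurement in basis B

# Maassen-Uffink: H(p) + H(q) >= -2 log2 max_ij |U_ij| (= 1 bit here)
bound = -2 * np.log2(np.abs(U).max())
entropy_sum = shannon(p) + shannon(q)
```

The mixed-state bounds of the paper additionally involve the spectrum of ρ; for a pure state that dependence drops out.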
Summary of existing uncertainty methods
International Nuclear Information System (INIS)
Glaeser, Horst
2013-01-01
A summary of the existing and most-used uncertainty methods is presented, and their main features are compared. One of these methods is the order statistics method based on Wilks' formula. It is applied in safety research as well as in licensing. This method was first proposed by GRS for use in deterministic safety analysis and is now used by many organisations world-wide. Its advantage is that the number of potentially uncertain input and output parameters is not limited to a small number. Such a limitation was necessary for the first demonstration of the Code Scaling, Applicability and Uncertainty method (CSAU) by the United States Nuclear Regulatory Commission (USNRC). They did not apply Wilks' formula in their statistical method for propagating input uncertainties to obtain the uncertainty of a single output variable, like peak cladding temperature. A Phenomena Identification and Ranking Table (PIRT) was set up in order to limit the number of uncertain input parameters and, consequently, the number of calculations to be performed. Another purpose of such a PIRT process is to identify the most important physical phenomena that a computer code should be able to calculate. The validation of the code should be focused on the identified phenomena. In some applications, response surfaces are used in place of the computer code for performing a large number of calculations. The second well-known uncertainty method is the Uncertainty Methodology Based on Accuracy Extrapolation (UMAE) and its follow-up, the Code with the Capability of Internal Assessment of Uncertainty (CIAU), developed by the University of Pisa. Unlike the statistical approaches, the CIAU compares experimental data with calculation results. It does not consider uncertain input parameters; therefore, the CIAU is highly dependent on the experimental database. The accuracy obtained from the comparison between experimental data and calculated results is extrapolated to obtain the uncertainty of the system code predictions.
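The order statistics approach above fixes the number of code runs via Wilks' formula; for the usual one-sided, first-order 95%/95% statement this gives the well-known 59 runs:

```python
import math

def wilks_runs(coverage=0.95, confidence=0.95):
    """Smallest N with 1 - coverage**N >= confidence, i.e. the largest of
    N code runs bounds the `coverage` quantile of the output variable with
    the requested confidence (one-sided, first-order Wilks formula)."""
    return math.ceil(math.log(1.0 - confidence) / math.log(coverage))

n_runs = wilks_runs()  # 59 runs for a 95%/95% tolerance statement
```

Because N does not depend on the number of uncertain inputs, no PIRT-style truncation of the input list is needed, which is the advantage noted in the abstract.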
Uncertainty Quantification in High Throughput Screening ...
Using uncertainty quantification, we aim to improve the quality of modeling data from high throughput screening assays for use in risk assessment. ToxCast is a large-scale screening program that analyzes thousands of chemicals using over 800 assays representing hundreds of biochemical and cellular processes, including endocrine disruption, cytotoxicity, and zebrafish development. Over 2.6 million concentration response curves are fit to models to extract parameters related to potency and efficacy. Models built on ToxCast results are being used to rank and prioritize the toxicological risk of tested chemicals and to predict the toxicity of tens of thousands of chemicals not yet tested in vivo. However, the data size also presents challenges. When fitting the data, the choice of models, model selection strategy, and hit call criteria must reflect the need for computational efficiency and robustness, requiring hard and somewhat arbitrary cutoffs. When coupled with unavoidable noise in the experimental concentration response data, these hard cutoffs cause uncertainty in model parameters and the hit call itself. The uncertainty will then propagate through all of the models built on the data. Left unquantified, this uncertainty makes it difficult to fully interpret the data for risk assessment. We used bootstrap resampling methods to quantify the uncertainty in fitting models to the concentration response data. Bootstrap resampling determines confidence intervals for
Czech Academy of Sciences Publication Activity Database
Pfeifer, S.; Müller, T.; Weinhold, K.; Zíková, Naděžda; dos Santos, S.M.; Marinoni, A.; Bischof, O.F.; Kykal, C.; Ries, L.; Meinhardt, F.; Aalto, P.; Mihalopoulos, N.; Wiedensohler, A.
2016-01-01
Roč. 9, č. 4 (2016), s. 1545-1551 ISSN 1867-1381 EU Projects: European Commission(XE) 262254 - ACTRIS Institutional support: RVO:67985858 Keywords : counting efficiency * aerodynamic particle size spectrometers * laboratory study Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 3.089, year: 2016
Horizon Wavefunction of Generalized Uncertainty Principle Black Holes
Directory of Open Access Journals (Sweden)
Luciano Manfredi
2016-01-01
Full Text Available We study the Horizon Wavefunction (HWF) description of a Generalized Uncertainty Principle inspired metric that admits sub-Planckian black holes, where the black hole mass m is replaced by M = m(1 + (β/2) M_Pl^2/m^2). Considering the case of a wave-packet shaped by a Gaussian distribution, we compute the HWF and the probability PBH that the source is a (quantum) black hole, that is, that it lies within its horizon radius. For β > 0 a minimum in PBH is encountered, meaning that every particle has some probability of decaying to a black hole. Furthermore, for sufficiently large β we find that every particle is a quantum black hole, in agreement with the intuitive effect of increasing β, which creates larger M and RH terms. This is likely due to a “dimensional reduction” feature of the model, where the black hole characteristics for sub-Planckian black holes mimic those in (1+1) dimensions and the horizon size grows as RH ~ M^-1.
Controlling Uncertainty Decision Making and Learning in Complex Worlds
Osman, Magda
2010-01-01
Controlling Uncertainty: Decision Making and Learning in Complex Worlds reviews and discusses the most current research relating to the ways we can control the uncertain world around us. It features reviews and discussions of the most current research in a number of fields relevant to controlling uncertainty, such as psychology, neuroscience, computer science and engineering; presents a new framework designed to integrate a variety of disparate fields of research; and represents the first book of its kind to provide a general overview of work related to understanding control.
Tadini, A.; Bisson, M.; Neri, A.; Cioni, R.; Bevilacqua, A.; Aspinall, W. P.
2017-06-01
This study presents new and revised data sets about the spatial distribution of past volcanic vents, eruptive fissures, and regional/local structures of the Somma-Vesuvio volcanic system (Italy). The innovative features of the study are the identification and quantification of important sources of uncertainty affecting interpretations of the data sets. In this regard, the spatial uncertainty of each feature is modeled by an uncertainty area, i.e., a geometric element typically represented by a polygon drawn around points or lines. The new data sets have been assembled as an updatable geodatabase that integrates and complements existing databases for Somma-Vesuvio. The data are organized into 4 data sets and stored as 11 feature classes (points and lines for feature locations and polygons for the associated uncertainty areas), totaling more than 1700 elements. More specifically, volcanic vent and eruptive fissure elements are subdivided into feature classes according to their associated eruptive styles: (i) Plinian and sub-Plinian eruptions (i.e., large- or medium-scale explosive activity); (ii) violent Strombolian and continuous ash emission eruptions (i.e., small-scale explosive activity); and (iii) effusive eruptions (including eruptions from both parasitic vents and eruptive fissures). Regional and local structures (i.e., deep faults) are represented as linear feature classes. To support interpretation of the eruption data, additional data sets are provided for Somma-Vesuvio geological units and caldera morphological features. In the companion paper, the data presented here, and the associated uncertainties, are used to develop a first vent opening probability map for the Somma-Vesuvio caldera, with specific attention focused on large or medium explosive events.
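The uncertainty areas described above, polygons drawn around point or line features, can be sketched in a minimal form. The coordinates and the 300 m radius below are hypothetical, not values from the Somma-Vesuvio geodatabase:

```python
import math

def uncertainty_polygon(x, y, radius, n=64):
    """Approximate the circular uncertainty area around a mapped vent
    (hypothetical UTM-style coordinates) as an n-sided polygon."""
    return [(x + radius * math.cos(2 * math.pi * k / n),
             y + radius * math.sin(2 * math.pi * k / n)) for k in range(n)]

def polygon_area(pts):
    """Shoelace formula for the area of a simple polygon."""
    return 0.5 * abs(sum(x1 * y2 - x2 * y1
                         for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1])))

poly = uncertainty_polygon(451000.0, 4519000.0, 300.0)
area = polygon_area(poly)  # close to pi * 300**2 square metres
```

In a GIS geodatabase the same idea is stored as a polygon feature class linked to the point or line feature it qualifies, as the data sets described here do.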
Energy Technology Data Exchange (ETDEWEB)
Lenshin, A. S., E-mail: lenshinas@phys.vsu.ru; Seredin, P. V.; Kavetskaya, I. V.; Minakov, D. A.; Kashkarov, V. M. [Voronezh State University (Russian Federation)
2017-02-15
The deposition features of the organic dye Rhodamine B on the porous surface of silicon with average pore sizes of 50–100 and 100–250 nm are studied. Features of the composition and optical properties of the obtained systems are studied using infrared and photoluminescence spectroscopy. It is found that Rhodamine-B adsorption on the surface of por-Si with various porosities is preferentially physical. The optimal technological parameters of its deposition are determined.
Animation as a Visual Indicator of Positional Uncertainty in Geographic Information
DEFF Research Database (Denmark)
Kessler, Carsten; Lotstein, Enid
2018-01-01
Effectively communicating the uncertainty that is inherent in any kind of geographic information remains a challenge. This paper investigates the efficacy of animation as a visual variable to represent positional uncertainty in a web mapping context. More specifically, two different kinds...... of animation (a ‘bouncing’ and a ‘rubberband’ effect) have been compared to two static visual variables (symbol size and transparency), as well as different combinations of those variables in an online experiment with 163 participants. The participants’ task was to identify the most and least uncertain point...... visualizations using symbol size and transparency. Somewhat contradictory to those results, the participants showed a clear preference for those static visualizations....
Feature Extraction for Structural Dynamics Model Validation
Energy Technology Data Exchange (ETDEWEB)
Farrar, Charles [Los Alamos National Laboratory; Nishio, Mayuko [Yokohama University; Hemez, Francois [Los Alamos National Laboratory; Stull, Chris [Los Alamos National Laboratory; Park, Gyuhae [Chonnam Univesity; Cornwell, Phil [Rose-Hulman Institute of Technology; Figueiredo, Eloi [Universidade Lusófona; Luscher, D. J. [Los Alamos National Laboratory; Worden, Keith [University of Sheffield
2016-01-13
As structural dynamics becomes increasingly non-modal, stochastic and nonlinear, finite element model-updating technology must adopt the broader notions of model validation and uncertainty quantification. For example, particular re-sampling procedures must be implemented to propagate uncertainty through a forward calculation, and non-modal features must be defined to analyze nonlinear data sets. The latter topic is the focus of this report, but first, some more general comments regarding the concept of model validation will be discussed.
Featureous: A Tool for Feature-Centric Analysis of Java Software
DEFF Research Database (Denmark)
Olszak, Andrzej; Jørgensen, Bo Nørregaard
2010-01-01
Feature-centric comprehension of source code is necessary for incorporating user-requested modifications during software evolution and maintenance. However, such comprehension is difficult to achieve in case of large object-oriented programs due to the size, complexity, and implicit character...... of mappings between features and source code. To support programmers in overcoming these difficulties, we present a feature-centric analysis tool, Featureous. Our tool extends the NetBeans IDE with mechanisms for efficient location of feature implementations in legacy source code, and an extensive analysis...
Propagation of uncertainty and sensitivity analysis in an integral oil-gas plume model
Wang, Shitao
2016-05-27
Polynomial Chaos expansions are used to analyze uncertainties in an integral oil-gas plume model simulating the Deepwater Horizon oil spill. The study focuses on six uncertain input parameters—two entrainment parameters, the gas to oil ratio, two parameters associated with the droplet-size distribution, and the flow rate—that impact the model's estimates of the plume's trap and peel heights, and of its various gas fluxes. The ranges of the uncertain inputs were determined by experimental data. Ensemble calculations were performed to construct polynomial chaos-based surrogates that describe the variations in the outputs due to variations in the uncertain inputs. The surrogates were then used to estimate reliably the statistics of the model outputs, and to perform an analysis of variance. Two experiments were performed to study the impacts of high and low flow rate uncertainties. The analysis shows that in the former case the flow rate is the largest contributor to output uncertainties, whereas in the latter case, with the uncertainty range constrained by a posteriori analyses, the flow rate's contribution becomes negligible. The trap and peel heights uncertainties are then mainly due to uncertainties in the 95% percentile of the droplet size and in the entrainment parameters.
Propagation of uncertainty and sensitivity analysis in an integral oil-gas plume model
Wang, Shitao; Iskandarani, Mohamed; Srinivasan, Ashwanth; Thacker, W. Carlisle; Winokur, Justin; Knio, Omar
2016-01-01
Polynomial Chaos expansions are used to analyze uncertainties in an integral oil-gas plume model simulating the Deepwater Horizon oil spill. The study focuses on six uncertain input parameters—two entrainment parameters, the gas to oil ratio, two parameters associated with the droplet-size distribution, and the flow rate—that impact the model's estimates of the plume's trap and peel heights, and of its various gas fluxes. The ranges of the uncertain inputs were determined by experimental data. Ensemble calculations were performed to construct polynomial chaos-based surrogates that describe the variations in the outputs due to variations in the uncertain inputs. The surrogates were then used to estimate reliably the statistics of the model outputs, and to perform an analysis of variance. Two experiments were performed to study the impacts of high and low flow rate uncertainties. The analysis shows that in the former case the flow rate is the largest contributor to output uncertainties, whereas in the latter case, with the uncertainty range constrained by a posteriori analyses, the flow rate's contribution becomes negligible. The trap and peel heights uncertainties are then mainly due to uncertainties in the 95% percentile of the droplet size and in the entrainment parameters.
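The surrogate construction described above can be illustrated on a toy one-parameter problem: sample an uncertain input, run the "model", fit a Legendre polynomial-chaos expansion, and read the output variance off the coefficients. The quadratic `model` below is a stand-in for the plume model, not part of the study:

```python
import numpy as np

rng = np.random.default_rng(1)

def model(x):
    """Toy stand-in for the plume model, one uniform input on [-1, 1]."""
    return 1.0 + 2.0 * x + 0.5 * x ** 2

# Ensemble of model runs at sampled inputs (non-intrusive PCE)
xs = rng.uniform(-1.0, 1.0, size=200)
ys = model(xs)

# Fit a degree-2 Legendre expansion by least squares; Legendre
# polynomials are orthogonal under the uniform measure on [-1, 1]
coeffs = np.polynomial.legendre.legfit(xs, ys, deg=2)

# For Legendre P_k, E[P_k^2] = 1/(2k+1), so Var = sum_{k>=1} c_k^2/(2k+1)
variance = sum(c ** 2 / (2 * k + 1) for k, c in enumerate(coeffs[1:], start=1))
```

Sobol-type analysis-of-variance indices follow the same pattern in several dimensions: each input's contribution is the sum of squared coefficients of the basis terms involving that input.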
Uncertainty of Water-hammer Loads for Safety Related Systems
Energy Technology Data Exchange (ETDEWEB)
Lee, Seung Chan; Yoon, Duk Joo [Korea Hydro and Nuclear Power Co., LT., Daejeon (Korea, Republic of)
2013-10-15
In this study, the basic methodology is based on the ISO GUM (Guide to the Expression of Uncertainty in Measurement). For a given gas void volume in the discharge piping, the maximum water-hammer pressure is defined by an equation, from which the uncertainty parameter is selected as U_s (the superficial velocity for the specific pipe size and corresponding area). The main uncertainty parameter (U_s) is estimated by a measurement method and by Monte Carlo simulation. The two methods are in good agreement on the extended uncertainty: the extended uncertainties of the measurement and of the Monte Carlo simulation are 1.30 and 1.34, respectively, at the 95% confidence interval, and 1.95 and 1.97, respectively, at the 99% confidence interval. NRC Generic Letter 2008-01 requires nuclear power plant operators to evaluate the possibility of noncondensable gas accumulation in the Emergency Core Cooling System. In particular, gas accumulation can result in a system pressure transient in the pump discharge piping at pump start. Consequently, this evolves into a water-hammer event and force imbalances on the piping segments. In this paper, the MCS (Monte Carlo Simulation) method is introduced for estimating the uncertainty of the water hammer. The aim is to evaluate the uncertainty of the water-hammer estimation results obtained by KHNP CRI in 2013.
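A GUM-style Monte Carlo evaluation can be sketched as follows. This is a hedged illustration: the Joukowsky-type surge relation and the assumed distribution of U_s are placeholders, not the study's actual model or data.

```python
import numpy as np

rng = np.random.default_rng(42)

def peak_pressure(u_s, rho=1000.0, c=1200.0):
    """Placeholder Joukowsky-type surge pressure p = rho * c * U_s (Pa)."""
    return rho * c * u_s

# Assumed spread of the superficial velocity U_s (m/s); illustrative only
u_s = rng.normal(loc=2.0, scale=0.1, size=100_000)
p = peak_pressure(u_s)

p_mean = p.mean()
lo, hi = np.quantile(p, [0.025, 0.975])  # 95% coverage interval
expanded_u = (hi - lo) / 2               # half-width as expanded uncertainty
```

GUM Supplement 1 formalises this recipe: propagate draws from the input distributions through the measurement model and report the coverage interval of the output directly, instead of linearising the model.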
Development of a Dynamic Lidar Uncertainty Framework
Energy Technology Data Exchange (ETDEWEB)
Newman, Jennifer [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Clifton, Andrew [WindForS; Bonin, Timothy [CIRES/NOAA ESRL; Choukulkar, Aditya [CIRES/NOAA ESRL; Brewer, W. Alan [NOAA ESRL; Delgado, Ruben [University of Maryland Baltimore County
2017-08-07
As wind turbine sizes increase and wind energy expands to more complex and remote sites, remote-sensing devices such as lidars are expected to play a key role in wind resource assessment and power performance testing. The switch to remote-sensing devices represents a paradigm shift in the way the wind industry typically obtains and interprets measurement data for wind energy. For example, the measurement techniques and sources of uncertainty for a remote-sensing device are vastly different from those associated with a cup anemometer on a meteorological tower. Current IEC standards for quantifying remote sensing device uncertainty for power performance testing consider uncertainty due to mounting, calibration, and classification of the remote sensing device, among other parameters. Values of the uncertainty are typically given as a function of the mean wind speed measured by a reference device and are generally fixed, leading to climatic uncertainty values that apply to the entire measurement campaign. However, real-world experience and a consideration of the fundamentals of the measurement process have shown that lidar performance is highly dependent on atmospheric conditions, such as wind shear, turbulence, and aerosol content. At present, these conditions are not directly incorporated into the estimated uncertainty of a lidar device. In this presentation, we describe the development of a new dynamic lidar uncertainty framework that adapts to current flow conditions and more accurately represents the actual uncertainty inherent in lidar measurements under different conditions. In this new framework, sources of uncertainty are identified for estimation of the line-of-sight wind speed and reconstruction of the three-dimensional wind field. These sources are then related to physical processes caused by the atmosphere and lidar operating conditions. The framework is applied to lidar data from a field measurement site to assess the ability of the framework to predict
Collaborative framework for PIV uncertainty quantification: the experimental database
International Nuclear Information System (INIS)
Neal, Douglas R; Sciacchitano, Andrea; Scarano, Fulvio; Smith, Barton L
2015-01-01
The uncertainty quantification of particle image velocimetry (PIV) measurements has recently become a topic of great interest as shown by the recent appearance of several different methods within the past few years. These approaches have different working principles, merits and limitations, which have been speculated upon in subsequent studies. This paper reports a unique experiment that has been performed specifically to test the efficacy of PIV uncertainty methods. The case of a rectangular jet, as previously studied by Timmins et al (2012) and Wilson and Smith (2013b), is used. The novel aspect of the experiment is simultaneous velocity measurements using two different time-resolved PIV systems and a hot-wire anemometry (HWA) system. The first PIV system, called the PIV measurement system (‘PIV-MS’), is intended for nominal measurements of which the uncertainty is to be evaluated. It is based on a single camera and features a dynamic velocity range (DVR) representative of typical PIV experiments. The second PIV system, called the ‘PIV-HDR’ (high dynamic range) system, features a significantly higher DVR obtained with a higher digital imaging resolution. The hot-wire is placed in close proximity to the PIV measurement domain. The three measurement systems were carefully set to simultaneously measure the flow velocity at the same time and location. The comparison between the PIV-HDR system and the HWA provides an estimate of the measurement precision of the reference velocity for evaluation of the instantaneous error in the measurement system. The discrepancy between the PIV-MS and the reference data provides the measurement error, which is later used to assess the different uncertainty quantification methods proposed in the literature. A detailed comparison of the uncertainty estimation methods based on the present datasets is presented in a second paper from Sciacchitano et al (2015). Furthermore, this database offers the potential to be used for
International Nuclear Information System (INIS)
Wider, H.
2005-01-01
New Light Water Reactors, whose regular safety systems are complemented by passive safety systems, are ready for the market. The special aspect of passive safety features is that they actuate and function independently of the operator. They contribute significantly to reducing the core damage frequency (CDF), since the operator continues to play an independent role in actuating the regular safety devices based on modern instrumentation and control (I and C). The latter also has passive features regarding the prevention of accidents. Two reactors with significant passive features presently offered on the market are the AP1000 PWR and the SWR 1000 BWR. Their passive features are compared, as are their core damage frequencies (CDF); the latter are also compared with those of a VVER-1000. A further point of discussion about the two passive plants concerns their mitigating features for severe accidents. Regarding core-melt retention, both rely on in-vessel cooling of the melt. The new VVER-1000 reactor, on the other hand, features a validated ex-vessel concept. (author)
A statistical view of uncertainty in expert systems
International Nuclear Information System (INIS)
Spiegelhalter, D.J.
1986-01-01
The constructors of expert systems interpret ''uncertainty'' in a wide sense and have suggested a variety of qualitative and quantitative techniques for handling the concept, such as the theory of ''endorsements,'' fuzzy reasoning, and belief functions. After a brief selective review of procedures that do not adhere to the laws of probability, it is argued that a subjectivist Bayesian view of uncertainty, if flexibly applied, can provide many of the features demanded by expert systems. This claim is illustrated with a number of examples of probabilistic reasoning, and a connection is drawn with statistical work on the graphical representation of multivariate distributions. Possible areas of future research are outlined
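The subjectivist Bayesian view advocated above can be made concrete with a minimal example of probabilistic reasoning in a diagnostic expert system. All numbers are illustrative assumptions, not from the article:

```python
# Bayes' rule: update P(disease) after observing a symptom.
prior = 0.01                 # P(disease), assumed base rate
p_sym_given_d = 0.9          # P(symptom | disease), assumed
p_sym_given_not_d = 0.05     # P(symptom | no disease), assumed

evidence = prior * p_sym_given_d + (1 - prior) * p_sym_given_not_d
posterior = prior * p_sym_given_d / evidence
print(round(posterior, 3))   # 0.154
```

Chaining such updates over many variables is what the graphical-model work mentioned in the abstract makes tractable: conditional independence lets the joint distribution factor along the graph.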
Handling uncertainty and networked structure in robot control
Tamás, Levente
2015-01-01
This book focuses on two challenges posed in robot control by the increasing adoption of robots in the everyday human environment: uncertainty and networked communication. Part I of the book describes learning control to address environmental uncertainty. Part II discusses state estimation, active sensing, and complex scenario perception to tackle sensing uncertainty. Part III completes the book with control of networked robots and multi-robot teams. Each chapter features in-depth technical coverage and case studies highlighting the applicability of the techniques, with real robots or in simulation. Platforms include mobile ground, aerial, and underwater robots, as well as humanoid robots and robot arms. Source code and experimental data are available at http://extras.springer.com. The text gathers contributions from academic and industry experts, and offers a valuable resource for researchers or graduate students in robot control and perception. It also benefits researchers in related areas, such as computer...
Energy Technology Data Exchange (ETDEWEB)
Mesado, C.; Miro, R.; Barrachina, T.; Verdu, G.
2014-07-01
Given the importance of sensitivity and uncertainty calculations in engineering, and especially in the nuclear field, it was decided to present the main features of the new module, called SAMPLER, included in the new version of SCALE 6.2 (currently in beta 3). This module allows the calculation of uncertainty in a wide range of cross sections, neutron parameters, compositions and physical parameters. However, the calculation of sensitivity is not present in the beta 3 release. Even so, this module can be helpful for participants in the benchmark proposed by the Expert Group on Uncertainty Analysis in Modelling (UAM-LWR), as well as for analysts in general. (Author)
Directory of Open Access Journals (Sweden)
Christine E Schmidt
2010-12-01
Full Text Available David Y Fozdar*, Jae Y Lee*, Christine E Schmidt, Shaochen Chen (Departments of Mechanical Engineering, Chemical Engineering, and Biomedical Engineering; Center for Nano Molecular Science and Technology; Texas Materials Institute; Institute of Neuroscience; Microelectronics Research Center, The University of Texas at Austin, Austin, TX, USA; *these authors contributed equally to this work). Purpose: Understanding how surface features influence the establishment and outgrowth of the axon of developing neurons at the single-cell level may aid in designing implantable scaffolds for the regeneration of damaged nerves. Past studies have shown that micropatterned ridge-groove structures not only instigate axon polarization, alignment, and extension, but are also preferred over smooth surfaces and even neurotrophic ligands. Methods: Here, we performed axonal-outgrowth competition assays using a proprietary four-quadrant topography grid to determine the capacity of various micropatterned topographies to act as stimuli sequestering axon extension. Each topography in the grid consisted of an array of microscale (approximately 2 µm) or submicroscale (approximately 300 nm) holes or lines with variable dimensions. Individual rat embryonic hippocampal cells were positioned either between two juxtaposing topographies or at the borders of individual topographies juxtaposing an unpatterned smooth surface, cultured for 24 hours, and analyzed with respect to axonal selection using conventional imaging techniques. Results: Topography was found to influence axon formation and extension relative to the smooth surface, and the distance of neurons relative to topography was found to affect whether the topography could serve as an effective cue. Neurons were also found to prefer submicroscale over microscale features and holes over lines for a given feature size. Conclusion: The results suggest that implementing physical cues of various shapes and sizes on nerve guidance conduits
Concept of uncertainty in relation to the foresight research
Directory of Open Access Journals (Sweden)
Magruk Andrzej
2017-03-01
Full Text Available Uncertainty is one of the most important features of many areas of social and economic life, especially in the forward-looking context. On the one hand, the degree of uncertainty is associated with the objective randomness of the phenomenon, and on the other, with the subjective perspective of a man. Future-oriented perception of human activities is burdened by the incomplete specificity of the analysed phenomena, their volatility, and their lack of continuity. A man is unable to determine, with complete certainty, the further course of these phenomena. According to the author of this article, in order to significantly reduce uncertainty while making strategic decisions in a complex environment, we should focus our actions on the future through the systemic research of foresight. This article attempts to answer the following research questions: (1) What is the relationship of foresight studies, in the system perspective, to studies of uncertainty? (2) What classes of foresight methods enable the study of uncertainty in the process of systemic inquiry into the future? The study employed deductive reasoning based on the analysis and criticism of the literature.
A long run intertemporal model of the oil market with uncertainty and strategic interaction
International Nuclear Information System (INIS)
Lensberg, T.; Rasmussen, H.
1991-06-01
This paper describes a model of long-run price uncertainty in the oil market. The main feature of the model is that the uncertainty about OPEC's price strategy is assumed to be generated not by irrational behavior on the part of OPEC, but by uncertainty about OPEC's size and time preference. The control of OPEC's pricing decision is assumed to shift among a set of OPEC-types over time according to a stochastic process, with each type implementing the price strategy which best fits the interests of its supporters. The model is fully dynamic on the supply side in the sense that all oil producers are assumed to understand the working of OPEC and the oil market; in particular, non-OPEC producers base their investment decisions on rational price expectations. On the demand side, we assume that market insight is on average less developed, and model it by means of a long-run demand curve depending on current prices and a simple lag structure. The long-run demand curve for crude oil is generated by a fairly detailed static long-run equilibrium model of the product markets. Preliminary experience with the model indicates that prices are likely to stay below 20 dollars in the foreseeable future, but that prices around 30 dollars may occur if the present long-run time perspective of OPEC is abandoned in favor of a more short-run one. 26 refs., 4 figs., 7 tabs
A new computational method of a moment-independent uncertainty importance measure
International Nuclear Information System (INIS)
Liu Qiao; Homma, Toshimitsu
2009-01-01
For a risk assessment model, the uncertainty in input parameters is propagated through the model and leads to uncertainty in the model output. The study of how the uncertainty in the output of a model can be apportioned to the uncertainty in the model inputs is the job of sensitivity analysis. Saltelli [Sensitivity analysis for importance assessment. Risk Analysis 2002;22(3):579-90] pointed out that a good sensitivity indicator should be global, quantitative and model-free. Borgonovo [A new uncertainty importance measure. Reliability Engineering and System Safety 2007;92(6):771-84] further extended these three requirements by adding a fourth feature, moment-independence, and proposed a new sensitivity measure, δ_i. It evaluates the influence of the input uncertainty on the entire output distribution without reference to any specific moment of the model output. In this paper, a new computational method for δ_i is proposed. It is conceptually simple and easier to implement. The feasibility of the new method is demonstrated by applying it to two examples.
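A moment-independent measure of this kind can be estimated directly from Monte Carlo samples. The sketch below is a hedged, histogram-based illustration in the spirit of δ_i (half the expected L1 distance between the marginal output density and the density conditional on one input); it is not the computational method proposed in the paper, and the binning scheme, test model, and sample size are arbitrary choices:

```python
import numpy as np

def borgonovo_delta(xi, y, n_xi_bins=20, n_y_bins=50):
    """Histogram-based Monte Carlo estimate of a moment-independent
    importance measure: half the expected L1 distance between the
    marginal output density and the density conditional on X_i."""
    edges = np.histogram_bin_edges(y, bins=n_y_bins)
    f_y, _ = np.histogram(y, bins=edges, density=True)
    width = np.diff(edges)
    # Slice X_i into equal-probability bins to approximate conditioning
    q = np.quantile(xi, np.linspace(0.0, 1.0, n_xi_bins + 1))
    delta = 0.0
    for lo, hi in zip(q[:-1], q[1:]):
        mask = (xi >= lo) & (xi <= hi)
        if not mask.any():
            continue
        f_cond, _ = np.histogram(y[mask], bins=edges, density=True)
        # Half L1 distance between conditional and marginal densities
        delta += 0.5 * np.sum(np.abs(f_cond - f_y) * width) / n_xi_bins
    return delta

rng = np.random.default_rng(0)
x1 = rng.normal(size=100_000)
x2 = rng.normal(size=100_000)
y = 3.0 * x1 + 0.1 * x2        # x1 dominates the output
d1 = borgonovo_delta(x1, y)
d2 = borgonovo_delta(x2, y)
```

In this linear test model the influential input x1 receives a markedly larger score than x2; the quality of the estimate depends strongly on the number of bins and samples.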
A New Framework for Quantifying Lidar Uncertainty
Energy Technology Data Exchange (ETDEWEB)
Newman, Jennifer, F.; Clifton, Andrew; Bonin, Timothy A.; Churchfield, Matthew J.
2017-03-24
As wind turbine sizes increase and wind energy expands to more complex and remote sites, remote sensing devices such as lidars are expected to play a key role in wind resource assessment and power performance testing. The switch to remote sensing devices represents a paradigm shift in the way the wind industry typically obtains and interprets measurement data for wind energy. For example, the measurement techniques and sources of uncertainty for a remote sensing device are vastly different from those associated with a cup anemometer on a meteorological tower. Current IEC standards discuss uncertainty due to mounting, calibration, and classification of the remote sensing device, among other parameters. Values of the uncertainty are typically given as a function of the mean wind speed measured by a reference device. However, real-world experience has shown that lidar performance is highly dependent on atmospheric conditions, such as wind shear, turbulence, and aerosol content. At present, these conditions are not directly incorporated into the estimated uncertainty of a lidar device. In this presentation, we propose the development of a new lidar uncertainty framework that adapts to current flow conditions and more accurately represents the actual uncertainty inherent in lidar measurements under different conditions. In this new framework, sources of uncertainty are identified for estimation of the line-of-sight wind speed and reconstruction of the three-dimensional wind field. These sources are then related to physical processes caused by the atmosphere and lidar operating conditions. The framework is applied to lidar data from an operational wind farm to assess the ability of the framework to predict errors in lidar-measured wind speed.
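The wind-field reconstruction step to which the framework attaches uncertainty can be written as a small least-squares problem. The scan geometry, angle conventions, and beam count below are illustrative assumptions, not the authors' setup:

```python
import numpy as np

def reconstruct_wind(azimuth_deg, elevation_deg, v_los):
    """Least-squares estimate of (u, v, w) from lidar line-of-sight speeds.
    Convention assumed here: azimuth measured from north, elevation from
    the horizontal; v_los positive away from the instrument."""
    az = np.radians(azimuth_deg)
    el = np.radians(elevation_deg)
    # Each row projects the wind vector onto one beam direction
    A = np.column_stack([
        np.sin(az) * np.cos(el),   # weight on u (east component)
        np.cos(az) * np.cos(el),   # weight on v (north component)
        np.sin(el),                # weight on w (vertical component)
    ])
    uvw, *_ = np.linalg.lstsq(A, v_los, rcond=None)
    return A, uvw

# Synthetic 5-beam scan sampling a known wind vector
az = np.array([0.0, 90.0, 180.0, 270.0, 0.0])
el = np.array([75.0, 75.0, 75.0, 75.0, 90.0])
true_uvw = np.array([8.0, 3.0, 0.2])
A, _ = reconstruct_wind(az, el, np.zeros(5))   # geometry only
v_los = A @ true_uvw                           # noise-free radial speeds
_, uvw = reconstruct_wind(az, el, v_los)
```

Noise on v_los propagates through the pseudo-inverse of A, which is how beam geometry and atmospheric conditions shape the uncertainty of each reconstructed wind component.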
Uncertainty quantification in lattice QCD calculations for nuclear physics
Energy Technology Data Exchange (ETDEWEB)
Beane, Silas R. [Univ. of Washington, Seattle, WA (United States); Detmold, William [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States); Orginos, Kostas [College of William and Mary, Williamsburg, VA (United States); Thomas Jefferson National Accelerator Facility (TJNAF), Newport News, VA (United States); Savage, Martin J. [Institute for Nuclear Theory, Seattle, WA (United States)
2015-02-05
The numerical technique of Lattice QCD holds the promise of connecting the nuclear forces, nuclei, the spectrum and structure of hadrons, and the properties of matter under extreme conditions with the underlying theory of the strong interactions, quantum chromodynamics. A distinguishing, and thus far unique, feature of this formulation is that all of the associated uncertainties, both statistical and systematic, can, in principle, be systematically reduced to any desired precision with sufficient computational and human resources. Accordingly, we review the sources of uncertainty inherent in Lattice QCD calculations for nuclear physics, and discuss how each is quantified in current efforts.
Generalization of uncertainty relation for quantum and stochastic systems
Koide, T.; Kodama, T.
2018-06-01
The generalized uncertainty relation applicable to quantum and stochastic systems is derived within the stochastic variational method. This relation not only reproduces the well-known inequality in quantum mechanics but also is applicable to the Gross-Pitaevskii equation and the Navier-Stokes-Fourier equation, showing that the finite minimum uncertainty between the position and the momentum is not an inherent property of quantum mechanics but a common feature of stochastic systems. We further discuss the possible implication of the present study in discussing the application of the hydrodynamic picture to microscopic systems, like relativistic heavy-ion collisions.
International Nuclear Information System (INIS)
Allodji, Rodrigue S; Leuraud, Klervi; Laurier, Dominique; Bernhard, Sylvain; Henry, Stéphane; Bénichou, Jacques
2012-01-01
The reliability of exposure data directly affects the reliability of the risk estimates derived from epidemiological studies. Measurement uncertainty must be known and understood before it can be corrected. The literature on occupational exposure to radon (222Rn) and its decay products reveals only a few epidemiological studies in which uncertainty has been accounted for explicitly. This work examined the sources, nature, distribution and magnitude of uncertainty of the exposure of French uranium miners to radon (222Rn) and its decay products. We estimated the total size of uncertainty for this exposure with the root sum square (RSS) method, which may be an alternative when repeated measures are not available. As a result, we identified six main sources of uncertainty. The total size of the uncertainty decreased from about 47% in the period 1956–1974 to 10% after 1982, illustrating the improvement in the radiological monitoring system over time.
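The root sum square (RSS) combination used in the study is simple to reproduce. The component labels and magnitudes below are invented placeholders, not the six sources or values identified by the authors:

```python
import math

# Hypothetical relative uncertainty components (%) for one monitoring period
components = {
    "instrument_response": 5.0,
    "calibration": 4.0,
    "exposure_time_records": 3.0,
    "workplace_position": 6.0,
    "equilibrium_factor": 4.0,
    "data_transcription": 2.0,
}

# RSS assumes the components are independent, so their variances add
total = math.sqrt(sum(u ** 2 for u in components.values()))
print(f"combined relative uncertainty: {total:.1f}%")  # prints 10.3%
```

Because the components add in quadrature, the combined value is dominated by the largest sources, which is why reducing the biggest contributors first pays off most.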
Czech Academy of Sciences Publication Activity Database
Somol, Petr; Novovičová, Jana
2010-01-01
Roč. 32, č. 11 (2010), s. 1921-1939 ISSN 0162-8828 R&D Projects: GA MŠk 1M0572; GA ČR GA102/08/0593; GA ČR GA102/07/1594 Grant - others:GA MŠk(CZ) 2C06019 Institutional research plan: CEZ:AV0Z10750506 Keywords : feature selection * feature stability * stability measures * similarity measures * sequential search * individual ranking * feature subset-size optimization * high dimensionality * small sample size Subject RIV: BD - Theory of Information Impact factor: 5.027, year: 2010 http://library.utia.cas.cz/separaty/2010/RO/somol-0348726.pdf
Quantifying the uncertainty in heritability.
Furlotte, Nicholas A; Heckerman, David; Lippert, Christoph
2014-05-01
The use of mixed models to determine narrow-sense heritability and related quantities such as SNP heritability has received much recent attention. Less attention has been paid to the inherent variability in these estimates. One approach to quantifying variability in estimates of heritability is the frequentist approach, in which heritability is estimated using maximum likelihood and its variance is quantified through an asymptotic normal approximation. An alternative approach is to quantify the uncertainty in heritability through its Bayesian posterior distribution. In this paper, we develop the latter approach, make it computationally efficient and compare it to the frequentist approach. We show theoretically that, for a sufficiently large sample size and intermediate values of heritability, the two approaches provide similar results. Using the Atherosclerosis Risk in Communities cohort, we show empirically that the two approaches can give different results and that the variance/uncertainty can remain large.
Quantum Action Principle with Generalized Uncertainty Principle
Gu, Jie
2013-01-01
One of the common features in all promising candidates of quantum gravity is the existence of a minimal length scale, which naturally emerges with a generalized uncertainty principle, or equivalently a modified commutation relation. Schwinger's quantum action principle was modified to incorporate this modification and was applied to the calculation of the kernel of a free particle, partly recovering the result previously obtained using the path integral.
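The modified commutation relation alluded to here is usually written, in its simplest one-parameter form, as follows (conventions and the placement of the deformation parameter β vary between papers):

```latex
[\hat{x}, \hat{p}] = i\hbar\left(1 + \beta \hat{p}^{2}\right)
\quad\Longrightarrow\quad
\Delta x\, \Delta p \;\ge\; \frac{\hbar}{2}\left(1 + \beta (\Delta p)^{2} + \beta \langle \hat{p} \rangle^{2}\right)
```

Minimizing the right-hand side over Δp (with ⟨p̂⟩ = 0) yields a smallest attainable position uncertainty Δx_min = ħ√β, the minimal length scale referred to in the abstract.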
Performance Assessment Uncertainty Analysis for Japan's HLW Program Feasibility Study (H12)
International Nuclear Information System (INIS)
BABA, T.; ISHIGURO, K.; ISHIHARA, Y.; SAWADA, A.; UMEKI, H.; WAKASUGI, K.; WEBB, ERIK K.
1999-01-01
Most HLW programs in the world recognize that any estimate of long-term radiological performance must be couched in terms of the uncertainties derived from natural variation, changes through time and lack of knowledge about the essential processes. The Japan Nuclear Cycle Development Institute followed a relatively standard procedure to address two major categories of uncertainty. First, a FEatures, Events and Processes (FEPs) listing, screening and grouping activity was pursued in order to define the range of uncertainty in system processes as well as possible variations in engineering design. A reference case and many alternative cases representing various groups of FEPs were defined, and individual numerical simulations were performed for each to quantify the range of conceptual uncertainty. Second, parameter distributions were developed for the reference case to represent the uncertainty in the strength of these processes, the sequencing of activities and geometric variations. Both point estimates using high and low values for individual parameters and a probabilistic analysis were performed to estimate parameter uncertainty. A brief description of the conceptual model uncertainty analysis is presented. This paper focuses on presenting the details of the probabilistic parameter uncertainty assessment.
Cheng, Nai-Ming; Fang, Yu-Hua Dean; Lee, Li-yu; Chang, Joseph Tung-Chieh; Tsan, Din-Li; Ng, Shu-Hang; Wang, Hung-Ming; Liao, Chun-Ta; Yang, Lan-Yan; Hsu, Ching-Han; Yen, Tzu-Chen
2015-03-01
The question as to whether the regional textural features extracted from PET images predict prognosis in oropharyngeal squamous cell carcinoma (OPSCC) remains open. In this study, we investigated the prognostic impact of regional heterogeneity in patients with T3/T4 OPSCC. We retrospectively reviewed the records of 88 patients with T3 or T4 OPSCC who had completed primary therapy. Progression-free survival (PFS) and disease-specific survival (DSS) were the main outcome measures. In an exploratory analysis, a standardized uptake value of 2.5 (SUV 2.5) was taken as the cut-off value for the detection of tumour boundaries. A fixed threshold at 42 % of the maximum SUV (SUVmax 42 %) and an adaptive threshold method were then used for validation. Regional textural features were extracted from pretreatment (18)F-FDG PET/CT images using the grey-level run length encoding method and grey-level size zone matrix. The prognostic significance of PET textural features was examined using receiver operating characteristic (ROC) curves and Cox regression analysis. Zone-size nonuniformity (ZSNU) was identified as an independent predictor of PFS and DSS. Its prognostic impact was confirmed using both the SUVmax 42 % and the adaptive threshold segmentation methods. Based on (1) total lesion glycolysis, (2) uniformity (a local scale texture parameter), and (3) ZSNU, we devised a prognostic stratification system that allowed the identification of four distinct risk groups. The model combining the three prognostic parameters showed a higher predictive value than each variable alone. ZSNU is an independent predictor of outcome in patients with advanced T-stage OPSCC, and may improve their prognostic stratification.
Lindley, Dennis V
2013-01-01
Praise for the First Edition: "...a reference for everyone who is interested in knowing and handling uncertainty." - Journal of Applied Statistics. The critically acclaimed First Edition of Understanding Uncertainty provided a study of uncertainty addressed to scholars in all fields, showing that uncertainty could be measured by probability, and that probability obeyed three basic rules that enabled uncertainty to be handled sensibly in everyday life. These ideas were extended to embrace the scientific method and to show how decisions, containing an uncertain element, could be rationally made.
Modelling geological uncertainty for mine planning
Energy Technology Data Exchange (ETDEWEB)
Mitchell, M
1980-07-01
Geosimplan is an operational gaming approach used in testing a proposed mining strategy against uncertainty in geological disturbance. Geoplan is a technique which facilitates the preparation of summary analyses to give an impression of size, distribution and quality of reserves, and to assist in calculation of year by year output estimates. Geoplan concentrates on variations in seam properties and the interaction between geological information and marketing and output requirements.
Rational consensus under uncertainty: Expert judgment in the EC-USNRC uncertainty study
International Nuclear Information System (INIS)
Cooke, R.; Kraan, B.; Goossens, L.
1999-01-01
Simply choosing a maximally feasible pool of experts and combining their views by some method of equal representation might achieve a form of political consensus among the experts involved, but will not achieve rational consensus. If expert viewpoints are related to the institutions at which the experts are employed, then the numerical representation of viewpoints in the pool may be, and/or may be perceived to be, influenced by the size of the interests funding the institutes. We collect a number of conclusions regarding the use of structured expert judgment. 1. Experts' subjective uncertainties may be used to advance rational consensus in the face of large uncertainties, in so far as the necessary conditions for rational consensus are satisfied. 2. Empirical control of experts' subjective uncertainties is possible. 3. Experts' performance as subjective probability assessors is not uniform; there are significant differences in performance. 4. Experts as a group may show poor performance. 5. A structured combination of expert judgment may show satisfactory performance, even though the experts individually perform poorly. 6. The performance-based combination generally outperforms the equal-weight combination. 7. The combination of experts' subjective probabilities, according to the schemes discussed here, generally has wider 90% central confidence intervals than the experts individually, particularly in the case of the equal-weight combination. We note that poor performance as a subjective probability assessor does not indicate a lack of substantive expert knowledge. Rather, it indicates unfamiliarity with quantifying subjective uncertainty in terms of subjective probability distributions. Experts were provided with training in subjective probability assessment, but of course their formal training does not (yet) prepare them for such tasks.
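Conclusion 6 can be illustrated with a toy combination scheme. The sketch below averages expert quantiles under equal versus performance-based weights; this quantile averaging is a simplification of the density pooling used in structured expert judgment, and the experts, quantiles, and scores are all invented:

```python
# Three experts' (5%, 50%, 95%) quantile assessments for one uncertain quantity
experts = {
    "A": (2.0, 5.0, 9.0),
    "B": (4.0, 6.0, 8.0),
    "C": (1.0, 10.0, 30.0),   # poorly performing expert with a very wide interval
}
# Hypothetical performance scores (in Cooke-style schemes these come from
# calibration questions; here they are simply assumed numbers)
scores = {"A": 0.60, "B": 0.35, "C": 0.05}

def pool(weights):
    """Weighted average of each quantile across experts (a simplification)."""
    return tuple(
        sum(w * q[i] for w, q in zip(weights, experts.values()))
        for i in range(3)
    )

equal = pool([1.0 / 3.0] * 3)
s = sum(scores.values())
perf = pool([scores[k] / s for k in experts])

equal_width = equal[2] - equal[0]   # 90% interval width, equal weights
perf_width = perf[2] - perf[0]      # width after down-weighting expert C
```

Down-weighting the poorly performing expert narrows the pooled 90% interval, mirroring the qualitative finding that the performance-based combination outperforms the equal-weight one.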
Uncertainties in bone (knee region) in vivo monitoring
International Nuclear Information System (INIS)
Venturini, Luzia; Sordi, Gian-Maria A.A.; Vanin, Vito R.
2008-01-01
Full text: The bones in the knee region are among the possible choices to estimate radionuclide deposit in the skeleton. Finding the optimum measurement conditions requires the determination of the uncertainties and their relationship with the detector arrangement in the available space, variations in bone anatomy, and non-uniformity in radionuclide deposit. In this work, geometric models for the bones in the knee region and Monte Carlo simulation of the measurement efficiency were used to estimate uncertainties in the in vivo monitoring in the 46–186 keV gamma-ray energy range. The bone models are based on geometrical figures such as ellipsoids and cylinders and have already been published elsewhere. Their parameters are diameters, axis orientations, lengths, and relative positions determined from a survey on real pieces. A 1.70 m tall person was used as a reference; bone model parameters for 1.50 m and 1.90 m tall persons were deduced from the previously published data, to evaluate the uncertainties related to bone size. The simulated experimental arrangement consisted of four HPGe detectors measuring radiation from the knees in the bed geometry; uncertainties from radionuclide deposit distribution, compact bone density and bone size were also included. The detectors were placed at 22 cm height from the bed and it was assumed that the part of the bones seen by the detectors consists in the first 25 cm from the patella, both in feet and hip directions. The cover tissue was not taken as an uncertainty source, but its effect on the final detection efficiency was taken into account. The calculations consider the main interaction types between radiation and the detector crystal, and the radiation attenuation in the bones and the layers of materials between bones and detectors. It was found that the uncertainties depend strongly on the hypotheses made. For example, for 46 keV gamma-rays, a 1.70 m tall person with normal bone density and radionuclide deposit in the
Probabilistic Radiological Performance Assessment Modeling and Uncertainty
Tauxe, J.
2004-12-01
A generic probabilistic radiological Performance Assessment (PA) model is presented. The model, built using the GoldSim systems simulation software platform, concerns contaminant transport and dose estimation in support of decision making with uncertainty. Both the U.S. Nuclear Regulatory Commission (NRC) and the U.S. Department of Energy (DOE) require assessments of potential future risk to human receptors of disposal of LLW. Commercially operated LLW disposal facilities are licensed by the NRC (or agreement states), and the DOE operates such facilities for disposal of DOE-generated LLW. The type of PA model presented is probabilistic in nature, and hence reflects the current state of knowledge about the site by using probability distributions to capture what is expected (central tendency or average) and the uncertainty (e.g., standard deviation) associated with input parameters, and propagating through the model to arrive at output distributions that reflect expected performance and the overall uncertainty in the system. Estimates of contaminant release rates, concentrations in environmental media, and resulting doses to human receptors well into the future are made by running the model in Monte Carlo fashion, with each realization representing a possible combination of input parameter values. Statistical summaries of the results can be compared to regulatory performance objectives, and decision makers are better informed of the inherently uncertain aspects of the model which supports their decision-making. While this information may make some regulators uncomfortable, they must realize that uncertainties which were hidden in a deterministic analysis are revealed in a probabilistic analysis, and the chance of making a correct decision is now known rather than hoped for. The model includes many typical features and processes that would be part of a PA, but is entirely fictitious. This does not represent any particular site and is meant to be a generic example. A
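A minimal Monte Carlo propagation of the kind described, with invented distributions and a toy release model rather than any real PA structure, looks like this:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 10_000  # realizations

# Illustrative input distributions (all values hypothetical, not site data)
kd = rng.lognormal(mean=np.log(10.0), sigma=0.5, size=N)   # sorption, L/kg
infil = rng.normal(0.3, 0.05, size=N).clip(min=0.05)       # infiltration, m/yr
inventory = rng.triangular(1e9, 5e9, 1e10, size=N)         # inventory, Bq

# Toy release-and-dose chain: stronger sorption -> slower release
dcf = 1e-8          # dose conversion factor, mSv/Bq (assumed)
dilution = 1e4      # aquifer dilution factor (assumed)
release = inventory * infil / (1.0 + kd)
dose = release * dcf / dilution                            # mSv/yr

# Output distribution summarized against a performance objective
mean_dose = dose.mean()
p95_dose = np.percentile(dose, 95)
prob_exceed = (dose > 0.25).mean()   # fraction above a 0.25 mSv/yr objective
```

Each realization is one internally consistent set of parameter values; the resulting distribution, rather than a single deterministic number, is what gets compared with the regulatory objective.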
Investment in different sized SMRs: Economic evaluation of stochastic scenarios by INCAS code
International Nuclear Information System (INIS)
Barenghi, S.; Boarin, S.; Ricotti, M. E.
2012-01-01
Small Modular LWR concepts are being developed and proposed to investors worldwide. They capitalize on the operating track record of GEN II LWRs, while introducing innovative design enhancements allowed by the smaller size, and additional benefits from the higher degree of modularization and from the deployment of multiple units on the same site (i.e., the 'Economy of Multiple' paradigm). Nevertheless, Small Modular Reactors pay for a dis-economy of scale that represents a relevant penalty on a capital-intensive investment. Investors in the nuclear power generation industry face a very high financial risk, due to the high capital commitment and exceptionally long pay-back time. Investment risk arises from the uncertainty that affects scenario conditions over such a long time horizon. Risk aversion is increased by the current adverse conditions of financial markets and the general economic downturn, as is the case nowadays. This work investigates both the profitability and the risk of alternative investments in a single Large Reactor or in multiple SMRs of different sizes, drawing information from the stochastic distribution of the project's Internal Rate of Return, with the multiple SMRs deployed on a single site and a total installed power equivalent to that of the single LR. Uncertain scenario conditions and stochastic input assumptions are included in the analysis, representing investment uncertainty and risk. Results show that, despite the combination of a much larger number of stochastic variables in SMR fleets, the uncertainty of project profitability is not increased compared to the LR: SMRs have features able to smooth the IRR variance and control investment risk. Despite the dis-economy of scale, SMRs represent a limited capital commitment and a scalable investment option that meets investors' interest, even in developed and mature markets, which are the traditional marketplace for LRs. (authors)
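The IRR distribution at the heart of this comparison can be produced with a few lines of simulation. The cash-flow profile, cost figures, and plant lifetime below are invented for illustration and have nothing to do with the INCAS code's assumptions:

```python
import numpy as np

def irr(cashflows, lo=-0.9, hi=1.0, tol=1e-7):
    """Internal rate of return by bisection on the NPV sign change."""
    def npv(r):
        return sum(cf / (1.0 + r) ** t for t, cf in enumerate(cashflows))
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if npv(lo) * npv(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

rng = np.random.default_rng(1)
irrs = []
for _ in range(500):                      # one IRR per stochastic scenario
    capex = rng.normal(4.0, 0.6)          # overnight cost, G$ (invented)
    margin = rng.normal(0.45, 0.08)       # yearly net cash flow, G$ (invented)
    cashflows = [-capex] + [margin] * 40  # 40 operating years
    irrs.append(irr(cashflows))
irrs = np.array(irrs)

# The spread of this distribution, not just its mean, measures investment risk
irr_mean, irr_std = irrs.mean(), irrs.std()
```

Comparing such distributions for one large plant versus a staggered fleet of smaller units is, in spirit, the profitability-versus-risk comparison the abstract describes.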
Directory of Open Access Journals (Sweden)
Dieisson Pivoto
2016-04-01
Full Text Available ABSTRACT: The study aimed to (i) quantify the measurement uncertainty in the physical tests of rice and beans for a hypothetical defect, (ii) verify whether homogenization and sample reduction in the physical classification tests of rice and beans are effective in reducing the measurement uncertainty of the process, and (iii) determine whether increasing the size of the bean sample increases accuracy and significantly reduces measurement uncertainty. Hypothetical defects in rice and beans with different damage levels were simulated according to the testing methodology determined by the Normative Ruling for each product. Homogenization and sample reduction in the physical classification of rice and beans are not effective, transferring a high measurement uncertainty to the final test result. The sample size indicated by the Normative Ruling did not allow appropriate homogenization and should be increased.
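The effect of sample size on measurement uncertainty has a simple statistical core: for a defect present at true proportion p, a sample of n grains estimates p with standard uncertainty sqrt(p(1-p)/n). The defect level, sample masses, and grains-per-gram figure below are assumptions for illustration, not the Normative Ruling values:

```python
import math

p = 0.04               # hypothetical 4% defect level
grains_per_gram = 40   # rough order of magnitude for rice (assumed)

us = []
for sample_mass_g in (100, 250, 1000):
    n = sample_mass_g * grains_per_gram
    u = math.sqrt(p * (1.0 - p) / n)   # binomial standard uncertainty
    us.append(u)
    print(f"{sample_mass_g:>5} g sample: u = {100.0 * u:.2f} percentage points")
```

Quadrupling the sample mass halves the uncertainty, which is the quantitative reason for recommending a larger sample size.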
Dissociation between Features and Feature Relations in Infant Memory: Effects of Memory Load.
Bhatt, Ramesh S.; Rovee-Collier, Carolyn
1997-01-01
Four experiments examined effects of the number of features and feature relations on learning and long-term memory in 3-month olds. Findings suggested that memory load size selectively constrained infants' long-term memory for relational information, suggesting that in infants, features and relations are psychologically distinct and that memory…
An uncertainty inventory demonstration - a primary step in uncertainty quantification
Energy Technology Data Exchange (ETDEWEB)
Langenbrunner, James R. [Los Alamos National Laboratory; Booker, Jane M [Los Alamos National Laboratory; Hemez, Francois M [Los Alamos National Laboratory; Salazar, Issac F [Los Alamos National Laboratory; Ross, Timothy J [UNM
2009-01-01
Tools, methods, and theories for assessing and quantifying uncertainties vary by application. Uncertainty quantification tasks have unique desiderata and circumstances. To realistically assess uncertainty requires the engineer/scientist to specify mathematical models, the physical phenomena of interest, and the theory or framework for assessments. For example, Probabilistic Risk Assessment (PRA) specifically identifies uncertainties using probability theory, and therefore PRAs lack formal procedures for quantifying uncertainties that are not probabilistic. The Phenomena Identification and Ranking Technique (PIRT) proceeds by ranking phenomena using scoring criteria that result in linguistic descriptors, such as importance ranked with the words 'High/Medium/Low.' The use of words allows PIRT to be flexible, but the analysis may then be difficult to combine with other uncertainty theories. We propose that a necessary step for the development of a procedure or protocol for uncertainty quantification (UQ) is the application of an Uncertainty Inventory. An Uncertainty Inventory should be considered and performed in the earliest stages of UQ.
International Nuclear Information System (INIS)
Vinai, P.
2007-10-01
associated with various individual points over the state space. By applying a novel multi-dimensional clustering technique, based on the non-parametric statistical Kruskal-Wallis test, it has been possible to achieve a partitioning of the state space into regions differing in terms of the quality of the physical model's predictions. The second step has been the quantification of the model's uncertainty, for each of the identified state space regions, by applying a probability density function (pdf) estimator. This is a kernel-type estimator, modelled on a universal orthogonal series estimator, such that its behaviour takes advantage of the good features of both estimator types and yields reasonable pdfs, even with samples of small size and not very compact distributions. The pdfs provide a reliable basis for sampling 'error values' for use in Monte-Carlo-type uncertainty propagation studies, aimed at quantifying the impact of the physical model's uncertainty on the code's output variables of interest. The effectiveness of the developed methodology was demonstrated by applying it to the quantification of the uncertainty related to thermal-hydraulic (drift-flux) models implemented in the best-estimate safety analysis code RETRAN-3D. This has been done via the usage of a wide database of void-fraction experiments for saturated and sub-cooled conditions. Appropriate pdfs were generated for quantification of the physical model's uncertainty in a 2-dimensional (pressure/mass-flux) state space, partitioned into 3 separate regions. The impact of the RETRAN-3D drift-flux model uncertainties has been assessed at three different levels of the code's application: (a) Achilles Experiment No. 2, a separate effect experiment not included in the original assessment database; (b) Omega Rod Bundle Test No. 9, an integral experiment simulating a PWR loss-of-coolant accident (LOCA); and (c) the Peach Bottom turbine trip test, an NPP (BWR) plant transient in which the void feedback mechanism plays
Recognizing and responding to uncertainty: a grounded theory of nurses' uncertainty.
Cranley, Lisa A; Doran, Diane M; Tourangeau, Ann E; Kushniruk, Andre; Nagle, Lynn
2012-08-01
There has been little research to date exploring nurses' uncertainty in their practice. Understanding nurses' uncertainty is important because it has potential implications for how care is delivered. The purpose of this study is to develop a substantive theory to explain how staff nurses experience and respond to uncertainty in their practice. Between 2006 and 2008, a grounded theory study was conducted that included in-depth semi-structured interviews. Fourteen staff nurses working in adult medical-surgical intensive care units at two teaching hospitals in Ontario, Canada, participated in the study. The theory recognizing and responding to uncertainty characterizes the processes through which nurses' uncertainty manifested and how it was managed. Recognizing uncertainty involved the processes of assessing, reflecting, questioning, and/or being unable to predict aspects of the patient situation. Nurses' responses to uncertainty highlighted the cognitive-affective strategies used to manage uncertainty. Study findings highlight the importance of acknowledging uncertainty and having collegial support to manage uncertainty. The theory adds to our understanding the processes involved in recognizing uncertainty, strategies and outcomes of managing uncertainty, and influencing factors. Tailored nursing education programs should be developed to assist nurses in developing skills in articulating and managing their uncertainty. Further research is needed to extend, test and refine the theory of recognizing and responding to uncertainty to develop strategies for managing uncertainty. This theory advances the nursing perspective of uncertainty in clinical practice. The theory is relevant to nurses who are faced with uncertainty and complex clinical decisions, to managers who support nurses in their clinical decision-making, and to researchers who investigate ways to improve decision-making and care delivery. ©2012 Sigma Theta Tau International.
Evaluating variability and uncertainty in radiological impact assessment using SYMBIOSE
International Nuclear Information System (INIS)
Simon-Cornu, M.; Beaugelin-Seiller, K.; Boyer, P.; Calmon, P.; Garcia-Sanchez, L.; Mourlon, C.; Nicoulaud, V.; Sy, M.; Gonze, M.A.
2015-01-01
SYMBIOSE is a modelling platform that accounts for variability and uncertainty in radiological impact assessments, when simulating the environmental fate of radionuclides and assessing doses to human populations. The default database of SYMBIOSE is partly based on parameter values that are summarized within International Atomic Energy Agency (IAEA) documents. To characterize uncertainty in the transfer parameters, 331 Probability Distribution Functions (PDFs) were defined from the summary statistics provided within the IAEA documents (i.e. sample size, minimum and maximum values, arithmetic and geometric means, standard and geometric standard deviations) and are made available as spreadsheet files. The methods used to derive the PDFs from the summary statistics alone, without complete data sets, are presented. Then, a simple case study illustrates the use of the database in a second-order Monte Carlo calculation, separating parametric uncertainty and inter-individual variability. - Highlights: • Parametric uncertainty in radioecology was derived from IAEA documents. • 331 Probability Distribution Functions were defined for transfer parameters. • Parametric uncertainty and inter-individual variability were propagated
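The second-order Monte Carlo calculation described above can be sketched in a few lines: an outer loop draws the uncertain transfer parameters, an inner loop draws inter-individual variability, and only the outer spread is read off as parametric uncertainty. The toy dose model, the lognormal PDFs and all numerical values below are illustrative assumptions, not values from the SYMBIOSE database:

```python
import random
import statistics

random.seed(42)

def dose(kd, intake):
    # toy transfer model (assumed): activity damped by the soil Kd, scaled by intake
    return 1000.0 / (1.0 + kd) * intake * 1e-3

outer_means = []
for _ in range(200):                              # outer loop: parametric uncertainty
    kd = random.lognormvariate(2.0, 0.5)          # one draw of an uncertain transfer parameter
    inner = [dose(kd, random.lognormvariate(0.0, 0.3))   # inner loop: inter-individual
             for _ in range(100)]                        # variability in intake
    outer_means.append(statistics.mean(inner))

# the spread of outer_means reflects parametric uncertainty alone
cuts = statistics.quantiles(outer_means, n=20)
lo, hi = cuts[0], cuts[-1]                        # approximate 5th and 95th percentiles
```

Separating the two loops is what distinguishes a second-order calculation from pooling all sources of randomness into a single Monte Carlo loop.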
Impact of advanced fuel cycles on uncertainty associated with geologic repositories
International Nuclear Information System (INIS)
Rechard, Rob P.; Lee, Joon; Sutton, Mark; Greenberg, Harris R.; Robinson, Bruce A.; Nutt, W. Mark
2013-01-01
This paper provides a qualitative evaluation of the impact of advanced fuel cycles, particularly partition and transmutation of actinides, on the uncertainty associated with geologic disposal. Based on the discussion, advanced fuel cycles will not materially alter (1) the repository performance, (2) the spread in dose results around the mean, (3) the modeling effort to include significant features, events, and processes in the performance assessment, or (4) the characterization of uncertainty associated with a geologic disposal system in the regulatory environment of the United States. (authors)
Uncertainty and validation. Effect of model complexity on uncertainty estimates
Energy Technology Data Exchange (ETDEWEB)
Elert, M. [Kemakta Konsult AB, Stockholm (Sweden)] [ed.
1996-09-01
zone concentration. Models considering faster net downward flow in the upper part of the root zone predict a more rapid decline in root zone concentration than models that assume a constant infiltration throughout the soil column. A sensitivity analysis performed on two of the models shows that the important parameters are the effective precipitation, the root water uptake and the soil K{sub d}-values. For the advection-dispersion model, the dispersion length is also important for the maximum flux to the groundwater. The amount of dispersion in radionuclide transport is of importance for the release to groundwater. For the box models, an inherent dispersion is obtained by the assumption of instantaneous mixing in the boxes. The degree of dispersion in the calculation will be a function of the size of the boxes. It is therefore important that the division of the soil column is made with care in order to obtain the intended values. For many models the uncertainty calculations give very skewed distributions for the flux to the groundwater. In some cases the mean of the stochastic calculation can be several orders of magnitude higher than the value from the deterministic calculations. In relation to the objectives set up for this study it can be concluded that: The analysis of the relationship between uncertainty and model complexity proved to be a difficult task. For the studied scenario, the uncertainty in the model predictions does not have a simple relationship with the complexity of the models used. However, a complete analysis could not be performed since uncertainty results were not available for the full range of models; furthermore, the uncertainty analyses were not always carried out in a consistent way. The predicted uncertainty associated with the concentration in the root zone does not show very much variation between the modelling approaches.
For the predictions of the flux to groundwater, the simple models and the more complex ones gave very different results for
Modelling ecosystem service flows under uncertainty with stochastic SPAN
Johnson, Gary W.; Snapp, Robert R.; Villa, Ferdinando; Bagstad, Kenneth J.
2012-01-01
Ecosystem service models are increasingly in demand for decision making. However, the data required to run these models are often patchy, missing, outdated, or untrustworthy. Further, communication of data and model uncertainty to decision makers is often either absent or unintuitive. In this work, we introduce a systematic approach to addressing both the data gap and the difficulty in communicating uncertainty through a stochastic adaptation of the Service Path Attribution Networks (SPAN) framework. The SPAN formalism assesses ecosystem services through a set of up to 16 maps, which characterize the services in a study area in terms of flow pathways between ecosystems and human beneficiaries. Although the SPAN algorithms were originally defined deterministically, we present them here in a stochastic framework which combines probabilistic input data with a stochastic transport model in order to generate probabilistic spatial outputs. This enables a novel feature among ecosystem service models: the ability to spatially visualize uncertainty in the model results. The stochastic SPAN model can analyze areas where data limitations are prohibitive for deterministic models. Greater uncertainty in the model inputs (including missing data) should lead to greater uncertainty expressed in the model’s output distributions. By using Bayesian belief networks to fill data gaps and expert-provided trust assignments to augment untrustworthy or outdated information, we can account for uncertainty in input data, producing a model that is still able to run and provide information where strictly deterministic models could not. Taken together, these attributes enable more robust and intuitive modelling of ecosystem services under uncertainty.
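A minimal sketch of the stochastic idea, with a toy one-dimensional landscape standing in for the SPAN flow maps: probabilistic inputs are propagated through a deterministic transport kernel by Monte Carlo, yielding per-cell means and standard deviations that could be mapped to visualize uncertainty spatially. The flow model and the input distributions are invented for illustration:

```python
import random

random.seed(0)
W = 5  # toy landscape: five cells, with the service source at cell 0

def transport(source_strength, decay):
    """Deterministic flow model: service flux decays with distance from the source."""
    return [source_strength * (1.0 - decay) ** d for d in range(W)]

# probabilistic inputs (assumed distributions): uncertain source strength and decay
runs = [transport(random.gauss(10.0, 2.0), random.uniform(0.2, 0.4))
        for _ in range(1000)]

# per-cell output distributions -> mappable mean and uncertainty surfaces
mean = [sum(r[i] for r in runs) / len(runs) for i in range(W)]
sd = [(sum((r[i] - mean[i]) ** 2 for r in runs) / len(runs)) ** 0.5 for i in range(W)]
```

The per-cell `sd` surface is what gives the spatial visualization of uncertainty that a purely deterministic run cannot provide.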
Energy Technology Data Exchange (ETDEWEB)
Forst, Michael
2012-11-01
The shakeout in the solar cell and module industry is in full swing. While the number of companies and production locations shutting down in the Western world is increasing, the capacity expansion in the Far East seems to be unbroken. Size in combination with a good sales network has become the key to success for surviving in the current storm. The trade war with China already looming on the horizon is adding to the uncertainties. (orig.)
Fearon, Elizabeth; Chabata, Sungai T; Thompson, Jennifer A; Cowan, Frances M; Hargreaves, James R
2017-09-14
While guidance exists for obtaining population size estimates using multiplier methods with respondent-driven sampling surveys, we lack specific guidance for making sample size decisions. Our objective was to guide the design of multiplier-method population size estimation studies that use respondent-driven sampling surveys, so as to reduce the random error around the estimate obtained. The population size estimate is obtained by dividing the number of individuals receiving a service or the number of unique objects distributed (M) by the proportion of individuals in a representative survey who report receipt of the service or object (P). We have developed an approach to sample size calculation that combines methods for estimating the variance around estimates obtained using multiplier methods with research into design effects and respondent-driven sampling. We describe an application to estimate the number of female sex workers in Harare, Zimbabwe. There is high variance in estimates. Random error around the size estimate reflects uncertainty from M and P, particularly when the estimate of P in the respondent-driven sampling survey is low. As expected, sample size requirements are higher when the design effect of the survey is assumed to be greater. We suggest a method for investigating the effects of sample size on the precision of a population size estimate obtained using multiplier methods and respondent-driven sampling. Uncertainty in the size estimate is high, particularly when P is small, so balancing against other potential sources of bias, we advise researchers to consider longer service attendance reference periods and to distribute more unique objects, which is likely to result in a higher estimate of P in the respondent-driven sampling survey. ©Elizabeth Fearon, Sungai T Chabata, Jennifer A Thompson, Frances M Cowan, James R Hargreaves. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 14.09.2017.
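The estimator is N = M / P. A delta-method approximation (an assumption here, treating M as known and inflating the variance of P by a design effect) makes the abstract's point concrete: for the same survey size, a small P produces a much wider relative confidence interval. The numbers below are illustrative, not the Harare data:

```python
import math

def multiplier_estimate(M, p_hat, n, deff=2.0):
    """Population size N = M / p_hat with an approximate 95% CI.

    Delta-method variance (M treated as known):
        var(N) ~= M^2 * var(p) / p^4,  with  var(p) = deff * p * (1 - p) / n
    """
    var_p = deff * p_hat * (1.0 - p_hat) / n
    N = M / p_hat
    se_N = M * math.sqrt(var_p) / p_hat ** 2
    return N, (N - 1.96 * se_N, N + 1.96 * se_N)

N1, ci1 = multiplier_estimate(M=3000, p_hat=0.5, n=400)
N2, ci2 = multiplier_estimate(M=3000, p_hat=0.1, n=400)   # smaller P, same n
rel1 = (ci1[1] - ci1[0]) / N1
rel2 = (ci2[1] - ci2[0]) / N2   # wider in relative terms when P is small
```

This is why the authors advise choices (longer reference periods, more distributed objects) that push P upward.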
On investment, uncertainty, and strategic interaction with applications in energy markets
International Nuclear Information System (INIS)
Murto, P.
2003-01-01
The thesis presents dynamic models on investment under uncertainty with the focus on strategic interaction and energy market applications. The uncertainty is modelled using stochastic processes as state variables. The specific questions analyzed include the effect of technological and revenue-related uncertainties on the optimal timing of investment, the irreversibility in the choice between alternative investment projects with different degrees of uncertainty, and the effect of strategic interaction on the initiation of discrete investment projects, on the abandonment of a project, and on incremental capacity investments. The main methodological feature is the incorporation of game-theoretic concepts in the theory of investment. It is argued that such an approach is often desirable in terms of real applications, because many industries are characterized by both uncertainty and strategic interaction between the firms. Besides extending the theory of investment, this line of work may be seen as an extension of the theory of industrial organization in a direction that views market stability as one of the factors explaining rational behaviour of the firms. (orig.)
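A standard building block of this literature (the perpetual option to invest in a project whose value follows geometric Brownian motion, as in Dixit and Pindyck) illustrates the timing effect: greater uncertainty raises the investment trigger above the break-even value. The parameter values below are arbitrary assumptions for the sketch:

```python
import math

def investment_threshold(I, r, mu, sigma):
    """Trigger value V* = beta / (beta - 1) * I at which immediate investment
    at sunk cost I becomes optimal; requires r > mu for a finite answer."""
    a = mu / sigma ** 2 - 0.5
    beta = -a + math.sqrt(a * a + 2.0 * r / sigma ** 2)   # positive root, beta > 1
    return beta / (beta - 1.0) * I

low_vol = investment_threshold(I=1.0, r=0.05, mu=0.02, sigma=0.2)
high_vol = investment_threshold(I=1.0, r=0.05, mu=0.02, sigma=0.4)
# higher volatility -> wait for a higher project value before investing
```

Strategic interaction, the thesis's focus, typically erodes part of this option value, since waiting risks preemption by a rival.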
International Nuclear Information System (INIS)
Tilly, David; Tilly, Nina; Ahnesjö, Anders
2013-01-01
Calculation of accumulated dose in fractionated radiotherapy based on spatial mapping of the dose points generally requires deformable image registration (DIR). The accuracy of the accumulated dose thus depends heavily on the DIR quality. This motivates investigations of how the registration uncertainty influences dose planning objectives and treatment outcome predictions. A framework was developed where the dose mapping can be associated with a variable known uncertainty to simulate the DIR uncertainties in a clinical workflow. The framework enabled us to study the dependence of dose planning metrics, and the predicted treatment outcome, on the DIR uncertainty. The additional planning margin needed to compensate for the dose mapping uncertainties can also be determined. We applied the simulation framework to a hypofractionated proton treatment of the prostate using two different scanning beam spot sizes to also study the dose mapping sensitivity to penumbra widths. The planning parameter most sensitive to the DIR uncertainty was found to be the target D95. We found that the registration mean absolute error needs to be ≤0.20 cm to obtain an uncertainty better than 3% of the calculated D95 for intermediate-sized penumbras. Use of larger margins in constructing the PTV from the CTV relaxed the registration uncertainty requirements at the cost of increased dose burdens to the surrounding organs at risk. The DIR uncertainty requirements should be considered in an adaptive radiotherapy workflow since this uncertainty can have significant impact on the accumulated dose. The simulation framework enabled quantification of the accuracy requirement for DIR algorithms to provide satisfactory clinical accuracy in the accumulated dose.
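The sensitivity of D95 to registration error can be illustrated with a deliberately simple one-dimensional sketch: a flat toy dose profile and a random rigid offset standing in for the DIR error field (all geometry and dose values below are assumptions, not the study's data):

```python
import random
import statistics

random.seed(3)

# toy 1-D dose grid: 140 voxels of 1 mm; flat 70 Gy over the target (voxels 20-119)
TARGET = range(20, 120)
dose = [70.0 if i in TARGET else 0.0 for i in range(140)]

def d95(mapped):
    """Dose received by at least 95% of target voxels (5th percentile in target)."""
    vals = sorted(mapped[i] for i in TARGET)
    return vals[int(0.05 * len(vals))]

# model the dose-mapping (DIR) error as a random integer voxel offset
d95_samples = []
for _ in range(500):
    shift = int(round(random.gauss(0.0, 2.0)))    # ~0.2 cm registration error
    mapped = [dose[min(max(i + shift, 0), 139)] for i in range(140)]
    d95_samples.append(d95(mapped))

# the distribution of d95_samples shows how DIR uncertainty degrades target D95
```

A realistic penumbra would smooth the dose falloff at the target edge, which is why the spot size (penumbra width) modulates this sensitivity.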
Statistical Uncertainty Quantification of Physical Models during Reflood of LBLOCA
Energy Technology Data Exchange (ETDEWEB)
Oh, Deog Yeon; Seul, Kwang Won; Woo, Sweng Woong [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)
2015-05-15
The use of the best-estimate (BE) computer codes in safety analysis for loss-of-coolant accidents (LOCA) is the major trend in many countries to reduce significant conservatism. A key feature of this BE evaluation requires the licensee to quantify the uncertainty of the calculations. It is therefore very important to determine the uncertainty distributions before conducting the uncertainty evaluation. Uncertainty includes that of physical models and correlations, plant operational parameters, and so forth. The quantification process is often performed mainly by subjective expert judgment or obtained from reference documents of the computer code. In this respect, more mathematical methods are needed to reasonably determine the uncertainty ranges. The first uncertainty quantification is performed with various increments for two influential uncertainty parameters to get the calculated responses and their derivatives. Different data sets with two influential uncertainty parameters for the FEBA tests are chosen by applying stricter criteria for selecting responses and their derivatives, which may be considered as the user’s effect in the CIRCÉ applications. Finally, three influential uncertainty parameters are considered to study the effect of the number of uncertainty parameters, given the limitations of the CIRCÉ method. With the determined uncertainty ranges, uncertainty evaluations for the FEBA tests are performed to check whether the experimental responses such as the cladding temperature or pressure drop are inside the limits of the calculated uncertainty bounds. A confirmation step will be performed to evaluate the quality of the information in the case of the different reflooding PERICLES experiments. The uncertainty ranges of the physical models in the MARS-KS thermal-hydraulic code during reflooding were quantified by the CIRCÉ method using the FEBA experiments, instead of expert judgment. Also, through the uncertainty evaluation for the FEBA and PERICLES tests, it was confirmed
Spafford, Marlee M; Schryer, Catherine F; Lingard, Lorelei; Hrynchak, Patricia K
2006-01-01
Healthcare students learn to manage clinical uncertainty amid the tensions that emerge between clinical omniscience and the 'truth for now' realities of the knowledge explosion in healthcare. The case presentation provides a portal to viewing the practitioner's ability to manage uncertainty. We examined the communicative features of uncertainty in 31 novice optometry case presentations and considered how these features contributed to the development of professional identity in optometry students. We also reflected on how these features compared with our earlier study of medical students' case presentations. Optometry students, like their counterparts in medicine, displayed a novice rhetoric of uncertainty that focused on personal deficits in knowledge. While optometry and medical students shared aspects of this rhetoric (seeking guidance and deflecting criticism), optometry students displayed instances of owning limits while medical students displayed instances of proving competence. We found that the nature of this novice rhetoric was shaped by professional identity (a tendency to assume an attitude of moral authority or defer to a higher authority) and the clinical setting (inpatient versus outpatient settings). More explicit discussions regarding uncertainty may help the novice unlock the code of contextual forces that cue the savvy member of the community to sanctioned discursive strategies.
Comparing particle-size distributions in modern and ancient sand-bed rivers
Hajek, E. A.; Lynds, R. M.; Huzurbazar, S. V.
2011-12-01
Particle-size distributions yield valuable insight into processes controlling sediment supply, transport, and deposition in sedimentary systems. This is especially true in ancient deposits, where effects of changing boundary conditions and autogenic processes may be detected from deposited sediment. In order to improve interpretations in ancient deposits and constrain uncertainty associated with new methods for paleomorphodynamic reconstructions in ancient fluvial systems, we compare particle-size distributions in three active sand-bed rivers in central Nebraska (USA) to grain-size distributions from ancient sandy fluvial deposits. Within the modern rivers studied, particle-size distributions of active-layer, suspended-load, and slackwater deposits show consistent relationships despite some morphological and sediment-supply differences between the rivers. In particular, there is substantial and consistent overlap between bed-material and suspended-load distributions, and the coarsest material found in slackwater deposits is comparable to the coarse fraction of suspended-sediment samples. Proxy bed-load and slackwater-deposit samples from the Kayenta Formation (Lower Jurassic, Utah/Colorado, USA) show overlap similar to that seen in the modern rivers, suggesting that these deposits may be sampled for paleomorphodynamic reconstructions, including paleoslope estimation. We also compare grain-size distributions of channel, floodplain, and proximal-overbank deposits in the Willwood (Paleocene/Eocene, Bighorn Basin, Wyoming, USA), Wasatch (Paleocene/Eocene, Piceance Creek Basin, Colorado, USA), and Ferris (Cretaceous/Paleocene, Hanna Basin, Wyoming, USA) formations. Grain-size characteristics in these deposits reflect how suspended- and bed-load sediment is distributed across the floodplain during channel avulsion events. In order to constrain uncertainty inherent in such estimates, we evaluate uncertainty associated with sample collection, preparation, analytical
Compilation of information on uncertainties involved in deposition modeling
International Nuclear Information System (INIS)
Lewellen, W.S.; Varma, A.K.; Sheng, Y.P.
1985-04-01
The current generation of dispersion models contains very simple parameterizations of deposition processes. The analysis here looks at the physical mechanisms governing these processes in an attempt to see if more valid parameterizations are available and what level of uncertainty is involved in either these simple parameterizations or any more advanced parameterization. The report is composed of three parts. The first, on dry deposition model sensitivity, provides an estimate of the uncertainty existing in current estimates of the deposition velocity due to uncertainties in independent variables such as meteorological stability, particle size, surface chemical reactivity and canopy structure. The range of uncertainty estimated for an appropriate dry deposition velocity for a plume generated by a nuclear power plant accident is three orders of magnitude. The second part discusses the uncertainties involved in precipitation scavenging rates for effluents resulting from a nuclear reactor accident. The conclusion is that major uncertainties are involved both as a result of the natural variability of the atmospheric precipitation process and due to our incomplete understanding of the underlying process. The third part involves a review of the important problems associated with modeling the interaction between the atmosphere and a forest. It gives an indication of the magnitude of the problem involved in modeling dry deposition in such environments. Separate analytics have been done for each section and are contained in the EDB
International Nuclear Information System (INIS)
Fischer, F.; Ehrhardt, J.
1988-06-01
Various techniques available for uncertainty analysis of large computer models are applied, described and selected as most appropriate for analyzing the uncertainty in the predictions of accident consequence assessments. The investigation refers to the atmospheric dispersion and deposition submodel (straight-line Gaussian plume model) of UFOMOD, whose most important input variables and parameters are linked with probability distributions derived from expert judgement. Uncertainty bands show how much variability exists; sensitivity measures determine what causes this variability in consequences. Results are presented as confidence bounds of complementary cumulative frequency distributions (CCFDs) of activity concentrations, organ doses and health effects, partially as a function of distance from the site. In addition, the ranked influence of the uncertain parameters on the different consequence types is shown. For the estimation of confidence bounds it was sufficient to choose a model parameter sample size of n=59, about 1.5 times the number of uncertain model parameters. Different samples or an increase of sample size did not change the 5%-95% confidence bands. To get statistically stable results of the sensitivity analysis, larger sample sizes are needed (n=100, 200). Random or Latin-hypercube sampling schemes as tools for uncertainty and sensitivity analyses led to comparable results. (orig.) [de]
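A Latin-hypercube sample of the kind used above (n = 59 runs for roughly 40 uncertain parameters) can be generated in a few lines; the stratification guarantees that every parameter's range is covered even at small sample sizes. The implementation below is a generic sketch, not the UFOMOD sampling code:

```python
import random

def latin_hypercube(n, k, rng):
    """n samples in k dimensions on [0, 1): each dimension is cut into n equal
    strata and every stratum is used exactly once, in random order per dimension."""
    cols = []
    for _ in range(k):
        strata = list(range(n))
        rng.shuffle(strata)
        cols.append([(s + rng.random()) / n for s in strata])
    return [tuple(col[i] for col in cols) for i in range(n)]

sample = latin_hypercube(n=59, k=40, rng=random.Random(59))
# each coordinate would then be mapped through the inverse CDF of the
# corresponding expert-judgement distribution before running the model
```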
Uncertainty in projected impacts of climate change on biodiversity
DEFF Research Database (Denmark)
Garcia, Raquel A.
Evidence for shifts in the phenologies and distributions of species over recent decades has often been attributed to climate change. The prospect of greater and faster changes in climate during the 21st century has spurred a stream of studies anticipating future biodiversity impacts. Yet, uncertainty is inherent to both projected climate changes and their effects on biodiversity, and needs to be understood before projections can be used. This thesis seeks to elucidate some of the uncertainties clouding assessments of biodiversity impacts from climate change, and explores ways to address them. ... models are shown to be affected by multiple uncertainties. Different model algorithms produce different outputs, as do alternative future climate models and scenarios of future emissions of greenhouse gases. Another uncertainty arises due to omission of species with small sample sizes, which ...
SU-G-BRA-09: Estimation of Motion Tracking Uncertainty for Real-Time Adaptive Imaging
Energy Technology Data Exchange (ETDEWEB)
Yan, H [Capital Medical University, Beijing, Beijing (China); Chen, Z [Yale New Haven Hospital, New Haven, CT (United States); Nath, R; Liu, W [Yale University School of Medicine, New Haven, CT (United States)
2016-06-15
Purpose: kV fluoroscopic imaging combined with MV treatment beam imaging has been investigated for intrafractional motion monitoring and correction. It is, however, subject to additional kV imaging dose to normal tissue. To balance tracking accuracy and imaging dose, we previously proposed an adaptive imaging strategy to dynamically decide future imaging type and moments based on motion tracking uncertainty. kV imaging may be used continuously for maximal accuracy or only when the position uncertainty (probability of being out of threshold) is high if a preset imaging dose limit is considered. In this work, we propose more accurate methods to estimate tracking uncertainty by analyzing acquired data in real-time. Methods: We simulated the motion tracking process based on a previously developed imaging framework (MV + initial seconds of kV imaging) using real-time breathing data from 42 patients. Motion tracking errors for each time point were collected together with the time point’s corresponding features, such as tumor motion speed and 2D tracking error of previous time points, etc. We tested three methods for error uncertainty estimation based on the features: conditional probability distribution, logistic regression modeling, and support vector machine (SVM) classification to detect errors exceeding a threshold. Results: For conditional probability distribution, polynomial regressions on three features (previous tracking error, prediction quality, and cosine of the angle between the trajectory and the treatment beam) showed strong correlation with the variation (uncertainty) of the mean 3D tracking error and its standard deviation: R-square = 0.94 and 0.90, respectively. The logistic regression and SVM classification successfully identified about 95% of tracking errors exceeding the 2.5 mm threshold. Conclusion: The proposed methods can reliably estimate the motion tracking uncertainty in real-time, which can be used to guide adaptive additional imaging to confirm the
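The logistic-regression idea, predicting whether the current tracking error exceeds the threshold from features such as the previous error, can be sketched on synthetic data. The single feature, its relation to the label, and all constants are invented for illustration:

```python
import math
import random

random.seed(7)

# synthetic training data: feature x = previous tracking error (mm),
# label y = 1 if the current 3D error exceeds the 2.5 mm threshold
data = []
for _ in range(400):
    x = random.uniform(0.0, 5.0)
    p_true = 1.0 / (1.0 + math.exp(-(1.8 * x - 4.5)))   # hidden generating model
    data.append((x, 1 if random.random() < p_true else 0))

# fit logistic regression by batch gradient descent
w, b = 0.0, 0.0
for _ in range(2000):
    gw = gb = 0.0
    for x, y in data:
        p = 1.0 / (1.0 + math.exp(-(w * x + b)))
        gw += (p - y) * x
        gb += (p - y)
    w -= 0.05 * gw / len(data)
    b -= 0.05 * gb / len(data)

def p_exceed(x):
    """Estimated probability that the tracking error exceeds the threshold."""
    return 1.0 / (1.0 + math.exp(-(w * x + b)))
```

In the adaptive strategy, a high `p_exceed` would be the cue to trigger an additional kV acquisition.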
SU-G-BRA-09: Estimation of Motion Tracking Uncertainty for Real-Time Adaptive Imaging
International Nuclear Information System (INIS)
Yan, H; Chen, Z; Nath, R; Liu, W
2016-01-01
Purpose: kV fluoroscopic imaging combined with MV treatment beam imaging has been investigated for intrafractional motion monitoring and correction. It is, however, subject to additional kV imaging dose to normal tissue. To balance tracking accuracy and imaging dose, we previously proposed an adaptive imaging strategy to dynamically decide future imaging type and moments based on motion tracking uncertainty. kV imaging may be used continuously for maximal accuracy or only when the position uncertainty (probability of being out of threshold) is high if a preset imaging dose limit is considered. In this work, we propose more accurate methods to estimate tracking uncertainty by analyzing acquired data in real-time. Methods: We simulated the motion tracking process based on a previously developed imaging framework (MV + initial seconds of kV imaging) using real-time breathing data from 42 patients. Motion tracking errors for each time point were collected together with the time point’s corresponding features, such as tumor motion speed and 2D tracking error of previous time points, etc. We tested three methods for error uncertainty estimation based on the features: conditional probability distribution, logistic regression modeling, and support vector machine (SVM) classification to detect errors exceeding a threshold. Results: For conditional probability distribution, polynomial regressions on three features (previous tracking error, prediction quality, and cosine of the angle between the trajectory and the treatment beam) showed strong correlation with the variation (uncertainty) of the mean 3D tracking error and its standard deviation: R-square = 0.94 and 0.90, respectively. The logistic regression and SVM classification successfully identified about 95% of tracking errors exceeding the 2.5 mm threshold. Conclusion: The proposed methods can reliably estimate the motion tracking uncertainty in real-time, which can be used to guide adaptive additional imaging to confirm the
The magnitude and causes of uncertainty in global model simulations of cloud condensation nuclei
Directory of Open Access Journals (Sweden)
L. A. Lee
2013-09-01
Full Text Available Aerosol–cloud interaction effects are a major source of uncertainty in climate models, so it is important to quantify the sources of uncertainty and thereby direct research efforts. However, the computational expense of global aerosol models has prevented a full statistical analysis of their outputs. Here we perform a variance-based analysis of a global 3-D aerosol microphysics model to quantify the magnitude and leading causes of parametric uncertainty in model-estimated present-day concentrations of cloud condensation nuclei (CCN. Twenty-eight model parameters covering essentially all important aerosol processes, emissions and representation of aerosol size distributions were defined based on expert elicitation. An uncertainty analysis was then performed based on a Monte Carlo-type sampling of an emulator built for each model grid cell. The standard deviation around the mean CCN varies globally between about ±30% over some marine regions to ±40–100% over most land areas and high latitudes, implying that aerosol processes and emissions are likely to be a significant source of uncertainty in model simulations of aerosol–cloud effects on climate. Among the most important contributors to CCN uncertainty are the sizes of emitted primary particles, including carbonaceous combustion particles from wildfires, biomass burning and fossil fuel use, as well as sulfate particles formed on sub-grid scales. Emissions of carbonaceous combustion particles affect CCN uncertainty more than sulfur emissions. Aerosol emission-related parameters dominate the uncertainty close to sources, while uncertainty in aerosol microphysical processes becomes increasingly important in remote regions, being dominated by deposition and aerosol sulfate formation during cloud-processing. The results lead to several recommendations for research that would result in improved modelling of cloud-active aerosol on a global scale.
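The emulator-based variance decomposition can be miniaturized as follows: a cheap surrogate stands in for one grid cell's CCN response, and crude main-effect (first-order) shares are estimated by Monte Carlo. The surrogate's coefficients, its two parameters, and the Gaussian input distributions are invented for the sketch:

```python
import random
import statistics

random.seed(11)

def emulator(e_size, so2):
    """Stand-in for a per-grid-cell CCN emulator fitted to full-model runs."""
    return 300.0 + 80.0 * e_size + 25.0 * so2 + 10.0 * e_size * so2

xs = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(2000)]
total_var = statistics.pvariance([emulator(a, b) for a, b in xs])

def main_effect(idx):
    """Share of output variance explained by parameter idx alone:
    the variance of the conditional mean E[CCN | parameter idx]."""
    cond_means = []
    for _ in range(200):
        fixed = random.gauss(0, 1)
        draws = [emulator(fixed, random.gauss(0, 1)) if idx == 0
                 else emulator(random.gauss(0, 1), fixed)
                 for _ in range(100)]
        cond_means.append(statistics.mean(draws))
    return statistics.pvariance(cond_means) / total_var

s_size = main_effect(0)   # emitted-particle size dominates in this toy setup
s_so2 = main_effect(1)
```

Because the emulator is cheap, such a decomposition can be repeated for every grid cell, which is what makes a global variance-based analysis tractable.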
Demonstration uncertainty/sensitivity analysis using the health and economic consequence model CRAC2
International Nuclear Information System (INIS)
Alpert, D.J.; Iman, R.L.; Johnson, J.D.; Helton, J.C.
1984-12-01
The techniques for performing uncertainty/sensitivity analyses compiled as part of the MELCOR program appear to be well suited for use with a health and economic consequence model. Two replicate samples of size 50 gave essentially identical results, indicating that for this case, a Latin hypercube sample of size 50 seems adequate to represent the distribution of results. Though the intent of this study was a demonstration of uncertainty/sensitivity analysis techniques, a number of insights relevant to health and economic consequence modeling can be gleaned: uncertainties in early deaths are significantly greater than uncertainties in latent cancer deaths; though the magnitude of the source term is the largest source of variation in estimated distributions of early deaths, a number of additional parameters are also important; even with the release fractions for a full SST1, one quarter of the CRAC2 runs gave no early deaths; and comparison of the estimates of mean early deaths for a full SST1 release in this study with those of recent point estimates for similar conditions indicates that the recent estimates may be significant overestimations of early deaths. Estimates of latent cancer deaths, however, are roughly comparable. An analysis of the type described here can provide insights in a number of areas. First, the variability in the results gives an indication of the potential uncertainty associated with the calculations. Second, the sensitivity of the results to assumptions about the input variables can be determined. Research efforts can then be concentrated on reducing the uncertainty in the variables which are the largest contributors to uncertainty in results
Quantification of margins and uncertainties: Alternative representations of epistemic uncertainty
International Nuclear Information System (INIS)
Helton, Jon C.; Johnson, Jay D.
2011-01-01
In 2001, the National Nuclear Security Administration of the U.S. Department of Energy in conjunction with the national security laboratories (i.e., Los Alamos National Laboratory, Lawrence Livermore National Laboratory and Sandia National Laboratories) initiated development of a process designated Quantification of Margins and Uncertainties (QMU) for the use of risk assessment methodologies in the certification of the reliability and safety of the nation's nuclear weapons stockpile. A previous presentation, 'Quantification of Margins and Uncertainties: Conceptual and Computational Basis,' describes the basic ideas that underlie QMU and illustrates these ideas with two notional examples that employ probability for the representation of aleatory and epistemic uncertainty. The current presentation introduces and illustrates the use of interval analysis, possibility theory and evidence theory as alternatives to the use of probability theory for the representation of epistemic uncertainty in QMU-type analyses. The following topics are considered: the mathematical structure of alternative representations of uncertainty, alternative representations of epistemic uncertainty in QMU analyses involving only epistemic uncertainty, and alternative representations of epistemic uncertainty in QMU analyses involving a separation of aleatory and epistemic uncertainty. Analyses involving interval analysis, possibility theory and evidence theory are illustrated with the same two notional examples used in the presentation indicated above to illustrate the use of probability to represent aleatory and epistemic uncertainty in QMU analyses.
International Nuclear Information System (INIS)
Andres, T.H.
2002-05-01
This guide applies to the estimation of uncertainty in quantities calculated by scientific, analysis and design computer programs that fall within the scope of AECL's software quality assurance (SQA) manual. The guide weaves together rational approaches from the SQA manual and three other diverse sources: (a) the CSAU (Code Scaling, Applicability, and Uncertainty) evaluation methodology; (b) the ISO Guide for the Expression of Uncertainty in Measurement; and (c) the SVA (Systems Variability Analysis) method of risk analysis. This report describes the manner by which random and systematic uncertainties in calculated quantities can be estimated and expressed. Random uncertainty in model output can be attributed to uncertainties of inputs. The propagation of these uncertainties through a computer model can be represented in a variety of ways, including exact calculations, series approximations and Monte Carlo methods. Systematic uncertainties emerge from the development of the computer model itself, through simplifications and conservatisms, for example. These must be estimated and combined with random uncertainties to determine the combined uncertainty in a model output. This report also addresses the method by which uncertainties should be employed in code validation, in order to determine whether experiments and simulations agree, and whether or not a code satisfies the required tolerance for its application. (author)
Energy Technology Data Exchange (ETDEWEB)
Andres, T.H
2002-05-01
This guide applies to the estimation of uncertainty in quantities calculated by scientific, analysis and design computer programs that fall within the scope of AECL's software quality assurance (SQA) manual. The guide weaves together rational approaches from the SQA manual and three other diverse sources: (a) the CSAU (Code Scaling, Applicability, and Uncertainty) evaluation methodology; (b) the ISO Guide for the Expression of Uncertainty in Measurement; and (c) the SVA (Systems Variability Analysis) method of risk analysis. This report describes the manner by which random and systematic uncertainties in calculated quantities can be estimated and expressed. Random uncertainty in model output can be attributed to uncertainties of inputs. The propagation of these uncertainties through a computer model can be represented in a variety of ways, including exact calculations, series approximations and Monte Carlo methods. Systematic uncertainties emerge from the development of the computer model itself, through simplifications and conservatisms, for example. These must be estimated and combined with random uncertainties to determine the combined uncertainty in a model output. This report also addresses the method by which uncertainties should be employed in code validation, in order to determine whether experiments and simulations agree, and whether or not a code satisfies the required tolerance for its application. (author)
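Of the propagation options listed above (exact calculation, series approximation, Monte Carlo), the two most common are easy to compare on a toy problem. The sketch below, with an invented two-input model and input distributions, propagates the input uncertainties by Monte Carlo and checks the result against the first-order series approximation:

```python
import numpy as np

rng = np.random.default_rng(0)

def model(a, b):
    # Hypothetical calculated quantity; stands in for a design-code output.
    return a ** 2 + 3.0 * b

# Input uncertainties: a ~ N(2, 0.1), b ~ N(1, 0.05)
n = 100_000
a = rng.normal(2.0, 0.1, n)
b = rng.normal(1.0, 0.05, n)
y = model(a, b)

mc_mean, mc_std = y.mean(), y.std()

# First-order (series-approximation) propagation for comparison:
# dy/da = 2a = 4 and dy/db = 3 at the nominal point (a, b) = (2, 1).
lin_std = np.sqrt((4.0 * 0.1) ** 2 + (3.0 * 0.05) ** 2)
print(mc_mean, mc_std, lin_std)
```

For this nearly linear model the two estimates of the output standard uncertainty agree closely; for strongly nonlinear models the Monte Carlo result is the more trustworthy of the two.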
Uncertainty analysis for hot channel
International Nuclear Information System (INIS)
Panka, I.; Kereszturi, A.
2006-01-01
The fulfillment of the safety analysis acceptance criteria is usually evaluated by separate hot channel calculations using the results of neutronic and/or thermo-hydraulic system calculations. In case of an ATWS event (inadvertent withdrawal of a control assembly), according to the analysis, a number of fuel rods experience DNB for a longer time and must be regarded as failed. Their number must be determined for a further evaluation of the radiological consequences. In the deterministic approach, the global power history must be multiplied by different hot channel factors (kx) taking into account the radial power peaking factors for each fuel pin. If DNB occurs, it is necessary to perform a small number of hot channel calculations to determine the limiting kx leading just to DNB and fuel failure (the conservative DNBR limit is 1.33). Knowing the pin power distribution from the core design calculation, the number of failed fuel pins can be calculated. The above procedure can also be performed with conservative assumptions (e.g. conservative input parameters in the hot channel calculations). In the hot channel uncertainty analysis, the relevant input parameters of the hot channel calculations (kx, mass flow, inlet temperature of the coolant, pin average burnup, initial gap size, selection of the power history influencing the gap conductance value) and the DNBR limit are varied considering their respective uncertainties. An uncertainty analysis methodology was elaborated combining the response surface method with the one-sided tolerance limit method of Wilks. The results of the deterministic and uncertainty hot channel calculations are compared with regard to the number of failed fuel rods, the maximum clad surface temperature and the maximum fuel temperature. (Authors)
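Wilks' one-sided tolerance limit method mentioned above fixes the number of code runs needed so that the largest observed output bounds a given population quantile at a given confidence; the smallest N satisfies 1 - coverage^N >= confidence. A minimal sketch:

```python
import math

def wilks_n(coverage=0.95, confidence=0.95):
    # Smallest N such that the largest of N runs bounds the `coverage`
    # quantile with the stated confidence: 1 - coverage**N >= confidence.
    return math.ceil(math.log(1.0 - confidence) / math.log(coverage))

print(wilks_n())  # -> 59, the classic first-order 95%/95% result
```

This is why 59 (or 93 for second order) code runs appear so often in best-estimate-plus-uncertainty safety analyses.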
Energy Technology Data Exchange (ETDEWEB)
Cheng, Nai-Ming [Chang Gung Memorial Hospital and Chang Gung University, Departments of Nuclear Medicine, Taiyuan (China); Chang Gung Memorial Hospital, Department of Nuclear Medicine, Keelung (China); National Tsing Hua University, Department of Biomedical Engineering and Environmental Sciences, Hsinchu (China); Fang, Yu-Hua Dean [Chang Gung University, Department of Electrical Engineering, Taiyuan (China); Lee, Li-yu [Chang Gung University College of Medicine, Department of Pathology, Chang Gung Memorial Hospital, Taoyuan (China); Chang, Joseph Tung-Chieh; Tsan, Din-Li [Chang Gung University College of Medicine, Department of Radiation Oncology, Chang Gung Memorial Hospital, Taoyuan (China); Ng, Shu-Hang [Chang Gung University College of Medicine, Department of Diagnostic Radiology, Chang Gung Memorial Hospital, Taoyuan (China); Wang, Hung-Ming [Chang Gung University College of Medicine, Division of Hematology/Oncology, Department of Internal Medicine, Chang Gung Memorial Hospital, Taoyuan (China); Liao, Chun-Ta [Chang Gung University College of Medicine, Department of Otolaryngology-Head and Neck Surgery, Chang Gung Memorial Hospital, Taoyuan (China); Yang, Lan-Yan [Chang Gung Memorial Hospital, Biostatistics Unit, Clinical Trial Center, Taoyuan (China); Hsu, Ching-Han [National Tsing Hua University, Department of Biomedical Engineering and Environmental Sciences, Hsinchu (China); Yen, Tzu-Chen [Chang Gung Memorial Hospital and Chang Gung University, Departments of Nuclear Medicine, Taiyuan (China); Chang Gung University College of Medicine, Department of Nuclear Medicine and Molecular Imaging Center, Chang Gung Memorial Hospital, Taipei (China)
2014-10-23
The question as to whether the regional textural features extracted from PET images predict prognosis in oropharyngeal squamous cell carcinoma (OPSCC) remains open. In this study, we investigated the prognostic impact of regional heterogeneity in patients with T3/T4 OPSCC. We retrospectively reviewed the records of 88 patients with T3 or T4 OPSCC who had completed primary therapy. Progression-free survival (PFS) and disease-specific survival (DSS) were the main outcome measures. In an exploratory analysis, a standardized uptake value of 2.5 (SUV 2.5) was taken as the cut-off value for the detection of tumour boundaries. A fixed threshold at 42% of the maximum SUV (SUVmax 42%) and an adaptive threshold method were then used for validation. Regional textural features were extracted from pretreatment ¹⁸F-FDG PET/CT images using the grey-level run length encoding method and the grey-level size zone matrix. The prognostic significance of PET textural features was examined using receiver operating characteristic (ROC) curves and Cox regression analysis. Zone-size nonuniformity (ZSNU) was identified as an independent predictor of PFS and DSS. Its prognostic impact was confirmed using both the SUVmax 42% and the adaptive threshold segmentation methods. Based on (1) total lesion glycolysis, (2) uniformity (a local-scale texture parameter), and (3) ZSNU, we devised a prognostic stratification system that allowed the identification of four distinct risk groups. The model combining the three prognostic parameters showed a higher predictive value than each variable alone. ZSNU is an independent predictor of outcome in patients with advanced T-stage OPSCC, and may improve their prognostic stratification. (orig.)
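Grey-level run-length features of the kind used above can be computed in a few lines. The sketch below (the tiny test image is invented) collects horizontal runs and computes a run-length nonuniformity, a close relative of the zone-size nonuniformity (ZSNU) used in the paper:

```python
import numpy as np
from itertools import groupby

def horizontal_runs(img):
    # Collect (grey level, run length) pairs for horizontal runs per row.
    runs = []
    for row in img:
        for level, grp in groupby(row):
            runs.append((level, sum(1 for _ in grp)))
    return runs

def run_length_nonuniformity(img):
    # RLN: sum over run lengths of (number of runs of that length)^2,
    # normalised by the total number of runs. Low values indicate runs
    # spread evenly over lengths, i.e. a more heterogeneous texture.
    runs = horizontal_runs(img)
    lengths = [length for _, length in runs]
    counts = np.bincount(lengths)
    return float((counts ** 2).sum()) / len(runs)

img = np.array([[1, 1, 2, 2],
                [1, 1, 1, 1],
                [3, 3, 2, 1]])
print(run_length_nonuniformity(img))
```

The size-zone variant replaces 1-D runs with 2-D connected zones of equal grey level, but the nonuniformity statistic is formed the same way.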
Decoherence effect on quantum-memory-assisted entropic uncertainty relations
Ming, Fei; Wang, Dong; Huang, Ai-Jun; Sun, Wen-Yang; Ye, Liu
2018-01-01
The uncertainty principle bounds the precision with which any two incompatible observables can be predicted, and thereby plays a nontrivial role in quantum precision measurement. In this work, we observe the dynamical features of the quantum-memory-assisted entropic uncertainty relations (EUR) for a pair of incompatible measurements in an open system characterized by local generalized amplitude damping (GAD) noises. Herein, we derive the dynamical evolution of the entropic uncertainty with respect to the measurement affected by the canonical GAD noises when particle A is initially entangled with quantum memory B. Specifically, we examine the dynamics of the EUR in three realistic scenarios: particle A is affected by the environmental (GAD) noise while particle B, serving as the quantum memory, is free from any noise; particle B is affected by the external noise while particle A is not; and both particles suffer from the noises. By analytical methods, it turns out that the uncertainty does not depend fully on the evolution of the quantum correlation of the composite system consisting of A and B, but rather on the minimal conditional entropy of the measured subsystem. Furthermore, we present a possible physical interpretation of the behavior of the uncertainty evolution by means of the mixedness of the observed system; we argue that the uncertainty may be strongly correlated with the system's mixedness. We also put forward a simple and effective strategy to reduce the measurement uncertainty of interest using partially collapsing quantum measurements. Therefore, our explorations might offer an insight into the dynamics of the entropic uncertainty relation in a realistic system, and be of importance to quantum precision measurement during quantum information processing.
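The memoryless special case of the entropic uncertainty relation is easy to check numerically. The sketch below (the example qubit state is arbitrary) verifies the Maassen–Uffink bound H(X) + H(Z) >= log2(1/c) = 1 for Pauli X and Z measurements on a qubit, the bound that the quantum-memory-assisted EUR generalizes:

```python
import numpy as np

def shannon(p):
    # Shannon entropy in bits, ignoring zero-probability outcomes.
    p = p[p > 1e-12]
    return float(-(p * np.log2(p)).sum())

# An arbitrary pure qubit state |psi> (hypothetical example).
psi = np.array([np.cos(0.3), np.sin(0.3) * np.exp(1j * 0.7)])

# Outcome probabilities for Pauli Z (computational basis) ...
pz = np.abs(psi) ** 2
# ... and Pauli X (Hadamard-rotated basis).
h = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
px = np.abs(h @ psi) ** 2

# Maassen-Uffink: H(X) + H(Z) >= log2(1/c), with overlap c = 1/2 here.
lhs = shannon(px) + shannon(pz)
print(lhs, lhs >= 1.0)
```

The memory-assisted relation tightens this bound by the conditional entropy term S(A|B), which is exactly what the decoherence studied in the paper degrades.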
Feature-Based Retinal Image Registration Using D-Saddle Feature
Directory of Open Access Journals (Sweden)
Roziana Ramli
2017-01-01
Full Text Available Retinal image registration is important to assist diagnosis and to monitor retinal diseases, such as diabetic retinopathy and glaucoma. However, registering retinal images for various registration applications requires the detection and distribution of feature points on the low-quality region that consists of vessels of varying contrast and sizes. A recent feature detector known as Saddle yields feature points on vessels that are poorly distributed and densely positioned on strong-contrast vessels. Therefore, we propose a multiresolution difference of Gaussian pyramid with the Saddle detector (D-Saddle) to detect feature points on the low-quality region that consists of vessels with varying contrast and sizes. D-Saddle is tested on the Fundus Image Registration (FIRE) dataset that consists of 134 retinal image pairs. Experimental results show that D-Saddle successfully registered 43% of retinal image pairs with an average registration accuracy of 2.329 pixels, while a lower success rate is observed in the other four state-of-the-art retinal image registration methods: GDB-ICP (28%), Harris-PIIFD (4%), H-M (16%), and Saddle (16%). Furthermore, the registration accuracy of D-Saddle has the weakest correlation (Spearman) with the intensity uniformity metric among all methods. Finally, the paired t-test shows that D-Saddle significantly improved the overall registration accuracy of the original Saddle.
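The multiresolution difference-of-Gaussian step that distinguishes D-Saddle from plain Saddle can be sketched in a few lines; the blur scales and random test image below are arbitrary, and the actual detector then runs the Saddle test on each pyramid level:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_pyramid(img, sigmas=(1.0, 2.0, 4.0)):
    # Difference-of-Gaussian levels: successive blur scales subtracted,
    # which acts as a band-pass filter that boosts vessel-like structure
    # at several resolutions before feature detection.
    blurred = [gaussian_filter(img, s) for s in sigmas]
    return [blurred[i] - blurred[i + 1] for i in range(len(blurred) - 1)]

rng = np.random.default_rng(0)
img = rng.random((64, 64))          # stand-in for a fundus image channel
levels = dog_pyramid(img)
print(len(levels), levels[0].shape)
```

Detecting on every band-pass level is what spreads feature points onto low-contrast vessels that a single-scale detector would miss.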
Walz, Michael; Leckebusch, Gregor C.
2016-04-01
Extratropical wind storms pose one of the most dangerous and loss-intensive natural hazards for Europe. However, with only 50 years of high-quality observational data, it is difficult to assess the statistical uncertainty of these sparse events on the basis of observations alone. Over the last decade, seasonal ensemble forecasts have become indispensable in quantifying the uncertainty of weather prediction on seasonal timescales. In this study, seasonal forecasts are used in a climatological context: by making use of the up to 51 ensemble members, a broad and physically consistent statistical base can be created. This base can then be used to assess the statistical uncertainty of extreme wind storm occurrence more accurately. In order to determine the statistical uncertainty of storms with different paths of progression, a probabilistic clustering approach using regression mixture models is used to objectively assign storm tracks (based either on core pressure or on extreme wind speeds) to different clusters. The advantage of this technique is that the entire lifetime of a storm is considered in the clustering algorithm. Quadratic curves are found to describe the storm tracks most accurately. Three main clusters (diagonal, horizontal or vertical progression of the storm track) can be identified, each of which has its own particular features. Basic storm features such as average velocity and duration are calculated and compared for each cluster. The main benefit of this clustering technique, however, is to evaluate whether the clusters show different degrees of uncertainty, e.g. more (less) spread for tracks approaching Europe horizontally (diagonally). This statistical uncertainty is compared for different seasonal forecast products.
Kalman filter approach for uncertainty quantification in time-resolved laser-induced incandescence.
Hadwin, Paul J; Sipkens, Timothy A; Thomson, Kevin A; Liu, Fengshan; Daun, Kyle J
2018-03-01
Time-resolved laser-induced incandescence (TiRe-LII) data can be used to infer spatially and temporally resolved volume fractions and primary particle size distributions of soot-laden aerosols, but these estimates are corrupted by measurement noise as well as uncertainties in the spectroscopic and heat transfer submodels used to interpret the data. Estimates of the temperature, concentration, and size distribution of soot primary particles within a sample aerosol are typically made by nonlinear regression of modeled spectral incandescence decay, or effective temperature decay, to experimental data. In this work, we employ nonstationary Bayesian estimation techniques to infer aerosol properties from simulated and experimental LII signals, specifically the extended Kalman filter and Schmidt-Kalman filter. These techniques exploit the time-varying nature of both the measurements and the models, and they reveal how uncertainty in the estimates computed from TiRe-LII data evolves over time. Both techniques perform better when compared with standard deterministic estimates; however, we demonstrate that the Schmidt-Kalman filter produces more realistic uncertainty estimates.
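A minimal linear Kalman filter conveys the underlying recursion; the decay model, noise levels, and temperatures below are invented stand-ins, and the paper's extended and Schmidt–Kalman filters add nonlinear measurement models and nuisance-parameter covariance on top of this structure:

```python
import numpy as np

def kalman_filter(z, a, q, r, x0, p0):
    # Linear Kalman filter for a scalar state x_k = a * x_{k-1} + w_k,
    # observed as z_k = x_k + v_k; returns filtered states and variances.
    x, p = x0, p0
    xs, ps = [], []
    for zk in z:
        # Predict
        x, p = a * x, a * a * p + q
        # Update
        k = p / (p + r)              # Kalman gain
        x = x + k * (zk - x)
        p = (1.0 - k) * p
        xs.append(x)
        ps.append(p)
    return np.array(xs), np.array(ps)

# Hypothetical TiRe-LII-like signal: effective temperature decaying by a
# factor 0.95 per time step, observed with additive Gaussian noise.
rng = np.random.default_rng(1)
true = 3000.0 * 0.95 ** np.arange(50)          # kelvin, invented
z = true + rng.normal(0.0, 30.0, 50)           # noisy measurements
xs, ps = kalman_filter(z, a=0.95, q=1.0, r=30.0 ** 2, x0=3000.0, p0=100.0)
print(xs[-1], np.sqrt(ps[-1]))
```

The running variance `ps` is the point of the exercise: it shows how the uncertainty of the estimate evolves over the decay, which deterministic least-squares fits do not provide.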
Directory of Open Access Journals (Sweden)
A. Wiedensohler
2012-03-01
Full Text Available Mobility particle size spectrometers, often referred to as DMPS (Differential Mobility Particle Sizers) or SMPS (Scanning Mobility Particle Sizers), have found a wide range of applications in atmospheric aerosol research. However, the comparability of measurements conducted world-wide is hampered by the lack of generally accepted technical standards and guidelines with respect to the instrumental set-up, measurement mode, data evaluation and quality control. Technical standards were developed for the minimum requirements of mobility size spectrometry to perform long-term atmospheric aerosol measurements. Technical recommendations include continuous monitoring of flow rates, temperature, pressure, and relative humidity for the sheath and sample air in the differential mobility analyzer.
We compared commercial and custom-made inversion routines used to calculate the particle number size distributions from the measured electrical mobility distribution. All inversion routines are comparable within a few per cent uncertainty for a given set of raw data.
Furthermore, this work summarizes the results from several instrument intercomparison workshops conducted within the European infrastructure projects EUSAAR (European Supersites for Atmospheric Aerosol Research) and ACTRIS (Aerosols, Clouds, and Trace gases Research InfraStructure Network) to determine the present uncertainties, especially of custom-built mobility particle size spectrometers. Under controlled laboratory conditions, the particle number size distributions from 20 to 200 nm determined by mobility particle size spectrometers of different design are within an uncertainty range of around ±10% after correcting internal particle losses, while below and above this size range the discrepancies increased. For particles larger than 200 nm, the uncertainty range increased to 30%, which could not be explained. The network reference mobility spectrometers with identical design agreed within ±4% in the
Tsai, Meng-Yin; Lan, Kuo-Chung; Ou, Chia-Yo; Chen, Jen-Huang; Chang, Shiuh-Young; Hsu, Te-Yao
2004-02-01
Our purpose was to evaluate whether the application of serial three-dimensional (3D) sonography and the mandibular size nomogram can allow observation of dynamic changes in facial features, as well as chin development in utero. The mandibular size nomogram was established through a cross-sectional study involving 183 fetal images. The serial changes of facial features and chin development were assessed in a cohort study involving 40 patients. The nomogram reveals that the biparietal distance (BPD) / mandibular body length (MBL) ratio gradually decreases with advancing gestational age. The cohort study conducted with serial 3D sonography shows the same tendency. Both the images and the results of the paired-samples t test together with the nomogram display disproportionate growth of the fetal head and chin that leads to changes in facial features in late gestation. This fact must be considered when we evaluate fetuses at risk for development of micrognathia.
The Size and Structure of Government
Michael, Bryane; Popov, Maja
2015-01-01
Does government size and structure adapt to changes in government’s organisational environment (particularly to uncertainty and complexity) as predicted by organisational theory? We find – using a range of statistical analyses – support for each of the major theories of organisation adaptation (the contingency-based view, resource-based view, and rational choice view). We find that both government size and structure change – holding other factors constant – for changes in the uncertaint...
Investment in different sized SMRs: Economic evaluation of stochastic scenarios by INCAS code
Energy Technology Data Exchange (ETDEWEB)
Barenghi, S.; Boarin, S.; Ricotti, M. E. [Politecnico di Milano, Dept. of Energy, CeSNEF-Nuclear Engineering Div., via La Masa 34, 20156 Milano (Italy)
2012-07-01
Small Modular LWR concepts are being developed and proposed to investors worldwide. They capitalize on the operating track record of GEN II LWRs, while introducing innovative design enhancements allowed by the smaller size, and additional benefits from the higher degree of modularization and from the deployment of multiple units on the same site (i.e. the 'Economy of Multiple' paradigm). Nevertheless, Small Modular Reactors pay for a diseconomy of scale that represents a relevant penalty on a capital-intensive investment. Investors in the nuclear power generation industry face a very high financial risk, due to the high capital commitment and the exceptionally long pay-back time. Investment risk arises from the uncertainty that affects scenario conditions over such a long time horizon. Risk aversion is increased by adverse conditions of financial markets and a general economic downturn, as is the case nowadays. This work investigates both the investment profitability and the risk of alternative investments in a single Large Reactor (LR) or in multiple SMRs of different sizes, drawing information from the stochastic distribution of the project's Internal Rate of Return (IRR); the SMR alternative considers deployment of multiple units on a single site with total installed power equivalent to a single LR. Uncertain scenario conditions and stochastic input assumptions are included in the analysis, representing investment uncertainty and risk. Results show that, despite the combination of a much larger number of stochastic variables in SMR fleets, the uncertainty of project profitability is not increased compared with the LR: SMRs have features able to smooth the IRR variance and control investment risk. Despite the diseconomy of scale, SMRs represent a limited capital commitment and a scalable investment option that meets investors' interest, even in the developed and mature markets that are the traditional marketplace for LRs. (authors)
Introduction of risk size in the determination of uncertainty factor UFL in risk assessment
International Nuclear Information System (INIS)
Xue Jinling; Lu Yun; Velasquez, Natalia; Hu Hongying; Yu Ruozhen; Liu Zhengtao; Meng Wei
2012-01-01
The methodology for using uncertainty factors in health risk assessment has been developed over several decades. A default value is usually applied for the uncertainty factor UF_L, which is used to extrapolate from the LOAEL (lowest observed adverse effect level) to the NAEL (no adverse effect level). Here, we have developed a new method that establishes a linear relationship between UF_L and the additional risk level at the LOAEL based on the dose–response information, which represents a very important factor that should be carefully considered. This linear formula makes it possible to select UF_L properly in the additional risk range from 5.3% to 16.2%. The results also indicate that the default value of 10 may not be conservative enough when the additional risk level at the LOAEL exceeds 16.2%. Furthermore, this novel method not only provides a flexible UF_L instead of the traditional default value, but also ensures a conservative estimation of UF_L with fewer errors, and avoids the benchmark response selection involved in the benchmark dose method. These advantages can improve the estimation of the extrapolation starting point in the risk assessment. (letter)
Impact of Damping Uncertainty on SEA Model Response Variance
Schiller, Noah; Cabell, Randolph; Grosveld, Ferdinand
2010-01-01
Statistical Energy Analysis (SEA) is commonly used to predict high-frequency vibroacoustic levels. This statistical approach provides the mean response over an ensemble of random subsystems that share the same gross system properties, such as density, size, and damping. Recently, techniques have been developed to predict the ensemble variance as well as the mean response. However, these techniques do not account for uncertainties in the system properties. In the present paper, uncertainty in the damping loss factor is propagated through SEA to obtain more realistic prediction bounds that account for both ensemble and damping variance. The analysis is performed on a floor-equipped cylindrical test article that resembles an aircraft fuselage. Realistic bounds on the damping loss factor are determined from measurements acquired on the sidewall of the test article. The analysis demonstrates that uncertainties in damping have the potential to significantly impact the mean and variance of the predicted response.
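Propagating damping uncertainty through the SEA mean response can be sketched with a one-subsystem power balance; the loss-factor distribution, frequency, and input power below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

def sea_energy(power, omega, eta):
    # Single-subsystem SEA power balance: P_in = omega * eta * E,
    # so the mean subsystem energy is E = P_in / (omega * eta).
    return power / (omega * eta)

# Damping loss factor with, e.g., ~20% lognormal scatter about eta = 0.02
eta = rng.lognormal(mean=np.log(0.02), sigma=0.2, size=10_000)
energy = sea_energy(power=1.0, omega=2 * np.pi * 500.0, eta=eta)

# Propagated response uncertainty (coefficient of variation), driven
# by the damping uncertainty alone:
cv = energy.std() / energy.mean()
print(cv)
```

Because the response scales as 1/eta, the relative scatter in damping transfers almost directly into relative scatter in the predicted level, which then adds to the ensemble variance SEA already predicts.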
Gates, Kevin; Chang, Ning; Dilek, Isil; Jian, Huahua; Pogue, Sherri; Sreenivasan, Uma
2009-10-01
Certified solution standards are widely used in forensic toxicological, clinical/diagnostic, and environmental testing. Typically, these standards are purchased as ampouled solutions with a certified concentration. Vendors present concentration and uncertainty differently on their Certificates of Analysis. Understanding the factors that impact uncertainty and which factors have been considered in the vendor's assignment of uncertainty are critical to understanding the accuracy of the standard and the impact on testing results. Understanding these variables is also important for laboratories seeking to comply with ISO/IEC 17025 requirements and for those preparing reference solutions from neat materials at the bench. The impact of uncertainty associated with the neat material purity (including residual water, residual solvent, and inorganic content), mass measurement (weighing techniques), and solvent addition (solution density) on the overall uncertainty of the certified concentration is described along with uncertainty calculations.
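The way the listed contributions combine into the certified concentration's uncertainty follows the usual GUM root-sum-of-squares rule for independent relative uncertainties; a sketch with invented example values:

```python
import math

def combined_uncertainty(conc, rel_u_purity, rel_u_mass, rel_u_volume):
    # GUM-style root-sum-of-squares combination of relative standard
    # uncertainties from the purity assay, weighing, and solvent addition,
    # assuming the contributions are independent.
    rel_u = math.sqrt(rel_u_purity ** 2 + rel_u_mass ** 2 + rel_u_volume ** 2)
    return conc * rel_u

# Hypothetical 1.000 mg/mL certified solution standard:
u = combined_uncertainty(1.0, rel_u_purity=0.004, rel_u_mass=0.001,
                         rel_u_volume=0.002)
U = 2.0 * u          # expanded uncertainty, coverage factor k = 2
print(u, U)          # mg/mL
```

Note how the purity term dominates once it is a few times larger than the weighing and volumetric terms, which is why the treatment of residual water and solvent in the neat material matters so much.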
Heisenberg's principle of uncertainty and the uncertainty relations
International Nuclear Information System (INIS)
Redei, Miklos
1987-01-01
The usual verbal form of the Heisenberg uncertainty principle and the usual mathematical formulation (the so-called uncertainty theorem) are not equivalent. The meaning of the concept 'uncertainty' is not unambiguous, and different interpretations are used in the literature. Recently, renewed interest has appeared in reinterpreting and reformulating the precise meaning of Heisenberg's principle and in finding an adequate mathematical form. The suggested new theorems are surveyed and critically analyzed. (D.Gy.) 20 refs
Koch, Michael
Measurement uncertainty is one of the key issues in quality assurance. It became increasingly important for analytical chemistry laboratories with accreditation to ISO/IEC 17025. The uncertainty of a measurement is the most important criterion for deciding whether a measurement result is fit for purpose. It also helps in deciding whether a specification limit is exceeded. Estimating measurement uncertainty is often not trivial. Several strategies have been developed for this purpose and are described briefly in this chapter. In addition, the different possibilities for taking uncertainty into account in compliance assessment are explained.
Al-Ghraibah, Amani
Solar flares release stored magnetic energy in the form of radiation and can have significant detrimental effects on Earth, including damage to technological infrastructure. Recent work has considered methods to predict future flare activity on the basis of quantitative measures of the solar magnetic field. Accurate advance warning of solar flare occurrence is an area of increasing concern, and much research is ongoing in this area. Our previous work [111] utilized standard pattern recognition and classification techniques to determine (classify) whether a region is expected to flare within a predictive time window, using a Relevance Vector Machine (RVM) classification method. We extracted 38 features describing the complexity of the photospheric magnetic field; the resulting classification metrics provide the baseline against which we compare our new work. We find a true positive rate (TPR) of 0.8, a true negative rate (TNR) of 0.7, and a true skill score (TSS) of 0.49. This dissertation proposes three basic topics. The first topic is an extension of our previous work [111], where we consider a feature selection method to determine an appropriate feature subset, with cross-validation classification based on a histogram analysis of selected features. Classification using the top five features resulting from this analysis yields better classification accuracies across a large unbalanced dataset. In particular, the feature subsets provide better discrimination of the many regions that flare, where we find a TPR of 0.85, a TNR of 0.65 (slightly lower than our previous work), and a TSS of 0.5, an improvement over our previous work. In the second topic, we study the prediction of solar flare size and time-to-flare using support vector regression (SVR). When we consider flaring regions only, we find an average error in estimating flare size of approximately half a GOES class. When we additionally consider non-flaring regions, we find an increased average
Arnaud, Patrick; Cantet, Philippe; Odry, Jean
2017-11-01
Flood frequency analyses (FFAs) are needed for flood risk management. Many methods exist ranging from classical purely statistical approaches to more complex approaches based on process simulation. The results of these methods are associated with uncertainties that are sometimes difficult to estimate due to the complexity of the approaches or the number of parameters, especially for process simulation. This is the case of the simulation-based FFA approach called SHYREG presented in this paper, in which a rainfall generator is coupled with a simple rainfall-runoff model in an attempt to estimate the uncertainties due to the estimation of the seven parameters needed to estimate flood frequencies. The six parameters of the rainfall generator are mean values, so their theoretical distribution is known and can be used to estimate the generator uncertainties. In contrast, the theoretical distribution of the single hydrological model parameter is unknown; consequently, a bootstrap method is applied to estimate the calibration uncertainties. The propagation of uncertainty from the rainfall generator to the hydrological model is also taken into account. This method is applied to 1112 basins throughout France. Uncertainties coming from the SHYREG method and from purely statistical approaches are compared, and the results are discussed according to the length of the recorded observations, basin size and basin location. Uncertainties of the SHYREG method decrease as the basin size increases or as the length of the recorded flow increases. Moreover, the results show that the confidence intervals of the SHYREG method are relatively small despite the complexity of the method and the number of parameters (seven). This is due to the stability of the parameters and takes into account the dependence of uncertainties due to the rainfall model and the hydrological calibration. Indeed, the uncertainties on the flow quantiles are on the same order of magnitude as those associated with
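The bootstrap used above for the single hydrological parameter can be sketched as follows; the one-parameter "calibration" and the synthetic flow record are invented stand-ins for the SHYREG rainfall-runoff calibration:

```python
import numpy as np

rng = np.random.default_rng(3)

def calibrate(flows):
    # Hypothetical one-parameter calibration: here simply the mean annual
    # peak flow, standing in for the fitted hydrological model parameter.
    return flows.mean()

obs = rng.gamma(shape=2.0, scale=50.0, size=40)   # 40 yr of annual peaks

# Non-parametric bootstrap: refit the parameter on resampled records
# to estimate its calibration uncertainty.
boot = np.array([calibrate(rng.choice(obs, size=obs.size, replace=True))
                 for _ in range(2000)])
lo, hi = np.percentile(boot, [2.5, 97.5])          # 95% interval
print(lo, calibrate(obs), hi)
```

Each bootstrap parameter value can then be pushed through the rainfall generator and runoff model, which is how the calibration uncertainty propagates to the flood quantiles.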
Reducing measurement uncertainty drives the use of multiple technologies for supporting metrology
Banke, Bill, Jr.; Archie, Charles N.; Sendelbach, Matthew; Robert, Jim; Slinkman, James A.; Kaszuba, Phil; Kontra, Rick; DeVries, Mick; Solecky, Eric P.
2004-05-01
Perhaps never before in semiconductor microlithography has there been such an interest in the accuracy of measurement. This interest places new demands on our in-line metrology systems as well as the supporting metrology for verification. This also puts a burden on the users and suppliers of new measurement tools, which both challenge and complement existing manufacturing metrology. The metrology community needs to respond to these challenges by using new methods to assess the fab metrologies. An important part of this assessment process is the ability to obtain accepted reference measurements as a way of determining the accuracy and Total Measurement Uncertainty (TMU) of an in-line critical dimension (CD). In this paper, CD can mean any critical dimension including, for example, such measures as feature height or sidewall angle. This paper describes the trade-offs of in-line metrology systems as well as the limitations of Reference Measurement Systems (RMS). Many factors influence each application such as feature shape, material properties, proximity, sampling, and critical dimension. These factors, along with the metrology probe size, interaction volume, and probe type such as e-beam, optical beam, and mechanical probe, are considered. As the size of features shrinks below 100nm some of the stalwarts of reference metrology come into question, such as the electrically determined transistor gate length. The concept of the RMS is expanded to show how multiple metrologies are needed to achieve the right balance of accuracy and sampling. This is also demonstrated for manufacturing metrology. Various comparisons of CDSEM, scatterometry, AFM, cross section SEM, electrically determined CDs, and TEM are shown. An example is given which demonstrates the importance in obtaining TMU by balancing accuracy and precision for selecting manufacturing measurement strategy and optimizing manufacturing metrology. It is also demonstrated how the necessary supporting metrology will
International Nuclear Information System (INIS)
HELTON, JON CRAIG; MARTELL, MARY-ALENA; TIERNEY, MARTIN S.
2000-01-01
The 1996 performance assessment (PA) for the Waste Isolation Pilot Plant (WIPP) maintains a separation between stochastic (i.e., aleatory) and subjective (i.e., epistemic) uncertainty, with stochastic uncertainty arising from the possible disruptions that could occur at the WIPP over the 10,000 yr regulatory period specified by the US Environmental Protection Agency (40 CFR 191, 40 CFR 194) and subjective uncertainty arising from an inability to uniquely characterize many of the inputs required in the 1996 WIPP PA. The characterization of subjective uncertainty is discussed, including assignment of distributions, uncertain variables selected for inclusion in analysis, correlation control, sample size, statistical confidence on mean complementary cumulative distribution functions, generation of Latin hypercube samples, sensitivity analysis techniques, and scenarios involving stochastic and subjective uncertainty.
Energy Technology Data Exchange (ETDEWEB)
HELTON,JON CRAIG; MARTELL,MARY-ALENA; TIERNEY,MARTIN S.
2000-05-18
The 1996 performance assessment (PA) for the Waste Isolation Pilot Plant (WIPP) maintains a separation between stochastic (i.e., aleatory) and subjective (i.e., epistemic) uncertainty, with stochastic uncertainty arising from the possible disruptions that could occur at the WIPP over the 10,000 yr regulatory period specified by the US Environmental Protection Agency (40 CFR 191,40 CFR 194) and subjective uncertainty arising from an inability to uniquely characterize many of the inputs required in the 1996 WIPP PA. The characterization of subjective uncertainty is discussed, including assignment of distributions, uncertain variables selected for inclusion in analysis, correlation control, sample size, statistical confidence on mean complementary cumulative distribution functions, generation of Latin hypercube samples, sensitivity analysis techniques, and scenarios involving stochastic and subjective uncertainty.
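The Latin hypercube sampling step named in this abstract is easy to sketch. The following Python fragment is a generic illustration under my own conventions (unit hypercube, one stratum per sample), not the WIPP PA implementation:

```python
import random

def latin_hypercube(n, dims, seed=0):
    """Draw n Latin hypercube samples on the unit cube [0, 1)^dims.

    Each variable's range is split into n equal strata; every stratum is
    sampled exactly once, and strata are paired across dimensions by
    independent random permutations.
    """
    rng = random.Random(seed)
    samples = [[0.0] * dims for _ in range(n)]
    for d in range(dims):
        # one uniform point inside each stratum, then shuffle the order
        strata = [(i + rng.random()) / n for i in range(n)]
        rng.shuffle(strata)
        for i in range(n):
            samples[i][d] = strata[i]
    return samples
```

Mapping each coordinate through an inverse CDF then yields stratified draws from whichever distributions the analysts assigned.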
Sensitivity to Uncertainty in Asteroid Impact Risk Assessment
Mathias, D.; Wheeler, L.; Prabhu, D. K.; Aftosmis, M.; Dotson, J.; Robertson, D. K.
2015-12-01
The Engineering Risk Assessment (ERA) team at NASA Ames Research Center is developing a physics-based impact risk model for probabilistically assessing threats from potential asteroid impacts on Earth. The model integrates probabilistic sampling of asteroid parameter ranges with physics-based analyses of entry, breakup, and impact to estimate damage areas and casualties from various impact scenarios. Assessing these threats is a highly coupled, dynamic problem involving significant uncertainties in the range of expected asteroid characteristics, how those characteristics may affect the level of damage, and the fidelity of various modeling approaches and assumptions. The presented model is used to explore the sensitivity of impact risk estimates to these uncertainties in order to gain insight into what additional data or modeling refinements are most important for producing effective, meaningful risk assessments. In the extreme cases of very small or very large impacts, the results are generally insensitive to many of the characterization and modeling assumptions. However, the nature of the sensitivity can change across moderate-sized impacts. Results will focus on the value of additional information in this critical, mid-size range, and how this additional data can support more robust mitigation decisions.
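The probabilistic sampling of asteroid parameter ranges can be caricatured in a few lines. All distributions and bounds below are illustrative assumptions of mine, not the ERA team's actual inputs:

```python
import math
import random

def sample_impact_energy(rng):
    """Draw one hypothetical impact scenario and return its entry
    kinetic energy in megatons of TNT (all ranges are illustrative)."""
    diameter = 10 ** rng.uniform(1.0, 2.5)   # m: ~10 to ~316
    density = rng.uniform(1500.0, 3500.0)    # kg/m^3: rubble to stony
    velocity = rng.uniform(12e3, 25e3)       # m/s at atmospheric entry
    mass = density * math.pi / 6.0 * diameter ** 3  # spherical body
    return 0.5 * mass * velocity ** 2 / 4.184e15    # J per Mt TNT

def energy_percentiles(n, seed=0):
    """Monte Carlo sweep: median and 95th-percentile impact energy."""
    rng = random.Random(seed)
    energies = sorted(sample_impact_energy(rng) for _ in range(n))
    return energies[n // 2], energies[int(0.95 * n)]
```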
Risk, unexpected uncertainty, and estimation uncertainty: Bayesian learning in unstable settings.
Directory of Open Access Journals (Sweden)
Elise Payzan-LeNestour
Full Text Available Recently, evidence has emerged that humans approach learning using Bayesian updating rather than (model-free) reinforcement algorithms in a six-arm restless bandit problem. Here, we investigate what this implies for human appreciation of uncertainty. In our task, a Bayesian learner distinguishes three equally salient levels of uncertainty. First, the Bayesian learner perceives irreducible uncertainty, or risk: even knowing the payoff probabilities of a given arm, the outcome remains uncertain. Second, there is (parameter) estimation uncertainty, or ambiguity: payoff probabilities are unknown and need to be estimated. Third, the outcome probabilities of the arms change: the sudden jumps are referred to as unexpected uncertainty. We document how the three levels of uncertainty evolved during the course of our experiment and how they affected the learning rate. We then zoom in on estimation uncertainty, which has been suggested to be a driving force in exploration, in spite of evidence of widespread aversion to ambiguity. Our data corroborate the latter. We discuss neural evidence that foreshadowed the ability of humans to distinguish between the three levels of uncertainty. Finally, we investigate the boundaries of human capacity to implement Bayesian learning. We repeat the experiment with different instructions, reflecting varying levels of structural uncertainty. Under this fourth notion of uncertainty, choices were no better explained by Bayesian updating than by (model-free) reinforcement learning. Exit questionnaires revealed that participants remained unaware of the presence of unexpected uncertainty and failed to acquire the right model with which to implement Bayesian updating.
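A Beta-Bernoulli learner makes the three uncertainty levels concrete. This Python sketch is my own minimal caricature, not the authors' model: posterior variance captures estimation uncertainty, while an exponential forgetting step keeps the learner responsive to sudden jumps (unexpected uncertainty).

```python
def beta_update(alpha, beta, reward):
    """Conjugate update of a Beta(alpha, beta) belief about one arm's
    payoff probability after observing a 0/1 reward."""
    return alpha + reward, beta + (1 - reward)

def estimation_uncertainty(alpha, beta):
    """Posterior variance: the (parameter) estimation uncertainty,
    which shrinks as evidence accumulates."""
    n = alpha + beta
    return alpha * beta / (n * n * (n + 1))

def forget(alpha, beta, rate=0.9, prior=1.0):
    """Exponential forgetting toward the prior, a crude hedge against
    sudden jumps in the arms' outcome probabilities."""
    return (prior + rate * (alpha - prior), prior + rate * (beta - prior))
```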
Energy Technology Data Exchange (ETDEWEB)
Vinai, P
2007-10-15
database, are associated with various individual points over the state space. By applying a novel multi-dimensional clustering technique, based on the non-parametric Kruskal-Wallis statistical test, it has been possible to achieve a partitioning of the state space into regions differing in the quality of the physical model's predictions. The second step has been the quantification of the model's uncertainty, for each of the identified state space regions, by applying a probability density function (pdf) estimator. This is a kernel-type estimator, modelled on a universal orthogonal series estimator, such that its behaviour takes advantage of the good features of both estimator types and yields reasonable pdfs, even with samples of small size and not very compact distributions. The pdfs provide a reliable basis for sampling 'error values' for use in Monte-Carlo-type uncertainty propagation studies, aimed at quantifying the impact of the physical model's uncertainty on the code's output variables of interest. The effectiveness of the developed methodology was demonstrated by applying it to the quantification of the uncertainty related to the thermal-hydraulic (drift-flux) models implemented in the best-estimate safety analysis code RETRAN-3D. This was done using a wide database of void-fraction experiments for saturated and sub-cooled conditions. Appropriate pdfs were generated for quantification of the physical model's uncertainty in a 2-dimensional (pressure/mass-flux) state space, partitioned into 3 separate regions. The impact of the RETRAN-3D drift-flux model uncertainties has been assessed at three different levels of the code's application: (a) Achilles Experiment No. 2, a separate-effect experiment not included in the original assessment database; (b) Omega Rod Bundle Test No. 9, an integral experiment simulating a PWR loss-of-coolant accident (LOCA); and (c) the Peach Bottom turbine trip test, a NPP (BWR
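The role such a pdf estimator plays in Monte-Carlo error propagation can be illustrated with a plain Gaussian kernel density estimator. The actual estimator described above blends kernel and orthogonal-series behaviour, so the sketch below is a deliberate simplification:

```python
import math
import random

def gaussian_kde_pdf(x, errors, bandwidth):
    """Kernel density estimate at x built from observed model-error
    values (a plain Gaussian KDE, simpler than the hybrid estimator)."""
    c = 1.0 / (len(errors) * bandwidth * math.sqrt(2.0 * math.pi))
    return c * sum(math.exp(-0.5 * ((x - e) / bandwidth) ** 2)
                   for e in errors)

def sample_error(errors, bandwidth, rng):
    """Draw an 'error value' from the KDE for Monte-Carlo propagation:
    pick a stored error at random and jitter it by the kernel."""
    return rng.choice(errors) + rng.gauss(0.0, bandwidth)
```

Each Monte-Carlo realization perturbs the physical model's output by one such sampled error, and the spread of the resulting code outputs quantifies the propagated uncertainty.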
Uncertainty Flow Facilitates Zero-Shot Multi-Label Learning in Affective Facial Analysis
Directory of Open Access Journals (Sweden)
Wenjun Bai
2018-02-01
Full Text Available Featured Application: The proposed Uncertainty Flow framework may benefit facial analysis with its promised elevation in discriminability in multi-label affective classification tasks. Moreover, this framework also allows efficient model training and between-task knowledge transfer. Applications that rely heavily on continuous prediction of emotional valence, e.g., monitoring prisoners' emotional stability, can benefit directly from our framework. Abstract: To lower the single-label dependency of affective facial analysis, multi-label affective learning must be brought to fruition. The impediment to practical implementation of existing multi-label algorithms is the scarcity of scalable multi-label training datasets. To resolve this, an inductive transfer learning based framework, i.e., Uncertainty Flow, is put forward in this research to allow knowledge transfer from a single-labelled emotion recognition task to a multi-label affective recognition task. That is, the model uncertainty, which can be quantified in Uncertainty Flow, is distilled from a single-label learning task. The distilled model uncertainty enables subsequent efficient zero-shot multi-label affective learning. From a theoretical perspective, within our proposed Uncertainty Flow framework, the feasibility of applying weakly informative priors, e.g., uniform and Cauchy priors, is fully explored in this research. More importantly, based on the derived weight uncertainty, three sets of prediction-related uncertainty indexes, i.e., soft-max uncertainty, pure uncertainty and uncertainty plus, are proposed to produce reliable and accurate multi-label predictions. Validated on our manually annotated evaluation dataset, i.e., the multi-label annotated FER2013, our proposed Uncertainty Flow in multi-label facial expression analysis exhibited superiority over conventional multi-label learning algorithms and multi-label compatible neural networks. The success of our
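The paper's three uncertainty indexes are not defined in this abstract; one common quantity in this family is the predictive entropy of the softmax output. The sketch below is therefore my assumption of what a "soft-max uncertainty" index could look like, not the paper's definition:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of raw scores."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def softmax_uncertainty(logits):
    """Predictive entropy (nats) of the softmax distribution: high for
    flat outputs, low for confident ones (hypothetical index)."""
    return -sum(p * math.log(p) for p in softmax(logits) if p > 0)
```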
Lee, Hee-Sun; Pallant, Amy; Pryputniewicz, Sarah
2015-08-01
Teaching scientific argumentation has emerged as an important goal for K-12 science education. In scientific argumentation, students are actively involved in coordinating evidence with theory based on their understanding of the scientific content and thinking critically about the strengths and weaknesses of the cited evidence in the context of the investigation. We developed a one-week-long online curriculum module called "Is there life in space?" where students conduct a series of four model-based tasks to learn how scientists detect extrasolar planets through the “wobble” and transit methods. The simulation model allows students to manipulate various parameters of an imaginary star and planet system such as planet size, orbit size, planet-orbiting-plane angle, and sensitivity of telescope equipment, and to adjust the display settings for graphs illustrating the relative velocity and light intensity of the star. Students can use model-based evidence to formulate an argument on whether particular signals in the graphs guarantee the presence of a planet. Students' argumentation is facilitated by the four-part prompts consisting of multiple-choice claim, open-ended explanation, Likert-scale uncertainty rating, and open-ended uncertainty rationale. We analyzed 1,013 scientific arguments formulated by 302 high school student groups taught by 7 teachers. We coded these arguments in terms of the accuracy of their claim, the sophistication of explanation connecting evidence to the established knowledge base, the uncertainty rating, and the scientific validity of uncertainty. We found that (1) only 18% of the students' uncertainty rationale involved critical reflection on limitations inherent in data and concepts, (2) 35% of students' uncertainty rationale reflected their assessment of personal ability and knowledge, rather than scientific sources of uncertainty related to the evidence, and (3) the nature of task such as the use of noisy data or the framing of
Geological-structural models used in SR 97. Uncertainty analysis
Energy Technology Data Exchange (ETDEWEB)
Saksa, P.; Nummela, J. [FINTACT Oy (Finland)
1998-10-01
The uncertainty of geological-structural models was studied for the three sites in SR 97, called Aberg, Beberg and Ceberg. The evaluation covered both regional and site scale models, the emphasis being placed on fracture zones in the site scale. Uncertainty is a natural feature of all geoscientific investigations. It originates from measurements (errors in data, sampling limitations, scale variation) and conceptualisation (structural geometries and properties, ambiguous geometric or parametric solutions), to name the major sources. The structures of A-, B- and Ceberg are fracture zones of varying types. No major differences in the conceptualisation between the sites were noted. One source of uncertainty in the site models is the absence of fracture and zone information at scales from 10 to 300 - 1000 m. At Aberg the development of the regional model has been performed very thoroughly. At the site scale, one major source of uncertainty is that a clear definition of the target area is missing. Structures encountered in the boreholes are well explained, and an interdisciplinary approach to interpretation has been taken. The Beberg and Ceberg regional models contain relatively large uncertainties due to the investigation methodology and experience available at that time. At the site scale, six additional structures were proposed for both Beberg and Ceberg for the variant analyses of these sites. Both sites include uncertainty in the form of many non-interpreted fractured sections along the boreholes. Statistical analysis gives high occurrences of structures for all three sites: typically 20 - 30 structures/km³. Aberg has the highest structural frequency, Beberg comes next and Ceberg has the lowest. The borehole configurations, orientations and surveying goals were inspected to find whether preferences or factors causing bias were present. Data from Aberg support the conclusion that the Aespoe sub-volume would be an anomalously fractured, tectonised unit of its own. This means that
Geological-structural models used in SR 97. Uncertainty analysis
International Nuclear Information System (INIS)
Saksa, P.; Nummela, J.
1998-10-01
The uncertainty of geological-structural models was studied for the three sites in SR 97, called Aberg, Beberg and Ceberg. The evaluation covered both regional and site scale models, the emphasis being placed on fracture zones in the site scale. Uncertainty is a natural feature of all geoscientific investigations. It originates from measurements (errors in data, sampling limitations, scale variation) and conceptualisation (structural geometries and properties, ambiguous geometric or parametric solutions), to name the major sources. The structures of A-, B- and Ceberg are fracture zones of varying types. No major differences in the conceptualisation between the sites were noted. One source of uncertainty in the site models is the absence of fracture and zone information at scales from 10 to 300 - 1000 m. At Aberg the development of the regional model has been performed very thoroughly. At the site scale, one major source of uncertainty is that a clear definition of the target area is missing. Structures encountered in the boreholes are well explained, and an interdisciplinary approach to interpretation has been taken. The Beberg and Ceberg regional models contain relatively large uncertainties due to the investigation methodology and experience available at that time. At the site scale, six additional structures were proposed for both Beberg and Ceberg for the variant analyses of these sites. Both sites include uncertainty in the form of many non-interpreted fractured sections along the boreholes. Statistical analysis gives high occurrences of structures for all three sites: typically 20 - 30 structures/km³. Aberg has the highest structural frequency, Beberg comes next and Ceberg has the lowest. The borehole configurations, orientations and surveying goals were inspected to find whether preferences or factors causing bias were present. Data from Aberg support the conclusion that the Aespoe sub-volume would be an anomalously fractured, tectonised unit of its own. This means that the
Urban-rural migration: uncertainty and the effect of a change in the minimum wage.
Ingene, C A; Yu, E S
1989-01-01
"This paper extends the neoclassical, Harris-Todaro model of urban-rural migration to the case of production uncertainty in the agricultural sector. A unique feature of the Harris-Todaro model is an exogenously determined minimum wage in the urban sector that exceeds the rural wage. Migration occurs until the rural wage equals the expected urban wage ('expected' due to employment uncertainty). The effects of a change in the minimum wage upon regional outputs, resource allocation, factor rewards, expected profits, and expected national income are explored, and the influence of production uncertainty upon the obtained results are delineated." The geographical focus is on developing countries. excerpt
Medical Need, Equality, and Uncertainty.
Horne, L Chad
2016-10-01
Many hold that distributing healthcare according to medical need is a requirement of equality. Most egalitarians believe, however, that people ought to be equal on the whole, by some overall measure of well-being or life-prospects; it would be a massive coincidence if distributing healthcare according to medical need turned out to be an effective way of promoting equality overall. I argue that distributing healthcare according to medical need is important for reducing individuals' uncertainty surrounding their future medical needs. In other words, distributing healthcare according to medical need is a natural feature of healthcare insurance; it is about indemnity, not equality. © 2016 John Wiley & Sons Ltd.
[Influence of Uncertainty and Uncertainty Appraisal on Self-management in Hemodialysis Patients].
Jang, Hyung Suk; Lee, Chang Suk; Yang, Young Hee
2015-04-01
This study was done to examine the relations among uncertainty, uncertainty appraisal, and self-management in patients undergoing hemodialysis, and to identify factors influencing self-management. A convenience sample of 92 patients receiving hemodialysis was selected. Data were collected using a structured questionnaire and medical records. The collected data were analyzed using descriptive statistics, t-test, ANOVA, Pearson correlations and multiple regression analysis with the SPSS/WIN 20.0 program. The participants showed a moderate level of uncertainty, with the highest score being for ambiguity among the four uncertainty subdomains. Scores for both danger and opportunity appraisals of uncertainty were below the midpoint. The participants were found to perform a high level of self-management, such as diet control, management of arteriovenous fistula, exercise, medication, physical management, measurements of body weight and blood pressure, and social activity. The self-management of participants undergoing hemodialysis showed a significant relationship with uncertainty and uncertainty appraisal. The significant factors influencing self-management were uncertainty, uncertainty opportunity appraisal, hemodialysis duration, and having a spouse. These variables explained 32.8% of the variance in self-management. The results suggest that intervention programs to reduce the level of uncertainty and to increase the level of uncertainty opportunity appraisal among patients would improve the self-management of hemodialysis patients.
Finite size effects of a pion matrix element
International Nuclear Information System (INIS)
Guagnelli, M.; Jansen, K.; Palombi, F.; Petronzio, R.; Shindler, A.; Wetzorke, I.
2004-01-01
We investigate finite size effects of the pion matrix element of the non-singlet, twist-2 operator corresponding to the average momentum of non-singlet quark densities. Using the quenched approximation, they come out to be surprisingly large when compared to the finite size effects of the pion mass. As a consequence, simulations of corresponding nucleon matrix elements could be affected even more strongly by finite size effects, which could lead to serious systematic uncertainties in their evaluation.
UNCERTAINTY SUPPLY CHAIN MODEL AND TRANSPORT IN ITS DEPLOYMENTS
Directory of Open Access Journals (Sweden)
Fabiana Lucena Oliveira
2014-05-01
Full Text Available This article discusses the Uncertainty Supply Chain Model and proposes a matrix matching supply chains with the transportation modes best suited to them. From the detailed analysis of the uncertainty matrix, the transportation modes best suited to the management of these chains are suggested, so that transport most appropriately optimizes the gains proposed by the original model, particularly when supply chains are distant from suppliers of raw materials and/or supplies. Here we analyze in detail Agile Supply Chains, a result of the Uncertainty Supply Chain Model, with special attention to the Manaus Industrial Center. This research was done at the Manaus Industrial Pole, a model of industrial agglomerations based in Manaus, State of Amazonas (Brazil), which contemplates different supply chains and strategies sharing the same infrastructure for transport, handling, storage, and clearance, and uses inbound logistics for suppliers of raw material. The state of the art covers supply chain management, the Uncertainty Supply Chain Model, agile supply chains, the Manaus Industrial Center (MIC), and Brazilian legislation, as a business case, and presents the concepts and features of each. The main goal is to present and discuss how transport is able to support the Uncertainty Supply Chain Model, in order to complete the management model. The results obtained confirm the hypothesis that integrated logistics processes are able to guarantee attractiveness for industrial agglomerations, and open discussions about cases where suppliers are far from the manufacturing center, in terms of logistics management.
Uncertainty in counting ice nucleating particles with continuous flow diffusion chambers
Garimella, Sarvesh; Rothenberg, Daniel A.; Wolf, Martin J.; David, Robert O.; Kanji, Zamin A.; Wang, Chien; Rösch, Michael; Cziczo, Daniel J.
2017-09-01
This study investigates the measurement of ice nucleating particle (INP) concentrations and sizing of crystals using continuous flow diffusion chambers (CFDCs). CFDCs have been deployed for decades to measure the formation of INPs under controlled humidity and temperature conditions in laboratory studies and by ambient aerosol populations. These measurements have, in turn, been used to construct parameterizations for use in models by relating the formation of ice crystals to state variables such as temperature and humidity as well as aerosol particle properties such as composition and number. We show here that assumptions of ideal instrument behavior are not supported by measurements made with a commercially available CFDC, the SPectrometer for Ice Nucleation (SPIN), and the instrument on which it is based, the Zurich Ice Nucleation Chamber (ZINC). Non-ideal instrument behavior, which is likely inherent to varying degrees in all CFDCs, is caused by exposure of particles to different humidities and/or temperatures than predicted from instrument theory of operation. This can result in a systematic, and variable, underestimation of reported INP concentrations. We find here variable correction factors from 1.5 to 9.5, consistent with previous literature values. We use a machine learning approach to show that non-ideality is most likely due to small-scale flow features where the aerosols are combined with sheath flows. Machine learning is also used to minimize the uncertainty in measured INP concentrations. We suggest that detailed measurement, on an instrument-by-instrument basis, be performed to characterize this uncertainty.
Uncertainty in counting ice nucleating particles with continuous flow diffusion chambers
Directory of Open Access Journals (Sweden)
S. Garimella
2017-09-01
Full Text Available This study investigates the measurement of ice nucleating particle (INP) concentrations and sizing of crystals using continuous flow diffusion chambers (CFDCs). CFDCs have been deployed for decades to measure the formation of INPs under controlled humidity and temperature conditions in laboratory studies and by ambient aerosol populations. These measurements have, in turn, been used to construct parameterizations for use in models by relating the formation of ice crystals to state variables such as temperature and humidity as well as aerosol particle properties such as composition and number. We show here that assumptions of ideal instrument behavior are not supported by measurements made with a commercially available CFDC, the SPectrometer for Ice Nucleation (SPIN), and the instrument on which it is based, the Zurich Ice Nucleation Chamber (ZINC). Non-ideal instrument behavior, which is likely inherent to varying degrees in all CFDCs, is caused by exposure of particles to different humidities and/or temperatures than predicted from instrument theory of operation. This can result in a systematic, and variable, underestimation of reported INP concentrations. We find here variable correction factors from 1.5 to 9.5, consistent with previous literature values. We use a machine learning approach to show that non-ideality is most likely due to small-scale flow features where the aerosols are combined with sheath flows. Machine learning is also used to minimize the uncertainty in measured INP concentrations. We suggest that detailed measurement, on an instrument-by-instrument basis, be performed to characterize this uncertainty.
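Since the reported correction factors span 1.5 to 9.5, a measured concentration only brackets the true one. A minimal helper (my own convention, not the authors' code) makes the bracketing explicit:

```python
def corrected_inp_range(measured_per_liter, cf_low=1.5, cf_high=9.5):
    """Bracket the true INP concentration given a CFDC reading.

    CFDC non-ideality causes systematic undercounting, so the true
    value is taken to lie between measured*cf_low and measured*cf_high
    (defaults are the factor range reported in the study).
    """
    if measured_per_liter < 0:
        raise ValueError("concentration cannot be negative")
    return measured_per_liter * cf_low, measured_per_liter * cf_high
```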
Directory of Open Access Journals (Sweden)
Akhtar Hussain
2017-06-01
Full Text Available Integration of demand response (DR) programs and battery energy storage systems (BESS) in microgrids is beneficial for both microgrid owners and consumers. The intensity of DR programs and the BESS size can alter the operation of microgrids. Meanwhile, the optimal size for BESS units is linked with the uncertainties associated with renewable energy sources and load variations. Similarly, the participation of enrolled customers in DR programs is also uncertain and, among various other factors, uncertainty in market prices is a major cause. Therefore, in this paper, the impact of DR program intensity and BESS size on the operation of networked microgrids is analyzed while considering the prevailing uncertainties. The uncertainties associated with forecast load values, the output of renewable generators, and market prices are realized via the robust optimization method. Robust optimization has the capability to provide immunity against the worst-case scenario, provided the uncertainties lie within the specified bounds. The worst-case scenario of the prevailing uncertainties is considered for evaluating the feasibility of the proposed method. The two representative categories of DR programs, i.e., price-based and incentive-based DR programs, are considered. The impact of changes in DR intensity and BESS size on the operation cost of the microgrid network, external power trading, internal power transfer, the load profile of the network, and the state-of-charge (SOC) of BESS units is analyzed. Simulation results are analyzed to determine the integration of favorable DR programs and/or BESS units for different microgrid networks with diverse objectives.
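The core move of robust optimization, letting an adversary pick the worst price within its bound, can be shown for a single trading interval. This is a toy sketch under assumed names; the paper solves a full networked-microgrid scheduling problem:

```python
def worst_case_cost(price_nominal, price_dev, net_import):
    """Worst-case energy cost for one interval: if the microgrid is a
    net importer the adversary raises the price to its upper bound;
    if it is a net exporter the price drops to its lower bound."""
    if net_import >= 0:
        price = price_nominal + price_dev
    else:
        price = price_nominal - price_dev
    return price * net_import
```

A schedule whose cost is acceptable under this adversarial price remains feasible for any realization inside the bounds, which is the immunity the abstract refers to.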
The Italian primary school-size distribution and the city-size: a complex nexus
Belmonte, Alessandro; di Clemente, Riccardo; Buldyrev, Sergey V.
2014-06-01
We characterize the statistical law according to which Italian primary school sizes are distributed. We find that school size can be approximated by a log-normal distribution, with a fat lower tail that collects a large number of very small schools. The upper tail of the school-size distribution decreases exponentially, and the growth rates are distributed with a Laplace PDF. These distributions are similar to those observed for firms and are consistent with a Bose-Einstein preferential attachment process. The body of the distribution features a bimodal shape, suggesting some source of heterogeneity in school organization that we uncover by an in-depth analysis of the relation between school size and city size. We propose a novel cluster methodology and a new spatial interaction approach among schools which outline the variety of policies implemented in Italy. Different regional policies are also discussed, shedding light on the relation between policy and geographical features.
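The two distributional claims are straightforward to operationalize. The fragment below is a generic moment fit and density, not the authors' estimation procedure:

```python
import math
import statistics

def lognormal_fit(sizes):
    """Fit a log-normal by the mean and (population) std of log-sizes."""
    logs = [math.log(s) for s in sizes]
    return statistics.mean(logs), statistics.pstdev(logs)

def laplace_pdf(x, mu, b):
    """Laplace density, the law the school growth rates follow."""
    return math.exp(-abs(x - mu) / b) / (2.0 * b)
```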
Controlling quantum memory-assisted entropic uncertainty in non-Markovian environments
Zhang, Yanliang; Fang, Maofa; Kang, Guodong; Zhou, Qingping
2018-03-01
Quantum memory-assisted entropic uncertainty relations (QMA EUR) address the fact that the lower bound of Maassen and Uffink's entropic uncertainty relation (without quantum memory) can be broken. In this paper, we investigated the dynamical features of the QMA EUR in Markovian and non-Markovian dissipative environments. It is found that the dynamics of the QMA EUR are oscillatory in a non-Markovian environment, and that strong interaction is favorable for suppressing the amount of entropic uncertainty. Furthermore, we presented two schemes, by means of prior weak measurement and posterior weak measurement reversal, to control the amount of entropic uncertainty of Pauli observables in dissipative environments. The numerical results show that the prior weak measurement can effectively reduce the wave peak values of the QMA EUR dynamic process in a non-Markovian environment for long periods of time, but it is ineffectual on the wave minima of the dynamic process. However, the posterior weak measurement reversal has the opposite effect on the dynamic process. Moreover, the success probability entirely depends on the quantum measurement strength. We hope that our proposal could be verified experimentally and might have future applications in quantum information processing.
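For reference, the memoryless bound that the QMA EUR tightens is Maassen and Uffink's: H(Q) + H(R) >= -log2(c), where c is the largest squared overlap between eigenvectors of the two observables. A quick numeric check (generic helper functions, not the paper's simulation):

```python
import math

def maassen_uffink_bound(c):
    """Lower bound on H(Q) + H(R) without quantum memory, where c is
    the maximum squared eigenvector overlap of the two observables."""
    return -math.log2(c)

def shannon(probs):
    """Shannon entropy in bits of a probability vector."""
    return -sum(p * math.log2(p) for p in probs if p > 0)
```

For complementary Pauli observables such as X and Z on a qubit, c = 1/2, so the two measurement entropies must sum to at least one bit unless a quantum memory is present.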
Kim, Ki Joon; Sundar, S Shyam
2013-05-01
Aggressiveness attributed to violent video game play is typically studied as a function of the content features of the game. However, can interface features of the game also affect aggression? Guided by the General Aggression Model (GAM), we examine the controller type (gun replica vs. mouse) and screen size (large vs. small) as key technological aspects that may affect the state aggression of gamers, with spatial presence and arousal as potential mediators. Results from a between-subjects experiment showed that a realistic controller and a large screen display induced greater aggression, presence, and arousal than a conventional mouse and a small screen display, respectively, and confirmed that trait aggression was a significant predictor of gamers' state aggression. Contrary to GAM, however, arousal showed no effects on aggression; instead, presence emerged as a significant mediator.
Directory of Open Access Journals (Sweden)
Eleanor S Devenish Nelson
Full Text Available BACKGROUND: Demographic models are widely used in conservation and management, and their parameterisation often relies on data collected for other purposes. When underlying data lack clear indications of associated uncertainty, modellers often fail to account for that uncertainty in model outputs, such as estimates of population growth. METHODOLOGY/PRINCIPAL FINDINGS: We applied a likelihood approach to infer uncertainty retrospectively from point estimates of vital rates. Combining this with resampling techniques and projection modelling, we show that confidence intervals for population growth estimates are easy to derive. We used similar techniques to examine the effects of sample size on uncertainty. Our approach is illustrated using data on the red fox, Vulpes vulpes, a predator of ecological and cultural importance, and the most widespread extant terrestrial mammal. We show that uncertainty surrounding estimated population growth rates can be high, even for relatively well-studied populations. Halving that uncertainty typically requires a quadrupling of sampling effort. CONCLUSIONS/SIGNIFICANCE: Our results compel caution when comparing demographic trends between populations without accounting for uncertainty. Our methods will be widely applicable to demographic studies of many species.
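The headline scaling, that halving uncertainty requires roughly quadrupling sampling effort, can be reproduced with a parametric bootstrap of a single vital rate. This is a deliberate simplification of the paper's likelihood approach (one survival rate, not a full fox demographic model):

```python
import random

def bootstrap_ci(p_hat, n, reps=4000, seed=0):
    """95% parametric-bootstrap interval for a survival rate estimated
    from n individuals; the width shrinks roughly as 1/sqrt(n)."""
    rng = random.Random(seed)
    draws = sorted(sum(rng.random() < p_hat for _ in range(n)) / n
                   for _ in range(reps))
    return draws[int(0.025 * reps)], draws[int(0.975 * reps)]
```

Quadrupling n (say from 50 to 200 individuals) roughly halves the interval width, matching the abstract's rule of thumb.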
Wang, Z.
2015-12-01
For decades, distributed and lumped hydrological models have furthered our understanding of hydrological systems. The development of large-scale, high-resolution hydrological simulation has refined spatial descriptions of hydrological behaviour. This trend, however, is accompanied by growing model complexity and parameter counts, which pose new challenges for uncertainty quantification. Generalized Likelihood Uncertainty Estimation (GLUE), which couples Monte Carlo sampling with Bayesian estimation, has been widely used for uncertainty analysis of hydrological models. However, the random sampling of prior parameters adopted by GLUE is inefficient, especially in high-dimensional parameter spaces. Heuristic optimization algorithms, which evolve candidate solutions iteratively, offer better convergence speed and search performance. Exploiting these features, this study adopted a genetic algorithm, differential evolution, and the shuffled complex evolution algorithm to search the parameter space and collect parameter sets of high likelihood. Based on this multi-algorithm sampling, model uncertainty was analysed within the standard GLUE framework. To demonstrate the advantages of the new method, two hydrological models of different complexity were examined. The results show that the adaptive method is efficient in sampling and effective in uncertainty analysis, providing an alternative path for uncertainty quantification.
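The basic GLUE step that the multi-algorithm sampling feeds into can be sketched as follows. This is a minimal illustrative version with a one-parameter toy model and plain random sampling, not the paper's heuristic-optimization variant; the model, data, and behavioural threshold are all assumptions:

```python
import random

# Minimal GLUE-style sketch: sample a parameter, score each sample with a
# likelihood measure, keep "behavioural" sets above a threshold, and form
# likelihood-weighted predictions/estimates. Toy model and data only.
random.seed(0)

def model(k, rain):                 # toy runoff model: flow = k * rainfall
    return [k * r for r in rain]

rain = [1.0, 3.0, 2.0, 0.5]
obs = [0.48, 1.52, 1.01, 0.26]      # synthetic observations near k = 0.5

def nse(sim, obs):                  # Nash-Sutcliffe efficiency as likelihood
    mo = sum(obs) / len(obs)
    sse = sum((s - o) ** 2 for s, o in zip(sim, obs))
    sst = sum((o - mo) ** 2 for o in obs)
    return 1 - sse / sst

samples = [random.uniform(0.0, 1.0) for _ in range(5000)]
scored = [(k, nse(model(k, rain), obs)) for k in samples]
behavioural = [(k, l) for k, l in scored if l > 0.8]   # behavioural threshold
wsum = sum(l for _, l in behavioural)
k_mean = sum(k * l for k, l in behavioural) / wsum     # weighted estimate
print(round(k_mean, 2))
```

The paper's contribution is to replace the uniform `samples` draw with parameter sets found by genetic algorithm, differential evolution, or shuffled complex evolution, which concentrates samples in high-likelihood regions.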
Instrument uncertainty predictions
International Nuclear Information System (INIS)
Coutts, D.A.
1991-07-01
The accuracy of measurements and correlations should normally be provided for most experimental activities. The uncertainty is a measure of the accuracy of a stated value or equation. The uncertainty term reflects a combination of instrument errors, modeling limitations, and deficiencies in phenomena understanding. This report provides several methodologies to estimate an instrument's uncertainty when used in experimental work. Methods are shown to predict both the pre-test and post-test uncertainty.
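One standard way to combine independent instrument error sources into a single uncertainty figure is root-sum-square (RSS) propagation. The error components below are hypothetical; the report's own methodologies and numbers are not reproduced here:

```python
import math

# Root-sum-square combination of independent error components into one
# instrument uncertainty estimate. Component values are illustrative only.
def rss(*components):
    return math.sqrt(sum(c * c for c in components))

calibration = 0.5    # assumed calibration error (instrument units)
drift = 0.3          # assumed drift since last calibration
readout = 0.2        # assumed readout/resolution error

u = rss(calibration, drift, readout)
print(round(u, 3))
```

A pre-test prediction would use estimated component magnitudes like these, while a post-test estimate would replace them with values inferred from the actual measurements.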
Uncertainty in hydrological signatures
McMillan, Hilary; Westerberg, Ida
2015-04-01
Information that summarises the hydrological behaviour or flow regime of a catchment is essential for comparing responses of different catchments to understand catchment organisation and similarity, and for many other modelling and water-management applications. Such information, derived as index values from observed data, is known as hydrological signatures; these can include descriptors of high flows (e.g. mean annual flood), low flows (e.g. mean annual low flow, recession shape), flow variability, the flow duration curve, and the runoff ratio. Because the hydrological signatures are calculated from observed data such as rainfall and flow records, they are affected by uncertainty in those data. Subjective choices in the method used to calculate the signatures create a further source of uncertainty. Uncertainties in the signatures may affect our ability to compare different locations, to detect changes, or to compare future water resource management scenarios. The aim of this study was to contribute to the hydrological community's awareness and knowledge of data uncertainty in hydrological signatures, including typical sources, magnitude and methods for its assessment. We proposed a generally applicable method to calculate these uncertainties based on Monte Carlo sampling and demonstrated it for a variety of commonly used signatures. The study was made for two data-rich catchments, the 50 km2 Mahurangi catchment in New Zealand and the 135 km2 Brue catchment in the UK. For rainfall data the uncertainty sources included point measurement uncertainty, the number of gauges used in calculation of the catchment spatial average, and uncertainties relating to lack of quality control. For flow data the uncertainty sources included uncertainties in stage/discharge measurement and in the approximation of the true stage-discharge relation by a rating curve. The resulting uncertainties were compared across the different signatures and catchments, to quantify uncertainty
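The Monte Carlo idea can be illustrated for one simple signature, the runoff ratio. The records, error magnitudes, and multiplicative error model below are assumptions for the sketch, not the error models used for the Mahurangi or Brue catchments:

```python
import random
import statistics

# Monte Carlo sketch of data uncertainty in a hydrological signature:
# perturb synthetic rainfall and flow records within assumed multiplicative
# error bounds and examine the spread of the runoff ratio (flow / rainfall).
random.seed(2)

rain = [2.0, 0.0, 5.0, 1.0, 3.0]    # synthetic daily rainfall (mm)
flow = [0.8, 0.2, 1.9, 0.6, 1.1]    # synthetic daily flow (mm)

def runoff_ratio(rain, flow):
    return sum(flow) / sum(rain)

samples = []
for _ in range(5000):
    er = random.gauss(1.0, 0.05)    # assumed 5% rainfall measurement error
    ef = random.gauss(1.0, 0.10)    # assumed 10% rating-curve error
    samples.append(runoff_ratio([er * r for r in rain], [ef * q for q in flow]))

print(round(statistics.mean(samples), 3), round(statistics.stdev(samples), 3))
```

The spread of `samples` is the signature uncertainty implied by the assumed data uncertainties; the same loop applies to any signature function.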
Dealing with Uncertainties in Initial Orbit Determination
Armellin, Roberto; Di Lizia, Pierluigi; Zanetti, Renato
2015-01-01
A method to deal with uncertainties in initial orbit determination (IOD) is presented. It is based on the use of Taylor differential algebra (DA) to nonlinearly map the observation uncertainties from the observation space to the state space. When a minimum set of observations is available, DA is used to expand the solution of the IOD problem in Taylor series with respect to the measurement errors. When more observations are available, high-order inversion tools are exploited to obtain full-state pseudo-observations at a common epoch. The mean and covariance of these pseudo-observations are nonlinearly computed by evaluating the expectation of high-order Taylor polynomials. Finally, a linear scheme is employed to update the current knowledge of the orbit. Angles-only observations are considered, and simplified Keplerian dynamics are adopted to ease the explanation. Three test cases of orbit determination of artificial satellites in different orbital regimes are presented to discuss the features and performance of the proposed methodology.
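The core operation, mapping observation uncertainty through a nonlinear function into state-space mean and covariance, can be illustrated with brute-force Monte Carlo as a crude stand-in for the Taylor DA expansion. The toy map below (2D range/bearing to Cartesian) and its error magnitudes are assumptions, not the orbit-determination problem itself:

```python
import math
import random

# Monte Carlo stand-in for nonlinear uncertainty mapping: push Gaussian
# observation errors through a nonlinear observation-to-state map and
# compute the resulting mean and (diagonal) covariance. Toy 2D example.
random.seed(3)

r0, th0 = 1000.0, math.radians(30.0)   # nominal range and bearing
sr, sth = 5.0, math.radians(0.5)       # assumed 1-sigma measurement errors

pts = []
for _ in range(20000):
    r = random.gauss(r0, sr)
    th = random.gauss(th0, sth)
    pts.append((r * math.cos(th), r * math.sin(th)))   # nonlinear map

n = len(pts)
mx = sum(p[0] for p in pts) / n        # state-space mean
my = sum(p[1] for p in pts) / n
cxx = sum((p[0] - mx) ** 2 for p in pts) / n   # covariance diagonal
cyy = sum((p[1] - my) ** 2 for p in pts) / n
print(round(mx, 1), round(my, 1))
```

The DA approach of the paper obtains the same moments by evaluating expectations of Taylor polynomials instead of by sampling, which is what makes it fast enough for repeated use.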
Uncertainty Quantification in Alchemical Free Energy Methods.
Bhati, Agastya P; Wan, Shunzhou; Hu, Yuan; Sherborne, Brad; Coveney, Peter V
2018-05-02
Alchemical free energy methods have gained much importance recently from several reports of improved ligand-protein binding affinity predictions based on their implementation using molecular dynamics simulations. A large number of variants of such methods, implementing different accelerated sampling techniques and free energy estimators, are available, each claimed to be better than the others in its own way. However, the key features of reproducibility and quantification of associated uncertainties in such methods have barely been discussed. Here, we apply a systematic protocol for uncertainty quantification to a number of popular alchemical free energy methods, covering both absolute and relative free energy predictions. We show that a reliable measure of error estimation is provided by ensemble simulation (an ensemble of independent MD simulations), which applies irrespective of the free energy method. The need to use ensemble methods is fundamental and holds regardless of the duration of the molecular dynamics simulations performed.
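The ensemble error estimate itself is simple: repeat the whole calculation independently and take the spread across replicas. The replica generator and numbers below are synthetic stand-ins for independent MD simulations, not results from the paper:

```python
import random
import statistics

# Ensemble-based uncertainty sketch: run N independent replicas of a noisy
# estimator (standing in for independent MD free energy calculations) and
# report the ensemble mean with its standard error. Synthetic values only.
random.seed(4)

def one_replica():
    # toy free energy estimate with replica-to-replica variability (kcal/mol)
    return -7.5 + random.gauss(0.0, 0.6)

ensemble = [one_replica() for _ in range(25)]
mean = statistics.mean(ensemble)
sem = statistics.stdev(ensemble) / len(ensemble) ** 0.5   # standard error
print(round(mean, 2), round(sem, 2))
```

Because the estimate uses only between-replica scatter, it works the same way whatever free energy method produced each replica, which is the paper's point.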
The Density of Mid-sized Kuiper Belt Objects from ALMA Thermal Observations
Energy Technology Data Exchange (ETDEWEB)
Brown, Michael E. [California Institute of Technology, 1200 E California Blvd., Pasadena CA 91125 (United States); Butler, Bryan J. [National Radio Astronomy Observatory, 1003 Lopezville Rd., Socorro NM 87801 (United States)
2017-07-01
The densities of mid-sized Kuiper Belt objects (KBOs) are a key constraint in understanding the assembly of objects in the outer solar system. These objects are critical for understanding the currently unexplained transition from the smallest KBOs with densities lower than that of water, to the largest objects with significant rock content. Mapping this transition is made difficult by the uncertainties in the diameters of these objects, which maps into an even larger uncertainty in volume and thus density. The substantial collecting area of the Atacama Large Millimeter Array allows significantly more precise measurements of thermal emission from outer solar system objects and could potentially greatly improve the density measurements. Here we use new thermal observations of four objects with satellites to explore the improvements possible with millimeter data. We find that effects due to effective emissivity at millimeter wavelengths make it difficult to use the millimeter data directly to find diameters and thus volumes for these bodies. In addition, we find that when including the effects of model uncertainty, the true uncertainties on the sizes of outer solar system objects measured with radiometry are likely larger than those previously published. Substantial improvement in object sizes will likely require precise occultation measurements.
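Why diameter uncertainty dominates: at fixed mass (known from a satellite's orbit), density scales as the inverse cube of diameter, so the relative density error is roughly three times the relative diameter error. The numbers below are illustrative, not measurements from the paper:

```python
import math

# First-order propagation of diameter uncertainty to density for a body
# of known mass: rho ~ D**-3, so sigma_rho/rho ~ 3 * sigma_D/D.
# All values are invented for illustration.
mass = 1.13e20           # kg, assumed well known from the satellite orbit
D = 6.0e5                # m, nominal diameter (a 600 km mid-sized KBO)
sigma_D_rel = 0.05       # assumed 5% relative diameter uncertainty

volume = math.pi / 6.0 * D ** 3
rho = mass / volume                  # kg/m^3
sigma_rho_rel = 3.0 * sigma_D_rel    # relative density uncertainty

print(round(rho), round(sigma_rho_rel, 2))
```

A 5% diameter error thus becomes a 15% density error, which is why precise sizes (e.g. from occultations) matter so much for mapping the density transition.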
Operational hydrological forecasting in Bavaria. Part I: Forecast uncertainty
Ehret, U.; Vogelbacher, A.; Moritz, K.; Laurent, S.; Meyer, I.; Haag, I.
2009-04-01
In Bavaria, operational flood forecasting has been established since the disastrous flood of 1999. Nowadays, forecasts based on rainfall information from about 700 raingauges and 600 rivergauges are calculated and issued for nearly 100 rivergauges. With the added experience of the 2002 and 2005 floods, awareness grew that the standard deterministic forecast, neglecting the uncertainty associated with each forecast, is misleading, creating a false feeling of unambiguousness. As a consequence, a system to identify, quantify and communicate the sources and magnitude of forecast uncertainty has been developed, which will be presented in part I of this study. In this system, the use of ensemble meteorological forecasts plays a key role, which will be presented in part II. In developing the system, several constraints stemming from the range of hydrological regimes and operational requirements had to be met: Firstly, operational time constraints obviate the variation of all components of the modeling chain as would be done in a full Monte Carlo simulation. Therefore, an approach was chosen where only the most relevant sources of uncertainty are dynamically considered while the others are jointly accounted for by static error distributions from offline analysis. Secondly, the dominant sources of uncertainty vary over the wide range of forecasted catchments: In alpine headwater catchments, typically a few hundred square kilometers in size, rainfall forecast uncertainty is the key factor for forecast uncertainty, with a magnitude changing dynamically with the prevailing predictability of the atmosphere. In lowland catchments encompassing several thousands of square kilometers, forecast uncertainty in the desired range (usually up to two days) depends mainly on upstream gauge observation quality, routing and unpredictable human impacts such as reservoir operation. The determination of forecast uncertainty comprised the following steps: a) From comparison of gauge
Spectral-based features ranking for gamelan instruments identification using filter techniques
Directory of Open Access Journals (Sweden)
Diah P Wulandari
2013-03-01
Full Text Available In this paper, we describe an approach of spectral-based features ranking for Javanese gamelan instruments identification using filter techniques. The model extracted a spectral-based feature set from the signal using the Short Time Fourier Transform (STFT). The rank of the features was determined using five algorithms, namely ReliefF, Chi-Squared, Information Gain, Gain Ratio, and Symmetric Uncertainty. Then, we tested the ranked features by cross validation using a Support Vector Machine (SVM). The experiment showed that the Gain Ratio algorithm gave the best result, yielding an accuracy of 98.93%.
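One of the filter rankers named above, Symmetric Uncertainty, is defined as SU(X, Y) = 2 I(X; Y) / (H(X) + H(Y)). A from-scratch sketch on tiny synthetic discrete data follows; the features and labels are invented and are not the gamelan spectral features:

```python
import math
from collections import Counter

# Symmetric Uncertainty filter ranking: score each feature against the
# class labels by normalised mutual information, then sort. Toy data only.
def entropy(xs):
    n = len(xs)
    return -sum((c / n) * math.log2(c / n) for c in Counter(xs).values())

def symmetric_uncertainty(x, y):
    hx, hy = entropy(x), entropy(y)
    hxy = entropy(list(zip(x, y)))
    mi = hx + hy - hxy                 # I(X;Y) = H(X) + H(Y) - H(X,Y)
    return 2.0 * mi / (hx + hy) if hx + hy else 0.0

labels = [0, 0, 1, 1, 0, 1, 0, 1]
feat_a = [0, 0, 1, 1, 0, 1, 0, 1]      # perfectly informative feature
feat_b = [0, 1, 0, 1, 0, 1, 0, 1]      # weakly informative feature

ranked = sorted([("a", symmetric_uncertainty(feat_a, labels)),
                 ("b", symmetric_uncertainty(feat_b, labels))],
                key=lambda t: -t[1])
print(ranked[0][0], round(ranked[0][1], 2))
```

Continuous spectral features would first be discretised; the other rankers (Chi-Squared, Information Gain, Gain Ratio, ReliefF) slot into the same score-then-sort pattern.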
International Nuclear Information System (INIS)
Landsberg, P.T.
1990-01-01
This paper explores how the quantum mechanics uncertainty relation can be considered to result from measurements. A distinction is drawn between the uncertainties obtained by scrutinising experiments and the standard deviation type of uncertainty definition used in quantum formalism. (UK)
Layout Optimization of Structures with Finite-size Features using Multiresolution Analysis
DEFF Research Database (Denmark)
Chellappa, S.; Diaz, A. R.; Bendsøe, Martin P.
2004-01-01
A scheme for layout optimization in structures with multiple finite-sized heterogeneities is presented. Multiresolution analysis is used to compute reduced operators (stiffness matrices) representing the elastic behavior of material distributions with heterogeneities of sizes that are comparable...
Directory of Open Access Journals (Sweden)
Herwig Reiter
2010-01-01
Full Text Available The article proposes a general, empirically grounded model for analyzing biographical uncertainty. The model is based on findings from a qualitative-explorative study of transforming meanings of unemployment among young people in post-Soviet Lithuania. In a first step, the particular features of the uncertainty puzzle in post-communist youth transitions are briefly discussed. A historical event like the collapse of state socialism in Europe, similar to the recent financial and economic crisis, is a generator of uncertainty par excellence: it undermines the foundations of societies and the taken-for-grantedness of related expectations. Against this background, the case of a young woman and how she responds to the novel threat of unemployment in the transition to the world of work is introduced. Her uncertainty management in the specific time perspective of certainty production is then conceptually rephrased by distinguishing three types or levels of biographical uncertainty: knowledge, outcome, and recognition uncertainty. Biographical uncertainty, it is argued, is empirically observable through the analysis of acting and projecting at the biographical level. The final part synthesizes the empirical findings and the conceptual discussion into a stratification model of biographical uncertainty as a general tool for the biographical analysis of uncertainty phenomena. URN: urn:nbn:de:0114-fqs100120
Uncertainty and validation. Effect of model complexity on uncertainty estimates
International Nuclear Information System (INIS)
Elert, M.
1996-09-01
zone concentration. Models considering faster net downward flow in the upper part of the root zone predict a more rapid decline in root zone concentration than models that assume a constant infiltration throughout the soil column. A sensitivity analysis performed on two of the models shows that the important parameters are the effective precipitation, the root water uptake and the soil Kd values. For the advection-dispersion model, the dispersion length is also important for the maximum flux to the groundwater. The amount of dispersion in radionuclide transport is of importance for the release to groundwater. For the box models, an inherent dispersion is obtained by the assumption of instantaneous mixing in the boxes. The degree of dispersion in the calculation will be a function of the size of the boxes. It is therefore important that the division of the soil column is made with care in order to obtain the intended values. For many models the uncertainty calculations give very skewed distributions for the flux to the groundwater. In some cases the mean of the stochastic calculation can be several orders of magnitude higher than the value from the deterministic calculations. In relation to the objectives set up for this study it can be concluded that: The analysis of the relationship between uncertainty and model complexity proved to be a difficult task. For the studied scenario, the uncertainty in the model predictions does not have a simple relationship with the complexity of the models used. However, a complete analysis could not be performed, since uncertainty results were not available for the full range of models and, furthermore, the uncertainty analysis was not always carried out in a consistent way. The predicted uncertainty associated with the concentration in the root zone does not show very much variation between the modelling approaches.
For the predictions of the flux to groundwater, the simple models and the more complex gave very different results for the
Uncertainty Evaluation of Best Estimate Calculation Results
International Nuclear Information System (INIS)
Glaeser, H.
2006-01-01
Efforts are underway in Germany to perform analyses using best estimate computer codes and to include uncertainty evaluation in licensing. The German Reactor Safety Commission (RSK) recently issued a recommendation to perform uncertainty analysis in loss of coolant accident (LOCA) safety analyses. A more general requirement is included in a draft revision of the German Nuclear Regulation, an activity of the German Ministry of Environment and Reactor Safety (BMU). According to the RSK recommendation, the following deterministic requirements still have to be applied in licensing safety analyses for LOCA: most unfavourable single failure; unavailability due to preventive maintenance; break location; break size and break type; double-ended break, 100 percent through 200 percent; large, medium and small break; loss of off-site power; core power (at accident initiation the most unfavourable conditions and values have to be assumed which may occur under normal operation, taking into account the set-points of integral power and power density control; measurement and calibration errors can be considered statistically); time of fuel cycle. Analysis using best estimate codes with evaluation of uncertainties is the only way to quantify conservatisms with regard to code models and uncertainties of plant and fuel parameters and decay heat. This is especially the case when approaching licensing limits, e.g. due to power up-rates, higher burn-up and higher enrichment. Broader use of best estimate analysis is therefore envisaged in the future. Since some deterministic unfavourable assumptions regarding the availability of NPP systems are still used, some conservatism remains in best-estimate analyses. Methods of uncertainty analysis have been developed and applied by the vendor Framatome ANP as well as by GRS in Germany. The GRS development was sponsored by the German Ministry of Economy and Labour (BMWA). (author)
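GRS-style statistical uncertainty methods are commonly sized with Wilks' formula, which gives the number of best-estimate code runs needed for a given tolerance limit; the abstract does not spell this out, so the following is an illustrative aside rather than the method as described:

```python
# Wilks' formula for a one-sided non-parametric tolerance limit: the
# maximum of n runs bounds the gamma-quantile with confidence beta when
# 1 - gamma**n >= beta. Solving for the smallest such n gives the
# familiar 59 runs for a 95%/95% statement.
def wilks_n_one_sided(gamma=0.95, beta=0.95):
    n = 1
    while 1.0 - gamma ** n < beta:
        n += 1
    return n

print(wilks_n_one_sided())   # -> 59
```

The appeal for licensing is that 59 runs suffice regardless of how many uncertain input parameters are varied simultaneously.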
International Nuclear Information System (INIS)
Ngamroo, Issarachai
2011-01-01
Although superconducting magnetic energy storage (SMES) is a smart stabilizing device for electric power systems, its installation cost is very high. In particular, the superconducting magnetic coil, the critical part of an SMES, must be well designed. At the same time, varying system operating conditions give rise to system uncertainties. An SMES power controller designed without taking such uncertainties into account may fail to stabilize the system. Considering both coil size and system uncertainties, this paper addresses the optimization of a robust SMES controller. Requiring no exact mathematical equations, the normalized coprime factorization is applied to model the system uncertainties. Based on the normalized integral square error index of the inter-area rotor angle difference and a specified structured H∞ loop-shaping optimization, the robust SMES controller with the smallest coil size can be achieved by a genetic algorithm. The robustness of the proposed SMES with the smallest coil size is confirmed by a simulation study.
International Nuclear Information System (INIS)
2004-01-01
This workshop was attended by experts in Canadian and international hydroelectric utilities to exchange information on current practices and opportunities for improvement or future cooperation. The discussions focused on reducing the uncertainties associated with hydroelectric power production. Although significant improvements have been made in the efficiency, reliability and safety of hydroelectric power production, the sector is still challenged by the uncertainty of water supply which depends greatly on weather conditions. Energy markets pose another challenge to power producers in terms of energy supply, energy demand and energy prices. The workshop focused on 3 themes: (1) weather and hydrologic uncertainty, (2) market uncertainty, and (3) decision making models using uncertainty principles surrounding water resource planning and operation. The workshop featured 22 presentations of which 11 have been indexed separately for inclusion in this database. refs., tabs., figs
International Nuclear Information System (INIS)
Renaud, M; Seuntjens, J; Roberge, D
2014-01-01
Purpose: To assess the performance and uncertainty of a pre-calculated Monte Carlo (PMC) algorithm for proton and electron transport running on graphics processing units (GPU). While PMC methods have been described in the past, an explicit quantification of the latent uncertainty arising from recycling a limited number of tracks in the pre-generated track bank is missing from the literature. With a proper uncertainty analysis, an optimal pre-generated track bank size can be selected for a desired dose calculation uncertainty. Methods: Particle tracks were pre-generated for electrons and protons using EGSnrc and GEANT4, respectively. The PMC algorithm for track transport was implemented on the CUDA programming framework. GPU-PMC dose distributions were compared to benchmark dose distributions simulated using general-purpose MC codes under the same conditions. A latent uncertainty analysis was performed by comparing GPU-PMC dose values to a “ground truth” benchmark while varying the track bank size and the number of primary particle histories. Results: GPU-PMC dose distributions and benchmark doses were within 1% of each other in voxels with dose greater than 50% of Dmax. In proton calculations, a submillimeter distance-to-agreement error was observed at the Bragg peak. Latent uncertainty followed a Poisson distribution with the number of tracks per energy (TPE), and a track bank of 20,000 TPE produced a latent uncertainty of approximately 1%. Efficiency analysis showed a 937× and 508× gain over a single processor core running DOSXYZnrc for 16 MeV electrons in water and bone, respectively. Conclusion: The GPU-PMC method can calculate dose distributions for electrons and protons to a statistical uncertainty below 1%. The track bank size necessary to achieve optimal efficiency can be tuned based on the desired uncertainty. Coupled with a model to calculate dose contributions from uncharged particles, GPU-PMC is a candidate for inverse planning of modulated electron radiotherapy.
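The latent-uncertainty analysis can be mimicked with a toy experiment: re-run an estimate that recycles "tracks" from banks of different sizes and watch the spread across repeats shrink as the bank grows. Everything below is synthetic and stands in for, but is not, the EGSnrc/GEANT4 track data of the paper:

```python
import random
import statistics

# Toy probe of latent uncertainty from track recycling: a "dose" estimate
# recycles samples from a finite pre-generated bank, so its spread across
# independent repeats is floored by the bank's own sampling error
# (roughly 1/sqrt(bank size)), not just by the number of histories.
random.seed(5)

def dose_estimate(bank_size, n_histories=10000):
    bank = [random.gauss(1.0, 0.5) for _ in range(bank_size)]  # pre-generated "tracks"
    return statistics.mean(random.choice(bank) for _ in range(n_histories))

def latent_spread(bank_size, repeats=30):
    return statistics.stdev(dose_estimate(bank_size) for _ in range(repeats))

small, large = latent_spread(100), latent_spread(10000)
print(small > large)
```

This is the behaviour the paper quantifies properly: increasing histories alone cannot reduce the error below the floor set by the bank size, which is why the bank size is the tuning knob for a target uncertainty.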
International Nuclear Information System (INIS)
Kim, Song Hyun; Song, Myung Sub; Shin, Chang Ho; Noh, Jae Man
2014-01-01
In the perturbation-theory approach, the uncertainty of a response can be estimated from a single transport simulation, so the computational load is small. However, the methodology must be modified whenever a different response type is analysed, such as the multiplication factor, flux, or power distribution; it is therefore best suited to analysing a few responses with many perturbed parameters. The statistical approach is a sampling-based method that uses cross sections randomly sampled from covariance data to analyse the uncertainty of the response. XSUSA is a code based on the statistical approach. Because only the cross sections are modified, general transport codes can be used directly for the sensitivity and uncertainty (S/U) analysis without any code modifications. However, to obtain the uncertainty distribution of the result, the simulation must be repeated many times with randomly sampled cross sections; this inefficiency is a known disadvantage of the stochastic method. In this study, an advanced sampling and estimation method for the cross sections is proposed and verified to increase the estimation efficiency of the sampling-based S/U method. The main feature of the proposed method is that the cross section averaged over the individual sampled cross sections is used. The proposed method was validated against the perturbation theory.
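The baseline stochastic scheme that the proposed method accelerates looks like this in miniature. The toy response function below replaces an actual transport run, and all cross-section values and uncertainties are invented:

```python
import random
import statistics

# Sampling-based S/U sketch: draw cross sections from an assumed normal
# covariance, evaluate a response for each sample, and take the spread of
# the responses as the response uncertainty. Toy response, invented data.
random.seed(6)

sigma_a_nom, sigma_f_nom = 0.10, 0.06    # hypothetical absorption/fission XS
rel_unc = 0.02                           # assumed 2% relative 1-sigma

def response(sig_a, sig_f):
    # toy multiplication-factor-like ratio in place of a transport solve
    return 2.4 * sig_f / sig_a

samples = []
for _ in range(5000):
    sa = random.gauss(sigma_a_nom, rel_unc * sigma_a_nom)
    sf = random.gauss(sigma_f_nom, rel_unc * sigma_f_nom)
    samples.append(response(sa, sf))

k_mean = statistics.mean(samples)
k_rel_unc = statistics.stdev(samples) / k_mean
print(round(k_mean, 3), round(k_rel_unc, 3))
```

The cost driver is that each sample is a full transport simulation; reducing the number of samples needed for a stable spread is exactly the efficiency gain the study targets.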
Uncertainty as Knowledge: Constraints on Policy Choices Provided by Analysis of Uncertainty
Lewandowsky, S.; Risbey, J.; Smithson, M.; Newell, B. R.
2012-12-01
Uncertainty forms an integral part of climate science, and it is often cited in connection with arguments against mitigative action. We argue that an analysis of uncertainty must consider existing knowledge as well as uncertainty, and the two must be evaluated with respect to the outcomes and risks associated with possible policy options. Although risk judgments are inherently subjective, an analysis of the role of uncertainty within the climate system yields two constraints that are robust to a broad range of assumptions. Those constraints are that (a) greater uncertainty about the climate system is necessarily associated with greater expected damages from warming, and (b) greater uncertainty translates into a greater risk of the failure of mitigation efforts. These ordinal constraints are unaffected by subjective or cultural risk-perception factors, they are independent of the discount rate, and they are independent of the magnitude of the estimate for climate sensitivity. The constraints mean that any appeal to uncertainty must imply a stronger, rather than weaker, need to cut greenhouse gas emissions than in the absence of uncertainty.
Small-angle X-ray scattering (SAXS) for metrological size determination of nanoparticles
Energy Technology Data Exchange (ETDEWEB)
Gleber, Gudrun; Krumrey, Michael; Cibik, Levent; Marggraf, Stefanie; Mueller, Peter [Physikalisch-Technische Bundesanstalt, Abbestr. 2-12, 10587 Berlin (Germany); Hoell, Armin [Helmholtz-Zentrum Berlin, Albert-Einstein-Str. 15, 12489 Berlin (Germany)
2011-07-01
To measure the size of nanoparticles, different measurement methods are available, but their results are often not compatible. In the framework of a European metrology project we use Small-Angle X-ray Scattering (SAXS) to determine the size and size distribution of nanoparticles in aqueous solution, where the special challenge is the traceability of the results. The experiments were performed at the Four-Crystal Monochromator (FCM) beamline in the laboratory of the Physikalisch-Technische Bundesanstalt (PTB) at BESSY II using the SAXS setup of the Helmholtz-Zentrum Berlin (HZB). We measured different particles made of PMMA and gold in a diameter range from 200 nm down to about 10 nm. The aspects of traceability can be classified in two parts: the first is the experimental part, with the uncertainties of distances, angles, and wavelength; the second is the analysis part, with the uncertainty of the choice of the model used for fitting the data. In this talk we show the degree of uncertainty that we have achieved in this work so far.
When size matters: attention affects performance by contrast or response gain.
Herrmann, Katrin; Montaser-Kouhsari, Leila; Carrasco, Marisa; Heeger, David J
2010-12-01
Covert attention, the selective processing of visual information in the absence of eye movements, improves behavioral performance. We found that attention, both exogenous (involuntary) and endogenous (voluntary), can affect performance by contrast or response gain changes, depending on the stimulus size and the relative size of the attention field. These two variables were manipulated in a cueing task while stimulus contrast was varied. We observed a change in behavioral performance consonant with a change in contrast gain for small stimuli paired with spatial uncertainty and a change in response gain for large stimuli presented at one location (no uncertainty) and surrounded by irrelevant flanking distracters. A complementary neuroimaging experiment revealed that observers' attention fields were wider with than without spatial uncertainty. Our results support important predictions of the normalization model of attention and reconcile previous, seemingly contradictory findings on the effects of visual attention.
The effects of geometric uncertainties on computational modelling of knee biomechanics
Meng, Qingen; Fisher, John; Wilcox, Ruth
2017-08-01
The geometry of the articular components of the knee is an important factor in predicting joint mechanics in computational models. There are a number of uncertainties in the definition of the geometry of the cartilage and meniscus, and evaluating the effects of these uncertainties is fundamental to understanding the level of reliability of the models. In this study, the sensitivity of knee mechanics to geometric uncertainties was investigated by comparing polynomial-based and image-based knee models and varying the size of the meniscus. The results suggest that the geometric uncertainties in cartilage and meniscus resulting from the resolution of MRI and the accuracy of segmentation have considerable effects on the predicted knee mechanics. Moreover, even if the mathematical geometric descriptors can be very close to the image-based articular surfaces, the detailed contact pressure distribution produced by the mathematical geometric descriptors was not the same as that of the image-based model. However, the trends predicted by the models based on mathematical geometric descriptors were similar to those of the image-based models.
Liu, Baoding
2015-01-01
When no samples are available to estimate a probability distribution, we have to invite some domain experts to evaluate the belief degree that each event will happen. Perhaps some people think that the belief degree should be modeled by subjective probability or fuzzy set theory. However, it is usually inappropriate because both of them may lead to counterintuitive results in this case. In order to rationally deal with belief degrees, uncertainty theory was founded in 2007 and subsequently studied by many researchers. Nowadays, uncertainty theory has become a branch of axiomatic mathematics for modeling belief degrees. This is an introductory textbook on uncertainty theory, uncertain programming, uncertain statistics, uncertain risk analysis, uncertain reliability analysis, uncertain set, uncertain logic, uncertain inference, uncertain process, uncertain calculus, and uncertain differential equation. This textbook also shows applications of uncertainty theory to scheduling, logistics, networks, data mining, c...
Quantifying Uncertainty in Satellite-Retrieved Land Surface Temperature from Cloud Detection Errors
Directory of Open Access Journals (Sweden)
Claire E. Bulgin
2018-04-01
Full Text Available Clouds remain one of the largest sources of uncertainty in remote sensing of surface temperature in the infrared, but this uncertainty has not generally been quantified. We present a new approach to do so, applied here to the Advanced Along-Track Scanning Radiometer (AATSR). We use an ensemble of cloud masks based on independent methodologies to investigate the magnitude of cloud detection uncertainties in area-average Land Surface Temperature (LST) retrieval. We find that at a grid resolution of 625 km2 (commensurate with a 0.25° grid size at the tropics), cloud detection uncertainties are positively correlated with cloud-cover fraction in the cell and are larger during the day than at night. Daytime cloud detection uncertainties range between 2.5 K for clear-sky fractions of 10–20% and 1.03 K for clear-sky fractions of 90–100%. Corresponding night-time uncertainties are 1.6 K and 0.38 K, respectively. Cloud detection uncertainty shows a weaker positive correlation with the number of biomes present within a grid cell, used as a measure of heterogeneity in the background against which the cloud detection must operate (e.g., surface temperature, emissivity and reflectance). Uncertainty due to cloud detection errors is strongly dependent on the dominant land cover classification. We find cloud detection uncertainties of a magnitude of 1.95 K over permanent snow and ice, 1.2 K over open forest, 0.9–1 K over bare soils and 0.09 K over mosaic cropland, for a standardised clear-sky fraction of 74.2%. As the uncertainties arising from cloud detection errors are of a significant magnitude for many surface types and spatially heterogeneous where land classification varies rapidly, LST data producers are encouraged to quantify cloud-related uncertainties in gridded products.
Uncertainty and conservatism in safety evaluations based on a BEPU approach
International Nuclear Information System (INIS)
Yamaguchi, A.; Mizokami, S.; Kudo, Y.; Hotta, A.
2009-01-01
Atomic Energy Society of Japan has published 'Standard Method for Safety Evaluation using Best Estimate Code Based on Uncertainty and Scaling Analyses with Statistical Approach' to be applied to accidents and AOOs in the safety evaluation of LWRs. In this method, hereafter referred to as the AESJ-SSE (Statistical Safety Evaluation) method, identification and quantification of uncertainties are performed first, followed by a combination of the best estimate code and the evaluation of uncertainty propagation. Uncertainties are categorized into bias and variability. In general, bias is related to our state of knowledge of uncertainty objects (modeling, scaling, input data, etc.) while variability reflects stochastic features of these objects. Considering that many kinds of uncertainties in thermal-hydraulics models and experimental databases show variabilities strongly influenced by our state of knowledge, it seems reasonable that these variabilities are also related to state of knowledge. The design basis events (DBEs) that are employed for licensing analyses form a main part of the given or prior conservatism. The regulatory acceptance criterion is also regarded as prior conservatism. In addition to these prior conservatisms, a certain amount of posterior conservatism is added while maintaining intimate relationships with state of knowledge. In the AESJ-SSE method, this posterior conservatism can be incorporated into the safety evaluation in a combination of the following three ways: (1) broadening ranges of variability relevant to uncertainty objects, (2) employing more disadvantageous biases relevant to uncertainty objects and (3) adding an extra bias to the safety evaluation results. Knowing the implemented quantitative bases of uncertainties and conservatism, the AESJ-SSE method provides a useful ground for rational decision-making. In order to seek 'the best estimation' as well as reasonably setting the analytical margin, a degree
THE UNCERTAINTIES ON THE GIS BASED LAND SUITABILITY ASSESSMENT FOR URBAN AND RURAL PLANNING
Directory of Open Access Journals (Sweden)
H. Liu
2017-09-01
Full Text Available The majority of research on the uncertainties of spatial data and spatial analysis focuses on some specific data feature or analysis tool. Few studies have addressed the uncertainties of the whole process of an application such as planning, leaving uncertainty research detached from practical applications. This paper discusses the uncertainties of geographical information system (GIS) based land suitability assessment in planning on the basis of a literature review. The uncertainties considered range from index system establishment to the classification of the final result. Methods to reduce the uncertainties arising from the discretization of continuous raster data and from index weight determination are summarized. The paper analyzes the merits and demerits of the "Natural Breaks" method, which is broadly used by planners. It also explores other factors which impact the accuracy of the final classification, such as the selection of class numbers and intervals and the autocorrelation of the spatial data. In conclusion, the paper indicates that the adoption of machine learning methods should be adapted to the complexity of land suitability assessment. The work contributes to the application of spatial data and spatial analysis uncertainty research to land suitability assessment, and promotes the scientific level of subsequent planning and decision-making.
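The sensitivity of the final classification to the break method can be illustrated with a toy comparison of equal-interval and quantile breaks (Natural Breaks itself requires an optimization routine, so these simpler schemes stand in here); all values are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy suitability scores for raster cells (skewed, as suitability surfaces often are).
scores = rng.gamma(shape=2.0, scale=10.0, size=1000)

n_classes = 5

# Equal-interval breaks: split the value range into equal-width classes.
eq_breaks = np.linspace(scores.min(), scores.max(), n_classes + 1)[1:-1]

# Quantile breaks: put an equal number of cells in each class.
q_breaks = np.quantile(scores, np.linspace(0.0, 1.0, n_classes + 1))[1:-1]

eq_class = np.digitize(scores, eq_breaks)
q_class = np.digitize(scores, q_breaks)

# Fraction of cells whose suitability class depends on the break method chosen.
disagreement = (eq_class != q_class).mean()
print(f"cells classified differently: {disagreement:.1%}")
```

For skewed data the two schemes disagree for a large fraction of cells, which is precisely the classification uncertainty the paper discusses.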
Souverijns, Niels; Gossart, Alexandra; Lhermitte, Stef; Gorodetskaya, Irina; Kneifel, Stefan; Maahn, Maximilian; Bliven, Francis; van Lipzig, Nicole
2017-04-01
The Antarctic Ice Sheet (AIS) is the largest ice body on Earth, with a volume equivalent to 58.3 m of global mean sea level rise. Precipitation is the dominant source term in the surface mass balance of the AIS. However, this quantity is not well constrained in either models or observations. Direct observations over the AIS are also not coherent, as they are sparse in space and time and acquisition techniques differ. As a result, precipitation observations remain mostly limited to continent-wide averages based on satellite radar observations. Snowfall rate (SR) at high temporal resolution can be derived from the ground-based radar effective reflectivity factor (Z) using information about snow particle size and shape. Here we present reflectivity-snowfall rate relations (Z = a·SR^b) for the East Antarctic escarpment region using measurements at the Princess Elisabeth (PE) station and an overview of their uncertainties. A novel technique is developed by combining an optical disdrometer (NASA's Precipitation Imaging Package; PIP) and a vertically pointing 24 GHz FMCW micro rain radar (Metek's MRR) in order to reduce the uncertainty in SR estimates. PIP is used to obtain information about snow particle characteristics and to get an estimate of Z, SR and the Z-SR relation. For PE, located 173 km inland, the relation equals Z = 18·SR^1.1. The prefactor (a) of the relation is sensitive to the median diameter of the particles. Larger particles, found closer to the coast, lead to an increase in the value of the prefactor. More inland locations, where smaller snow particles are found, obtain lower values for the prefactor. The exponent of the Z-SR relation (b) is insensitive to the median diameter of the snow particles. This dependence of the prefactor on the particle size needs to be taken into account when converting radar reflectivities to snowfall rates over Antarctica. The uncertainty on the Z-SR relations is quantified using a bootstrapping approach.
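A minimal sketch of fitting a Z = a·SR^b relation by least squares in log-log space and bootstrapping the fit, as the abstract describes; the synthetic data are generated around the reported PE relation purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic (SR, Z) pairs scattered around the relation reported for Princess
# Elisabeth, Z = 18 * SR**1.1, with multiplicative noise (values illustrative).
sr = rng.uniform(0.1, 2.0, 200)                               # snowfall rate
z = 18.0 * sr**1.1 * np.exp(0.3 * rng.standard_normal(200))   # reflectivity

def fit_power_law(sr, z):
    """Least-squares fit of log Z = log a + b * log SR; returns (a, b)."""
    b, log_a = np.polyfit(np.log(sr), np.log(z), 1)
    return np.exp(log_a), b

a_hat, b_hat = fit_power_law(sr, z)

# Bootstrap: resample (SR, Z) pairs with replacement and refit, giving an
# empirical uncertainty on the prefactor and exponent.
boot = np.array([fit_power_law(sr[i], z[i])
                 for i in (rng.integers(0, len(sr), len(sr))
                           for _ in range(500))])
print(f"a = {a_hat:.1f} (boot sd {boot[:, 0].std():.1f}), "
      f"b = {b_hat:.2f} (boot sd {boot[:, 1].std():.2f})")
```

The bootstrap standard deviations play the role of the Z-SR relation uncertainties quantified in the study.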
Latent uncertainties of the precalculated track Monte Carlo method
Energy Technology Data Exchange (ETDEWEB)
Renaud, Marc-André; Seuntjens, Jan [Medical Physics Unit, McGill University, Montreal, Quebec H3G 1A4 (Canada); Roberge, David [Département de radio-oncologie, Centre Hospitalier de l’Université de Montréal, Montreal, Quebec H2L 4M1 (Canada)
2015-01-15
Purpose: While significant progress has been made in speeding up Monte Carlo (MC) dose calculation methods, they remain too time-consuming for the purpose of inverse planning. To achieve clinically usable calculation speeds, a precalculated Monte Carlo (PMC) algorithm for proton and electron transport was developed to run on graphics processing units (GPUs). The algorithm utilizes pregenerated particle track data from conventional MC codes for different materials such as water, bone, and lung to produce dose distributions in voxelized phantoms. While PMC methods have been described in the past, an explicit quantification of the latent uncertainty arising from the limited number of unique tracks in the pregenerated track bank is missing from the literature. With a proper uncertainty analysis, an optimal number of tracks in the pregenerated track bank can be selected for a desired dose calculation uncertainty. Methods: Particle tracks were pregenerated for electrons and protons using EGSnrc and GEANT4 and saved in a database. The PMC algorithm for track selection, rotation, and transport was implemented on the Compute Unified Device Architecture (CUDA) 4.0 programming framework. PMC dose distributions were calculated in a variety of media and compared to benchmark dose distributions simulated with the corresponding general-purpose MC codes under the same conditions. A latent uncertainty metric was defined and analysis was performed by varying the pregenerated track bank size and the number of simulated primary particle histories and comparing dose values to a “ground truth” benchmark dose distribution calculated to 0.04% average uncertainty in voxels with dose greater than 20% of D_max. Efficiency metrics were calculated against benchmark MC codes on a single CPU core with no variance reduction. Results: Dose distributions generated using PMC and benchmark MC codes were compared and found to be within 2% of each other in voxels with dose values greater than 20% of
Latent uncertainties of the precalculated track Monte Carlo method
International Nuclear Information System (INIS)
Renaud, Marc-André; Seuntjens, Jan; Roberge, David
2015-01-01
Purpose: While significant progress has been made in speeding up Monte Carlo (MC) dose calculation methods, they remain too time-consuming for the purpose of inverse planning. To achieve clinically usable calculation speeds, a precalculated Monte Carlo (PMC) algorithm for proton and electron transport was developed to run on graphics processing units (GPUs). The algorithm utilizes pregenerated particle track data from conventional MC codes for different materials such as water, bone, and lung to produce dose distributions in voxelized phantoms. While PMC methods have been described in the past, an explicit quantification of the latent uncertainty arising from the limited number of unique tracks in the pregenerated track bank is missing from the literature. With a proper uncertainty analysis, an optimal number of tracks in the pregenerated track bank can be selected for a desired dose calculation uncertainty. Methods: Particle tracks were pregenerated for electrons and protons using EGSnrc and GEANT4 and saved in a database. The PMC algorithm for track selection, rotation, and transport was implemented on the Compute Unified Device Architecture (CUDA) 4.0 programming framework. PMC dose distributions were calculated in a variety of media and compared to benchmark dose distributions simulated with the corresponding general-purpose MC codes under the same conditions. A latent uncertainty metric was defined and analysis was performed by varying the pregenerated track bank size and the number of simulated primary particle histories and comparing dose values to a “ground truth” benchmark dose distribution calculated to 0.04% average uncertainty in voxels with dose greater than 20% of D_max. Efficiency metrics were calculated against benchmark MC codes on a single CPU core with no variance reduction. Results: Dose distributions generated using PMC and benchmark MC codes were compared and found to be within 2% of each other in voxels with dose values greater than 20% of the
Moving Beyond 2% Uncertainty: A New Framework for Quantifying Lidar Uncertainty
Energy Technology Data Exchange (ETDEWEB)
Newman, Jennifer F.; Clifton, Andrew
2017-03-08
Remote sensing of wind using lidar is revolutionizing wind energy. However, current generations of wind lidar are ascribed a climatic value of uncertainty, which is based on a poor description of lidar sensitivity to external conditions. In this presentation, we show how it is important to consider the complete lidar measurement process to define the measurement uncertainty, which in turn offers the ability to define a much more granular and dynamic measurement uncertainty. This approach is a progression from the 'white box' lidar uncertainty method.
Integrated uncertainty analysis using RELAP/SCDAPSIM/MOD4.0
International Nuclear Information System (INIS)
Perez, M.; Reventos, F.; Wagner, R.; Allison, C.
2009-01-01
The RELAP/SCDAPSIM/MOD4.0 code, designed to predict the behavior of reactor systems during normal and accident conditions, is being developed as part of an international nuclear technology Software Development and Training Program (SDTP). RELAP/SCDAPSIM/MOD4.0, which is the first version of RELAP5 completely rewritten to FORTRAN 90/95/2000 standards, uses the publicly available RELAP5 and SCDAP models in combination with (a) advanced programming and numerical techniques, (b) advanced SDTP-member-developed models for LWR, HWR, and research reactor analysis, and (c) a variety of other member-developed computational packages. One such computational package is an integrated uncertainty analysis package being developed jointly by the Technical University of Catalonia (UPC) and Innovative Systems Software (ISS). The integrated uncertainty analysis approach used in the package involves the following steps: 1. Selection of the plant; 2. Selection of the scenario; 3. Selection of the safety criteria; 4. Identification and ranking of the relevant phenomena based on the safety criteria; 5. Selection of the appropriate code parameters to represent those phenomena; 6. Association of uncertainty by means of Probability Distribution Functions (PDFs) for each selected parameter; 7. Random sampling of the selected parameters according to their PDFs and performing multiple computer runs to obtain uncertainty bands with a certain percentile and confidence level; 8. Processing the results of the multiple computer runs to estimate the uncertainty bands for the computed quantities associated with the selected safety criteria. The first four steps are performed by the user prior to the RELAP/SCDAPSIM/MOD4.0 analysis. The remaining steps are included with the MOD4.0 integrated uncertainty analysis (IUA) package. This paper briefly describes the integrated uncertainty analysis package including (a) the features of the package, (b) the implementation of the package into RELAP/SCDAPSIM/MOD4.0, and
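Steps 6-8 of the procedure above can be sketched with a toy stand-in for the system code. The response function, parameters and PDFs below are invented for illustration; the 59-run sample size follows Wilks' formula for a one-sided 95%/95% tolerance limit, a common choice in this class of statistical safety evaluations.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical stand-in for a system-code run: peak cladding temperature (K)
# as a function of two uncertain input multipliers. A real analysis would
# launch the thermal-hydraulic code for each sampled input set.
def run_case(htc_mult, gap_mult):
    return 1000.0 + 150.0 / htc_mult + 80.0 / gap_mult

# Step 6: associate PDFs with the selected parameters (here, uniform ranges).
n_runs = 59   # Wilks: 59 runs bound the 95th percentile at 95% confidence
htc = rng.uniform(0.8, 1.2, n_runs)   # heat-transfer multiplier (invented)
gap = rng.uniform(0.7, 1.3, n_runs)   # gap-conductance multiplier (invented)

# Step 7: random sampling and multiple computer runs.
pct = np.array([run_case(h, g) for h, g in zip(htc, gap)])

# Step 8: process the runs; the sample maximum is the one-sided 95/95 bound.
print(f"95/95 upper bound on PCT: {pct.max():.1f} K")
```

The bound from the maximum of 59 runs is then compared against the safety criterion selected in step 3.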
Quantifying uncertainty in NDSHA estimates due to earthquake catalogue
Magrin, Andrea; Peresan, Antonella; Vaccari, Franco; Panza, Giuliano
2014-05-01
The procedure for neo-deterministic seismic zoning, NDSHA, is based on the calculation of synthetic seismograms by the modal summation technique. This approach makes use of information about the space distribution of large magnitude earthquakes, which can be defined based on seismic history and seismotectonics, as well as incorporating information from a wide set of geological and geophysical data (e.g., morphostructural features and ongoing deformation processes identified by earth observations). Hence the method does not make use of attenuation models (GMPEs), which may be unable to account for the complexity of the product of the seismic source tensor and the medium Green's function and are often poorly constrained by the available observations. NDSHA defines the hazard from the envelope of the values of ground motion parameters determined considering a wide set of scenario earthquakes; accordingly, the simplest outcome of this method is a map where the maximum of a given seismic parameter is associated with each site. In NDSHA, uncertainties are not statistically treated as in PSHA, where aleatory uncertainty is traditionally handled with probability density functions (e.g., for magnitude and distance random variables) and epistemic uncertainty is considered by applying logic trees that allow the use of alternative models and alternative parameter values of each model; instead, the treatment of uncertainties is performed by sensitivity analyses for key modelling parameters. Fixing the uncertainty related to a particular input parameter is an important component of the procedure. The input parameters must account for the uncertainty in the prediction of fault radiation and in the use of Green's functions for a given medium. A key parameter is the magnitude of sources used in the simulation, which is based on catalogue information, seismogenic zones and seismogenic nodes. Because the largest part of the existing catalogues is based on macroseismic intensity, a rough estimate
Validation uncertainty of MATRA code for subchannel void distributions
Energy Technology Data Exchange (ETDEWEB)
Hwang, Dae-Hyun; Kim, S. J.; Kwon, H.; Seo, K. W. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2014-10-15
To extend code capability to whole-core subchannel analysis, pre-conditioned Krylov matrix solvers such as BiCGSTAB and GMRES are implemented in the MATRA code, as well as parallel computing algorithms using MPI and OpenMP. It is coded in Fortran 90 and has some user-friendly features such as a graphical user interface. The MATRA code was approved by the Korean regulatory body for design calculation of the integral-type PWR named SMART. The major role of a subchannel code is to evaluate core thermal margin through hot channel analysis and uncertainty evaluation for CHF predictions. In addition, it is potentially used for best estimation of the core thermal-hydraulic field by incorporation into multi-physics and/or multi-scale code systems. In this study we examined a validation process for the subchannel code MATRA, specifically in the prediction of subchannel void distributions. The primary objective of validation is to estimate a range within which the simulation modeling error lies. The experimental data for subchannel void distributions at steady-state and transient conditions were provided in the framework of the OECD/NEA UAM benchmark program. The validation uncertainty of the MATRA code was evaluated for a specific experimental condition by comparing the simulation result and experimental data. A validation process should be preceded by code and solution verification. However, quantification of verification uncertainty was not addressed in this study. The validation uncertainty of the MATRA code for predicting subchannel void distribution was evaluated for a single data point of void fraction measurement in a 5x5 PWR test bundle in the framework of the OECD UAM benchmark program. The validation standard uncertainties were evaluated as 4.2%, 3.9%, and 2.8% with the Monte-Carlo approach at the axial levels of 2216 mm, 2669 mm, and 3177 mm, respectively. The sensitivity coefficient approach revealed similar results of uncertainties but did not account for the nonlinear effects on the
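A hedged sketch of the validation-uncertainty idea: propagate boundary-condition uncertainty through a response model by Monte Carlo and combine the result with the measurement uncertainty in quadrature. The void-fraction model, coefficients and uncertainty magnitudes below are invented stand-ins, not MATRA's models or the benchmark's values.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical response model standing in for the subchannel code: predicted
# void fraction as a function of normalized boundary conditions.
def predict_void(pressure, mass_flux, power):
    return (0.4 + 0.2 * (power - 1.0)
                - 0.1 * (pressure - 1.0)
                + 0.05 * (mass_flux - 1.0))

# Monte Carlo propagation of boundary-condition (input) uncertainty.
n = 5000
alpha = predict_void(rng.normal(1.0, 0.02, n),   # pressure, 2% uncertainty
                     rng.normal(1.0, 0.04, n),   # mass flux, 4%
                     rng.normal(1.0, 0.03, n))   # power, 3%
u_input = alpha.std(ddof=1)

# Combine with the void-measurement uncertainty in quadrature; numerical
# (solution verification) uncertainty is not quantified, as in the study.
u_meas = 0.01
u_val = np.hypot(u_input, u_meas)
print(f"u_input = {u_input:.4f}, u_val = {u_val:.4f}")
```

The sensitivity-coefficient alternative mentioned in the abstract would replace the Monte Carlo loop with analytic derivatives of the response, which is why it misses nonlinear effects.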
Different methodologies to quantify uncertainties of air emissions.
Romano, Daniela; Bernetti, Antonella; De Lauretis, Riccardo
2004-10-01
Characterization of the uncertainty associated with air emission estimates is of critical importance, especially in the compilation of air emission inventories. In this paper, two different theories are discussed and applied to evaluate air emissions uncertainty. In addition to numerical analysis, which is also recommended in the framework of the United Nations Convention on Climate Change guidelines with reference to Monte Carlo and Bootstrap simulation models, fuzzy analysis is also proposed. The methodologies are discussed and applied to an Italian example case study. Air concentration values are measured from two electric power plants: a coal plant consisting of two boilers, and a fuel oil plant of four boilers; the pollutants considered are sulphur dioxide (SO2), nitrogen oxides (NOx), carbon monoxide (CO) and particulate matter (PM). Monte Carlo, Bootstrap and fuzzy methods have been applied to estimate the uncertainty of these data. Regarding Monte Carlo, the most accurate results apply to Gaussian distributions; a good approximation is also observed for other distributions with almost regular features, either positively or negatively asymmetrical. Bootstrap, on the other hand, gives a good uncertainty estimation for irregular and asymmetrical distributions. The logic of fuzzy analysis, where data are represented as vague and indefinite in opposition to the traditional conception of neatness, certain classification and exactness of the data, follows a different description. In addition to randomness (stochastic variability), fuzzy theory deals with imprecision (vagueness) of data. The fuzzy variance of the data set was calculated; the results cannot be directly compared with empirical data, but the overall performance of the theory is analysed. Fuzzy theory may appear more suitable for qualitative reasoning than for a quantitative estimation of uncertainty, but it suits well when little information and few measurements are available and when
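The bootstrap procedure mentioned above can be sketched for a single monitored pollutant: resample the measured concentrations with replacement and look at the spread of the re-estimated mean. The concentration values are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical stack measurements of SO2 concentration (mg/m3) from one
# boiler; a real inventory would use the full monitored series.
so2 = np.array([412., 398., 455., 430., 401., 388., 472., 419., 440., 405.])

# Bootstrap: resample the measurements with replacement and re-estimate the
# mean many times; the percentile interval quantifies the uncertainty.
boot_means = np.array([rng.choice(so2, size=so2.size, replace=True).mean()
                       for _ in range(2000)])
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"mean = {so2.mean():.1f}, 95% bootstrap CI = [{lo:.1f}, {hi:.1f}]")
```

Because the bootstrap makes no distributional assumption, it handles the irregular and asymmetrical distributions for which the abstract reports it performs well.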
International Nuclear Information System (INIS)
Aydogan, F.; Hochreiter, L.; Ivanov, K.; Martin, M.; Utsuno, H.; Sartori, E.
2010-01-01
This report provides the specification for the uncertainty exercises of the international OECD/NEA, NRC and NUPEC BFBT benchmark problem including the elemental task. The specification was prepared jointly by Pennsylvania State University (PSU), USA and the Japan Nuclear Energy Safety (JNES) Organisation, in cooperation with the OECD/NEA and the Commissariat à l'énergie atomique (CEA Saclay, France). The work is sponsored by the US NRC, METI-Japan, the OECD/NEA and the Nuclear Engineering Program (NEP) of Pennsylvania State University. This uncertainty specification covers the fourth exercise of Phase I (Exercise I-4), and the third exercise of Phase II (Exercise II-3) as well as the elemental task. The OECD/NRC BFBT benchmark provides a very good opportunity to apply uncertainty analysis (UA) and sensitivity analysis (SA) techniques and to assess the accuracy of thermal-hydraulic models for two-phase flows in rod bundles. During the previous OECD benchmarks, participants usually carried out sensitivity analysis on their models for the specification (initial conditions, boundary conditions, etc.) to identify the most sensitive models or/and to improve the computed results. The comprehensive BFBT experimental database (NEA, 2006) leads us one step further in investigating modelling capabilities by taking into account the uncertainty analysis in the benchmark. The uncertainties in input data (boundary conditions) and geometry (provided in the benchmark specification) as well as the uncertainties in code models can be accounted for to produce results with calculational uncertainties and compare them with the measurement uncertainties. Therefore, uncertainty analysis exercises were defined for the void distribution and critical power phases of the BFBT benchmark. This specification is intended to provide definitions related to UA/SA methods, sensitivity/uncertainty parameters, suggested probability distribution functions (PDFs) of sensitivity parameters, and selected
Automatic feature-based grouping during multiple object tracking.
Erlikhman, Gennady; Keane, Brian P; Mettler, Everett; Horowitz, Todd S; Kellman, Philip J
2013-12-01
Contour interpolation automatically binds targets with distractors to impair multiple object tracking (Keane, Mettler, Tsoi, & Kellman, 2011). Is interpolation special in this regard or can other features produce the same effect? To address this question, we examined the influence of eight features on tracking: color, contrast polarity, orientation, size, shape, depth, interpolation, and a combination (shape, color, size). In each case, subjects tracked 4 of 8 objects that began as undifferentiated shapes, changed features as motion began (to enable grouping), and returned to their undifferentiated states before halting. We found that intertarget grouping improved performance for all feature types except orientation and interpolation (Experiment 1 and Experiment 2). Most importantly, target-distractor grouping impaired performance for color, size, shape, combination, and interpolation. The impairments were, at times, large (>15% decrement in accuracy) and occurred relative to a homogeneous condition in which all objects had the same features at each moment of a trial (Experiment 2), and relative to a "diversity" condition in which targets and distractors had different features at each moment (Experiment 3). We conclude that feature-based grouping occurs for a variety of features besides interpolation, even when irrelevant to task instructions and contrary to the task demands, suggesting that interpolation is not unique in promoting automatic grouping in tracking tasks. Our results also imply that various kinds of features are encoded automatically and in parallel during tracking.
Efficient perovskite light-emitting diodes featuring nanometre-sized crystallites
Xiao, Zhengguo; Kerner, Ross A.; Zhao, Lianfeng; Tran, Nhu L.; Lee, Kyung Min; Koh, Tae-Wook; Scholes, Gregory D.; Rand, Barry P.
2017-01-01
Organic-inorganic hybrid perovskite materials are emerging as highly attractive semiconductors for use in optoelectronics. In addition to their use in photovoltaics, perovskites are promising for realizing light-emitting diodes (LEDs) due to their high colour purity, low non-radiative recombination rates and tunable bandgap. Here, we report highly efficient perovskite LEDs enabled through the formation of self-assembled, nanometre-sized crystallites. Large-group ammonium halides added to the perovskite precursor solution act as a surfactant that dramatically constrains the growth of 3D perovskite grains during film forming, producing crystallites with dimensions as small as 10 nm and film roughness of less than 1 nm. Coating these nanometre-sized perovskite grains with longer-chain organic cations yields highly efficient emitters, resulting in LEDs that operate with external quantum efficiencies of 10.4% for the methylammonium lead iodide system and 9.3% for the methylammonium lead bromide system, with significantly improved shelf and operational stability.
Development of Property Models with Uncertainty Estimate for Process Design under Uncertainty
DEFF Research Database (Denmark)
Hukkerikar, Amol; Sarup, Bent; Abildskov, Jens
more reliable predictions with a new and improved set of model parameters for GC (group contribution) based and CI (atom connectivity index) based models and to quantify the uncertainties in the estimated property values from a process design point-of-view. This includes: (i) parameter estimation using....... The comparison of model prediction uncertainties with reported range of measurement uncertainties is presented for the properties with related available data. The application of the developed methodology to quantify the effect of these uncertainties on the design of different unit operations (distillation column......, the developed methodology can be used to quantify the sensitivity of process design to uncertainties in property estimates; obtain rationally the risk/safety factors in process design; and identify additional experimentation needs in order to reduce most critical uncertainties....
DEFF Research Database (Denmark)
Thomsen, Nanna Isbak; Binning, Philip John; McKnight, Ursula S.
2016-01-01
the most important site-specific features and processes that may affect the contaminant transport behavior at the site. However, the development of a CSM will always be associated with uncertainties due to limited data and lack of understanding of the site conditions. CSM uncertainty is often found...... to be a major source of model error and it should therefore be accounted for when evaluating uncertainties in risk assessments. We present a Bayesian belief network (BBN) approach for constructing CSMs and assessing their uncertainty at contaminated sites. BBNs are graphical probabilistic models...... that are effective for integrating quantitative and qualitative information, and thus can strengthen decisions when empirical data are lacking. The proposed BBN approach facilitates a systematic construction of multiple CSMs, and then determines the belief in each CSM using a variety of data types and/or expert...
Working memory for visual features and conjunctions in schizophrenia.
Gold, James M; Wilk, Christopher M; McMahon, Robert P; Buchanan, Robert W; Luck, Steven J
2003-02-01
The visual working memory (WM) storage capacity of patients with schizophrenia was investigated using a change detection paradigm. Participants were presented with 2, 3, 4, or 6 colored bars, with testing of both single-feature (color, orientation) and feature-conjunction conditions. Patients performed significantly worse than controls at all set sizes but demonstrated normal feature binding. Unlike controls, patients' WM capacity declined at set size 6 relative to set size 4. Impairments with subcapacity arrays suggest a deficit in task-set maintenance; greater impairment for supercapacity set sizes suggests a deficit in the ability to selectively encode information for WM storage. Thus, the WM impairment in schizophrenia appears to be a consequence of attentional deficits rather than a reduction in storage capacity.
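Storage capacity in change-detection paradigms such as this one is commonly summarized with Cowan's K, computed from hit and false-alarm rates at each set size. A minimal sketch with invented rates (not the study's data):

```python
# Cowan's K for single-probe change detection:
#   K = set_size * (hit_rate - false_alarm_rate)
def cowans_k(set_size, hit_rate, false_alarm_rate):
    return set_size * (hit_rate - false_alarm_rate)

# Illustrative performance at three set sizes; K plateaus near the true
# capacity once the set size exceeds it.
for n, h, fa in [(2, 0.95, 0.05), (4, 0.85, 0.10), (6, 0.70, 0.15)]:
    print(f"set size {n}: K = {cowans_k(n, h, fa):.2f}")
```

A capacity deficit shows up as a lower K plateau, whereas the encoding-selection deficit described above shows up as K declining at supercapacity set sizes.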
MRI features of peripheral traumatic neuromas
Energy Technology Data Exchange (ETDEWEB)
Ahlawat, Shivani [Johns Hopkins University School of Medicine, Musculoskeletal Radiology Section, The Russell H. Morgan Department of Radiology and Radiological Science, Baltimore, MD (United States); Belzberg, Allan J. [The Johns Hopkins Hospital, Department of Neurosurgery, Baltimore, MD (United States); Montgomery, Elizabeth A. [The Johns Hopkins Hospital, Pathology, Oncology and Orthopedic Surgery, Baltimore, MD (United States); Fayad, Laura M. [Department of Orthopedic Surgery, Department of Radiology and Radiological Science, Musculoskeletal Imaging Section Chief, The Johns Hopkins Medical Institutions, Baltimore, MD (United States); The Johns Hopkins Medical Institutions, Department of Orthopedic Surgery, Baltimore, MD (United States)
2016-04-15
To describe the MRI appearance of traumatic neuromas on non-contrast and contrast-enhanced MRI sequences. This IRB-approved, HIPAA-compliant study retrospectively reviewed 13 subjects with 20 neuromas. Two observers reviewed pre-operative MRIs for imaging features of neuroma (size, margin, capsule, signal intensity, heterogeneity, enhancement, neurogenic features and denervation) and the nerve segment distal to the traumatic neuroma. Descriptive statistics were reported. Pearson's correlation was used to examine the relationship between the size of the neuroma and that of the parent nerve. Of 20 neuromas, 13 were neuromas-in-continuity and seven were end-bulb neuromas. Neuromas had a mean size of 1.5 cm (range 0.6-4.8 cm); 100 % (20/20) had indistinct margins and 0 % (0/20) had a capsule. Eighty-eight percent (7/8) showed enhancement. All (20/20) had a tail sign; 35 % (7/20) demonstrated discontinuity from the parent nerve. None showed a target sign. There was a moderate positive correlation (r = 0.68, p = 0.001), with larger neuromas arising from larger parent nerves. MRI evaluation of the nerve segment distal to the neuroma showed increased size (mean 0.5 cm ± 0.4 cm) compared to the parent nerve (mean 0.3 cm ± 0.2 cm). Since MRI features of neuromas include enhancement, intravenous contrast medium cannot be used to distinguish neuromas from peripheral nerve sheath tumours. The clinical history of trauma and the lack of a target sign are likely the most useful clues. (orig.)
Ming, Fei; Wang, Dong; Shi, Wei-Nan; Huang, Ai-Jun; Sun, Wen-Yang; Ye, Liu
2018-04-01
The uncertainty principle is recognized as an elementary ingredient of quantum theory and sets a significant bound on predicting the outcomes of measurements of a pair of incompatible observables. In this work, we develop dynamical features of quantum-memory-assisted entropic uncertainty relations (QMA-EUR) in a two-qubit Heisenberg XXZ spin chain with an inhomogeneous magnetic field. We specifically derive the dynamical evolutions of the entropic uncertainty with respect to the measurement in the Heisenberg XXZ model when spin A is initially correlated with quantum memory B. It has been found that a larger coupling strength J of the ferromagnetic (J < 0) and antiferromagnetic (J > 0) chains can effectively degrade the measurement uncertainty. Besides, it turns out that higher temperature can induce inflation of the uncertainty because the thermal entanglement becomes relatively weak in this scenario, and there exists a distinct dynamical behavior of the uncertainty when an inhomogeneous magnetic field emerges. With a growing magnetic field |B|, the variation of the entropic uncertainty is non-monotonic. Meanwhile, we compare several different optimized bounds with the initial bound proposed by Berta et al. and conclude that Adabi et al.'s result is optimal. Moreover, we also investigate the mixedness of the system of interest, which is closely associated with the uncertainty. Remarkably, we put forward a possible physical interpretation of the evolutionary phenomenon of the uncertainty. Finally, we take advantage of a local filtering operation to steer the magnitude of the uncertainty. Therefore, our explorations may shed light on entropic uncertainty in the Heisenberg XXZ model and hence be of importance to quantum precision measurement in solid-state quantum information processing.
Failure probability under parameter uncertainty.
Gerrard, R; Tsanakas, A
2011-05-01
In many problems of risk analysis, failure is equivalent to the event of a random risk factor exceeding a given threshold. Failure probabilities can be controlled if a decision-maker is able to set the threshold at an appropriate level. This abstract situation applies, for example, to environmental risks with infrastructure controls; to supply chain risks with inventory controls; and to insurance solvency risks with capital controls. However, uncertainty around the distribution of the risk factor implies that parameter error will be present and the measures taken to control failure probabilities may not be effective. We show that parameter uncertainty increases the probability (understood as expected frequency) of failures. For a large class of loss distributions, arising from increasing transformations of location-scale families (including the log-normal, Weibull, and Pareto distributions), the article shows that failure probabilities can be exactly calculated, as they are independent of the true (but unknown) parameters. Hence it is possible to obtain an explicit measure of the effect of parameter uncertainty on failure probability. Failure probability can be controlled in two different ways: (1) by reducing the nominal required failure probability, depending on the size of the available data set, and (2) by modifying the distribution itself that is used to calculate the risk control. Approach (1) corresponds to a frequentist/regulatory view of probability, while approach (2) is consistent with a Bayesian/personalistic view. We furthermore show that the two approaches are consistent in achieving the required failure probability. Finally, we briefly discuss the effects of data pooling and its systemic risk implications. © 2010 Society for Risk Analysis.
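The effect described above can be illustrated numerically. The following sketch is our own construction with invented parameter values (a standard log-normal loss, a 1 % nominal failure probability, 20 data points), not the paper's exact calculation: it sets the exceedance threshold from estimated parameters and shows that the resulting expected failure frequency exceeds the nominal target.

```python
# Toy illustration (assumed parameters, not from the paper): setting a 1 %
# exceedance threshold from estimated log-normal parameters inflates the
# expected frequency of failures above the nominal level.
import math
import random

random.seed(0)
MU, SIGMA = 0.0, 1.0        # true (unknown to the decision-maker) parameters
P_NOMINAL = 0.01            # required failure probability
Z = 2.3263478740408408      # standard-normal quantile at 1 - 0.01
N_DATA, N_TRIALS = 20, 5000

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

freq = 0.0
for _ in range(N_TRIALS):
    sample = [random.gauss(MU, SIGMA) for _ in range(N_DATA)]  # log-losses
    m = sum(sample) / N_DATA
    s = math.sqrt(sum((x - m) ** 2 for x in sample) / (N_DATA - 1))
    t = m + s * Z                       # threshold set from estimates
    # true probability that a new log-loss exceeds the estimated threshold
    freq += 1.0 - norm_cdf((t - MU) / SIGMA)
freq /= N_TRIALS

print(f"nominal {P_NOMINAL:.3f}, expected failure frequency {freq:.3f}")
```

With only 20 observations behind the estimate, the printed expected frequency comes out noticeably above the nominal 1 %, in line with the paper's qualitative result.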
Results from the Application of Uncertainty Methods in the CSNI Uncertainty Methods Study (UMS)
International Nuclear Information System (INIS)
Glaeser, H.
2008-01-01
Within licensing procedures there is an incentive to replace the conservative requirements for code application by a 'best estimate' concept supplemented by an uncertainty analysis to account for predictive uncertainties of code results. Methods have been developed to quantify these uncertainties. The Uncertainty Methods Study (UMS) Group, following a mandate from CSNI, has compared five methods for calculating the uncertainty in the predictions of advanced 'best estimate' thermal-hydraulic codes. Most of the methods identify and combine input uncertainties. The major differences between the predictions of the methods came from the choice of uncertain parameters and the quantification of the input uncertainties, i.e. the width of the uncertainty ranges. Therefore, suitable experimental and analytical information has to be selected to specify these uncertainty ranges or distributions. After the closure of the Uncertainty Methods Study (UMS) and after the report was issued, comparison calculations of experiment LSTF-SB-CL-18 were performed by the University of Pisa using different versions of the RELAP5 code. It turned out that the version used by two of the participants calculated a 170 K higher peak clad temperature compared with other versions using the same input deck. This may contribute to the differences in the upper limits of the uncertainty ranges.
Rains, Stephen A; Tukachinsky, Riva
2015-01-01
Uncertainty management theory outlines the processes through which individuals cope with health-related uncertainty. Information seeking has been frequently documented as an important uncertainty management strategy. The reported study investigates exposure to specific types of medical information during a search, and one's information-processing orientation as predictors of successful uncertainty management (i.e., a reduction in the discrepancy between the level of uncertainty one feels and the level one desires). A lab study was conducted in which participants were primed to feel more or less certain about skin cancer and then were allowed to search the World Wide Web for skin cancer information. Participants' search behavior was recorded and content analyzed. The results indicate that exposure to two health communication constructs that pervade medical forms of uncertainty (i.e., severity and susceptibility) and information-processing orientation predicted uncertainty management success.
Conditional uncertainty principle
Gour, Gilad; Grudka, Andrzej; Horodecki, Michał; Kłobus, Waldemar; Łodyga, Justyna; Narasimhachar, Varun
2018-04-01
We develop a general operational framework that formalizes the concept of conditional uncertainty in a measure-independent fashion. Our formalism is built upon a mathematical relation which we call conditional majorization. We define conditional majorization and, for the case of classical memory, we provide its thorough characterization in terms of monotones, i.e., functions that preserve the partial order under conditional majorization. We demonstrate the application of this framework by deriving two types of memory-assisted uncertainty relations, (1) a monotone-based conditional uncertainty relation and (2) a universal measure-independent conditional uncertainty relation, both of which set a lower bound on the minimal uncertainty that Bob has about Alice's pair of incompatible measurements, conditioned on arbitrary measurement that Bob makes on his own system. We next compare the obtained relations with their existing entropic counterparts and find that they are at least independent.
Uncertainty and Cognitive Control
Directory of Open Access Journals (Sweden)
Faisal eMushtaq
2011-10-01
Full Text Available A growing trend of neuroimaging, behavioural and computational research has investigated the topic of outcome uncertainty in decision-making. Although evidence to date indicates that humans are very effective in learning to adapt to uncertain situations, the nature of the specific cognitive processes involved in the adaptation to uncertainty is still a matter of debate. In this article, we review evidence suggesting that cognitive control processes are at the heart of uncertainty in decision-making contexts. Available evidence suggests that: (1) there is a strong conceptual overlap between the constructs of uncertainty and cognitive control; (2) there is a remarkable overlap between the neural networks associated with uncertainty and the brain networks subserving cognitive control; (3) the perception and estimation of uncertainty might play a key role in monitoring processes and the evaluation of the need for control; (4) potential interactions between uncertainty and cognitive control might play a significant role in several affective disorders.
International Nuclear Information System (INIS)
Zwermann, W.; Krzykacz-Hausmann, B.; Gallner, L.; Klein, M.; Pautz, A.; Velkov, K.
2012-01-01
Sampling-based uncertainty and sensitivity analyses due to epistemic input uncertainties, i.e. to an incomplete knowledge of uncertain input parameters, can be performed with arbitrary application programs to solve the physical problem under consideration. For the description of steady-state particle transport, direct simulations of the microscopic processes with Monte Carlo codes are often used. This introduces an additional source of uncertainty, the aleatoric sampling uncertainty, which is due to the randomness of the simulation process performed by sampling, and which adds to the total combined output sampling uncertainty. So far, this aleatoric part of uncertainty has been minimized by running a sufficiently large number of Monte Carlo histories for each sample calculation, thus making its impact negligible compared to the impact from sampling the epistemic uncertainties. Obviously, this process may incur high computational costs. The present paper shows that in many applications reliable epistemic uncertainty results can also be obtained with substantially lower computational effort by performing and analyzing two appropriately generated series of samples with a much smaller number of Monte Carlo histories each. The method is applied along with the nuclear data uncertainty and sensitivity code package XSUSA in combination with the Monte Carlo transport code KENO-Va to various critical assemblies and a full-scale reactor calculation. It is shown that the proposed method yields output uncertainties and sensitivities equivalent to the traditional approach, while reducing computing time by factors on the order of 100. (authors)
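The two-series idea can be sketched in miniature (a toy Gaussian model of our own, not the actual XSUSA/KENO-Va workflow): both result series reuse the same epistemic parameter draws but carry independent aleatoric noise, so the sample covariance of the paired results isolates the epistemic variance, while the aleatoric noise averages out.

```python
# Toy sketch of separating epistemic from aleatoric sampling uncertainty
# with two result series. Noise levels are assumed for illustration only.
import random

random.seed(1)
N = 200
SIG_EPI, SIG_ALE = 1.0, 0.5   # assumed epistemic / aleatoric std devs

theta = [random.gauss(0.0, SIG_EPI) for _ in range(N)]   # epistemic draws
a = [t + random.gauss(0.0, SIG_ALE) for t in theta]      # series 1
b = [t + random.gauss(0.0, SIG_ALE) for t in theta]      # series 2, new noise

ma = sum(a) / N
mb = sum(b) / N
# Aleatoric noise is independent across the two series, so it drops out of
# the covariance; what remains estimates the epistemic variance alone.
cov = sum((x - ma) * (y - mb) for x, y in zip(a, b)) / (N - 1)
print(f"epistemic variance estimate: {cov:.2f} (true {SIG_EPI**2:.2f})")
```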
Uncertainty, probability and information-gaps
International Nuclear Information System (INIS)
Ben-Haim, Yakov
2004-01-01
This paper discusses two main ideas. First, we focus on info-gap uncertainty, as distinct from probability. Info-gap theory is especially suited for modelling and managing uncertainty in system models: we invest all our knowledge in formulating the best possible model; this leaves the modeller with very faulty and fragmentary information about the variation of reality around that optimal model. Second, we examine the interdependence between uncertainty modelling and decision-making. Good uncertainty modelling requires contact with the end-use, namely, with the decision-making application of the uncertainty model. The most important avenue of uncertainty-propagation is from initial data- and model-uncertainties into uncertainty in the decision-domain. Two questions arise. Is the decision robust to the initial uncertainties? Is the decision prone to opportune windfall success? We apply info-gap robustness and opportunity functions to the analysis of representation and propagation of uncertainty in several of the Sandia Challenge Problems
Fischer, Andreas
2016-11-01
Optical flow velocity measurements are important for understanding the complex behavior of flows. Although a huge variety of methods exist, they are all based on either a Doppler or a time-of-flight measurement principle. Doppler velocimetry evaluates the velocity-dependent frequency shift of light scattered at a moving particle, whereas time-of-flight velocimetry evaluates the traveled distance of a scattering particle per time interval. Regarding the aim of achieving a minimal measurement uncertainty, it is unclear whether one principle allows lower uncertainties to be achieved or whether both principles can achieve equal uncertainties. For this reason, the natural, fundamental uncertainty limit according to Heisenberg's uncertainty principle is derived for the Doppler and time-of-flight measurement principles, respectively. The obtained limits of the velocity uncertainty are qualitatively identical, showing, e.g., a direct proportionality to the absolute value of the velocity to the power of 3/2 and an indirect proportionality to the square root of the scattered light power. Hence, both measurement principles have identical potentials regarding the fundamental uncertainty limit due to the quantum mechanical behavior of photons. This fundamental limit can be attained (at least asymptotically) in reality either with Doppler or time-of-flight methods, because the respective Cramér-Rao bounds for dominating photon shot noise, which is modeled as white Poissonian noise, are identical with the conclusions from Heisenberg's uncertainty principle.
Influence of wind energy forecast in deterministic and probabilistic sizing of reserves
Energy Technology Data Exchange (ETDEWEB)
Gil, A.; Torre, M. de la; Dominguez, T.; Rivas, R. [Red Electrica de Espana (REE), Madrid (Spain). Dept. Centro de Control Electrico
2010-07-01
One of the challenges in large-scale wind energy integration in electrical systems is coping with wind forecast uncertainties at the time of sizing generation reserves. These reserves must be sized large enough that they do not compromise security of supply or the balance of the system, but economic efficiency must also be kept in mind. This paper describes two methods of sizing spinning reserves taking into account wind forecast uncertainties: a deterministic method using a probabilistic wind forecast, and a probabilistic method using stochastic variables. The deterministic method calculates the spinning reserve needed by adding components, each of them sized to overcome a single uncertainty: demand errors, the largest thermal generation loss and wind forecast errors. The probabilistic method assumes that demand forecast errors, short-term thermal group unavailability and wind forecast errors are independent stochastic variables and calculates the probability density function of the three variables combined. These methods are being used in the case of the Spanish peninsular system, in which wind energy accounted for 14% of the total electrical energy produced in the year 2009 and which is one of the systems in the world with the highest wind penetration levels. (orig.)
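The probabilistic approach can be sketched as follows. All distributions and magnitudes below are invented for illustration (they are not REE's actual models): the three independent error sources are combined by Monte Carlo, and the reserve is read off a coverage quantile of the combined deviation.

```python
# Minimal sketch of probabilistic reserve sizing: combine demand-forecast
# error, the loss of the biggest thermal unit, and wind-forecast error, then
# size the reserve at a chosen coverage quantile. Numbers are assumptions.
import random

random.seed(2)
N = 100_000
draws = []
for _ in range(N):
    demand_err = random.gauss(0.0, 300.0)                  # MW, assumed
    outage = 1000.0 if random.random() < 0.02 else 0.0     # biggest unit trip
    wind_err = random.gauss(0.0, 500.0)                    # MW, assumed
    draws.append(demand_err + outage + wind_err)

draws.sort()
reserve = draws[int(0.999 * N)]        # cover 99.9 % of combined deviations
print(f"reserve for 99.9 % coverage: {reserve:.0f} MW")
```

A deterministic method would instead add a component per source (e.g. demand margin + biggest unit + wind margin), which typically yields a larger, more conservative figure than the quantile of the combined distribution.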
Towards quantifying uncertainty in predictions of Amazon 'dieback'.
Huntingford, Chris; Fisher, Rosie A; Mercado, Lina; Booth, Ben B B; Sitch, Stephen; Harris, Phil P; Cox, Peter M; Jones, Chris D; Betts, Richard A; Malhi, Yadvinder; Harris, Glen R; Collins, Mat; Moorcroft, Paul
2008-05-27
Simulations with the Hadley Centre general circulation model (HadCM3), including a carbon cycle model and forced by a 'business-as-usual' emissions scenario, predict a rapid loss of Amazonian rainforest from the middle of this century onwards. The robustness of this projection to both uncertainty in physical climate drivers and the formulation of the land surface scheme is investigated. We analyse how the modelled vegetation cover in Amazonia responds to (i) uncertainty in the parameters specified in the atmosphere component of HadCM3 and their associated influence on predicted surface climate. We then enhance the land surface description and (ii) implement a multilayer canopy light interception model and compare it with the simple 'big-leaf' approach used in the original simulations. Finally, (iii) we investigate the effect of changing the method of simulating vegetation dynamics from an area-based model (TRIFFID) to a more complex size- and age-structured approximation of an individual-based model (ecosystem demography). We find that the loss of Amazonian rainforest is robust across the climate uncertainty explored by perturbed physics simulations covering a wide range of global climate sensitivity. The introduction of the refined light interception model leads to an increase in simulated gross plant carbon uptake for the present day, but, with altered respiration, the net effect is a decrease in net primary productivity. However, this does not significantly affect the carbon loss from vegetation and soil as a consequence of future simulated depletion in soil moisture; the Amazon forest is still lost. The introduction of the more sophisticated dynamic vegetation model reduces but does not halt the rate of forest dieback. The potential for human-induced climate change to trigger the loss of Amazon rainforest appears robust within the context of the uncertainties explored in this paper. Some further uncertainties should be explored, particularly with respect to the
CSIR Research Space (South Africa)
Salmon, BP
2017-07-01
Full Text Available This paper investigates the effect that the length of a temporal sliding window has on the success of detecting land cover change. It is shown that using a short-time Fourier transform as a feature extraction method provides meaningful, robust input to a machine learning method. In theory...
A Fatigue Crack Size Evaluation Method Based on Lamb Wave Simulation and Limited Experimental Data
Directory of Open Access Journals (Sweden)
Jingjing He
2017-09-01
Full Text Available This paper presents a systematic and general method for Lamb wave-based crack size quantification using finite element simulations and Bayesian updating. The method consists of the construction of a baseline quantification model using finite element simulation data and Bayesian updating with limited Lamb wave data from the target structure. The baseline model correlates two proposed damage-sensitive features, namely the normalized amplitude and phase change, with the crack length through a response surface model. The two damage-sensitive features are extracted from the first received S0 mode wave package. The parameters of the baseline model are estimated using finite element simulation data. To account for uncertainties from numerical modeling, geometry, material and manufacturing between the baseline model and the target model, a Bayesian method is employed to update the baseline model with a few measurements acquired from the actual target structure. A rigorous validation is made using in-situ fatigue testing and Lamb wave data from coupon specimens and realistic lap-joint components. The effectiveness and accuracy of the proposed method are demonstrated under different loading and damage conditions.
International Nuclear Information System (INIS)
Tyobeka, Bismark; Reitsma, Frederik; Ivanov, Kostadin
2011-01-01
The continued development of High Temperature Gas Cooled Reactors (HTGRs) requires verification of HTGR design and safety features with reliable high fidelity physics models and robust, efficient, and accurate codes. The predictive capability of coupled neutronics/thermal hydraulics and depletion simulations for reactor design and safety analysis can be assessed with sensitivity analysis and uncertainty analysis methods. In order to benefit from recent advances in modeling and simulation and the availability of new covariance data (nuclear data uncertainties) extensive sensitivity and uncertainty studies are needed for quantification of the impact of different sources of uncertainties on the design and safety parameters of HTGRs. Uncertainty and sensitivity studies are an essential component of any significant effort in data and simulation improvement. In February 2009, the Technical Working Group on Gas-Cooled Reactors recommended that the proposed IAEA Coordinated Research Project (CRP) on the HTGR Uncertainty Analysis in Modeling be implemented. In the paper the current status and plan are presented. The CRP will also benefit from interactions with the currently ongoing OECD/NEA Light Water Reactor (LWR) UAM benchmark activity by taking into consideration the peculiarities of HTGR designs and simulation requirements. (author)
Directory of Open Access Journals (Sweden)
Pradeep Pillai
Full Text Available We utilize a standard competition-colonization metapopulation model in order to study the evolutionary assembly of species. Based on earlier work showing how models assuming strict competitive hierarchies will likely lead to runaway evolution and self-extinction for all species, we adopt a continuous competition function that allows for levels of uncertainty in the outcome of competition. We then, by extending the standard patch-dynamic metapopulation model in order to include evolutionary dynamics, allow for the coevolution of species into stable communities composed of species with distinct limiting similarities. Runaway evolution towards stochastic extinction then becomes a limiting case controlled by the level of competitive uncertainty. We demonstrate how intermediate competitive uncertainty maximizes the equilibrium species richness as well as maximizes the adaptive radiation and self-assembly of species under adaptive dynamics with mutations of non-negligible size. By reconciling competition-colonization tradeoff theory with co-evolutionary dynamics, our results reveal the importance of intermediate levels of competitive uncertainty for the evolutionary assembly of species.
Evaluating the uncertainty of input quantities in measurement models
Possolo, Antonio; Elster, Clemens
2014-06-01
The Guide to the Expression of Uncertainty in Measurement (GUM) gives guidance about how values and uncertainties should be assigned to the input quantities that appear in measurement models. This contribution offers a concrete proposal for how that guidance may be updated in light of the advances in the evaluation and expression of measurement uncertainty that were made in the course of the twenty years that have elapsed since the publication of the GUM, and also considering situations that the GUM does not yet contemplate. Our motivation is the ongoing conversation about a new edition of the GUM. While generally we favour a Bayesian approach to uncertainty evaluation, we also recognize the value that other approaches may bring to the problems considered here, and focus on methods for uncertainty evaluation and propagation that are widely applicable, including to cases that the GUM has not yet addressed. In addition to Bayesian methods, we discuss maximum-likelihood estimation, robust statistical methods, and measurement models where values of nominal properties play the same role that input quantities play in traditional models. We illustrate these general-purpose techniques in concrete examples, employing data sets that are realistic but that also are of conveniently small sizes. The supplementary material available online lists the R computer code that we have used to produce these examples (stacks.iop.org/Met/51/3/339/mmedia). Although we strive to stay close to clause 4 of the GUM, which addresses the evaluation of uncertainty for input quantities, we depart from it as we review the classes of measurement models that we believe are generally useful in contemporary measurement science. We also considerably expand and update the treatment that the GUM gives to Type B evaluations of uncertainty: reviewing the state-of-the-art, disciplined approach to the elicitation of expert knowledge, and its encapsulation in probability distributions that are usable in
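A minimal, self-contained illustration of the kind of uncertainty evaluation discussed above (our own example with invented numbers, not one from the paper): Monte Carlo propagation of input-quantity uncertainties through a measurement model, in the spirit of GUM Supplement 1. The model is R = V / I with V ~ N(5.0, 0.01) and I ~ N(2.0, 0.004), assumed independent.

```python
# Monte Carlo propagation of input uncertainties through the measurement
# model R = V / I. All input values and uncertainties are assumptions.
import math
import random

random.seed(3)
N = 200_000
rs = []
for _ in range(N):
    v = random.gauss(5.0, 0.01)    # voltage sample, V
    i = random.gauss(2.0, 0.004)   # current sample, A
    rs.append(v / i)

mean = sum(rs) / N
u = math.sqrt(sum((r - mean) ** 2 for r in rs) / (N - 1))
print(f"R = {mean:.4f} ohm, standard uncertainty u(R) = {u:.4f} ohm")
```

For this nearly linear model the Monte Carlo result agrees with the first-order GUM law of propagation, u²(R) = (1/I)²u²(V) + (V/I²)²u²(I) ≈ (0.007 Ω)².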
Directory of Open Access Journals (Sweden)
Mawardi Bahri
2017-01-01
Full Text Available The continuous quaternion wavelet transform (CQWT is a generalization of the classical continuous wavelet transform within the context of quaternion algebra. First of all, we show that the directional quaternion Fourier transform (QFT uncertainty principle can be obtained using the component-wise QFT uncertainty principle. Based on this method, the directional QFT uncertainty principle using representation of polar coordinate form is easily derived. We derive a variation on uncertainty principle related to the QFT. We state that the CQWT of a quaternion function can be written in terms of the QFT and obtain a variation on uncertainty principle related to the CQWT. Finally, we apply the extended uncertainty principles and properties of the CQWT to establish logarithmic uncertainty principles related to generalized transform.
Clinicopathological features of patients with breast cancer aged 70 years or over.
Aytekin, Aydin; Karatas, Fatih; Sahin, Suleyman; Erdem, Gokmen U; Altundag, Kadri
2017-01-01
The risk of breast cancer (BC) increases in parallel with increasing age. Despite the increased disease burden in elderly patients, there is still great uncertainty regarding "how to manage" BC in the aging population. The purpose of this study was to investigate the clinicopathological features and treatment approaches of patients with BC aged 70 years or over. The medical records of 4413 patients with BC followed between 1994-2015 were retrospectively analyzed. Of the 4413 patients, 238 with stage I to III disease aged 70 years or over at BC diagnosis were enrolled into this study. Patients were divided into 2 groups according to age as group 1 (70-79 years, N=192) and group 2 (80 or over, N=46). Clinicopathological features of patients including tumor histology, grade, estrogen (ER) and progesterone receptor (PgR) status, human epidermal growth factor receptor 2 (HER2) status, tumor size, lymph node involvement (LNI), lymphovascular invasion (LVI), perineural invasion (PNI), clinical stage, type of surgery, treatments and comorbid diseases were evaluated. The median age was 74 for group 1 (range 70-79) and 82 for group 2 (range 80-92). Excluding tumor size and grade, no statistically significant difference was found between the two groups in histopathological characteristics. Patients in group 2 more commonly had a larger T stage (T4), and less frequently presented with grade I tumors (p=0.014 and p=0.044, respectively). Modified radical mastectomy and adjuvant chemotherapy were more commonly performed in group 1 (p=0.001 and p=0.001, respectively). In contrast, neoadjuvant treatment was more frequently applied in group 2 (p=0.003). There was no difference in disease-free survival (DFS) between the groups (p=0.012); however, median overall survival (OS) was significantly higher in group 1 (p=0.03). Excluding tumor grade and tumor size, both groups had similar histopathological features. However, patients aged between 70-79 years were likely to
Size and stochasticity in irrigated social-ecological systems
Puy, Arnald; Muneepeerakul, Rachata; Balbo, Andrea L.
2017-03-01
This paper presents a systematic study of the relation between the size of irrigation systems and the management of uncertainty. We specifically focus on studying, through a stylized theoretical model, how stochasticity in water availability and taxation interacts with the stochastic behavior of the population within irrigation systems. Our results indicate the existence of two key population thresholds for the sustainability of any irrigation system: a critical population size required to keep the irrigation system operative, and N*, the population threshold at which the incentive to work inside the irrigation system equals the incentive to work elsewhere. Crossing the critical population size irretrievably leads to system collapse. N* is the population level with a sub-optimal per capita payoff towards which irrigation systems tend to gravitate. When subjected to strong stochasticity in water availability or taxation, irrigation systems might suffer sharp population drops and irreversibly disintegrate into a system collapse, via a mechanism we dub ‘collapse trap’. Our conceptual study establishes the basis for further work aiming at appraising the dynamics between size and stochasticity in irrigation systems, whose understanding is key for devising mitigation and adaptation measures to ensure their sustainability in the face of increasing and inevitable uncertainty.
On the relationship between aerosol model uncertainty and radiative forcing uncertainty.
Lee, Lindsay A; Reddington, Carly L; Carslaw, Kenneth S
2016-05-24
The largest uncertainty in the historical radiative forcing of climate is caused by the interaction of aerosols with clouds. Historical forcing is not a directly measurable quantity, so reliable assessments depend on the development of global models of aerosols and clouds that are well constrained by observations. However, there has been no systematic assessment of how reduction in the uncertainty of global aerosol models will feed through to the uncertainty in the predicted forcing. We use a global model perturbed parameter ensemble to show that tight observational constraint of aerosol concentrations in the model has a relatively small effect on the aerosol-related uncertainty in the calculated forcing between preindustrial and present-day periods. One factor is the low sensitivity of present-day aerosol to natural emissions that determine the preindustrial aerosol state. However, the major cause of the weak constraint is that the full uncertainty space of the model generates a large number of model variants that are equally acceptable compared to present-day aerosol observations. The narrow range of aerosol concentrations in the observationally constrained model gives the impression of low aerosol model uncertainty. However, these multiple "equifinal" models predict a wide range of forcings. To make progress, we need to develop a much deeper understanding of model uncertainty and ways to use observations to constrain it. Equifinality in the aerosol model means that tuning of a small number of model processes to achieve model-observation agreement could give a misleading impression of model robustness.
Directory of Open Access Journals (Sweden)
Jinzhao Shi
2016-01-01
Full Text Available With a stochastic price-dependent market demand, this paper investigates how demand uncertainty and capital constraint affect a retailer's integrated ordering and pricing policies for seasonal products. The capital-constrained retailer is normalized to have zero capital endowment while it can be financed by an external bank. The problems are studied under a low and a high demand uncertainty scenario, respectively. Results show that when the demand uncertainty level is relatively low, the retailer faced with demand uncertainty always sets a lower price than the riskless one, while its order quantity may be smaller or larger than the riskless retailer's, depending on the market size. When adding a capital constraint, the retailer will strictly prefer a higher price but smaller quantity policy. However, in a high demand uncertainty scenario, the impacts are more intricate. The retailer faced with demand uncertainty will always order a larger quantity than the riskless one if the demand uncertainty level is high enough (above a critical value), while the capital-constrained retailer is likely to set a lower price than the well-funded one when the demand uncertainty level falls within a specific interval. Therefore, it can be further concluded that the impact of capital constraint on the retailer's pricing decision can be influenced by different demand uncertainty levels.
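The low-uncertainty pricing result can be sketched with a stylized price-setting newsvendor. All numbers below are invented for illustration (the paper's model is more general): demand is additive, D(p) = a - b·p + ε with ε uniform on [-20, 20], and a grid search over price and order quantity shows the optimal price under uncertainty at or below the riskless price.

```python
# Price-setting newsvendor sketch with additive demand uncertainty.
# Parameters a, b, c and the noise range are assumptions for illustration.
A, B, C = 100.0, 2.0, 10.0                     # demand intercept/slope, unit cost
EPS = [e / 2.0 for e in range(-40, 41)]        # uniform grid on [-20, 20]

def expected_profit(p, q):
    # Average profit over the demand-noise grid: revenue on sold units
    # minus procurement cost of the full order.
    total = 0.0
    for e in EPS:
        d = max(A - B * p + e, 0.0)
        total += p * min(q, d) - C * q
    return total / len(EPS)

p_riskless = (A / B + C) / 2.0                  # maximizes (p - c)(a - b*p)

best_p, best_q, best_v = None, None, float("-inf")
for pi in range(20, 46):                        # price grid 20..45
    for qi in range(0, 101):                    # quantity grid 0..100
        v = expected_profit(float(pi), float(qi))
        if v > best_v:
            best_p, best_q, best_v = float(pi), float(qi), v

print(f"riskless price {p_riskless:.1f}, price under uncertainty {best_p:.1f}")
```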
Decision-making under great uncertainty
International Nuclear Information System (INIS)
Hansson, S.O.
1992-01-01
Five types of decision-uncertainty are distinguished: uncertainty of consequences, of values, of demarcation, of reliance, and of co-ordination. Strategies are proposed for each type of uncertainty. The general conclusion is that it is meaningful for decision theory to treat cases with greater uncertainty than the textbook case of 'decision-making under uncertainty'. (au)
Traceable size determination of nanoparticles, a comparison among European metrology institutes
International Nuclear Information System (INIS)
Meli, Felix; Klein, Tobias; Buhr, Egbert; Frase, Carl Georg; Gleber, Gudrun; Krumrey, Michael; Duta, Alexandru; Duta, Steluta; Korpelainen, Virpi; Bellotti, Roberto; Picotto, Gian Bartolo; Boyd, Robert D; Cuenat, Alexandre
2012-01-01
Within the European iMERA-Plus project ‘Traceable Characterisation of Nanoparticles’ various particle measurement procedures were developed and finally a measurement comparison for particle size was carried out among seven laboratories across six national metrology institutes. Seven high quality particle samples made from three different materials and having nominal sizes in the range from 10 to 200 nm were used. The participants applied five fundamentally different measurement methods, atomic force microscopy, dynamic light scattering (DLS), small-angle x-ray scattering, scanning electron microscopy and scanning electron microscopy in transmission mode, and provided a total of 48 independent, traceable results. The comparison reference values were determined as weighted means based on the estimated measurement uncertainties of the participants. The comparison reference values have combined standard uncertainties smaller than 1.4 nm for particles with sizes up to 100 nm. All methods, except DLS, provided consistent results. (paper)
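A weighted mean of this kind can be sketched in a few lines. The laboratory results below are invented for illustration (they are not the comparison data): each laboratory reports a particle size and a standard uncertainty, and the comparison reference value weights each result by the inverse of its squared uncertainty.

```python
# Inverse-variance weighted mean as a comparison reference value.
# The (value, uncertainty) pairs are hypothetical, in nanometres.
import math

results = [(99.2, 1.0), (100.5, 0.8), (98.9, 1.2), (100.1, 0.6)]  # (nm, u)

weights = [1.0 / u ** 2 for _, u in results]
ref = sum(w * x for (x, _), w in zip(results, weights)) / sum(weights)
u_ref = math.sqrt(1.0 / sum(weights))          # uncertainty of the weighted mean

print(f"reference value {ref:.2f} nm, u = {u_ref:.2f} nm")
```

Note that the combined standard uncertainty of the reference value is smaller than that of any single laboratory, which is why such comparisons can reach sub-nanometre reference uncertainties.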
Tolerance analysis in manufacturing using process capability ratio with measurement uncertainty
DEFF Research Database (Denmark)
Mahshid, Rasoul; Mansourvar, Zahra; Hansen, Hans Nørgaard
2017-01-01
Tolerance analysis provides valuable information regarding the performance of a manufacturing process. It allows determining the maximum possible variation of a quality feature in production. Previous research has focused on the application of tolerance analysis to the design of mechanical assemblies. In this paper, a new statistical analysis was applied to manufactured products to assess achieved tolerances when the process is known, using the capability ratio and expanded uncertainty. The analysis has benefits for process planning, determining actual precision limits, process optimization, and troubleshooting a malfunctioning existing part. The capability measure is based on a number of measurements performed on the part's quality variable. Since the ratio relies on measurements, any error that is not eliminated has a notable negative impact on results. Therefore, measurement uncertainty was used in combination with process
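One common way to combine a capability ratio with measurement uncertainty is a guard band on the specification limits. The sketch below is our own construction with invented data, not the paper's exact formulation: it computes Cpk from a set of measurements, then a conservative variant in which the expanded uncertainty U narrows the specification interval.

```python
# Process capability ratio Cpk, with and without a measurement-uncertainty
# guard band. Data, limits and U are hypothetical.
import statistics

data = [10.02, 9.98, 10.05, 9.97, 10.01, 10.03, 9.99, 10.00, 10.04, 9.96]
LSL, USL = 9.85, 10.15          # specification limits (assumed)
U = 0.02                        # expanded measurement uncertainty, k = 2 (assumed)

mu = statistics.mean(data)
sigma = statistics.stdev(data)  # sample standard deviation

cpk = min(USL - mu, mu - LSL) / (3.0 * sigma)
# Guard band: shrink the spec interval by U on each side, so conformity is
# only claimed where the measurement result is unambiguously inside.
cpk_guarded = min((USL - U) - mu, mu - (LSL + U)) / (3.0 * sigma)

print(f"Cpk = {cpk:.2f}, with uncertainty guard band = {cpk_guarded:.2f}")
```

The guarded ratio is always the smaller of the two, reflecting that part of the apparent tolerance budget is consumed by the measurement process itself.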
Charles, P H; Cranmer-Sargison, G; Thwaites, D I; Crowe, S B; Kairn, T; Knight, R T; Kenny, J; Langton, C M; Trapp, J V
2014-04-01
This work introduces the concept of very small field size. Output factor (OPF) measurements at these field sizes require extremely careful experimental methodology, including measurement of the dosimetric field size at the same time as each OPF measurement. Two quantifiable scientific definitions of the threshold of very small field size are presented. A practical definition was established by quantifying the effect that a 1 mm error in field size or detector position has on OPFs, setting the acceptable uncertainty in OPF at 1%. Alternatively, for a theoretical definition of very small field size, the OPFs were separated into additional factors to investigate the specific effects of lateral electronic disequilibrium, photon scatter in the phantom, and source occlusion. The dominant effect was established and formed the basis of a theoretical definition of very small fields. Each factor was obtained using Monte Carlo simulations of a Varian iX linear accelerator for various square field sizes of side length from 4 to 100 mm, using a nominal photon energy of 6 MV. According to the practical definition established in this project, field sizes ≤ 15 mm were considered to be very small for 6 MV beams for maximal field size uncertainties of 1 mm. If the acceptable uncertainty in the OPF is increased from 1.0% to 2.0%, or field size uncertainties are 0.5 mm, field sizes ≤ 12 mm were considered to be very small. Lateral electronic disequilibrium in the phantom was the dominant cause of change in OPF at very small field sizes. Thus the theoretical definition of very small field size coincided with the field size at which lateral electronic disequilibrium clearly caused a greater change in OPF than any other effect. This was found to occur at field sizes ≤ 12 mm. Source occlusion also caused a large change in OPF for field sizes ≤ 8 mm. Based on the results of this study, field sizes ≤ 12 mm were considered to be theoretically very small for 6 MV beams. Extremely …
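The practical definition above can be sketched numerically: find the largest field size at which a 1 mm field-size error shifts the OPF by more than 1%. The OPF curve below is a hypothetical illustrative shape, not Monte Carlo data from the study; only the thresholding logic mirrors the text.

```python
import numpy as np

def opf(fs_mm):
    """Hypothetical output factor vs. square field side (mm), shaped to
    drop steeply below ~15 mm, loosely mimicking a 6 MV beam."""
    return 1.0 - 0.45 * np.exp(-fs_mm / 8.0)

def very_small_threshold(fs_err_mm=1.0, tol=0.01):
    """Largest field size at which an fs_err_mm error in field size
    changes the OPF by more than tol (relative)."""
    sizes = np.arange(4.0, 100.0, 0.5)
    rel_change = np.abs(opf(sizes + fs_err_mm) - opf(sizes)) / opf(sizes)
    below = sizes[rel_change > tol]
    return below.max() if below.size else None
```

With these hypothetical numbers, tightening the field-size uncertainty from 1.0 mm to 0.5 mm lowers the threshold, qualitatively matching the reported behavior.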
Mascio, J.; Mace, G. G.
2015-12-01
CloudSat and CALIPSO, two of the satellites in the A-Train constellation, use algorithms such as the T-matrix method to calculate the scattering properties of small cloud particles. Ice clouds (i.e., cirrus) cause problems for these cloud property retrieval algorithms because of the variability of ice mass as a function of particle size. Assumptions regarding microphysical properties, such as mass-dimensional (m-D) relationships, are often necessary in retrieval algorithms for simplification, but these assumptions create uncertainties of their own. Therefore, ice cloud property retrieval uncertainties can be substantial and are often not well known. To investigate these uncertainties, reflectivity factors measured by CloudSat are compared to those calculated from particle size distributions (PSDs) to which different m-D relationships are applied. These PSDs are from data collected in situ during three flights of the Small Particles in Cirrus (SPartICus) campaign. We find that no specific habit emerges as preferred; instead, we conclude that the microphysical characteristics of ice crystal populations tend to be distributed over a continuum and therefore cannot be categorized easily. To quantify the uncertainties in the mass-dimensional relationships, an optimal estimation inversion was run to retrieve the m-D relationship per SPartICus flight, as well as to calculate uncertainties of the m-D power law.
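The core comparison can be sketched as follows: given an in-situ PSD, apply an assumed m-D power law m = a·D^b and compute a Rayleigh-regime reflectivity from melted-equivalent diameters. The coefficients and PSD below are hypothetical placeholders, not SPartICus values, and the Rayleigh proportionality is a simplification of the scattering models actually used.

```python
import numpy as np

RHO_W = 1000.0  # kg m^-3, density of liquid water

def reflectivity_mm6(psd_n, bin_d_m, bin_dd_m, a, b):
    """Rayleigh-regime equivalent reflectivity (mm^6 m^-3) from a binned
    PSD (number concentration per bin, m^-4), using the mass-dimensional
    relation m = a * D**b in SI units. Z ~ sum N(D) * Dmelt^6, with
    Dmelt the melted-equivalent diameter of each ice particle."""
    mass = a * bin_d_m ** b                                  # kg / particle
    d_melt = (6.0 * mass / (np.pi * RHO_W)) ** (1.0 / 3.0)  # m
    return np.sum(psd_n * (d_melt * 1e3) ** 6 * bin_dd_m)   # mm^6 m^-3
```

Because the melted diameter cubed is proportional to mass, Z scales with a², so doubling the prefactor of the m-D law quadruples the computed reflectivity, which is why the assumed habit matters so much in the comparison.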
Gomez, Luis J; Yücel, Abdulkadir C; Hernandez-Garcia, Luis; Taylor, Stephan F; Michielssen, Eric
2015-01-01
A computational framework for uncertainty quantification in transcranial magnetic stimulation (TMS) is presented. The framework leverages high-dimensional model representations (HDMRs), which approximate observables (i.e., quantities of interest such as electric (E) fields induced inside targeted cortical regions) via series of iteratively constructed component functions involving only the most significant random variables (i.e., parameters that characterize the uncertainty in a TMS setup such as the position and orientation of TMS coils, as well as the size, shape, and conductivity of the head tissue). The component functions of HDMR expansions are approximated via a multielement probabilistic collocation (ME-PC) method. While approximating each component function, a quasi-static finite-difference simulator is used to compute observables at integration/collocation points dictated by the ME-PC method. The proposed framework requires far fewer simulations than traditional Monte Carlo methods for providing highly accurate statistical information (e.g., the mean and standard deviation) about the observables. The efficiency and accuracy of the proposed framework are demonstrated via its application to the statistical characterization of E-fields generated by TMS inside cortical regions of an MRI-derived realistic head model. Numerical results show that while uncertainties in tissue conductivities have negligible effects on TMS operation, variations in coil position/orientation and brain size significantly affect the induced E-fields. Our numerical results have several implications for the use of TMS during depression therapy: 1) uncertainty in the coil position and orientation may reduce the response rates of patients; 2) practitioners should favor targets on the crest of a gyrus to obtain maximal stimulation; and 3) an increasing scalp-to-cortex distance reduces the magnitude of E-fields on the surface and inside the cortex.
International Nuclear Information System (INIS)
Kaul, Dean C.; Egbert, Stephen D.; Woolson, William A.
2005-01-01
In order to avoid the pitfalls that so discredited DS86 and its uncertainty estimates, and to provide DS02 uncertainties that are both defensible and credible, this report not only presents the ensemble uncertainties assembled from uncertainties in individual computational elements and radiation dose components but also describes how these relate to comparisons between observed and computed quantities at critical intervals in the computational process. These comparisons include those between observed and calculated radiation free-field components, where observations include thermal- and fast-neutron activation and gamma-ray thermoluminescence, which are relevant to the estimated systematic uncertainty for DS02. The comparisons also include those between calculated and observed survivor shielding, where the observations consist of biodosimetric measurements for individual survivors, which are relevant to the estimated random uncertainty for DS02. (J.P.N.)
International Nuclear Information System (INIS)
Jin Hosang; Palta, Jatinder R.; Kim, You-Hyun; Kim, Siyong
2010-01-01
Purpose: To analyze dose uncertainty using a previously published dose-uncertainty model, and to assess potential dosimetric risks existing in prostate intensity-modulated radiotherapy (IMRT). Methods and Materials: The dose-uncertainty model provides a three-dimensional (3D) dose-uncertainty distribution at a given confidence level. For 8 retrospectively selected patients, dose-uncertainty maps were constructed using the dose-uncertainty model at the 95% confidence level. In addition to uncertainties inherent to the radiation treatment planning system, four scenarios of spatial errors were considered: machine only (S1), S1 + intrafraction, S1 + interfraction, and S1 + both intrafraction and interfraction errors. To evaluate the potential risks of the IMRT plans, three dose-uncertainty-based plan evaluation tools were introduced: the confidence-weighted dose-volume histogram, the confidence-weighted dose distribution, and the dose-uncertainty-volume histogram. Results: Dose uncertainty caused by interfraction setup error was more significant than that caused by intrafraction motion error. The maximum dose uncertainty (95% confidence) of the clinical target volume (CTV) was smaller than 5% of the prescribed dose in all but two cases (13.9% and 10.2%). The dose uncertainty for 95% of the CTV volume ranged from 1.3% to 2.9% of the prescribed dose. Conclusions: The dose uncertainty in prostate IMRT could be evaluated using the dose-uncertainty model. Prostate IMRT plans satisfying the same plan objectives could generate significantly different dose uncertainties because of a complex interplay of many uncertainty sources. The uncertainty-based plan evaluation contributes to generating reliable and error-resistant treatment plans.
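Of the three evaluation tools, the dose-uncertainty-volume histogram is the simplest to sketch: for each uncertainty threshold, the fraction of the structure's volume whose dose uncertainty meets or exceeds it. This is a hypothetical re-implementation assuming uniform voxel volumes, not code from the study.

```python
import numpy as np

def duvh(uncert_pct, thresholds):
    """Dose-uncertainty-volume histogram: for each threshold u, the
    fractional volume whose dose uncertainty is >= u (analogous to a
    cumulative dose-volume histogram, but over uncertainty)."""
    u = np.asarray(uncert_pct, float).ravel()
    return np.array([(u >= t).mean() for t in thresholds])
```

Reading such a curve at, say, the 95%-volume point gives exactly the kind of statistic reported above (dose uncertainty for 95% of the CTV volume).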
Effects of habitat features on size-biased predation on salmon by bears.
Andersson, Luke C; Reynolds, John D
2017-05-01
Predators can drive trait divergence among populations of prey by imposing differential selection on prey traits. Habitat characteristics can mediate predator selectivity by providing refuge for prey. We quantified the effects of stream characteristics on biases in the sizes of spawning salmon caught by bears (Ursus arctos and U. americanus) on the central coast of British Columbia, Canada by measuring size-biased predation on spawning chum (Oncorhynchus keta) and pink (O. gorbuscha) salmon in 12 streams with varying habitat characteristics. We tested the hypotheses that bears would catch larger than average salmon (size-biased predation) and that this bias toward larger fish would be greater in streams that provide less protection to spawning salmon from predation (e.g., fewer pools, less wood, fewer undercut banks). We then tested how such size biases in turn translate into differences among populations in the sizes of the fish. Bears caught larger-than-average salmon as the spawning season progressed and, as predicted, this was most pronounced in streams with fewer refugia for the fish (i.e., wood and undercut banks). Salmon were marginally smaller in streams with more pronounced size-biased predation, but this predictor was less reliable than the physical characteristics of streams, with larger fish in wider, deeper streams. These results support the hypothesis that selective forces imposed by predators can be mediated by habitat characteristics, with potential consequences for physical traits of prey.
Uncertainty analyses of the countermeasures module of the program system UFOMOD
International Nuclear Information System (INIS)
Fischer, F.; Ehrhardt, J.; Burkart, K.
1989-10-01
This report refers to uncertainty analyses of the countermeasures submodule of the program system UFOMOD, version NE 87/1, whose important input parameters are linked with probability distributions derived from expert judgement. Uncertainty bands show how much variability exists; sensitivity measures determine what causes this variability in consequences. Results are presented as confidence bands of complementary cumulative frequency distributions (CCFDs) of individual acute organ doses (lung, bone marrow), individual risks (pulmonary and hematopoietic syndrome) and the corresponding number of early fatalities, partially as a function of distance from the site. In addition, the ranked influence of the uncertain parameters on the different consequence types is shown. For the estimation of confidence bands, a model parameter sample size of n=60, equal to 3 times the number of uncertain model parameters, is chosen. For a reduced set of nine model parameters a sample size of n=50 is selected. A total of 20 uncertain parameters is considered. The most sensitive parameters of the countermeasures submodule of UFOMOD appeared to be the initial delay of emergency actions in a keyhole-shaped area A and the fractions of the population evacuating area A spontaneously during the sheltering period or staying outdoors. Under the conditions of the source term, the influence of the driving times needed to leave the evacuation area on the overall uncertainty in the consequence variables - individual acute organ doses, individual risks and early fatalities - is small. (orig./HP) [de]
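The reported confidence bands can be sketched generically: each of the n parameter-sample runs yields one CCFD curve, and pointwise percentiles across the n curves give the band. This is an illustrative re-implementation; the UFOMOD consequence data are not reproduced here.

```python
import numpy as np

def ccfd(values, levels):
    """Complementary cumulative frequency: P(consequence >= level)."""
    v = np.asarray(values, float)
    return np.array([(v >= L).mean() for L in levels])

def ccfd_band(runs, levels, lo=5, hi=95):
    """Pointwise confidence band of CCFDs over n uncertain-parameter
    runs, each run being an array of consequence values."""
    curves = np.vstack([ccfd(r, levels) for r in runs])
    return (np.percentile(curves, lo, axis=0),
            np.percentile(curves, hi, axis=0))
```

The n = 3 × (number of uncertain parameters) rule mentioned above fixes how many outer-loop runs feed `ccfd_band`.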
Ruminations On NDA Measurement Uncertainty Compared TO DA Uncertainty
International Nuclear Information System (INIS)
Salaymeh, S.; Ashley, W.; Jeffcoat, R.
2010-01-01
It is difficult to overestimate the importance that physical measurements performed with nondestructive assay instruments play throughout the nuclear fuel cycle. They underpin decision making in many areas and support criticality safety, radiation protection, process control, safeguards, facility compliance, and waste measurements. No physical measurement is complete, or indeed meaningful, without a defensible and appropriate accompanying statement of uncertainties and how they combine to define the confidence in the results. The uncertainty budget should also be broken down in sufficient detail for the subsequent uses to which the nondestructive assay (NDA) results will be applied. Creating an uncertainty budget and estimating the total measurement uncertainty can often be an involved process, especially for non-routine situations, because data interpretation often involves complex algorithms and logic combined in a highly intertwined way. The methods often call on a multitude of input data subject to human oversight. These characteristics can be confusing and pose a barrier to developing an understanding between experts and data consumers. ASTM subcommittee C26-10 recognized this problem in the context of how to summarize and express precision and bias performance across the range of standards and guides it maintains. In order to create a unified approach consistent with modern practice, and embracing the continuous-improvement philosophy, a consensus arose to prepare a procedure covering the estimation and reporting of uncertainties in nondestructive assay of nuclear materials. This paper outlines the needs analysis, objectives, and ongoing development efforts. In addition to emphasizing some of the unique challenges and opportunities facing the NDA community, we hope this article will encourage dialog and sharing of best practice, and furthermore motivate developers to revisit the treatment of measurement uncertainty.
RUMINATIONS ON NDA MEASUREMENT UNCERTAINTY COMPARED TO DA UNCERTAINTY
Energy Technology Data Exchange (ETDEWEB)
Salaymeh, S.; Ashley, W.; Jeffcoat, R.
2010-06-17
Embracing uncertainty in applied ecology.
Milner-Gulland, E J; Shea, K
2017-12-01
Applied ecologists often face uncertainty that hinders effective decision-making. Common traps that may catch the unwary are: ignoring uncertainty, acknowledging uncertainty but ploughing on, focussing on trivial uncertainties, believing your models, and unclear objectives. We integrate research insights and examples from a wide range of applied ecological fields to illustrate advances that are generally underused, but could facilitate ecologists' ability to plan and execute research to support management. Recommended approaches to avoid uncertainty traps are: embracing models, using decision theory, using models more effectively, thinking experimentally, and being realistic about uncertainty. Synthesis and applications. Applied ecologists can become more effective at informing management by using approaches that explicitly take account of uncertainty.
Decision-Making under Criteria Uncertainty
Kureychik, V. M.; Safronenkova, I. B.
2018-05-01
Uncertainty is an essential part of a decision-making procedure. The paper deals with the problem of decision-making under criteria uncertainty. In this context, decision-making under uncertainty and the types and conditions of uncertainty were examined. The decision-making problem under uncertainty was formalized. A modification of the mathematical decision support method under uncertainty via ontologies was proposed. A critical distinction of the developed method is its use of ontologies as base elements. The goal of this work is the development of a decision-making method under criteria uncertainty that uses ontologies in the area of multilayer board design. This method is oriented toward improving the technical and economic characteristics of the examined domain.
Physical Uncertainty Bounds (PUB)
Energy Technology Data Exchange (ETDEWEB)
Vaughan, Diane Elizabeth [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Preston, Dean L. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2015-03-19
This paper introduces and motivates the need for a new methodology for determining upper bounds on the uncertainties in simulations of engineered systems due to limited fidelity in the composite continuum-level physics models needed to simulate the systems. We show that traditional uncertainty quantification methods provide, at best, a lower bound on this uncertainty. We propose to obtain bounds on the simulation uncertainties by first determining bounds on the physical quantities or processes relevant to system performance. By bounding these physics processes, as opposed to carrying out statistical analyses of the parameter sets of specific physics models or simply switching out the available physics models, one can obtain upper bounds on the uncertainties in simulated quantities of interest.
International Nuclear Information System (INIS)
Nanty, Simon
2015-01-01
This work relates to the framework of uncertainty quantification for numerical simulators, and more precisely studies two industrial applications linked to the safety studies of nuclear plants. These two applications share several features. The first is that the computer code inputs are functional and scalar variables, the functional ones being dependent. The second is that the probability distribution of the functional variables is known only through a sample of their realizations. The third, relative to only one of the two applications, is the high computational cost of the code, which limits the number of possible simulations. The main objective of this work was to propose a complete methodology for the uncertainty analysis of numerical simulators for the two considered cases. First, we proposed a methodology to quantify the uncertainties of dependent functional random variables from a sample of their realizations. This methodology makes it possible both to model the dependency between variables and to model their link to another variable, called a covariate, which could be, for instance, the output of the considered code. We then developed an adaptation of a visualization tool for functional data, which makes it possible to visualize simultaneously the uncertainties and features of dependent functional variables. Second, a method to perform the global sensitivity analysis of the codes used in the two studied cases was proposed. In the case of a computationally demanding code, the direct use of quantitative global sensitivity analysis methods is intractable. To overcome this issue, the retained solution consists in building a surrogate model, or metamodel, a fast-running model approximating the computationally expensive code. An optimized uniform sampling strategy for scalar and functional variables was developed to build a learning basis for the metamodel. Finally, a new approximation approach for expensive codes with functional outputs has been …
Feature singletons attract spatial attention independently of feature priming.
Yashar, Amit; White, Alex L; Fang, Wanghaoming; Carrasco, Marisa
2017-08-01
People perform better in visual search when the target feature repeats across trials (intertrial feature priming [IFP]). Here, we investigated whether repetition of a feature singleton's color modulates stimulus-driven shifts of spatial attention by presenting a probe stimulus immediately after each singleton display. The task alternated every two trials between a probe discrimination task and a singleton search task. We measured both stimulus-driven spatial attention (via the distance between the probe and singleton) and IFP (via repetition of the singleton's color). Color repetition facilitated search performance (IFP effect) when the set size was small. When the probe appeared at the singleton's location, performance was better than at the opposite location (stimulus-driven attention effect). The magnitude of this attention effect increased with the singleton's set size (which increases its saliency) but did not depend on whether the singleton's color repeated across trials, even when the previous singleton had been attended as a search target. Thus, our findings show that repetition of a salient singleton's color affects performance when the singleton is task relevant and voluntarily attended (as in search trials). However, color repetition does not affect performance when the singleton becomes irrelevant to the current task, even though the singleton does capture attention (as in probe trials). Therefore, color repetition per se does not make a singleton more salient for stimulus-driven attention. Rather, we suggest that IFP requires voluntary selection of color singletons in each consecutive trial.
Uncertainty Propagation in OMFIT
Smith, Sterling; Meneghini, Orso; Sung, Choongki
2017-10-01
A rigorous comparison of power balance fluxes and turbulent model fluxes requires the propagation of uncertainties in the kinetic profiles and their derivatives. Making extensive use of the Python uncertainties package, the OMFIT framework has been used to propagate covariant uncertainties to provide an uncertainty in the power balance calculation from the ONETWO code, as well as through the turbulent fluxes calculated by the TGLF code. The covariant uncertainties arise from fitting 1D (constant on a flux surface) density and temperature profiles and associated random errors with parameterized functions such as a modified tanh. The power balance and model fluxes can then be compared with quantification of the uncertainties. No effort is made at propagating systematic errors. A case study will be shown for the effects of resonant magnetic perturbations on the kinetic profiles and fluxes at the top of the pedestal. A separate attempt at modeling the random errors with Monte Carlo sampling will be compared to the method of propagating the fitting-function parameter covariant uncertainties. Work supported by US DOE under DE-FC02-04ER54698, DE-FG2-95ER-54309, DE-SC 0012656.
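The underlying idea, propagating the fit-parameter covariance through a profile parameterization, can be sketched without OMFIT or the uncertainties package. The mtanh form and the numbers below are simplified stand-ins, not the ONETWO/TGLF machinery.

```python
import numpy as np

def mtanh(x, p):
    """Simplified modified-tanh pedestal shape (a stand-in):
    p = (height, position, width)."""
    h, x0, w = p
    return 0.5 * h * (1.0 - np.tanh((x - x0) / w))

def propagate_profile_uncertainty(x, p, cov, eps=1e-6):
    """1-sigma uncertainty of the fitted profile at points x, from the
    parameter covariance matrix: var f = J C J^T, with the Jacobian J
    estimated by central finite differences."""
    p = np.asarray(p, float)
    J = np.empty((x.size, p.size))
    for j in range(p.size):
        dp = np.zeros_like(p)
        dp[j] = eps * max(1.0, abs(p[j]))
        J[:, j] = (mtanh(x, p + dp) - mtanh(x, p - dp)) / (2.0 * dp[j])
    var = np.einsum('ij,jk,ik->i', J, np.asarray(cov, float), J)
    return np.sqrt(var)
```

Because the full covariance matrix (not just the diagonal) enters the quadratic form, correlated parameter errors are handled, which is the point of propagating "covariant" rather than independent uncertainties.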
Stereo-particle image velocimetry uncertainty quantification
International Nuclear Information System (INIS)
Bhattacharya, Sayantan; Vlachos, Pavlos P; Charonko, John J
2017-01-01
Particle image velocimetry (PIV) measurements are subject to multiple elemental error sources and thus estimating overall measurement uncertainty is challenging. Recent advances have led to a posteriori uncertainty estimation methods for planar two-component PIV. However, no complete methodology exists for uncertainty quantification in stereo PIV. In the current work, a comprehensive framework is presented to quantify the uncertainty stemming from stereo registration error and combine it with the underlying planar velocity uncertainties. The disparity in particle locations of the dewarped images is used to estimate the positional uncertainty of the world coordinate system, which is then propagated to the uncertainty in the calibration mapping function coefficients. Next, the calibration uncertainty is combined with the planar uncertainty fields of the individual cameras through an uncertainty propagation equation and uncertainty estimates are obtained for all three velocity components. The methodology was tested with synthetic stereo PIV data for different light sheet thicknesses, with and without registration error, and also validated with an experimental vortex ring case from the 2014 PIV challenge. A thorough sensitivity analysis was performed to assess the relative impact of the various parameters on the overall uncertainty. The results suggest that in the absence of any disparity, the stereo PIV uncertainty prediction method is more sensitive to the planar uncertainty estimates than to the angle uncertainty, although the latter is not negligible for non-zero disparity. Overall, the presented uncertainty quantification framework showed excellent agreement between the error and uncertainty RMS values for both the synthetic and the experimental data and demonstrated reliable uncertainty prediction coverage. This stereo PIV uncertainty quantification framework provides the first comprehensive treatment on the subject and potentially lays foundations applicable to volumetric …
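For a symmetric two-camera arrangement, a first-order version of such an uncertainty propagation equation for the out-of-plane component can be sketched as follows. This is a textbook-style simplification, not the paper's full framework; the geometry model w = (u1 - u2)/(tan θ1 + tan θ2) is an assumption.

```python
import numpy as np

def w_uncertainty(u1, u2, su1, su2, theta1, theta2, stheta1, stheta2):
    """1-sigma uncertainty of the out-of-plane velocity component
    w = (u1 - u2) / (tan t1 + tan t2), combining the planar displacement
    uncertainties (su1, su2) and camera-angle uncertainties (stheta1,
    stheta2) by first-order propagation."""
    T = np.tan(theta1) + np.tan(theta2)
    w = (u1 - u2) / T
    dw_du = 1.0 / T                            # dw/du1 = -dw/du2 in magnitude
    dw_dt1 = -w / T / np.cos(theta1) ** 2      # dw/dtheta1
    dw_dt2 = -w / T / np.cos(theta2) ** 2      # dw/dtheta2
    return np.sqrt((dw_du * su1) ** 2 + (dw_du * su2) ** 2
                   + (dw_dt1 * stheta1) ** 2 + (dw_dt2 * stheta2) ** 2)
```

Note that the angle terms scale with w itself, consistent with the observation above that the angle uncertainty only matters once the disparity (and hence the reconstructed velocity error) is non-zero.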
Methodologies of Uncertainty Propagation Calculation
International Nuclear Information System (INIS)
Chojnacki, Eric
2002-01-01
After recalling the theoretical principle and the practical difficulties of methodologies for uncertainty propagation calculation, the author discussed how to propagate input uncertainties. He said there were two kinds of input uncertainty: variability, i.e. uncertainty due to heterogeneity, and lack of knowledge, i.e. uncertainty due to ignorance. It is therefore necessary to use two different propagation methods. He demonstrated this in a simple example which he generalised, treating the variability uncertainty with probability theory and the lack-of-knowledge uncertainty with fuzzy theory. He cautioned, however, against the systematic use of probability theory, which may lead to unjustifiable and illegitimately precise answers. Mr Chojnacki's conclusions were that the importance of distinguishing variability from lack of knowledge increases as the problem becomes more complex in terms of the number of parameters or time steps, and that it is necessary to develop uncertainty propagation methodologies combining probability theory and fuzzy theory.
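The two propagation modes can be contrasted in a few lines: Monte Carlo sampling for variability, and interval (α-cut-style) arithmetic for lack of knowledge. This is a minimal illustration of the distinction, not the author's worked example; the vertex-enumeration shortcut assumes a monotone function.

```python
import numpy as np
from itertools import product

def propagate_probability(f, dists, n=100_000, rng=None):
    """Variability: Monte Carlo sampling of input distributions
    (callables returning n samples), summarizing y by mean and std."""
    rng = rng or np.random.default_rng(0)
    samples = [d(rng, n) for d in dists]
    y = f(*samples)
    return y.mean(), y.std()

def propagate_interval(f, intervals):
    """Lack of knowledge: interval arithmetic by vertex enumeration
    (valid here because f is assumed monotone in each input)."""
    ys = [f(*v) for v in product(*intervals)]
    return min(ys), max(ys)
```

The probabilistic result concentrates around the mean, while the interval result keeps the full ignorance range, which is exactly the "illegitimately precise answer" risk the author warns about when probability theory is applied to lack-of-knowledge inputs.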
Energy Technology Data Exchange (ETDEWEB)
Ossokina, I.
2003-07-01
It is generally known that the natural environment is profoundly influenced by technological change. The direction and the size of this influence are, however, surrounded by uncertainties, which substantially complicate environmental policy making. This dissertation uses game-theoretic models to study policy making under uncertainty about (a) the costs of technological advances in pollution control, (b) the preferences of the policy maker and the voters, and (c) the consequences of policy measures. From a positive point of view, the analysis provides explanations for environmental policies in modern democracies. From a normative point of view, it gives a number of recommendations to improve environmental policies.
Directory of Open Access Journals (Sweden)
Richard M. Palin
2016-07-01
… porphyroblasts (primarily staurolite and kyanite), indicating that point counting preserves small-scale petrographic features that are otherwise averaged out in XRF analysis of a larger sample. Careful consideration of the size of the equilibration volume, the constituents that comprise the effective bulk composition, and the best technique to employ for its determination based on rock type and petrographic character offers the best chance of producing trustworthy data from pseudosection analysis.
The state of the art of the impact of sampling uncertainty on measurement uncertainty
Leite, V. J.; Oliveira, E. C.
2018-03-01
Measurement uncertainty is a parameter that characterizes the reliability of a result and can be divided into two broad groups: sampling and analytical variations. Evaluating analytical uncertainty is a controlled process, performed in the laboratory. The same does not hold for sampling uncertainty, which has been neglected because it faces several obstacles and there is little clarity on how to perform the procedures, although it is admittedly indispensable to the measurement process. This paper aims to describe the state of the art of sampling uncertainty and to assess its relevance to measurement uncertainty.
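The split described here is commonly combined in quadrature, GUM-style, assuming independent sampling and analytical contributions; a minimal sketch:

```python
import math

def combined_uncertainty(u_sampling, u_analytical):
    """Combined standard uncertainty from independent sampling and
    analytical contributions (quadrature sum)."""
    return math.hypot(u_sampling, u_analytical)

def expanded_uncertainty(u_combined, k=2.0):
    """Expanded uncertainty U = k * u_c; k = 2 gives roughly 95%
    coverage for a normal distribution."""
    return k * u_combined
```

The quadrature sum makes the practical point of the paper visible: when the sampling term dominates, refining the analytical procedure barely changes the combined uncertainty.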
International Nuclear Information System (INIS)
Carli, L; Cantatore, A; De Chiffre, L; Genta, G; Barbato, G; Levi, R
2011-01-01
3D-SEM is a method, based on the stereophotogrammetry technique, which obtains three-dimensional topographic reconstructions starting typically from two SEM images, called the stereo-pair. In this work, a theoretical uncertainty evaluation of the stereo-pair technique, according to the GUM (Guide to the Expression of Uncertainty in Measurement), was carried out, considering 3D-SEM reconstructions of a wire gauge with a reference diameter of 250 µm. In addition to the more commonly used tilting strategy, a strategy based on rotating the item inside the SEM chamber was also adopted. The latter enables multiple-view reconstructions of the cylindrical item under consideration. Uncertainty evaluation was performed starting from a modified version of the Piazzesi equation, enabling the calculation of the z-coordinate from a given stereo-pair. The metrological characteristics of each input variable were taken into account and a SEM stage calibration was performed. Uncertainty tables for the cases of tilt and rotation were then produced, leading to the calculation of the expanded uncertainty. For the case of rotation, the largest uncertainty contribution was found to be the rotational angle; for the case of tilt, however, it was the pixel size. Relative expanded uncertainties of 5% and 4% were obtained for the cases of rotation and tilt, respectively.
Directory of Open Access Journals (Sweden)
Dmitry V. Lifintsev
2017-06-01
Full Text Available The paper presents the results of a study of social support for young males and females, and of its relationship with tolerance of uncertainty. A series of psychodiagnostic tools was used to study gender determinants of social support, tolerance of uncertainty and interpersonal intolerance in young people with different levels of emotional and instrumental support. Young males and females aged 18–22 years with a high level of tolerance of uncertainty are receptive to various forms of social support. The ability to accept uncertainty, to function in a system of unclear interpersonal communication and to act in the face of changing circumstances determines the participants' level of satisfaction with social support. The research (N=165) confirmed the assumption that, first, social support as a communicative phenomenon differs in how young males and females perceive its emotional forms. Second, the specific features of a person's functioning in the system of social supporting acts, including the level of tolerance of uncertainty, are interrelated. Third, social support can reduce the human state of uncertainty and eventually neutralize the negative impact of stressful events. The human ability to «see and discover» social support, and to be sensitive and attentive to the supporting acts of the social environment, is closely related to the ability to accept uncertainty and maintain stability in a state of discomfort, if any.
Sensitivity and uncertainty studies of the CRAC2 computer code
International Nuclear Information System (INIS)
Kocher, D.C.; Ward, R.C.; Killough, G.G.; Dunning, D.E. Jr.; Hicks, B.B.; Hosker, R.P. Jr.; Ku, J.Y.; Rao, K.S.
1987-01-01
The authors have studied the sensitivity of health impacts from nuclear reactor accidents, as predicted by the CRAC2 computer code, to the following sources of uncertainty: (1) the model for plume rise, (2) the model for wet deposition, (3) the meteorological bin-sampling procedure for selecting weather sequences with rain, (4) the dose conversion factors for inhalation as affected by uncertainties in the particle size of the carrier aerosol and the clearance rates of radionuclides from the respiratory tract, (5) the weathering half-time for external ground-surface exposure, and (6) the transfer coefficients for terrestrial foodchain pathways. Predicted health impacts usually showed little sensitivity to use of an alternative plume-rise model or a modified rain-bin structure in bin-sampling. Health impacts often were quite sensitive to use of an alternative wet-deposition model in single-trial runs with rain during plume passage, but were less sensitive to the model in bin-sampling runs. Uncertainties in the inhalation dose conversion factors had important effects on early injuries in single-trial runs. Latent cancer fatalities were moderately sensitive to uncertainties in the weathering half-time for ground-surface exposures, but showed little sensitivity to the transfer coefficients for terrestrial foodchain pathways. Sensitivities of CRAC2 predictions to uncertainties in the models and parameters also depended on the magnitude of the source term, and some of the effects on early health effects were comparable to those that were due only to selection of different sets of weather sequences in bin-sampling
Funding Higher Education and Wage Uncertainty: Income Contingent Loan versus Mortgage Loan
Migali, Giuseppe
2012-01-01
We propose a simple theoretical model which shows how the combined effect of wage uncertainty and risk aversion can modify the individual willingness to pay for a HE system financed by an ICL or a ML. We calibrate our model using real data from the 1970 British Cohort Survey together with the features of the English HE financing system. We allow…
An Evaluation of Test and Physical Uncertainty of Measuring Vibration in Wooden Junctions
DEFF Research Database (Denmark)
Dickow, Kristoffer Ahrens; Kirkegaard, Poul Henning; Andersen, Lars Vabbersgaard
2012-01-01
In the present paper a study of test and material uncertainty in modal analysis of certain wooden junctions is presented. The main structure considered here is a T-junction made from a particleboard plate connected to a spruce beam of rectangular cross section. The size of the plate is 1.2 m by 0.6 m. The T-junctions represent cut-outs of actual full-size floor assemblies. The aim of the experiments is to investigate the underlying uncertainties of both the test method and the variation in material and craftsmanship. For this purpose, ten nominally identical junctions are tested and compared to each other in terms of modal parameters such as natural frequencies, mode shapes and damping. Considerations regarding the measurement procedure and test setup are discussed. The results indicate a large variation of the response at modes where the coupling of torsion in the beam to bending of the plate...
Uncertainties in hydrogen combustion
International Nuclear Information System (INIS)
Stamps, D.W.; Wong, C.C.; Nelson, L.S.
1988-01-01
Three important areas of hydrogen combustion with uncertainties are identified: high-temperature combustion, flame acceleration and deflagration-to-detonation transition, and aerosol resuspension during hydrogen combustion. The uncertainties associated with high-temperature combustion may affect at least three different accident scenarios: the in-cavity oxidation of combustible gases produced by core-concrete interactions, the direct containment heating hydrogen problem, and the possibility of local detonations. How these uncertainties may affect the sequence of various accident scenarios is discussed and recommendations are made to reduce these uncertainties. 40 references
Uncertainty in artificial intelligence
Kanal, LN
1986-01-01
How to deal with uncertainty is a subject of much controversy in Artificial Intelligence. This volume brings together a wide range of perspectives on uncertainty, many of the contributors being the principal proponents in the controversy. Some of the notable issues which emerge from these papers revolve around an interval-based calculus of uncertainty, the Dempster-Shafer Theory, and probability as the best numeric model for uncertainty. There remain strong dissenting opinions not only about probability but even about the utility of any numeric method in this context.
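One of the calculi debated in this volume, Dempster-Shafer theory, combines evidence from independent sources via Dempster's rule of combination. A minimal sketch follows; the frame, focal sets, and mass values are illustrative, not taken from any contribution in the volume.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule: combine two basic mass assignments whose focal
    elements are frozensets, renormalizing away conflicting mass."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb  # mass falling on the empty set
    k = 1.0 - conflict           # normalization constant
    return {s: v / k for s, v in combined.items()}

# Two hypothetical sources of evidence about tomorrow's weather
m1 = {frozenset({"rain"}): 0.6, frozenset({"rain", "dry"}): 0.4}
m2 = {frozenset({"rain"}): 0.5, frozenset({"rain", "dry"}): 0.5}
combined = dempster_combine(m1, m2)
```

Unlike a single probability distribution, mass left on the composite set {rain, dry} expresses ignorance rather than equiprobability, which is one of the points of contention the volume discusses.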
Bound entangled states violate a nonsymmetric local uncertainty relation
International Nuclear Information System (INIS)
Hofmann, Holger F.
2003-01-01
As a consequence of having a positive partial transpose, bound entangled states lack many of the properties otherwise associated with entanglement. It is therefore interesting to identify properties that distinguish bound entangled states from separable states. In this paper, it is shown that some bound entangled states violate a nonsymmetric class of local uncertainty relations [H. F. Hofmann and S. Takeuchi, Phys. Rev. A 68, 032103 (2003)]. This result indicates that the asymmetry of nonclassical correlations may be a characteristic feature of bound entanglement
Delivered dose uncertainty analysis at the tumor apex for ocular brachytherapy
Energy Technology Data Exchange (ETDEWEB)
Morrison, Hali, E-mail: hamorris@ualberta.ca; Menon, Geetha; Larocque, Matthew P.; Jans, Hans-Sonke; Sloboda, Ron S. [Department of Medical Physics, Cross Cancer Institute, Edmonton, Alberta T6G 1Z2, Canada and Department of Oncology, University of Alberta, Edmonton, Alberta T6G 2R3 (Canada); Weis, Ezekiel [Department of Ophthalmology, University of Alberta, Edmonton, Alberta T6G 2R3 (Canada)
2016-08-15
Purpose: To estimate the total dosimetric uncertainty at the tumor apex for ocular brachytherapy treatments delivered using 16 mm Collaborative Ocular Melanoma Study (COMS) and Super9 plaques loaded with ¹²⁵I seeds, in order to determine the size of the apex margin that would be required to ensure adequate dosimetric coverage of the tumor. Methods: The total dosimetric uncertainty was assessed for three reference tumor heights: 3, 5, and 10 mm, using the Guide to the Expression of Uncertainty in Measurement / National Institute of Standards and Technology approach. Uncertainties pertaining to seed construction, source strength, plaque assembly, treatment planning calculations, tumor height measurement, plaque placement, and plaque tilt for a simple dome-shaped tumor were investigated and quantified to estimate the total dosimetric uncertainty at the tumor apex. Uncertainties in seed construction were determined using EBT3 Gafchromic film measurements around single seeds, plaque assembly uncertainties were determined using high resolution microCT scanning of loaded plaques to measure seed positions in the plaques, and all other uncertainties were determined from previously published studies and recommended values. All dose calculations were performed using the PLAQUESIMULATOR v5.7.6 ophthalmic treatment planning system with the inclusion of plaque heterogeneity corrections. Results: The total dosimetric uncertainties at 3, 5, and 10 mm tumor heights for the 16 mm COMS plaque were 17.3%, 16.1%, and 14.2%, respectively, and for the Super9 plaque were 18.2%, 14.4%, and 13.1%, respectively (all values with coverage factor k = 2). The apex margins at 3, 5, and 10 mm tumor heights required to adequately account for these uncertainties were 1.3, 1.3, and 1.4 mm, respectively, for the 16 mm COMS plaque, and 1.8, 1.4, and 1.2 mm, respectively, for the Super9 plaque. These uncertainties and associated margins are dependent on the dose gradient at the given prescription
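The GUM/NIST combination underlying such totals is a root-sum-square of independent standard uncertainty components, expanded with a coverage factor. A minimal sketch; the component values below are hypothetical placeholders, not the paper's actual uncertainty budget.

```python
import math

def combined_expanded_uncertainty(components, k=2.0):
    """Root-sum-square of independent standard uncertainty components
    (here in %), expanded with coverage factor k (GUM approach)."""
    u_c = math.sqrt(sum(u ** 2 for u in components))
    return k * u_c

# Hypothetical component standard uncertainties (%) for one tumor height,
# e.g. source strength, seed positions, planning calculation, plaque tilt
components = [4.0, 3.0, 5.0, 2.0]
U = combined_expanded_uncertainty(components)  # expanded uncertainty, k = 2
```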
The grey relational approach for evaluating measurement uncertainty with poor information
International Nuclear Information System (INIS)
Luo, Zai; Wang, Yanqing; Zhou, Weihu; Wang, Zhongyu
2015-01-01
The Guide to the Expression of Uncertainty in Measurement (GUM) is the master document for measurement uncertainty evaluation. However, the GUM may encounter problems and does not work well when the measurement data have poor information. In most cases, poor information means a small data sample and an unknown probability distribution. In these cases, the evaluation of measurement uncertainty has become a bottleneck in practical measurement. To solve this problem, a novel method called the grey relational approach (GRA), different from the statistical theory, is proposed in this paper. The GRA does not require a large sample size or probability distribution information of the measurement data. Mathematically, the GRA can be divided into three parts. Firstly, according to grey relational analysis, the grey relational coefficients between the ideal and the practical measurement output series are obtained. Secondly, the weighted coefficients and the measurement expectation function will be acquired based on the grey relational coefficients. Finally, the measurement uncertainty is evaluated based on grey modeling. In order to validate the performance of this method, simulation experiments were performed and the evaluation results show that the GRA can keep the average error around 5%. Besides, the GRA was also compared with the grey method, the Bessel method, and the Monte Carlo method by a real stress measurement. Both the simulation experiments and real measurement show that the GRA is appropriate and effective to evaluate the measurement uncertainty with poor information. (paper)
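The first stage of the GRA, computing grey relational coefficients between an ideal reference series and a practical measurement series, can be sketched as follows. The distinguishing coefficient ρ = 0.5 is the customary choice in grey relational analysis; the series values are illustrative, not data from the paper.

```python
def grey_relational_coefficients(reference, series, rho=0.5):
    """Grey relational coefficients between an ideal reference series
    and a measured series (rho is the distinguishing coefficient)."""
    deltas = [abs(r - s) for r, s in zip(reference, series)]
    d_min, d_max = min(deltas), max(deltas)
    if d_max == 0:                # identical series: perfect relation
        return [1.0] * len(deltas)
    return [(d_min + rho * d_max) / (d + rho * d_max) for d in deltas]

# Illustrative output series: only the third reading deviates from the ideal
coeffs = grey_relational_coefficients([1.0, 2.0, 3.0], [1.0, 2.0, 4.0])
```

The subsequent weighting and grey-modeling stages the abstract describes build on these coefficients; they are not sketched here.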
Directory of Open Access Journals (Sweden)
Yaping Ju
2016-05-01
Full Text Available The Monte Carlo simulation method for turbomachinery uncertainty analysis often requires performing a huge number of simulations, the computational cost of which can be greatly alleviated with the help of metamodeling techniques. An intensive comparative study was performed on the approximation performance of three prospective artificial intelligence metamodels, that is, artificial neural network, radial basis function, and support vector regression. The genetic algorithm was used to optimize the predetermined parameters of each metamodel for the sake of a fair comparison. Through testing on 10 nonlinear functions with different problem scales and sample sizes, the genetic algorithm–support vector regression metamodel was found more accurate and robust than the other two counterparts. Accordingly, the genetic algorithm–support vector regression metamodel was selected and combined with the Monte Carlo simulation method for the uncertainty analysis of a wind turbine airfoil under two types of surface roughness uncertainties. The results show that the genetic algorithm–support vector regression metamodel can capture well the uncertainty propagation from the surface roughness to the airfoil aerodynamic performance. This work is useful to the application of metamodeling techniques in the robust design optimization of turbomachinery.
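The surrogate-accelerated Monte Carlo workflow described above can be sketched in miniature. Here a piecewise-linear interpolant stands in for the GA-tuned support vector regression, and the "expensive model" is a toy polynomial rather than an airfoil solver; everything below is an assumption for illustration only.

```python
import bisect
import random
import statistics

def expensive_model(x):
    """Toy stand-in for an expensive aerodynamic simulation."""
    return 1.0 - 0.5 * x + 0.1 * x * x

# A few "expensive" evaluations serve as training data for the surrogate
xs = [i / 10 for i in range(11)]
ys = [expensive_model(x) for x in xs]

def surrogate(x):
    """Piecewise-linear interpolant standing in for the GA-tuned SVR."""
    i = min(max(bisect.bisect_left(xs, x), 1), len(xs) - 1)
    x0, x1, y0, y1 = xs[i - 1], xs[i], ys[i - 1], ys[i]
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

# Monte Carlo propagation of an uncertain input (e.g. surface roughness)
# through the cheap surrogate instead of the expensive model
rng = random.Random(0)
samples = [surrogate(min(max(rng.gauss(0.5, 0.1), 0.0), 1.0))
           for _ in range(10000)]
mean = statistics.mean(samples)
sd = statistics.stdev(samples)
```

The point of the metamodel is that the ten-thousand-sample loop costs almost nothing once the handful of expensive evaluations has been spent on training.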
Chitty, L. S.; Griffin, D. R.; Meaney, C.; Barrett, A.; Khalil, A.; Pajkrt, E.; Cole, T. J.
2011-01-01
To improve the prenatal diagnosis of achondroplasia by constructing charts of fetal size, defining frequency of sonographic features and exploring the role of non-invasive molecular diagnosis based on cell-free fetal deoxyribonucleic acid (DNA) in maternal plasma. Data on fetuses with a confirmed
Lacey, Ronald E; Faulkner, William Brock
2015-07-01
matter (PM) concentrations approach regulatory limits, the uncertainty of the measurement is essential in determining the sample size and the probability of type II errors in hypothesis testing. This is an important factor in determining if ambient PM concentrations exceed regulatory limits. The technique described in this paper can be applied to other measurement systems and is especially useful where there are no methods available to generate these values empirically.
Directory of Open Access Journals (Sweden)
Fu Yuhua
2015-03-01
Full Text Available The most famous contribution of Heisenberg is the uncertainty principle. But the original uncertainty principle is improper. Considering all the possible situations (including the case that people can create laws) and applying Neutrosophy and the Quad-stage Method, this paper presents "certainty-uncertainty principles" with a general form and a variable-dimension fractal form. According to the classification of Neutrosophy, the "certainty-uncertainty principles" can be divided into three principles under different conditions: the "certainty principle", namely that a particle's position and momentum can be known simultaneously; the "uncertainty principle", namely that a particle's position and momentum cannot be known simultaneously; and the neutral (fuzzy) "indeterminacy principle", namely that whether or not a particle's position and momentum can be known simultaneously is undetermined. The special cases of the "certainty-uncertainty principles" include the original uncertainty principle and the Ozawa inequality. In addition, according to the original uncertainty principle, discussing a high-speed particle's speed and track with Newtonian mechanics is unreasonable; but according to the "certainty-uncertainty principles", Newtonian mechanics can be used to discuss the problem of gravitational deflection of a photon orbit around the Sun (it gives the same deflection angle as general relativity). Finally, because principles and laws that disregard the principle (law) of conservation of energy may be invalid, the "certainty-uncertainty principles" should be restricted (or constrained) by the principle (law) of conservation of energy, so that they satisfy it.
Uncertainty enabled Sensor Observation Services
Cornford, Dan; Williams, Matthew; Bastin, Lucy
2010-05-01
Almost all observations of reality are contaminated with errors, which introduce uncertainties into the actual observation result. Such uncertainty is often held to be a data quality issue, and quantification of this uncertainty is essential for the principled exploitation of the observations. Many existing systems treat data quality in a relatively ad-hoc manner, however if the observation uncertainty is a reliable estimate of the error on the observation with respect to reality then knowledge of this uncertainty enables optimal exploitation of the observations in further processes, or decision making. We would argue that the most natural formalism for expressing uncertainty is Bayesian probability theory. In this work we show how the Open Geospatial Consortium Sensor Observation Service can be implemented to enable the support of explicit uncertainty about observations. We show how the UncertML candidate standard is used to provide a rich and flexible representation of uncertainty in this context. We illustrate this on a data set of user contributed weather data where the INTAMAP interpolation Web Processing Service is used to help estimate the uncertainty on the observations of unknown quality, using observations with known uncertainty properties. We then go on to discuss the implications of uncertainty for a range of existing Open Geospatial Consortium standards including SWE common and Observations and Measurements. We discuss the difficult decisions in the design of the UncertML schema and its relation and usage within existing standards and show various options. We conclude with some indications of the likely future directions for UncertML in the context of Open Geospatial Consortium services.
Megías, Alberto; Navas, Juan F; Perandrés-Gómez, Ana; Maldonado, Antonio; Catena, Andrés; Perales, José C
2018-06-01
Putting money at stake produces anticipatory uncertainty, a process that has been linked to key features of gambling. Here we examined how learning and individual differences modulate the stimulus preceding negativity (SPN, an electroencephalographic signature of perceived uncertainty of valued outcomes) in gambling disorder patients (GDPs) and healthy controls (HCs), during a non-gambling contingency learning task. Twenty-four GDPs and 26 HCs performed a causal learning task under conditions of high and medium uncertainty (HU, MU; null and positive cue-outcome contingency, respectively). Participants were asked to predict the outcome trial-by-trial, and to regularly judge the strength of the cue-outcome contingency. A pre-outcome SPN was extracted from simultaneous electroencephalographic recordings for each participant, uncertainty level, and task block. The two groups similarly learnt to predict the occurrence of the outcome in the presence/absence of the cue. In HCs, SPN amplitude decreased as the outcome became predictable in the MU condition, a decrement that was absent in the HU condition, where the outcome remained unpredictable during the task. Most importantly, GDPs' SPN remained high and insensitive to task type and block. In GDPs, the SPN amplitude was linked to gambling preferences. When both groups were considered together, SPN amplitude was also related to impulsivity. GDPs thus showed an abnormal electrophysiological response to outcome uncertainty, not attributable to faulty contingency learning. Differences with controls were larger in frequent players of passive games, and smaller in players of more active games. Potential psychological mechanisms underlying this set of effects are discussed.
Assessing scenario and parametric uncertainties in risk analysis: a model uncertainty audit
International Nuclear Information System (INIS)
Tarantola, S.; Saltelli, A.; Draper, D.
1999-01-01
In the present study a process of model audit is addressed on a computational model used for predicting maximum radiological doses to humans in the field of nuclear waste disposal. Global uncertainty and sensitivity analyses are employed to assess output uncertainty and to quantify the contribution of parametric and scenario uncertainties to the model output. These tools are of fundamental importance for risk analysis and decision making purposes
Directory of Open Access Journals (Sweden)
Hong Zhang
2017-01-01
Full Text Available In order to formulate water allocation schemes under uncertainties in the water resources management systems, an inexact multistage stochastic chance constrained programming (IMSCCP model is proposed. The model integrates stochastic chance constrained programming, multistage stochastic programming, and inexact stochastic programming within a general optimization framework to handle the uncertainties occurring in both constraints and objective. These uncertainties are expressed as probability distributions, interval with multiply distributed stochastic boundaries, dynamic features of the long-term water allocation plans, and so on. Compared with the existing inexact multistage stochastic programming, the IMSCCP can be used to assess more system risks and handle more complicated uncertainties in water resources management systems. The IMSCCP model is applied to a hypothetical case study of water resources management. In order to construct an approximate solution for the model, a hybrid algorithm, which incorporates stochastic simulation, back propagation neural network, and genetic algorithm, is proposed. The results show that the optimal value represents the maximal net system benefit achieved with a given confidence level under chance constraints, and the solutions provide optimal water allocation schemes to multiple users over a multiperiod planning horizon.
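The chance-constraint feasibility test at the heart of such stochastic programming models can be sketched by stochastic simulation, one of the ingredients of the hybrid algorithm the abstract mentions. The supply distribution, allocation amount, and confidence level below are hypothetical, not from the case study.

```python
import random

def chance_constraint_ok(allocation, supply_sampler, alpha, n=20000, seed=0):
    """Feasibility test for a chance constraint: estimate
    Pr{allocation <= available supply} by stochastic simulation and
    compare it with the required confidence level alpha."""
    rng = random.Random(seed)
    hits = sum(allocation <= supply_sampler(rng) for _ in range(n))
    return hits / n >= alpha

# Hypothetical: seasonal supply ~ Normal(100, 10) units; allocate 85 units
# and require the constraint to hold at the 90% confidence level
ok = chance_constraint_ok(85.0, lambda r: r.gauss(100.0, 10.0), 0.90)
```

An optimizer (here, the genetic algorithm of the hybrid scheme) would call such a test repeatedly while searching for the allocation that maximizes net benefit subject to the constraint.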
Uncertainty Communication. Issues and good practice
International Nuclear Information System (INIS)
Kloprogge, P.; Van der Sluijs, J.; Wardekker, A.
2007-12-01
In 2003 the Netherlands Environmental Assessment Agency (MNP) published the RIVM/MNP Guidance for Uncertainty Assessment and Communication. The Guidance assists in dealing with uncertainty in environmental assessments. Dealing with uncertainty is essential because assessment results regarding complex environmental issues are of limited value if the uncertainties have not been taken into account adequately. A careful analysis of uncertainties in an environmental assessment is required, but even more important is the effective communication of these uncertainties in the presentation of assessment results. The Guidance yields rich and differentiated insights into uncertainty, but the relevance of this uncertainty information may vary across audiences and uses of assessment results. Therefore, the reporting of uncertainties is one of the six key issues addressed in the Guidance. In practice, users of the Guidance felt a need for more practical assistance in the reporting of uncertainty information. This report explores the issue of uncertainty communication in more detail, and contains more detailed guidance on the communication of uncertainty. In order to make this a 'stand alone' document, several questions that are mentioned in the detailed Guidance have been repeated here. This document thus has some overlap with the detailed Guidance. Part 1 gives a general introduction to the issue of communicating uncertainty information. It offers guidelines for fine-tuning the communication to the intended audiences and context of a report, discusses how readers of a report tend to handle uncertainty information, and ends with a list of criteria that uncertainty communication needs to meet to increase its effectiveness. Part 2 helps writers to analyze the context in which communication takes place, to map the audiences and their information needs, and to reflect upon anticipated uses and possible impacts of the uncertainty information on the
Impact of magnitude uncertainties on seismic catalogue properties
Leptokaropoulos, K. M.; Adamaki, A. K.; Roberts, R. G.; Gkarlaouni, C. G.; Paradisopoulou, P. M.
2018-05-01
Catalogue-based studies are of central importance in seismological research, to investigate the temporal, spatial and size distribution of earthquakes in specified study areas. Methods for estimating the fundamental catalogue parameters like the Gutenberg-Richter (G-R) b-value and the completeness magnitude (Mc) are well established and routinely applied. However, the magnitudes reported in seismicity catalogues contain measurement uncertainties which may significantly distort the estimation of the derived parameters. In this study, we use numerical simulations of synthetic data sets to assess the reliability of different methods for determining b-value and Mc, assuming the G-R law validity. After contaminating the synthetic catalogues with Gaussian noise (with selected standard deviations), the analysis is performed for numerous data sets of different sample size (N). The noise introduced to the data generally leads to a systematic overestimation of magnitudes close to and above Mc. This fact causes an increase of the average number of events above Mc, which in turn leads to an apparent decrease of the b-value. This may result in a significant overestimation of seismicity rate even well above the actual completeness level. The b-value can in general be reliably estimated even for relatively small data sets (N < 1000) when only magnitudes higher than the actual completeness level are used. Nevertheless, a correction of the total number of events belonging to each magnitude class (i.e. 0.1 unit) should be considered, to deal with the magnitude uncertainty effect. Because magnitude uncertainties (here with the form of Gaussian noise) are inevitable in all instrumental catalogues, this finding is fundamental for seismicity rate and seismic hazard assessment analyses. Also important is that for some data analyses significant bias cannot necessarily be avoided by choosing a high Mc value for analysis. In such cases, there may be a risk of severe miscalculation of
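The simulation design described above can be sketched as follows, using the Aki maximum-likelihood b-value estimator on a synthetic Gutenberg-Richter catalogue contaminated with Gaussian magnitude noise. The parameter values (b = 1, noise sd = 0.2) are illustrative, not the study's.

```python
import math
import random

random.seed(1)
b_true = 1.0
beta = b_true * math.log(10)

# Synthetic Gutenberg-Richter catalogue, complete down to M = 1.0
mags = [1.0 + random.expovariate(beta) for _ in range(50000)]
noisy = [m + random.gauss(0.0, 0.2) for m in mags]  # magnitude errors

def aki_b(ms, mc):
    """Aki maximum-likelihood b-value for continuous magnitudes >= mc
    (for 0.1-binned catalogues, replace mc with mc - 0.05)."""
    sel = [m for m in ms if m >= mc]
    b = math.log10(math.e) / (sum(sel) / len(sel) - mc)
    return len(sel), b

n_clean, b_clean = aki_b(mags, mc=2.0)
n_noisy, b_noisy = aki_b(noisy, mc=2.0)
# The noise inflates the event count above Mc, i.e. an apparent
# seismicity-rate increase, as the abstract describes
```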
Reliability analysis under epistemic uncertainty
International Nuclear Information System (INIS)
Nannapaneni, Saideep; Mahadevan, Sankaran
2016-01-01
This paper proposes a probabilistic framework to include both aleatory and epistemic uncertainty within model-based reliability estimation of engineering systems for individual limit states. Epistemic uncertainty is considered due to both data and model sources. Sparse point and/or interval data regarding the input random variables leads to uncertainty regarding their distribution types, distribution parameters, and correlations; this statistical uncertainty is included in the reliability analysis through a combination of likelihood-based representation, Bayesian hypothesis testing, and Bayesian model averaging techniques. Model errors, which include numerical solution errors and model form errors, are quantified through Gaussian process models and included in the reliability analysis. The probability integral transform is used to develop an auxiliary variable approach that facilitates a single-level representation of both aleatory and epistemic uncertainty. This strategy results in an efficient single-loop implementation of Monte Carlo simulation (MCS) and FORM/SORM techniques for reliability estimation under both aleatory and epistemic uncertainty. Two engineering examples are used to demonstrate the proposed methodology. - Highlights: • Epistemic uncertainty due to data and model included in reliability analysis. • A novel FORM-based approach proposed to include aleatory and epistemic uncertainty. • A single-loop Monte Carlo approach proposed to include both types of uncertainties. • Two engineering examples used for illustration.
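The single-loop idea, drawing the epistemic quantity (here, an uncertain distribution parameter) and the aleatory variable in the same Monte Carlo pass instead of nesting one loop inside the other, can be sketched as follows. The distributions and limit state are hypothetical, not the paper's engineering examples.

```python
import random

rng = random.Random(42)

def single_loop_failure_prob(n=100_000):
    """Single-loop MC: sample the epistemic parameter (uncertain mean
    of the load) and the aleatory load in the same pass, instead of an
    aleatory loop nested inside an epistemic loop."""
    fails = 0
    for _ in range(n):
        mu = rng.gauss(10.0, 0.5)  # epistemic: distribution parameter
        load = rng.gauss(mu, 1.0)  # aleatory: load given that parameter
        fails += load > 13.0       # limit state: fixed capacity of 13
    return fails / n

p_fail = single_loop_failure_prob()
```

The paper's auxiliary-variable construction via the probability integral transform generalizes this: any epistemic quantity is re-expressed through a uniform auxiliary variable so that one flat sampling loop covers both uncertainty types.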
Simplified propagation of standard uncertainties
International Nuclear Information System (INIS)
Shull, A.H.
1997-01-01
An essential part of any measurement control program is adequate knowledge of the uncertainties of the measurement system standards. Only with an estimate of the standards' uncertainties can one determine if the standard is adequate for its intended use or can one calculate the total uncertainty of the measurement process. Purchased standards usually have estimates of uncertainty on their certificates. However, when standards are prepared and characterized by a laboratory, variance propagation is required to estimate the uncertainty of the standard. Traditional variance propagation typically involves tedious use of partial derivatives, unfriendly software and the availability of statistical expertise. As a result, the uncertainty of prepared standards is often not determined or determined incorrectly. For situations meeting stated assumptions, easier shortcut methods of estimation are now available which eliminate the need for partial derivatives and require only a spreadsheet or calculator. A system of simplifying the calculations by dividing into subgroups of absolute and relative uncertainties is utilized. These methods also incorporate the International Standards Organization (ISO) concepts for combining systematic and random uncertainties as published in their Guide to the Expression of Measurement Uncertainty. Details of the simplified methods and examples of their use are included in the paper
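The shortcut described, root-sum-square combination of relative uncertainties for products and quotients and of absolute uncertainties for sums and differences, can be sketched in a few lines. The component values are hypothetical; a real standard's budget would come from its preparation records.

```python
import math

def u_rel_product(rel_components):
    """Relative standard uncertainty of a product or quotient:
    root-sum-square of the factors' relative uncertainties."""
    return math.sqrt(sum(u ** 2 for u in rel_components))

def u_abs_sum(abs_components):
    """Absolute standard uncertainty of a sum or difference:
    root-sum-square of the terms' absolute uncertainties."""
    return math.sqrt(sum(u ** 2 for u in abs_components))

# Hypothetical standard prepared as concentration = mass / volume:
u_rel = u_rel_product([0.001, 0.0005])  # balance and flask, relative
u_abs = u_abs_sum([0.3, 0.4])           # two mass additions, in mg
```

This is exactly the spreadsheet-friendly arithmetic the abstract refers to: no partial derivatives are needed as long as each subgroup contains only products/quotients or only sums/differences of independent quantities.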
UNCERTAINTY IN THE DEVELOPMENT AND USE OF EQUATION OF STATE MODELS
Weirs, V. Gregory; Fabian, Nathan; Potter, Kristin; McNamara, Laura; Otahal, Thomas
2013-01-01
In this paper we present the results from a series of focus groups on the visualization of uncertainty in equation-of-state (EOS) models. The initial goal was to identify the most effective ways to present EOS uncertainty to analysts, code developers, and material modelers. Four prototype visualizations were developed to present EOS surfaces in a three-dimensional, thermodynamic space. Focus group participants, primarily from Sandia National Laboratories, evaluated particular features of the various techniques for different use cases and discussed their individual workflow processes, experiences with other visualization tools, and the impact of uncertainty on their work. Related to our prototypes, we found the 3D presentations to be helpful for seeing a large amount of information at once and for a big-picture view; however, participants also desired relatively simple, two-dimensional graphics for better quantitative understanding and because these plots are part of the existing visual language for material models. In addition to feedback on the prototypes, several themes and issues emerged that are as compelling as the original goal and will eventually serve as a starting point for further development of visualization and analysis tools. In particular, a distributed workflow centered around material models was identified. Material model stakeholders contribute and extract information at different points in this workflow depending on their role, but encounter various institutional and technical barriers which restrict the flow of information. An effective software tool for this community must be cognizant of this workflow and alleviate the bottlenecks and barriers within it. Uncertainty in EOS models is defined and interpreted differently at the various stages of the workflow. In this context, uncertainty propagation is difficult to reduce to the mathematical problem of estimating the uncertainty of an output from uncertain inputs.
Hybrid feature selection algorithm using symmetrical uncertainty and a harmony search algorithm
Salameh Shreem, Salam; Abdullah, Salwani; Nazri, Mohd Zakree Ahmad
2016-04-01
Microarray technology can be used as an efficient diagnostic system to recognise diseases such as tumours or to discriminate between different types of cancers in normal tissues. This technology has received increasing attention from the bioinformatics community because of its potential in designing powerful decision-making tools for cancer diagnosis. However, the presence of thousands or tens of thousands of genes affects the predictive accuracy of this technology from the perspective of classification. Thus, a key issue in microarray data is identifying or selecting the smallest possible set of genes from the input data that can achieve good predictive accuracy for classification. In this work, we propose a two-stage selection algorithm for gene selection problems in microarray data-sets called the symmetrical uncertainty filter and harmony search algorithm wrapper (SU-HSA). Experimental results show that the SU-HSA is better than HSA in isolation for all data-sets in terms of the accuracy and achieves a lower number of genes on 6 out of 10 instances. Furthermore, the comparison with state-of-the-art methods shows that our proposed approach is able to obtain 5 (out of 10) new best results in terms of the number of selected genes and competitive results in terms of the classification accuracy.
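The filter stage of the SU-HSA scores each gene by symmetrical uncertainty, SU(X, Y) = 2·I(X; Y) / (H(X) + H(Y)), a normalized mutual information between a (discretized) feature and the class label. A minimal sketch with toy data, not microarray values:

```python
import math
from collections import Counter

def entropy(xs):
    """Shannon entropy (bits) of a discrete sample."""
    n = len(xs)
    return -sum((c / n) * math.log2(c / n) for c in Counter(xs).values())

def symmetrical_uncertainty(x, y):
    """SU(X, Y) = 2 * I(X; Y) / (H(X) + H(Y)), normalized to [0, 1]."""
    hx, hy = entropy(x), entropy(y)
    mi = hx + hy - entropy(list(zip(x, y)))  # mutual information
    return 2.0 * mi / (hx + hy) if hx + hy else 0.0

# Toy discretized expression values: perfectly informative vs. irrelevant
su_same = symmetrical_uncertainty([0, 1, 0, 1], [0, 1, 0, 1])
su_indep = symmetrical_uncertainty([0, 0, 1, 1], [0, 1, 0, 1])
```

Genes scoring above a threshold survive the filter; the harmony search wrapper then searches subsets of the survivors against a classifier.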
Evaluating prediction uncertainty
International Nuclear Information System (INIS)
McKay, M.D.
1995-03-01
The probability distribution of a model prediction is presented as a proper basis for evaluating the uncertainty in a model prediction that arises from uncertainty in input values. Determination of important model inputs and subsets of inputs is made through comparison of the prediction distribution with conditional prediction probability distributions. Replicated Latin hypercube sampling and variance ratios are used in estimation of the distributions and in construction of importance indicators. The assumption of a linear relation between model output and inputs is not necessary for the indicators to be effective. A sequential methodology which includes an independent validation step is applied in two analysis applications to select subsets of input variables which are the dominant causes of uncertainty in the model predictions. Comparison with results from methods which assume linearity shows how those methods may fail. Finally, suggestions for treating structural uncertainty for submodels are presented
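Replicated Latin hypercube sampling builds on the basic LHS construction, which stratifies each input into n equal-probability bins and places exactly one sample per bin, with bin orderings permuted independently across inputs. A sketch of that construction (on the unit hypercube; a real study would map the columns through each input's inverse CDF):

```python
import random

def latin_hypercube(n, dims, rng):
    """One n-point Latin hypercube in [0, 1)^dims: each variable is
    stratified into n equal bins, one point per bin, with the bins
    independently permuted across variables."""
    cols = []
    for _ in range(dims):
        perm = list(range(n))
        rng.shuffle(perm)
        cols.append([(p + rng.random()) / n for p in perm])
    return list(zip(*cols))

sample = latin_hypercube(10, 2, random.Random(0))
```

Replication, as used in the paper, repeats this construction with fresh permutations so that variance ratios for the importance indicators can be estimated.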
Oil price uncertainty in Canada
Energy Technology Data Exchange (ETDEWEB)
Elder, John [Department of Finance and Real Estate, 1272 Campus Delivery, Colorado State University, Fort Collins, CO 80523 (United States); Serletis, Apostolos [Department of Economics, University of Calgary, Calgary, Alberta (Canada)
2009-11-15
Bernanke [Bernanke, Ben S. Irreversibility, uncertainty, and cyclical investment. Quarterly Journal of Economics 98 (1983), 85-106.] shows how uncertainty about energy prices may induce optimizing firms to postpone investment decisions, thereby leading to a decline in aggregate output. Elder and Serletis [Elder, John and Serletis, Apostolos. Oil price uncertainty.] find empirical evidence that uncertainty about oil prices has tended to depress investment in the United States. In this paper we assess the robustness of these results by investigating the effects of oil price uncertainty in Canada. Our results are remarkably similar to existing results for the United States, providing additional evidence that uncertainty about oil prices may provide another explanation for why the sharp oil price declines of 1985 failed to produce rapid output growth. Impulse-response analysis suggests that uncertainty about oil prices may tend to reinforce the negative response of output to positive oil shocks. (author)
Dental work force strategies during a period of change and uncertainty.
Brown, L J
2001-12-01
Both supply and demand influence the ability of the dental work force to adequately and efficiently provide dental care to a U.S. population growing in size and diversity. Major changes are occurring on both sides of the dental care market. Among factors shaping the demand for dental care are changing disease patterns, shifting population demographics, the extent and features of third-party payment, and growth of the economy and the population. The capacity of the dental work force to provide care is influenced by enhancements of productivity and numbers of dental health personnel, as well as their demographic and practice characteristics. The full impact of these changes is difficult to predict. The dentist-to-population ratio does not reflect all the factors that must be considered to develop an effective dental work force policy. Nationally, the dental work force is likely to be adequate for the next several years, but regional work force imbalances appear to exist and may get worse. Against this backdrop of change and uncertainty, future dental work force strategies should strive for short-term responsiveness while avoiding long-term inflexibility. Trends in the work force must be continually monitored. Thorough analysis is required, and action should be taken when necessary.
Evaluation of Uncertainty of IMRT QA Using 2 Dimensional Array Detector for Head and Neck Patients
International Nuclear Information System (INIS)
Ban, Tae Joon; Lee, Woo Suk; Kim, Dae Sup; Baek, Geum Mun; Kwak, Jung Won
2011-01-01
IMRT QA using a 2-dimensional array detector is carried out clinically on a discrete dose distribution, which can affect the uncertainty of evaluation using the gamma method. We analyzed the variation of the gamma index according to grid size and suggest a valid range of grid sizes for IMRT QA in the hospital. We performed QA using the OmniPro I'mRT system software version 1.7b on 10 head-and-neck IMRT patients. The reference dose plane (grid size, 0.1 cm; location, [0, 0, 0]) from the RTP was compared with dose planes of different grid sizes (0.1 cm, 0.5 cm, 1.0 cm, 2.0 cm, 4.0 cm) and different locations (along the Y-axis: 0 cm, 0.2 cm, 0.5 cm, 1.0 cm). The gamma index variation was evaluated by observing the changes in gamma pass rate, average signal and standard deviation for each case. The average signal for each grid size differed by 0%, -0.19%, -0.04%, -0.46% and -8.32%, and the standard deviation by 0%, -0.30%, 1.24%, -0.70% and -7.99%. The gamma pass rate for each grid size differed by 0%, 0.27%, -1.43%, 5.32% and 5.60%. The gamma evaluation results according to distance for grid sizes of 0.1 cm to 1.0 cm agreed with the reference condition (grid size 0.1 cm) within 1.5%, but deviated by over 5% when the grid size was greater than 2.0 cm. We recognize that the grid size used in gamma evaluation can introduce errors in IMRT QA, so the uncertainty of the gamma evaluation according to grid size must be considered, and a grid size smaller than 2 cm should be applied clinically to reduce error and increase accuracy.
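The gamma method combines a dose-difference criterion with a distance-to-agreement criterion. A minimal 1D sketch follows; this is a simplification of the 2D array comparison in the abstract, and the tolerance values in the test are illustrative, not the clinic's criteria:

```python
import numpy as np

def gamma_index_1d(ref_dose, eval_dose, positions, dose_tol, dist_tol):
    """Global 1D gamma index: for each reference point, the minimum over
    all evaluated points of the combined dose/distance metric.
    dose_tol is an absolute dose tolerance; dist_tol is a distance in the
    same units as `positions`. A point passes when gamma <= 1."""
    gamma = np.empty(len(ref_dose), dtype=float)
    for i, (r, d) in enumerate(zip(positions, ref_dose)):
        dose_term = (eval_dose - d) / dose_tol
        dist_term = (positions - r) / dist_tol
        gamma[i] = np.sqrt(dose_term**2 + dist_term**2).min()
    return gamma

def pass_rate(gamma):
    """Percentage of reference points with gamma <= 1."""
    return 100.0 * np.mean(gamma <= 1.0)
```

Because the minimum is searched over a discrete grid, a coarse evaluation grid overestimates the minimum distance term, which is exactly the grid-size error the study quantifies.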
Analyzing ROC curves using the effective set-size model
Samuelson, Frank W.; Abbey, Craig K.; He, Xin
2018-03-01
The Effective Set-Size model has been used to describe uncertainty in various signal detection experiments. The model regards images as if they were an effective number (M*) of searchable locations, where the observer treats each location as a location-known-exactly detection task with signals having average detectability d'. The model assumes a rational observer behaves as if he searches an effective number of independent locations and follows signal detection theory at each location. Thus the location-known-exactly detectability (d') and the effective number of independent locations M* fully characterize search performance. In this model the image rating in a single-response task is assumed to be the maximum response that the observer would assign to these many locations. The model has been used by a number of other researchers, and is well corroborated. We examine this model as a way of differentiating imaging tasks that radiologists perform. Tasks involving more searching or location uncertainty may have higher estimated M* values. In this work we applied the Effective Set-Size model to a number of medical imaging data sets. The data sets include radiologists reading screening and diagnostic mammography with and without computer-aided diagnosis (CAD), and breast tomosynthesis. We developed an algorithm to fit the model parameters using two-sample maximum-likelihood ordinal regression, similar to the classic bi-normal model. The resulting model ROC curves are rational and fit the observed data well. We find that the distributions of M* and d' differ significantly among these data sets, and differ between pairs of imaging systems within studies. For example, on average tomosynthesis increased readers' d' values, while CAD reduced the M* parameters. We demonstrate that the model parameters M* and d' are correlated. We conclude that the Effective Set-Size model may be a useful way of differentiating location uncertainty from the diagnostic uncertainty in medical imaging tasks.
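The model's decision rule (the image rating is the maximum of M* location responses) can be illustrated by simulation. This sketch only demonstrates the rule's qualitative behavior; it is not the authors' ordinal-regression fitting algorithm:

```python
import numpy as np

def simulate_ratings(d_prime, m_star, n, rng):
    """Single-response ratings under the Effective Set-Size rule: each
    image yields the max of m_star independent normal location responses;
    one location on signal-present images carries a mean shift of d'."""
    noise_absent = rng.standard_normal((n, m_star)).max(axis=1)
    present = rng.standard_normal((n, m_star))
    present[:, 0] += d_prime  # the single signal-containing location
    noise_present = present.max(axis=1)
    return noise_absent, noise_present

def empirical_auc(neg, pos):
    """Mann-Whitney estimate of the area under the ROC curve."""
    n_pairs = len(neg) * len(pos)
    wins = sum((pos > t).sum() + 0.5 * (pos == t).sum() for t in neg)
    return wins / n_pairs
```

Increasing d' raises the AUC, while increasing M* (more location uncertainty) lowers it, which is why the two parameters separate diagnostic from search difficulty.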
Predicting incident size from limited information
International Nuclear Information System (INIS)
Englehardt, J.D.
1995-01-01
Predicting the size of low-probability, high-consequence natural disasters, industrial accidents, and pollutant releases is often difficult due to limitations in the availability of data on rare events and future circumstances. When incident data are available, they may be difficult to fit with a lognormal distribution. Two Bayesian probability distributions for inferring future incident-size probabilities from limited, indirect, and subjective information are proposed in this paper. The distributions are derived from Pareto distributions that are shown to fit data on different incident types and are justified theoretically. The derived distributions incorporate both inherent variability and uncertainty due to information limitations. Results were analyzed to determine the amount of data needed to predict incident-size probabilities in various situations. Information requirements for incident-size prediction using the methods were low, particularly when the population distribution had a thick tail. Use of the distributions to predict accumulated oil-spill consequences was demonstrated.
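The Pareto (power-law) tail at the heart of the approach can be fitted and used for exceedance prediction with a few lines. This sketches only the underlying maximum-likelihood Pareto fit; the paper's Bayesian distributions, which add uncertainty due to information limitations, are not reproduced here:

```python
import numpy as np

def fit_pareto_shape(data, x_min):
    """Maximum-likelihood shape estimate for a Pareto tail:
    alpha_hat = n / sum(log(x / x_min)), using only x >= x_min."""
    x = np.asarray(data, dtype=float)
    x = x[x >= x_min]
    return len(x) / np.log(x / x_min).sum()

def exceedance_prob(x, x_min, alpha):
    """P(X > x) for a Pareto distribution with scale x_min, shape alpha."""
    return (x_min / x) ** alpha
```

The thick tail is what keeps information requirements low: a small sample pins down alpha well enough to extrapolate exceedance probabilities for sizes never yet observed.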
Uncertainty analysis of suppression pool heating during an ATWS in a BWR-5 plant
International Nuclear Information System (INIS)
Wulff, W.; Cheng, H.S.; Mallen, A.N.; Johnsen, G.W.; Lellouche, G.S.
1994-03-01
The uncertainty has been estimated of predicting the peak temperature in the suppression pool of a BWR power plant, which undergoes an NRC-postulated Anticipated Transient Without Scram (ATWS). The ATWS is initiated by recirculation-pump trips, and then leads to power and flow oscillations as they had occurred at the LaSalle-2 Power Station in March of 1988. After limit-cycle oscillations have been established, the turbines are tripped, but without MSIV closure, allowing steam discharge through the turbine bypass into the condenser. Postulated operator actions, namely to lower the reactor vessel pressure and the level elevation in the downcomer, are simulated by a robot model which accounts for operator uncertainty. All balance of plant and control systems modeling uncertainties were part of the statistical uncertainty analysis that was patterned after the Code Scaling, Applicability and Uncertainty (CSAU) evaluation methodology. The analysis showed that the predicted suppression-pool peak temperature of 329.3 K (133 degrees F) has a 95-percentile uncertainty of 14.4 K (26 degrees F), and that the size of this uncertainty bracket is dominated by the experimental uncertainty of measuring Safety and Relief Valve mass flow rates under critical-flow conditions. The analysis showed also that the probability of exceeding the suppression-pool temperature limit of 352.6 K (175 degrees F) is most likely zero (it is estimated as < 5 × 10⁻⁴). The square root of the sum of the squares of all the computed peak pool temperatures is 350.7 K (171.6 degrees F).
Additivity of entropic uncertainty relations
Directory of Open Access Journals (Sweden)
René Schwonnek
2018-03-01
Full Text Available We consider the uncertainty between two pairs of local projective measurements performed on a multipartite system. We show that the optimal bound in any linear uncertainty relation, formulated in terms of the Shannon entropy, is additive. This directly implies, against naive intuition, that the minimal entropic uncertainty can always be realized by fully separable states. Hence, in contradiction to proposals by other authors, no entanglement witness can be constructed solely by comparing the attainable uncertainties of entangled and separable states. However, our result gives rise to a huge simplification for computing global uncertainty bounds as they now can be deduced from local ones. Furthermore, we provide the natural generalization of the Maassen and Uffink inequality for linear uncertainty relations with arbitrary positive coefficients.
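The Maassen-Uffink inequality that the paper generalizes can be stated as follows (standard form, where $c$ is the maximal overlap between the eigenbases of the two projective measurements $A$ and $B$, and $H$ denotes the Shannon entropy of the outcome distribution):

```latex
H(A) + H(B) \;\ge\; -2\log_2 c,
\qquad
c \;=\; \max_{i,j}\, \bigl|\langle a_i \mid b_j \rangle\bigr| .
```

Additivity, in this notation, means that for measurements on a multipartite system the optimal constant on the right-hand side of a linear combination of local relations is simply the sum of the optimal local constants.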
International Nuclear Information System (INIS)
Espana, Samuel; Paganetti, Harald
2011-01-01
Dose calculation for lung tumors can be challenging due to the low density and the fine structure of the geometry. The latter is not fully considered in the CT image resolution used in treatment planning causing the prediction of a more homogeneous tissue distribution. In proton therapy, this could result in predicting an unrealistically sharp distal dose falloff, i.e. an underestimation of the distal dose falloff degradation. The goal of this work was the quantification of such effects. Two computational phantoms resembling a two-dimensional heterogeneous random lung geometry and a swine lung were considered applying a variety of voxel sizes for dose calculation. Monte Carlo simulations were used to compare the dose distributions predicted with the voxel size typically used for the treatment planning procedure with those expected to be delivered using the finest resolution. The results show, for example, distal falloff position differences of up to 4 mm between planned and expected dose at the 90% level for the heterogeneous random lung (assuming treatment plan on a 2 × 2 × 2.5 mm³ grid). For the swine lung, differences of up to 38 mm were seen when airways are present in the beam path when the treatment plan was done on a 0.8 × 0.8 × 2.4 mm³ grid. The two-dimensional heterogeneous random lung phantom apparently does not describe the impact of the geometry adequately because of the lack of heterogeneities in the axial direction. The differences observed in the swine lung between planned and expected dose are presumably due to the poor axial resolution of the CT images used in clinical routine. In conclusion, when assigning margins for treatment planning for lung cancer, proton range uncertainties due to the heterogeneous lung geometry and CT image resolution need to be considered.
International Nuclear Information System (INIS)
Ren, M J; Cheung, C F; Kong, L B
2012-01-01
In the measurement of ultra-precision freeform surfaces, least-squares-based form characterization methods are widely used to evaluate the form error of the measured surfaces. Although many methodologies have been proposed in recent years to improve the efficiency of the characterization process, relatively little research has been conducted on the analysis of associated uncertainty in the characterization results which may result from those characterization methods being used. As a result, this paper presents a task specific uncertainty analysis method with application in the least-squares-based form characterization of ultra-precision freeform surfaces. That is, the associated uncertainty in the form characterization results is estimated when the measured data are extracted from a specific surface with specific sampling strategy. Three factors are considered in this study which include measurement error, surface form error and sample size. The task specific uncertainty analysis method has been evaluated through a series of experiments. The results show that the task specific uncertainty analysis method can effectively estimate the uncertainty of the form characterization results for a specific freeform surface measurement
Attention has memory: priming for the size of the attentional focus.
Fuggetta, Giorgio; Lanfranchi, Silvia; Campana, Gianluca
2009-01-01
Repeating the same target's features or spatial position, as well as repeating the same context (e.g. distractor sets) in visual search, leads to a decrease in reaction times. This modulation can occur on a trial-by-trial basis (the previous trial primes the following one), but can also occur across multiple trials (i.e. performance in the current trial can benefit from features, position or context seen several trials earlier), and includes inhibition of different features, positions or contexts besides facilitation of the same ones. Here we asked whether a similar implicit memory mechanism exists for the size of the attentional focus. By manipulating the size of the attentional focus through repetition of search arrays with the same vs. different size, we found both facilitation for the same array size and inhibition for a different array size, as well as a progressive improvement in performance with an increasing number of repetitions of search arrays of the same size. These results show that implicit memory for the size of the attentional focus can guide visual search even in the absence of feature or position priming, or distractor contextual effects.
Measurement uncertainty and probability
Willink, Robin
2013-01-01
A measurement result is incomplete without a statement of its 'uncertainty' or 'margin of error'. But what does this statement actually tell us? By examining the practical meaning of probability, this book discusses what is meant by a '95 percent interval of measurement uncertainty', and how such an interval can be calculated. The book argues that the concept of an unknown 'target value' is essential if probability is to be used as a tool for evaluating measurement uncertainty. It uses statistical concepts, such as a conditional confidence interval, to present 'extended' classical methods for evaluating measurement uncertainty. The use of the Monte Carlo principle for the simulation of experiments is described. Useful for researchers and graduate students, the book also discusses other philosophies relating to the evaluation of measurement uncertainty. It employs clear notation and language to avoid the confusion that exists in this controversial field of science.
International Nuclear Information System (INIS)
Ánchel, F.; Barrachina, T.; Miró, R.; Verdú, G.; Juanas, J.; Macián-Juan, R.
2012-01-01
Highlights: ► Best-estimate codes are affected by the uncertainty in the methods and the models. ► Influence of the uncertainty in the macroscopic cross-sections in BWR and PWR RIA accident analyses. ► The fast diffusion coefficient, the scattering cross section and both fission cross sections are the most influential factors. ► The absorption cross sections have very little influence. ► Using a normal pdf yields more “conservative” results, in terms of the power peak reached, than quantifying the uncertainty with a uniform pdf. - Abstract: The Best Estimate analysis consists of a coupled thermal-hydraulic and neutronic description of the nuclear system's behavior; uncertainties from both aspects should be included and jointly propagated. This paper presents a study of the influence of the uncertainty in the macroscopic neutronic information that describes a three-dimensional core model on the most relevant results of the simulation of a Reactivity Induced Accident (RIA). The analyses of a BWR-RIA and a PWR-RIA have been carried out with a three-dimensional thermal-hydraulic and neutronic model for the coupled system TRACE-PARCS and RELAP-PARCS. The cross section information has been generated by the SIMTAB methodology based on the joint use of CASMO-SIMULATE. The statistically based methodology performs a Monte-Carlo kind of sampling of the uncertainty in the macroscopic cross sections. The size of the sampling is determined by the characteristics of the tolerance intervals by applying the Noether–Wilks formulas. A number of simulations equal to the sample size have been carried out in which the cross sections used by PARCS are directly modified with uncertainty, and non-parametric statistical methods are applied to the resulting sample of the values of the output variables to determine their intervals of tolerance.
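The Noether-Wilks sample-size determination mentioned above can be sketched as follows. This assumes the standard one-sided, non-parametric tolerance-limit form (the abstract does not state whether the one- or two-sided variant was used):

```python
def wilks_sample_size(coverage=0.95, confidence=0.95, order=1):
    """Smallest number of code runs n such that the largest (order=1) or
    second-largest (order=2) sampled output bounds the `coverage`
    quantile with probability `confidence` (one-sided Wilks formula)."""
    n = order
    while True:
        if order == 1:
            conf = 1.0 - coverage**n
        else:  # order 2: the second-largest value is taken as the bound
            conf = (1.0 - coverage**n
                    - n * (1.0 - coverage) * coverage ** (n - 1))
        if conf >= confidence:
            return n
        n += 1
```

For the common 95%/95% requirement this gives the well-known 59 runs at first order (and 93 at second order), which is what fixes "a number of simulations equal to the sample size" in the abstract.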
SUSD, Sensitivity and Uncertainty in Neutron Transport and Detector Response
International Nuclear Information System (INIS)
Furuta, Kazuo; Kondo, Shunsuke; Oka, Yoshiaki
1991-01-01
1 - Description of program or function: SUSD calculates sensitivity coefficients for one- and two-dimensional transport problems. Variance and standard deviation of detector responses or design parameters can be obtained using cross-section covariance matrices. In neutron transport problems, this code is able to perform sensitivity-uncertainty analysis for secondary angular distributions (SAD) or secondary energy distributions (SED). 2 - Method of solution: First-order perturbation theory is used to obtain sensitivity coefficients. The method described in the distributed report is employed to consider SAD/SED effects. 3 - Restrictions on the complexity of the problem: Variable dimensioning is used, so there is no limitation on individual array sizes, only on the total core size.
International Nuclear Information System (INIS)
Leggett, R.; Harrison, J.; Phipps, A.
2007-01-01
The biokinetic and dosimetric model of the gastrointestinal (GI) tract applied in current documents of the International Commission on Radiological Protection (ICRP) was developed in the mid-1960's. The model was based on features of a reference adult male and was first used by the ICRP in Publication 30, Limits for Intakes of Radionuclides by Workers (Part 1, 1979). In the late 1990's an ICRP task group was appointed to develop a biokinetic and dosimetric model of the alimentary tract that reflects updated information and addresses current needs in radiation protection. The new age-specific and gender-specific model, called the Human Alimentary Tract Model (HATM), has been completed and will replace the GI model of Publication 30 in upcoming ICRP documents. This paper discusses the basis for the structure and parameter values of the HATM, summarises the uncertainties associated with selected features and types of predictions of the HATM and examines the sensitivity of dose estimates to these uncertainties for selected radionuclides. Emphasis is on generic biokinetic features of the HATM, particularly transit times through the lumen of the alimentary tract, but key dosimetric features of the model are outlined, and the sensitivity of tissue dose estimates to uncertainties in dosimetric as well as biokinetic features of the HATM are examined for selected radionuclides. (authors)
Model uncertainty and probability
International Nuclear Information System (INIS)
Parry, G.W.
1994-01-01
This paper discusses the issue of model uncertainty. The use of probability as a measure of an analyst's uncertainty as well as a means of describing random processes has caused some confusion, even though the two uses are representing different types of uncertainty with respect to modeling a system. The importance of maintaining the distinction between the two types is illustrated with a simple example
Agriculture-driven deforestation in the tropics from 1990-2015: emissions, trends and uncertainties
Carter, Sarah; Herold, Martin; Avitabile, Valerio; de Bruin, Sytze; De Sy, Veronique; Kooistra, Lammert; Rufino, Mariana C.
2018-01-01
Limited data exists on emissions from agriculture-driven deforestation, and available data are typically uncertain. In this paper, we provide comparable estimates of emissions from both all deforestation and agriculture-driven deforestation, with uncertainties for 91 countries across the tropics between 1990 and 2015. Uncertainties associated with input datasets (activity data and emissions factors) were used to combine the datasets, where most certain datasets contribute the most. This method utilizes all the input data, while minimizing the uncertainty of the emissions estimate. The uncertainty of input datasets was influenced by the quality of the data, the sample size (for sample-based datasets), and the extent to which the timeframe of the data matches the period of interest. Area of deforestation, and the agriculture-driver factor (extent to which agriculture drives deforestation), were the most uncertain components of the emissions estimates, thus improvement in the uncertainties related to these estimates will provide the greatest reductions in uncertainties of emissions estimates. Over the period of the study, Latin America had the highest proportion of deforestation driven by agriculture (78%), and Africa had the lowest (62%). Latin America had the highest emissions from agriculture-driven deforestation, and these peaked at 974 ± 148 Mt CO2 yr-1 in 2000-2005. Africa saw a continuous increase in emissions between 1990 and 2015 (from 154 ± 21-412 ± 75 Mt CO2 yr-1), so mitigation initiatives could be prioritized there. Uncertainties for emissions from agriculture-driven deforestation are ± 62.4% (average over 1990-2015), and uncertainties were highest in Asia and lowest in Latin America. Uncertainty information is crucial for transparency when reporting, and gives credibility to related mitigation initiatives. We demonstrate that uncertainty data can also be useful when combining multiple open datasets, so we recommend new data
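The combination rule described, in which the most certain datasets contribute the most, is, under an assumption of independent input datasets, the standard inverse-variance weighting. A minimal sketch:

```python
import numpy as np

def combine_estimates(values, sigmas):
    """Inverse-variance weighted combination of independent estimates.
    The most certain inputs get the largest weights, and the combined
    uncertainty is never larger than the smallest input uncertainty."""
    values = np.asarray(values, dtype=float)
    sigmas = np.asarray(sigmas, dtype=float)
    w = 1.0 / sigmas**2
    mean = np.sum(w * values) / np.sum(w)
    sigma = np.sqrt(1.0 / np.sum(w))
    return mean, sigma
```

This is why the authors can use all input datasets while still minimizing the uncertainty of the combined emissions estimate.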
A new uncertainty importance measure
International Nuclear Information System (INIS)
Borgonovo, E.
2007-01-01
Uncertainty in parameters is present in many risk assessment problems and leads to uncertainty in model predictions. In this work, we introduce a global sensitivity indicator which looks at the influence of input uncertainty on the entire output distribution without reference to a specific moment of the output (moment independence) and which can be defined also in the presence of correlations among the parameters. We discuss its mathematical properties and highlight the differences between the present indicator, variance-based uncertainty importance measures and a moment independent sensitivity indicator previously introduced in the literature. Numerical results are discussed with application to the probabilistic risk assessment model on which Iman [A matrix-based approach to uncertainty and sensitivity analysis for fault trees. Risk Anal 1987;7(1):22-33] first introduced uncertainty importance measures
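A moment-independent indicator of this kind measures the average shift between the unconditional output distribution and the output distribution conditioned on an input. The following is a naive histogram-based Monte Carlo sketch of that idea, not the estimation scheme of the paper:

```python
import numpy as np

def delta_indicator(x, y, n_cond=20, n_bins=40):
    """Crude estimate of the moment-independent importance
    delta = 0.5 * E_X[ integral |f_Y(y) - f_{Y|X}(y)| dy ],
    from paired Monte Carlo samples of one input x and the output y."""
    edges = np.linspace(y.min(), y.max(), n_bins + 1)
    width = edges[1] - edges[0]
    f_y, _ = np.histogram(y, bins=edges, density=True)
    # Condition on slices of x holding (roughly) equal numbers of samples.
    order = np.argsort(x)
    shifts = []
    for chunk in np.array_split(order, n_cond):
        f_cond, _ = np.histogram(y[chunk], bins=edges, density=True)
        shifts.append(0.5 * np.sum(np.abs(f_cond - f_y)) * width)
    return float(np.mean(shifts))
```

Unlike variance-based indices, this looks at the entire conditional distribution, so it remains meaningful when the output distribution is skewed or when inputs are correlated.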
de'Michieli Vitturi, Mattia; Pardini, Federica; Spanu, Antonio; Neri, Augusto; Vittoria Salvetti, Maria
2015-04-01
Volcanic ash clouds represent a major hazard for populations living nearby volcanic centers producing a risk for humans and a potential threat to crops, ground infrastructures, and aviation traffic. Lagrangian particle dispersal models are commonly used for tracking ash particles emitted from volcanic plumes and transported under the action of atmospheric wind fields. In this work, we present the results of an uncertainty propagation analysis applied to volcanic ash dispersal from weak plumes with specific focus on the uncertainties related to the grain-size distribution of the mixture. To this aim, the Eulerian fully compressible mesoscale non-hydrostatic model WRF was used to generate the driving wind, representative of the atmospheric conditions occurring during the event of November 24, 2006 at Mt. Etna. Then, the Lagrangian particle model LPAC (de' Michieli Vitturi et al., JGR 2010) was used to simulate the transport of mass particles under the action of atmospheric conditions. The particle motion equations were derived by expressing the Lagrangian particle acceleration as the sum of the forces acting along its trajectory, with drag forces calculated as a function of particle diameter, density, shape and Reynolds number. The simulations were representative of weak plume events of Mt. Etna and aimed to quantify the effect on the dispersal process of the uncertainty in the particle sphericity and in the mean and variance of a log-normal distribution function describing the grain-size of ash particles released from the eruptive column. In order to analyze the sensitivity of particle dispersal to these uncertain parameters with a reasonable number of simulations, and therefore with affordable computational costs, response surfaces in the parameter space were built by using the generalized polynomial chaos technique. The uncertainty analysis allowed us to quantify the most probable values, as well as their pdf, of the number of particles as well as of the mean and
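The generalized polynomial chaos surrogate described above can be illustrated for a single uncertain parameter. This is a minimal non-intrusive sketch with a toy response function; the actual study built multi-dimensional response surfaces over the grain-size and sphericity parameters:

```python
import numpy as np
from math import factorial
from numpy.polynomial import hermite_e as He

def pce_mean_var(samples, responses, degree):
    """Non-intrusive polynomial chaos for one standard-normal input:
    least-squares fit in the probabilists' Hermite (He) basis, whose
    orthogonality gives mean = c_0 and variance = sum_k c_k^2 * k!."""
    coef = He.hermefit(samples, responses, degree)
    mean = coef[0]
    var = sum(c**2 * factorial(k) for k, c in enumerate(coef[1:], start=1))
    return mean, var
```

Once the coefficients are fitted from a modest number of model runs, statistics and pdfs of the output come from the cheap surrogate rather than from further expensive dispersal simulations, which is the point of the technique.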
Chitty, L S; Griffin, D R; Meaney, C; Barrett, A; Khalil, A; Pajkrt, E; Cole, T J
2011-03-01
To improve the prenatal diagnosis of achondroplasia by constructing charts of fetal size, defining frequency of sonographic features and exploring the role of non-invasive molecular diagnosis based on cell-free fetal deoxyribonucleic acid (DNA) in maternal plasma. Data on fetuses with a confirmed diagnosis of achondroplasia were obtained from our databases, records reviewed, sonographic features and measurements determined and charts of fetal size constructed using the LMS (lambda-mu-sigma) method and compared with charts used in normal pregnancies. Cases referred to our regional genetics laboratory for molecular diagnosis using cell-free fetal DNA were identified and results reviewed. Twenty-six cases were scanned in our unit. Fetal size charts showed that femur length was usually on or below the 3rd centile by 25 weeks' gestation, and always below the 3rd by 30 weeks. Head circumference was above the 50th centile, increasing to above the 95th when compared with normal for the majority of fetuses. The abdominal circumference was also increased but to a lesser extent. Commonly reported sonographic features were bowing of the femora, frontal bossing, short fingers, a small chest and polyhydramnios. Analysis of cell-free fetal DNA in six pregnancies confirmed the presence of the c.1138G > A mutation in the FGFR3 gene in four cases with achondroplasia, but not in the two subsequently found to be growth restricted. These data should improve the accuracy of diagnosis of achondroplasia based on sonographic findings, and have implications for targeted molecular confirmation that can reliably and safely be carried out using cell-free fetal DNA. Copyright © 2011 ISUOG. Published by John Wiley & Sons, Ltd.
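The LMS method converts a measurement into a z-score (and hence a centile) through a Box-Cox transformation. A minimal sketch follows; L, M and S are the gestational-age-specific chart parameters, and the values used in the test are purely illustrative:

```python
import numpy as np

def lms_z_score(x, L, M, S):
    """Z-score of measurement x against an LMS reference chart:
    z = ((x/M)**L - 1) / (L*S) for L != 0, and ln(x/M)/S when L = 0
    (the Box-Cox limit). M is the median, S the coefficient of
    variation, L the skewness (Box-Cox power) at this age."""
    if abs(L) < 1e-12:
        return np.log(x / M) / S
    return ((x / M) ** L - 1.0) / (L * S)
```

A measurement equal to the median M always gives z = 0 (the 50th centile); z near -1.88 corresponds to the 3rd centile used in the femur-length findings above.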
Risk, uncertainty and prophet: The psychological insights of Frank H. Knight
Directory of Open Access Journals (Sweden)
Tim Rakow
2010-10-01
Full Text Available Economist Frank H. Knight (1885-1972) is commonly credited with defining the distinction between decisions under "risk" (known chance) and decisions under "uncertainty" (unmeasurable probability) in his 1921 book Risk, Uncertainty and Profit. A closer reading of Knight (1921) reveals a host of psychological insights beyond this risk-uncertainty distinction, many of which foreshadow revolutionary advances in psychological decision theory from the latter half of the 20th century. Knight's description of economic decision making shared much with Simon's (1955, 1956) notion of bounded rationality, whereby choice behavior is regulated by cognitive and environmental constraints. Knight described features of risky choice that were to become key components of prospect theory (Kahneman and Tversky, 1979): the reference-dependent valuation of outcomes, and the non-linear weighting of probabilities. Knight also discussed several biases in human decision making, and pointed to two systems of reasoning: one quick, intuitive but error-prone, and a slower, more deliberate, rule-based system. A discussion of Knight's potential contribution to psychological decision theory emphasises the importance of a historical perspective on theory development, and the potential value of sourcing ideas from other disciplines or from earlier periods of time.
Sources of patient uncertainty when reviewing medical disclosure and consent documentation.
Donovan-Kicken, Erin; Mackert, Michael; Guinn, Trey D; Tollison, Andrew C; Breckinridge, Barbara
2013-02-01
Despite evidence that medical disclosure and consent forms are ineffective at communicating the risks and hazards of treatment and diagnostic procedures, little is known about exactly why they are difficult for patients to understand. The objective of this research was to examine what features of the forms increase people's uncertainty. Interviews were conducted with 254 individuals. After reading a sample consent form, participants described what they found confusing in the document. With uncertainty management as a theoretical framework, interview responses were analyzed for prominent themes. Four distinct sources of uncertainty emerged from participants' responses: (a) language, (b) risks and hazards, (c) the nature of the procedure, and (d) document composition and format. Findings indicate the value of simplifying medico-legal jargon, signposting definitions of terms, removing language that addresses multiple readers simultaneously, reorganizing bulleted lists of risks, and adding section breaks or negative space. These findings offer suggestions for providing more straightforward details about risks and hazards to patients, not necessarily through greater amounts of information but rather through more clear and sufficient material and better formatting. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Online updating and uncertainty quantification using nonstationary output-only measurement
Yuen, Ka-Veng; Kuok, Sin-Chi
2016-01-01
Extended Kalman filter (EKF) is widely adopted for state estimation and parametric identification of dynamical systems. In this algorithm, it is required to specify the covariance matrices of the process noise and measurement noise based on prior knowledge. However, improper assignment of these noise covariance matrices leads to unreliable estimation and misleading uncertainty estimation on the system state and model parameters. Furthermore, it may induce diverging estimation. To resolve these problems, we propose a Bayesian probabilistic algorithm for online estimation of the noise parameters which are used to characterize the noise covariance matrices. There are three major appealing features of the proposed approach. First, it resolves the divergence problem in the conventional usage of EKF due to improper choice of the noise covariance matrices. Second, the proposed approach ensures the reliability of the uncertainty quantification. Finally, since the noise parameters are allowed to be time-varying, nonstationary process noise and/or measurement noise are explicitly taken into account. Examples using stationary/nonstationary response of linear/nonlinear time-varying dynamical systems are presented to demonstrate the efficacy of the proposed approach. Furthermore, comparison with the conventional usage of EKF will be provided to reveal the necessity of the proposed approach for reliable model updating and uncertainty quantification.
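The role of the noise covariances that the proposed Bayesian scheme estimates online can be seen in even the simplest Kalman filter. The sketch below is a fixed-parameter scalar filter for illustration only; it is not the paper's adaptive algorithm, and q and r are the process- and measurement-noise variances that would there be time-varying and inferred from the data:

```python
import numpy as np

def kalman_filter_1d(measurements, q, r, x0=0.0, p0=1.0):
    """Scalar Kalman filter for a random-walk state x with process-noise
    variance q and measurement-noise variance r. Misassigning q or r
    skews both the state estimate and its claimed uncertainty p."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q                # predict: state uncertainty grows by q
        k = p / (p + r)          # Kalman gain: trust in the measurement
        x = x + k * (z - x)      # update with the innovation
        p = (1.0 - k) * p        # posterior uncertainty
        estimates.append(x)
    return np.array(estimates)
```

With r set far too small the gain stays near 1 and the filter chases measurement noise; with q set far too small on a genuinely time-varying state the filter goes stale — the two failure modes the adaptive noise estimation is designed to avoid.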
Uncertainty calculations made easier
International Nuclear Information System (INIS)
Hogenbirk, A.
1994-07-01
The results are presented of a neutron cross-section sensitivity/uncertainty analysis performed in a complicated 2D model of the NET shielding blanket design inside the ITER torus, surrounded by the cryostat/biological shield as planned for ITER. The calculations were performed with a code system developed at ECN Petten, with which sensitivity/uncertainty calculations become relatively simple. In order to check the deterministic neutron transport calculations (performed with DORT), calculations were also performed with the Monte Carlo code MCNP. Care was taken to model the 2.0 cm wide gaps between two blanket segments, as the neutron flux behind the vacuum vessel is largely determined by neutrons streaming through these gaps. The resulting neutron flux spectra are in excellent agreement up to the end of the cryostat. It is noted that at this position the attenuation of the neutron flux is about 11 orders of magnitude. The uncertainty in the energy-integrated flux at the beginning of the vacuum vessel and at the beginning of the cryostat was determined in the calculations. The uncertainty appears to be strongly dependent on the exact geometry: if the gaps are filled with stainless steel, the neutron spectrum changes strongly, which results in an uncertainty of 70% in the energy-integrated flux at the beginning of the cryostat in the no-gap geometry, compared to an uncertainty of only 5% in the gap geometry. Therefore, it is essential to take into account the exact geometry in sensitivity/uncertainty calculations. Furthermore, this study shows that an improvement of the covariance data is urgently needed in order to obtain reliable estimates of the uncertainties in response parameters in neutron transport calculations. (orig./GL)
Unexpected uncertainty, volatility and decision-making
Directory of Open Access Journals (Sweden)
Amy Rachel Bland
2012-06-01
Full Text Available The study of uncertainty in decision making is receiving greater attention in the fields of cognitive and computational neuroscience. Several lines of evidence are beginning to elucidate different variants of uncertainty. In particular, risk, ambiguity, and expected and unexpected forms of uncertainty are well articulated in the literature. In this article we review both empirical and theoretical evidence arguing for the potential distinction between three forms of uncertainty: expected uncertainty, unexpected uncertainty, and volatility. Particular attention will be devoted to exploring the distinction between unexpected uncertainty and volatility, which has been less appreciated in the literature. This includes evidence from computational modelling, neuromodulation, neuroimaging and electrophysiological studies. We further address the possible differentiation of cognitive control mechanisms used to deal with these forms of uncertainty. In particular, we explore a role for conflict monitoring and the temporal integration of information into working memory. Finally, we explore whether the Dual Modes of Control theory provides a theoretical framework for understanding the distinction between unexpected uncertainty and volatility.
Harvest Regulations and Implementation Uncertainty in Small Game Harvest Management
Directory of Open Access Journals (Sweden)
Pål F. Moa
2017-09-01
Full Text Available A main challenge in harvest management is to set policies that maximize the probability that management goals are met. While the management cycle includes multiple sources of uncertainty, only some of these have received considerable attention. Currently, there is a large gap in our knowledge about the implementation of harvest regulations, and the extent to which indirect control methods such as harvest regulations are actually able to regulate harvest in accordance with intended management objectives. In this perspective article, we first summarize and discuss hunting regulations currently used in management of grouse species (Tetraonidae) in Europe and North America. Management models suggested for grouse are most often based on proportional harvest or threshold harvest principles. These models are all built on theoretical principles for sustainable harvesting, and ultimately provide an estimate of a total allowable catch. However, implementation uncertainty is rarely examined in empirical or theoretical harvest studies, and few general findings have been reported. Nevertheless, circumstantial evidence suggests that many of the most popular regulations act in a depensatory manner, so that harvest bag sizes are more limited in years (or areas) where game density is high, contrary to general recommendations. A better understanding of the implementation uncertainty related to harvest regulations is crucial in order to establish sustainable management systems. We suggest that scenario tools like Management System Evaluation (MSE) should be used more frequently to examine the robustness of currently applied harvest regulations to such implementation uncertainty until more empirical evidence is available.
Model uncertainty in safety assessment
International Nuclear Information System (INIS)
Pulkkinen, U.; Huovinen, T.
1996-01-01
Uncertainty analyses are an essential part of any risk assessment. Usually the uncertainties of reliability model parameter values are described by probability distributions and the uncertainty is propagated through the whole risk model. In addition to the parameter uncertainties, the assumptions behind the risk models may be based on insufficient experimental observations and the models themselves may not be exact descriptions of the phenomena under analysis. The description and quantification of this type of uncertainty, model uncertainty, is the topic of this report. The model uncertainty is characterized and some approaches to model and quantify it are discussed. The emphasis is on so-called mixture models, which have been applied in PSAs. Some of the possible disadvantages of the mixture model are addressed. In addition to quantitative analyses, qualitative analysis is also discussed briefly. To illustrate the models, two simple case studies on failure intensity and human error modeling are described. In both examples, the analysis is based on simple mixture models, which are observed to apply in PSA analyses. (orig.) (36 refs., 6 figs., 2 tabs.)
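The mixture-model idea for a failure intensity can be illustrated with a minimal sketch (all values are hypothetical, not from the report): two candidate failure-rate models are mixed with prior weights, and the weights are updated by Bayes' rule as Poisson failure counts are observed.

```python
import math

def poisson_pmf(k, lam):
    """Probability of observing k events for a Poisson rate lam."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

# Two candidate models for a component's failure intensity (failures/year)
# with prior model weights; both sets of values are hypothetical.
lambdas = [0.5, 2.0]
weights = [0.7, 0.3]

# Observed yearly failure counts
for k in [1, 2, 3, 2]:
    post = [w * poisson_pmf(k, lam) for w, lam in zip(weights, lambdas)]
    total = sum(post)
    weights = [p / total for p in post]   # Bayes update of model weights

# Predictive (mixture) failure intensity
mixture_rate = sum(w * lam for w, lam in zip(weights, lambdas))
print([round(w, 3) for w in weights], round(mixture_rate, 2))
```

The mixture weights shift toward the model that better explains the data (here the higher rate), which is the mechanism the report exploits to represent model uncertainty alongside parameter uncertainty.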
Model uncertainty in safety assessment
Energy Technology Data Exchange (ETDEWEB)
Pulkkinen, U; Huovinen, T [VTT Automation, Espoo (Finland). Industrial Automation
1996-01-01
Uncertainty analyses are an essential part of any risk assessment. Usually the uncertainties of reliability model parameter values are described by probability distributions and the uncertainty is propagated through the whole risk model. In addition to the parameter uncertainties, the assumptions behind the risk models may be based on insufficient experimental observations and the models themselves may not be exact descriptions of the phenomena under analysis. The description and quantification of this type of uncertainty, model uncertainty, is the topic of this report. The model uncertainty is characterized and some approaches to model and quantify it are discussed. The emphasis is on so-called mixture models, which have been applied in PSAs. Some of the possible disadvantages of the mixture model are addressed. In addition to quantitative analyses, qualitative analysis is also discussed briefly. To illustrate the models, two simple case studies on failure intensity and human error modeling are described. In both examples, the analysis is based on simple mixture models, which are observed to apply in PSA analyses. (orig.) (36 refs., 6 figs., 2 tabs.)
Uncertainty Management and Sensitivity Analysis
DEFF Research Database (Denmark)
Rosenbaum, Ralph K.; Georgiadis, Stylianos; Fantke, Peter
2018-01-01
Uncertainty is always there and LCA is no exception to that. The presence of uncertainties of different types and from numerous sources in LCA results is a fact, but managing them makes it possible to quantify and improve the precision of a study and the robustness of its conclusions. LCA practice sometimes suffers from an imbalanced perception of uncertainties, justifying modelling choices and omissions. Identifying prevalent misconceptions around uncertainties in LCA is a central goal of this chapter, which aims to establish a positive approach focusing on the advantages of uncertainty management. The main objectives of this chapter are to learn how to deal with uncertainty in the context of LCA, how to quantify it, interpret and use it, and how to communicate it. The subject is approached more holistically than just focusing on relevant statistical methods or purely mathematical aspects. This chapter...
Uncertainty Reduction for Stochastic Processes on Complex Networks
Radicchi, Filippo; Castellano, Claudio
2018-05-01
Many real-world systems are characterized by stochastic dynamical rules where a complex network of interactions among individual elements probabilistically determines their state. Even with full knowledge of the network structure and of the stochastic rules, the ability to predict system configurations is generally characterized by a large uncertainty. Selecting a fraction of the nodes and observing their state may help to reduce the uncertainty about the unobserved nodes. However, choosing these points of observation in an optimal way is a highly nontrivial task, depending on the nature of the stochastic process and on the structure of the underlying interaction pattern. In this paper, we introduce a computationally efficient algorithm to determine quasioptimal solutions to the problem. The method leverages network sparsity to reduce computational complexity from exponential to almost quadratic, thus allowing the straightforward application of the method to mid-to-large-size systems. Although the method is exact only for equilibrium stochastic processes defined on trees, it turns out to be effective also for out-of-equilibrium processes on sparse loopy networks.
Directory of Open Access Journals (Sweden)
Rianne M. Bijlsma
2011-03-01
Full Text Available Stakeholder participation is advocated widely, but there is little structured, empirical research into its influence on policy development. We aim to deepen insight into the characteristics of participatory policy development by comparing it to expert-based policy development for the same case. We describe the process of problem framing and analysis, as well as the knowledge base used. We apply an uncertainty perspective to reveal differences between the approaches and speculate about possible explanations. We view policy development as a continuous handling of substantive uncertainty and process uncertainty, and investigate how actors' ways of handling uncertainty influence policy development. Our findings suggest that the wider frame that was adopted in the participatory approach was the result of a more active handling of process uncertainty. The stakeholders handled institutional uncertainty by broadening the problem frame, and they handled strategic uncertainty by negotiating commitment and by including all important stakeholder criteria in the frame. In the expert-based approach, we observed a more passive handling of uncertainty, apparently to avoid complexity. The experts handled institutional uncertainty by reducing the scope and by anticipating windows of opportunity in other policy arenas. Strategic uncertainty was handled by assuming stakeholders' acceptance of noncontroversial measures that balanced benefits and sacrifices. Three other observations are of interest to the scientific debate on participatory policy processes. Firstly, the participatory policy was less adaptive than the expert-based policy. Participants' observed low tolerance for process uncertainty made them opt for a rigorous "once and for all" settling of the conflict. Secondly, in the participatory approach, actors preferred procedures of traceable knowledge acquisition over controversial topics to handle substantive uncertainty. This
MO-B-207B-01: Harmonization & Robustness in Radiomics
Energy Technology Data Exchange (ETDEWEB)
Court, L. [UT MD Anderson Cancer Center (United States)
2016-06-15
Feature extraction for radiomics studies typically comprises the following stages: Imaging, segmentation, image processing, and feature extraction. Each of these stages has associated uncertainties that can affect the quality of a radiomics model created using the resulting image features. For example, the imaging device manufacturer and model have been shown to impact the values of image features, as have pixel size and imaging protocol parameters. Image processing, such as low-pass filtering to reduce noise, also changes calculated image features and should be designed to optimize the information content of the resulting features. The details of certain feature algorithms, such as co-occurrence matrix bin sizes, are also important, and should be optimized for specific radiomics tasks. The volume of the region of interest should be considered as image features can be related to volume and can give unanticipated results when the volumes are too small. In this session we will describe approaches to quantify the variabilities in radiomics studies, including the most recent results quantifying these variabilities for CT, MRI and PET imaging. We will discuss methods to optimize image processing and feature extraction in order to maximize the information content of the image features. Finally, we will describe work to harmonize imaging protocols and feature calculations to help minimize uncertainties in radiomics studies. Learning Objectives: At the end of this session, participants will be able to: Identify the sources of uncertainty in radiomics studies (CT, PET, and MRI imaging) Describe methods for quantifying the magnitude of uncertainties Describe approaches for mitigating the effects of the uncertainties on radiomics models Funding from NIH, CPRIT, Varian, Elekta; L. Court, NCI, CPRIT, Varian, Elekta.
MO-B-207B-00: Harmonization & Robustness in Radiomics
Energy Technology Data Exchange (ETDEWEB)
NONE
2016-06-15
Feature extraction for radiomics studies typically comprises the following stages: Imaging, segmentation, image processing, and feature extraction. Each of these stages has associated uncertainties that can affect the quality of a radiomics model created using the resulting image features. For example, the imaging device manufacturer and model have been shown to impact the values of image features, as have pixel size and imaging protocol parameters. Image processing, such as low-pass filtering to reduce noise, also changes calculated image features and should be designed to optimize the information content of the resulting features. The details of certain feature algorithms, such as co-occurrence matrix bin sizes, are also important, and should be optimized for specific radiomics tasks. The volume of the region of interest should be considered as image features can be related to volume and can give unanticipated results when the volumes are too small. In this session we will describe approaches to quantify the variabilities in radiomics studies, including the most recent results quantifying these variabilities for CT, MRI and PET imaging. We will discuss methods to optimize image processing and feature extraction in order to maximize the information content of the image features. Finally, we will describe work to harmonize imaging protocols and feature calculations to help minimize uncertainties in radiomics studies. Learning Objectives: At the end of this session, participants will be able to: Identify the sources of uncertainty in radiomics studies (CT, PET, and MRI imaging) Describe methods for quantifying the magnitude of uncertainties Describe approaches for mitigating the effects of the uncertainties on radiomics models Funding from NIH, CPRIT, Varian, Elekta; L. Court, NCI, CPRIT, Varian, Elekta.
The Quark-Gluon Plasma Equation of State and the Generalized Uncertainty Principle
Directory of Open Access Journals (Sweden)
L. I. Abou-Salem
2015-01-01
Full Text Available The quark-gluon plasma (QGP) equation of state within a minimal length scenario or Generalized Uncertainty Principle (GUP) is studied. The Generalized Uncertainty Principle is implemented in deriving the thermodynamics of ideal QGP at a vanishing chemical potential. We find a significant effect for the GUP term. The main features of QCD lattice results were quantitatively achieved in the cases of nf=0, nf=2, and nf=2+1 flavors for the energy density, the pressure, and the interaction measure. A notable result is the large value of the bag pressure, especially in the case of nf=2+1 flavors, which reflects the expected strong correlation between quarks in the bag. The asymptotic behavior, characterized by the Stefan-Boltzmann limit, is satisfied.
A Bayesian foundation for individual learning under uncertainty
Directory of Open Access Journals (Sweden)
Christoph eMathys
2011-05-01
Full Text Available Computational learning models are critical for understanding mechanisms of adaptive behavior. However, the two major current frameworks, reinforcement learning (RL) and Bayesian learning, both have certain limitations. For example, many Bayesian models are agnostic of inter-individual variability and involve complicated integrals, making online learning difficult. Here, we introduce a generic hierarchical Bayesian framework for individual learning under multiple forms of uncertainty (e.g., environmental volatility and perceptual uncertainty). The model assumes Gaussian random walks of states at all but the first level, with the step size determined by the next higher level. The coupling between levels is controlled by parameters that shape the influence of uncertainty on learning in a subject-specific fashion. Using variational Bayes under a mean field approximation and a novel approximation to the posterior energy function, we derive trial-by-trial update equations which (i) are analytical and extremely efficient, enabling real-time learning, (ii) have a natural interpretation in terms of RL, and (iii) contain parameters representing processes which play a key role in current theories of learning, e.g., precision-weighting of prediction error. These parameters allow for the expression of individual differences in learning and may relate to specific neuromodulatory mechanisms in the brain. Our model is very general: it can deal with both discrete and continuous states and equally accounts for deterministic and probabilistic relations between environmental events and perceptual states (i.e., situations with and without perceptual uncertainty). These properties are illustrated by simulations and analyses of empirical time series. Overall, our framework provides a novel foundation for understanding normal and pathological learning that contextualizes RL within a generic Bayesian scheme and thus connects it to principles of optimality from probability
A bayesian foundation for individual learning under uncertainty.
Mathys, Christoph; Daunizeau, Jean; Friston, Karl J; Stephan, Klaas E
2011-01-01
Computational learning models are critical for understanding mechanisms of adaptive behavior. However, the two major current frameworks, reinforcement learning (RL) and Bayesian learning, both have certain limitations. For example, many Bayesian models are agnostic of inter-individual variability and involve complicated integrals, making online learning difficult. Here, we introduce a generic hierarchical Bayesian framework for individual learning under multiple forms of uncertainty (e.g., environmental volatility and perceptual uncertainty). The model assumes Gaussian random walks of states at all but the first level, with the step size determined by the next highest level. The coupling between levels is controlled by parameters that shape the influence of uncertainty on learning in a subject-specific fashion. Using variational Bayes under a mean-field approximation and a novel approximation to the posterior energy function, we derive trial-by-trial update equations which (i) are analytical and extremely efficient, enabling real-time learning, (ii) have a natural interpretation in terms of RL, and (iii) contain parameters representing processes which play a key role in current theories of learning, e.g., precision-weighting of prediction error. These parameters allow for the expression of individual differences in learning and may relate to specific neuromodulatory mechanisms in the brain. Our model is very general: it can deal with both discrete and continuous states and equally accounts for deterministic and probabilistic relations between environmental events and perceptual states (i.e., situations with and without perceptual uncertainty). These properties are illustrated by simulations and analyses of empirical time series. Overall, our framework provides a novel foundation for understanding normal and pathological learning that contextualizes RL within a generic Bayesian scheme and thus connects it to principles of optimality from probability theory.
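A heavily simplified sketch of the core mechanism, precision-weighted updating of a belief that follows a Gaussian random walk, is shown below. This is a generic one-level illustration, not the authors' full hierarchical model, and all parameter values are made up.

```python
# One level of precision-weighted learning: the belief about a hidden
# state follows a Gaussian random walk, and the learning rate is the
# ratio of prior uncertainty to total (prior + observation) uncertainty.
def update(mu, sigma2, obs, obs_var, volatility):
    sigma2 += volatility                # random-walk diffusion widens belief
    lr = sigma2 / (sigma2 + obs_var)    # precision weight on prediction error
    mu += lr * (obs - mu)               # prediction-error update
    sigma2 *= (1 - lr)                  # posterior variance shrinks
    return mu, sigma2, lr

mu, s2, rates = 0.0, 1.0, []
stream = [1.0] * 20 + [4.0] * 20        # stable phase, then an abrupt jump
for obs in stream:
    mu, s2, lr = update(mu, s2, obs, obs_var=0.5, volatility=0.05)
    rates.append(lr)
print(round(mu, 2))
```

In the full hierarchical model the volatility term is itself learned at a higher level, which is what lets the learning rate adapt to environmental change in a subject-specific fashion.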
Can agent based models effectively reduce fisheries management implementation uncertainty?
Drexler, M.
2016-02-01
Uncertainty is an inherent feature of fisheries management. Implementation uncertainty remains a challenge to quantify, often due to unintended responses of users to management interventions. This problem will continue to plague both single-species and ecosystem-based fisheries management advice unless the mechanisms driving these behaviors are properly understood. Equilibrium models, where each actor in the system is treated as uniform and predictable, are not well suited to forecast the unintended behaviors of individual fishers. Alternatively, agent-based models (ABMs) can simulate the behaviors of each individual actor driven by differing incentives and constraints. This study evaluated the feasibility of using ABMs to capture macro-scale behaviors of the US West Coast Groundfish fleet. Agent behavior was specified at the vessel level. Agents made daily fishing decisions using knowledge of their own cost structure, catch history, and the histories of catch and quota markets. By adding only a relatively small number of incentives, the model was able to reproduce highly realistic macro patterns of expected outcomes in response to management policies (catch restrictions, MPAs, ITQs) while preserving vessel heterogeneity. These simulations indicate that agent-based modeling approaches hold much promise for simulating fisher behaviors and reducing implementation uncertainty. Additional processes affecting behavior, informed by surveys, are continually being added to the fisher behavior model. Further coupling of the fisher behavior model to a spatial ecosystem model will provide a fully integrated social, ecological, and economic model capable of performing management strategy evaluations to properly consider implementation uncertainty in fisheries management.
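A minimal sketch of the approach (illustrative only; the study's actual model includes cost structures, catch histories, and quota markets) shows how vessel-level heterogeneity plus a simple profitability rule produce fleet-level harvest patterns under a quota:

```python
import random

random.seed(1)

# Heterogeneous vessels with individual per-trip costs (illustrative values)
STOCK0, PRICE, QUOTA, DAYS, N = 10_000.0, 2.0, 3_000.0, 120, 50
costs = [random.uniform(4.0, 16.0) for _ in range(N)]

stock, landed = STOCK0, 0.0
for _ in range(DAYS):
    cpue = 0.03 * stock / N              # crude catch-per-vessel proxy
    for cost in costs:
        if landed >= QUOTA:
            break                        # management constraint binds
        if cpue * PRICE > cost:          # each agent fishes only if profitable
            take = min(cpue, QUOTA - landed)
            landed += take
            stock -= take

print(round(landed), round(stock))
```

Even this toy version shows the key point: aggregate landings emerge from individual incentive-driven decisions, so the fleet's response to a policy (here a quota) need not match what an equilibrium model with a uniform "average" vessel would predict.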
Religion in the face of uncertainty: an uncertainty-identity theory account of religiousness.
Hogg, Michael A; Adelman, Janice R; Blagg, Robert D
2010-02-01
The authors characterize religions as social groups and religiosity as the extent to which a person identifies with a religion, subscribes to its ideology or worldview, and conforms to its normative practices. They argue that religions have attributes that make them well suited to reduce feelings of self-uncertainty. According to uncertainty-identity theory, people are motivated to reduce feelings of uncertainty about or reflecting on self; and identification with groups, particularly highly entitative groups, is a very effective way to reduce uncertainty. All groups provide belief systems and normative prescriptions related to everyday life. However, religions also address the nature of existence, invoking sacred entities and associated rituals and ceremonies. They are entitative groups that provide a moral compass and rules for living that pervade a person's life, making them particularly attractive in times of uncertainty. The authors document data supporting their analysis and discuss conditions that transform religiosity into religious zealotry and extremism.
Uncertainty analysis of atmospheric friction torque on the solid Earth
Directory of Open Access Journals (Sweden)
Haoming Yan
2016-05-01
Full Text Available The wind stress fields acquired from the European Centre for Medium-Range Weather Forecasts (ECMWF) and National Centers for Environmental Prediction (NCEP) climate models and from QSCAT satellite observations are analyzed using the frequency-wavenumber spectrum method. The spectra of the two climate models, i.e., ECMWF and NCEP, are similar for both the 10 m wind data and the model output wind stress data, which indicates that both climate models capture the key features of wind stress. The QSCAT wind stress data show similar characteristics to the two climate models in both the spectral domain and the spatial distribution, but with energy approximately 1.25 times larger than that of the climate models. These differences show the uncertainty in the different wind stress products, which inevitably causes uncertainties in the atmospheric friction torque on the solid Earth, with a 60% departure in annual amplitude, and further affects the precise estimation of the Earth's rotation.
Treatment planning for prostate focal laser ablation in the face of needle placement uncertainty
Energy Technology Data Exchange (ETDEWEB)
Cepek, Jeremy, E-mail: jcepek@robarts.ca; Fenster, Aaron [Robarts Research Institute, London, Ontario N6A 5K8, Canada and Biomedical Engineering, The University of Western Ontario, London, Ontario N6A 5B9 (Canada); Lindner, Uri; Trachtenberg, John [Department of Surgical Oncology, Division of Urology, University Health Network, Toronto, Ontario M5G 2C4 (Canada); Davidson, Sean R. H. [Ontario Cancer Institute, University Health Network, Toronto, Ontario M5G 2M9 (Canada); Haider, Masoom A. [Department of Medical Imaging, Sunnybrook Health Sciences Center, Toronto, Ontario M4N 3M5, Canada and Department of Medical Imaging, University of Toronto, Toronto, Ontario M5S 2J7 (Canada); Ghai, Sangeet [Department of Medical Imaging, University Health Network, Toronto, Ontario M5G 2M9 (Canada)
2014-01-15
Purpose: To study the effect of needle placement uncertainty on the expected probability of achieving complete focal target destruction in focal laser ablation (FLA) of prostate cancer. Methods: Using a simplified model of prostate cancer focal target, and focal laser ablation region shapes, Monte Carlo simulations of needle placement error were performed to estimate the probability of completely ablating a region of target tissue. Results: Graphs of the probability of complete focal target ablation are presented over clinically relevant ranges of focal target sizes and shapes, ablation region sizes, and levels of needle placement uncertainty. In addition, a table is provided for estimating the maximum target size that is treatable. The results predict that targets whose length is at least 5 mm smaller than the diameter of each ablation region can be confidently ablated using, at most, four laser fibers if the standard deviation in each component of needle placement error is less than 3 mm. However, targets larger than this (i.e., near to or exceeding the diameter of each ablation region) require more careful planning. This process is facilitated by using the table provided. Conclusions: The probability of completely ablating a focal target using FLA is sensitive to the level of needle placement uncertainty, especially as the target length approaches and becomes greater than the diameter of ablated tissue that each individual laser fiber can achieve. The results of this work can be used to help determine individual patient eligibility for prostate FLA, to guide the planning of prostate FLA, and to quantify the clinical benefit of using advanced systems for accurate needle delivery for this treatment modality.
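The Monte Carlo idea can be sketched as follows, under a simplified, hypothetical geometry (spherical target and ablation region, isotropic Gaussian placement error), which is not the exact model of the paper:

```python
import math
import random

def p_complete_ablation(r_target, r_ablate, sigma, n=50_000, seed=42):
    """Monte Carlo probability that a spherical target (radius r_target)
    lies entirely inside one ablation sphere (radius r_ablate) centred on
    the planned point, given Gaussian needle placement error with standard
    deviation sigma per axis. Complete ablation requires that the placement
    error satisfy ||error|| <= r_ablate - r_target."""
    margin = r_ablate - r_target
    if margin <= 0:
        return 0.0
    random.seed(seed)
    hits = 0
    for _ in range(n):
        e = math.sqrt(sum(random.gauss(0.0, sigma) ** 2 for _ in range(3)))
        hits += (e <= margin)
    return hits / n

# target radius 5 mm smaller than the ablation radius; compare 1 mm vs 3 mm
# placement error (all dimensions are hypothetical)
print(p_complete_ablation(5.0, 10.0, sigma=1.0))
print(p_complete_ablation(5.0, 10.0, sigma=3.0))
```

The comparison mirrors the paper's qualitative finding: the success probability is high while the error standard deviation is small relative to the spatial margin, and degrades sharply as the margin shrinks toward zero.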
Erha Uncertainty Analysis: Planning for the future
International Nuclear Information System (INIS)
Brami, T.R.; Hopkins, D.F.; Loguer, W.L.; Cornagia, D.M.; Braisted, A.W.C.
2002-01-01
The Erha field (OPL 209) was discovered in 1999 approximately 100 km off the coast of Nigeria in 1,100 m of water. The discovery well (Erha-1) encountered oil and gas in deep-water clastic reservoirs. The first appraisal well (Erha-2), drilled 1.6 km downdip to the northwest, penetrated an oil-water contact and confirmed a potentially commercial discovery. However, the Erha-3 and Erha-3 ST-1 boreholes, drilled on the faulted east side of the field in 2001, encountered shallower fluid contacts. As a result of these findings, a comprehensive field-wide uncertainty analysis was performed to better understand what we know versus what we think regarding resource size and economic viability. The uncertainty analysis process applied at Erha is an integrated scenario-based probabilistic approach to model resources and reserves. Its goal is to provide quantitative results for a variety of scenarios, thus allowing identification of and focus on critical controls (the variables that are likely to impose the greatest influence). The initial focus at Erha was to incorporate the observed fluid contacts and to develop potential scenarios that included the range of possibilities in unpenetrated portions of the field. Four potential compartmentalization scenarios were hypothesized. The uncertainty model combines these scenarios with reservoir parameters and their plausible ranges. Input data come from multiple sources including: wells, 3D seismic, reservoir flow simulation, geochemistry, fault-seal analysis, sequence stratigraphic analysis, and analogs. Once created, the model is sampled using Monte Carlo techniques to create probability density functions for a variety of variables including oil in place and recoverable reserves. Results of the uncertainty analysis indicate that, despite a thinner oil column on the faulted east side of the field, Erha is an economically attractive opportunity. Further, the results have been used to develop data acquisition plans and mitigation strategies that
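The Monte Carlo sampling step described above can be sketched with a standard volumetric calculation (all distributions and parameter ranges below are hypothetical, not Erha data):

```python
import random

random.seed(7)

def ooip_sample():
    """One Monte Carlo draw of oil in place (stock-tank m^3) from
    triangular distributions over hypothetical parameter ranges."""
    grv = random.triangular(300e6, 700e6, 450e6)   # gross rock volume, m^3
    ntg = random.triangular(0.50, 0.80, 0.65)      # net-to-gross ratio
    phi = random.triangular(0.18, 0.30, 0.24)      # porosity
    sw  = random.triangular(0.15, 0.35, 0.25)      # water saturation
    bo  = random.triangular(1.10, 1.40, 1.25)      # formation volume factor
    return grv * ntg * phi * (1.0 - sw) / bo

samples = sorted(ooip_sample() for _ in range(20_000))
p90, p50, p10 = (samples[int(len(samples) * q)] for q in (0.10, 0.50, 0.90))
print(f"P90={p90:.3g}  P50={p50:.3g}  P10={p10:.3g}  (m^3)")
```

In a scenario-based workflow like the one described, a run of this kind would be repeated per compartmentalization scenario and the results combined with the scenario probabilities, so that the final density function reflects both geological scenarios and parameter ranges.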
Frenkel, Robert B; Farrance, Ian
2018-01-01
The "Guide to the Expression of Uncertainty in Measurement" (GUM) is the foundational document of metrology. Its recommendations apply to all areas of metrology including metrology associated with the biomedical sciences. When the output of a measurement process depends on the measurement of several inputs through a measurement equation or functional relationship, the propagation of uncertainties in the inputs to the uncertainty in the output demands a level of understanding of the differential calculus. This review is intended as an elementary guide to the differential calculus and its application to uncertainty in measurement. The review is in two parts. In Part I, Section 3, we consider the case of a single input and introduce the concepts of error and uncertainty. Next we discuss, in the following sections in Part I, such notions as derivatives and differentials, and the sensitivity of an output to errors in the input. The derivatives of functions are obtained using very elementary mathematics. The overall purpose of this review, here in Part I and subsequently in Part II, is to present the differential calculus for those in the medical sciences who wish to gain a quick but accurate understanding of the propagation of uncertainties. © 2018 Elsevier Inc. All rights reserved.
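The single-input case discussed in Part I reduces to first-order propagation, u(y) ≈ |f'(x)|·u(x). A minimal numerical sketch follows; the Beer-Lambert example and its parameter values are hypothetical, chosen only to illustrate the rule.

```python
import math

def propagate(f, x, u_x, h=1e-6):
    """First-order (GUM-style) uncertainty for a single input:
    u(y) ~= |f'(x)| * u(x), with f'(x) taken by central difference."""
    dfdx = (f(x + h) - f(x - h)) / (2.0 * h)   # numerical sensitivity
    return abs(dfdx) * u_x

# Beer-Lambert example: concentration c = A / (eps * l), with hypothetical
# molar absorptivity eps = 1.2e4 L/(mol*cm) and path length l = 1 cm.
eps, path = 1.2e4, 1.0
conc = lambda A: A / (eps * path)
u_c = propagate(conc, x=0.480, u_x=0.005)
print(f"c = {conc(0.480):.3e} mol/L, u(c) = {u_c:.2e} mol/L")
```

The numerical derivative stands in for the analytic one; when f is as simple as here, the two agree to rounding error, and the multi-input generalization (Part II territory) sums such sensitivity-weighted variances in quadrature.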
Mandava, Pitchaiah; Krumpelman, Chase S; Shah, Jharna N; White, Donna L; Kent, Thomas A
2013-01-01
Clinical trial outcomes often involve an ordinal scale of subjective functional assessments but the optimal way to quantify results is not clear. In stroke, the most commonly used scale, the modified Rankin Score (mRS), a range of scores ("Shift") is proposed as superior to dichotomization because of greater information transfer. The influence of known uncertainties in mRS assessment has not been quantified. We hypothesized that errors caused by uncertainties could be quantified by applying information theory. Using Shannon's model, we quantified errors of the "Shift" compared to dichotomized outcomes using published distributions of mRS uncertainties and applied this model to clinical trials. We identified 35 randomized stroke trials that met inclusion criteria. Each trial's mRS distribution was multiplied with the noise distribution from published mRS inter-rater variability to generate an error percentage for "shift" and dichotomized cut-points. For the SAINT I neuroprotectant trial, considered positive by "shift" mRS while the larger follow-up SAINT II trial was negative, we recalculated sample size required if classification uncertainty was taken into account. Considering the full mRS range, error rate was 26.1%±5.31 (Mean±SD). Error rates were lower for all dichotomizations tested using cut-points (e.g. mRS 1; 6.8%±2.89; overall pdecrease in reliability. The resultant errors need to be considered since sample size may otherwise be underestimated. In principle, we have outlined an approach to error estimation for any condition in which there are uncertainties in outcome assessment. We provide the user with programs to calculate and incorporate errors into sample size estimation.
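The described calculation, multiplying a trial's outcome distribution by an inter-rater noise (confusion) matrix, can be sketched as follows. Both distributions below are hypothetical, chosen only to illustrate why dichotomized cut-points yield lower error rates than the full scale.

```python
# P(true mRS = i), i = 0..6 (hypothetical trial outcome distribution)
p_true = [0.10, 0.15, 0.15, 0.20, 0.20, 0.10, 0.10]

# Inter-rater confusion matrix: exact agreement on the diagonal, the
# remaining probability spread over adjacent scores (hypothetical values).
diag = [0.90, 0.80, 0.75, 0.75, 0.75, 0.80, 0.90]
confusion = []
for i in range(7):
    row = [0.0] * 7
    row[i] = diag[i]
    nbrs = [j for j in (i - 1, i + 1) if 0 <= j <= 6]
    for j in nbrs:
        row[j] = (1.0 - diag[i]) / len(nbrs)
    confusion.append(row)

# Error over the full 7-point scale: any recorded score != true score
full_error = sum(p_true[i] * confusion[i][j]
                 for i in range(7) for j in range(7) if i != j)

# Error for a dichotomized outcome (mRS <= 1 vs >= 2): only scores that
# cross the cut-point count as misclassifications
cut = 1
dich_error = sum(p_true[i] * confusion[i][j]
                 for i in range(7) for j in range(7)
                 if (i <= cut) != (j <= cut))

print(f"full scale: {full_error:.3f}, dichotomized: {dich_error:.3f}")
```

Because most inter-rater disagreements land on adjacent scores, only the disagreements straddling the cut-point corrupt a dichotomized outcome, which is the intuition behind the abstract's lower error rates for cut-points than for the full "shift" analysis.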
Parabolic features and the erosion rate on Venus
Strom, Robert G.
1993-01-01
The impact cratering record on Venus consists of 919 craters covering 98 percent of the surface. These craters are remarkably well preserved, and most show pristine structures including fresh ejecta blankets. Only 35 craters (3.8 percent) have had their ejecta blankets embayed by lava, and most of these occur in the Atla-Beta Regio region, an area thought to be recently active. Parabolic features are associated with 66 of the 919 craters. These craters range in size from 6 to 105 km in diameter. The parabolic features are thought to be the result of the deposition of fine-grained ejecta by winds in the dense venusian atmosphere. The deposits cover about 9 percent of the surface and none appear to be embayed by younger volcanic materials. However, there appears to be a paucity of these deposits in the Atla-Beta Regio region, and this may be due to the more recent volcanism in this area of Venus. Since parabolic features are probably fine-grained, wind-deposited ejecta, all impact craters on Venus probably had these deposits at some time in the past. The older deposits have probably been either eroded or buried by eolian processes. Therefore, the present population of these features is probably associated with the most recent impact craters on the planet. Furthermore, the size/frequency distribution of craters with parabolic features is virtually identical to that of the total crater population. This suggests that there has been little loss of small parabolic features compared to large ones; otherwise there should be a significant and systematic paucity of craters with parabolic features with decreasing size compared to the total crater population. Whatever is erasing the parabolic features apparently does so uniformly regardless of the areal extent of the deposit. The lifetime of parabolic features and the eolian erosion rate on Venus can be estimated from the average age of the surface and the present population of parabolic features.
Radiographic features of periapical cysts and granulomas
Zain, R. B.; Roswati, N.; Ismail, K.
1989-01-01
Many studies have reported on the radiographic sizes of periapical lesions. However, no studies have reported on the prevalence of subjective radiographic features in these lesions, apart from the early assumption that a periapical cyst usually exhibits a radiopaque cortex. This study was conducted to evaluate the prevalence of several subjective radiographic features of periapical cysts and granulomas in the hope of identifying features that may be suggestive of either diagnosis. The resu...
Honti, Mark; Reichert, Peter; Scheidegger, Andreas; Stamm, Christian
2013-04-01
Climate change impact assessments have become more and more popular in hydrology since the mid-1980s, with another boost after the publication of the IPCC AR4 report. Over hundreds of impact studies a quasi-standard methodology has emerged, shaped mainly by the growing public demand for predicting how water resources management or flood protection should change in the near future. The "standard" workflow considers future climate under a specific IPCC emission scenario simulated by global circulation models (GCMs), possibly downscaled by a regional climate model (RCM) and/or a stochastic weather generator. The output from the climate models is typically corrected for bias before feeding it into a calibrated hydrological model, which is run on the past and future meteorological data to analyse the impacts of climate change on the hydrological indicators of interest. The impact predictions are as uncertain as any forecast that tries to describe the behaviour of an extremely complex system decades into the future. Future climate predictions are uncertain due to scenario uncertainty and to GCM model uncertainty, which is obvious at resolutions finer than the continental scale. As in any hierarchical model system, uncertainty propagates through the descendant components. Downscaling adds uncertainty through the deficiencies of RCMs and/or weather generators. Bias correction adds a strong deterministic shift to the input data. Finally, the predictive uncertainty of the hydrological model ends the cascade that leads to the total uncertainty of the hydrological impact assessment. There is an emerging consensus among many studies on the relative importance of the different uncertainty sources. The prevailing perception is that GCM uncertainty dominates hydrological impact studies. Only a few studies have found that the predictive uncertainty of hydrological models can be in the same range as, or even larger than, the climatic uncertainty. We carried out a
International Nuclear Information System (INIS)
Pasanisi, Alberto; Keller, Merlin; Parent, Eric
2012-01-01
In the context of risk analysis under uncertainty, we focus here on the problem of estimating a so-called quantity of interest of an uncertainty analysis problem, i.e. a given feature of the probability distribution function (pdf) of the output of a deterministic model with uncertain inputs. We stay here in a fully probabilistic setting. A common problem is how to account for epistemic uncertainty tainting the parameters of the probability distribution of the inputs. In standard practice, this uncertainty is often neglected (the plug-in approach). When a specific uncertainty assessment is made on the basis of the available information (expertise and/or data), a common solution consists in marginalizing the joint distribution of both the observable inputs and the parameters of the probabilistic model (i.e. computing the predictive pdf of the inputs), then propagating it through the deterministic model. We reinterpret this approach in the light of Bayesian decision theory and show that this practice leads the analyst to implicitly adopt a specific loss function, which may be inappropriate for the problem under investigation and suboptimal from a decisional perspective. These concepts are illustrated on a simple numerical example concerning a case of flood risk assessment.
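The contrast between the plug-in approach and marginalizing over parameter uncertainty can be illustrated with a toy Monte Carlo. The model g, the Gaussian input, and the posterior spread below are all invented for illustration, not taken from the paper:

```python
import random
import statistics

random.seed(0)
g = lambda x: x * x          # deterministic model (illustrative)

mu_hat, post_sd, sigma = 1.0, 0.5, 1.0
N = 100_000

# Plug-in: ignore parameter uncertainty, fix mu at its point estimate.
plug = [g(random.gauss(mu_hat, sigma)) for _ in range(N)]

# Predictive: draw mu from its (posterior) distribution first, then
# draw the input -- i.e. marginalise the joint distribution of inputs
# and parameters before propagating through the model.
pred = []
for _ in range(N):
    mu = random.gauss(mu_hat, post_sd)
    pred.append(g(random.gauss(mu, sigma)))

m_plug = statistics.fmean(plug)   # ~ mu^2 + sigma^2 = 2.0
m_pred = statistics.fmean(pred)   # ~ (mu^2 + post_sd^2) + sigma^2 = 2.25
```

The predictive mean is larger because marginalizing inflates the effective input variance; which summary is appropriate depends on the loss function, which is exactly the paper's point.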
Deterministic uncertainty analysis
International Nuclear Information System (INIS)
Worley, B.A.
1987-01-01
Uncertainties of computer results are of primary interest in applications such as high-level waste (HLW) repository performance assessment in which experimental validation is not possible or practical. This work presents an alternate deterministic approach for calculating uncertainties that has the potential to significantly reduce the number of computer runs required for conventional statistical analysis. 7 refs., 1 fig
International Nuclear Information System (INIS)
Depres, B.; Dossantos-Uzarralde, P.
2009-01-01
More than 150 researchers and engineers from universities and the industrial world met to discuss on the new methodologies developed around assessing uncertainty. About 20 papers were presented and the main topics were: methods to study the propagation of uncertainties, sensitivity analysis, nuclear data covariances or multi-parameter optimisation. This report gathers the contributions of CEA researchers and engineers
Supporting qualified database for V and V and uncertainty evaluation of best-estimate system codes
International Nuclear Information System (INIS)
Petruzzi, A.; D'Auria, F.
2014-01-01
Uncertainty evaluation constitutes a key feature of the BEPU (Best Estimate Plus Uncertainty) process. The uncertainty can be the result of a Monte Carlo type analysis involving input uncertainty parameters or the outcome of a process involving the use of experimental data and connected code calculations. Those uncertainty methods are discussed in several papers and guidelines (IAEA-SRS-52, OECD/NEA BEMUSE reports). The present paper aims at discussing the role and the depth of the analysis required for merging suitable experimental data on the one side with qualified code calculation results on the other. This aspect is mostly connected with the second approach for uncertainty mentioned above, but it can be used also in the framework of the first approach. Namely, the paper discusses the features and structure of the database, which includes the following kinds of documents: 1. The 'RDS-facility' (Reference Data Set for the selected facility): this includes the description of the facility, the geometrical characterization of any component of the facility, the instrumentation, the data acquisition system, the evaluation of pressure losses, the physical properties of the material and the characterization of pumps, valves and heat losses; 2. The 'RDS-test' (Reference Data Set for the selected test of the facility): this includes the description of the main phenomena investigated during the test, the configuration of the facility for the selected test (possible new evaluation of pressure and heat losses if needed) and the specific boundary and initial conditions; 3. The 'QP' (Qualification Report) of the code calculation results: this includes the description of the nodalization developed following a set of homogeneous techniques, the achievement of the steady state conditions and the qualitative and quantitative analysis of the transient with the characterization of the Relevant Thermal-Hydraulics Aspects (RTA); 4. The EH (Engineering
Cuticular features as indicators of environmental pollution
G. K. Sharma
1976-01-01
Several leaf cuticular features such as stomatal frequency, stomatal size, trichome length, type, and frequency, and subsidiary cell complex respond to environmental pollution in different ways and hence can be used as indicators of environmental pollution in an area. Several modifications in cuticular features under polluted environments seem to indicate ecotypic or...
The uncertainty budget in pharmaceutical industry
DEFF Research Database (Denmark)
Heydorn, Kaj
of their uncertainty, exactly as described in GUM [2]. Pharmaceutical industry has therefore over the last 5 years shown increasing interest in accreditation according to ISO 17025 [3], and today uncertainty budgets are being developed for all so-called critical measurements. The uncertainty of results obtained...... that the uncertainty of a particular result is independent of the method used for its estimation. Several examples of uncertainty budgets for critical parameters based on the bottom-up procedure will be discussed, and it will be shown how the top-down method is used as a means of verifying uncertainty budgets, based...
Confidence from uncertainty - A multi-target drug screening method from robust control theory
Directory of Open Access Journals (Sweden)
Petzold Linda R
2010-11-01
Full Text Available Abstract Background Robustness is a recognized feature of biological systems that evolved as a defence to environmental variability. Complex diseases such as diabetes, cancer, and bacterial and viral infections exploit the same mechanisms that allow for robust behaviour in healthy conditions to ensure their own continuance. Single drug therapies, while generally potent regulators of their specific protein/gene targets, often fail to counter the robustness of the disease in question. Multi-drug therapies offer a powerful means to restore disrupted biological networks, by targeting the subsystem of interest while preventing the diseased network from reconciling through available, redundant mechanisms. Modelling techniques are needed to manage the high number of combinatorial possibilities arising in multi-drug therapeutic design, and to identify synergistic targets that are robust to system uncertainty. Results We present the application of a method from robust control theory, Structured Singular Value or μ-analysis, to identify highly effective multi-drug therapies by using robustness in the face of uncertainty as a new means of target discrimination. We illustrate the method by means of a case study of a negative feedback network motif subject to parametric uncertainty. Conclusions The paper contributes to the development of effective methods for drug screening in the context of network modelling affected by parametric uncertainty. The results have wide applicability for the analysis of different sources of uncertainty, such as noise in the data, neglected dynamics, or intrinsic biological variability.
LOFT uncertainty-analysis methodology
International Nuclear Information System (INIS)
Lassahn, G.D.
1983-01-01
The methodology used for uncertainty analyses of measurements in the Loss-of-Fluid Test (LOFT) nuclear-reactor-safety research program is described and compared with other methodologies established for performing uncertainty analyses
LOFT uncertainty-analysis methodology
International Nuclear Information System (INIS)
Lassahn, G.D.
1983-01-01
The methodology used for uncertainty analyses of measurements in the Loss-of-Fluid Test (LOFT) nuclear reactor safety research program is described and compared with other methodologies established for performing uncertainty analyses
Do Orthopaedic Surgeons Acknowledge Uncertainty?
Teunis, Teun; Janssen, Stein; Guitton, Thierry G.; Ring, David; Parisien, Robert
2016-01-01
Much of the decision-making in orthopaedics rests on uncertain evidence. Uncertainty is therefore part of our normal daily practice, and yet physician uncertainty regarding treatment could diminish patients' health. It is not known if physician uncertainty is a function of the evidence alone or if
International Nuclear Information System (INIS)
Allison, C.M.; Hohorst, J.K.; Perez, M.; Reventos, F.
2010-01-01
The RELAP/SCDAPSIM/MOD4.0 code, designed to predict the behavior of reactor systems during normal and accident conditions, is being developed as part of the international SCDAP Development and Training Program (SDTP). RELAP/SCDAPSIM/MOD4.0, which is the first version of RELAP5 completely rewritten to FORTRAN 90/95/2000 standards, uses publicly available RELAP5 and SCDAP models in combination with advanced programming and numerical techniques and other SDTP-member modeling/user options. One such member-developed option is an integrated uncertainty analysis package being developed jointly by the Technical University of Catalonia (UPC) and Innovative Systems Software (ISS). This paper briefly summarizes the features of RELAP/SCDAPSIM/MOD4.0 and the integrated uncertainty analysis package, and then presents an example of how the integrated uncertainty package can be set up and used for a simple pipe flow problem. (author)
The Sample Size Influence in the Accuracy of the Image Classification of the Remote Sensing
Directory of Open Access Journals (Sweden)
Thomaz C. e C. da Costa
2004-12-01
Full Text Available Landuse/landcover maps produced by classification of remote sensing images incorporate uncertainty. This uncertainty is measured by accuracy indices computed from reference samples. The size of the reference sample is usually set by a binomial approximation without the use of a pilot sample; in this way, the accuracy is not estimated but fixed a priori. If the estimated accuracy diverges from the a priori value, the sampling error will deviate from the expected error. Sizing the sample from a pilot sample, the theoretically correct procedure, is justified when no prior accuracy estimate is available for the study area, given the utility of the remote sensing product.
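The binomial approximation mentioned above, which fixes the reference sample size from an a priori accuracy, reduces to the standard formula n = z²·p(1−p)/d². A short sketch with illustrative numbers:

```python
import math

def reference_sample_size(p_a_priori, half_width, z=1.96):
    """Binomial approximation for the number of reference samples
    needed to assess map accuracy: n = z^2 * p(1-p) / d^2, where p
    is the a priori accuracy and d the tolerated half-width of the
    confidence interval."""
    p = p_a_priori
    return math.ceil(z**2 * p * (1 - p) / half_width**2)

# A priori accuracy 85%, tolerated error +/-5%, 95% confidence:
n = reference_sample_size(0.85, 0.05)   # 196 reference samples
```

This is exactly where the abstract's caveat bites: if the true accuracy differs from the 85% assumed a priori, the realized sampling error deviates from the planned ±5%.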
International Nuclear Information System (INIS)
Koch, J.; Peterson, S-R.
1995-10-01
Models used to simulate the environmental transfer of radionuclides typically include many parameters, the values of which are uncertain. An estimation of the uncertainty associated with the predictions is therefore essential. Different methods to quantify the uncertainty in predictions due to parameter uncertainties are reviewed. A statistical approach using random sampling techniques is recommended for complex models with many uncertain parameters. In this approach, the probability density function of the model output is obtained from multiple realizations of the model according to a multivariate random sample of the different input parameters. Sampling efficiency can be improved by using a stratified scheme (Latin Hypercube Sampling). Sample size can also be restricted when statistical tolerance limits need to be estimated. Methods to rank parameters according to their contribution to uncertainty in the model prediction are also reviewed. Recommended are measures of sensitivity, correlation and regression coefficients that can be calculated on values of input and output variables generated during the propagation of uncertainties through the model. A parameter uncertainty analysis is performed for the CHERPAC food chain model which estimates subjective confidence limits and intervals on the predictions at a 95% confidence level. A sensitivity analysis is also carried out using partial rank correlation coefficients. This identifies and ranks the parameters which are the main contributors to uncertainty in the predictions, thereby guiding further research efforts. (author). 44 refs., 2 tabs., 4 figs
Energy Technology Data Exchange (ETDEWEB)
Koch, J. [Israel Atomic Energy Commission, Yavne (Israel). Soreq Nuclear Research Center; Peterson, S-R.
1995-10-01
Models used to simulate the environmental transfer of radionuclides typically include many parameters, the values of which are uncertain. An estimation of the uncertainty associated with the predictions is therefore essential. Different methods to quantify the uncertainty in predictions due to parameter uncertainties are reviewed. A statistical approach using random sampling techniques is recommended for complex models with many uncertain parameters. In this approach, the probability density function of the model output is obtained from multiple realizations of the model according to a multivariate random sample of the different input parameters. Sampling efficiency can be improved by using a stratified scheme (Latin Hypercube Sampling). Sample size can also be restricted when statistical tolerance limits need to be estimated. Methods to rank parameters according to their contribution to uncertainty in the model prediction are also reviewed. Recommended are measures of sensitivity, correlation and regression coefficients that can be calculated on values of input and output variables generated during the propagation of uncertainties through the model. A parameter uncertainty analysis is performed for the CHERPAC food chain model which estimates subjective confidence limits and intervals on the predictions at a 95% confidence level. A sensitivity analysis is also carried out using partial rank correlation coefficients. This identifies and ranks the parameters which are the main contributors to uncertainty in the predictions, thereby guiding further research efforts. (author). 44 refs., 2 tabs., 4 figs.
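The stratified scheme recommended above (Latin Hypercube Sampling) can be sketched with the standard library alone. This minimal version samples the unit hypercube; mapping the strata to actual parameter distributions (e.g. via inverse CDFs) is left to the caller:

```python
import random

def latin_hypercube(n_samples, n_params, seed=42):
    """Latin hypercube sample on the unit hypercube: each parameter's
    range is split into n_samples equal strata, and each stratum is
    used exactly once per parameter, in shuffled order."""
    rng = random.Random(seed)
    columns = []
    for _ in range(n_params):
        # one point drawn uniformly inside each stratum, then shuffled
        strata = [(k + rng.random()) / n_samples for k in range(n_samples)]
        rng.shuffle(strata)
        columns.append(strata)
    return [[col[i] for col in columns] for i in range(n_samples)]

pts = latin_hypercube(10, 3)   # 10 points covering all 10 strata per axis
```

Because every stratum of every parameter is hit exactly once, far fewer model runs are needed than with simple random sampling to cover the input space evenly.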
Uncertainty in geological and hydrogeological data
Directory of Open Access Journals (Sweden)
B. Nilsson
2007-09-01
Full Text Available Uncertainty in conceptual model structure and in environmental data is of essential interest when dealing with uncertainty in water resources management. To make quantification of uncertainty possible, it is necessary to identify and characterise the uncertainty in geological and hydrogeological data. This paper discusses a range of available techniques to describe the uncertainty related to geological model structure and scale of support. Literature examples of uncertainty in hydrogeological variables such as saturated hydraulic conductivity, specific yield, specific storage, effective porosity and dispersivity are given. Field data usually have a spatial and temporal scale of support that is different from the one on which numerical models for water resources management operate. Uncertainty in hydrogeological data variables is characterised and assessed within the methodological framework of the HarmoniRiB classification.
Uncertainty propagation in a multiscale model of nanocrystalline plasticity
International Nuclear Information System (INIS)
Koslowski, M.; Strachan, Alejandro
2011-01-01
We characterize how uncertainties propagate across spatial and temporal scales in a physics-based model of nanocrystalline plasticity of fcc metals. Our model combines molecular dynamics (MD) simulations, to characterize the atomic-level processes that govern dislocation-based plastic deformation, with a phase field approach to dislocation dynamics (PFDD) that describes how an ensemble of dislocations evolves and interacts to determine the mechanical response of the material. We apply this approach to a nanocrystalline Ni specimen of interest for micro-electromechanical (MEMS) switches. Our approach enables us to quantify how internal stresses that result from the fabrication process affect the properties of dislocations (using MD) and how these properties, in turn, affect the yield stress of the metallic membrane (using the PFDD model). Our predictions show that, for a nanocrystalline sample with small grain size (4 nm), a variation in residual stress of 20 MPa (typical in today's microfabrication techniques) would result in a variation in the critical resolved shear yield stress of approximately 15 MPa, a very small fraction of the nominal value of approximately 9 GPa. - Highlights: → Quantify how fabrication uncertainties affect yield stress in a microswitch component. → Propagate uncertainties in a multiscale model of single crystal plasticity. → Molecular dynamics quantifies how fabrication variations affect dislocations. → Dislocation dynamics relate variations in dislocation properties to yield stress.
International Nuclear Information System (INIS)
Lousteau, D.C.; Jernigan, T.C.; Schaffer, M.J.; Hussung, R.O.
1975-01-01
ISX, an Impurity Study Experiment, is presently being designed at Oak Ridge National Laboratory as a joint scientific effort between ORNL and General Atomic Company. ISX is a moderate size tokamak dedicated to the study of impurity production, diffusion, and control. The significant engineering features of this device are discussed
Methods for obtaining true particle size distributions from cross section measurements
Energy Technology Data Exchange (ETDEWEB)
Lord, Kristina Alyse [Iowa State Univ., Ames, IA (United States)
2013-01-01
Sectioning methods are frequently used to measure grain sizes in materials. These methods do not provide accurate grain sizes for two reasons. First, the sizes of features observed on random sections are always smaller than the true sizes of solid spherical shaped objects, as noted by Wicksell [1]. This is the case because the section very rarely passes through the center of solid spherical shaped objects randomly dispersed throughout a material. The sizes of features observed on random sections are inversely related to the distance of the center of the solid object from the section [1]. Second, on a plane section through the solid material, larger sized features are more frequently observed than smaller ones due to the larger probability for a section to come into contact with the larger sized portion of the spheres than the smaller sized portion. As a result, it is necessary to find a method that takes into account these reasons for inaccurate particle size measurements, while providing a correction factor for accurately determining true particle size measurements. I present a method for deducing true grain size distributions from those determined from specimen cross sections, either by measurement of equivalent grain diameters or linear intercepts.
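The first of the two biases above, that a random section through a sphere almost never shows its true radius, is easy to reproduce numerically. The sketch below assumes identical spheres of radius R and a plane guaranteed to intersect each one, so it isolates Wicksell's geometric effect only:

```python
import math
import random

rng = random.Random(1)
R = 1.0                 # true radius of the (identical) spheres
N = 200_000

# A plane at distance d from a sphere's centre (d uniform in [0, R])
# cuts a circle of radius sqrt(R^2 - d^2), so every observed section
# radius is at most R and usually well below it.
observed = [math.sqrt(R * R - d * d)
            for d in (rng.uniform(0.0, R) for _ in range(N))]

mean_obs = sum(observed) / N    # analytically pi/4 * R ~= 0.785 R
```

The mean observed radius tends to πR/4, which is why raw section measurements systematically underestimate true particle sizes and a correction factor of the kind the thesis develops is needed.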
Justification for recommended uncertainties
International Nuclear Information System (INIS)
Pronyaev, V.G.; Badikov, S.A.; Carlson, A.D.
2007-01-01
The uncertainties obtained in an earlier standards evaluation were considered to be unrealistically low by experts of the US Cross Section Evaluation Working Group (CSEWG). Therefore, the CSEWG Standards Subcommittee replaced the covariance matrices of evaluated uncertainties by expanded percentage errors that were assigned to the data over wide energy groups. There are a number of reasons that might lead to low uncertainties of the evaluated data: underestimation of the correlations existing between the results of different measurements; the presence of unrecognized systematic uncertainties in the experimental data, which can lead to biases in the evaluated data as well as to underestimations of the resulting uncertainties; and the fact that uncertainties for correlated data cannot be characterized only by percentage uncertainties or variances. Covariances between the evaluated value at 0.2 MeV and other points obtained in model (RAC R-matrix and PADE2 analytical expansion) and non-model (GMA) fits of the ⁶Li(n,t) TEST1 data and the correlation coefficients are presented, and covariances between the evaluated value at 0.045 MeV and other points (along the line or column of the matrix) as obtained in EDA and RAC R-matrix fits of the data available for reactions that pass through the formation of the ⁷Li system are discussed. The GMA fit with the GMA database is shown for comparison. The following diagrams are discussed: percentage uncertainties of the evaluated cross section for the ⁶Li(n,t) reaction and for the ²³⁵U(n,f) reaction; the estimation given by CSEWG experts; the GMA result with the full GMA database, including experimental data for the ⁶Li(n,t), ⁶Li(n,n) and ⁶Li(n,total) reactions; uncertainties in the GMA combined fit for the standards; and EDA and RAC R-matrix results, respectively. Uncertainties of absolute and ²⁵²Cf fission spectrum averaged cross section measurements, and deviations between measured and evaluated values for ²³⁵U(n,f) cross-sections in the neutron energy range 1
Adjoint-Based Uncertainty Quantification with MCNP
Energy Technology Data Exchange (ETDEWEB)
Seifried, Jeffrey E. [Univ. of California, Berkeley, CA (United States)
2011-09-01
This work serves to quantify the instantaneous uncertainties in neutron transport simulations born from nuclear data and statistical counting uncertainties. Perturbation and adjoint theories are used to derive implicit sensitivity expressions. These expressions are transformed into forms that are convenient for construction with MCNP6, creating the ability to perform adjoint-based uncertainty quantification with MCNP6. These new tools are exercised on the depleted-uranium hybrid LIFE blanket, quantifying its sensitivities and uncertainties to important figures of merit. Overall, these uncertainty estimates are small (< 2%). Having quantified the sensitivities and uncertainties, physical understanding of the system is gained and some confidence in the simulation is acquired.
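Once adjoint-based sensitivities are available, combining them with a nuclear data covariance matrix is the usual "sandwich rule" u_R² = SᵀCS. The sensitivities and covariance below are made-up illustrative numbers, not LIFE blanket values:

```python
def sandwich_variance(sensitivities, covariance):
    """'Sandwich rule' u_R^2 = S^T C S: fold a response's relative
    sensitivities to nuclear data into the data covariance matrix
    to get the response's relative variance."""
    n = len(sensitivities)
    var = 0.0
    for i in range(n):
        for j in range(n):
            var += sensitivities[i] * covariance[i][j] * sensitivities[j]
    return var

# Hypothetical relative sensitivities to two cross sections, and a
# hypothetical 2x2 relative covariance matrix for those data:
S = [0.8, -0.3]
C = [[4e-4, 1e-4],
     [1e-4, 9e-4]]
rel_var = sandwich_variance(S, C)
rel_unc = rel_var ** 0.5        # relative standard uncertainty
```

With these inputs the relative standard uncertainty comes out near 1.7%, i.e. of the same small order as the estimates quoted in the abstract.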
TreeBASIS Feature Descriptor and Its Hardware Implementation
Directory of Open Access Journals (Sweden)
Spencer Fowers
2014-01-01
Full Text Available This paper presents a novel feature descriptor called TreeBASIS that provides improvements in descriptor size, computation time, matching speed, and accuracy. This new descriptor uses a binary vocabulary tree that is computed using basis dictionary images and a test set of feature region images. To facilitate real-time implementation, a feature region image is binary quantized and the resulting quantized vector is passed into the BASIS vocabulary tree. A Hamming distance is then computed between the feature region image and the effectively descriptive basis dictionary image at a node to determine the branch taken and the path the feature region image takes is saved as a descriptor. The TreeBASIS feature descriptor is an excellent candidate for hardware implementation because of its reduced descriptor size and the fact that descriptors can be created and features matched without the use of floating point operations. The TreeBASIS descriptor is more computationally and space efficient than other descriptors such as BASIS, SIFT, and SURF. Moreover, it can be computed entirely in hardware without the support of a CPU for additional software-based computations. Experimental results and a hardware implementation show that the TreeBASIS descriptor compares well with other descriptors for frame-to-frame homography computation while requiring fewer hardware resources.
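The descriptor's key cost saving, matching binary vectors by Hamming distance with no floating-point operations, can be shown in a few lines. The 8-bit toy descriptors are illustrative only (real binary descriptors are hundreds of bits):

```python
def hamming(a, b):
    """Hamming distance between two binary descriptors held as
    integers -- a single XOR followed by a popcount, no floats."""
    return bin(a ^ b).count("1")

def match(query, database):
    """Nearest descriptor in the database under Hamming distance."""
    return min(database, key=lambda d: hamming(query, d))

# 8-bit toy descriptors:
db = [0b10110100, 0b01001011, 0b11110000]
best = match(0b10110110, db)     # one bit away from the first entry
```

Because XOR and popcount map directly onto logic gates, exactly this operation is what makes a pure-hardware implementation, without CPU support, practical.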
Discriminative Random Field Models for Subsurface Contamination Uncertainty Quantification
Arshadi, M.; Abriola, L. M.; Miller, E. L.; De Paolis Kaluza, C.
2017-12-01
Application of flow and transport simulators for prediction of the release, entrapment, and persistence of dense non-aqueous phase liquids (DNAPLs) and associated contaminant plumes is a computationally intensive process that requires specification of a large number of material properties and hydrologic/chemical parameters. Given its computational burden, this direct simulation approach is particularly ill-suited for quantifying both the expected performance and uncertainty associated with candidate remediation strategies under real field conditions. Prediction uncertainties primarily arise from limited information about contaminant mass distributions, as well as the spatial distribution of subsurface hydrologic properties. Application of direct simulation to quantify uncertainty would, thus, typically require simulating multiphase flow and transport for a large number of permeability and release scenarios to collect statistics associated with remedial effectiveness, a computationally prohibitive process. The primary objective of this work is to develop and demonstrate a methodology that employs measured field data to produce equi-probable stochastic representations of a subsurface source zone that capture the spatial distribution and uncertainty associated with key features that control remediation performance (i.e., permeability and contamination mass). Here we employ probabilistic models known as discriminative random fields (DRFs) to synthesize stochastic realizations of initial mass distributions consistent with known, and typically limited, site characterization data. Using a limited number of full scale simulations as training data, a statistical model is developed for predicting the distribution of contaminant mass (e.g., DNAPL saturation and aqueous concentration) across a heterogeneous domain. Monte-Carlo sampling methods are then employed, in conjunction with the trained statistical model, to generate realizations conditioned on measured borehole data
Application of intelligence based uncertainty analysis for HLW disposal
International Nuclear Information System (INIS)
Kato, Kazuyuki
2003-01-01
Safety assessment for geological disposal of high level radioactive waste inevitably involves factors that cannot be specified in a deterministic manner. These are namely: (1) 'variability' that arises from the stochastic nature of the processes and features considered, e.g., the distribution of canister corrosion times and the spatial heterogeneity of a host geological formation; (2) 'ignorance' due to incomplete or imprecise knowledge of the processes and conditions expected in the future, e.g., uncertainty in the estimation of solubilities and sorption coefficients for important nuclides. In many cases, a decision in assessment, e.g., selection among model options or determination of a parameter value, is subject to both variability and ignorance in a combined form. It is clearly important to evaluate the influences of both variability and ignorance on the result of a safety assessment in a consistent manner. We developed a unified methodology to handle variability and ignorance by using probabilistic and possibilistic techniques, respectively. The methodology has been applied to safety assessment of geological disposal of high level radioactive waste. Uncertainties associated with scenarios, models and parameters were defined in terms of fuzzy membership functions derived through a series of interviews with the experts, while variability was formulated by means of probability density functions (pdfs) based on the available data set. The exercise demonstrated the applicability of the new methodology and, in particular, its advantage in quantifying uncertainties based on expert opinion and in providing information on the dependence of the assessment result on the level of conservatism. In addition, it was also shown that sensitivity analysis could identify key parameters for reducing uncertainties associated with the overall assessment. The above information can be used to support the judgment process and guide the process of disposal system development in optimization of protection against
Linear Programming Problems for Generalized Uncertainty
Thipwiwatpotjana, Phantipa
2010-01-01
Uncertainty occurs when there is more than one realization that can represent a piece of information. This dissertation concerns only discrete realizations of an uncertainty. Different interpretations of an uncertainty and their relationships are addressed for the case where the uncertainty is not a probability of each realization. A well known model that can handle…
The idiopathic interstitial pneumonias: understanding key radiological features
Energy Technology Data Exchange (ETDEWEB)
Dixon, S. [Department of Radiology, Churchill Hospital, Old Road, Oxford OX3 7LJ (United Kingdom); Benamore, R., E-mail: Rachel.Benamore@orh.nhs.u [Department of Radiology, Churchill Hospital, Old Road, Oxford OX3 7LJ (United Kingdom)
2010-10-15
Many radiologists find it challenging to distinguish between the different idiopathic interstitial pneumonias (IIPs). The British Thoracic Society guidelines on interstitial lung disease (2008) recommend the formation of multidisciplinary meetings, with diagnoses made by combined radiological, pathological, and clinical findings. This review focuses on understanding typical and atypical radiological features on high-resolution computed tomography between the different IIPs, to help the radiologist determine when a confident diagnosis can be made and how to deal with uncertainty.
The idiopathic interstitial pneumonias: understanding key radiological features
International Nuclear Information System (INIS)
Dixon, S.; Benamore, R.
2010-01-01
Many radiologists find it challenging to distinguish between the different idiopathic interstitial pneumonias (IIPs). The British Thoracic Society guidelines on interstitial lung disease (2008) recommend the formation of multidisciplinary meetings, with diagnoses made by combined radiological, pathological, and clinical findings. This review focuses on understanding typical and atypical radiological features on high-resolution computed tomography between the different IIPs, to help the radiologist determine when a confident diagnosis can be made and how to deal with uncertainty.
Measurement uncertainty: Friend or foe?
Infusino, Ilenia; Panteghini, Mauro
2018-02-02
The definition and enforcement of a reference measurement system, based on the implementation of metrological traceability of patients' results to higher order reference methods and materials, together with a clinically acceptable level of measurement uncertainty, are fundamental requirements to produce accurate and equivalent laboratory results. The uncertainty associated with each step of the traceability chain should be governed to obtain a final combined uncertainty on clinical samples fulfilling the requested performance specifications. It is important that end-users (i.e., clinical laboratories) can know and verify how in vitro diagnostics (IVD) manufacturers have implemented the traceability of their calibrators and estimated the corresponding uncertainty. However, full information about traceability and combined uncertainty of calibrators is currently very difficult to obtain. Laboratory professionals should investigate the need to reduce the uncertainty of the higher order metrological references and/or to increase the precision of commercial measuring systems. Accordingly, the measurement uncertainty should not be considered a parameter to be calculated by clinical laboratories just to fulfil the accreditation standards, but it must become a key quality indicator to describe both the performance of an IVD measuring system and the laboratory itself. Copyright © 2018 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
International Nuclear Information System (INIS)
Mensing, R.W.
1985-01-01
This report proposes a method for comparing the effects of the uncertainty in probabilistic risk analysis (PRA) input parameters on the uncertainty in the predicted risks. The proposed method is applied to compare the effect of uncertainties in the descriptions of (1) the seismic hazard at a nuclear power plant site and (2) random variations in plant subsystem responses and component fragility on the uncertainty in the predicted probability of core melt. The PRA used is that developed by the Seismic Safety Margins Research Program
Uncertainty Characterization of Reactor Vessel Fracture Toughness
International Nuclear Information System (INIS)
Li, Fei; Modarres, Mohammad
2002-01-01
To perform fracture mechanics analysis of a reactor vessel, fracture toughness (K_Ic) at various temperatures is necessary. In a best-estimate approach, K_Ic uncertainties resulting from both lack of sufficient knowledge and randomness in some of the variables of K_Ic must be characterized. Although it may be argued that there is only one type of uncertainty, namely lack of perfect knowledge about the subject under study, in practice K_Ic uncertainties can be divided into two types: aleatory and epistemic. Aleatory uncertainty is very difficult, if not impossible, to reduce; epistemic uncertainty, on the other hand, can be practically reduced. The distinction between aleatory and epistemic uncertainties facilitates decision-making under uncertainty and allows for proper propagation of uncertainties in the computation process. Typically, epistemic uncertainties representing, for example, parameters of a model are sampled (to generate a 'snapshot', a single value of the parameters), while the totality of aleatory uncertainties is carried through the calculation as available. In this paper an approach to account for these two types of uncertainties associated with K_Ic is described. (authors)
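The sampling scheme described here, an epistemic "snapshot" drawn in an outer loop with the aleatory scatter carried through in full inside, can be sketched as a nested Monte Carlo loop. The toy toughness model, parameter ranges and the percentile of interest below are invented for illustration; they are not the paper's K_Ic characterization.

```python
import random

random.seed(0)

# Epistemic: the poorly known population mean of the toughness (MPa*sqrt(m)).
# Aleatory: specimen-to-specimen scatter around that mean.

def sample_kic(mean_kic, scatter_sd):
    # Aleatory draw around the current epistemic snapshot.
    return random.gauss(mean_kic, scatter_sd)

n_epistemic = 200   # outer loop: one snapshot of the epistemic parameter each
n_aleatory = 500    # inner loop: aleatory scatter carried through in full

fifth_percentiles = []
for _ in range(n_epistemic):
    mean_kic = random.uniform(90.0, 110.0)   # epistemic uncertainty on the mean
    draws = sorted(sample_kic(mean_kic, 15.0) for _ in range(n_aleatory))
    fifth_percentiles.append(draws[n_aleatory // 20])   # ~5th percentile

# The spread of the 5th percentile across snapshots shows how much of the
# output uncertainty is reducible (epistemic) in this toy setting.
lo = min(fifth_percentiles)
hi = max(fifth_percentiles)
```

Keeping the two loops separate is what allows epistemic and aleatory contributions to be reported, and reduced, independently.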
When we stay alone: Resist in uncertainty. Emancipation and empowerment practices
Directory of Open Access Journals (Sweden)
Ester Jordana Lluch
2013-03-01
Full Text Available In view of the uncertainty that surrounds us, the need arises to resist within the university without any diagnosis of the present. To do so, we can start from the critiques of two of the features that have characterized the modern university: autonomy and emancipation. We will try to glimpse what practices of resistance could take place here and now, critically re-elaborating both principles using the reflections of Jacques Rancière and Michel Foucault.
The role of models in managing the uncertainty of software-intensive systems
International Nuclear Information System (INIS)
Littlewood, Bev; Neil, Martin; Ostrolenk, Gary
1995-01-01
It is increasingly argued that uncertainty is an inescapable feature of the design and operational behaviour of software-intensive systems. This paper elaborates the role of models in managing such uncertainty, in relation to evidence and claims for dependability. Personal and group models are considered with regard to abstraction, consensus and corroboration. The paper focuses on the predictive property of models, arguing for the need for empirical validation of their trustworthiness through experimentation and observation. The impact on trustworthiness of human fallibility, formality of expression and expressiveness is discussed. The paper identifies two criteria for deciding the degree of trust to be placed in a model, and hence also for choosing between models, namely accuracy and informativeness. Finally, analogy and reuse are proposed as the only means by which empirical evidence can be established for models in software engineering
The Fundamental Uncertainty of Business: Real Options
Dyer, James S.
The purpose of this paper is to discuss the manner in which uncertainty is currently evaluated in business, with an emphasis on economic measures. In recent years, the accepted approach for the valuation of capital investment decisions has become one based on the theory of real options. From the standpoint of this workshop, the interesting aspect of real options is its focus on the flexibility of management to respond to changes in the environment as a feature of an alternative that has unique value, known as "option value." While this may not be surprising to most participants in this workshop, it does represent a radical change in traditional thinking about risk in business, where efforts have primarily been focused on the elimination of risk when possible.
Generalised Brown Clustering and Roll-up Feature Generation
DEFF Research Database (Denmark)
Derczynski, Leon; Chester, Sean
2016-01-01
active set size. Moreover, the generalisation permits a novel approach to feature selection from Brown clusters: We show that the standard approach of shearing the Brown clustering output tree at arbitrary bitlengths is lossy and that features should be chosen instead by rolling up Generalised Brown...
Courtney, H; Kirkland, J; Viguerie, P
1997-01-01
At the heart of the traditional approach to strategy lies the assumption that by applying a set of powerful analytic tools, executives can predict the future of any business accurately enough to allow them to choose a clear strategic direction. But what happens when the environment is so uncertain that no amount of analysis will allow us to predict the future? What makes for a good strategy in highly uncertain business environments? The authors, consultants at McKinsey & Company, argue that uncertainty requires a new way of thinking about strategy. All too often, they say, executives take a binary view: either they underestimate uncertainty to come up with the forecasts required by their companies' planning or capital-budgeting processes, or they overestimate it, abandon all analysis, and go with their gut instinct. The authors outline a new approach that begins by making a crucial distinction among four discrete levels of uncertainty that any company might face. They then explain how a set of generic strategies--shaping the market, adapting to it, or reserving the right to play at a later time--can be used in each of the four levels. And they illustrate how these strategies can be implemented through a combination of three basic types of actions: big bets, options, and no-regrets moves. The framework can help managers determine which analytic tools can inform decision making under uncertainty--and which cannot. At a broader level, it offers executives a discipline for thinking rigorously and systematically about uncertainty and its implications for strategy.
Skinner, Daniel J C; Rocks, Sophie A; Pollard, Simon J T
2016-12-01
A reliable characterisation of uncertainties can aid uncertainty identification during environmental risk assessments (ERAs). However, typologies can be implemented inconsistently, causing uncertainties to go unidentified. We present an approach based on nine structured elicitations, in which subject-matter experts, for pesticide risks to surface water organisms, validate and assess three dimensions of uncertainty: its level (the severity of uncertainty, ranging from determinism to ignorance); nature (whether the uncertainty is epistemic or aleatory); and location (the data source or area in which the uncertainty arises). Risk characterisation contains the highest median levels of uncertainty, associated with estimating, aggregating and evaluating the magnitude of risks. Regarding the locations in which uncertainty is manifest, data uncertainty is dominant in problem formulation, exposure assessment and effects assessment. This comprehensive description of uncertainty will enable risk analysts to prioritise the required phases, groups of tasks, or individual tasks within a risk analysis according to the highest levels of uncertainty, the potential for uncertainty to be reduced or quantified, or the types of location-based uncertainty, thus aiding uncertainty prioritisation during environmental risk assessments. In turn, it is expected to inform investment in uncertainty reduction or targeted risk management action. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
International Nuclear Information System (INIS)
Weisbi, C.R.; Oblow, E.M.; Ching, J.; White, J.E.; Wright, R.Q.; Drischler, J.
1975-08-01
Sensitivity analysis is applied to the study of an air transport benchmark calculation to quantify and distinguish between cross-section and method uncertainties. The boundary detector response was converged with respect to spatial and angular mesh size, P_l expansion of the scattering kernel, and the number and location of energy grid boundaries. The uncertainty in the detector response due to uncertainties in nuclear data is 17.0 percent (one standard deviation, not including uncertainties in energy and angular distribution) based upon the ENDF/B-IV 'error files' including correlations in energy and reaction type. Differences of approximately 6 percent can be attributed exclusively to differences in processing multigroup transfer matrices. Formal documentation of the PUFF computer program for the generation of multigroup covariance matrices is presented. (47 figures, 14 tables) (U.S.)
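The quoted 17.0 percent response uncertainty comes from folding sensitivities with the ENDF/B-IV covariance data; the general recipe is the "sandwich rule", var(R)/R^2 = s^T C s. A minimal numerical sketch, with a made-up three-group relative sensitivity vector and relative covariance matrix (the numbers are illustrative, not the benchmark's data):

```python
# Sandwich rule: relative response variance = s^T C s, where s holds the
# relative sensitivities of the detector response to each group cross
# section and C is their relative covariance matrix.

s = [0.8, -0.3, 0.5]   # hypothetical relative sensitivities, 3 groups

# Hypothetical relative covariance matrix: variances on the diagonal,
# energy/reaction correlations off it.
C = [
    [0.010, 0.002, 0.000],
    [0.002, 0.020, 0.001],
    [0.000, 0.001, 0.030],
]

var = 0.0
for i in range(3):
    for j in range(3):
        var += s[i] * C[i][j] * s[j]

rel_sd = var ** 0.5   # one-standard-deviation relative uncertainty
```

With these invented numbers the response uncertainty comes out near 12 percent; the 17.0 percent in the abstract is the same contraction evaluated with the real sensitivity profiles and ENDF/B-IV covariances.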
International Nuclear Information System (INIS)
Robouch, P.; Arana, G.; Eguskiza, M.; Etxebarria, N.
2000-01-01
The concepts of the Guide to the Expression of Uncertainty in Measurement (GUM) for chemical measurements and the recommendations of the Eurachem document 'Quantifying Uncertainty in Analytical Methods' are applied to set up the uncertainty budget for k0-NAA. The 'universally applicable spreadsheet technique' described by Kragten is applied to the basic k0-NAA equations for the computation of uncertainties. The variance components - individual standard uncertainties - highlight the contribution and importance of the different parameters to be taken into account. (author)
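Kragten's spreadsheet technique amounts to shifting each input by its standard uncertainty, one at a time, and combining the resulting changes in the measurand in quadrature. A minimal sketch for a generic measurand f = a*b/c; the model, inputs and uncertainties are illustrative stand-ins, not the k0-NAA equations:

```python
# Kragten's spreadsheet technique: perturb each input by its standard
# uncertainty, record the change in the result (one "column" per input),
# and combine the changes in quadrature.

def f(a, b, c):
    # Stand-in measurand model.
    return a * b / c

x = {"a": 2.0, "b": 5.0, "c": 4.0}     # best estimates (hypothetical)
u = {"a": 0.02, "b": 0.10, "c": 0.05}  # standard uncertainties (hypothetical)

y0 = f(**x)
contributions = {}
for name in x:
    shifted = dict(x)
    shifted[name] += u[name]
    contributions[name] = f(**shifted) - y0   # Kragten column for this input

u_y = sum(d * d for d in contributions.values()) ** 0.5
```

The per-input columns are exactly the "variance components" the abstract refers to: comparing their magnitudes shows which parameters dominate the budget (here the "b" column dominates).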
Uncertainty governance: an integrated framework for managing and communicating uncertainties
International Nuclear Information System (INIS)
Umeki, H.; Naito, M.; Takase, H.
2004-01-01
Treatment of uncertainty, or in other words, reasoning with imperfect information, is widely recognised as being of great importance within performance assessment (PA) of geological disposal, mainly because of the time scale of interest and the spatial heterogeneity that the geological environment exhibits. A wide range of formal methods have been proposed for the optimal processing of incomplete information. Many of these methods rely on the use of numerical information, the frequency-based concept of probability in particular, to handle the imperfections. However, taking quantitative information as a base for models that solve the problem of handling imperfect information merely creates another problem, i.e., how to provide the quantitative information. In many situations this second problem proves more resistant to solution, and in recent years several authors have explored particularly ingenious approaches based on well-founded methods such as Bayesian probability theory, possibility theory, and the Dempster-Shafer theory of evidence. Those methods, while drawing inspiration from quantitative methods, do not require the kind of complete numerical information required by quantitative methods. Instead they provide information that, though less precise than that provided by quantitative techniques, is often, if not sufficient, the best that could be achieved. Rather than searching for the best method for handling all imperfect information, our strategy for uncertainty management, that is, the recognition and evaluation of uncertainties associated with PA followed by the planning and implementation of measures to reduce them, is to use whichever method best fits the problem at hand. Such an eclectic position leads naturally to integration of the different formalisms. While uncertainty management based on the combination of semi-quantitative methods forms an important part of our framework for uncertainty governance, it only solves half of the problem
Verification of uncertainty budgets
DEFF Research Database (Denmark)
Heydorn, Kaj; Madsen, B.S.
2005-01-01
, and therefore it is essential that the applicability of the overall uncertainty budget to actual measurement results be verified on the basis of current experimental data. This should be carried out by replicate analysis of samples taken in accordance with the definition of the measurand, but representing...... the full range of matrices and concentrations for which the budget is assumed to be valid. In this way the assumptions made in the uncertainty budget can be experimentally verified, both as regards sources of variability that are assumed negligible, and dominant uncertainty components. Agreement between...
Directory of Open Access Journals (Sweden)
Pitchaiah Mandava
Full Text Available OBJECTIVE: Clinical trial outcomes often involve an ordinal scale of subjective functional assessments but the optimal way to quantify results is not clear. In stroke, with the most commonly used scale, the modified Rankin Score (mRS), a range of scores ("Shift") is proposed as superior to dichotomization because of greater information transfer. The influence of known uncertainties in mRS assessment has not been quantified. We hypothesized that errors caused by uncertainties could be quantified by applying information theory. Using Shannon's model, we quantified errors of the "Shift" compared to dichotomized outcomes using published distributions of mRS uncertainties and applied this model to clinical trials. METHODS: We identified 35 randomized stroke trials that met inclusion criteria. Each trial's mRS distribution was multiplied with the noise distribution from published mRS inter-rater variability to generate an error percentage for "shift" and dichotomized cut-points. For the SAINT I neuroprotectant trial, considered positive by "shift" mRS, while the larger follow-up SAINT II trial was negative, we recalculated the sample size required if classification uncertainty was taken into account. RESULTS: Considering the full mRS range, the error rate was 26.1%±5.31 (mean±SD). Error rates were lower for all dichotomizations tested using cut-points (e.g. mRS 1; 6.8%±2.89); overall p<0.001. Taking errors into account, SAINT I would have required 24% more subjects than were randomized. CONCLUSION: We show that when uncertainty in assessments is considered, the lowest error rates are with dichotomization. While using the full range of mRS is conceptually appealing, a gain of information is counter-balanced by a decrease in reliability. The resultant errors need to be considered since sample size may otherwise be underestimated. In principle, we have outlined an approach to error estimation for any condition in which there are uncertainties in outcome assessment. We
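The core calculation, passing an outcome distribution through an inter-rater noise model and comparing error rates for the full scale against a dichotomized cut-point, can be sketched as follows. The mRS distribution and the simple ±1-grade slip model below are invented for illustration; the paper uses published inter-rater variability data.

```python
# Hypothetical trial distribution over mRS grades 0..6 and a toy noise
# model: a grade is misread as a neighbouring grade with probability
# p_slip, split equally both ways.

p_true = [0.10, 0.15, 0.15, 0.20, 0.20, 0.10, 0.10]  # mRS 0..6 (invented)
p_slip = 0.15                                         # invented noise level

def observed_grade_probs(k):
    """P(observed grade | true grade k) under the +/-1 slip model."""
    probs = [0.0] * 7
    probs[k] = 1.0 - p_slip
    for nb in (k - 1, k + 1):
        if 0 <= nb <= 6:
            probs[nb] += p_slip / 2
        else:
            probs[k] += p_slip / 2   # slips off the scale stay put
    return probs

# Full-scale ("shift") error: any observed grade differing from the true one.
full_error = sum(p_true[k] * (1.0 - observed_grade_probs(k)[k])
                 for k in range(7))

# Dichotomized (mRS <= 1 vs >= 2) error: only slips across the cut count.
cut = 1
dich_error = 0.0
for k in range(7):
    probs = observed_grade_probs(k)
    wrong = sum(probs[g] for g in range(7) if (g <= cut) != (k <= cut))
    dich_error += p_true[k] * wrong
```

Even this crude model reproduces the qualitative finding: almost every slip is an error on the full scale, but only slips that cross the cut-point count against the dichotomized outcome, so its error rate is far lower.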
Nonlinear parameter estimation in inviscid compressible flows in presence of uncertainties
International Nuclear Information System (INIS)
Jemcov, A.; Mathur, S.
2004-01-01
The focus of this paper is on the formulation and solution of inverse problems of parameter estimation using algorithmic differentiation. The inverse problem formulated here seeks to determine the input parameters that minimize a least squares functional with respect to certain target data. The formulation allows for uncertainty in the target data by considering the least squares functional in a stochastic basis described by the covariance of the target data. Furthermore, to allow for robust design, the formulation also accounts for uncertainties in the input parameters. This is achieved using the method of propagation of uncertainties using the directional derivatives of the output parameters with respect to unknown parameters. The required derivatives are calculated simultaneously with the solution using generic programming exploiting the template and operator overloading features of the C++ language. The methodology described here is general and applicable to any numerical solution procedure for any set of governing equations but for the purpose of this paper we consider a finite volume solution of the compressible Euler equations. In particular, we illustrate the method for the case of supersonic flow in a duct with a wedge. The parameter to be determined is the inlet Mach number and the target data is the axial component of velocity at the exit of the duct. (author)
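The derivative machinery described here relies on operator overloading; the same idea can be sketched in Python with dual numbers (the abstract's implementation uses C++ templates against the Euler equations; the scalar "solver" below is a stand-in, and the input uncertainty is invented):

```python
# Minimal forward-mode algorithmic differentiation via operator
# overloading: each Dual carries a value and a directional derivative.

class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val,
                    self.dot * o.val + self.val * o.dot)   # product rule
    __rmul__ = __mul__

def exit_velocity(mach_in):
    # Stand-in for a flow solve: a smooth function of the inlet Mach number.
    return 3.0 * mach_in * mach_in + 2.0 * mach_in + 1.0

m = Dual(2.0, 1.0)             # seed: d(mach_in)/d(mach_in) = 1
out = exit_velocity(m)
value, derivative = out.val, out.dot

# First-order propagation of an (invented) input uncertainty u_m = 0.01:
u_out = abs(derivative) * 0.01
```

The derivative comes out alongside the function value in a single evaluation, which is exactly what makes the robust-design propagation in the abstract cheap: no separate finite-difference solves are needed.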
Entropic uncertainty relations-a survey
International Nuclear Information System (INIS)
Wehner, Stephanie; Winter, Andreas
2010-01-01
Uncertainty relations play a central role in quantum mechanics. Entropic uncertainty relations in particular have gained significant importance within quantum information, providing the foundation for the security of many quantum cryptographic protocols. Yet, little is known about entropic uncertainty relations with more than two measurement settings. In the present survey, we review known results and open questions.
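For the two-measurement case the best-known entropic uncertainty relation is the Maassen-Uffink bound, H(X) + H(Z) >= -2*log2(max |<x_i|z_j>|). A quick numerical check for a qubit measured in the computational (Z) and Hadamard (X) bases; the state is an arbitrary example:

```python
import math

def h(probs):
    # Shannon entropy in bits.
    return -sum(p * math.log2(p) for p in probs if p > 0)

# |psi> = cos(t)|0> + sin(t)|1>, t chosen arbitrarily.
t = 0.3
a, b = math.cos(t), math.sin(t)

pz = [a * a, b * b]                      # Z-basis outcome probabilities
plus = (a + b) / math.sqrt(2)            # <+|psi>
minus = (a - b) / math.sqrt(2)           # <-|psi>
px = [plus * plus, minus * minus]        # X-basis outcome probabilities

# Maximum overlap between the two bases: |<z_i|x_j>| = 1/sqrt(2),
# so the Maassen-Uffink bound is exactly 1 bit.
bound = -2 * math.log2(1 / math.sqrt(2))

lhs = h(pz) + h(px)                      # must be >= bound for every state
```

Mutually unbiased bases such as these give the strongest possible bound (1 bit for a qubit); for this state the entropy sum comes out around 1.18 bits, comfortably above it.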
Propagation of dynamic measurement uncertainty
International Nuclear Information System (INIS)
Hessling, J P
2011-01-01
The time-dependent measurement uncertainty has been evaluated in a number of recent publications, starting from a known uncertain dynamic model. This could be defined as the 'downward' propagation of uncertainty from the model to the targeted measurement. The propagation of uncertainty 'upward' from the calibration experiment to a dynamic model traditionally belongs to system identification. The use of different representations (time, frequency, etc) is ubiquitous in dynamic measurement analyses. An expression of uncertainty in dynamic measurements is formulated for the first time in this paper independent of representation, joining upward as well as downward propagation. For applications in metrology, the high quality of the characterization may be prohibitive for any reasonably large and robust model to pass the whiteness test. This test is therefore relaxed by not directly requiring small systematic model errors in comparison to the randomness of the characterization. Instead, the systematic error of the dynamic model is propagated to the uncertainty of the measurand, analogously but differently to how stochastic contributions are propagated. The pass criterion of the model is thereby transferred from the identification to acceptance of the total accumulated uncertainty of the measurand. This increases the relevance of the test of the model as it relates to its final use rather than the quality of the calibration. The propagation of uncertainty hence includes the propagation of systematic model errors. For illustration, the 'upward' propagation of uncertainty is applied to determine if an appliance box is damaged in an earthquake experiment. In this case, relaxation of the whiteness test was required to reach a conclusive result
Uncertainty Visualization Using Copula-Based Analysis in Mixed Distribution Models.
Hazarika, Subhashis; Biswas, Ayan; Shen, Han-Wei
2018-01-01
Distributions are often used to model uncertainty in many scientific datasets. To preserve the correlation among the spatially sampled grid locations in the dataset, various standard multivariate distribution models have been proposed in visualization literature. These models treat each grid location as a univariate random variable which models the uncertainty at that location. Standard multivariate distributions (both parametric and nonparametric) assume that all the univariate marginals are of the same type/family of distribution. But in reality, different grid locations show different statistical behavior which may not be modeled best by the same type of distribution. In this paper, we propose a new multivariate uncertainty modeling strategy to address the needs of uncertainty modeling in scientific datasets. Our proposed method is based on a statistically sound multivariate technique called Copula, which makes it possible to separate the process of estimating the univariate marginals and the process of modeling dependency, unlike the standard multivariate distributions. The modeling flexibility offered by our proposed method makes it possible to design distribution fields which can have different types of distribution (Gaussian, Histogram, KDE etc.) at the grid locations, while maintaining the correlation structure at the same time. Depending on the results of various standard statistical tests, we can choose an optimal distribution representation at each location, resulting in a more cost efficient modeling without significantly sacrificing on the analysis quality. To demonstrate the efficacy of our proposed modeling strategy, we extract and visualize uncertain features like isocontours and vortices in various real world datasets. We also study various modeling criterion to help users in the task of univariate model selection.
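The key point, separating marginal estimation from dependence modeling, can be illustrated with a tiny Gaussian-copula sketch that couples two *different* marginal types (a Gaussian at one grid location, an exponential at another) through a single correlation structure, something no one standard multivariate distribution can do. The marginals, correlation and sample size below are invented for illustration, not the paper's estimator:

```python
import math
import random

random.seed(1)

rho = 0.8        # copula (latent Gaussian) correlation, hypothetical
n = 20000

def norm_cdf(z):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2)))

xs, ys = [], []
for _ in range(n):
    # Correlated latent normals...
    z1 = random.gauss(0, 1)
    z2 = rho * z1 + math.sqrt(1 - rho * rho) * random.gauss(0, 1)
    # ...mapped through each site's own marginal:
    xs.append(10.0 + 2.0 * z1)                 # site A: Gaussian N(10, 2)
    ys.append(-math.log(1.0 - norm_cdf(z2)))   # site B: exponential, mean 1

mean_x = sum(xs) / n
mean_y = sum(ys) / n
cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / n
sd_x = (sum((x - mean_x) ** 2 for x in xs) / n) ** 0.5
sd_y = (sum((y - mean_y) ** 2 for y in ys) / n) ** 0.5
corr = cov / (sd_x * sd_y)   # dependence survives the change of marginals
```

Each marginal keeps its own shape while the latent correlation carries the spatial dependence, which is the flexibility the abstract argues for.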
Kestens, Vikram; Bozatzidis, Vassili; De Temmerman, Pieter-Jan; Ramaye, Yannic; Roebben, Gert
2017-08-01
Particle tracking analysis (PTA) is an emerging technique suitable for size analysis of particles with external dimensions in the nano- and sub-micrometre scale range. Only limited attempts have so far been made to investigate and quantify the performance of the PTA method for particle size analysis. This article presents the results of a validation study during which selected colloidal silica and polystyrene latex reference materials with particle sizes in the range of 20 nm to 200 nm were analysed with NS500 and LM10-HSBF NanoSight instruments and video analysis software NTA 2.3 and NTA 3.0. Key performance characteristics such as working range, linearity, limit of detection, limit of quantification, sensitivity, robustness, precision and trueness were examined according to recommendations proposed by EURACHEM. A model for measurement uncertainty estimation following the principles described in ISO/IEC Guide 98-3 was used for quantifying random and systematic variations. For nominal 50 nm and 100 nm polystyrene and a nominal 80 nm silica reference materials, the relative expanded measurement uncertainties for the three measurands of interest, being the mode, median and arithmetic mean of the number-weighted particle size distribution, varied from about 10% to 12%. For the nominal 50 nm polystyrene material, the relative expanded uncertainty of the arithmetic mean of the particle size distributions increased up to 18% which was due to the presence of agglomerates. Data analysis was performed with software NTA 2.3 and NTA 3.0. The latter showed to be superior in terms of sensitivity and resolution.
Methods for Assessing Uncertainties in Climate Change, Impacts and Responses (Invited)
Manning, M. R.; Swart, R.
2009-12-01
Assessing the scientific uncertainties or confidence levels for the many different aspects of climate change is particularly important because of the seriousness of potential impacts and the magnitude of economic and political responses that are needed to mitigate climate change effectively. This has made the treatment of uncertainty and confidence a key feature in the assessments carried out by the Intergovernmental Panel on Climate Change (IPCC). Because climate change is very much a cross-disciplinary area of science, adequately dealing with uncertainties requires recognition of their wide range and different perspectives on assessing and communicating those uncertainties. The structural differences that exist across disciplines are often embedded deeply in the corresponding literature that is used as the basis for an IPCC assessment. The assessment of climate change science by the IPCC has from its outset tried to report the levels of confidence and uncertainty in the degree of understanding in both the underlying multi-disciplinary science and in projections for future climate. The growing recognition of the seriousness of this led to the formation of a detailed approach for consistent treatment of uncertainties in the IPCC’s Third Assessment Report (TAR) [Moss and Schneider, 2000]. However, in completing the TAR there remained some systematic differences between the disciplines raising concerns about the level of consistency. So further consideration of a systematic approach to uncertainties was undertaken for the Fourth Assessment Report (AR4). The basis for the approach used in the AR4 was developed at an expert meeting of scientists representing many different disciplines. This led to the introduction of a broader way of addressing uncertainties in the AR4 [Manning et al., 2004] which was further refined by lengthy discussions among many IPCC Lead Authors, for over a year, resulting in a short summary of a standard approach to be followed for that
Do Orthopaedic Surgeons Acknowledge Uncertainty?
Teunis, Teun; Janssen, Stein; Guitton, Thierry G; Ring, David; Parisien, Robert
2016-06-01
Much of the decision-making in orthopaedics rests on uncertain evidence. Uncertainty is therefore part of our normal daily practice, and yet physician uncertainty regarding treatment could diminish patients' health. It is not known if physician uncertainty is a function of the evidence alone or if other factors are involved. With added experience, uncertainty could be expected to diminish, but perhaps more influential are things like physician confidence, belief in the veracity of what is published, and even one's religious beliefs. In addition, it is plausible that the kind of practice a physician works in can affect the experience of uncertainty. Practicing physicians may not be immediately aware of these effects on how uncertainty is experienced in their clinical decision-making. We asked: (1) Does uncertainty and overconfidence bias decrease with years of practice? (2) What sociodemographic factors are independently associated with less recognition of uncertainty, in particular belief in God or other deity or deities, and how is atheism associated with recognition of uncertainty? (3) Do confidence bias (confidence that one's skill is greater than it actually is), degree of trust in the orthopaedic evidence, and degree of statistical sophistication correlate independently with recognition of uncertainty? We created a survey to establish an overall recognition of uncertainty score (four questions), trust in the orthopaedic evidence base (four questions), confidence bias (three questions), and statistical understanding (six questions). Seven hundred six members of the Science of Variation Group, a collaboration that aims to study variation in the definition and treatment of human illness, were approached to complete our survey. This group represents mainly orthopaedic surgeons specializing in trauma or hand and wrist surgery, practicing in Europe and North America, of whom the majority is involved in teaching. Approximately half of the group has more than 10 years
Uncertainty quantification in resonance absorption
International Nuclear Information System (INIS)
Williams, M.M.R.
2012-01-01
We assess the uncertainty in the resonance escape probability due to uncertainty in the neutron and radiation line widths for the first 21 resonances in 232Th as given by . Simulation, quadrature and polynomial chaos methods are used and the resonance data are assumed to obey a beta distribution. We find the uncertainty in the total resonance escape probability to be the equivalent, in reactivity, of 75–130 pcm. Also shown are pdfs of the resonance escape probability for each resonance and the variation of the uncertainty with temperature. The viability of the polynomial chaos expansion method is clearly demonstrated.
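The simulation route to such a result can be sketched with a single schematic resonance: give the neutron and radiation widths beta-distributed uncertainty and estimate the spread of a toy escape probability by Monte Carlo. The width scales, beta parameters and escape-probability model below are placeholders, not the 232Th evaluation:

```python
import math
import random

random.seed(2)

def escape_probability(gamma_n, gamma_g):
    # Schematic model only: an effective resonance integral that grows
    # with the capture share of the total width, then p = exp(-c * RI).
    resonance_integral = 10.0 * gamma_g / (gamma_g + gamma_n)
    return math.exp(-0.05 * resonance_integral)

samples = []
for _ in range(5000):
    # Widths in eV: nominal values scaled by a beta-distributed factor
    # spanning +/-10% (all numbers invented for illustration).
    gn = 0.002 * (0.9 + 0.2 * random.betavariate(4, 4))
    gg = 0.025 * (0.9 + 0.2 * random.betavariate(4, 4))
    samples.append(escape_probability(gn, gg))

mean_p = sum(samples) / len(samples)
sd_p = (sum((s - mean_p) ** 2 for s in samples) / len(samples)) ** 0.5
```

In the paper the same question is answered three ways (simulation, quadrature, polynomial chaos) over all 21 resonances, and the spread is converted to reactivity (pcm) rather than left as a probability.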
Van Uytven, E.; Willems, P.
2018-03-01
Climate change impact assessment on meteorological variables involves large uncertainties as a result of incomplete knowledge of future greenhouse gas concentrations and climate model physics, next to the inherent internal variability of the climate system. Given that the alteration in greenhouse gas concentrations is the driver of the change, one expects the impacts to be highly dependent on the considered greenhouse gas scenario (GHS). In this study, we denote this behavior as GHS sensitivity. Due to the climate-model-related uncertainties, this sensitivity is, at local scale, not always as strong as expected. This paper aims to study the GHS sensitivity and its contributing role to climate scenarios for a case study in Belgium. An ensemble of 160 CMIP5 climate model runs is considered and climate change signals are studied for precipitation accumulation, daily precipitation intensities and wet day frequencies. This was done for the different seasons of the year and the scenario periods 2011-2040, 2031-2060, 2051-2080 and 2071-2100. By means of variance decomposition, the total variance in the climate change signals was separated into the contribution of the differences in GHSs and the other model-related uncertainty sources. These contributions were found to depend on the variable and season. Following the time of emergence concept, the GHS uncertainty contribution is found to depend on the time horizon and increases over time. For the most distant time horizon (2071-2100), the climate model uncertainty accounts for the largest uncertainty contribution. The GHS differences explain up to 18% of the total variance in the climate change signals. The results point further at the importance of the climate model ensemble design, specifically the ensemble size and the combination of climate models upon which climate scenarios are based. The numerical noise, introduced at scales smaller than the skillful scale, e.g. at local scale, was not considered in this study.
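The variance decomposition used to split the total spread of the change signals into a between-GHS part and a model-related remainder is standard one-way ANOVA. A minimal sketch on invented signals (three hypothetical scenarios, four model runs each; the percentages are made up, not CMIP5 results):

```python
# Invented climate change signals (% change in seasonal precipitation),
# grouped by greenhouse gas scenario (GHS).
signals = {
    "RCP2.6": [4.0, 6.5, 3.0, 5.5],
    "RCP4.5": [5.0, 8.0, 4.5, 7.5],
    "RCP8.5": [7.0, 11.0, 6.0, 10.0],
}

all_vals = [v for runs in signals.values() for v in runs]
n = len(all_vals)
grand = sum(all_vals) / n

total_var = sum((v - grand) ** 2 for v in all_vals) / n

# Between-GHS variance: spread of the scenario means around the grand mean.
between = sum(
    len(runs) * ((sum(runs) / len(runs)) - grand) ** 2
    for runs in signals.values()
) / n
within = total_var - between        # model-related (within-scenario) spread

ghs_share = between / total_var     # fraction of variance explained by GHS
```

In this toy table the scenario differences explain under half the variance and the model spread dominates, the same qualitative picture the study reports for its most distant horizon.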
International Nuclear Information System (INIS)
Su Rui; Wang Ju; Chen Weiming; Zong Zihua; Zhao Honggang
2008-01-01
The CRP-GEORC concept model is an artificial geological disposal system for high-level radioactive waste. A sensitivity analysis and uncertainty simulation of the migration of the radionuclides Se-79 and I-129 in the far field of this system were conducted using the GoldSim code. The simulation results show that the variables describing the geological features and the characteristics of groundwater flow are sensitive variables of the whole geological disposal system. The uncertainties of the parameters have a remarkable influence on the simulation results. (authors)
International Nuclear Information System (INIS)
Campolina, Daniel de Almeida Magalhães
2015-01-01
There is uncertainty in all the components that comprise the model of a nuclear system. Assessing the impact of uncertainties in the simulation of fissionable material systems is essential for realistic calculations, which have been replacing conservative model calculations as computational power increases. The propagation of uncertainty in a simulation using a Monte Carlo code by sampling the input parameters is recent because of the huge computational effort required. By analyzing the uncertainty propagated to the effective neutron multiplication factor (k_eff), the effects of the sample size, the computational uncertainty, and the efficiency of a random number generator in representing the distributions that characterize physical uncertainty in a light water reactor were investigated. A program entitled GB_sample was implemented to enable the application of the random sampling method, which requires an automated process and robust statistical tools. The program was based on the black-box model, and the MCNPX code was used with parallel processing for the particle transport calculation. The uncertainties considered were taken from a benchmark experiment in which the effect on k_eff due to physical uncertainties is assessed through a conservative method. In this work, a script called GB_sample was implemented to automate the sampling-based method, use multiprocessing, and assure the necessary robustness. The possibility of improving the efficiency of the random sampling method by selecting distributions obtained from a random number generator was found, in order to obtain a better representation of the uncertainty figures. After convergence of the method is achieved, the best number of components to be sampled was determined in order to reduce the variance of the propagated uncertainty without increasing the computational time. It was also observed that if the sampling method is used to calculate the effect on k_eff due to physical uncertainties reported by
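The input-sampling scheme described above can be sketched end to end. The actual black box in the work is the MCNPX transport code; here a toy surrogate stands in for it, and the input distributions and coefficients are hypothetical:

```python
import numpy as np

# Hypothetical stand-in for the transport code: in the work the "black box"
# is MCNPX; this linear surrogate simply maps sampled inputs to a k_eff value.
def keff_surrogate(enrichment, density, rng):
    keff = 0.9 + 0.04 * enrichment + 0.02 * density
    # add statistical (computational) uncertainty of the Monte Carlo solver
    return keff + rng.normal(0.0, 2e-4)

rng = np.random.default_rng(1)
n = 500  # sample size; convergence should be checked by increasing n
enr = rng.normal(1.0, 0.01, n)   # assumed input pdfs (illustrative)
den = rng.normal(1.0, 0.005, n)
keffs = np.array([keff_surrogate(e, d, rng) for e, d in zip(enr, den)])
print(f"k_eff = {keffs.mean():.5f} +/- {keffs.std(ddof=1):.5f}")
```

The sample standard deviation of the outputs is the propagated physical-plus-computational uncertainty; in the real workflow each sample is a full transport run, which is what makes the method computationally expensive.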
Extraction and representation of common feature from uncertain facial expressions with cloud model.
Wang, Shuliang; Chi, Hehua; Yuan, Hanning; Geng, Jing
2017-12-01
Human facial expressions are a key ingredient in conveying an individual's innate emotions in communication. However, the variation of facial expressions affects the reliable identification of human emotions. In this paper, we present a cloud model to extract facial features for representing human emotion. First, the uncertainties in facial expressions are analyzed in the context of the cloud model. The feature extraction and representation algorithm is established using cloud generators. With the forward cloud generator, facial expression images can be re-generated as many times as we like for visually representing the three extracted features, and each feature plays a different role. The effectiveness of the computing model is tested on the Japanese Female Facial Expression database. Three common features are extracted from seven facial expression images. Finally, conclusions and remarks are given.
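The forward cloud generator at the heart of the method has a standard formulation for the normal cloud model: each drop is drawn from a normal distribution whose entropy is itself perturbed by the hyper-entropy. A minimal sketch (the parameter values are illustrative, not from the paper):

```python
import numpy as np

def forward_cloud(Ex, En, He, n, rng):
    """Forward normal cloud generator: given the three numerical features
    (Ex: expectation, En: entropy, He: hyper-entropy), generate n cloud drops
    and their certainty degrees."""
    Enp = rng.normal(En, He, n)                        # per-drop entropy
    x = rng.normal(Ex, np.abs(Enp))                    # cloud drops
    mu = np.exp(-((x - Ex) ** 2) / (2.0 * Enp ** 2))   # certainty degree
    return x, mu

rng = np.random.default_rng(2)
x, mu = forward_cloud(Ex=0.5, En=0.1, He=0.01, n=2000, rng=rng)
print(round(float(x.mean()), 2), round(float(mu.max()), 2))
```

The three features (Ex, En, He) are what the extraction algorithm recovers from uncertain expression data; the generator then re-creates as many samples as desired around them.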
Sensitivity and Uncertainty Analysis of IAEA CRP HTGR Benchmark Using McCARD
International Nuclear Information System (INIS)
Jang, Sang Hoon; Shim, Hyung Jin
2016-01-01
The benchmark consists of 4 phases, starting from local standalone modeling (Phase I) to the safety calculation of the coupled system in a transient situation (Phase IV). As a preliminary study of UAM on HTGR, this paper covers exercises 1 and 2 of Phase I, which define the unit cell and lattice geometry of the MHTGR-350 (General Atomics). The objective of these exercises is to quantify the uncertainty of the multiplication factor induced by perturbing nuclear data, as well as to analyze the specific features of HTGRs such as double heterogeneity and self-shielding treatment. The uncertainty quantification of the IAEA CRP HTGR UAM benchmarks was conducted using the first-order AWP method in McCARD. The uncertainty of the multiplication factor was estimated only for the microscopic cross-section perturbation. To reduce the computation time and avoid memory shortage, the recently implemented uncertainty analysis module in the MC Wielandt calculation was adjusted. The cross-section covariance data were generated by the NJOY/ERRORR module with ENDF/B-VII.1. The numerical results were compared with the evaluation results of the DeCART/MUSAD code system developed by KAERI. The IAEA CRP HTGR UAM benchmark problems were analyzed using McCARD. The numerical results were compared with Serpent for the eigenvalue calculation and with DeCART/MUSAD for the S/U analysis. In the eigenvalue calculation, inconsistencies were found in the results with the ENDF/B-VII.1 cross-section library, and they were traced to the effect of the thermal scattering data of graphite. As for the S/U analysis, the McCARD results matched well with DeCART/MUSAD, but showed some discrepancy in 238U capture regarding the implicit uncertainty.
Uncertainty estimation of the velocity model for the TrigNet GPS network
Hackl, Matthias; Malservisi, Rocco; Hugentobler, Urs; Wonnacott, Richard
2010-05-01
Satellite based geodetic techniques - above all GPS - provide an outstanding tool to measure crustal motions. They are widely used to derive geodetic velocity models that are applied in geodynamics to determine rotations of tectonic blocks, to localize active geological features, and to estimate rheological properties of the crust and the underlying asthenosphere. However, it is not a trivial task to derive GPS velocities and their uncertainties from positioning time series. In general, time series are assumed to be represented by linear models (sometimes offsets, annual, and semi-annual signals are included) and noise. It has been shown that models accounting only for white noise tend to underestimate the uncertainties of rates derived from long time series and that different colored noise components (flicker noise, random walk, etc.) need to be considered. However, a thorough error analysis including power spectra analyses and maximum likelihood estimates is quite demanding and is usually not carried out for every site; instead, the uncertainties are scaled by latitude-dependent factors. Analyses of the South African continuous GPS network TrigNet indicate that the scaled uncertainties overestimate the velocity errors. We therefore applied to the TrigNet time series a method similar to the Allan variance, which is commonly used in the estimation of clock uncertainties and is able to account for time-dependent probability density functions (colored noise). Finally, we compared these estimates to the results obtained by spectral analyses using CATS. Comparisons with synthetic data show that the noise can be represented quite well by a power law model in combination with a seasonal signal, in agreement with previous studies.
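The Allan-variance-style estimate mentioned above can be sketched for a position time series. For pure white noise the Allan variance falls off as 1/tau, while flicker or random-walk components flatten or grow with tau, which is what makes it a diagnostic for colored noise. A simplified non-overlapping estimator on synthetic data:

```python
import numpy as np

def allan_variance(y, dt, taus):
    """Non-overlapping Allan variance of a time series y sampled at interval dt,
    evaluated at the averaging times in taus."""
    out = []
    for tau in taus:
        m = int(round(tau / dt))                      # samples per block
        nblocks = len(y) // m
        means = y[: nblocks * m].reshape(nblocks, m).mean(axis=1)
        out.append(0.5 * np.mean(np.diff(means) ** 2))
    return np.array(out)

rng = np.random.default_rng(3)
white = rng.normal(0.0, 1.0, 4096)   # white noise: AVAR should decay ~ 1/tau
avar = allan_variance(white, dt=1.0, taus=[1, 4, 16, 64])
print(avar)
```

A flattening of this curve at long averaging times, instead of the 1/tau decay seen here, would indicate flicker or random-walk noise inflating the velocity uncertainty.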
Kennedy, J. J.; Rayner, N. A.; Smith, R. O.; Parker, D. E.; Saunby, M.
2011-07-01
Changes in instrumentation and data availability have caused time-varying biases in estimates of global and regional average sea surface temperature. The sizes of the biases arising from these changes are estimated and their uncertainties evaluated. The estimated biases and their associated uncertainties are largest during the period immediately following the Second World War, reflecting the rapid and incompletely documented changes in shipping and data availability at the time. Adjustments have been applied to reduce these effects in gridded data sets of sea surface temperature, and the results are presented as a set of interchangeable realizations. Uncertainties of estimated trends in global and regional average sea surface temperature due to bias adjustments since the Second World War are found to be larger than uncertainties arising from the choice of analysis technique, indicating that this is an important source of uncertainty in analyses of historical sea surface temperatures. Despite this, trends over the twentieth century remain qualitatively consistent.
Lemaire, Maurice
2014-01-01
Science is a quest for certainty, but lack of certainty is the driving force behind all of its endeavors. This book, specifically, examines the uncertainty of technological and industrial science. Uncertainty and Mechanics studies the concepts of mechanical design in an uncertain setting and explains engineering techniques for inventing cost-effective products. Though it references practical applications, this is a book about ideas and potential advances in mechanical science.
Needs of the CSAU uncertainty method
International Nuclear Information System (INIS)
Prosek, A.; Mavko, B.
2000-01-01
The use of best estimate codes for safety analysis requires quantification of the uncertainties. These uncertainties are inherently linked to the chosen safety analysis methodology. Worldwide, various methods have been proposed for this quantification. The purpose of this paper was to identify the needs of the Code Scaling, Applicability, and Uncertainty (CSAU) methodology and then to address those needs. The specific procedural steps were combined from other methods for uncertainty evaluation, and new tools and procedures were proposed. The uncertainty analysis approach and tools were then utilized for a confirmatory study. The uncertainty was quantified for the RELAP5/MOD3.2 thermal-hydraulic computer code. The results of the adapted CSAU approach applied to the small-break loss-of-coolant accident (SB LOCA) show that the adapted CSAU can be used for any thermal-hydraulic safety analysis with uncertainty evaluation. However, it was indicated that there are still some limitations in the CSAU approach that need to be resolved. (author)
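CSAU itself relied on response-surface techniques, but a common way of answering the "how many code runs" question in best-estimate-plus-uncertainty analyses of this kind is Wilks' nonparametric tolerance-limit formula, sketched here. The 95/95 target is the usual regulatory choice, not a value taken from this paper:

```python
def wilks_sample_size(gamma=0.95, beta=0.95):
    """Smallest n such that the maximum of n code runs bounds the gamma-quantile
    of the output with confidence beta (first-order, one-sided Wilks formula):
    1 - gamma**n >= beta."""
    n = 1
    while 1.0 - gamma ** n < beta:
        n += 1
    return n

print(wilks_sample_size())  # 59 runs for the classic 95/95 statement
```

The appeal of the formula is that the required number of runs is independent of how many uncertain input parameters the thermal-hydraulic code has.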
CEC/USDOE workshop on uncertainty analysis
International Nuclear Information System (INIS)
Elderkin, C.E.; Kelly, G.N.
1990-07-01
Any measured or assessed quantity contains uncertainty. The quantitative estimation of such uncertainty is becoming increasingly important, especially in assuring that safety requirements are met in the design, regulation, and operation of nuclear installations. The CEC/USDOE Workshop on Uncertainty Analysis, held in Santa Fe, New Mexico, on November 13 through 16, 1989, was organized jointly by the Commission of the European Communities' (CEC) Radiation Protection Research program, dealing with uncertainties throughout the field of consequence assessment, and DOE's Atmospheric Studies in Complex Terrain (ASCOT) program, concerned with the particular uncertainties in time- and space-variant transport and dispersion. The workshop brought together US and European scientists who have been developing or applying uncertainty analysis methodologies, conducted in a variety of contexts, often with incomplete knowledge of the work of others in this area. Thus, it was timely to exchange views and experience, identify limitations of approaches to uncertainty and possible improvements, and enhance the interface between developers and users of uncertainty analysis methods. Furthermore, the workshop considered the extent to which consistent, rigorous methods could be used in various applications within consequence assessment. 3 refs
Refinement of the concept of uncertainty.
Penrod, J
2001-04-01
To analyse the conceptual maturity of uncertainty; to develop an expanded theoretical definition of uncertainty; to advance the concept using methods of concept refinement; and to analyse congruency with the conceptualization of uncertainty presented in the theory of hope, enduring, and suffering. Uncertainty is of concern in nursing as people experience complex life events surrounding health. In an earlier nursing study that linked the concepts of hope, enduring, and suffering into a single theoretical scheme, a state best described as 'uncertainty' arose. This study was undertaken to explore how this conceptualization fit with the scientific literature on uncertainty and to refine the concept. Initially, a concept analysis using advanced methods described by Morse, Hupcey, Mitcham and colleagues was completed. The concept was determined to be partially mature. A theoretical definition was derived and techniques of concept refinement using the literature as data were applied. The refined concept was found to be congruent with the concept of uncertainty that had emerged in the model of hope, enduring and suffering. Further investigation is needed to explore the extent of probabilistic reasoning and the effects of confidence and control on feelings of uncertainty and certainty.
Zatarain Salazar, Jazmin; Reed, Patrick M.; Quinn, Julianne D.; Giuliani, Matteo; Castelletti, Andrea
2017-11-01
Reservoir operations are central to our ability to manage river basin systems serving conflicting multi-sectoral demands under increasingly uncertain futures. These challenges motivate the need for new solution strategies capable of effectively and efficiently discovering the multi-sectoral tradeoffs that are inherent to alternative reservoir operation policies. Evolutionary many-objective direct policy search (EMODPS) is gaining importance in this context due to its capability of addressing multiple objectives and its flexibility in incorporating multiple sources of uncertainties. This simulation-optimization framework has high potential for addressing the complexities of water resources management, and it can benefit from current advances in parallel computing and meta-heuristics. This study contributes a diagnostic assessment of state-of-the-art parallel strategies for the auto-adaptive Borg Multi Objective Evolutionary Algorithm (MOEA) to support EMODPS. Our analysis focuses on the Lower Susquehanna River Basin (LSRB) system where multiple sectoral demands from hydropower production, urban water supply, recreation and environmental flows need to be balanced. Using EMODPS with different parallel configurations of the Borg MOEA, we optimize operating policies over different size ensembles of synthetic streamflows and evaporation rates. As we increase the ensemble size, we increase the statistical fidelity of our objective function evaluations at the cost of higher computational demands. This study demonstrates how to overcome the mathematical and computational barriers associated with capturing uncertainties in stochastic multiobjective reservoir control optimization, where parallel algorithmic search serves to reduce the wall-clock time in discovering high quality representations of key operational tradeoffs. Our results show that emerging self-adaptive parallelization schemes exploiting cooperative search populations are crucial. Such strategies provide a
International Nuclear Information System (INIS)
Limperopoulos, G.J.
1995-01-01
This report presents an oil project valuation under uncertainty by means of two well-known financial techniques: the Capital Asset Pricing Model (CAPM) and the Black-Scholes option pricing formula. CAPM gives a linear positive relationship between expected rate of return and risk but does not take into consideration the aspect of flexibility, which is crucial for an irreversible investment such as an oil project. Introducing investment decision flexibility by using real options can increase the oil project value substantially. Some simple tests of the importance of stock market uncertainty for oil investments are performed. Uncertainty in stock returns is correlated with aggregate product market uncertainty according to Pindyck (1991). The results of the tests are not satisfactory due to the short data series, but introducing two other explanatory variables, the interest rate and Gross Domestic Product, improves the results. 36 refs., 18 figs., 6 tabs
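The Black-Scholes formula referenced above is standard and can be sketched directly; the input numbers below are illustrative, not taken from the report:

```python
from math import erf, exp, log, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S, K, r, sigma, T):
    """Black-Scholes value of a European call: S spot price, K strike,
    r risk-free rate, sigma volatility, T years to maturity."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# at-the-money example; higher sigma (uncertainty) raises the option value
print(round(black_scholes_call(100, 100, 0.05, 0.2, 1.0), 2))  # ~10.45
```

Treating the option to delay or abandon an oil project as such an option is what lets uncertainty add, rather than only subtract, project value in the real-options view.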
Characterizing Epistemic Uncertainty for Launch Vehicle Designs
Novack, Steven D.; Rogers, Jim; Hark, Frank; Al Hassan, Mohammad
2016-01-01
NASA Probabilistic Risk Assessment (PRA) has the task of estimating the aleatory (randomness) and epistemic (lack of knowledge) uncertainty of launch vehicle loss of mission and crew risk and communicating the results. Launch vehicles are complex engineered systems designed with sophisticated subsystems that are built to work together to accomplish mission success. Some of these systems or subsystems are in the form of heritage equipment, while some have never been previously launched. For these cases, characterizing the epistemic uncertainty is of foremost importance, and it is anticipated that the epistemic uncertainty of a modified launch vehicle design would be greater than that of a design of well understood heritage equipment. For reasons that will be discussed, standard uncertainty propagation methods using Monte Carlo simulation produce counterintuitive results and significantly underestimate epistemic uncertainty for launch vehicle models. Furthermore, standard PRA methods such as Uncertainty-Importance analyses, used to identify components that are significant contributors to uncertainty, are rendered obsolete since sensitivity to uncertainty changes is not reflected in propagation of uncertainty using Monte Carlo methods. This paper provides a basis for understanding the underestimation of uncertainty in complex systems and especially, due to nuances of launch vehicle logic, in launch vehicles. It then suggests several alternative methods for estimating uncertainty and provides examples of estimation results. Lastly, the paper shows how to implement an Uncertainty-Importance analysis using one alternative approach, describes the results, and suggests ways to reduce epistemic uncertainty by focusing on additional data or testing of selected components.
Representative volume element size of a polycrystalline aggregate with embedded short crack
International Nuclear Information System (INIS)
Simonovski, I.; Cizelj, L.
2007-01-01
A random polycrystalline aggregate model is proposed for evaluation of the representative volume element (RVE) size of a 316L stainless steel with an embedded surface crack. RVE size is important since it defines the size of a specimen where the influence of local microstructural features averages out, resulting in the same macroscopic response for geometrically similar specimens. On the other hand, the macroscopic responses of specimens smaller than the RVE will, due to the microstructural features, differ significantly. Different sizes and orientations of grains, inclusions, voids, etc., are examples of such microstructural features. If a specimen size is above the RVE size, classical continuum mechanics can be applied. On the other hand, advanced material models should be used for specimens smaller than the RVE. This paper proposes one such model, where random size, shape and orientation of grains are explicitly modeled. A crystal plasticity constitutive model is used to account for slip in the grains. The RVE size is estimated by calculating the crack tip opening displacements of aggregates with different grain numbers. Progressively larger numbers of grains are included in the aggregates until the crack tip displacements for two consecutive aggregates of increasing size differ by less than 1%. At this point the model has reached the RVE size. (author)
Uncertainty Analyses and Strategy
International Nuclear Information System (INIS)
Kevin Coppersmith
2001-01-01
The DOE identified a variety of uncertainties, arising from different sources, during its assessment of the performance of a potential geologic repository at the Yucca Mountain site. In general, the number and detail of process models developed for the Yucca Mountain site, and the complex coupling among those models, make the direct incorporation of all uncertainties difficult. The DOE has addressed these issues in a number of ways, using an approach to uncertainties that is focused on producing a defensible evaluation of the performance of a potential repository. The treatment of uncertainties oriented toward defensible assessments has led to analyses and models with so-called "conservative" assumptions and parameter bounds, where conservative implies lower performance than might be demonstrated with a more realistic representation. The varying maturity of the analyses and models, and the uneven level of data availability, result in total-system-level analyses with a mix of realistic and conservative estimates (for both probabilistic representations and single values). That is, some inputs have realistically represented uncertainties, and others are conservatively estimated or bounded. However, this approach is consistent with the "reasonable assurance" approach to compliance demonstration, which was called for in the U.S. Nuclear Regulatory Commission's (NRC) proposed 10 CFR Part 63 regulation (64 FR 8640 [DIRS 101680]). A risk analysis that includes conservatism in the inputs will result in conservative risk estimates. Therefore, the approach taken for the Total System Performance Assessment for the Site Recommendation (TSPA-SR) provides a reasonable representation of processes and conservatism for purposes of site recommendation. However, mixing unknown degrees of conservatism in models and parameter representations reduces the transparency of the analysis and makes the development of coherent and consistent probability statements about projected repository
Energy Technology Data Exchange (ETDEWEB)
Makarov, Yuri V.; Huang, Zhenyu; Etingov, Pavel V.; Ma, Jian; Guttromson, Ross T.; Subbarao, Krishnappa; Chakrabarti, Bhujanga B.
2010-01-01
The power system balancing process, which includes the scheduling, real time dispatch (load following) and regulation processes, is traditionally based on deterministic models. Since the conventional generation needs time to be committed and dispatched to a desired megawatt level, the scheduling and load following processes use load and wind and solar power production forecasts to achieve future balance between the conventional generation and energy storage on the one side, and system load, intermittent resources (such as wind and solar generation), and scheduled interchange on the other side. Although in real life the forecasting procedures imply some uncertainty around the load and wind/solar forecasts (caused by forecast errors), only their mean values are actually used in the generation dispatch and commitment procedures. Since the actual load and intermittent generation can deviate from their forecasts, it becomes increasingly unclear (especially with the increasing penetration of renewable resources) whether the system would actually be able to meet the conventional generation requirements within the look-ahead horizon, what additional balancing efforts would be needed as we get closer to the real time, and what additional costs would be incurred by those needs. To improve the system control performance characteristics, maintain system reliability, and minimize expenses related to the system balancing functions, it becomes necessary to incorporate the predicted uncertainty ranges into the scheduling, load following, and, to some extent, into the regulation processes. It is also important to address the uncertainty problem comprehensively by taking all sources of uncertainty (load, intermittent generation, generators’ forced outages, etc.) into consideration. All aspects of uncertainty such as the imbalance size (which is the same as capacity needed to mitigate the imbalance) and generation ramping requirement must be taken into account. The latter
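A minimal sketch of turning forecast-error distributions into a predicted uncertainty range for the balancing requirement follows. The error standard deviations are made-up placeholders, and the real methodology also accounts for forced outages and ramping requirements:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 10000                              # scenarios of forecast errors (assumed pdfs)
load_err = rng.normal(0.0, 300.0, n)   # MW, load forecast error
wind_err = rng.normal(0.0, 200.0, n)   # MW, wind forecast error

# net imbalance = extra capacity needed relative to the deterministic schedule
imbalance = load_err + wind_err
lo, hi = np.percentile(imbalance, [2.5, 97.5])
print(f"95% balancing range: {lo:.0f} .. {hi:.0f} MW")
```

Instead of dispatching to the mean forecast alone, the scheduler can then carry enough flexible capacity to cover the chosen percentile band of the combined error distribution.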
International Nuclear Information System (INIS)
Wilson, G.E.
1992-01-01
The Analytic Hierarchy Process (AHP) has been used to help determine the importance of components and phenomena in thermal-hydraulic safety analyses of nuclear reactors. The AHP results are based, in part, on expert opinion. Therefore, it is prudent to evaluate the uncertainty of the AHP ranks of importance. Prior applications have addressed uncertainty with experimental data comparisons and bounding sensitivity calculations. These methods work well when a sufficient experimental data base exists to justify the comparisons. However, in the case of limited or no experimental data, the size of the uncertainty is normally made conservatively large. Accordingly, the author has taken another approach: that of performing a statistically based uncertainty analysis. The new work is based on prior evaluations of the importance of components and phenomena in the thermal-hydraulic safety analysis of the Advanced Neutron Source Reactor (ANSR), a new facility now in the design phase. The uncertainty during large break loss of coolant and decay heat removal scenarios is estimated by assigning a probability distribution function (pdf) to the potential error in the initial expert estimates of pair-wise importance between the components. Using a Monte Carlo sampling technique, the error pdfs are propagated through the AHP software solutions to determine a pdf of uncertainty in the system-wide importance of each component. To enhance the generality of the results, a study of one other problem having a different number of elements is reported, as are the effects of a larger assumed pdf error in the expert ranks. Validation of the Monte Carlo sample size and repeatability is also documented
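The Monte Carlo propagation through the AHP can be sketched by perturbing the expert pairwise-comparison matrix (while preserving its reciprocal symmetry) and re-solving for the principal-eigenvector weights each time. The 3x3 matrix and the lognormal error size below are illustrative, not the ANSR values:

```python
import numpy as np

def ahp_weights(A, iters=200):
    """Principal-eigenvector importance weights of a pairwise comparison
    matrix, computed by power iteration."""
    w = np.ones(A.shape[0]) / A.shape[0]
    for _ in range(iters):
        w = A @ w
        w /= w.sum()
    return w

# expert pairwise importances (illustrative 3-component example)
A = np.array([[1.0,   3.0,   5.0],
              [1/3.0, 1.0,   2.0],
              [1/5.0, 1/2.0, 1.0]])

rng = np.random.default_rng(5)
samples = []
for _ in range(1000):  # perturb the upper triangle, keep reciprocal symmetry
    P = A.copy()
    for i in range(3):
        for j in range(i + 1, 3):
            P[i, j] = A[i, j] * rng.lognormal(0.0, 0.2)
            P[j, i] = 1.0 / P[i, j]
    samples.append(ahp_weights(P))
samples = np.array(samples)
print(samples.mean(axis=0).round(3), samples.std(axis=0).round(3))
```

The spread of each weight across samples is the pdf of uncertainty in that component's system-wide importance.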
Energy Technology Data Exchange (ETDEWEB)
Makarov, Yuri V.; Huang, Zhenyu; Etingov, Pavel V.; Ma, Jian; Guttromson, Ross T.; Subbarao, Krishnappa; Chakrabarti, Bhujanga B.
2010-09-01
The power system balancing process, which includes the scheduling, real time dispatch (load following) and regulation processes, is traditionally based on deterministic models. Since the conventional generation needs time to be committed and dispatched to a desired megawatt level, the scheduling and load following processes use load and wind power production forecasts to achieve future balance between the conventional generation and energy storage on the one side, and system load, intermittent resources (such as wind and solar generation) and scheduled interchange on the other side. Although in real life the forecasting procedures imply some uncertainty around the load and wind forecasts (caused by forecast errors), only their mean values are actually used in the generation dispatch and commitment procedures. Since the actual load and intermittent generation can deviate from their forecasts, it becomes increasingly unclear (especially with the increasing penetration of renewable resources) whether the system would actually be able to meet the conventional generation requirements within the look-ahead horizon, what additional balancing efforts would be needed as we get closer to the real time, and what additional costs would be incurred by those needs. In order to improve the system control performance characteristics, maintain system reliability, and minimize expenses related to the system balancing functions, it becomes necessary to incorporate the predicted uncertainty ranges into the scheduling, load following, and, to some extent, into the regulation processes. It is also important to address the uncertainty problem comprehensively, by taking all sources of uncertainty (load, intermittent generation, generators’ forced outages, etc.) into consideration. All aspects of uncertainty such as the imbalance size (which is the same as capacity needed to mitigate the imbalance) and generation ramping requirement must be taken into account. The latter unique
Rains, Stephen A; Tukachinsky, Riva
2015-01-01
Uncertainty management theory (UMT; Brashers, 2001, 2007) is rooted in the assumption that, as opposed to being inherently negative, health-related uncertainty is appraised for its meaning. Appraisals influence subsequent behaviors intended to manage uncertainty, such as information seeking. This study explores the connections among uncertainty, appraisal, and information-seeking behavior proposed in UMT. A laboratory study was conducted in which participants (N = 157) were primed to feel and desire more or less uncertainty about skin cancer and were given the opportunity to search for skin cancer information using the World Wide Web. The results show that desired uncertainty level predicted appraisal intensity, and appraisal intensity predicted information-seeking depth, although the latter relationship was in the opposite direction of what was expected.
International Nuclear Information System (INIS)
Silva, T.A. da
1988-01-01
The comparison between the uncertainty methods recommended by the International Atomic Energy Agency (IAEA) and the International Committee for Weights and Measures (CIPM) is shown, for the calibration of clinical dosimeters in the Secondary Standard Dosimetry Laboratory (SSDL). (C.G.C.) [pt
Uncertainty Evaluation of Reactivity Coefficients for a large advanced SFR Core Design
International Nuclear Information System (INIS)
Khamakhem, Wassim; Rimpault, Gerald
2008-01-01
Sodium Cooled Fast Reactors are currently being reshaped in order to meet Generation IV goals on economics, safety and reliability, sustainability and proliferation resistance. Recent studies have led to large SFR cores for 3600 MWth power plants, cores which exhibit interesting features. The designs have had to balance competing aspects such as sustainability and safety characteristics. Sustainability in neutronic terms is translated into a positive breeding gain, and safety into rather low Na void reactivity effects. The studies have been done on two SFR concepts using oxide and carbide fuels. Sensitivity theory within the ERANOS deterministic code system has been used. Calculations have been performed with different sodium evaluations - JEF2.2, ERALIB-1 and the most recent JEFF3.1 and ENDF/B-VII - in order to make a broad comparison. Values for the Na void reactivity effect exhibit differences as large as 14% when using the different sodium libraries. Uncertainties due to nuclear data on the reactivity coefficients were evaluated with BOLNA variance-covariance data; the Na void effect uncertainties are near 12% at 1σ. Since the uncertainties are far beyond the target accuracy for a design achieving high performance, two directions are envisaged: the first is to perform new differential measurements; the second is to use integral experiments to effectively improve the nuclear data set and its uncertainties, as was done in the past with ERALIB1. (authors)
Alonso, Ariel; Laenen, Annouschka
2013-05-01
Laenen, Alonso, and Molenberghs (2007) and Laenen, Alonso, Molenberghs, and Vangeneugden (2009) proposed a method to assess the reliability of rating scales in a longitudinal context. The methodology is based on hierarchical linear models, and reliability coefficients are derived from the corresponding covariance matrices. However, finding a good parsimonious model to describe complex longitudinal data is a challenging task. Frequently, several models fit the data equally well, raising the problem of model selection uncertainty. When model uncertainty is high, one may resort to model averaging, where inferences are based not on one but on an entire set of models. We explored the use of different model building strategies, including model averaging, in reliability estimation. We found that the approach introduced by Laenen et al. (2007, 2009), combined with some of these strategies, may yield meaningful results in the presence of high model selection uncertainty and when all models are misspecified, insofar as some of them manage to capture the most salient features of the data. Nonetheless, when all models omit prominent regularities in the data, misleading results may be obtained. The main ideas are further illustrated on a case study in which the reliability of the Hamilton Anxiety Rating Scale is estimated. Importantly, the ambit of model selection uncertainty and model averaging transcends the specific setting studied in the paper and may be of interest in other areas of psychometrics. © 2012 The British Psychological Society.
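Model averaging over candidate covariance structures can be sketched with Akaike weights, one common weighting scheme for averaging inferences across a model set. The AIC values and reliability estimates below are hypothetical:

```python
import numpy as np

def akaike_weights(aics):
    """Akaike weights: relative support for each candidate model,
    computed from AIC differences to the best model."""
    d = np.asarray(aics, dtype=float) - np.min(aics)
    w = np.exp(-0.5 * d)
    return w / w.sum()

# hypothetical candidate covariance models and their reliability estimates
aics = [1002.3, 1001.1, 1005.8]
reliabilities = np.array([0.78, 0.81, 0.74])
w = akaike_weights(aics)
averaged = float(w @ reliabilities)
print(w.round(3), round(averaged, 3))
```

The averaged reliability down-weights poorly supported models instead of betting everything on a single selected one, which is the protection model averaging offers under high selection uncertainty.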
Uncertainty quantification theory, implementation, and applications
Smith, Ralph C
2014-01-01
The field of uncertainty quantification is evolving rapidly because of increasing emphasis on models that require quantified uncertainties for large-scale applications, novel algorithm development, and new computational architectures that facilitate implementation of these algorithms. Uncertainty Quantification: Theory, Implementation, and Applications provides readers with the basic concepts, theory, and algorithms necessary to quantify input and response uncertainties for simulation models arising in a broad range of disciplines. The book begins with a detailed discussion of applications where uncertainty quantification is critical for both scientific understanding and policy. It then covers concepts from probability and statistics, parameter selection techniques, frequentist and Bayesian model calibration, propagation of uncertainties, quantification of model discrepancy, surrogate model construction, and local and global sensitivity analysis. The author maintains a complementary web page where readers ca...
Bermejo-Moreno, Ivan; Campo, Laura; Larsson, Johan; Emory, Mike; Bodart, Julien; Palacios, Francisco; Iaccarino, Gianluca; Eaton, John
2013-11-01
We study the interaction between an oblique shock wave and the turbulent boundary layers inside a nearly-square duct by combining wall-modeled LES, 2D and 3D RANS simulations, targeting the experiment of Campo, Helmer & Eaton, 2012 (nominal conditions: M = 2.05, Reθ = 6,500). A primary objective is to quantify the effect of aleatory and epistemic uncertainties on the STBLI. Aleatory uncertainties considered include the inflow conditions (Mach number of the incoming air stream and thickness of the boundary layers) and perturbations of the duct geometry upstream of the interaction. The epistemic uncertainty under consideration focuses on the RANS turbulence model form by injecting perturbations in the Reynolds stress anisotropy in regions of the flow where the model assumptions (in particular, the Boussinesq eddy-viscosity hypothesis) may be invalid. These perturbations are then propagated through the flow solver into the solution. The uncertainty quantification (UQ) analysis is done through 2D and 3D RANS simulations, assessing the importance of the three-dimensional effects imposed by the nearly-square duct geometry. Wall-modeled LES are used to verify elements of the UQ methodology and to explore the flow features and physics of the STBLI for multiple shock strengths. Financial support from the United States Department of Energy under the PSAAP program is gratefully acknowledged.
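The aleatory part of such a UQ analysis amounts to sampling the uncertain inflow conditions and propagating them through the solver; a toy sketch with an assumed response surface standing in for the RANS/LES solver:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative aleatory inputs (the distributions are assumptions,
# not the experiment's actual tolerances): inflow Mach number and a
# normalized boundary-layer thickness parameter.
n = 10_000
mach = rng.normal(2.05, 0.01, n)
theta = rng.normal(6.5, 0.2, n)   # stands in for Re_theta / 1000

def qoi(m, t):
    """Toy surrogate for a quantity of interest (e.g. separation
    length): a placeholder response surface, NOT a flow solver."""
    return 1.8 * (m - 2.0) + 0.05 * t

samples = qoi(mach, theta)
# The spread of the QoI attributable to aleatory inflow uncertainty.
print(f"mean={samples.mean():.3f}, std={samples.std():.4f}")
```

In the actual study each sample would require a RANS evaluation, which is why surrogate models or small sample designs are typically used in place of brute-force Monte Carlo.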
Directory of Open Access Journals (Sweden)
Douglas A. Fynan
2016-06-01
Full Text Available The Gaussian process model (GPM) is a flexible surrogate model that can be used for nonparametric regression for multivariate problems. A unique feature of the GPM is that a prediction variance is automatically provided with the regression function. In this paper, we estimate the safety margin of a nuclear power plant by performing regression on the output of best-estimate simulations of a large-break loss-of-coolant accident with sampling of safety system configuration, sequence timing, technical specifications, and thermal hydraulic parameter uncertainties. The key aspect of our approach is that the GPM regression is only performed on the dominant input variables: the safety injection flow rate and the delay time for AC-powered pumps to start (the latter representing sequence timing uncertainty), providing a predictive model for the peak clad temperature during the reflood phase. Other uncertainties are interpreted as contributors to the measurement noise of the code output and are implicitly treated in the GPM in the noise variance term, providing local uncertainty bounds for the peak clad temperature. We discuss the applicability of the foregoing method to reduce the use of conservative assumptions in best estimate plus uncertainty (BEPU) and Level 1 probabilistic safety assessment (PSA) success criteria definitions while dealing with a large number of uncertainties.
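The core GPM machinery, a posterior mean plus an automatic prediction variance with a noise term absorbing the non-dominant uncertainties, can be sketched in a few lines; the data and kernel settings below are synthetic, not the study's actual inputs:

```python
import numpy as np

rng = np.random.default_rng(1)

# Training data: a hypothetical dominant input (think: normalized
# safety-injection flow rate) vs. a noisy code output (think:
# normalized peak clad temperature). Values are synthetic.
X = np.linspace(0, 1, 20)
y = np.sin(2 * np.pi * X) + rng.normal(0, 0.1, X.size)

def rbf(a, b, ell=0.15, var=1.0):
    """Squared-exponential covariance between point sets a and b."""
    return var * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

noise = 0.1**2  # noise-variance term absorbing the other uncertainties
K = rbf(X, X) + noise * np.eye(X.size)
Xs = np.linspace(0, 1, 5)        # prediction points
Ks = rbf(X, Xs)

# Standard GP posterior mean and variance at the prediction points.
alpha = np.linalg.solve(K, y)
mean = Ks.T @ alpha
var = rbf(Xs, Xs).diagonal() - np.einsum(
    "ij,ij->j", Ks, np.linalg.solve(K, Ks))
print(mean.round(2), np.sqrt(var).round(3))
```

The variance output is what gives the "local uncertainty bounds" described in the abstract: it grows away from the training data and floors out near the noise level where data are dense.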
Impact of dose-distribution uncertainties on rectal NTCP modeling I: Uncertainty estimates
International Nuclear Information System (INIS)
Fenwick, John D.; Nahum, Alan E.
2001-01-01
A trial of nonescalated conformal versus conventional radiotherapy treatment of prostate cancer has been carried out at the Royal Marsden NHS Trust (RMH) and Institute of Cancer Research (ICR), demonstrating a significant reduction in the rate of rectal bleeding reported for patients treated using the conformal technique. The relationship between planned rectal dose-distributions and incidences of bleeding has been analyzed, showing that the rate of bleeding falls significantly as the extent of the rectal wall receiving a planned dose-level of more than 57 Gy is reduced. Dose-distributions delivered to the rectal wall over the course of radiotherapy treatment inevitably differ from planned distributions, due to sources of uncertainty such as patient setup error, rectal wall movement and variation in the absolute rectal wall surface area. In this paper estimates of the differences between planned and treated rectal dose-distribution parameters are obtained for the RMH/ICR nonescalated conformal technique, working from a distribution of setup errors observed during the RMH/ICR trial, movement data supplied by Lebesque and colleagues derived from repeat CT scans, and estimates of rectal circumference variations extracted from the literature. Setup errors and wall movement are found to cause only limited systematic differences between mean treated and planned rectal dose-distribution parameter values, but introduce considerable uncertainties into the treated values of some dose-distribution parameters: setup errors lead to 22% and 9% relative uncertainties in the highly dosed fraction of the rectal wall and the wall average dose, respectively, with wall movement leading to 21% and 9% relative uncertainties. Estimates obtained from the literature of the uncertainty in the absolute surface area of the distensible rectal wall are of the order of 13%-18%. In a subsequent paper the impact of these uncertainties on analyses of the relationship between incidences of bleeding
A Bayesian approach to model uncertainty
International Nuclear Information System (INIS)
Buslik, A.
1994-01-01
A Bayesian approach to model uncertainty is taken. For the case of a finite number of alternative models, the model uncertainty is equivalent to parameter uncertainty. A derivation based on Savage's partition problem is given
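For a finite set of alternative models, the Bayesian treatment reduces to computing posterior model probabilities, so the discrete model index behaves like any other uncertain parameter. A minimal sketch with illustrative priors and evidences:

```python
import numpy as np

# Hypothetical finite model set: prior probabilities and marginal
# likelihoods (evidences) of the observed data under each model.
# All numbers are illustrative.
priors = np.array([0.5, 0.3, 0.2])
evidence = np.array([1.2e-3, 3.0e-3, 0.8e-3])

# Bayes: p(M_i | D) proportional to p(D | M_i) p(M_i),
# normalized over the finite model set.
post = priors * evidence
post /= post.sum()
print(post.round(3))
```

Any downstream quantity is then averaged over the models with these posterior weights, which is exactly the sense in which model uncertainty is "equivalent to parameter uncertainty" for a finite set.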
Directory of Open Access Journals (Sweden)
Vicari Kristin J
2012-04-01
Full Text Available Abstract Background Cost-effective production of lignocellulosic biofuels remains a major financial and technical challenge at the industrial scale. A critical tool in biofuels process development is the techno-economic (TE) model, which calculates biofuel production costs using a process model and an economic model. The process model solves mass and energy balances for each unit, and the economic model estimates capital and operating costs from the process model based on economic assumptions. The process model inputs include experimental data on the feedstock composition and intermediate product yields for each unit. These experimental yield data are calculated from primary measurements. Uncertainty in these primary measurements is propagated to the calculated yields, to the process model, and ultimately to the economic model. Thus, outputs of the TE model have a minimum uncertainty associated with the uncertainty in the primary measurements. Results We calculate the uncertainty in the Minimum Ethanol Selling Price (MESP) estimate for lignocellulosic ethanol production via a biochemical conversion process: dilute sulfuric acid pretreatment of corn stover followed by enzymatic hydrolysis and co-fermentation of the resulting sugars to ethanol. We perform a sensitivity analysis on the TE model and identify the feedstock composition and conversion yields from three unit operations (xylose from pretreatment, glucose from enzymatic hydrolysis, and ethanol from fermentation) as the most important variables. The uncertainty in the pretreatment xylose yield arises from multiple measurements, whereas the glucose and ethanol yields from enzymatic hydrolysis and fermentation, respectively, are dominated by a single measurement: the fraction of insoluble solids (fIS) in the biomass slurries. Conclusions We calculate a $0.15/gal uncertainty in MESP from the TE model due to uncertainties in primary measurements. This result sets a lower bound on the error bars of
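The propagation step described above can be sketched as a Monte Carlo pass through a toy stand-in for the TE model; the response function, coefficients, and yield distributions below are illustrative assumptions, not the study's process model:

```python
import numpy as np

rng = np.random.default_rng(2)

def mesp(xylose, glucose, ethanol):
    """Toy linearized stand-in for the TE model chain: MESP [$/gal]
    as a function of three uncertain conversion yields. The base
    price and sensitivities are assumed, not from the study."""
    base = 2.15
    return base - 1.0 * (xylose - 0.75) - 1.5 * (glucose - 0.90) \
                - 2.0 * (ethanol - 0.92)

# Sample the primary-measurement-driven yield uncertainties
# (means and standard deviations are illustrative).
n = 50_000
xyl = rng.normal(0.75, 0.02, n)    # pretreatment xylose yield
glu = rng.normal(0.90, 0.015, n)   # enzymatic-hydrolysis glucose yield
eth = rng.normal(0.92, 0.01, n)    # fermentation ethanol yield

prices = mesp(xyl, glu, eth)
# Spread of MESP attributable to primary-measurement uncertainty.
print(f"mean={prices.mean():.2f} $/gal, std={prices.std():.3f} $/gal")
```

A real TE model would replace the linear function with the full flowsheet solve, but the sampling logic and the interpretation of the output spread are the same.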
Generalized uncertainty principle and the maximum mass of ideal white dwarfs
Energy Technology Data Exchange (ETDEWEB)
Rashidi, Reza, E-mail: reza.rashidi@srttu.edu
2016-11-15
The effects of a generalized uncertainty principle on the structure of an ideal white dwarf star is investigated. The equation describing the equilibrium configuration of the star is a generalized form of the Lane–Emden equation. It is proved that the star always has a finite size. It is then argued that the maximum mass of such an ideal white dwarf tends to infinity, as opposed to the conventional case where it has a finite value.
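In generalized-uncertainty-principle scenarios of this kind, the Heisenberg relation typically acquires a momentum-dependent correction; one common deformation (the paper's specific form may differ) is

```latex
\Delta x \,\Delta p \;\geq\; \frac{\hbar}{2}\left[\,1 + \beta\,(\Delta p)^{2}\right],
\qquad \Delta x_{\min} = \hbar\sqrt{\beta},
```

where β is the deformation parameter. The induced minimum length modifies the equation of state of the degenerate electron gas, which is what alters the equilibrium (Lane–Emden-type) equation and the maximum-mass behaviour relative to the standard Chandrasekhar analysis.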
International Nuclear Information System (INIS)
Pendleton, Ph.; Badalyan, A.
2005-01-01
Activated carbon cloth (ACC) is a good adsorbent for high-rate adsorption of volatile organic compounds [1] and as a storage medium for methane [2]. It has been shown [2] that the capacity of ACC to adsorb methane depends, in the first instance, on its micropore volume. One way of increasing this storage capacity is to increase the micropore volume [3]. Therefore, the uncertainty in the determination of ACC micropore volume becomes a very important factor, since it affects the uncertainty of the amount adsorbed at the high pressures that usually accompany storage of methane on ACC. Recently, we developed a method for the calculation of experimental uncertainty in micropore volume using low-pressure nitrogen adsorption data at 77 K for FM1/250 ACC (ex. Calgon, USA). We tested several cubic equations of state (EOS) and multiple-parameter EOS to determine the amount of high-pressure nitrogen adsorbed, and compared these data with amounts calculated via interpolated NIST density data. The amounts adsorbed calculated from interpolated NIST density data exhibit the lowest propagated combined uncertainty. Values of relative combined standard uncertainty for FM1/250 calculated using a weighted mean-least-squares method applied to the low-pressure nitrogen adsorption data (Fig. 1) gave 3.52% for the primary micropore volume and 1.63% for the total micropore volume. Our equipment allows the same sample to be exposed to nitrogen (and other gases) at pressures from 10⁻⁴ Pa to 17 MPa in the temperature range from 176 to 252 K. The maximum uptake of nitrogen was 356 mmol/g at 201.92 K and 15.8 MPa (Fig. 2). The delivery capacity of ACC is determined by the amount of adsorbed gas recovered when the pressure is reduced from that for maximum adsorption to 0.1 MPa [2]. In this regard, the total micropore volume becomes an important parameter in determining the amount of gas delivered during desorption. In the present paper we will discuss the effect of uncertainty in micropore volume
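Relative combined standard uncertainty figures such as those quoted (3.52%, 1.63%) follow the usual root-sum-square propagation law for uncorrelated inputs (as in the GUM); a minimal sketch with an illustrative, assumed uncertainty budget:

```python
import math

# Root-sum-square combination of relative standard uncertainties for
# uncorrelated inputs: u_c = sqrt(sum u_i^2). The contribution names
# and magnitudes below are illustrative, not the paper's budget.
contributions = {
    "pressure": 0.008,
    "temperature": 0.004,
    "mass": 0.002,
    "density_interpolation": 0.012,
}

u_rel = math.sqrt(sum(u**2 for u in contributions.values()))
print(f"{100 * u_rel:.2f}%")
```

Because the terms enter quadratically, the largest single contribution (here the assumed density interpolation) dominates the combined result, which is why reducing it first pays off most.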
Chemical model reduction under uncertainty
Najm, Habib; Galassi, R. Malpica; Valorani, M.
2016-01-01
We outline a strategy for chemical kinetic model reduction under uncertainty. We present highlights of our existing deterministic model reduction strategy, and describe the extension of the formulation to include parametric uncertainty in the detailed mechanism. We discuss the utility of this construction, as applied to hydrocarbon fuel-air kinetics, and the associated use of uncertainty-aware measures of error between predictions from detailed and simplified models.
Chemical model reduction under uncertainty
Najm, Habib
2016-01-05
We outline a strategy for chemical kinetic model reduction under uncertainty. We present highlights of our existing deterministic model reduction strategy, and describe the extension of the formulation to include parametric uncertainty in the detailed mechanism. We discuss the utility of this construction, as applied to hydrocarbon fuel-air kinetics, and the associated use of uncertainty-aware measures of error between predictions from detailed and simplified models.
Uncertainty analysis on probabilistic fracture mechanics assessment methodology
International Nuclear Information System (INIS)
Rastogi, Rohit; Vinod, Gopika; Chandra, Vikas; Bhasin, Vivek; Babar, A.K.; Rao, V.V.S.S.; Vaze, K.K.; Kushwaha, H.S.; Venkat-Raj, V.
1999-01-01
Fracture mechanics has found profound usage in the design of components and in assessing the fitness for purpose/residual life of operating components. Since defect size and material properties are statistically distributed, various probabilistic approaches have been employed for the computation of fracture probability. Monte Carlo simulation is one such procedure for the analysis of fracture probability. This paper deals with uncertainty analysis using Monte Carlo simulation methods. These methods were developed based on the R6 failure assessment procedure, which has been widely used in analysing the integrity of structures. The application of this method is illustrated with a case study. (author)
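Monte Carlo estimation of a fracture probability can be sketched as follows; the distributions and the simple stress-intensity formula are illustrative placeholders, not the R6 assessment procedure itself:

```python
import numpy as np

rng = np.random.default_rng(3)

# Monte Carlo fracture probability: sample statistically distributed
# defect size and fracture toughness, count the failing samples.
# All distributions and parameters are illustrative assumptions.
n = 200_000
a = rng.lognormal(mean=np.log(2.0), sigma=0.4, size=n)  # crack depth [mm]
K_Ic = rng.normal(80.0, 8.0, size=n)                    # toughness [MPa*sqrt(m)]
stress = 600.0                                          # applied stress [MPa]

# Simple through-crack stress-intensity estimate K_I = sigma*sqrt(pi*a),
# with a converted from mm to m. (R6 would instead evaluate a failure
# assessment diagram combining fracture and plastic collapse.)
K_I = stress * np.sqrt(np.pi * a * 1e-3)

p_fail = np.mean(K_I > K_Ic)
print(f"P(fracture) ~ {p_fail:.4f}")
```

The estimator's statistical error scales as sqrt(p(1-p)/n), so rare-event probabilities need either very large n or variance-reduction techniques.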
Uncertainty management for aerial vehicles: Coordination, deconfliction, and disturbance rejection
Panyakeow, Prachya
collection of UAVs that are initially scattered in space. The goal is to find shortest trajectories that bring the UAVs to a connected formation where they are in the range of detection of one another and headed in the same direction to maintain the connectivity. The Pontryagin Minimum Principle (PMP) is utilized to determine the control law and path synthesis for the UAVs under the turn-rate constraints. We introduce an algorithm to search for the optimal solution when the final network topology is specified, followed by a nonlinear programming method in which the final configuration emerges from the optimization routine under the constraint that the final topology is connected. Each method has its own advantages depending on the size of the cooperative network. For the uncertainty due to gust turbulence, we choose a model predictive control (MPC) technique to address gust load alleviation (GLA) for a flexible aircraft. MPC is a discrete method based on repeated online optimization that allows direct consideration of control actuator constraints into the feedba
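The closing idea, repeated online optimization with actuator constraints handled directly, can be illustrated with a toy receding-horizon controller; the scalar dynamics, cost weights, and brute-force search below are assumptions for illustration, not the thesis's GLA formulation:

```python
import numpy as np
from itertools import product

# Minimal receding-horizon (MPC) sketch on a scalar system
# x_{k+1} = a*x_k + b*u_k with a hard actuator bound |u| <= 1,
# solved by brute force over a small input grid. Purely illustrative.
a, b = 1.0, 0.5
horizon = 4
grid = np.linspace(-1.0, 1.0, 9)   # admissible inputs (constraint built in)

def plan(x0):
    """Return the first input of the best constrained input sequence."""
    best_u, best_cost = 0.0, float("inf")
    for seq in product(grid, repeat=horizon):
        x, cost = x0, 0.0
        for u in seq:
            x = a * x + b * u
            cost += x**2 + 0.1 * u**2   # quadratic state + input cost
        if cost < best_cost:
            best_cost, best_u = cost, seq[0]
    return best_u

# Closed loop: re-optimize at every step, apply only the first input.
x = 5.0
for _ in range(10):
    x = a * x + b * plan(x)
print(round(x, 3))
```

Real MPC replaces the brute-force search with a structured (typically quadratic) program, but the receding-horizon pattern, optimize, apply the first input, re-measure, repeat, is exactly the one described above.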